Six key questions towards Ethical AI
By Red Marble AI
As companies emerge from the disruption of COVID-19, many organisations are looking to take advantage of Artificial Intelligence (AI) to build efficiency and improve customer and staff experience. Along with the many benefits, there are also risks and liabilities to consider, particularly when it comes to inheriting bias.
It’s critical to deploy any AI project with ‘Ethical AI’ as a key lens if we are to move towards an equitable future.
AI might seem immune from the moral tangles and complicated contradictions of humans, but in reality it’s difficult to escape the biases that sit unnoticed in the data used to train the models. Not only does this create an ideological issue, it also affects business outcomes and increases the risk of negative impact.
Ethical AI – six questions to ask
As the market, and more importantly the influence, of AI continues to grow, we need Ethical AI to be our directional “North Star” to ensure we manage the risk of inherited bias within AI models.
At a minimum, we suggest asking and discussing six key questions throughout the process of procuring, designing and implementing AI projects. This may be a project sponsor asking a vendor, a project manager asking their developers, or a CEO asking a CIO:
- Which laws and regulations does the software need to adhere to?
- How does the solution design enable any decisions, or recommendations made, to be explained?
- Is there transparency about this use of AI with each stakeholder group?
- What biases may be inherent in your training data? How are you managing the risk of bias?
- What decisions will the software make or inform? Are the decisions reversible?
- What’s your process around code reviews and how are you adopting software engineering best practice?
Leading the charge for responsible AI
Many governments, organisations and businesses, particularly in Australia, have voiced their support for the Ethical AI movement.
The Australian Government’s Department of Industry, Science, Energy and Resources has created a list of eight voluntary AI ethics principles; among them, “Throughout their lifecycle, AI systems should benefit individuals, society and the environment.” However, we find those principles a little too high level; most companies need a more practical approach.
Red Marble AI has an ethical AI playbook for companies looking to establish their capability quickly – please reach out for a copy. There are also a number of specialist ethical AI firms emerging, such as Dr Catriona Wallace’s new advisory firm, which recently opened in Sydney.
Ultimately, we expect companies will form an AI ethics advisory board or sub-committee, similar to existing risk and audit committees, to develop a framework for the company’s expectations of AI. In the meantime, we believe that asking these six questions – and having the discussions that ensue – will give most companies a strong head start.
It would be great to hear your thoughts on Ethical AI and our six questions. How strict should we be? Do we need government legislation? Please get in touch here.