
Australia needs to be a world leader in ethical AI

This opinion piece by Human Rights Commissioner Lorraine Finlay appeared in the Newcastle Herald on Saturday, 1 July 2023.

The human rights impact of new and emerging technologies has been a key focus of the Australian Human Rights Commission for a number of years. Technology is essential to our lives, but it needs to be fair. There has never been a greater need to pay attention to these technologies and to seize the opportunities they present. At the same time, we must recognise that they can pose significant risks to our human rights and cause serious harm to individuals.

AI has the potential to help solve complex problems, boost productivity and efficiency, reduce human error and democratise information. The uses that have been canvassed in areas such as healthcare and education highlight the potential of this technology to enhance human rights.

One example of the potential benefits for human rights was seen in India in 2018, when Delhi police used AI-based facial recognition technology to reunite nearly 3,000 children with their parents in just four days. Over its first 15 months in operation, the pilot program went on to reunite 10,561 missing children with their families.

At the private enterprise level, AI products are increasingly being adopted into business models to improve efficiency and outcomes for clients. However, these tools are not without risks.

It is likely that AI systems will deepen intrusions on privacy in new and concerning ways. AI products must be trained on vast amounts of data, and social media companies already operate on a business model heavily reliant on the collection and monetisation of enormous quantities of personal information. The demand for data to train AI products will only heighten these concerns.

Despite the importance of the right to privacy, many enterprises which build and deploy large language models like ChatGPT have been reluctant to reveal much detail about the data used for training, or that data's provenance. Questions have also been raised about whether these organisations sought permission, or paid, for the internet data used to train their AI products.

AI products effectively seek to "understand" human patterns of behaviour and, given access to the appropriate data sets, they can do so, drawing conclusions about almost every aspect of our lives.

It is one thing for AI products to store details about the music I listen to or the movies I watch at home. But AI can also lead to far more intrusive inferences being drawn about individuals, including about their mental and physical condition, political leanings or even their sexual orientation.

AI allows large amounts of relevant information to be considered in decision-making processes and may encourage efficient, data-driven decisions. But precisely because it is being embedded in decisions of real consequence, its regulation is becoming increasingly important.

Algorithmic bias can entrench unfairness, or even result in unlawful discrimination. Several AI products promise to recommend the best applicant for a job based on past hiring data, but these systems can unintentionally reproduce the discrimination embedded in that data. One well-known example was Amazon's experimental recruiting tool, which discriminated against women applying for technical jobs because the existing pool of Amazon software engineers was predominantly male.

Cautionary tales are now emerging of AI chatbots hallucinating, spreading misinformation, producing biased content and engaging in hate speech. Generative AI is a game-changer: it is now cheaper and easier than ever before to run mass disinformation campaigns, and distinguishing fact from fiction will become increasingly difficult. Even knowing whether we are interacting with a human or a machine may become a challenge. These are particularly critical problems for democracies like Australia, which rely on citizens being informed and engaged.

We need to focus on how we can harness the benefits of new and emerging technologies without causing harm or undermining human rights.

Humanity needs to be placed at the heart of our engagement with AI. At all stages of a product’s lifespan – from the initial concept through to its use by the consumer – we need to be asking not just what the technology is capable of doing, but why we want it to do those things. Technology has to be developed and used in responsible and ethical ways so that our fundamental rights and freedoms are protected.

Some governments and businesses are engaging proactively with these questions. However, far too many are not.

The recent release of the Department of Industry, Science and Resources' AI discussion paper is a welcome step in the right direction by the Australian Government. Other measures that could be implemented immediately to help protect human rights include the introduction by government of an AI Safety Commissioner, and the effective use by businesses of human rights impact assessments.

Australia needs to be a world leader in responsible and ethical AI. The truth is that AI itself is neither friend nor foe. The more important question is whether the people who use AI tools do so in positive or negative ways. Unless both government and business are prepared to step up and show leadership in this area, my fear is that we will see the risks to human rights increase exponentially.

Lorraine Finlay