By Edward Santow (Australian Human Rights Commissioner) and Nicholas Davis (Head of Society and Innovation, Member of Executive Committee, World Economic Forum).
We’ve all had something like it happen: one minute you’re searching for a present suitable for a two-year-old; the next, ads for nappies and prams are on every site you visit.
It’s unsettling. No one is comfortable with bots surreptitiously following us around the web, or with companies using what they learn from our online behaviour to promote products and services in creepy ways.
But could concerns around privacy and informed consent – though undeniably important – be distracting us from what we should be really worried about?
The exploitation of personal information for marketing purposes is a real problem. But the more serious risk is that our personal information can be used against us – not just to advertise a product we don’t want, but to discriminate against us on the basis of our age, race, gender or some other characteristic we can’t control.
For example, if you have darker skin, facial-recognition technology is dramatically less accurate than if you have a light complexion. As this technology is progressively rolled out across law enforcement, in border security and even in delivering financial services, the risk that you’ll be unfairly disadvantaged increases depending on your ethnicity.
Similarly, there are examples of artificial intelligence (AI) operating to prevent women or older people from seeing certain online employment opportunities.
Not only does this violate the human rights of anyone negatively affected, but it also undermines community trust in AI more broadly. A collapse in community trust in AI would be disastrous, because AI has the potential to be an enormous boon – not just for national economies, but also in making communities more inclusive.
For every instance of AI causing harm, there’s also an uplifting counter-example. This could be anything from AI-powered smartphone applications allowing blind people to “see” the world around them, to huge strides in precision medicine.
The challenge, therefore, is to build enduring trust in the development and use of a tremendously exciting set of technologies, so that citizens and organisations around the world can take advantage of the opportunities while addressing the threats to universal human rights.
This might sound eminently sensible to you. Unfortunately, this challenge is made harder by a damaging but pervasive myth.
Righting the wrongs
Take Australia as an example.
A common objection to applying common-sense norms and rules to the development and deployment of AI is that other countries are unlikely to do the same.
In which case, the argument goes, if Australia is to compete globally in developing AI products, Australian researchers and companies must not be fettered by human rights concerns, because other countries certainly aren’t. China, for example, is investing heavily in AI technology such as facial recognition to support its “social credit score” system, which involves conducting precise and determinative surveillance of its citizens. In the context of a global AI arms race, it is argued, Australia can’t compete with one arm tied behind its back.
This argument is dangerous and misguided. Australia’s liberal democratic values are one of its core strengths. The Australian Human Rights Commission’s consultation on human rights and technology has shown that, as Australians learn more about AI, there’s a growing demand that AI only be used in ways that respect their human rights.
This suggests that embedding human-rights protection in AI as it’s developed isn’t just morally right – it’s also smart. If Australia can become known for developing AI that gets the balance right, it can gain a competitive advantage.
After all, consumers in liberal democracies want the benefits of AI, through self-driving cars, better healthcare and super-powerful computers. However, they won’t accept a trade-off that involves mass surveillance, the exclusion of entire groups and a rise in discrimination.
So, what’s the solution?
We know that technology, and especially AI, is developing at breakneck speed. We also know that, in almost every country around the world, laws are slow to adapt.
This puts greater pressure on institutions in countries such as Australia to smooth AI’s rough edges in ways that let us harness the opportunities without leaving vulnerable members of our community to be crushed.
Luckily, there is a way forward. And it might just be in Australia that real progress is made.
Several influential voices have already called for an Australian organisation to lead on AI. The World Economic Forum and the Australian Human Rights Commission have formed a partnership to consider this idea. These two bodies have invited leading decision-makers in government, industry and academia to meet at the University of Technology Sydney (UTS) to consider how to tackle this AI leadership challenge.
Based on the consultation we have conducted to date, the key issues include the following.
First, we should clearly articulate the values that should underpin AI. In Australia, these should be quintessentially Australian values such as equality or the fair go.
Second, there has been some support among stakeholders for a specialised organisation—either a new or existing one—to take a central role in assessing technologies and formulating laws, guidelines, accountability and capacity-building strategies in AI. This should be a national organisation with close connections with all stakeholders.
Third, this organisation should work closely with industry, government and the community to support the development of AI technologies that respect human rights.
The World Economic Forum and the Australian Human Rights Commission are consulting on these issues right now and have produced a white paper inviting comments, focused on the Australian context. But this is an issue facing all countries around the world, and we welcome your input in this process.