Foreword from the Australian Human Rights Commissioner, Edward Santow
This Discussion Paper sets out the Australian Human Rights Commission’s preliminary views on protecting and promoting human rights amid the rise of new technologies.
New technologies are changing our lives profoundly—sometimes for the better, and sometimes not.
We have asked a fundamental question: how can we ensure these new technologies deliver what Australians need and want, not what we fear?
We recently completed the first phase of the Commission’s public consultation.
We heard how Australian innovators are building our economy, revolutionising how health care and other services are delivered, and using data to make smarter decisions.
But we also saw how artificial intelligence (AI) and other new technologies can threaten our human rights. Time and again people told us, ‘I’m starting to realise that my personal information can be used against me’.
For example, AI is being used to make decisions that unfairly disadvantage people on the basis of their race, age, gender or other characteristic. This problem arises in high-stakes decision making, such as social security, policing and home loans.
These risks affect all of us, but not equally. We saw how new technologies are often ‘beta tested’ on vulnerable or disadvantaged members of our community.
So, how should we respond?
Australia should innovate in accordance with our liberal democratic values. We propose a National Strategy on New and Emerging Technologies that helps us seize the new economic and other opportunities, while guarding against the very real threats to equality and human rights.
The strongest community response related to AI. The Commission proposes three key goals:
- AI should be used in ways that comply with human rights law
- AI should be used in ways that minimise harm
- AI should be accountable in how it is used.
Sometimes it’s said that the world of new technology is unregulated space; that we need to dream up entirely new rules for this new era.
However, our laws apply to the use of AI, as they do in every other context. The challenge is that AI can cause old problems—like unlawful discrimination—to appear in new forms.
We propose modernising our regulatory approach for AI. We need to apply the foundational principles of our democracy, such as accountability and the rule of law, more effectively to the use and development of AI.
Where there are problematic gaps in the law, we propose targeted reform. We focus most on areas where the risk of harm is particularly high. For example, the use of facial recognition warrants a regulatory response that addresses legitimate community concern about our privacy and other rights.
Government should lead the way. The Discussion Paper proposes strengthening accountability protections for how government uses AI to make decisions.
But the law cannot be the only answer. We set out a series of measures to help industry, researchers, civil society and government to work towards our collective goal of human-centred AI.
Education and training will be critical to how Australia transitions to a world that is increasingly powered by AI. The Discussion Paper makes a number of proposals in this area. We also propose the creation of a new AI Safety Commissioner to monitor the use of AI, and to coordinate and build capacity among regulators and other key bodies.
Finally, innovations like live captioning and smart home assistants can improve the lives of people with disability. But as technology becomes essential for all aspects of life, we also heard how inaccessible technology can lock people with disability out of everything from education, to government services and even a job.
The Commission makes a number of proposals to ensure that products and services, especially those that use digital communications technologies, are designed inclusively.
We thank the many representatives of civil society, industry, academia and government, as well as concerned citizens, who participated in the Commission’s first phase of public consultation. This input has been crucial in identifying problems and solutions.
We also pay tribute to the Commission’s major partners in this Project: Australia’s Department of Foreign Affairs and Trade; Herbert Smith Freehills; LexisNexis; and the University of Technology Sydney (UTS). In addition, we thank the Digital Transformation Agency and the World Economic Forum for their significant support. The Commission acknowledges the generosity of its Expert Reference Group, which provides strategic guidance and technical expertise.
The Commission sets out here a template for change, but it is written in pencil rather than ink. We warmly invite you to comment on the proposals and questions in this Discussion Paper. We will use this input to shape the Project’s Final Report, to be released in 2020.
Edward Santow
Human Rights Commissioner