Expert Comment: Is it time to reconsider our human rights in the age of AI?

Professor Yuval Shany, Institute for Ethics in AI

Human rights such as equality and privacy are under considerable pressure from practices associated with certain AI systems, such as profiling and mass surveillance. At the same time, new AI systems invite us to consider extending human rights protections to capture new human needs and interests implicated by their use.

Not only can these systems generate new benefits in areas such as health, education and work; they might also be used, deliberately or inadvertently, to inflict serious personal harms at scale, facilitate new forms of manipulation and subject human beings to non-transparent and inhumane forms of social control.

Whilst the focus to date has mostly centred on the role played by ethical, safety and some human rights considerations in AI policy, law and regulation, I believe it is now time to look more comprehensively at how our human rights laws can be adapted to keep pace with the times.

We are rushing headlong into an AI-driven future, but our legal protections – which should offer a principled and effective set of guardrails – remain stuck in the past.

Through my recent work with leading human rights centres across four continents, it has become clear that the world could greatly benefit from an international AI Bill of Human Rights.

We are not starting from scratch 

Legal instruments developed in recent years by the UN, EU, US, African Union and South Korea were intended to address gaps and inadequacies in existing law applicable to new technological conditions. A closer look, however, reveals several legal deficiencies that undercut their effectiveness in fully adjusting existing human rights law to the opportunities and challenges posed by AI systems: they provide only partial cover, use language that is either too specific or too general, or fail to employ the legal language of human rights.

The result is a patchwork of standards that struggles to address cross-border systems, protect individuals coherently, or assign responsibility – including to private companies – when harm occurs.

A global approach: an international Bill of Human Rights

Over the last couple of years, as a Fellow of the Accelerator Fellowship Programme at the Institute for Ethics in AI at the University of Oxford, I have explored the feasibility of an international AI Bill of Human Rights.

My research included extensive expert consultations in cooperation with four international human rights centres in Oxford, Geneva, Pretoria and Harvard. These consultations served as the basis for a concise bill of rights that articulates the minimum protections people should enjoy wherever AI is designed or deployed.

These rights codify what people reasonably expect: to benefit from innovation without surrendering their freedom, equality and dignity; to understand how important decisions are made; to challenge mistakes and violations; and to interact with humans when it matters. They also help innovators by setting clear expectations of the guardrails necessary to protect individual entitlements.

In the White Paper on the Feasibility of an International AI Bill of Rights, I offer an initial list of seven rights: 

  • Access to AI – people should have access to safe, reliable AI tools and related technologies.
  • Privacy protections against harmful uses of AI – people should be protected from uses of AI systems for mass data capture and surveillance and for circumventing existing privacy protections.
  • Freedom from algorithmic bias and unfairness – systems should be designed to prevent discrimination, including profiling and the perpetuation of existing inequalities.
  • Transparency and explainability – people should know when AI is used and receive meaningful, understandable explanations of how AI contributed to decisions affecting their rights.
  • Protection from algorithmic manipulation – AI systems must not deceive people or exploit their vulnerabilities and cognitive biases; nor should they steer decisions and conduct in ways that undermine autonomy, dignity or rational deliberation.
  • Human decision-making and human-to-human interaction – individuals should be able to opt out of fully automated decision-making when important issues are at stake and retain access to meaningful human oversight and human-to-human interaction.
  • Accountability for harms caused by the use of AI systems – when AI causes harm, responsibility ought to be identified and effective remedies provided.

These rights are practical: they seek to protect our very ability to develop and use AI systems in ethical and safe ways compatible with human wellbeing, minimising harm and meeting standards of fairness and justice.

The adoption of an international AI bill of human rights would support public confidence in the development and deployment of AI systems, and could reduce legal uncertainty regarding the applicable duties and responsibilities of AI companies and government regulators.

Why act now?  

In short, because the measures we set in place today might be too expensive or impossible to retrofit tomorrow. 

The aim is certainly not to stop the advancements of AI, but to ensure both human rights and AI advance together. 

Discover The Need for and Feasibility of an International AI Bill of Human Rights White Paper and listen to the “AI and Human Rights: Professor Yuval Shany on AI, Law and Global Accountability” podcast from the Institute for Ethics in AI.
