Expert Comment: In Claude We Trust? Evaluating the New Constitution

Professor Yuval Shany, Fellow of the Accelerator Fellowship Programme in the Institute for Ethics in AI, examines Anthropic's new Constitution for Claude through a human rights lens – asking what's missing when rights aren't named explicitly, and what that omission could mean when powerful AI systems are tested in high-stakes settings such as surveillance and armed conflict.

Professor Yuval Shany. Image credit: Ian Wallman.

On January 21, 2026, Anthropic published its new Constitution for Claude – a series of large language models (LLMs) that perform general-purpose generative AI functions. The Constitution – an 84-page document – is presented as a foundational text that both expresses and shapes who Claude is. It also enumerates actions that Claude should refrain from undertaking (hard constraints), and identifies considerations the system should weigh when deciding whether to perform certain actions.

A few weeks after the Constitution was published, Anthropic faced two real-world situations in which its normative outer boundaries were tested: its showdown with the US Department of War (DoW) regarding legal limits on the use of Claude, and Claude's actual use for targeting by the US military in the war in Iran.

These developments highlight the importance of introducing strong human rights safeguards into the Constitution.

No place for human rights? 

According to the Constitution, Claude should conform to four sets of values, applied in the following hierarchical order: safety, ethics, compliance with Anthropic's guidelines, and helpfulness. Put differently, Claude should strive to assist users, unless instructed by Anthropic not to do so, or unless it deems the request unethical or unsafe.

The Constitution also introduces a number of hard constraints – specific no-go areas that should never be attempted – including attempting to kill or disempower the vast majority of humanity or the human species as a whole, or assisting any individual or group with an attempt to seize unprecedented and illegitimate degrees of absolute societal, military, or economic control.

While some ethical standards enumerated in the Constitution overlap with human rights – e.g., privacy, protection from harm, rule of law, equal treatment, the right to access information, and political freedom – the document does not explicitly mention the term "human rights". This is in contrast to the 2023 version of the Constitution, which referred to the UN Universal Declaration of Human Rights.

This means that many important human rights protections that could be relevant to the operation of Claude – for example, the right to liberty, freedom of religion, and the right to intellectual property – have not been clearly integrated into the Constitution.

Anthropic vs the US Department of War 

Shortly after the promulgation of the Constitution, Anthropic was mentioned in the news in two dramatic contexts – both underscoring the importance of developing effective normative backstops.  

First, in March 2026, the Department of War designated Anthropic a supply-chain risk due to its refusal to allow the Department to use Claude for mass domestic surveillance or for operating lethal autonomous weapon systems. Instead, the DoW signed a contract with OpenAI for the provision of substitute AI systems.

As Dr. Brianna Rosen of the Blavatnik School of Government explained, the DoW's insistence on being able to use AI systems for any lawful purpose left in place a governance gap, since US law (and, in fact, also international law) does not clearly ban, under all circumstances, mass surveillance or the use of autonomous weapon systems.

Delineating the permissible scope of such extraordinary capabilities through contractual negotiations between the U.S. government and Anthropic (or OpenAI) appears to provide weaker human rights guarantees than embedding universally accepted protections directly in the AI system itself, through a Constitution or a comparable normative framework. This is especially so given the difficulties of monitoring and enforcing state compliance in sensitive domains such as national security.  

Dr. Rosen is also right to point out that Anthropic's negotiating position on mass surveillance, which focuses on domestic surveillance only, may already fall short of international human rights standards in the field, which capture foreign surveillance too.

Second, it has been widely reported that Claude systems, still in use by the US military, have been employed in the war in Iran for target selection purposes. It has also been speculated – albeit without hard evidence – that the use of AI systems may have contributed to one high-profile operational mistake (the targeting of an Iranian school) through reliance on out-of-date maps of the attacked area.

Here again, questions arise as to whether the Constitution, as currently drafted, contains appropriate safeguards against reliance on AI systems in contexts involving lethal consequences.

Arguably, a more human rights-oriented approach would include within the system's constitutional norms an explicit requirement that any use of the AI system in armed conflict comply with the basic principles of international humanitarian law (which also give effect to human rights principles), including precautionary obligations such as real-time target verification before attacks are recommended.

In this policy space, reliance on AI systems may not only result in operational mistakes; it might also perpetuate accountability gaps (enabling humans to blame outcomes on the AI). In such cases, embedding human rights by design within the AI system's constitution, which governs its operation, could offer a much more effective level of protection against violations of basic individual rights.

Read an expanded edition of this article (co-written with Dr. Noa Mor, Prof. Renana Keydar and Prof. Omri Abend) via the Institute for Ethics in AI blog. 
