You are currently viewing Two Visions of AI Governance—And the Real Test That Will Decide the Winner

In July 2025, Washington and Beijing rolled out competing roadmaps for artificial intelligence—and, with them, two philosophies of global tech governance. America’s AI Action Plan, introduced by the Trump administration, prizes speed and market dynamism, wrapped in national-security guardrails and an explicit push to tear down domestic barriers to AI deployment. China’s Global AI Governance Action Plan along with an initiative to launch a World AI Cooperation Organisation in Shanghai, China, casts AI as a global public good, promising capacity building, shared standards, and infrastructure for the developing world, Hao Nan writes. The author is a participant of the Valdai  New Generation project. 

These aren’t just policy menus; they reflect two sets of governance logic, underscored by very different assumptions about who should set rules, who should own infrastructure, and how sovereignty is protected. For countries in the Global South, the real question isn’t whose rhetoric is more appealing—it’s which model can be implemented, financed, and sustained under real supply-chain and political constraints.

The US model rests on market-led innovation with selective state intervention. The Trump administration’s deregulatory turn reframes the Biden administration’s guidance on fairness, risk, and transparency as red tape, while procurement rules tilt toward “ideological neutrality” – to prevent what he has called Woke AI. Now, in US federal procurement, “ideological neutrality” is not a metaphysical claim that models can be value-free; it is a contracting rubric that requires vendors to document and test for viewpoint-symmetric behaviour in politically salient domains and to certify adherence to “unbiased AI principles,” notably “truth-seeking” and neutrality, set by recent Executive Orders implementing the AI Action Plan. In practice, agencies are being instructed to (i) require model cards and red-team reports that show no systematic preference across paired prompts from opposing viewpoints, (ii) log and audit RLHF interventions that could encode a one-sided stance, and (iii) incorporate pass/fail acceptance tests for partisan asymmetries in procurement. The White House Office of Management and Budget (OMB) has been directed to issue implementing guidance within 120 days, which will translate these principles into contract language and evaluation criteria. At the same time, export controls follow the “small yard, high fence” doctrine: cutting-edge chips, software, and services are restricted to maintain a strategic lead and to police end-use. To reassure allies on sovereignty, US cloud giants now sell “sovereign cloud” options with regional data residency and customer-held encryption keys. The system isn’t purely proprietary: the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and federal analysis of “open-weight” models nod toward an open ecosystem. The result is a hybrid: laissez-faire at home, tight controls at the edge, and tailored assurances abroad. The narrative is problematic on three fronts: first, neutrality is contestable in socio-technical systems—NIST’s own AI Risk Management Framework emphasises context-specific trade-offs, not value erasure; second, enforcing viewpoint balance in government AI may collide with the First Amendment of the US Constitution and civil rights obligations; third, by branding fairness and safety guardrails as “ideological,” agencies risk chilling legitimate harm-mitigation work.

China’s model is state-led and coherence-driven. It has established a comprehensive legal regime at home, ranging from the Personal Information Protection Law, Data Security Law, Cybersecurity Law, and Export Control Law to hard-wire data localisation, security reviews, and outbound technology controls. Beijing couples these with industrial policy—national champions in chips, cloud, and models; a buildout of green data centres; and the promotion of domestic foundation models. The vocabulary Beijing often uses is “data sovereignty,” “systemic safety,” and “cross-border model cooperation.” The often-flagged risks normally include censorship, privacy concerns and data security. Proponents counter that only a coordinated state can steward equitable access and national resilience. If America trusts markets, with the state guarding chokepoints, China trusts the state to shape markets toward public goals.

Abroad, the US is building an alliance-centric architecture anchored in standards, supply chains, and access to frontier capability. Expanded export controls now cover high-end logic chips, Electronic Design Automation (EDA) tools, and even some cloud services, binding access to compliance with US rules. Semiconductor coordination through groupings like “CHIP 4,” plus tariff threats extend that leverage. The carrot is world-class AI infrastructure and models, delivered quickly via US vendors. Sovereign cloud offerings aim to square performance with local control: run American tech within your jurisdiction, with your keys, under your law. The trade-off is dependence on US supply chains and licensing regimes—a high-assurance, high-performance offer, but on American terms, and with risks that such dependency might be weaponised against you, as is happening with trade negotiation.

China’s global playbook lies in the Digital Silk Road: a package of finance, infrastructure, platforms and training. Fibre backbones, data centres, e-government platforms and smart city systems, are delivered with financing and skill transfer. The 2025 action plan doubles down on capacity-building and standard-setting that prioritises developing countries, and Beijing has floated a World AI Cooperation Organisation in Shanghai to institutionalise such inclusive governance. The pitch stresses co-development and respect for each nation’s digital sovereignty. Nevertheless, this offer is not altruistic. In a competition with the US, though unwillingly, adoption of Chinese technical standards, embedded vendor dependencies, and financial exposure would certainly advantage China in securing data pools and polishing up the algorithm. For many governments, faster timelines, lower up-front costs, and on-premises control outweigh those risks—at least initially.

Both models crash into the same hard reality: AI runs on scarce hardware and materials. Leading-edge logic chips remain concentrated within a US-aligned supply chain, composed of TSMC, Samsung, Intel, ASML lithography, and US EDA software. China is investing heavily and catching up closely but still trails at the frontier under sanctions. Memory chips are a pinch point: a few players produce high-bandwidth memory (HBM), and demand will likely outstrip supply for years, making pricing and delivery highly sensitive to shocks. Rare-earth processing—dominated by China—adds another lever. The mid-2025 neodymium and praseodymium (NdPr) oxide price spike, triggered by cross-border processing frictions, showed how quickly the cost of a niche material can ripple through data-centre components. The lesson for adopters: without assured chips, memory, and materials, governance blueprints are academic.

A quieter convergence is underway around localisation and openness. US providers now routinely offer EU-only data boundaries, country-specific regions, and customer-managed keys—features that once seemed unlikely. China, for its part, leans on open-source tools and models to speed diffusion, reduce dependence on Western IP, and cultivate shared norms. US agencies have likewise studied the safe deployment of open-source and “open-weight” models. In practice, many deployments will be hybrid: proprietary cores for security and performance; open standards and components for auditability, lower costs, and customisation. The catch is capacity: openness without local expertise can simply shift dependence from code to consultants.

How should governments choose? For liberal democracies, the US stack often aligns more cleanly with existing privacy regimes and procurement norms, and the very rigidity of export controls can create predictability—partners know the rules, even when they chafe. Compliance tooling and audits are mature. China offers a different assurance: on-premises installations and the symbolism of physical control. But real assurance depends on contract terms—auditability, security guarantees, and the absence of hidden dependencies. Both models can meet high assurance bars; both require careful verification.

The likely near-term equilibrium is hybridisation. Many governments will assemble portfolios: US-origin systems where assurance and ecosystem depth matter most; Chinese-built infrastructure where cost, speed, and capacity-building dominate. Sensitive workloads—identity, tax, defence—might sit on localised sovereign clouds under tight legal control, while smart-city platforms or education systems leverage China’s finance-and-build model. Advanced democracies will lean American, especially as sovereign cloud features reduce legal friction. Capital-constrained adopters may prefer China’s integrated packages—but should demand open standards, strong training commitments, and explicit exit clauses. Everyone should apply a four-question test to any AI offer: (1) Will we retain legal control over our data and keys? (2) Can we migrate without prohibitive cost or disruption? (3) Are chips, memory, and critical materials contractually assured under stress? (4) Will this partnership build durable local skills? The best deals answer “yes” on all four—and most will require mixing and matching across the two ecosystems.

This contest is not about who trains the flashiest model. It is about who offers a more credible framework for deploying AI at scale under political, legal, and supply constraints. Washington is betting on “trusted dominance”: remain the indispensable hub while giving allies enough sovereignty to stay comfortable. Beijing is betting on “distributed development”: make itself the patron of a multipolar, capacity-building future. For most states, the smartest play is to turn rivalry into leverage—codify localisation and interoperability, insist on transparent interfaces and exit options, and diversify hardware and knowledge sources. In a world of two visions, the most durable path is a third: a sovereign, pragmatic blend that takes the best of each model and leaves the rest.

The Valdai Discussion Club was established in 2004. It is named after Lake Valdai, which is located close to Veliky Novgorod, where the Club’s first meeting took place.

 

Please visit the firm link to site


You can also contribute and send us your Article.


Interested in more? Learn below.