
Rules or Practice: New Shades in Global AI Governance

  • Writer: Anton Tamarovich
  • 1 day ago
  • 7 min read

AI today is not just a technology but an instrument of sovereignty, economic competition, and global influence. The key question is not who writes the most comprehensive rules, nor who builds the largest model, but who most effectively integrates AI into institutions and the economy.


Laws that work: A realistic strategy for AI regulation may be to form coalitions of states that can align on standards and practices, rather than seeking universal consensus.

Photo credit: Gettyimages/istockphotos

License: https://creativecommons.org/licenses/by-nc/4.0/#ref-appropriate-credit

From February 16 to 20, 2026, the India AI Impact Summit took place in New Delhi — the fourth and largest in a series of global artificial intelligence summits launched in 2023. The event brought together more than 100,000 participants, around 20 heads of state and government, as well as delegations from over 110 countries and 30 international organizations. In total, more than 400 sessions were held — a scale that in itself became a political statement.


Many expected the summit to serve as a starting point for establishing a sustainable course that would allow middle powers to participate in shaping the rules of joint AI development and prevent the concentration of benefits from this technology in the hands of only a few companies from the United States and China. However, instead of a unified architecture of global governance, the summit confirmed what had previously been only a trend: the global consensus on AI is fragmenting into competing regulatory philosophies. These disagreements are not technical but fundamental — they reflect differing views on the role of the state, the market, and international institutions.


From “Safety” to “Impact”: A Shift in the Global AI Agenda


The evolution of AI summits reflects an ongoing search for common ground. While the early meetings — the “Safety” summits in Bletchley Park (2023) and Seoul (2024) — focused on risk mitigation and prevention, the “Action” summit in Paris (2025) brought geopolitical and geoeconomic competition to the forefront, including competition for standards, markets, and national champions. In turn, the “Impact” summit in New Delhi (2026) shifted the focus toward practical implementation of technologies.


Where three years ago concerns about existential AI risks dominated and calls for preventive regulation were prominent, today the focus has shifted toward pragmatism: the decisive question is who will be able to integrate AI into public services, the economy, and infrastructure — and thereby define the rules of the game.


The location is equally symbolic. For the first time, an event of this scale was held in a Global South country. AI governance is no longer the exclusive domain of Western elites and technology giants. The question is whether this is supported by real capabilities.


The New Delhi Declaration: Symbol of Unity, Reality of Divergence


The culmination of the summit was the New Delhi Declaration on AI Impact, endorsed by 91 countries, including China, Russia, and the United States. It is worth recalling that Washington demonstratively refused to sign the Paris Summit declaration in 2025, considering its approach to AI risks and inclusivity excessive.


The declaration covers issues such as democratizing access to computing power, data, and AI models; expanding the role of AI in healthcare, education, agriculture, and public services; and principles of accountability and human oversight. At the same time, the document has been justifiably criticized for conceptual redundancy: in essence, it reiterates positions already articulated by the OECD, G20, UNESCO, and previous summits.


This reveals a systemic problem. The document contains neither financial commitments nor mechanisms for creating and using shared computing infrastructure, nor binding standards. As one commentator said: “Non-binding declarations are the international equivalent of LinkedIn likes: generous, free, and quickly forgotten.”


This skepticism is hardly exaggerated. Behind the declaration lies a fundamental asymmetry: approximately 90% of the world’s AI computing infrastructure is controlled by just two countries — the United States and China. The document calls for fair use of technology but avoids addressing the concentration of computing power, data, and know-how in the hands of a few states and corporations.


At the same time, it would be a mistake to dismiss the declaration entirely. The participation of 91 actors creates — albeit conditionally — a platform where China, Russia, and the United States are present together, enabling discussion of coexistence among already established independent AI systems.


The Clash of AI Paradigms


By spring 2026, three distinct models have emerged, each grounded in its own philosophy.


United States: technological dominance and minimal regulation. The Trump administration advocates minimal regulation to maintain U.S. global leadership in AI. This position was articulated clearly by Michael Kratsios, Director of the White House Office of Science and Technology Policy, who stated: “We believe AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralized control.”


At the same time, leaders of major tech companies speak about the need for “smart regulation”, while the White House remains cautious.


European Union: a minimal-risk zone. The European approach is based on mandatory transparency requirements within a risk-based framework, where the level of state control and regulatory strictness depends on the level of risk posed by a given AI system.


The EU was the first to adopt a comprehensive AI law (the AI Act), one of the most ambitious regulatory experiments in technological policy. It requires conformity assessments for high-risk systems before market entry, imposes significant fines for violations, and has extraterritorial scope: its requirements apply to any provider serving EU users, regardless of location.


The logic of this framework is to manage risks before harm occurs. The price of such precaution, however, is higher compliance costs for businesses and the potential flight of investors to more permissive jurisdictions.
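The tiered logic described above can be illustrated with a toy sketch. The four tier names mirror the AI Act's broad risk categories (unacceptable, high, limited, minimal); the obligation strings and the lookup function are illustrative assumptions for exposition, not the Act's actual legal tests or requirements.

```python
# Toy illustration of a risk-tiered compliance model in the spirit of the
# EU AI Act. Tier names mirror the Act's four broad categories; the
# obligations listed are simplified sketches, not legal requirements.

OBLIGATIONS = {
    "unacceptable": "prohibited from the EU market",
    "high": "conformity assessment before market entry, ongoing monitoring",
    "limited": "transparency duties (e.g. disclosing that users face an AI system)",
    "minimal": "no mandatory obligations; voluntary codes of conduct",
}

def obligations_for(tier: str) -> str:
    """Return the (illustrative) obligations attached to a risk tier."""
    if tier not in OBLIGATIONS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    for tier in ("unacceptable", "high", "limited", "minimal"):
        print(f"{tier}: {obligations_for(tier)}")
```

The point of the sketch is the asymmetry itself: stricter tiers carry heavier duties, so regulatory burden scales with assessed risk rather than applying uniformly to all systems.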


Flexible pragmatism: The third path lacks a clear regional identity but shares common features: voluntary frameworks and situational adaptation. This is not an absence of strategy but a different logic, where speed of deployment is valued over regulatory completeness.


For example, on January 22, 2026, Singapore launched the world’s first governance framework for agentic AI — autonomous systems capable of planning and executing tasks with minimal human involvement. China’s model is also notable: it combines conditions for rapid deployment with strict political oversight, including fines and stringent data localization requirements.


Russia’s approach balances soft regulation through an AI ethics code with a risk-based approach that differentiates the level of experimental freedom depending on application domains. Notably, although often overlooked, Russia’s approach largely aligns with the model presented by India at the summit.


The Indian Model: A Fourth Path or Tactical Compromise?


A key feature of the Indian model is that governance develops in parallel with implementation rather than preceding it. Rules are not rejected, but their sequencing is different: institutional adoption of technology comes before the consolidation of regulatory frameworks.


Instead of competing in frontier model development, India focuses on:

  • institutional implementation of AI,

  • adapting open-source models to national needs,

  • integrating legal mechanisms into technological architectures.


For example, the summit presented the Sarvam AI model, developed by fine-tuning the open-source Mistral base model on local-language data. This is not a frontier breakthrough but an implementation strategy: take existing technologies, adapt them, and embed them into institutional structures. For a country that cannot compete with companies like Google in terms of investment, this may be the only rational strategy — and potentially the most relevant for many countries.


The United Nations: A Platform for Global Consensus


The UN seeks to become a platform for global AI governance by attempting to unify fragmented and rapidly evolving regulatory practices. At the summit, it was announced that an Independent International Scientific Panel on AI, consisting of 40 experts, would be established. Its task is to produce annual evidence-based reports synthesizing research on AI capabilities, risks, and impacts, helping states develop informed positions.


UN Secretary-General António Guterres also announced the Global Dialogue on AI Governance, to be held in Geneva in May 2026. As he noted: “Without a common baseline, fragmentation prevails — different regions will operate under incompatible policies and technical standards. This will increase costs, weaken security, and deepen divisions.”


However, concerns remain: the U.S. views multilateral governance as a threat, while the EU insists on exporting its regulatory model. In such conditions, UN initiatives may struggle to establish a unified global strategy — although the UN remains the only universally recognized international institution.


Conclusions and Outlook


AI today is not just a technology but an instrument of sovereignty, economic competition, and global influence. Multiple regulatory models are emerging, and their interaction will shape the global AI architecture over the next decade.


Fragmentation appears not as a temporary deviation but as the new normal. Attempts to establish universal consensus face fundamental differences in political philosophies and economic interests. A more realistic strategy may involve coalitions of countries aligning on technical standards, sharing practices, and gradually forming common data markets.


At the same time, a structural gap persists between rhetoric and real capabilities. For Global South countries, the issue is not ambition but lack of “hard” capacity — models, infrastructure, talent, and energy resources. Without addressing these constraints, declarations risk remaining symbolic.


The rapid pace of AI development further complicates regulation: detailed rules may become obsolete before implementation. The AI Act is a clear example — developed before the rise of generative AI and agents, it has already required revision.


Yet halting AI adoption is not an option. Countries that fail to integrate AI into institutional systems risk economic restructuring costs, inertia, and dependency. Ultimately, the key question is not who writes the most comprehensive rules, nor who builds the largest model, but who most effectively integrates AI into institutions and the economy.


Thus, the country that develops the most practical and effective AI strategy will gain the greatest advantage in the coming “AI-ization” of the world.


Disclaimer: The article expresses the author’s views on the matter and does not reflect the opinions and beliefs of any institution they belong to, or of Trivium Think Tank and the StraTechos website. A version of this article was published in Russian on the website of the Russian Council.


Anton Tamarovich

Anton Tamarovich is a researcher of international AI trends. Over the past five years, he has held various research and consulting positions focused on AI development and the implementation of state AI strategy in Russia.



StraTechos is a website produced by Trivium Think Tank. It explores the linkages between advances in science and technology and the strategic choices of nations. The website is committed to presenting these issues in a simple, yet coherent manner.

Trivium holds the copyright over the content published on the StraTechos website. Kindly seek permission before reusing any content.

