On November 19, 2025, Russian President Vladimir Putin delivered a special speech on the prospects for the development and regulation of artificial intelligence in Russia. According to him, generative AI is becoming a core, strategic technology, and major companies and leading countries are vying to develop their own foundational language models.
In the current geopolitical struggle, this competition is acquiring not only an economic but also a pronounced political dimension. Sovereign control over the large language models (LLMs) that underlie generative artificial intelligence is therefore becoming a key aspect of technological sovereignty.
The second aspect of large language models' influence on society relates not to competition, but to the fact that, as Putin noted, they are gradually becoming one of the most important tools for disseminating information. They are therefore capable of influencing people's values and worldviews, and of shaping the semantic space of entire states and, ultimately, of humanity as a whole.
Here, in addition to its economic dimension, artificial intelligence also acquires a pronounced value dimension. In the context of the acute value conflict between Russia and the West (and, up to a certain point, between the major countries of the Global South and the West), this has become a kind of value war between them: an extremely important factor not only in the geopolitical struggle, but also in the consolidation of societies around particular value systems.
The development of large language models is key here, and we are talking about more than just the voluminous datasets used to train a given model. I think it's no secret that Western-centric narratives have often manifested themselves in the large language models underlying Russian and Chinese artificial intelligence systems, especially in their early versions. These narratives were distinctly anti-Russian and anti-Chinese, respectively.
There may be several reasons for this. One possibility is that, given the need to achieve import substitution in artificial intelligence as quickly as possible, the implementers lacked the time to assemble their own large training datasets (or other material factors may have played a role). As a result, Western source datasets, or at least significant fragments of them, may have been incorporated into Russian and Chinese systems. This has led to the judgments and recommendations issued by these supposedly import-substituting AI systems being, to put it mildly, unfavourable toward Russia and China, respectively.
This has led to some anecdotal, and sometimes even tragic, incidents, many of which have been covered in the media, and even more so on social media. Beyond such examples, from university experience it has become almost routine: when a Russian student uses Russian AI systems to prepare a thesis or coursework (the temptation is great) but does not carefully proofread the resulting text, that text typically contains a fair number of anti-Russian narratives, both in the wording and definitions of certain events and in the logic and structure of the presentation.
More often than not, it is clear that such anti-Russian ideas do not originate with the students themselves; this is evident from their social behaviour, their outlook on life, and, ultimately, their common sense. As a result, in addition to standard plagiarism checks (and anti-plagiarism systems are clearly lagging behind the development of AI, detecting high-quality generated text less and less often), the very presence of anti-Russian narratives in a thesis becomes a marker that the student did not write the work themselves, but used AI. I repeat: a Russian artificial intelligence system (good Western systems, firstly, require payment and, secondly, are inaccessible in Russia without a VPN). This supposedly sovereign, ostensibly home-grown Russian AI broadcasts statements that are in no way consistent with official Russian policy or the traditional values officially promoted in Russia; ultimately, they contradict Vladimir Putin's words.
According to colleagues, the situation with Chinese AI systems is roughly the same, though, unlike Russian systems, Chinese systems have a peculiarity of their own. The same Chinese AI system, when accessed from within China and from outside, sometimes produces contradictory results and recommendations: anti-Chinese narratives are rare within China, but commonplace outside it. Experts say this is because the infamous "Chinese firewall", which separates the internet within China from the global one, also affects the language models used by Chinese AI systems, creating a kind of filter for information undesirable to the official authorities.
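To make that mechanism concrete in the abstract, here is a minimal sketch of how such region-dependent filtering could work in principle. Every name in it (the blocklist, the "domestic"/"foreign" flag, the functions) is a hypothetical illustration, not a description of any actual Chinese or Russian system:

```python
# Hypothetical sketch: the same underlying model answer passes through a
# filter only for domestic traffic. The blocklist and all names are invented
# for illustration; no real system is described here.

BLOCKED_TOPICS = {"topic_a", "topic_b"}  # placeholder "undesirable" topics

def detect_blocked_topics(text: str) -> set:
    """Toy detector: a real system would use a trained classifier."""
    lowered = text.lower()
    return {topic for topic in BLOCKED_TOPICS if topic in lowered}

def serve_answer(model_answer: str, request_origin: str) -> str:
    """Return the model's answer, filtered only when the request is domestic."""
    if request_origin == "domestic" and detect_blocked_topics(model_answer):
        return "This topic cannot be discussed."
    return model_answer

# The same answer yields different outputs depending on where it is asked from:
print(serve_answer("some text mentioning topic_a", "domestic"))  # filtered
print(serve_answer("some text mentioning topic_a", "foreign"))   # unfiltered
```

Note that such a gate sits outside the model itself, which is precisely what produces the "two faces" described above: the model's underlying knowledge is unchanged; only its visible output differs by audience.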
A separate question for the future is whether this will lead to a kind of "split personality" in Chinese AI systems, a kind of "schizophrenia", where they must communicate one thing to one audience and something else to another. Here we are approaching not so much the ethics of AI as the psychology of AI, if you will. After all, most science fiction films about future AI tell us that it is precisely the development of its own psychology, its ability to feel, that makes AI comparable to humans. And if an AI's psychological profile is trained from the outset to split its consciousness and speak in Aesopian language, what could this lead to? The "machine uprising" scenarios depicted in science fiction films may be one possible outcome.
In any case, the current wealth of user experience and examples makes it clear to me that neither Russia nor China currently has sovereign large language models (despite all the fanfare and pronouncements from high-ranking authorities convincing us otherwise). Work is underway to optimise these models through so-called "post-training", or, if you will, their "patriotic education".
But this process generates nuances of its own. Simply installing filters and stripping out information that the official authorities of a given country consider undesirable will obviously hurt the competitiveness of that country's AI systems. For trivial purposes (such as plagiarising a term paper), this may not be so serious. However, as noted above, the real challenge is global competition and the worldwide expansion of sovereign AI systems to promote a corresponding value agenda; on the external global market, AI "with filters" will obviously lose out to systems without them.
Another, more complex approach is to teach a sovereign AI system to "think patriotically", so that it can independently find logical arguments against undesirable narratives. This is probably also possible, but it requires thoughtfulness from the system's human operators and consideration of more than mere mercantile factors.
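For readers curious how such "education" differs technically from filtering: post-training typically means further optimising the model's weights, for instance on preference data. Below is a minimal sketch of one widely used preference-optimisation loss (DPO, Direct Preference Optimization); the author does not name a specific method, so this is only an assumed example, and the toy tensors are invented for illustration:

```python
# Minimal sketch of a DPO-style preference loss: the kind of post-training
# that steers what a model argues, rather than filtering its outputs after
# the fact. Toy tensors stand in for real model log-probabilities; nothing
# here reflects any actual Russian or Chinese system.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Push the policy to prefer 'chosen' completions over 'rejected' ones,
    measured relative to a frozen reference model."""
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Toy example: summed log-probs of preferred vs. dispreferred answers
# under the policy being trained and under the frozen reference model.
policy_chosen = torch.tensor([-12.0, -9.5])
policy_rejected = torch.tensor([-11.0, -10.0])
ref_chosen = torch.tensor([-12.5, -9.8])
ref_rejected = torch.tensor([-10.8, -10.1])

print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))
```

The design difference matters for the competitiveness point above: a filter blocks answers after the fact, while preference optimisation changes what the model says in the first place, including which arguments it reaches for.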
This approach, however, may present challenges of its own. For an artificial intelligence system, "thinking" means rationally sorting through a vast array of diverse data and selecting the optimal options. And here the purely cold rationality of AI "thinking" may, at a certain point, conflict with the "sovereign patriotic impulse" imparted to it from outside by its operators. What such a contradiction might lead to for the AI itself is, again, a question for the psychology of future AI. For now, even at our mortal human level, we note that we have already touched on the contradiction between expert rationality, on the one hand, and political will and patriotic impulse, on the other, in relation to forecasting international relations: the most radical events in the world, seemingly impossible from the standpoint of expert rationality, nevertheless occur thanks to a value-based impulse and will. How artificial intelligence, even in sovereign form, will behave at such radical historical crossroads, at such bifurcation points, is, in our view, difficult to say. Of course, it could simply be disconnected from the power grid at such moments.
Overall, we see that the task of creating sovereign and effective large language models is complex, raising questions related not only to management, or even to politics and ideology, but also to the ethics and psychology of AI, which only yesterday seemed like pure science fiction. Clearly, we are only at the beginning of this journey. How it will develop remains to be seen.