

Sovereign AI: The age of AI soft power


By Ren Ito, Project Syndicate, Exclusive to the Sunday Times in Sri Lanka 

TOKYO – AI leaders like OpenAI and DeepMind see themselves as being in a race to build artificial general intelligence (AGI): a model capable of performing any intellectual task that a human can. At the same time, the US and Chinese governments see the AI race as a national-security priority that demands massive investments reminiscent of the Manhattan Project. In both cases, AI is seen as a new form of “hard power”, accessible only to superpowers with vast computational resources and the means to convert them into economic and military dominance.

But this view is incomplete and increasingly outdated. Since the Chinese developer DeepSeek launched its lower-cost, competitively performing model earlier this year, we have been in a new era. No longer is the ability to build cutting-edge AI tools confined to a few tech giants. Multiple high-performing models have emerged around the world, showing that AI’s true potential lies in its capacity to extend soft power.

The era of “bigger-is-better” models ended in 2024. Since then, model superiority has not been determined solely by scale (based on ever more data and computing power). DeepSeek proved not only that top-tier models can be built without massive capital but also that introducing advanced development techniques can radically accelerate AI progress globally. Dubbed the “Robin Hood of AI democratisation”, DeepSeek sparked a wave of innovation with its decision to go open-source.

The OpenAI monopoly (or oligopoly of a few companies) that prevailed just a few months ago has given way to a multipolar, highly competitive landscape. Alibaba (Qwen) and Moonshot AI (Kimi) in China have since released powerful open-source models, Sakana AI (my own company) in Japan has open-sourced AI innovations, and the US giant Meta is investing heavily in its open-source Llama programme, aggressively recruiting AI talent from other industry leaders.

Boasting state-of-the-art model performance is no longer sufficient to meet the needs of industrial applications. Consider AI chatbots: they can give “70-point” answers to general questions, but they cannot achieve the “99-point” precision or reliability needed for most real-world tasks—from loan evaluations to production scheduling—that rely heavily on the collective know-how shared among experts. The old framework in which foundation models were considered in isolation from specific applications has reached its limits.

Real-world AI must handle interdependent tasks, ambiguous procedures, conditional logic, and exception cases—all messy variables that demand tightly integrated systems. Accordingly, model developers must take more responsibility for the design of specific applications, and app developers must engage more deeply with the foundational technology.

Such integration matters for the future of geopolitics no less than it does for business. This is reflected in the concept of “sovereign AI”, which calls for reducing one’s dependence on foreign technology suppliers in the name of national AI autonomy. Historically, the concern outside the United States has been that outsourcing critical infrastructure—search engines, social media, smartphones—to giant Silicon Valley firms produces persistent digital trade deficits. Were AI to follow the same path, the economic losses could grow exponentially. Moreover, many worry about “kill switches” that could shut off foreign-sourced AI infrastructure at any time. For all these reasons, domestic AI development is now seen as essential.

But sovereign AI doesn’t have to mean that every tool is domestically built. In fact, from a cost-efficiency and risk-diversification perspective, it is still better to mix and match models from around the world. The true goal of sovereign AI should not merely be to achieve self-sufficiency but to amass AI soft power by building models that others want to adopt voluntarily.

Traditionally, soft power has referred to the appeal of ideas like democracy and human rights, cultural exports like Hollywood films, and, more recently, digital technologies and platforms like Facebook, or even more subtly, different apps like WhatsApp or WeChat that shape cultures through daily habits. When diverse AI models coexist globally, the most widely adopted ones will become sources of subtle yet profound soft power, given how embedded they will be in people’s everyday decision-making.

From the perspective of AI developers, public acceptance will be critical to success. Many potential users are already wary of Chinese AI systems (and US systems as well), owing to perceived risks of coercion, surveillance, and privacy violations, among other hurdles to widespread adoption. It is easy to imagine that, in the future, only the most trustworthy AIs will be fully embraced by governments, businesses, and individuals. If Japan and Europe can offer such models and systems, they will be well placed to earn the confidence of the Global South—a prospect with far-reaching geopolitical implications.

Trustworthy AI isn’t just about eliminating bias or preventing data leaks. In the long run, it must also embody human-centred principles—enhancing, not replacing, people’s potential. If AI ends up concentrating wealth and power in the hands of a few, it will deepen inequality and erode social cohesion.

The story of AI has only just begun, and it need not become a “winner-takes-all” race. But in both the ageing northern hemisphere and the youthful Global South, AI-driven inequality could create lasting divides. It is in developers’ own interest to ensure that the technology is a trusted tool of empowerment, not a pervasive instrument of control.

 

(Ren Ito, a former Japanese diplomat, is co-founder of Sakana AI.)

Copyright: Project Syndicate, 2025. www.project-syndicate.org

 
