Revolution Inside AI

As the world fixates on large language models, a quieter shift toward world models, scientific AI and safety is reshaping global power

Sundeep Waslekar

A quiet transformation is under way inside artificial intelligence.

Editor’s Note: This column marks the final instalment of Sundeep Waslekar’s monthly series for Founding Fuel. Over the past year, his essays have traced how artificial intelligence is reshaping geopolitics, governance, and global power. In this concluding piece, he turns inward—to the quiet revolution unfolding within AI itself—and reflects on what it may mean for science, safety, and sovereignty in the years ahead.

When I began this column in January 2025, large language models (LLMs) were the talk of the town. As the year unfolded, the public debate focused almost entirely on how AI would reshape jobs, creativity, and the global economy.

But outside a small circle of AI scientists, few noticed that a quieter and far more consequential revolution was taking place inside artificial intelligence itself. Many of the godfathers of modern AI—those who laid the foundations of machine learning and general-purpose intelligence—were either stepping away from LLMs or deliberately staying away from them.

For most users, AI has become synonymous with large language models. The scientists who helped create them see things differently.

Yann LeCun, a Turing Award winner who recently announced he was exiting as Meta’s chief AI scientist, has repeatedly argued that LLMs lack grounding in the physical world, causal understanding, persistent memory, and goal-directed planning. In his view, scaling language models alone cannot produce genuine intelligence. Instead, LeCun has long advocated the development of what he calls “world models”—and is now set to build one in his new venture.

Geoffrey Hinton, often described as a godfather of deep learning, has moved in a different direction. Rather than championing scale, he has focused on warning about the risks posed by advanced AI systems, emphasising that more parameters do not automatically translate into understanding or reasoning.

Yoshua Bengio, the third member of the Turing Award-winning trio with Hinton and LeCun, has increasingly devoted his attention to AI safety, alignment, and governance. He has expressed deep doubts that current architectures capture agency or reasoning in any meaningful sense.

Fei-Fei Li, former chief scientist of AI at Google Cloud, had already begun building research programmes around world-model-like approaches well before this debate entered the mainstream.

Demis Hassabis, CEO of Google DeepMind and Nobel laureate in chemistry, has never presented LLMs as sufficient for general intelligence. DeepMind’s most consequential breakthroughs—such as AlphaGo and AlphaFold—relied on planning, search, and structured reasoning, not language generation alone.

For non-specialists, a “world model” can be understood as an internal map of how the world works. Humans constantly build such models. We know that if we push a glass off a table, it will fall and break. If we leave food out, it will spoil. We learn this not from text, but from interacting with the world, observing cause and effect, and updating our expectations.

In AI, a world model is a system that learns the structure of reality—objects, relationships, physical laws, and the consequences of actions—so that it can simulate outcomes before acting. Instead of predicting the next word in a sentence, such systems predict what will happen next in an environment.

A useful metaphor is a flight simulator rather than a typewriter. A language model completes sentences. A world-model-based system runs mental simulations: If I do X, Y is likely to happen. This approach is critical in robotics, scientific discovery, climate modelling, and complex decision-making, where understanding causality matters more than fluent language.
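For readers who want a concrete picture, here is a deliberately minimal sketch in Python of that “simulate before acting” loop. The environment, the hidden drift, and the function names are all invented for illustration; no real world-model system is this simple, but the logic—learn the dynamics from interaction, then mentally try actions before committing to one—is the same.

```python
# Toy illustration of the world-model idea: learn how the environment
# behaves, then simulate actions internally before choosing one.
# All names and dynamics here are invented for illustration only.
import random

TRUE_DRIFT = 0.1          # hidden "physics" the agent does not know in advance

def environment_step(position, action):
    """Ground-truth dynamics: the real world the agent interacts with."""
    return position + action + TRUE_DRIFT

# 1. Learn a crude world model from interaction: observe cause (action)
#    and effect (change in position) and estimate the hidden drift.
observed_errors = []
position = 0.0
for _ in range(100):
    action = random.choice([-1.0, 0.0, 1.0])
    new_position = environment_step(position, action)
    observed_errors.append(new_position - (position + action))
    position = new_position
learned_drift = sum(observed_errors) / len(observed_errors)

def imagine_step(position, action):
    """The agent's internal model: predicts what will happen next."""
    return position + action + learned_drift

# 2. Plan by simulation: before acting, mentally try each action and
#    pick the one whose predicted outcome lands closest to the goal.
goal, position = 5.0, 0.0
for step in range(8):
    best_action = min([-1.0, 0.0, 1.0],
                      key=lambda a: abs(imagine_step(position, a) - goal))
    position = environment_step(position, best_action)
    print(f"step {step}: chose {best_action:+.0f}, now at {position:.2f}")
```

Real world models do the same thing at vastly greater scale, learning dynamics from video, sensor streams, and simulation rather than a single hidden number.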

This is why many scientists increasingly see world models as the foundation for more reliable and controllable AI—especially in safety-critical domains—while viewing LLMs as powerful interfaces layered on top, not the core of intelligence itself.

China’s parallel shift

A similar intellectual divergence has been unfolding in China.

Song-Chun Zhu, a leading AI researcher, returned to China from California and publicly argued that the “big-data, small-task” LLM path is unlikely to lead to genuine intelligence. He has pushed instead for a “small-data, big-task” research programme focused on cognitive architectures, causal reasoning, and embodied, world-model-style approaches.

Zhou Zhihua, one of China’s most cited machine-learning researchers, has also warned against an exclusive reliance on scale. In published commentaries and technical forums, he has emphasised both the limits and ethical risks of pursuing strong AI through scale alone, and has urged diversified research agendas that include symbolic and causal methods alongside stronger safety governance.

In 2025, the Chinese Academy of Sciences unveiled ScienceOne, a purpose-built “AI for science” platform developed jointly by more than a dozen institutes. Unlike generic LLMs, ScienceOne is designed to work with scientific modalities such as waveforms, spectra, and fields; to orchestrate specialised tools; and to run agentic workflows for experiments and literature review. It marks a deliberate move away from consumer-oriented LLM productisation toward domain-specific scientific intelligence.

Teams led by researchers such as Gao Caixia at the Institute of Genetics and Developmental Biology illustrate this shift in practice. Rather than fine-tuning general-purpose language models, they are building specialised pipelines for protein design, particle simulation, and tool orchestration—combining simulators, symbolic reasoning modules, and agent-based systems.

The global turn to AI safety

China has also moved more decisively on the question of extreme risks from advanced AI. In early 2025, Chinese experts largely framed AI as a tool for public good, keeping discussion of existential risks muted. When I attended a technology conference in Shanghai in April this year, scientists spoke of such risks only in private. A day later, President Xi addressed the Politburo and called for legal and technical mechanisms to prevent the unprecedented risks posed by AI.

Within months, Concordia AI and the Shanghai AI Laboratory released a framework aimed at managing extreme AI risks. By September, China’s national standardisation body had advanced a mix of mandatory and voluntary AI safety standards covering generative models, security evaluation, and dynamic risk classification.

When Yoshua Bengio presented the International AI Safety Report, which he chaired, in February 2025, the initial response across much of the Global South was lukewarm. Countries such as China, India and the UAE viewed it as a Western narrative designed to constrain their growth. Over the course of the year, however, that position began to shift.

G42, the main investment vehicle of Abu Dhabi’s ruling family, presented its own frontier AI safety framework. South Korea and Brazil launched parliamentary debates on AI governance, including the prevention of extreme risks. In November, India issued AI governance guidelines—largely voluntary in nature.

The growing salience of AI safety has also begun to fracture Western politics. Steve Bannon signed a statement, alongside Geoffrey Hinton and other leading scientists, calling for restrictions on the development of superintelligence, triggering unease within parts of the MAGA movement. At the same time, other factions pushed in the opposite direction, contributing to pressure on the European Union to dilute elements of its AI Act. These tensions suggest that AI safety is likely to become a major fault line within the Western alliance in 2026.

A new global hierarchy

As parts of the West moved beyond consumer-facing LLMs toward scientific and safety-oriented AI, major American technology companies ramped up investments in data centres across India and the Middle East.

LLMs will not disappear. They are likely to sit at the lower end of the AI value chain, supported by energy- and water-intensive data centres in the Global South. At the upper end, control over scientific discovery, advanced models, and foundational architectures may increasingly rest with a small group of countries—the United States, China, South Korea, and potentially Japan.

If current trends persist, India and several Middle Eastern countries risk being locked into roles as suppliers of data, labour, and energy, while remaining dependent on external powers for the most critical layers of AI. Sovereignty, safety, and long-term strategic autonomy would then come under strain.

The year 2025 may ultimately be remembered as the moment when the foundations of a new techno-colonial hierarchy were laid—quietly, and largely unnoticed.


About the author

Sundeep Waslekar

President

Strategic Foresight Group

Sundeep Waslekar is a thought leader on the global future. He has worked with sixty-five countries under the auspices of the Strategic Foresight Group, an international think tank he founded in 2002. He is a senior research fellow at the Centre for the Resolution of Intractable Conflicts at Oxford University. A practitioner of Track Two diplomacy since the 1990s, he has mediated in conflicts in South Asia, in dialogues between Western and Islamic countries on deconstructing terror, and in trans-boundary water disputes, and is currently facilitating a nuclear risk reduction dialogue between permanent members of the UN Security Council. He was invited to address United Nations Security Council session 7818 on water, peace and security. He has been quoted in more than 3,000 media articles from eighty countries. Waslekar read Philosophy, Politics and Economics (PPE) at Oxford University from 1981 to 1983. He was conferred a D. Litt. (Honoris Causa) by Symbiosis International University, presented by the President of India, in 2011.