What stands in the way of AGI — Demis Hassabis names three barriers

The Google DeepMind CEO and Nobel laureate explained why current systems still fall short of artificial general intelligence.

Author: Michael Kokin

Speaking at the AI Summit in New Delhi, Google DeepMind CEO Demis Hassabis laid out what still separates today's systems from artificial general intelligence. A year ago he predicted AGI within 5–10 years; this week he named three barriers that stand in the way and proposed a solution for each.

1. Continual Learning

Problem: Modern AI models "freeze" after training — they don't learn from new experiences in real time. ChatGPT, for example, undergoes training and then launches without further development. To add new knowledge, companies have to retrain the model from scratch — a process costing millions of dollars and taking months. Researchers call this "loss of plasticity" and "catastrophic forgetting."

Solution: Hassabis sees the answer in architectures with external memory and local weight updates that would let systems change over time without destroying old knowledge. It's similar to how AlphaGo learned by playing itself, but adapted for language models.
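The external-memory idea can be illustrated with a toy sketch: the base model's weights stay frozen, while new facts are written to a key-value store at inference time and retrieved by similarity. Everything here (the `ExternalMemory` class, the vector format, the similarity threshold) is a hypothetical illustration, not how DeepMind implements it.

```python
import math

class ExternalMemory:
    """Toy external memory: new knowledge is added without touching
    the frozen model weights, so old knowledge cannot be destroyed."""

    def __init__(self):
        self.keys = []    # embedding vectors for stored facts
        self.values = []  # the facts themselves

    def write(self, key_vec, value):
        # Online update: no retraining, no catastrophic forgetting.
        self.keys.append(list(key_vec))
        self.values.append(value)

    def _cosine(self, a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def read(self, query_vec, threshold=0.8):
        # Retrieve the closest stored fact by cosine similarity,
        # or nothing if no fact is similar enough.
        if not self.keys:
            return None
        sims = [self._cosine(k, query_vec) for k in self.keys]
        best = max(range(len(sims)), key=lambda i: sims[i])
        return self.values[best] if sims[best] >= threshold else None

mem = ExternalMemory()
mem.write([1.0, 0.0, 0.0], "fact learned after deployment")
print(mem.read([0.9, 0.1, 0.0]))  # → fact learned after deployment
```

Writing to the store costs one append, not a multi-month retraining run, which is the point of the architecture.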

2. Long-term Planning

Problem: AI handles tactical tasks in the here and now but can't plan years ahead. Modern LLMs simply predict the next word without the strategic thinking needed to build complex multi-step plans in a changing environment (e.g., on financial markets).

Solution: Implementing "World Models." Hassabis proposes adding a planning module (like in AlphaZero) to language models that would let AI run through future scenarios and consequences of its actions in its "head" before making a move in reality.
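The "run through scenarios in its head" idea is, at its core, lookahead search over a learned world model. A minimal sketch, with a hand-written model and reward standing in for learned ones (all names and the toy world are assumptions for illustration):

```python
from itertools import product

def plan(state, actions, model, reward, depth=3):
    """Lookahead planning: simulate every action sequence up to `depth`
    steps with the world model, then commit only to the first action
    of the best imagined trajectory."""
    best_action, best_return = None, float("-inf")
    for seq in product(actions, repeat=depth):
        s, total = state, 0.0
        for a in seq:
            s = model(s, a)     # imagined transition, not a real move
            total += reward(s)  # score the imagined outcome
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

# Toy world: the state is a number, the goal is to reach 10.
model = lambda s, a: s + a
reward = lambda s: -abs(10 - s)
print(plan(state=0, actions=[-1, 1, 2], model=model, reward=reward))  # → 2
```

Systems like AlphaZero replace this brute-force enumeration with guided tree search (MCTS), but the principle is the same: consequences are evaluated in simulation before any real action is taken.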

3. Inconsistency — "Jagged Frontier"

Problem: Systems win gold medals at the International Mathematical Olympiad but stumble on elementary everyday questions. A recent example: when asked "The car wash is 50 meters from my house, should I walk or drive?" AI often suggests walking for health, forgetting that you need to bring your car along.

Solution: Moving away from static benchmarks that models simply "memorize" toward dynamic environments (Game Arena). Also creating hybrid systems where neural network intuition is backed by hard symbolic logic (Search/Reasoning) to eliminate silly mistakes on simple tasks.
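The hybrid neural–symbolic pattern can be sketched in a few lines: a fast but fallible "intuition" proposes an answer, and a hard symbolic check either confirms it or overrides it. The sloppy model below is a hypothetical stand-in, not real model behavior:

```python
def hybrid_answer(args, neural_guess, symbolic_check):
    """Guardrail against the jagged frontier: accept the model's fast
    guess only if an exact symbolic computation agrees; otherwise
    return the symbolic result."""
    guess = neural_guess(*args)
    exact = symbolic_check(*args)
    return guess if guess == exact else exact

# Hypothetical model: brilliant on hard tasks, occasionally sloppy
# on trivially easy ones.
def sloppy_multiply(a, b):
    return a * b + (1 if (a, b) == (7, 8) else 0)  # a "silly mistake"

print(hybrid_answer((7, 8), sloppy_multiply, lambda a, b: a * b))    # → 56
print(hybrid_answer((12, 12), sloppy_multiply, lambda a, b: a * b))  # → 144
```

In practice the symbolic side would be a theorem prover, calculator, or search procedure rather than a lambda, but the division of labor (neural proposal, symbolic verification) is the same.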

Context
Hassabis founded DeepMind in 2010 (Google acquired the lab in 2014). In 2024 he received the Nobel Prize in Chemistry for AlphaFold — a protein structure prediction system. By his current estimates, true AGI is 5–8 years away.