Why the “Godfather of AI” is Moving Beyond Chatbots
It's time for your coffee break! Step inside the race to build World Models: why Yann LeCun is betting against LLMs, and how Ami Labs plans to bridge the gap between digital text and physical agency.
The news hit like a bombshell: Yann LeCun is (partially) leaving the comfort of Meta to launch Ami Labs, with the massive ambition of raising €500M to prove we’ve been heading down the wrong path for the past three years. As Silicon Valley continues to pour billions into ever-larger language models, an intellectual rift is opening up. On one side, the proponents of “more data, more GPUs.” On the other, those who believe we’ve hit a glass ceiling.
1. The Smoke and Mirrors of LLMs
Before diving into the future, let’s recall what an LLM (Large Language Model) actually is. Simply put, imagine an ultra-sophisticated version of your smartphone’s auto-complete. These generative models learn to statistically predict the next word based on a massive context. While they excel at coding or mathematics through imitation, they are fundamentally incapable of learning on their own: in principle, they merely regurgitate statistical regularities without ever grasping the deep meaning of what they write. This is the great paradox of our time: although these models appear to “reason,” researchers still don’t fully understand the internal logic that dictates their answers. For those who want to dig into the core mechanisms, this excellent article by Sean Trott is a must-read. The fundamental problem remains: without interaction with the real world, these models lack essential cognitive abilities like causal reasoning, planning, or a robust long-term memory.
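The auto-complete analogy can be made concrete. Here is a deliberately toy sketch, nothing like a real LLM, which uses neural networks over enormous contexts, but the training objective is the same in spirit: count what tends to follow what, then emit the statistically likeliest continuation.

```python
# Toy "auto-complete": predict the next word purely from statistics.
# Illustrative only; real LLMs learn these probabilities with neural
# networks over huge contexts, but the objective is analogous.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation seen in training."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat" — it followed "the" more often than "mat" or "fish"
```

Note what this model does not have: any notion of what a cat *is*. It captures surface statistics, which is precisely the critique the article develops.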
2. The LeCun Thesis: JEPA Against the World
If you follow AI fundraising, you hear “Transformers” everywhere; it’s the architecture behind ChatGPT. But for Yann LeCun, this is a dead end for reaching true intelligence. At Ami Labs, the keyword is JEPA (Joint Embedding Predictive Architecture), a method that prioritizes learning through observation. Instead of training solely on text, these models use video, sound, and images to model the structure of the visual world, with the ultimate goal of creating systems capable of surviving and acting in the real world, far beyond laboratory demonstrations.
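The core trick behind JEPA can be sketched in a few lines. The point (simplified here with random linear "encoders"; the real architecture is far more elaborate) is that the model predicts the *embedding* of a hidden part of the input from the visible part, rather than reconstructing raw pixels:

```python
# Minimal numpy sketch of the JEPA idea. Illustrative assumptions mine:
# random linear maps stand in for the trained context encoder, target
# encoder, and latent-space predictor.
import numpy as np

rng = np.random.default_rng(0)
D_in, D_emb = 8, 4

W_ctx = rng.normal(size=(D_in, D_emb))    # context encoder (visible patch)
W_tgt = rng.normal(size=(D_in, D_emb))    # target encoder (masked patch)
W_pred = rng.normal(size=(D_emb, D_emb))  # predictor operating in latent space

visible = rng.normal(size=D_in)  # e.g. the left half of an image
hidden = rng.normal(size=D_in)   # e.g. the right half, hidden from the model

z_ctx = visible @ W_ctx   # embed what the model can see
z_tgt = hidden @ W_tgt    # embed what it must anticipate
z_hat = z_ctx @ W_pred    # prediction made entirely in embedding space

# Training signal: distance between predicted and actual embeddings,
# never a pixel-level reconstruction error.
loss = float(np.mean((z_hat - z_tgt) ** 2))
print(loss)
```

Predicting in embedding space lets the model ignore unpredictable surface detail (the exact texture of grass) and focus on abstract structure, which is what “learning through observation” buys you.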
This marks the strong comeback of World Models: architectures designed to simulate and anticipate the consequences of an action in the physical world by understanding causal links. However, the idea wasn’t born yesterday. It is rooted in the work of Herbert Simon and Allen Newell in the 1950s with their General Problem Solver, and was later championed by Jürgen Schmidhuber in the 1990s, alongside researchers like David Ha (now CEO of Sakana AI). LeCun is resurrecting a historical vision with modern means to try to endow AI with actual common sense.
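“Simulate and anticipate the consequences of an action” has a very concrete shape in code. A hypothetical sketch (toy hand-written physics standing in for a learned transition model): the agent rolls the model forward in imagination to compare plans before acting in the real world.

```python
# Illustrative world-model loop (assumptions mine, not Ami Labs code):
# a transition function predicts the next state from (state, action),
# so an agent can evaluate plans purely in simulation.
def transition(state, action):
    """Toy 1-D physics: position/velocity under a thrust and gravity."""
    pos, vel = state
    vel = vel + action - 0.1  # action = upward thrust, 0.1 = gravity pull
    return (pos + vel, vel)

def imagine(state, plan):
    """Roll the model forward to anticipate where a plan leads."""
    for action in plan:
        state = transition(state, action)
    return state

start = (0.0, 0.0)
# Compare two candidate plans without touching the real world:
final_a = imagine(start, [0.2, 0.2, 0.2])  # sustained thrust
final_b = imagine(start, [0.0, 0.0, 0.0])  # do nothing: gravity wins
print(final_a[0] > final_b[0])  # the model predicts plan A ends up higher
```

The hard research question, of course, is learning `transition` from raw video instead of writing it by hand, and that is exactly where the debate in the next section begins.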
3. The Reality Wall: Seeing is Not Understanding
Yet, this technological bet raises legitimate doubts, particularly those expressed by Denis O. and echoed by Gary Marcus. While JEPA helps AI better extract structures from sensory streams, Denis O. argues that “seeing is not understanding.” According to him, JEPA is not the path to general intelligence because it lacks the essentials: internal geometry, physics, and a causal map. Put simply, JEPA has no stabilizing mechanism to rebuild coherence once the world becomes discontinuous or shifts outside its training data. Without structure, there is no real understanding of space, only improved perception. It is a brilliant tool, but is it a true “model of the world”? The question remains open.
4. AGI: From the Age of Scaling to the Age of Research
This questioning of the “all-LLM” approach is shared by another titan: Ilya Sutskever. In his recent interview with Dwarkesh Patel, the OpenAI co-founder stated that we are moving from the “age of scaling” (where adding more chips was enough) to the “age of research.” Sutskever points out a disturbing truth: our current models generalize dramatically worse than humans, a fundamental flaw. For him, AGI (Artificial General Intelligence) won’t be born simply from an accumulation of data, but from a breakthrough in how models process information. The industry is no longer looking for a “Super-Encyclopedia” that can cite every fact on the internet. Instead, as it moves toward World Models, the goal is to build what we might call a “Super-Toddler”: a system that, much like a four-year-old child, possesses the foundational common sense and learning efficiency to understand, plan, and react to the physical world.
💡 What This Means for the Ecosystem
The shift toward World Models moves strategic value toward concrete sectors like robotics, autonomous vehicles, and smart factories. Players like Fei-Fei Li’s World Labs (valued at $1 billion in the blink of an eye) are already in the race with concepts like 3D spatial intelligence. Meanwhile, giants like Nvidia and Google DeepMind are investing heavily to connect AI vision directly to robotic action.
But beware: these models are energy monsters. A model like Sora requires thousands of GPUs to run, and video-based training is sure to keep copyright lawyers busy for years.
The Takeaway: Ami Labs isn’t just another AI startup. It is the symbol of an industry realizing that talking is no longer enough. To become truly intelligent, AI must finally confront the harsh reality of the physical world. But as we grant machines the agency to act within our reality, the ethical stakes shift from digital misinformation to physical safety, raising the ultimate question: are we ready for an intelligence that understands the laws of gravity, but not necessarily the weight of human responsibility?
Want to track the AI deals of last week and access our full database?
👉 Click here to find last week’s full database of European Deals.
👉 Click here for the full scoop on last week’s European VC & Tech news.
📚 Sources & Further Reading
Large language models, explained with a minimum of math and jargon — Sean Trott
[FR] “We don’t have a robot capable of doing what a 5- or 6-year-old child can”: “world models,” the new frontier of AI — Les Echos
Yann LeCun’s New Startup AMI Labs: Can World Models Move Beyond Hype? — Forbes
Why Yann LeCun Believes AI Needs World Models, Not Just Language Models — Anika Dobrev
Breaking: Marcus weighs in (mostly) for LeCun — Gary Marcus
Ilya Sutskever – We’re moving from the age of scaling to the age of research — Dwarkesh Patel



