Published on 11.03.2026
TLDR: Yann LeCun and collaborators have launched AMI Labs in Europe, proposing Superhuman Adaptable Intelligence as an alternative to the current LLM-centric approach. The focus is on world models, self-supervised learning, and physical AI, which they argue is necessary for the coming Machine Economy.
Look, I have been watching the AI space long enough to know that when someone with a Turing Award starts building something fundamentally different from the prevailing narrative, you pay attention. That is exactly what is happening with AMI Labs, and the article from AI Supremacy digs into why this matters and what it signals about where AI is actually headed.
The core thesis here is straightforward but important: large language models, for all their impressive capabilities, are not sufficient for what comes next. The author has been covering AI for over four years and takes a self-described contrarian and realist stance, which honestly is refreshing in a landscape dominated by breathless AGI marketing. The piece draws a clear line between the marketing-driven AGI narrative, largely championed by Sam Altman and OpenAI, and a more grounded alternative that Yann LeCun and his collaborators are proposing. Instead of chasing the nebulous goal of AGI, AMI Labs is pursuing what they call Superhuman Adaptable Intelligence, or SAI. This reframing is not just semantic. It represents a fundamentally different technical direction built on self-supervised learning from unlabeled data and world models that enable planning and zero-shot transfer.
What strikes me about this is the emphasis on world models. The idea that AI agents need to predict the consequences of their actions before taking them is not new, but AMI Labs is putting real institutional weight behind it. Michael Rabbat, quoted in the piece, makes the case that planning, memory, and reasoning about complex observations all require these world models. If you think about it, this is what separates a language model that can generate plausible text from a system that can actually operate in the physical world. And that is where the second big theme comes in: Physical AI. The article argues that 2027 will mark the beginning of a Physical AI era, with startups focused on robotics, automation, and real-world interaction rather than just text generation.
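To make the distinction concrete, here is a minimal toy sketch of the core world-model idea as I understand it: an agent scores candidate actions by simulating their consequences with a dynamics model, then acts on the best prediction rather than reacting directly. Everything here (the 1-D state, the action names, the functions) is invented for illustration and is not AMI Labs code.

```python
# Toy world-model planning loop: the agent predicts the outcome of each
# candidate action before acting, instead of generating a reaction.
# All names are illustrative, not from the article or AMI Labs.

def world_model(state, action):
    """Predict the next state on a 1-D line for a given action.
    In a real system this would be a learned dynamics model."""
    return state + {"left": -1, "right": +1, "stay": 0}[action]

def plan(state, goal, actions=("left", "right", "stay")):
    """Pick the action whose *predicted* outcome lands closest to the goal."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

state, goal = 0, 3
trajectory = [state]
while state != goal:
    state = world_model(state, plan(state, goal))  # act on the prediction
    trajectory.append(state)

print(trajectory)  # → [0, 1, 2, 3]
```

The point of the sketch is the loop structure, not the trivial dynamics: swap in a learned model and a richer state space, and the same predict-then-act pattern is what planning over a world model looks like.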
Now, here is where I think the article could push harder. The criticism of LLMs from people like LeCun and Gary Marcus is well-documented, but the piece does not spend enough time on the practical bridging question: how do you get from where we are today, with LLMs deeply embedded in production systems, to this world model future? That transition story is the hard part. It is one thing to say we need world models for planning and reasoning. It is another to explain how companies currently investing heavily in LLM infrastructure should think about hedging their bets or gradually shifting. The article also mentions a wave of Physical AI startups like Prometheus Project, Core Automation, and World Labs, but does not interrogate their business models or timelines in any depth. The enthusiasm is warranted, but the execution details matter enormously.
For architects and engineering leaders, this is a signal worth taking seriously. If your team is building AI-powered systems today, the takeaway is not to abandon LLMs, but to start thinking about what a world model layer might look like in your architecture. Consider how your systems might need to evolve to support agents that plan and reason rather than just generate. The organizations that start experimenting with these complementary approaches now, even at a small scale, will be better positioned when this next wave matures. Europe positioning itself as a hub for this alternative AI research through AMI Labs also has strategic implications for teams thinking about where AI talent and partnerships will concentrate in the coming years.
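As a thought experiment only, here is one hypothetical shape such a "world model layer" could take in an existing LLM-based stack: the LLM proposes candidate actions, and a world-model component vets them by simulating outcomes before anything is executed. Every name, interface, and type below is an assumption of mine, not something described in the article.

```python
# Hypothetical interface sketch (all names invented): an LLM-style
# proposer paired with a world-model layer that simulates and scores
# candidate actions before one is chosen.

from abc import ABC, abstractmethod

class WorldModel(ABC):
    @abstractmethod
    def predict(self, state: dict, action: str) -> dict:
        """Predict the next state if `action` were taken."""

    @abstractmethod
    def score(self, state: dict, goal: dict) -> float:
        """How desirable `state` is relative to `goal` (higher is better)."""

class PlanningAgent:
    """The proposer (e.g. an LLM call) generates candidates; the world
    model filters them by predicted consequence rather than plausibility."""

    def __init__(self, propose_actions, model: WorldModel):
        self.propose_actions = propose_actions
        self.model = model

    def act(self, state: dict, goal: dict) -> str:
        candidates = self.propose_actions(state)
        return max(
            candidates,
            key=lambda a: self.model.score(self.model.predict(state, a), goal),
        )
```

The design choice worth noticing is the separation of concerns: the proposer can stay a black box you already operate today, while the world model is an independently testable component you can introduce incrementally, which is roughly the hedging posture the paragraph above describes.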