A Yann LeCun–Linked Startup Charts a New Path to AGI
If you ask Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will bring us to artificial general intelligence (AGI), the threshold where computers match or surpass human intelligence. Everyone, he explained in a recent interview, is "LLM-pilled."
On January 21, San Francisco-based startup Logical Intelligence appointed LeCun to its board. Building on a theory LeCun proposed two decades earlier, the startup claims to have developed a different form of AI, one better equipped to learn, reason, and self-correct.
Logical Intelligence has developed what is known as an energy-based reasoning model (EBM). While LLMs effectively predict the most likely next word in a sequence, EBMs take in a set of constraints—for example, the rules of sudoku—and complete a task within those limits. This method is intended to eliminate errors and require much less computation, because there is less trial and error.
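As a loose illustration of the energy-based framing (a toy sketch, not Logical Intelligence's actual architecture), one can score every candidate solution by how many constraints it violates; a correct solution sits at the global minimum, energy zero, so checking and correcting an answer reduces to driving that score down:

```python
# Toy illustration of an energy-based view of reasoning: assign each
# candidate 4x4 sudoku grid an "energy" equal to the number of rule
# violations it contains. A valid solution has energy 0.
# (Illustrative sketch only; not the company's implementation.)

def energy(grid):
    """Count duplicate-digit violations across rows, columns,
    and 2x2 boxes of a 4x4 sudoku grid."""
    n = 4
    units = []
    units += [list(row) for row in grid]                          # rows
    units += [[grid[r][c] for r in range(n)] for c in range(n)]   # columns
    for br in (0, 2):                                             # 2x2 boxes
        for bc in (0, 2):
            units.append([grid[br + i][bc + j]
                          for i in range(2) for j in range(2)])
    # Each duplicate digit within a unit adds one unit of energy.
    return sum(len(unit) - len(set(unit)) for unit in units)

solved = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [2, 1, 4, 3],
    [4, 3, 2, 1],
]
flawed = [row[:] for row in solved]
flawed[0][0] = 2   # introduce a duplicate in a row, column, and box

print(energy(solved))  # → 0
print(energy(flawed))  # → 3
```

In this framing, solving means searching for a zero-energy configuration rather than sampling one token at a time, which is one way to see why verification and self-correction come cheaply to such models.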
The startup's debut model, Kona 1.0, can solve sudoku puzzles many times faster than the world's leading LLMs, despite running on just one Nvidia H100 GPU, according to founder and CEO Eve Bodnia, in an interview with WIRED. (In this test, the LLMs are blocked from using coding capabilities that allow them to “brute force” the puzzle.)
Logical Intelligence claims to be the first company to have built a working EBM, until now just a flight of academic fancy. The idea is for Kona to tackle problems such as optimizing energy grids or automating sophisticated production processes, in settings with no tolerance for error. “None of these tasks are connected with language. It is anything but language,” says Bodnia.
Bodnia expects Logical Intelligence to work closely with AMI Labs, a Paris-based startup recently launched by LeCun, which is developing yet another form of AI: a so-called world model, intended to perceive the physical world, retain persistent memory, and anticipate the outcomes of its actions. The road to AGI, says Bodnia, starts with the layering of these different types of AI: LLMs will interface with humans in natural language, EBMs will take on reasoning tasks, and world models will help robots take action in 3D space.
Bodnia spoke to WIRED this week via video conference from her office in San Francisco. The following interview has been edited for clarity and length.
WIRED: I was going to ask about Yann. Tell me about how you met, his part in driving research at Logical Intelligence, and what his role on the board will entail.
Bodnia: Yann has a lot of experience from the academic end as a professor at New York University, but he has been exposed to real industry for many, many years through Meta and other collaborators. He has seen both worlds.
For us, he is the only expert in energy-based models and various types of associated architecture. When we started working on this EBM, he was the only person I could talk to. He helps our technical team to navigate certain directions. He's been very, very handy. Without Yann, I can't imagine us scaling this quickly.
Yann has been outspoken about the potential limitations of LLMs and about which model architectures are most likely to advance AI research. Where do you stand?
LLMs are a big guessing game. That's why you need a lot of calculations. You take a neural network, feed it pretty much all the garbage from the internet, and try to teach it how people communicate with each other.
When you speak, your language is intelligent to me, but not because of the language. Language is a manifestation of everything that is in your brain. My reasoning happens in a kind of abstract space that I decode into language. I feel that people try to reverse intelligence by imitating intelligence.