AI pioneer Yann LeCun sparked an animated discussion today after telling the next generation of developers not to work on large language models (LLMs).
“This is in the hands of big companies, you can't bring anything to the table,” LeCun said at VivaTech in Paris today. “You should be working on next-generation AI systems that overcome the limitations of LLMs.”
The comments from Meta's chief AI scientist and NYU professor quickly drew a flood of questions and touched off a conversation about the limitations of current LLMs.
Faced with questions and pushback, LeCun elaborated (sort of) on X (formerly Twitter): “I'm working on next-generation AI systems myself, not on LLMs. So technically I'm telling you, 'compete with me,' or better said, 'work on the same thing as me, because that's the right thing to do, and the [m]ore the better!'”

Without more specific examples provided, many X users wondered what “next-gen AI” means and what an alternative to LLMs could be.

Developers, data scientists and AI experts offered a wide range of options in X threads and sub-threads: boundary-driven or discriminative AI, multitasking and multimodality, categorical deep learning, energy-based models, more targeted small language models, niche use cases, custom fine-tuning and training, state-space models, and hardware for embodied AI. Some also suggested exploring Kolmogorov-Arnold networks (KANs), a recent breakthrough in neural networks.

One user mentioned five next-generation AI systems:
- Multimodal AI.
- Reasoning and general intelligence.
- Embodied AI and robotics.
- Unsupervised and self-supervised learning.
- Artificial general intelligence (AGI).

Another said that “every student should start with the basics,” including:
- Statistics and probability.
- Data management, cleansing and transformation.
- Classical pattern recognition such as naive Bayes, decision trees, random forest and bagging.
- Artificial neural networks.
- Convolutional neural networks.
- Recurrent neural networks.
- Generative AI.

Dissenters, on the other hand, argued that this is a perfect time for students and others to work on LLMs precisely because their applications are still “barely utilized.” For example, there is still a lot to learn when it comes to prompting, jailbreaking and accessibility.

Others, of course, pointed to Meta's own prolific LLM build-up and suggested that LeCun was subversively trying to stifle the competition.
“When the head of AI at a major company says, 'Don't try to compete, there's nothing you can bring to the table,' that makes me want to compete,” another user quipped.

LLMs will never achieve human-level intelligence
A champion of objective-driven AI and open-source systems, LeCun also told the Financial Times this week that LLMs have a limited understanding of logic and will not achieve human-level intelligence.
They “do not understand the physical world, have no lasting memory, cannot reason in any reasonable definition of the term, and cannot plan . . . hierarchically,” he said.
Meta recently released its Video Joint Embedding Predictive Architecture (V-JEPA), which can detect and understand highly detailed object interactions. The architecture is what the company calls the “next step toward Yann LeCun's vision of advanced machine intelligence (AMI).”
Many share LeCun's sentiments about the shortcomings of LLMs. The X account for the AI chat app Wildlife today called LeCun's comments a “great take,” as closed-loop systems have “huge limitations” when it comes to flexibility. “Whoever creates an AI with a prefrontal cortex and the ability to absorb information through open-ended self-training will likely win a Nobel Prize,” they claimed.

Others described the industry's “overt fixation” on LLMs, calling it “a dead end in achieving real progress.” Still others noted that LLMs are nothing more than “connective tissue that stitches systems together quickly and efficiently,” much like telephone operators, before handing off to the appropriate AI.


Reviving old rivalries
Of course, LeCun has never been one to shy away from debate. Many may remember the extensive, heated back-and-forth between him and fellow AI godfathers Geoffrey Hinton, Andrew Ng, and Yoshua Bengio over the existential risks of AI (LeCun is in the “it's overblown” camp).
At least one industry watcher pushed back on this stark clash of opinions, pointing to a recent interview in which Hinton, the British computer scientist, recommended going all-in on LLMs. Hinton has also argued that the AI “brain” is very close to the human brain.
“It's interesting to see the basic disagreement here,” the user said.
It is a disagreement unlikely to be reconciled anytime soon.
