LeCun Predicts the End of Today's AI Darlings: "Nobody in Their Right Mind Will Use Them"

[Image: Fractal error propagation in auto-regressive AI systems visualized through prismatic light patterns, illustrating LeCun's mathematical critique of LLMs]

"My prediction is that auto-regressive LLMs are doomed. A few years from now, nobody in their right mind will use them," declares Yann LeCun, Meta's Chief AI Scientist, challenging the very foundation of today's most celebrated AI systems.

This striking assessment comes during LeCun's prestigious Josiah Willard Gibbs Lecture at the American Mathematical Society's 2025 Joint Mathematics Meetings, where the Turing Award winner outlined fundamental mathematical barriers facing current AI architectures, End of Miles reports.

The Mathematical Death Sentence

LeCun doesn't mince words about the inherent limitations of systems like ChatGPT and Claude, which rely on auto-regressive prediction — predicting one token at a time based on previous tokens. The Meta scientist frames the problem in precise mathematical terms.
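For readers unfamiliar with the mechanics, here is a minimal Python sketch of that token-by-token loop. The tiny vocabulary and the next_token_distribution function are made-up stand-ins for illustration only, not any real model's API; the point is simply that each new token is sampled conditioned on everything generated so far.

```python
import random

# Toy illustration of auto-regressive decoding. The "model" here is a
# hypothetical stand-in distribution, not an actual LLM.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_distribution(context):
    """Return made-up probabilities for each vocabulary token given the context."""
    # A real model would run a neural network over `context`; this toy version
    # just down-weights tokens that already appeared, purely for demonstration.
    weights = [0.2 if tok in context else 1.0 for tok in VOCAB]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, max_new_tokens=5):
    """Auto-regressive loop: each token is predicted from all previous tokens."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        probs = next_token_distribution(tokens)
        token = random.choices(VOCAB, weights=probs, k=1)[0]
        tokens.append(token)  # the new token joins the context for the next step
    return tokens

print(generate(["the"]))
```

Because every step feeds on the model's own previous outputs, a single wrong token can pull all subsequent predictions off course, which is the behavior LeCun's argument formalizes.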

"Auto-regressive prediction is kind of a divergent process. If you assume there is some sort of probability of error every time you produce a symbol, and you assume those errors are independent, then the probability that a sequence of n symbols would be correct is (1-E)^n. Even if E is really small, this has got to diverge exponentially." Yann LeCun

This mathematical reality, LeCun argues, creates an insurmountable ceiling for the technology powering today's most advanced AI systems. Because (1-E)^n shrinks toward zero exponentially as the output length n grows, the error compounds with each prediction step, making longer outputs increasingly unreliable, a fundamental flaw that no amount of data or computing power can overcome.
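LeCun's (1-E)^n figure is easy to check numerically. The short sketch below adopts his stated assumptions, a fixed and independent per-token error probability E, and prints the probability that a sequence of n tokens comes out entirely correct.

```python
# Numeric check of the (1 - E)^n argument under LeCun's assumptions:
# a fixed per-token error probability E, independent across tokens.
def prob_fully_correct(error_rate, length):
    """Probability that every one of `length` tokens is correct."""
    return (1 - error_rate) ** length

for error_rate in (0.001, 0.01):
    for length in (10, 100, 1000):
        p = prob_fully_correct(error_rate, length)
        print(f"E = {error_rate:.3f}, n = {length:4d}: P(all correct) = {p:.4f}")
```

Even at a one-in-a-thousand per-token error rate, the chance of a 1,000-token output being entirely error-free falls to roughly 37 percent, and at a 1 percent rate it is effectively zero.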

The Current AI Landscape

The timing of LeCun's proclamation is particularly notable as organizations worldwide are investing billions in auto-regressive LLM technology. His assessment contradicts the prevailing industry narrative that scaling current approaches will inevitably lead to increasingly capable AI systems.

"That's why you've heard about LLM hallucination and things like that. Sometimes they produce nonsense, and it's essentially because of this auto-regressive prediction." LeCun

The Meta AI chief points to hallucinations — a term describing AI systems generating false information — not as a temporary bug to be fixed, but as a symptom of the fundamental mathematical limitation he describes. This perspective suggests current efforts to reduce hallucinations may hit diminishing returns.

Beyond Auto-regressive Models

LeCun's critique isn't merely theoretical — he's actively pursuing alternative architectures at Meta that could overcome these limitations. His remarks suggest a significant research pivot may be necessary across the AI field.

"The question is, what should we replace this by? I think we're missing something really big in terms of a new concept of how to build AI systems." LeCun

For businesses and researchers heavily invested in current LLM approaches, LeCun's assessment presents a sobering counterpoint to the optimistic industry messaging. If the Meta scientist's mathematical analysis proves correct, the AI community may soon face a fundamental reckoning with the limitations of its most celebrated technology.
