Architectural constraints in today’s most popular artificial intelligence (AI) tools may limit how much more intelligent they can get, new research suggests.
A study published Feb. 5 on the preprint arXiv server argues that modern large language models (LLMs) are inherently prone to breakdowns in their problem-solving logic, known as “reasoning failures.”
Based on LLMs’ performance on evaluations such as Humanity’s Last Exam, some scientists believe the underlying neural network architecture could one day yield a model capable of human-level cognition. But while the transformer architecture makes LLMs extremely capable at tasks like language generation, the researchers argue that it also inhibits the kind of reliable logical processes needed to achieve true human-level reasoning.
“LLMs have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks,” the researchers said in the study. “Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios … This failure is attributed to an inability of holistic planning and in-depth thinking.”
Limitations with LLMs
LLMs are trained on huge amounts of text data and generate responses to user prompts by predicting, word by word, a plausible answer. They do this by stringing together units of text, called “tokens,” based on statistical patterns learned from their training data.
Transformers also use a mechanism called “self-attention” to keep track of relationships between words and concepts over long strings of text. Self-attention, combined with their massive training databases, is what makes modern chatbots so good at generating convincing answers to user prompts.
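The core of self-attention can be sketched in a few lines. The version below is a minimal scaled dot-product attention with the learned query/key/value projection matrices omitted for brevity (a real transformer applies them, along with multiple heads and many layers); the token embeddings are made-up numbers for illustration.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def self_attention(tokens):
    """Each token's output is a weighted average of all token vectors,
    weighted by how similar (dot product) the tokens are."""
    d = len(tokens[0])
    out = []
    for q in tokens:  # every token attends to every token
        scores = [dot(q, k) / math.sqrt(d) for k in tokens]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, tokens))
                    for i in range(d)])
    return out

# Three toy 2-D token embeddings; the first and third are similar,
# so they attend strongly to each other.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]]
result = self_attention(tokens)
```

Because every token scores its similarity against every other token, related words can influence each other no matter how far apart they sit in the text — which is what lets transformers track relationships over long strings.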
However, LLMs don’t do any actual “thinking” in the conventional sense. Instead, their responses are produced by statistical next-token prediction. For long tasks, particularly those that require genuine problem-solving across multiple steps, transformers can lose track of key information and default to the patterns learned from their training data. This results in reasoning failures.
“This fundamental weakness extends beyond basic tasks, to compositions of math problems, multi-fact claim verification, and other inherently compositional tasks,” the researchers said in the study.
Reasoning failures are also why LLMs often circle back to the same response to a user query even after being told it’s incorrect, or produce different answers to the same question when it’s phrased slightly differently, even when prompted to explain their reasoning step by step.
Federico Nanni, a senior research data scientist at the U.K.’s Alan Turing Institute, argues that what LLMs typically present as reasoning is mostly window dressing.
“People figured out that if you tell an LLM, instead of answering directly, to ‘think step by step’ and write out a reasoning process first, it often gets the right answer,” Nanni told Live Science. “But that’s a trick. It’s not real reasoning in the human sense — it’s still just next‑token prediction dressed up as a chain of thought,” he said. “When we say these models ‘reason,’ what we actually mean is that they write out a reasoning process — something that sounds like a plausible chain of reasoning.”
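The “trick” Nanni describes is easy to make concrete: the only difference between a direct prompt and a chain-of-thought prompt is an instruction nudging the model to write out intermediate steps before answering. The sketch below just builds the two prompt strings; the question text and helper name are illustrative, not taken from the study.

```python
def build_prompts(question):
    """Return a direct prompt and a chain-of-thought variant of it.
    Either string would be sent to an LLM's completion API; only the
    appended instruction differs."""
    direct = f"Q: {question}\nA:"
    chain_of_thought = (
        f"Q: {question}\n"
        "A: Let's think step by step."  # the CoT nudge Nanni describes
    )
    return direct, chain_of_thought

direct, cot = build_prompts(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)
```

The model answering the second prompt will emit a plausible-sounding chain of steps before its answer — but, as Nanni notes, that text is itself produced by the same next-token prediction as the answer.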
Gaps in existing AI benchmarks
Current ways to assess LLM performance fall short in three key areas, the researchers found. First, results can be affected by rewording a prompt. Second, benchmarks degrade and become contaminated the more they’re used. And finally, they only assess the outcome, rather than the reasoning process a model used to reach its conclusion.
This means current benchmarks may significantly overstate how capable LLMs are and understate how often they fail in real-world use.
“Our position is not that benchmarks are flawed, but that they need to evolve,” study co-author Peiyang Song, a computer science and robotics student at Caltech, told Live Science via email. Meanwhile, benchmarks tend to leak into LLM training data, Nanni said, meaning subsequent LLMs effectively learn to game them.
“On top of that, now that models are deployed in production, usage itself becomes a kind of benchmark,” Nanni said. “You put the system in front of users and see what goes wrong — that’s the new test. So yes, we need better benchmarks, and we need to rely less on AI to check AI. But that’s very hard in practice, because these tools are now woven into how we work, and it’s extremely convenient to just use them.”
A new architecture for AGI?
Unlike other recent research, the new study doesn’t argue that neural-network approaches to AI are a dead end in the quest to achieve artificial general intelligence (AGI). Rather, the researchers liken it to the early days of computing, noting that understanding why LLMs fail is key to improving them.
However, they do argue that simply training models on more data or scaling them up are unlikely to resolve the issue on their own. This means developing AGI may require a fundamentally different approach to how models are built.
“Neural networks, and LLMs in particular, are clearly part of the AGI picture. Their progress has been extraordinary,” Song said. “However, our survey suggests that scaling alone is unlikely to resolve all reasoning failures … [meaning] reaching human-level reasoning may require architectural innovations, stronger world models, improved robustness training, and deeper integration with structured reasoning and embodied interaction.”
Nanni agreed. “From a philosophy-of-mind point of view, I’d say we’ve basically found the limits of transformers. They’re not how you build a digital mind,” he said. “They model text extremely well, to the point that it’s almost impossible to tell if a passage was written by a human or a machine. But that’s what they are: language models … There’s only so far you can push this architecture.”