Researchers at the Center for AI Safety and Scale AI have published “Humanity’s Last Exam” — a test designed to measure how close today’s most powerful artificial intelligence (AI) models are to meeting or exceeding human-level knowledge across several domains.
The test was launched in January 2025, but the scientists behind it have now detailed its framework and design rationale for the first time, in a study published Jan. 28 in the journal Nature. The test contains a corpus of 2,500 questions spanning more than 100 subjects, with input from more than 1,000 subject-matter experts at 500 institutions across 50 countries.
At launch, the researchers tested OpenAI’s GPT-4o and o1 models, Google’s Gemini 1.5 Pro, Anthropic’s Claude 3.5 Sonnet and DeepSeek R1. OpenAI’s o1 system notched the top spot with a score of just 8.3%.
Despite this poor performance, the researchers wrote at the time that “given the rapid pace of AI development, it is plausible that models could exceed 50% accuracy on HLE by the end of 2025.”
As of Feb. 12, 2026, the highest score achieved so far is 48.4%, set by Google’s Gemini 3 Deep Think. Human experts, meanwhile, score around 90% in their respective domains.
Testing the smartest machines in the world
Humanity’s Last Exam was intentionally designed to be extremely difficult for AI models. During early development, the researchers put out a global call for submissions from subject matter experts across numerous domains.
The researchers enforced strict submission criteria requiring questions to be precise, unambiguous, solvable and non-searchable. They didn’t want models to cheat by performing a simple web search, and they didn’t want any of the questions to already appear online, since that would increase the likelihood that a given model had the answer in its training dataset.
Each question submitted was then fed to the AI models. The team automatically rejected any questions the models could answer correctly.
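In outline, that filtering step works like an adversarial sieve: every candidate question is posed to a panel of frontier models, and any question that a model answers correctly is discarded. The sketch below illustrates the idea only; the `ask_model` function, the panel structure and the exact-match comparison are illustrative assumptions, not the researchers’ actual tooling or grading method.

```python
def ask_model(model: str, question: str) -> str:
    """Placeholder for an API call returning a model's answer (assumed)."""
    raise NotImplementedError

def filter_questions(candidates, models, ask=ask_model):
    """Keep only the questions that every model in the panel gets wrong.

    candidates: list of (question, correct_answer) pairs
    models:     list of model identifiers to test against
    """
    kept = []
    for question, correct_answer in candidates:
        # Reject the question if ANY model answers it correctly.
        if all(ask(m, question) != correct_answer for m in models):
            kept.append((question, correct_answer))
    return kept
```

Applied at scale, a sieve like this is what reduced the roughly 70,000 submissions to the ~13,000 questions that stumped the models.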
More than 70,000 questions were submitted, of which approximately 13,000 stumped the LLMs. These were then vetted by a team of subject-matter experts, approved by the research team, and presented to the scientific community for open feedback.
Ultimately, the researchers narrowed the total submissions down to 2,500 questions that generally fall within the realm of PhD-level testing.
An example of a trivia question in the exam is: “In Greek mythology, who was Jason’s maternal great-grandfather?”
Meanwhile, an example of a physics question asks for the relationship between different forces during motion in a scenario where a block is placed on a horizontal rail (and can slide frictionlessly) while also being attached to a rigid, massless rod of an unknown length.
The breadth of questions and scope of subjects covered by Humanity’s Last Exam sets it apart from similar benchmarking tools, its creators say.
Common tests, such as the Massive Multitask Language Understanding (MMLU) dataset, which was authored with participation from Center for AI Safety founder Dan Hendrycks, only test a small subset of expert-level domain knowledge, primarily focusing on coding and mathematics.
Even state-of-the-art benchmarks such as Francois Chollet’s ARC-AGI suite struggle to escape the memorization and searchability problems that the creators of Humanity’s Last Exam say their test addresses. Gemini’s Deep Think, for example, achieved 84.6% on the ARC-AGI-2 benchmark just a week after failing to reach 50% on HLE.
The ultimate prize is general intelligence
Humanity’s Last Exam likely represents the AI world’s best attempt to date at measuring the broad-spectrum capabilities of modern AI models relative to human experts. Even so, the study’s authors categorically state that a high score on HLE would in no way indicate the arrival of artificial general intelligence (AGI).
“High accuracy on HLE would demonstrate expert-level performance on closed-ended, verifiable questions and cutting-edge scientific knowledge, but it would not alone suggest autonomous research capabilities or artificial general intelligence,” the scientists said in the study.
“Doing well on HLE is a necessary, but not a sufficient, criterion to say that machines have reached true intelligence,” Manuel Schottdorf, a neuroscientist at the University of Delaware’s Department of Psychological and Brain Sciences, said in a recent statement. Schottdorf is one of the many experts whose questions were accepted into the HLE’s corpus.
“They will have to be good enough to solve these questions, but that as a fact alone can’t allow us to conclude that machines are truly intelligent.”