Researcher Introduces "Jagged AGI" Framework That Redefines How We View Advanced AI

[Image: A fractal AI capability landscape with prismatic peaks and unexpected valleys, visualizing the "Jagged AGI" concept]

Meet "Jagged AGI": Where AI Is Both Superhuman and Surprisingly Fallible
"Jagged AGI" Concept Explains Why Advanced AI Models Excel and Fail Simultaneously

Advanced AI systems are now simultaneously superhuman and deeply flawed in ways that challenge our traditional understanding of machine intelligence, argues researcher Ethan Mollick in a new analysis of cutting-edge models such as OpenAI's o3 and Google's Gemini 2.5.

End of Miles reports that Mollick's concept of "Jagged AGI" offers a nuanced framework for understanding why today's most sophisticated AI systems can execute complex multi-stage projects with minimal guidance while still failing at relatively simple reasoning problems.

The uneven terrain of AI capabilities

Mollick, who co-authored research on what he calls the "Jagged Frontier" of AI capabilities, points to striking examples of this phenomenon in practice. When given a simple variation of a classic brainteaser, even OpenAI's advanced o3 model stubbornly provides an incorrect answer that matches the original riddle rather than adapting to the new version.

"An AI may succeed at a task that would challenge a human expert but fail at something incredibly mundane," Mollick notes in his analysis

Yet the same system can perform tasks that would have seemed impossible just a year ago. The researcher describes how o3 can transform a single vague prompt about a cheese shop into a complete business plan with marketing materials, financial projections, and a functional website in under two minutes—demonstrating remarkable generalization and planning capabilities.

Beyond traditional AGI definitions

The concept of "Jagged AGI" sidesteps long-standing definitional problems surrounding Artificial General Intelligence. Instead of debating whether systems have reached human-level performance across all domains, Mollick suggests focusing on their practical impact.

"Superhuman in enough areas to result in real changes to how we work and live, but also unreliable enough that human expertise is often needed to figure out where AI works and where it doesn't." Mollick writes

This framing acknowledges both the extraordinary capabilities and persistent limitations of current systems, offering a more accurate picture than binary "AGI or not" distinctions. The AI expert notes that even influential economist Tyler Cowen has recently declared that OpenAI's o3 model constitutes AGI, highlighting how these debates continue among leading thinkers.

Changing human roles in an AI-enabled world

Perhaps most significantly, the "Jagged AGI" framework suggests an evolving role for human expertise as these systems become more capable. Rather than envisioning wholesale replacement, Mollick emphasizes that human judgment becomes essential for determining where AI can be reliably deployed.

The unpredictability of AI limitations creates an environment where testing and verification remain crucial. While these models can perform tasks that would challenge human experts, their surprising blind spots necessitate careful oversight.

"Those who learn to navigate this jagged landscape now will be best positioned for what comes next… whatever that is." Mollick concludes

As models like o3 and Gemini 2.5 demonstrate increasingly agentic properties—decomposing complex goals, using tools, and executing multi-step plans independently—this jagged capability landscape may ultimately determine how quickly and pervasively AI transforms various sectors and practices.
