Stanford AI Index 2026: AI Is Sprinting and the World Is Still Looking for Its Running Shoes

Oliver Grant

May 12, 2026

Stanford University’s Institute for Human-Centered Artificial Intelligence published its 2026 AI Index this week — the most comprehensive annual measurement of AI’s technical progress, economic impact, and social effects — and the picture it paints is one of a technology advancing at a speed that has outpaced the institutions, benchmarks, regulations, and labour markets designed to manage it. “AI is sprinting,” MIT Technology Review summarised in its coverage, “and the rest of us are trying to find our shoes.”

Models Keep Getting Better

Despite persistent predictions that AI development would hit a wall as low-hanging research fruit was exhausted, the Stanford AI Index finds that the top AI models continue to improve across every major benchmark category. As of March 2026, Anthropic leads the frontier model rankings, trailed closely by xAI, Google, and OpenAI — with Chinese models from DeepSeek and Alibaba lagging only modestly. The competitive gap between the leading models has narrowed dramatically from early 2024, when OpenAI held a clear lead, to the present where the best models are separated by razor-thin margins and competition has shifted to cost, reliability, and real-world usefulness.

Adoption Faster Than Any Previous Technology

People are adopting AI faster than they adopted the personal computer or the internet — a finding that would have seemed implausible three years ago and that now has enough data behind it to be stated with confidence. AI companies are generating revenue faster than companies in any previous technology boom. The flip side of this growth is that spending on AI infrastructure — data centres, chips, energy — is running at hundreds of billions of dollars annually, and the profitability timeline for most frontier AI companies remains years away.

The Parts That Cannot Keep Up

The Stanford AI Index identifies three areas that cannot keep pace with AI’s technical progress: benchmarks, policy, and the job market. Benchmarks designed to measure AI capability are becoming obsolete faster than new ones can be validated — models now achieve near-perfect scores on tasks that were considered markers of advanced capability two years ago. AI policy and governance frameworks, meanwhile, are being developed significantly more slowly than the capabilities they are meant to govern.

Transparency has also declined. The Stanford AI Index notes that as competition has intensified, major AI companies — OpenAI, Anthropic, Google — have stopped disclosing their training code, parameter counts, and dataset sizes. The era of open research that characterised AI’s pre-ChatGPT years is over. The models are getting better. Understanding why, and what risks they carry, is becoming harder for everyone outside the companies building them.