Anthropic Claude 4 Features 2026 — Opus 4.7 and the Full Series Explained

James Whitaker

April 23, 2026


Understanding Anthropic Claude 4 features in 2026 means understanding a model family that Anthropic has been iterating at roughly two-month intervals since its initial Claude 4 launch in May 2025. The Anthropic Claude 4 series currently includes Haiku 4.5, Sonnet 4.6, and the flagship Opus — which has gone through versions 4.5, 4.6, and the current 4.7 released April 16, 2026. The Anthropic Claude 4 features listed in this guide reflect the current publicly available capabilities as of April 2026, with particular focus on what changed in Opus 4.7 — the release that most significantly extends Claude’s lead on coding and vision benchmarks.

The Claude 4 Model Family at a Glance

| Model | Released | Best For | API Pricing (Input / Output, per million tokens) |
| --- | --- | --- | --- |
| Claude Haiku 4.5 | 2025 | Speed and volume: classification, extraction, high-throughput | $0.80 / $4.00 |
| Claude Sonnet 4.6 | Feb 2026 | Balanced: daily professional work, Claude Code default | $3.00 / $15.00 |
| Claude Opus 4.6 | Feb 2026 | Complex reasoning, long context (now superseded) | $5.00 / $25.00 |
| Claude Opus 4.7 | Apr 16, 2026 | Flagship: coding, vision, agentic tasks, enterprise knowledge work | $5.00 / $25.00 (same as 4.6) |

Anthropic Claude 4 model family, April 2026. Opus 4.7 delivers substantially better performance at the same API pricing as Opus 4.6.

New Features in Claude Opus 4.7

  • High-resolution vision (3.75 megapixels): Maximum image resolution increased from 1,568px (1.15MP) to 2,576px (3.75MP), roughly a 3× improvement. Screenshots, dense diagrams, design mockups, and document scans now come through at native fidelity, and coordinates map 1:1 to actual pixels, eliminating scale-factor arithmetic in computer use applications.
  • xhigh effort level: A new reasoning effort setting between “high” and “max”. Claude Code defaults to xhigh for all subscribers. Anthropic recommends starting at high or xhigh for coding and agentic workflows. The xhigh setting at 100K tokens scores 71% on CursorBench — ahead of Opus 4.6’s max setting at 200K tokens.
  • Task budgets (beta): A new API feature that gives Claude a rough token target across a full agentic loop. Claude sees a running countdown and uses it to prioritise work, finishing tasks gracefully as the budget is consumed rather than cutting off mid-task. Particularly useful for production agentic workflows with cost constraints.
  • New tokenizer: Improved text processing contributes to better performance, with the tradeoff that the same input may tokenize to between 1.0× and 1.35× the token count seen on Opus 4.6. Developers migrating from 4.6 should raise max_tokens parameters accordingly and monitor cost changes.
  • Improved instruction following: Opus 4.7 is significantly more literal than 4.6. Prompts specifying exact output format are followed precisely. Prompts tuned for 4.6’s looser interpretation may need updating — this is the primary migration consideration Anthropic flags for developers.
  • Automated cybersecurity safeguards: First Claude model with automated detection and blocking for prohibited cybersecurity uses. Legitimate security researchers can apply to Anthropic’s Cyber Verification Program for access to cybersecurity-relevant capabilities.
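The high-resolution vision bullet above can be sketched as a simple pre-flight check. The 3.75 MP cap is the figure the article cites for Opus 4.7; the helper name and the downscale-by-long-edge policy are illustrative assumptions, not a documented API:

```python
def fits_vision_cap(width: int, height: int, cap_mp: float = 3.75):
    """Check an image against a megapixel cap (3.75 MP per the article).

    Returns (fits, scale): fits is True when the image is within the cap,
    and scale is the factor to apply to both edges so the area fits.
    """
    megapixels = width * height / 1_000_000
    if megapixels <= cap_mp:
        return True, 1.0
    # Area scales with the square of the edge length, so the edge
    # scale factor is the square root of the area ratio.
    return False, (cap_mp / megapixels) ** 0.5

# A 1080p screenshot (~2.07 MP) fits without resizing:
ok, scale = fits_vision_cap(1920, 1080)       # → (True, 1.0)

# A 4K capture (~8.29 MP) still exceeds the 3.75 MP cap:
ok, scale = fits_vision_cap(3840, 2160)       # → (False, ~0.672)
```

Because coordinates now map 1:1 to pixels, an image that fits the cap needs no scale-factor bookkeeping; only oversized inputs need the returned scale applied before upload.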
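The effort level, task budget, and tokenizer-migration bullets above can be combined into one migration sketch. This builds a request payload as a plain dictionary rather than sending it; the field names "effort" and "task_budget_tokens" and the model identifier are assumptions based on the article's descriptions, not confirmed API parameters:

```python
import math

OLD_MAX_TOKENS = 8_192       # a typical Opus 4.6 max_tokens setting (example value)
TOKENIZER_GROWTH = 1.35      # worst-case token inflation the article cites for 4.7

request = {
    "model": "claude-opus-4-7",        # assumed model identifier
    "effort": "xhigh",                 # new level between "high" and "max"
    "task_budget_tokens": 100_000,     # beta: rough target for the whole agentic loop
    # Scale max_tokens for up-to-1.35x token inflation when migrating from 4.6:
    "max_tokens": math.ceil(OLD_MAX_TOKENS * TOKENIZER_GROWTH),
    "messages": [
        {"role": "user", "content": "Refactor the payment module."},
    ],
}
```

The key migration step is the max_tokens scaling: a limit tuned for the 4.6 tokenizer may truncate output on 4.7 if the same text now occupies up to 35% more tokens.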

Anthropic Claude 4 Benchmark Performance

  • SWE-bench Pro (coding): Opus 4.7 — 64.3% | Opus 4.6 — 53.4% | GPT-5.4 — 57.7%. A 10.9-point single-release jump — described as “one of the largest in frontier model history” by independent reviewers.
  • CursorBench (agentic coding): Opus 4.7 — 70% | Opus 4.6 — 58% | +12 percentage point improvement. Lifts code resolution by 13% overall on complex multi-file tasks.
  • GDPval-AA (knowledge work — finance/legal): Tied for top score across six modules at 0.715. Led on General Finance at 0.813. Best overall enterprise knowledge work performance of any generally available model.
  • Visual navigation (vision, no tools): Opus 4.7 — 79.5% | Opus 4.6 — 57.7%. The high-resolution upgrade enables accurate processing of tasks that required zooming in or failed entirely at lower resolution.

Frequently Asked Questions

What is Claude Opus 4.7 and when was it released?

Claude Opus 4.7 is Anthropic’s most capable publicly available model, released April 16, 2026. It is a direct upgrade to Opus 4.6 (February 2026), with major improvements in coding (64.3% SWE-bench Pro, up 10.9 points), vision (3.75MP resolution, up from 1.15MP), and instruction precision. It is available at the same API price as Opus 4.6 — $5/$25 per million tokens — on the Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.

How does Claude Opus 4.7 compare to GPT-5.4?

Opus 4.7 leads GPT-5.4 on SWE-bench Pro (64.3% vs 57.7%), CursorBench (70% vs ~58%), enterprise knowledge work (GDPval-AA), and vision-intensive tasks. GPT-5.4 leads on OSWorld computer use benchmarks (75% surpassing the human expert baseline — Opus 4.7 does not directly compete on this benchmark) and agentic web search. The benchmark comparison is 7-4 in Opus 4.7’s favour on directly comparable metrics according to VentureBeat’s analysis of the April 2026 launch.

Is there a Claude 5 coming?

No confirmed Claude 5 release date exists as of April 2026. Anthropic is following a rapid iterative release cadence within the Claude 4 family — Opus 4.5, 4.6, and 4.7 shipped in roughly two-month intervals. The unreleased “Mythos Preview” model (internally developed, restricted to a small group of enterprise partners for safety testing) represents the most capable model Anthropic has trained — substantially ahead of Opus 4.7 on all benchmarks — but it is not publicly available. Whether Mythos becomes “Claude 5” or a separate product line has not been announced.
