Claude Opus 4.6

Status: published

Anthropic's most capable model

Frontier reasoning model with 1M context window. Leads on SWE-bench Verified and agentic coding tasks. Extended thinking for complex multi-step problems.

Provider: anthropic · Type: llm · Access: api · Params: undisclosed · Context: 1M · License: proprietary

Benchmarks (4)

Benchmark            Score   Reported by   Source
SWE-bench Verified   80.8%   vendor        source ↗
MMLU                 90.8%   vendor        source ↗
Aider Polyglot       89.4%   vendor        source ↗
HumanEval            92.0%   vendor        source ↗

Why It Matters

First model to break 80% on SWE-bench Verified in standard runs. The 1M context window enables full-codebase reasoning without chunking. Extended thinking mode produces step-by-step reasoning traces.
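Extended thinking is enabled per-request through the API. The sketch below assembles such a request payload in the shape of Anthropic's Messages API; the model id "claude-opus-4-6" and the token budgets are illustrative assumptions, not values confirmed by this card.

```python
# Sketch: building a Messages API payload with extended thinking enabled.
# Assumptions (not from this card): the model id string and token budgets.

def build_request(prompt: str, thinking_budget: int = 8_000) -> dict:
    """Assemble a request payload with extended thinking turned on."""
    return {
        "model": "claude-opus-4-6",            # assumed model id
        "max_tokens": 16_000,                  # must exceed the thinking budget
        "thinking": {
            "type": "enabled",
            "budget_tokens": thinking_budget,  # tokens reserved for reasoning
        },
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Find the bug in this repository and propose a fix.")
```

With the official SDK this payload would be passed as `client.messages.create(**payload)`; the response then carries the reasoning trace in separate `thinking` content blocks alongside the final `text` blocks.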

Known Limitations

Proprietary, API-only access. High output token cost. Extended thinking increases latency significantly. No open weights.

Released: 2026-02-24
Training cutoff: 2025-04
Created: 2026-03-22
Last reconciled: never