Anthropic Launches Claude Opus 4.7

Anthropic has released Claude Opus 4.7, a significant upgrade over Opus 4.6 that delivers major improvements in advanced software engineering, instruction following, high-resolution vision, and long-running agentic tasks, along with new cybersecurity safeguards and developer features such as the xhigh effort level and the /ultrareview command.

Anthropic · Apr 16, 2026

Anthropic's latest model, Claude Opus 4.7, is now generally available.

Opus 4.7 represents a significant improvement over Opus 4.6 in advanced software engineering, with especially notable gains on the most challenging tasks. Users have reported being able to delegate their most difficult coding work (tasks that previously required close oversight) to Opus 4.7 with confidence. The model handles complex, long-running tasks with rigor and consistency, follows instructions precisely, and finds ways to verify its own outputs before reporting back.

The model also features substantially improved vision capabilities, processing images at higher resolution. It demonstrates greater taste and creativity on professional tasks, producing higher-quality interfaces, slides, and documents. Although it is less broadly capable than Anthropic's most powerful model, Claude Mythos Preview, it outperforms Opus 4.6 across a range of benchmarks.

The previous week, Anthropic announced Project Glasswing, highlighting both the risks and benefits of AI models for cybersecurity. Anthropic stated it would keep Claude Mythos Preview's release limited and test new cyber safeguards on less capable models first. Opus 4.7 is the first such model: its cyber capabilities are not as advanced as those of Mythos Preview (indeed, during training Anthropic experimented with efforts to differentially reduce these capabilities). Opus 4.7 is being released with safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses. Lessons from real-world deployment of these safeguards will help Anthropic work towards the eventual goal of a broad release of Mythos-class models.

Security professionals who wish to use Opus 4.7 for legitimate cybersecurity purposes (such as vulnerability research, penetration testing, and red-teaming) are invited to join Anthropic's new Cyber Verification Program.

Opus 4.7 is available across all Claude products and the API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. Pricing remains unchanged from Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API.
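Given the published rates ($5 per million input tokens, $25 per million output tokens), a quick back-of-envelope cost estimate can be sketched as below. The token counts in the example are illustrative placeholders, not measured values.

```python
# Estimate per-request API cost for Claude Opus 4.7 at the announced rates:
# $5 per million input tokens, $25 per million output tokens.

INPUT_RATE = 5.00 / 1_000_000    # USD per input token
OUTPUT_RATE = 25.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a request with 12,000 input tokens and 4,000 output tokens:
print(round(estimate_cost(12_000, 4_000), 4))  # 0.16
```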

Early Tester Feedback

Claude Opus 4.7 has received strong feedback from early-access testers across a wide range of companies, including those in financial technology, developer tools, legal tech, life sciences, cybersecurity, and data analytics. Key themes from their feedback include:

  • The model catches its own logical faults during planning and accelerates execution far beyond previous Claude models.
  • It handles real-world async workflows (automations, CI/CD, and long-running tasks) exceptionally well and thinks more deeply about problems rather than simply agreeing with the user.
  • It correctly reports when data is missing instead of providing plausible-but-incorrect fallbacks, and resists dissonant-data traps.
  • On coding benchmarks, testers reported 13% or greater improvements over Opus 4.6, including solving tasks that no previous model could handle.
  • It demonstrates the strongest efficiency baseline for multi-step work, with the most consistent long-context performance.
  • Vision improvements are dramatic, with one tester reporting 98.5% on a visual-acuity benchmark versus 54.5% for Opus 4.6.
  • Multiple testers noted the model pushes through hard problems rather than giving up, carries work all the way through, and recovers gracefully from errors.
  • It demonstrates improved code quality, cutting out unnecessary wrapper functions and fallback scaffolding, and fixes its own code as it goes.

Key Highlights from Internal Testing

  • Instruction following: Opus 4.7 is substantially better at following instructions. This means prompts written for earlier models can sometimes produce unexpected results: previous models interpreted instructions loosely or skipped parts entirely, while Opus 4.7 takes them literally. Users should re-tune their prompts and harnesses accordingly.
  • Improved multimodal support: Opus 4.7 has enhanced vision for high-resolution images, accepting images up to 2,576 pixels on the long edge (~3.75 megapixels), more than three times the pixel count of prior Claude models. This enables multimodal uses requiring fine visual detail: computer-use agents reading dense screenshots, data extraction from complex diagrams, and pixel-perfect reference work.
  • Real-world work: Beyond achieving a state-of-the-art score on the Finance Agent evaluation, Anthropic's internal testing showed Opus 4.7 to be a more effective finance analyst than Opus 4.6, producing rigorous analyses and models, more professional presentations, and tighter integration across tasks. It also leads on GDPval-AA, a third-party evaluation of economically valuable knowledge work.
  • Memory: Opus 4.7 is better at using file system-based memory: it remembers important notes across long, multi-session work and uses them to start new tasks with less up-front context.
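For the multimodal support above, images larger than the stated 2,576-pixel long-edge cap would presumably need downscaling before upload. A minimal sketch of the arithmetic, assuming proportional resizing (the cap value comes from the announcement; the helper name is ours):

```python
# Scale image dimensions so the longer edge fits within Opus 4.7's
# stated cap of 2,576 pixels, preserving aspect ratio.

MAX_LONG_EDGE = 2576

def fit_long_edge(width: int, height: int, cap: int = MAX_LONG_EDGE) -> tuple[int, int]:
    """Return (width, height) scaled so the longer edge is at most `cap`."""
    long_edge = max(width, height)
    if long_edge <= cap:
        return width, height  # already within the cap; no resize needed
    scale = cap / long_edge
    return round(width * scale), round(height * scale)

print(fit_long_edge(5152, 3000))  # (2576, 1500)
```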

Safety and Alignment

Opus 4.7 shows a similar safety profile to Opus 4.6. Anthropic's evaluations show low rates of concerning behavior such as deception, sycophancy, and cooperation with misuse. On some measures, such as honesty and resistance to malicious prompt injection attacks, Opus 4.7 improves on Opus 4.6; on others (such as a tendency to give overly detailed harm-reduction advice on controlled substances), it is modestly weaker. Anthropic's alignment assessment concluded that the model is "largely well-aligned and trustworthy, though not fully ideal in its behavior." Mythos Preview remains the best-aligned model Anthropic has trained. Full safety evaluations are discussed in the Claude Opus 4.7 System Card.

Additional Launches

Alongside Claude Opus 4.7, Anthropic is releasing several updates:

  • More effort control: Opus 4.7 introduces a new xhigh ("extra high") effort level between high and max, giving users finer control over the tradeoff between reasoning and latency. In Claude Code, the default effort level has been raised to xhigh for all plans.
  • Claude Platform (API): In addition to higher-resolution image support, task budgets are launching in public beta, letting developers guide Claude's token spend to prioritize work across longer runs.
  • Claude Code: The new /ultrareview slash command produces a dedicated review session that reads through changes and flags bugs and design issues. Pro and Max Claude Code users receive three free ultrareviews to try it out. Auto mode has also been extended to Max users, allowing Claude to make decisions on the user's behalf for longer tasks with fewer interruptions.
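A hedged sketch of selecting the new xhigh effort level in a request body. The `effort` field follows the effort parameter mentioned in the migration notes, and the level names (high, xhigh, max) come from this announcement; the exact placement of the field in the request is an assumption, so check the API reference before relying on it.

```python
# Assemble a request body for claude-opus-4-7 with an effort level.
# `effort` placement is an assumption based on the announcement's
# description of the effort parameter; verify against the API docs.

EFFORT_LEVELS = ("high", "xhigh", "max")  # levels named in the announcement

def build_request(prompt: str, effort: str = "xhigh") -> dict:
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",
        "max_tokens": 4096,
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Review this diff for regressions.")
print(req["effort"])  # xhigh
```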

Migration Notes

Opus 4.7 is a direct upgrade to Opus 4.6, but two changes affect token usage. First, an updated tokenizer improves text processing but means the same input can map to roughly 1.0–1.35× as many tokens, depending on content type. Second, Opus 4.7 thinks more at higher effort levels, particularly on later turns in agentic settings, producing more output tokens. Users can control token usage via the effort parameter, task budgets, or prompting for conciseness. Anthropic's internal testing shows the net effect is favorable, with improved token usage across all effort levels on an internal coding evaluation. A migration guide provides further advice on upgrading.
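When budgeting for the tokenizer change, the stated 1.0–1.35× range can be turned into a simple planning bound. A minimal sketch with an illustrative token count (the multipliers come from the migration notes; real ratios depend on content type):

```python
# Back-of-envelope bound on how an Opus 4.6 token count may translate
# to Opus 4.7 under the updated tokenizer (roughly 1.0-1.35x as many).

def token_range(opus_4_6_tokens: int, low: float = 1.0, high: float = 1.35) -> tuple[int, int]:
    """Return the (min, max) expected Opus 4.7 token count."""
    return round(opus_4_6_tokens * low), round(opus_4_6_tokens * high)

# e.g. a prompt that was 100,000 tokens on Opus 4.6:
print(token_range(100_000))  # (100000, 135000)
```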