Anthropic released Claude Opus 4.7 on Thursday as its most capable generally available AI model, while acknowledging it falls short of the unreleased Claude Mythos Preview, which the company deemed too risky for broad deployment.
The new model improves on Opus 4.6 in advanced coding, visual understanding, and document analysis, with users reporting they can now entrust it with complex software engineering tasks that previously required close supervision. Anthropic says Opus 4.7 handles long-running tasks with greater consistency, follows instructions more precisely, and verifies its own outputs before reporting back.
On the Humanity’s Last Exam benchmark, Opus 4.7 scored 46.9 percent, outperforming Gemini 3.1 Pro (44.4 percent), GPT-5-4 Pro (42.7 percent), and its predecessor Opus 4.6 (40 percent), while trailing Claude Mythos Preview (56.8 percent). Anthropic’s model card states that Opus 4.7 does not advance the company’s capability frontier, meaning it is not evidence of accelerated AI development beyond existing trends.
A key distinction of Opus 4.7 is its reduced cybersecurity capability compared to Mythos Preview. Anthropic confirmed it experimented during training to differentially reduce these abilities, and the model now includes safeguards that automatically detect and block requests indicating prohibited or high-risk cybersecurity uses. The company is using real-world deployment of these safeguards to inform its eventual goal of broadly releasing Mythos-class models.
To support legitimate cybersecurity work, Anthropic has launched a Cyber Verification Program inviting security professionals interested in vulnerability research, penetration testing, and red-teaming to apply for access to Opus 4.7 under controlled conditions. The model is available via Claude AI, the Claude API, and cloud partners including Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry.
Pricing remains unchanged from Opus 4.6 at $5 per million input tokens and $25 per million output tokens. Anthropic says the cost reflects the model’s increased computational demand at higher effort levels, which results in greater output token usage than its predecessor’s.
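At those rates, the dollar cost of a request follows directly from its token counts. A minimal sketch of the arithmetic, using the published per-million-token prices (the example token counts are illustrative, not from Anthropic):

```python
# Published Opus 4.7 rates: $5 per million input tokens, $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000
OUTPUT_RATE = 25.00 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single API call at Opus 4.7 rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 20,000-token prompt producing an 8,000-token response.
cost = request_cost(20_000, 8_000)
print(f"${cost:.2f}")  # → $0.30
```

Because output tokens cost five times as much as input tokens, the higher-effort reasoning modes that produce longer responses dominate the bill even when prompts are large.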
Anthropic has positioned itself as a more safety-focused alternative to rivals like OpenAI, and the controlled rollout of Opus 4.7 reflects that strategy. By limiting Mythos Preview to a select group of companies and using Opus 4.7 as a proving ground for safety mechanisms, the company aims to build empirical evidence for responsible scaling.
The launch follows Opus 4.6’s release in February and underscores Anthropic’s pattern of rapid model iteration in 2026. While the company continues to push technical performance, it is doing so within a framework that prioritizes risk assessment and incremental deployment, particularly for capabilities with dual-use potential.
How does Claude Opus 4.7 compare to other leading AI models on reasoning benchmarks?
On Humanity’s Last Exam without tools, Opus 4.7 scored 46.9 percent, beating Gemini 3.1 Pro (44.4 percent), GPT-5-4 Pro (42.7 percent), and Opus 4.6 (40 percent), while falling short of Claude Mythos Preview’s 56.8 percent.
Why is Anthropic not releasing Claude Mythos Preview to the general public?
Anthropic has deemed Claude Mythos Preview too dangerous for public release and is instead using Opus 4.7 to test cybersecurity safeguards, with the goal of informing a future broad release of Mythos-class models.
Can developers use Claude Opus 4.7 for cybersecurity work?
Yes, but only through Anthropic’s Cyber Verification Program, which invites security professionals interested in legitimate uses like vulnerability research and penetration testing to apply for controlled access.

