Anthropic Launches Claude Code Security to Empower Cyber Defenders with Frontier AI Capabilities

Anthropic has released Claude Code Security in a limited research preview, offering AI-powered codebase scanning that reasons about code like a human security researcher to find complex vulnerabilities traditional tools miss. Enterprise and Team customers, as well as open-source maintainers, can apply for early access to review findings and approve suggested patches through a dedicated dashboard.

Anthropic · Feb 20, 2026

Claude Code Security, a new capability built into Claude Code on the web, is now available in a limited research preview. It scans codebases for security vulnerabilities and suggests targeted software patches for human review, enabling teams to discover and remediate security issues that conventional methods frequently overlook.

Security teams face a persistent challenge: an overwhelming number of software vulnerabilities paired with insufficient personnel to address them. Existing analysis tools provide some help, but they are limited because they typically search for known patterns. Identifying subtle, context-dependent vulnerabilities, the kind attackers often exploit, demands skilled human researchers, who are already contending with ever-growing backlogs.

AI is starting to shift that dynamic. Anthropic has recently demonstrated that Claude can detect novel, high-severity vulnerabilities. However, the same capabilities that assist defenders in finding and fixing vulnerabilities could also be leveraged by attackers to exploit them.

Claude Code Security is designed to place this power firmly in defenders' hands and protect code against this emerging category of AI-enabled attack. Anthropic is releasing it as a limited research preview for Enterprise and Team customers, with expedited access for open-source repository maintainers, so that the community can collaborate on refining its capabilities and ensuring responsible deployment.

How Claude Code Security Works

Static analysis, a widely used form of automated security testing, is typically rule-based, matching code against known vulnerability patterns. This approach catches common issues like exposed passwords or outdated encryption but often misses more complex vulnerabilities such as business logic flaws or broken access control.

Instead of scanning for known patterns, Claude Code Security reads and reasons about code in a manner similar to a human security researcher: understanding how components interact, tracing how data flows through an application, and catching complex vulnerabilities that rule-based tools miss.
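The contrast can be illustrated with a minimal sketch. The scanner, rule, and sample code below are hypothetical and have nothing to do with Anthropic's actual implementation; they exist only to show why pattern matching flags a hardcoded credential but stays blind to an authorization flaw, which requires reasoning about what the code is supposed to do.

```python
import re

# Sample code under review (as a string; never executed). It contains two
# problems: a hardcoded credential and a missing authorization check.
SOURCE = '''
API_KEY = "sk-live-12345"  # textual pattern a rule can match

def delete_account(user, target_id):
    # Logic flaw: any authenticated user may delete any account; there is
    # no check that user.id == target_id or that user is an admin. No
    # textual pattern distinguishes this from correct code.
    db.delete("accounts", target_id)
'''

# A typical rule-based check: a credential-like name assigned a quoted string.
RULES = {
    "hardcoded-credential": re.compile(r'(?i)(api_key|password|secret)\s*=\s*["\']'),
}

def scan(source: str) -> list[str]:
    """Return the names of every rule that matches somewhere in the source."""
    return [name for name, rx in RULES.items() if rx.search(source)]

print(scan(SOURCE))  # ['hardcoded-credential'] -- the logic flaw goes unreported
```

The rule fires on the credential because that bug leaves a recognizable textual fingerprint. The access-control flaw leaves none: the code is syntactically unremarkable, and spotting it requires understanding the intended relationship between `user` and `target_id`, which is the kind of reasoning the pattern-free approach described above is aimed at.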

Every finding undergoes a multi-stage verification process before reaching an analyst. Claude re-examines each result, attempting to prove or disprove its own findings and filter out false positives. Findings are also assigned severity ratings so teams can prioritize the most critical fixes.

Validated findings appear in the Claude Code Security dashboard, where teams can review them, inspect suggested patches, and approve fixes. Because these issues often involve nuances that are difficult to assess from source code alone, Claude also provides a confidence rating for each finding. Nothing is applied without human approval: Claude Code Security identifies problems and proposes solutions, but developers always have the final say.

Leveraging Claude for Cybersecurity

Claude Code Security builds on more than a year of research into Claude's cybersecurity capabilities. Anthropic's Frontier Red Team has been systematically stress-testing these abilities: entering Claude in competitive Capture-the-Flag events, partnering with Pacific Northwest National Laboratory to experiment with using AI to defend critical infrastructure, and refining Claude's ability to find and patch real vulnerabilities in code.

Claude's cyberdefensive abilities have improved substantially as a result. Using Claude Opus 4.6, released earlier this month, Anthropic's team found over 500 vulnerabilities in production open-source codebases: bugs that had gone undetected for decades despite years of expert review. Triage and responsible disclosure with maintainers are currently underway, and Anthropic plans to expand its security work with the open-source community.

Anthropic also uses Claude to review its own code and has found it to be extremely effective at securing Anthropic's systems. Claude Code Security was built to make those same defensive capabilities more broadly available. And since it is built on Claude Code, teams can review findings and iterate on fixes within the tools they already use.

The Road Ahead

This is a pivotal time for cybersecurity. Anthropic expects that a significant share of the world's code will be scanned by AI in the near future, given how effective models have become at uncovering long-hidden bugs and security issues.

Attackers will use AI to find exploitable weaknesses faster than ever. But defenders who move quickly can find those same weaknesses, patch them, and reduce the risk of an attack. Claude Code Security represents one step toward Anthropic's goal of more secure codebases and a higher security baseline across the industry.

Getting Started

Anthropic is opening a limited research preview of Claude Code Security to Enterprise and Team customers. Participants will receive early access and collaborate directly with Anthropic's team to refine the tool's capabilities. Open-source maintainers are also encouraged to apply for free, expedited access.

Apply for access here.

To learn more, visit claude.com/solutions/claude-code-security.