Anthropic doubles down on defense with Claude Gov model

New Tab News Team
September 18, 2025
Industry News

Anthropic launches "Claude Gov" AI models tailored for U.S. national security, enhancing strategic planning and intelligence analysis.

Source: Outlever.com

Anthropic is deepening its involvement in the U.S. national security sector, launching specialized "Claude Gov" AI models for government operations.

  • Tailored for the mission: The new "Claude Gov" AI suite assists with strategic planning and intelligence analysis, built on direct feedback from government users. Anthropic states the AI is designed for improved handling of classified material—showing a tendency to “refuse less” with such data—and a deeper comprehension of intelligence and defense-related documents.

  • Boots on the ground (already): These specialized models are reportedly already deployed by U.S. national security agencies at the highest levels, operating within classified environments. Anthropic assures these tools underwent the same rigorous safety testing as its other Claude models.

  • Governance gets a hawk: Reinforcing its defense focus, Anthropic appointed Richard Fontaine, CEO of the Center for a New American Security, to its Long-Term Benefit Trust. This body, which Anthropic says champions safety above profit, can appoint board directors; Fontaine, a former foreign policy adviser to the late Sen. John McCain, holds no financial stake in the company. CEO Dario Amodei noted that Fontaine's expertise arrives as "advanced AI capabilities increasingly intersect with national security considerations."

Anthropic's moves mirror a broader trend, with OpenAI launching ChatGPT Gov in January and other tech giants also pursuing defense contracts. Anthropic had previously partnered with Palantir and AWS in late 2024 to offer its AI for defense applications, showing a sustained push into this lucrative, complex arena.

  • Elsewhere in AI: Anthropic is making a substantial commitment to national security work, betting that specialized models and expert oversight can reconcile the demands of defense contracting with its safety-first positioning. Meanwhile, the healthcare sector is also witnessing a surge in AI adoption, with LLMs transforming diagnostics and data mining, even as regulatory oversight of these tools becomes a top legal concern. In parallel, the U.S. government, through HHS, has released a strategic plan for trustworthy AI in health, while the finance industry develops specialized LLMs with a strong emphasis on compliance and risk management.

