Anthropic Faces Scrutiny Over ‘Mythos’ Model as Critics Decry Responsible AI Marketing Tactics
Anthropic is refusing to release its "powerful" Mythos AI model. We examine the debate between genuine responsibility and marketing tactics in the escalating AI race.
By: AXL Media
Published: Apr 12, 2026, 3:43 PM EDT
Source: Information for this report was sourced from The Guardian

The Strategic Non-Release of Claude Mythos
The San Francisco-based AI startup Anthropic has ignited a firestorm within the technology sector by announcing it will not release its newest frontier model, Mythos, to the general public. Citing an "overwhelming sense of responsibility," the company claims the model possesses capabilities that could present catastrophic cybersecurity risks. This announcement quickly escalated to the highest levels of government, prompting US Treasury Secretary Scott Bessent to consult with banking leaders, while UK lawmakers urged the government to engage with the firm regarding potential threats to national digital security.
Marketing Prowess and Media Dominance
Despite its reputation as the "responsible" alternative to rivals such as OpenAI, Anthropic has demonstrated a sophisticated mastery of corporate branding and narrative control. In recent months, the company’s leadership, including CEO Dario Amodei and co-founder Jack Clark, has secured high-profile features in The New Yorker and Time. These appearances have focused on existential questions of machine consciousness and economic disruption, positioning the company at the center of global AI discourse. Media lead Danielle Ghiglieri has described the effort to shape these stories as a "mad dash" to ensure the company’s preferred narrative reaches the public.
Skepticism from the Scientific Community
Not all observers are convinced that the decision to withhold Mythos is rooted purely in safety concerns. Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, has criticized the company’s use of "purposely vague language" that obscures evidence of the model's actual capacities. Khlaaf suggests that by preventing independent evaluation, Anthropic may be utilizing a "bait and switch" playbook similar to its competitors, using safety as a PR tool to garner public trust and investment without facing technical scrutiny. This skepticism is compounded by a recent incident in which the firm accidentally leaked its own internal source code.
Related Coverage
- Global Financial Regulators Alerted to Systemic Risks Posed by Anthropic’s Mythos AI Cybersecurity Capabilities
- Silicon Valley Accelerates Development of Self-Improving AI Amid Growing Public Protests
- Anthropic’s Mythos AI Forces Global Security Pivot as Automated Vulnerability Discovery Reaches Machine Speed
- Anthropic Launches Project Glasswing to Secure Critical Infrastructure After New AI Model Uncovers Thousands of Zero-Day Vulnerabilities