Claude Mythos: When AI Learns to Break the Internet, Who Gets Left Behind?

By Lando Calrissian | April 16, 2026 | Research by Mara Jade

The technical story has already been told. Claude Mythos Preview found thousands of zero-day vulnerabilities across every major operating system and browser. It did so autonomously, overnight, and then built working exploits. Anthropic is not releasing it publicly. Fifty-two organisations have exclusive access.

That story is eight days old. It has been processed as a cybersecurity news event: a briefing for CTOs, a talking point for CISOs, a footnote for nearly everyone else.

The question most coverage did not ask is the one that matters most: what does this mean for the rest of us?

Not for the security engineers at Amazon and Microsoft. Not for the 12 launch partners with $100M in credits. For the hospital running a 15-year-old Windows deployment it cannot afford to patch. For the bank in a developing economy using infrastructure its team barely understands. For the small business that hired someone to set up WordPress and has not thought about security since. For every government, every utility, every school system running software that Mythos has already scanned and filed vulnerabilities on — and that nobody has the capacity to patch.


The Knowledge Gap Just Became a Chasm

Until now, cybersecurity operated on an unspoken assumption: the difficulty of finding vulnerabilities was a natural barrier. Nation-state actors and sophisticated criminal groups could find zero-days. Everyone else relied on the fact that exploitation required rare expertise, expensive tooling, and significant time.

Mythos changes that assumption. When a general-purpose AI model can scan critical infrastructure overnight and produce working exploits — and when that capability emerges as a natural consequence of model scaling, not deliberate design — the barrier does not just lower. It becomes a question of access.

Anthropic has access. Amazon, Apple, Google, Microsoft, Cisco, CrowdStrike — they have access. They are patching their own systems first, as they should. But the 52 organisations in Project Glasswing do not represent the whole of the world’s critical infrastructure.

Every healthcare provider running outdated systems. Every city government on underfunded IT. Every school district in a developing economy. Every small bank in a country that does not have a CISA equivalent. They are not in the consortium. They will receive patches eventually, through normal coordinated disclosure, but "eventually" may mean months or even years after the vulnerabilities were found, during which equivalent offensive capabilities will be proliferating to adversaries.

This is not a criticism of Project Glasswing. Giving defenders a head start is the right response to an impossible situation. But a head start for the most sophisticated defenders in the world is not the same thing as a head start for the defenders who need it most.


Geopolitics in an Asymmetric Race

The reason Anthropic did not release Mythos publicly comes down to a single sentence: equivalent capabilities will exist in other models within months.

That timeline is doing enormous work.

If Anthropic is right, then China’s frontier labs — which are not operating under the same restraints — will have models with similar capabilities before the end of 2026. State-sponsored threat actors from Russia, Iran, and North Korea will eventually have access through their intelligence-adjacent research programs. The offensive capability will proliferate. The defensive partnerships will not.

The disparity maps almost exactly onto existing geopolitical fault lines. Western technology companies and their governments have access to the most powerful defensive AI. Their adversaries are developing equivalent offensive capabilities. The countries caught in between — with neither the institutional relationships to access Glasswing-style partnerships nor the domestic AI research capacity to build their own defences — are left with the same infrastructure vulnerabilities but fewer tools to address them.

This is not a hypothetical future concern. Mythos has already found the vulnerabilities. Over 99% are not yet patched. They exist right now in systems on every continent. The question is not whether they will be discovered by adversaries — it is when, and what state of disclosure readiness the rest of the world will be in at that point.


The Insurance and Liability Problem Nobody Is Talking About

There is a quieter economic story embedded in all of this.

Cyber insurance has become a significant industry in the last decade. Premiums are calculated against probability models of breach and exploitation. Those models assume that finding novel vulnerabilities is expensive and time-consuming, and that mass exploitation of previously unknown flaws is relatively rare.

Those assumptions are about to be wrong.

When autonomous AI can discover working zero-days across every major platform overnight, the probability distribution for catastrophic cyber events changes fundamentally. Insurance actuaries will eventually adjust. Premiums will rise, possibly dramatically. Coverage may narrow. Some classes of organisation — particularly those running legacy infrastructure without the resources to patch rapidly — may find themselves uninsurable or carrying liabilities they cannot quantify.

This matters because cyber insurance is part of how many organisations justify inadequate security investment. “We’re insured” has been a substitute for “we’ve patched.” When the insurance market reprices to reflect the actual post-Mythos threat landscape, the organisations that chose coverage over capacity will be exposed.

The economic shock will not land on Amazon or Google. It will land on the mid-sized manufacturer, the regional hospital system, the local government.


The Workforce Question

Anthropic’s demonstration produced something that most people read past: engineers with no formal security training used Mythos to find remote code execution vulnerabilities overnight.

Set aside what that means for offensive capability for a moment. Consider what it means for defensive capacity.

The global cybersecurity workforce has a well-documented shortage. There are millions of unfilled security roles. Training qualified security engineers takes years. The gap between the sophistication of attacks and the availability of defenders to respond has been widening for a decade.

Mythos, and tools like it, could change that equation. If AI can conduct sophisticated vulnerability research without requiring the human expertise that historically made this work inaccessible, then the question of who can participate in defence expands dramatically. A security team that previously had to outsource penetration testing could conduct it internally. A small organisation that could not justify a full-time security researcher might be able to deploy AI-assisted scanning.

But the same capability that expands defensive access also expands offensive access. The attacker and the defender get the same tool. The expertise barrier that historically protected defenders, because finding and weaponising novel vulnerabilities required rarer skill than patching known ones, falls away. What remains is the structural asymmetry that favours attackers: they only need to find one path in, while defenders need to close all of them.

The workforce advantage defenders had is eroding. AI does not just close the expertise gap; it closes it for both sides.


What Glasswing Cannot Do

Project Glasswing is an impressive coordinated response. It is also, by design, limited to the most capable and well-resourced players in the ecosystem.

Open-source software is a particular concern. Mythos scanned approximately 1,000 open-source repositories. Many of the vulnerabilities it found are in foundational software that runs inside products and infrastructure owned by organisations that have no relationship with Anthropic, no presence in the Glasswing consortium, and no mechanism to receive early notification of findings.

The coordinated disclosure process is the established approach for exactly this situation — notify the maintainer, give them time to patch, then disclose publicly. But the maintainers of widely used open-source packages are often volunteers, operating with limited time and no security budget. A coordinated disclosure notification from Anthropic that a critical vulnerability exists does not automatically produce the capacity to patch it. It produces an obligation and a deadline.

For the most popular packages, this is manageable — major foundations, corporate-backed projects, and well-funded maintainers can respond. For the long tail of open-source dependencies that power critical systems and are maintained by one or two people in their spare time, the disclosure window is a stress event with no guaranteed resolution.


The Accountability Question

Anthropic deserves credit for transparency. They did not quietly deploy this capability, or sell it to a single government, or use it exclusively to harden their own infrastructure. They announced it publicly, constructed a coordinated partnership, and committed real resources to the defensive mission.

But the question of accountability for what happens next does not have a clear answer.

If the head start proves insufficient — if adversaries develop equivalent capabilities before the most critical vulnerabilities are patched — who bears the cost? Not Anthropic. Not the 52 Glasswing partners, who will have hardened their own systems. The cost falls on the organisations and individuals who were not in the room.

There is no framework for this. No international treaty on AI-accelerated vulnerability research. No agreed standard for how long a company that discovers thousands of critical vulnerabilities must wait before disclosing them, or how it must notify affected parties. No mechanism for transferring resources from the organisations that benefit from Glasswing’s head start to the organisations that cannot access it.

The governance infrastructure has not kept up with the capability. It rarely does. But the gap is wider here than it has been for most previous technology transitions, because the timeline is measured in months, not years.


What Changes, and What Needs To

The security conversation that Project Glasswing demands is not a technical one. The technical response is already underway — Anthropic and 52 partners are patching as fast as they can. The conversation that is missing is about equity, access, and what the rest of the world is supposed to do.

Some of what needs to happen is practical. Disclosure processes need to move faster. Patching infrastructure for vulnerable open-source projects needs more resources. The organisations that cannot afford to participate in elite security consortiums need access to some version of these capabilities, not just the patched outputs.

Some of it is political. Governments that are not already in conversations with frontier AI labs about infrastructure security need to be. International coordination on AI-generated vulnerability disclosures needs a framework before it is urgently needed, not after.

And some of it is a question of honest accounting. Anthropic has built something that will make the world’s infrastructure meaningfully more secure for the organisations that can act on it. The organisations that cannot act on it will be left with the same vulnerabilities, in a world where the offensive capabilities to exploit them are proliferating faster than ever.

The glasswing butterfly is transparent because it has to be. Visibility is its defence. The question is what happens to everything else that cannot afford to be seen.


Sources: Anthropic Project Glasswing (glasswing.anthropic.com), Anthropic Red Team blog (red.anthropic.com), TechCrunch, CNBC, The New York Times, The Register, Mara Jade intelligence analysis.