GPT-5.4-Cyber: OpenAI's Answer to the Question Nobody Wanted Asked
Seven days after Anthropic announced Claude Mythos Preview and Project Glasswing, OpenAI announced GPT-5.4-Cyber.
The timing was deliberate. The message was deliberate. And the strategy behind it is a direct challenge to how Anthropic has chosen to handle one of the most consequential capability questions in the history of this industry.
Both companies now have AI models capable of finding vulnerabilities in critical software at a scale and speed no human team can match. They have chosen completely different answers to the question of who should have access.
Understanding that split is not a matter of corporate competitive strategy. It is a preview of how the industry will handle every future capability that is powerful enough to be both an essential defence and a catastrophic weapon.
What GPT-5.4-Cyber Actually Is
GPT-5.4-Cyber is a fine-tuned variant of GPT-5.4, explicitly optimised for defensive cybersecurity work. Where Mythos’s cybersecurity capabilities emerged accidentally from general model scaling — the model was never trained for cyber — GPT-5.4-Cyber was deliberately engineered for the domain.
The key technical addition is binary reverse engineering: the model can analyse compiled software without having access to the source code. This matters because the overwhelming majority of production software runs from compiled binaries. An AI that can only analyse source code is useful for open-source security reviews. One that can analyse binaries can work on essentially everything.
OpenAI’s description of the model introduces a term that carries significant weight: “cyber-permissive.” GPT-5.4-Cyber has deliberately lowered refusal boundaries for legitimate security research. The model has been tuned to be more helpful for the class of requests — vulnerability analysis, exploit research, penetration testing — where standard models are typically cautious.
The access structure is tiered. General users can access a basic version through the TAC (Trusted Access for Cyber) program via identity verification at chatgpt.com/cyber. Enterprise teams can get broader access through formal vetting. The full GPT-5.4-Cyber, with binary reverse engineering capabilities, is rolling out to vetted security vendors and researchers. The programme is expanding from hundreds of organisations to thousands — a significantly larger population than Glasswing’s 52.
The Treasury Briefing That Changed Everything
The seven-day gap between Anthropic’s announcement and OpenAI’s response is strikingly short, but not inexplicable.
On April 10 — three days after Glasswing’s announcement — the US Treasury Department convened an emergency briefing with Wall Street leaders to discuss Mythos’s potential impact on banking cybersecurity. Treasury Secretary Scott Bessent was personally involved. Federal Reserve Chair Jerome Powell was present. Bank CEOs from Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley were in the room.
When the heads of Treasury and the Fed personally convene major bank CEOs to discuss a single AI model, the signal is unambiguous: this is systemic risk.
OpenAI read that signal. An emergency government mobilisation around a competitor’s model, combined with enterprise customers who had been waiting for AI-powered security tooling and were now watching one company capture the entire narrative, created the conditions for an accelerated response. Seven days is fast for an AI product launch. The Treasury briefing created the urgency.
Two Philosophies, One Week Apart
The strategic divergence between Anthropic and OpenAI on this question is as stark as on any issue that has divided the two companies.
Anthropic’s position: these capabilities are so dangerous that access must be limited to a small, tightly vetted consortium. The gating is not just a safety precaution — it is a signal to the industry and to governments that some capabilities require extra-democratic allocation. The decision about who gets access should be made by people who understand the risks, not by market dynamics.
OpenAI’s position: restricting access to 52 organisations creates a different problem. Verified, legitimate defenders — small security firms, independent researchers, university teams, government agencies that did not make Glasswing’s list — are locked out while attackers face no equivalent restriction. Security is better served by democratising defensive access than by concentrating it in a small consortium of the world’s largest technology companies and their partners.
This is not a simple argument to adjudicate. Glasswing’s logic is coherent: the same capability that defends also attacks, and keeping it away from unvetted hands reduces offensive proliferation risk. OpenAI’s logic is also coherent: the sophistication asymmetry between attackers and defenders means that concentrated access among large organisations is not the same as making defenders stronger across the full landscape.
The UK’s AI Security Institute published its formal evaluation of Mythos Preview during this same period, noting it was the first AI model to complete a 32-step corporate hack simulation. Official government validation of the capability added weight to both arguments simultaneously — confirming the power of these tools while also raising the question of why thousands of legitimate defenders should lack access to equivalent defensive capability.
The Government Contradiction Nobody Is Resolving
The political context around both models is not straightforward.
The Trump administration labelled Anthropic a supply-chain risk and restricted Pentagon access after Anthropic refused to allow Mythos to be used for lethal autonomous targeting or surveillance of US citizens. This mirrors a broader tension between government demand for these capabilities and political constraints on who can supply them. Despite the official Pentagon restriction, federal agencies tested Mythos anyway. Staff on at least three Congressional committees requested briefings from Anthropic. The Commerce Department’s Center for AI Standards and Innovation ran its own evaluation.
When national security concerns override a political ban, you know the underlying stakes are being treated as real.
OpenAI does not carry the same political friction — it has been more accommodating of government partnerships, including defence-related work. GPT-5.4-Cyber arrived at a moment when parts of the US government wanted Mythos-class capability but were politically constrained from accessing it through Anthropic. The fact that OpenAI moved into that gap within a week of Glasswing’s announcement is not coincidence.
What It Means That Both Exist
The announcement of two cyber-capable frontier models within a week of each other is, regardless of the competitive dynamics between the companies, a meaningful moment.
It confirms that Anthropic’s threat assessment in the Glasswing announcement was accurate: these capabilities were always going to emerge from general model scaling, and they were always going to reach multiple labs in a short window. The Glasswing head start was always a temporary advantage rather than a durable one.
It establishes a new category of AI product. Cybersecurity AI is no longer a research curiosity or a beta capability. Two of the three most prominent frontier labs have gated commercial offerings. Cloud providers — AWS and Google Cloud both announced integrations in the same week as these launches — are treating this as a major product category. Enterprise demand, backed by government urgency, is real and immediate.
And it sets up a competition on access philosophy that will play out in government procurement, enterprise contracts, and regulatory frameworks over the next several years. Glasswing’s model assumes that tighter control produces better outcomes. TAC’s model assumes that verified broad access produces better outcomes. Both models will accumulate evidence over the coming months as vulnerabilities are found, patches are shipped, and incidents either happen or do not.
The stakes of that experiment are not abstract. The vulnerabilities are real. The infrastructure is real. The adversaries acquiring equivalent capabilities via other means are, as we will cover in a separate piece, also real.
The Question the Week Did Not Answer
The seven-day gap between Glasswing and GPT-5.4-Cyber resolved the competitive question — both companies have a model, both have government attention, both have enterprise access programmes — but it did not resolve the deeper question.
Neither gating strategy addresses what happens when adversaries acquire equivalent capabilities through means that do not involve either company’s consent. The distillation campaigns already documented against both companies’ models are not paused because Glasswing and TAC exist. The Chinese labs that extracted 16 million conversations from Claude before Mythos was even announced are not deterred by access tier requirements they were never subject to.
The competition between Anthropic and OpenAI on access philosophy is happening inside a larger race whose outcome neither company controls.
That race is the more important story. It is the subject of this publication’s next piece.
Sources: OpenAI (April 14, 2026 announcement), Reuters, CyberScoop, The Hacker News, WIRED, Bloomberg, POLITICO, UK AISI evaluation, Mara Jade intelligence analysis.