OpenAI Restricts Access to New Cybersecurity Model


Artificial intelligence firm OpenAI announced on Tuesday that it will roll out its latest cybersecurity model to a restricted group of partners, following a similar move by rival Anthropic, which also limited access to a new system that identified thousands of vulnerabilities.

The cautious releases by two leading players in the sector highlight growing concerns over a potential AI-driven arms race between cybersecurity defenders and malicious actors who could exploit such tools.

“Our goal is to make these tools as widely available as possible while preventing misuse,” OpenAI stated in a blog post.


Anthropic recently provided its Claude Mythos model to just 40 major technology organisations as part of an initiative called Project Glasswing.

Top-tier users within OpenAI’s Trusted Access for Cyber (TAC) programme will have access to GPT-5.4-Cyber.

The initiative includes “thousands of verified individual defenders and hundreds of teams responsible for defending critical software,” the company said, without disclosing specific partners.


Although not specifically designed for cybersecurity, Anthropic’s Mythos model impressed experts by identifying vulnerabilities in widely used software, some of which had remained undetected for years or even decades.

Reports indicated that leading American banking executives recently met with US Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell to discuss the potential risks posed to the financial sector.

The introduction of Mythos follows months of heightened interest in Silicon Valley regarding generative AI’s expanding ability to create and assess computer code.

These same capabilities allow such systems to detect bugs and security weaknesses that could be exploited, even as developers incorporate safeguards to prevent misuse in publicly available models.

OpenAI stated that GPT-5.4-Cyber has been “trained to be cyber-permissive”, enabling defenders to test their systems for vulnerabilities with fewer restrictions.

Anthropic emphasised that its strict access controls aim to provide defenders with an advantage in addressing vulnerabilities before attackers can exploit them.

“We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves,” OpenAI said Tuesday.

“Instead, we aim to enable as many legitimate defenders as possible” using “systems that can validate trustworthy users and use cases in more automated and more objective ways,” it added.


Author

  • Toyibat Ajose

    Toyibat is a highly motivated Mass Communication major and results-oriented professional with a robust foundation in media, education, and communication. Leveraging years of hands-on experience in journalism, she has honed her ability to craft compelling narratives, conduct thorough research, and deliver accurate and engaging content that resonates with diverse audiences.
