Artificial intelligence is looking for vulnerabilities that hackers can exploit. Get ready for Bugmageddon.

The software bug could crash operating systems used by firewalls, servers and network devices. For 27 years it went undiscovered.

Logan Graham assesses the risks of artificial intelligence with colleagues at Anthropic.

Last month, a bug caught by Mythos, the latest AI model from Anthropic, frightened the White House, bank executives and cybersecurity professionals around the world.

Welcome to the bug apocalypse. AI models like Mythos are discovering bugs in older software at an unprecedented rate.

While most of the coding issues may be minor, their sheer volume raises the risk that smaller software developers will be inundated with bug reports like the one Mythos produced. And with artificial intelligence, hackers will be able to exploit these vulnerabilities faster than ever before.

The 1998 bug in the OpenBSD operating system was one of thousands Mythos discovered last month. Anthropic said last week that it is working with about 50 technology companies and organizations to find and fix bugs, and that it has no current plans to release Mythos to the public.

“We need to know we can release it safely, but it’s not clear how we can do that with confidence,” said Logan Graham, head of Anthropic’s Frontier Red Team, which assesses AI risks.

Anthropic rival OpenAI is developing a similar campaign to provide developers with secure versions of its products so they can patch the systems before criminals discover the bugs, according to a person familiar with the company’s plans. Google is also rolling out an early access program for developers, the company said.

Mythos has set off a scramble among technology workers at major companies, as many try to understand how new models could upend cybersecurity and expose their products to a new set of threats.

Numeric, a San Francisco-based AI accounting automation platform, recently launched a discussion on its risks in a cybersecurity Slack channel. “Well, this will be interesting,” one executive wrote.

Numeric co-founder Anthony Alvernaz said some of the biggest risks companies face may come from their reliance on so-called “open source” tools, which are often built collaboratively by volunteers who may not have the resources to quickly triage bug reports. This infrastructure underpins much of the modern internet, he said.

“The code that companies write is almost like the top layer of the cake, and underneath there are all these layers of open source software,” he said.

When security researcher Niels Provos heard that Mythos had discovered an old OpenBSD vulnerability, he wondered whether he had introduced the bug himself while writing code for OpenBSD 27 years ago, as a Ph.D. student at the University of Michigan. A quick check confirmed his suspicions.

“To be honest, I just think it’s hilarious. It’s code that’s so old,” said Provos, a former security director at payments company Stripe. “Who knows when the last time a human being looked at it was.”

For humans, finding and exploiting such bugs often requires countless hours of research. Provos said most hackers wouldn’t even bother looking at his old code, assuming it had already been picked over for bugs.

“Only a few people could do this before,” he said. “Now, with these tools, the skill required to develop really complex exploits has been significantly reduced.”

Anthropic said Mythos consumed about $20,000 in computing power over two days while it discovered the vulnerability and dozens of other issues.

Over the past few weeks, Mythos has proven to be better at writing code that exploits these vulnerabilities, Anthropic said.

Today, most cyberattacks do not involve previously undiscovered vulnerabilities (called zero-day vulnerabilities). Hackers more often exploit previously discovered bugs, steal login credentials, or use social engineering techniques to break into companies. Additionally, even if a single computer is hacked, most companies have additional strategies in place to mitigate cyberattacks.

Earlier this year, Anthropic’s software found more than 100 bugs in the Firefox browser and was even able to write code that exploited one of them in test builds of the browser. In the real world, Firefox has other security mitigations in place to block attacks, which would create more work for hackers.

Over the past few months, skepticism about the cybersecurity capabilities of the latest AI models has given way to worry that patching a large and growing number of bugs will create unprecedented logistical challenges: an artificial-intelligence equivalent of Y2K, the global effort in 1999 to patch software that couldn’t handle the year 2000. The Y2K warnings were scary, but the technical fixes mostly worked.

Many cybersecurity professionals believe an AI vulnerability apocalypse could play out the same way, but they say successfully patching thousands of vulnerabilities across so much software will require a monumental effort.

Senior White House officials, including national cyber director Sean Cairncross, are racing to counter the threat from Mythos and other models, proposing an effort to identify vulnerabilities in government systems and coordinate a private-sector response.

Investors worry these changes could disrupt the software industry; shares of cybersecurity companies fell last week.

HackerOne, which helps companies triage bug reports, said most companies are getting better at patching critical bugs, but artificial intelligence is increasing the number of reported bugs, and patching everything is taking longer. According to the company, bug submissions rose 76% over the past year, and the average time to fix a bug jumped from 160 days to 230 days over the same period.

Companies are also concerned that previously overlooked technology products could now be targeted. Unlike the tech giants, the companies or developers behind these more obscure products may not have the resources to keep up with patching.

“It will become much easier to attack random infrastructure that no one has attacked before,” said security researcher Thomas Ptacek, who works at the cloud-computing company Fly.io.

Sergej Epp got a taste of this phenomenon in February. As chief information security officer at cybersecurity firm Sysdig, he hadn’t tried to find a vulnerability in a decade. But by using Anthropic’s software, he quickly discovered some security issues.

Two weeks later, at a cybersecurity conference, he launched a vibe-coded website that uses publicly available data to show how quickly artificial-intelligence tools turn new bugs into software that can be used in attacks. Modeled on the Bulletin of the Atomic Scientists’ Doomsday Clock, which warns of nuclear annihilation, he called it the Zero Day Clock.

All software has flaws, he said, and when bugs are discovered, a race begins between the hackers looking to exploit them and the people looking to fix them. It is a long-running contest between attackers and defenders.

Eight years ago, the average time between a vulnerability’s public disclosure and an attack was 847 days, he said. Last year that number dropped to 23 days. This year, most were exploited within a day.

The site calls for the tech industry to fundamentally reboot the way software is built.

“AI is giving superpowers to hackers, not defenders,” Epp said.

Write to Robert McMillan at robert.mcmillan@wsj.com and Chip Cutter at chip.cutter@wsj.com
