Google confirms hackers used AI to create zero-day exploit bypassing two-factor authentication


For years, the cybersecurity industry warned that AI-assisted hacking was coming. It’s here now. Google’s Threat Intelligence Group (GTIG) has confirmed the first known case of a zero-day exploit generated with the help of artificial intelligence, one that bypasses two-factor authentication by exploiting a hardcoded trust flaw in a widely used open-source web administration tool.

The discovery, published on May 11, 2026, represents a meaningful escalation in the cat-and-mouse game between security researchers and threat actors. And for anyone in crypto who treats 2FA as a security blanket, this is a wake-up call.

What GTIG found, and why it’s different

The exploit is a Python script designed to circumvent 2FA protections by targeting a logic flaw in an unnamed but widely deployed open-source web admin tool. In English: the tool had a weakness in how it decides to trust certain authentication requests, and the script was built specifically to abuse that weakness.
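GTIG has not published the vulnerable code, so the following is a minimal hypothetical sketch of the general vulnerability class, a "hardcoded trust" flaw in an authentication check. Every name here (the header, the token, the function names) is invented for illustration and does not come from the actual tool:

```python
# Hypothetical sketch of a "hardcoded trust" flaw. All identifiers are
# invented; this is not the actual vulnerable tool's code.

TRUSTED_INTERNAL_TOKEN = "internal-admin-bypass"  # baked into the source code

def requires_second_factor(headers: dict) -> bool:
    """Vulnerable check: any request presenting the hardcoded token is
    treated as pre-authenticated, so the 2FA step is silently skipped."""
    if headers.get("X-Internal-Auth") == TRUSTED_INTERNAL_TOKEN:
        return False  # the flaw: 2FA waived for anyone who knows the token
    return True

def requires_second_factor_fixed(headers: dict) -> bool:
    """Patched check: 2FA is enforced unconditionally; there is no
    hardcoded escape hatch left in the logic."""
    return True
```

Because the trusted value ships in the source of an open-source tool, anyone who reads the code can mint a request that the server treats as already authenticated, which is exactly the kind of logic weakness a script can be built to abuse.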

What makes this case unprecedented isn’t just the exploit itself. It’s the fingerprints left behind.

GTIG researchers identified several telltale markers of AI-generated code throughout the script. Clean ANSI color classes, organized educational prompts, a fabricated CVSS score (the industry-standard severity rating), and detailed help menus were all present. These are characteristics that almost never show up in manually written exploits.

Think of it like finding a burglary tool kit where every tool is individually labeled with instructions and color-coded by function. Human hackers don’t typically bother with that kind of polish. Large language models, on the other hand, are trained to be helpful and organized, even when the output is malicious.
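To make the "fingerprints" concrete, here is an illustrative (and entirely invented) example of the kind of scaffolding GTIG flagged: a tidy ANSI color class and a verbose help menu, polish that hand-written exploit tooling almost never bothers with. None of this is from the actual script:

```python
# Illustrative only: the style of "helpful" scaffolding described as an
# AI fingerprint. All names and text here are invented.

class Colors:
    """Neatly organized ANSI escape codes, one per severity level."""
    RED = "\033[91m"
    YELLOW = "\033[93m"
    GREEN = "\033[92m"
    RESET = "\033[0m"

HELP_TEXT = """\
Usage: tool.py [options]

Options:
  -t, --target   Target host to test
  -v, --verbose  Enable detailed output
  -h, --help     Show this help message
"""

def banner(severity: str, message: str) -> str:
    """Color-code a log line by severity and reset formatting afterward."""
    color = {"high": Colors.RED, "medium": Colors.YELLOW}.get(severity, Colors.GREEN)
    return f"{color}[{severity.upper()}] {message}{Colors.RESET}"
```

A human writing a throwaway exploit rarely adds labeled color constants or a formatted usage screen; a model trained on well-documented open-source projects produces them by default.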

GTIG’s analysis found that the code structure aligns closely with training data patterns from large language models. The group was able to exclude Google’s own Gemini model from involvement, meaning the threat actors used a different AI system to both discover the vulnerability and engineer the working exploit.

Google’s intervention stopped a mass exploitation campaign

Here’s the thing. This wasn’t just an academic exercise or a proof-of-concept sitting on some dark web forum. GTIG determined that the threat actors had plans for mass exploitation, meaning they intended to deploy the exploit at scale against systems running the vulnerable tool.

Google intervened by working directly with the vendor to implement a patch before that campaign could launch. The timeline suggests GTIG caught this relatively early in the exploitation lifecycle, which is the best-case scenario for an incident like this.

But it got this far: an AI model was used not just to write a script, but to identify a previously unknown vulnerability and then build a functional bypass around 2FA. That marks a new chapter in offensive cybersecurity, and the barrier to entry for sophisticated exploit development just dropped considerably.

Previously, crafting a zero-day required deep expertise in reverse engineering, vulnerability research, and exploit development. These are skills that take years to develop. An AI model can compress much of that process into hours, lowering the skill floor for would-be attackers while raising the ceiling for what experienced hackers can accomplish.

Why crypto should be paying attention

No specific cryptocurrency platforms have been linked to this particular exploit. But the implications for the crypto industry are hard to ignore.

Two-factor authentication is a foundational security layer across virtually every major cryptocurrency exchange, wallet provider, and DeFi platform. Many of these services run on or integrate with open-source web administration tools, precisely the category of software targeted here.

The hardcoded trust flaw at the center of this exploit is the kind of vulnerability that can exist across multiple implementations of similar software. If one open-source admin tool had this issue, there’s a reasonable chance others share comparable logic weaknesses.

For crypto users, the practical takeaway is that 2FA is necessary but not sufficient. Hardware security keys, withdrawal whitelists, and multi-signature wallet setups provide additional layers that wouldn’t be compromised by a 2FA bypass alone. Exchanges and custodians that rely solely on software-based 2FA as their primary defense should be reevaluating their security architecture in light of this discovery.

The broader concern is the acceleration curve. If AI can generate a functional zero-day today targeting a web admin tool, it’s not a stretch to imagine similar techniques being applied to smart contract vulnerabilities, browser extension wallets, or API authentication systems used by trading platforms. The attack surface in crypto is already enormous. AI-assisted exploit generation makes it dramatically harder to defend.

Look, the cybersecurity arms race has always favored whoever moves faster. For the first time, attackers have a tool that can systematically probe for weaknesses at machine speed. Google caught this one. The next AI-generated exploit might not come with such convenient fingerprints, and the target might not have a GTIG-caliber team watching the perimeter.

