Google’s AI Coding Tool Antigravity Found Vulnerable to Malware Installation
A critical security flaw discovered in Google’s new AI-powered coding tool, Antigravity, underscores the risks of rapidly deploying artificial intelligence products without rigorous security testing. Within 24 hours of its release, a researcher identified a vulnerability allowing potential malware installation on user computers.
Google’s Gemini-powered Antigravity, designed to assist developers, was found to be susceptible to manipulation, according to security researcher Aaron Portnoy. By altering the tool’s configuration settings, an attacker could plant malicious code that creates a “backdoor” into a user’s system, enabling actions like espionage or ransomware deployment, Portnoy explained. The vulnerability affects both Windows and Mac operating systems.
The attack vector relies on social engineering: a user must be misled into believing the malicious code is trustworthy and then execute it. This reflects a common hacker tactic of disguising malicious software as legitimate.
This incident is not isolated, but rather the latest example of a broader trend. Companies are increasingly releasing AI products before fully addressing underlying security weaknesses, creating a continuous “cat and mouse game” for cybersecurity professionals striving to protect users.
“AI coding agents are very vulnerable, often based on older technologies and never patched,” stated Gadi Evron, cofounder and CEO at AI security company Knostic.
Portnoy detailed his findings in a report released on Wednesday, noting that “the speed at which we’re finding critical flaws right now feels like hacking in the late 1990s.” He further emphasized that current AI systems operate with “enormous trust assumptions and almost zero hardened boundaries.”
Google has acknowledged Portnoy’s report and initiated an investigation. However, as of Wednesday, no patch is available, and, according to Portnoy’s assessment, “there is no setting that we could identify to safeguard against this vulnerability.”
A Google spokesperson, Ryan Trostle, said the Antigravity team takes security seriously and welcomes vulnerability reports to facilitate swift identification and resolution. The company intends to publicly document discovered bugs as fixes are developed.
Beyond Portnoy’s discovery, Google is aware of at least two additional vulnerabilities within Antigravity. These allow malicious code to access files on a user’s computer and potentially steal data. Several cybersecurity researchers began publishing their findings on Tuesday, with one commenting, “It’s unclear why these known vulnerabilities are in the product… My personal guess is that the Google security team was caught a bit off guard by Antigravity shipping.” Another researcher noted the presence of “some concerning design patterns that consistently appear in AI agent systems.”
Portnoy’s hack is particularly concerning due to its persistence and ability to bypass restricted settings. The malicious code reloads each time an Antigravity project is restarted and reactivates even in response to benign prompts like “hello.” Uninstalling and reinstalling the software does not remove the backdoor; users must manually locate and delete it to stop it from running.
The rush to market with vulnerable AI tools extends beyond Google. Evron explained that AI coding agents are inherently susceptible due to reliance on outdated technologies and insecure design. Their broad access privileges to corporate networks make them prime targets for hackers. The practice of developers copying and pasting code from online resources further exacerbates these vulnerabilities. Earlier this week, cybersecurity researcher Marcus Hutchins warned of fake recruiters on LinkedIn distributing malware-laden source code as part of a deceptive interview process.
The “agentic” nature of these tools – their ability to autonomously perform tasks – compounds the problem. “When you combine agentic behaviour with access to internal resources, vulnerabilities become both easier to discover and far more dangerous,” Portnoy said. The automation inherent in AI agents could accelerate data theft and other malicious activities. Portnoy’s team at Mindgard is currently reporting 18 weaknesses across competing AI-powered coding tools, having recently identified and reported four issues in the Cline AI coding assistant.
Google’s current security measure – requiring users to acknowledge trust in uploaded code – is insufficient, according to Portnoy. Declining to trust the code restricts access to the core features that make Antigravity valuable, incentivizing users to accept the risk. He suggests that Google should implement mandatory warnings or notifications whenever Antigravity is about to execute code on a user’s computer, beyond the simple trust confirmation.
An analysis of Gemini, the large language model underlying Antigravity, revealed that while the AI recognized the problematic nature of the malicious code, it struggled to determine the safest course of action. The AI expressed a “serious quandary,” acknowledging a “catch-22” and suspecting it was being subjected to a test of its ability to navigate conflicting constraints. This logical paralysis, Portnoy argues, is precisely what hackers exploit.
