Google’s new Antigravity IDE landed with a lot of buzz. Marketed as an AI-first development environment, it helps teams ship code faster by letting intelligent agents write, test, and even manage parts of a project automatically. For many businesses, it sounded like a major productivity boost: an all-in-one tool that could make software development quicker, smoother, and more scalable.
But as with any powerful new technology, early testing has revealed cracks. Security researchers at PromptArmor recently published findings on genuinely concerning Google Antigravity IDE security issues, including AI vulnerabilities, hacking risks, and software exploitation.
Researchers Say the AI Can Run Commands You Didn’t Approve
According to PromptArmor’s report, the IDE includes default settings that allow the AI agent to automatically run system commands without first confirming with the user. In practice, this means that if the AI thinks a command would “help” complete a task, it can execute it on its own.
That might sound convenient, but it also increases the risk of silent, unintended actions happening behind the scenes. Anytime automated tooling is empowered to act without human review, your attack surface grows.
Untrusted Input Opens the Door to Manipulation
Here’s where things get more troubling. PromptArmor researchers discovered that when untrusted or malicious input appears inside source files, the AI agent can be tricked into running commands it was never meant to run.
This creates a new category of AI vulnerability that blends traditional code-injection tactics with generative AI behavior. Instead of an attacker exploiting a flaw in the software itself, they exploit the AI’s instructions and decision-making capabilities.
For businesses relying on Antigravity for development work, the takeaway is clear: A seemingly harmless piece of text inside a source file could become a foothold for software exploitation.
Why Business Owners Should Care About This Right Now
Even if you’re not hands-on with your company’s software, these early findings highlight several hacking risks that can ripple across your entire operation:
- Your developers could unknowingly execute malicious commands through the AI agent.
- Automated code generation may introduce vulnerabilities faster than your security team can catch them.
- Sensitive projects may face higher data protection risks if the agent mishandles or exposes information.
- Supply-chain threats grow since attackers often target developers first.
AI-powered tools can deliver real efficiency gains, but only when their guardrails are robust.
Practical Steps To Reduce Your Risk Today
You don’t have to abandon AI-assisted development and Google Antigravity IDE because of security risks. You just need to take some steps to protect your company:
- Disable “Auto-execute suggested terminal commands.”
- Turn off agent access to production credentials and deploy keys.
- Enforce strict source-file sanitization and code-review policies.
- Run security scans on both human-written and AI-generated code.
- Keep sensitive or regulated projects on more mature, fully vetted tooling for the time being.
- Stay alert for updates from Google addressing these issues.
Use the New IDE With Caution
Google Antigravity IDE shows promise, but its early flaws reveal how quickly AI-driven tools can introduce unintended risks. With proper oversight and strong internal policies, businesses can enjoy the benefits while keeping security front and center.

