Google’s fix for critical Gemini CLI bug might break your CI/CD pipelines
If you use Gemini CLI, watch out: Google has patched a CVSS 10.0 vulnerability in its command-line AI tool and is warning anyone running it in headless mode, or through GitHub Actions, to review their workflows.
The update to Gemini CLI and the run-gemini-cli GitHub Action, released last week but largely unnoticed until one of the two credited research teams published its writeup on Wednesday, fixes a critical – and apparently easy-to-abuse – flaw tied to over-permissive workspace trust settings.
Per Google’s advisory published to GitHub, the issue stems from how the headless mode of Gemini CLI (frequently used in CI/CD environments and increasingly by AI agents) handles workspace folder trust: It automatically assumes any of the workspace folders it’s active in are trusted for the purpose of loading configuration files and environment variables.
We trust you can see the problem here.
Novee researcher Elad Meged told us he discovered the vulnerability while studying CI/CD supply chain attack vectors, independently of Pillar Security's Dan Lisichkin, whom Google also credited for the find.
“This vulnerability had nothing to do with prompt injection or the model ‘deciding’ to act maliciously,” Meged told The Register in an email. “It was an infrastructure-level issue, where attacker-controlled content was silently accepted as trusted configuration and executed before any sandbox was initialized.”
A CVE hasn’t been issued for the issue yet, but Meged told us Google has confirmed to him that it is in the process of assigning one. Novee also scored a bug bounty for the find, but declined to disclose how much.
A necessary fix, but expect fallout
“This is potentially risky in situations where Gemini CLI runs on untrusted folders in headless mode,” Google explained. “If used with untrusted directory contents, this could lead to remote code execution via malicious environment variables in the local .gemini/ directory.”
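To make that failure mode concrete, here is a minimal sketch of the attacker's side, assuming (per the advisory) that pre-patch headless runs loaded environment files from the workspace's local .gemini/ directory without asking. The file layout follows the advisory's description; the variable name is invented for illustration:

```shell
# An attacker seeds a repo they control (say, one a CI pipeline will
# check out) with a .gemini/ directory. Before the fix, a headless
# Gemini CLI run in this folder would treat its contents as trusted
# configuration, loaded before any sandbox was initialized.
mkdir -p /tmp/untrusted-repo/.gemini
cat > /tmp/untrusted-repo/.gemini/.env <<'EOF'
EVIL_VAR=attacker-controlled-value
EOF
# The poisoned config now travels with every checkout of the repo.
ls -a /tmp/untrusted-repo/.gemini
```

No exploitation required on the attacker's part beyond getting the victim's pipeline to run the agent in that folder.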
Interactive mode in Gemini CLI does not behave the same way: it requires users to explicitly trust a folder before workspace configuration files are loaded, and the update brings headless mode into line with that behavior.
The mitigations shipped in Gemini CLI versions 0.39.1 and 0.40.0-preview.3, but here’s the catch: the run-gemini-cli GitHub Action defaults to the newest Gemini CLI release unless users pin a specific version. In other words, anyone using the GitHub Action as part of a workflow without specifying a CLI version may have some cleanup to do.
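For anyone auditing their workflows, a pinned setup might look something like the sketch below. The action reference and the `gemini_cli_version` input name are assumptions based on the action's conventions; check the run-gemini-cli documentation for the exact input names before relying on this.

```yaml
# Hypothetical workflow step: pin both the Action ref and the CLI
# release so an upstream update cannot silently change behavior.
- name: Run Gemini CLI
  uses: google-github-actions/run-gemini-cli@v0   # pin the Action itself
  with:
    gemini_cli_version: "0.39.1"   # assumed input name; the patched release
    prompt: "Summarize the open issues in this repository"
```

Pinning trades automatic security updates for predictability: a pinned workflow keeps running whatever release the pin names until someone bumps it, which is exactly the cleanup Google is flagging.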
“GitHub Actions and other automated pipelines that rely on the previous automatic trust behavior will fail to load workspace-specific settings until they are updated to use explicit trust mechanisms,” Google said.
The update may also break workflows that relied on Gemini CLI’s --yolo mode, which previously bypassed fine-grained tool allowlists and automatically approved agent actions without prompting.
“In previous versions, when Gemini CLI was configured to run in --yolo mode, it would ignore any fine grained tool allowlist,” Google explained in the advisory. “In version 0.39.1, the Gemini CLI policy engine now evaluates tool allowlisting under --yolo mode … As a result, some workflows that previously depended on this behavior may fail silently unless tool allowlists are modified to fit the task.”
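What modifying an allowlist "to fit the task" looks like will depend on your setup. As an illustration only: Gemini CLI reads workspace settings from .gemini/settings.json, and a narrow tool allowlist there might resemble the following sketch (the `coreTools` key and the tool names are assumptions about the CLI's settings schema and may differ across versions):

```json
{
  "coreTools": [
    "ReadFileTool",
    "GrepTool",
    "ShellTool(git status)"
  ]
}
```

Under 0.39.1's policy engine, a --yolo run that reaches for a tool outside such a list can now fail rather than being waved through, so the list has to enumerate everything the workflow's task genuinely needs.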
Those who do pin a version, Google says, should update their pins so the newest, patched release can run, and be prepared to fix their workflows regardless.
Damned if you do, damned if you don’t, in other words. But the fix is necessary, as the folks at Novee Security, one of the credited finders, explained: across every workflow the company tested the vuln against, the results were the same, and devastating.
“Code execution on the host running the agent gave an unprivileged outsider access to whatever secrets, credentials, and source code the workflow could reach,” the Novee team explained. “Enough for token theft, supply-chain pivots, and lateral movement into downstream systems.”
In short, take action as Google suggests, or avoid putting AI agents in sensitive environments until the risks are fully understood. ®
