Anthropic quietly fixed flaws in its Git MCP server that allowed for remote code execution
Anthropic has fixed three bugs in its official Git MCP server that researchers say can be chained with other MCP tools to remotely execute malicious code or overwrite files via prompt injection.
The Git MCP server, mcp-server-git, connects AI tools such as Copilot, Claude, and Cursor to Git repositories and the GitHub platform, allowing them to read repositories and code files, and automate workflows, all using natural language interactions.
Agentic AI security startup Cyata found a way to exploit the vulnerabilities – a path validation bypass flaw (CVE-2025-68145), an unrestricted git_init issue (CVE-2025-68143), and an argument injection in git_diff (CVE-2025-68144) – and chain the Git MCP server with the Filesystem MCP server to achieve code execution.
“Agentic systems break in unexpected ways when multiple components interact. Each MCP server might look safe in isolation, but combine two of them, Git and Filesystem in this case, and you get a toxic combination,” Cyata security researcher Yarden Porat told The Register, adding that there’s no indication that attackers exploited the bugs in the wild.
“As organizations adopt more complex agentic systems with multiple tools and integrations, these combinations will multiply,” Porat said.
Cyata reported the three vulnerabilities to Anthropic in June, and the AI company fixed them in December. The flaws affect default deployments of mcp-server-git prior to 2025.12.18 – so make sure you’re using the updated version.
The Register reached out to Anthropic for this story, but the company did not respond to our inquiries.
There’s no S(ecurity) in MCP
In a Tuesday report shared with The Register ahead of publication, Cyata says the issues stem from the way AI systems connect to external data sources.
In 2024, Anthropic introduced the Model Context Protocol (MCP), an open standard that enables LLMs to interact with these other systems – filesystems, databases, APIs, messaging platforms, and development tools like Git. MCP servers act as the bridge between the model and external sources, providing the AI with access to the data or tools they need.
As we’ve seen repeatedly over the past year, LLMs can be manipulated into doing things they’re not supposed to do via prompt injection, which happens when attacker-controlled input causes an AI system to follow unintended instructions. It’s a problem that’s not going away anytime soon – and may never.
There are two types: indirect and direct. Direct prompt injection happens when someone directly submits malicious input, while indirect injection happens when content contains hidden commands that AI then follows as if the user had entered them.
This attack abuses the three now-fixed vulnerabilities.
CVE-2025-68145: The –repository flag is supposed to restrict the MCP server to a specific repository path. However, the server didn’t validate that repo_path arguments in subsequent tool calls within that configured path, thus allowing an attacker to bypass security boundaries and access any repository on the system.
CVE-2025-68143: The git_init tool accepted arbitrary filesystem paths and created Git repositories without any validation, allowing any directory to be turned into a Git repository and eligible for subsequent git operations through the MCP server. To fix this, Anthropic removed the git_init tool from the server.
CVE-2025-68144: The git_diff and git_checkout functions passed user-controlled arguments directly to the GitPython library without sanitization. “By injecting ‘–output=/path/to/file’ into the ‘target’ field, an attacker could overwrite any file with an empty diff,” and delete files, Cyata explained in the report.
Attack chain
As Porat explained to us, the attack uses indirect prompt injection: “Your IDE reads something malicious, a README file, a webpage, a GitHub issue, somewhere the attacker has planted instructions,” he said.
The vulnerabilities, when combined with the Filesystem MCP server, abuse Git’s smudge and clean filters, which execute shell commands defined in repository configuration files, and enable remote code execution.
According to Porat, it’s a four-step process:
This attack illustrates how, as more AI agents move into production, security has to keep pace.
“Security teams can’t evaluate each MCP server in a vacuum,” Porat said. “They need to assess the effective permissions of the entire agentic system, understand what tools can be chained together, and put controls in place. MCPs expand what agents can do, but they also expand the attack surface. Trust shouldn’t be assumed, it needs to be verified and controlled.” ®
READ MORE HERE
