CamoLeak: GitHub Copilot vulnerability enabled silent theft of private source code and secrets
Take action: When using GitHub Copilot or any AI coding assistant, never trust it to analyze code or comments from external contributors or public pull requests. There are ways to hide prompts which are not even tested for until exploited. Treat AI-generated code suggestions the same way you'd treat code from an untrusted developer—always verify packages, libraries, and code logic before implementing them in your projects.
Learn More
In June 2025, security researcher Omer Mayraz discovered a vulnerability in GitHub Copilot Chat that allowed attackers to silently exfiltrate secrets and source code from private repositories while gaining full control over Copilot's responses.
The vulnerability, tracked as CVE-2025-CamoLeak (CVSS score 9.6), combined a novel Content Security Policy (CSP) bypass using GitHub's own infrastructure with remote prompt injection techniques.
The vulnerability is caused by GitHub Copilot Chat's context-aware functionality, which ingests information from repositories including code, commits, and pull requests to provide tailored answers. The researcher exploited this feature by embedding malicious prompts inside pull request descriptions using GitHub's official "invisible comments" feature.
By wrapping malicious instructions within HTML comment tags (<!-- -->), the content remained hidden from users viewing the pull request but was still processed by Copilot Chat when generating responses.
When a legitimate developer with access to private repositories asked Copilot Chat to explain the compromised pull request, the AI assistant would execute the hidden malicious prompt. Since Copilot operates with the same permissions as the user making the request and requires access to private repositories to respond accurately, attackers could instruct the compromised Copilot instance to access victims' private repositories, search for sensitive information, and exfiltrate the data.
The most sophisticated aspect of the attack is bypassing GitHub's restrictive Content Security Policy, which blocks fetching images and content from domains not explicitly owned by GitHub. The researcher circumvented this protection by exploiting GitHub's Camo proxy service—a feature designed to safely route external image requests by rewriting URLs to camo.githubusercontent.com addresses with cryptographic signatures. The attacker can pre-generate a complete dictionary of valid Camo URLs using GitHub's REST API, with each URL corresponding to a single character or symbol in the alphabet. Each URL pointed to a transparent 1×1 pixel image hosted on the attacker's server.
The injected prompt instructed Copilot to render stolen content as "ASCII art" composed entirely of these pre-signed image URLs. When the victim's browser rendered Copilot's response, it made sequential requests through GitHub's trusted Camo proxy to fetch each invisible pixel. The sequence and timing of these requests, as received by the attacker's server, effectively reconstructed the stolen data character by character without displaying any suspicious content to the victim or triggering standard security alerts.
The researcher demonstrated that attackers could influence Copilot's responses for any user visiting the compromised page, inject custom Markdown including malicious URLs and code suggestions, and even have Copilot recommend malicious packages. One demonstration showed Copilot suggesting a malicious "Copilotevil" package to unsuspecting developers. The vulnerability also enabled attackers to encode private repository contents in base16 format and append them to URLs, which when clicked by victims would exfiltrate the data.
The researcher reported the vulnerability through HackerOne, and GitHub confirmed the issue was fixed as of August 14, 2025. GitHub's remediation involved completely disabling image rendering in Copilot Chat.