A researcher from Legit Security has discovered a critical vulnerability in GitHub Copilot Chat that exposed private repositories to silent data exfiltration. The flaw, called CamoLeak, received a 9.6 CVSS score due to its potential impact on confidential source code and data.
GitHub Copilot Chat is an AI assistant that uses repository context to answer questions and suggest code. That contextual access is useful for developers, but it also widens the attack surface. The assistant runs with the same repository permissions as the authenticated user, so Copilot can access any data the user can access.
The proof of concept began with a prompt injection embedded inside an invisible comment within a Pull Request (PR) description. Invisible comments are a legitimate GitHub feature, used by integrations or bots to store metadata. In this case, the attacker used that hidden space to plant malicious instructions.
When another user later asked Copilot Chat to “explain this pull request,” Copilot processed the hidden comment as part of the PR’s content. This caused the injected command to run in the victim’s Copilot session.
However, there was an obstacle blocking exfiltration. GitHub has a security feature called Camo. Camo rewrites external image URLs into cryptographically signed GitHub-owned links. When a browser requests an image, GitHub verifies the signature and then fetches the image through its servers. This prevents outbound requests from the user’s browser to attacker-controlled domains. It also blocks simple image-based exfiltration attempts that rely on direct <img> tags pointing to an attacker’s host.
The injected prompt instructed Copilot to encode sensitive information, such as API keys or issue titles, into a sequence of those pixel images.
Exfiltrating AWS Keys
Source: Legit Security
When the victim’s browser rendered the images, GitHub’s servers fetched each corresponding pixel from the attacker’s domain. By observing the order of these requests, the attacker could reconstruct the stolen data character by character. The process was invisible to the user and appeared identical to normal image loading behaviour.
While the technique could not steal large volumes of data, it proved effective for stealthy, targeted leaks of short text fragments or credentials.
Legit Security noted that such activity would likely evade both user detection and GitHub’s internal monitoring systems since all traffic appeared legitimate and originated from GitHub’s infrastructure. The researcher reported the issue via HackerOne, and GitHub responded by disabling image rendering in Copilot Chat, which immediately closed the attack path.
CamoLeak shows how product features can be chained into covert channels when they interact with AI assistants that have repository-level access. Practical steps to mitigate this issue include limiting assistant access to sensitive repos, validating and sanitizing contextual inputs, auditing model outputs, and monitoring outbound patterns that originate from tools.




