Skip to main content

Researchers have discovered an indirect prompt injection vulnerability in GitLab’s AI coding assistant, Duo. The vulnerability could have allowed attackers to steal private source code, inject malicious HTML into AI-generated responses, and mislead users with poisoned responses.

Duo was launched in 2023 and is powered by Anthropic’s Claude models. It is designed to help users write, edit, and review code within GitLab. It analyzes not just source code, but also metadata such as commit messages, issue descriptions, merge request discussions, and comments.

That broad access makes it convenient for developers, but it also widened the attack surface. According to Legit Security researchers, Duo did not sufficiently sanitize the content it analyzed. As a result, attackers could hide harmful instructions anywhere in a project.

Unlike direct prompt injection, where the attacker directly feeds malicious instructions to the AI, this case involved malicious prompts embedded in project content that the AI assistant would normally read during regular use. These prompts could be hidden in elements like merge request descriptions, comments, or even commit messages. To further evade detection, they could also be concealed using Base16 encoding, Unicode tricks, or white text blending into the page background.

One of the major risks came from the way Duo renders output. It uses streaming Markdown, which is immediately converted to HTML and displayed in the user’s browser. As a result, when Duo processed an injected prompt that contained malicious HTML, it rendered that HTML as part of its live response. This allowed the researchers to demonstrate a proof-of-concept where Duo sent a victim’s source code to an external server via a simple GET request, triggered just by reviewing a poisoned merge request.

Attack Chain

Credit: Legit Security 

GitLab responded to the disclosure by addressing the HTML rendering issue and communicating further with the researchers to ensure other risks were also mitigated. As of May, Legit Security confirmed that the flaws had been resolved, though they acknowledged it’s still possible to influence Duo’s responses, just not in a harmful way.

This case shows just how important it is to continuously monitor the security of AI tools, especially when they’re used in everyday coding work. As these AI tools become more integrated into development workflows, staying alert to new kinds of risks will be key to ensuring projects remain secure and reliable.

About the author: