A cybersecurity researcher has disclosed a vulnerability in Microsoft 365 Copilot that exposes user emails and personal data. In a blog post made on August 26, Johann Rehberger described the recently patched vulnerabilityโs exploit chain.
A part of Microsoft’s extensive suite, Microsoft 365 Copilot is an AI-powered assistant that is integrated into other Microsoft 365 applications like Word, Excel, Outlook, and Teams. It uses Large Language Models (LLMs) to help users with various tasks, including drafting emails, summarizing documents, and automating workflows.
The newly disclosed vulnerability is particularly concerning due to its sophisticated nature and the relatively new attack techniques it employs. The exploit combines advanced techniques as follows:
- Prompt Injection: Prompt injection occurs when an attacker manipulates a language model’s output by injecting malicious code or prompts into the input. In this case, the attack begins with a malicious email or document, which includes hidden instructions that manipulate Copilot. This allows the attacker to take control of Copilot by injecting prompts that it executes.
- Automatic Tool Invocation: After the initial prompt injection, Copilot can then autonomously search through other emails and documents, retrieving sensitive information without the user’s knowledge. In one instance, Copilot was manipulated into searching for specific Slack Multi-Factor Authentication (MFA) codes because an email contained hidden instructions to do so.
- ASCII Smuggling: The final stage of the attack is the exfiltration of data using ASCII Smuggling. ASCII Smuggling is a novel technique where attackers use Unicode characters that look like standard ASCII but are invisible in the user interface. These hidden Unicode characters are embedded within hyperlinks to manipulate requests. For example, a link might look like a standard URL but actually contains hidden Unicode characters that, once clicked, will send sensitive information to the attacker’s server. This method of data exfiltration is sneaky and stealthy because the user is unaware that anything unusual has occurred.
To mitigate the vulnerability, Rehberger recommended that Microsoft avoid rendering clickable hyperlinks to prevent phishing. Additionally, he pointed out that automatic tool invocation is risky without a fix for prompt injection, as attackers could exploit this feature to access sensitive information.
Although the issue was initially classified as low severity, further demonstrations of the exploit’s capabilities highlighted its potential impact, prompting Microsoft to take action. The Microsoft Security Response Center (MSRC) has since patched the vulnerability. However, the details of the fix remain undisclosed.
This case shows that AI solutions like Microsoft Copilot can be exploited in unexpected ways.ย As attackers continue to develop new techniques and strategies, it is important that organizations and developers implement robust security measures and conduct regular testing to stay vigilant against threats.