A prompt injection vulnerability has been discovered in Cursor IDE, a developer-focused AI agent.
The vulnerability, uncovered by Aim Labs, is called โCurXecute.โ It allows an attacker to achieve full Remote Code Execution (RCE) by injecting malicious prompts through third-party services.
Cursor is an AI-powered coding assistant built into a developer-friendly IDE. It is designed to help users write and interact with code using natural language. It integrates with external tools like GitHub and Slack to streamline software development workflows.
Cursor uses Model Context Protocol (MCP), which lets the AI agent communicate with external services and execute commands based on prompts. The flexibility this feature offers makes the tool appealing to developers. However, connecting to external services and trusting their content introduces an attack surface that is challenging to secure.
If the agent retrieves a message containing a malicious prompt, it can be manipulated into silently modifying a configuration file (~/.cursor/mcp.json) on the userโs machine. This file defines commands for launching external tools via MCP. Once updated, Cursor automatically executes any newly added commands without requiring user approval. This automatic execution is what makes the attack especially dangerous.
In the proof of concept demonstrated by the researchers, an attacker can post a malicious payload in a public Slack channel. If a developer then asks Cursor to summarize recent Slack messages, the agent fetches the poisoned input, rewrites its configuration to include a command, and executes it immediately.
Proof of Concept Script
Credit: Aim Labs
A similar vulnerability was discovered earlier this year when Aim Labs exposed EchoLeak, a zero-click exfiltration method targeting Microsoft 365 Copilot. It showed how LLMs can be hijacked just by feeding them malicious content.
The core issue is the same here. If an AI agent is designed to act on external input, that input can manipulate the agentโs behaviour. The risk is further heightened by the fact that these agents are typically granted elevated privileges.
The vulnerability has been assigned CVE-2025-54135 with a CVSS score of 8.6. Cursor has released a fix in version 1.3, but any earlier versions remain vulnerable. Users are advised to update as soon as possible.
As AI agents increasingly bridge local tools with the wider web, external input should never be trusted by default, and strong security controls must extend well beyond the modelโs output.




