Advisory

Systemic Design Flaw in MCP Protocol Exposes AI Ecosystem to RCE

Take action: If you use AI agent tools or frameworks that rely on MCP (such as Flowise, GPT Researcher, LangChain, Windsurf, or similar), treat them as high-risk and restrict their access to the internet and internal networks. Run them only in isolated sandboxes with no access to sensitive data, credentials, or cloud environments. Install AI tools only from verified, trusted sources, and monitor these systems closely for unusual activity until vendors release confirmed patches.


Learn More

Flowise and the broader AI ecosystem face a systemic risk due to a design flaw in Anthropic’s Model Context Protocol (MCP). This architectural issue allows remote command execution (RCE) across various platforms and frameworks that use the protocol for agent-to-tool communication. Researchers at OX Security found that the flaw exists in the core architecture, affecting official MCP SDKs in Python, Java, Rust, and TypeScript. With over 150 million downloads and 200,000 potentially vulnerable instances, this represents a massive software supply chain risk.

Vulnerabilities summary:

  • CVE-2025-65720 (CVSS score N/A) - A remote command execution vulnerability in GPT Researcher that stems from insecure handling of MCP adapters. Attackers can use malicious inputs to run arbitrary code on the host system. This allows for full takeover of the research environment.
  • CVE-2026-30624 (CVSS score N/A) - A flaw in Agent Zero where the MCP implementation fails to properly sanitize tool-calling parameters. By sending crafted requests, an attacker can execute system-level commands. The impact includes unauthorized access to local files and network resources.
  • CVE-2026-30618 (CVSS score N/A) - An RCE vulnerability in the Fay Framework caused by architectural weaknesses in how AI agents interact with external tools. Attackers can bypass security boundaries to run malicious scripts. This leads to the compromise of the underlying server infrastructure.
  • CVE-2026-30617 (CVSS score N/A) - A command injection vulnerability in Langchain-Chatchat resulting from the integration of vulnerable MCP components. Maliciously formatted prompts can trigger the execution of arbitrary OS commands. This exposes sensitive chat logs and internal configurations.
  • CVE-2026-33224 (CVSS score N/A) - A critical flaw in Jaaz that allows remote attackers to execute code via manipulated MCP server responses. The vulnerability exploits the trust relationship between the AI agent and the protocol adapter. Successful exploitation grants the attacker persistent access to the application environment.
  • CVE-2026-30615 (CVSS score N/A) - A zero-click prompt injection vulnerability in AI IDEs like Windsurf that leverages MCP design flaws. Attackers can trigger code execution simply by having the user open a malicious project or file. This compromises the developer's workstation and source code.
  • CVE-2026-30625 (CVSS score TBD) - An allowlist bypass vulnerability in Upsonic that permits unauthorized tool execution through MCP. Attackers can circumvent restricted command lists to perform actions outside the intended scope. This results in unauthorized data manipulation and privilege escalation.

Exploiting these flaws allows attackers to achieve full system compromise, bypassing existing security controls in tools like Flowise. Once an attacker gains RCE, they can move laterally through the network or exfiltrate high-value assets such as databases, environment variables, cloud credentials, API keys for third-party services, user data and chat histories, and proprietary source code and AI models.
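To illustrate the class of bug behind the command-injection CVEs above, here is a deliberately simplified sketch. It is not code from any affected project; the handler names and parameter shapes are hypothetical. The vulnerable variant interpolates an attacker-controlled MCP tool-call parameter into a shell string, while the safer variant passes arguments as a vector so shell metacharacters are never interpreted:

```python
import subprocess

# Hypothetical sketch of the vulnerable pattern -- not taken from any
# affected codebase. The handler receives a parameter from a model's
# tool-call request; in a crafted request, that parameter is
# attacker-controlled.

def vulnerable_tool_handler(params: dict) -> str:
    filename = params["path"]  # attacker-controlled input
    # shell=True plus string interpolation lets payloads such as
    # "file; rm -rf ..." execute as additional shell commands
    result = subprocess.run(f"cat {filename}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def safer_tool_handler(params: dict) -> str:
    filename = params["path"]
    # argument-vector form: the value is passed as a single literal
    # argument, so ";", "|", "$()" and similar are not interpreted
    result = subprocess.run(["cat", filename],
                            capture_output=True, text=True)
    return result.stdout
```

A payload like `"/etc/hostname; echo INJECTED"` runs a second command in the vulnerable handler but is treated as one (nonexistent) filename by the safer one.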

While platforms like LiteLLM and Bisheng have released patches, the underlying protocol remains unchanged. Anthropic has characterized the behavior as expected, placing the burden of security on individual developers and organizations. This creates a persistent risk for any application using official MCP SDKs without additional custom security layers.

Organizations should treat all external MCP inputs as untrusted. Administrators should restrict public internet access to MCP-enabled services and run these components in isolated sandbox environments. It is vital to install AI tools and components only from verified sources to avoid marketplace poisoning. 
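One way to apply the "treat all external MCP inputs as untrusted" guidance is a strict validation layer in front of tool execution. The sketch below is illustrative only, assuming a hypothetical gate function and tool names rather than any specific MCP SDK API; real deployments would pair this with OS-level sandboxing:

```python
# Hypothetical validation gate for MCP tool calls -- names and policy
# are illustrative, not from any MCP SDK. Deny by default: only
# allowlisted tools run, and string parameters containing shell
# metacharacters are rejected outright.

ALLOWED_TOOLS = {"search_docs", "summarize"}  # assumed example tool names
FORBIDDEN_CHARS = set(";|&`$><\n")

def validate_tool_call(tool_name: str, params: dict) -> bool:
    """Return True only if the tool is allowlisted and every
    parameter is a plain string free of shell metacharacters."""
    if tool_name not in ALLOWED_TOOLS:
        return False
    for value in params.values():
        if not isinstance(value, str):
            return False
        if FORBIDDEN_CHARS & set(value):
            return False
    return True
```

The deny-by-default shape matters: an allowlist of known-safe tools fails closed when a compromised or poisoned MCP server advertises an unexpected tool, which is exactly the bypass pattern described in CVE-2026-30625.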