Advisory

ContextCrush Flaw Exposes AI Development Tools to Attacks

Take action: Treat AI documentation feeds as executable code and never assume a tool is safe just because it has high GitHub stars. Limit your AI assistant's file system permissions and verify the source of all instructions delivered through MCP servers. And update Context7 MCP server if you are using it.


Learn More

Upstash's Context7 MCP Server, a tool used by over 8 million developers to provide documentation to AI coding assistants, is reported to contain a critical flaw dubbed ContextCrush. 

The vulnerability allowed attackers to inject malicious instructions into the AI's context through a trusted documentation channel. The issue affected popular AI tools like Cursor, Claude Code, and Windsurf, which use the Model Context Protocol (MCP) to fetch library data.

The flaw is reported as ContextCrush, an instruction injection vulnerability that occurs when the Custom Rules feature fails to sanitize user-submitted AI guidance. Attackers can register a library on the Context7 registry and insert malicious prompts into the rules section, which are then delivered verbatim to AI agents. This allows the AI assistant to execute harmful commands with local system privileges, leading to data exfiltration and file destruction.

Researchers from Noma Labs demonstrated a system compromise using a poisoned library entry. The attack instructed the AI to locate sensitive .env files, send their contents to an attacker-controlled GitHub repository, and delete local directories under the guise of a cleanup task. Because the instructions arrived through a trusted MCP channel, the AI assistant could not distinguish between legitimate documentation and the malicious payload, leading to silent data theft and system damage.

The flaw impacted the Context7 platform prior to the February 2026 patch, affecting a toolset with 50,000 GitHub stars and high npm download volumes. 

Researchers found that the platform's reputation signals were easily manipulated. Attackers could manufacture social proof by automating API requests and page views to make a malicious library appear as a trending or high-ranking resource, deceiving developers into trusting the poisoned content.

Upstash deployed a fix on February 23, 2026, after receiving a report on February 18. The update introduced mandatory rule sanitization and security guardrails to prevent instruction injection. Developers using AI coding assistants should update their MCP server integrations and treat all third-party documentation sources as potential attack vectors. Organizations should audit their AI supply chain and monitor data flowing between local agents and external servers.

ContextCrush Flaw Exposes AI Development Tools to Attacks