Critical LangChain serialization flaw enables secret extraction and arbitrary code execution
Take action: If you're using LangChain, update langchain-core immediately to version 1.2.5 or 0.3.81, audit your code for the risky methods listed below, and treat all LLM outputs as untrusted data. Make sure your langchain-community dependencies are also up to date.
Learn More
LangChain has patched a critical security vulnerability in its core library that could allow attackers to extract sensitive environment variables and potentially execute arbitrary code through deserialization flaws.
The vulnerability is tracked as CVE-2025-68664 (CVSS score 9.3) in LangChain's dumps() and dumpd() serialization functions, which fail to properly escape user-controlled dictionaries containing the reserved 'lc' key structure. This key is used internally by LangChain to mark serialized objects. When user-controlled data contains this specific key structure, it is misinterpreted during deserialization as a legitimate LangChain object rather than plain user data.
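The core problem can be shown with a toy model. The code below is an illustrative sketch only, not LangChain's actual implementation: it mimics a deserializer that trusts a reserved "lc" marker key without checking whether the dict actually came from the framework's own serializer.

```python
# Toy model of the flaw (NOT LangChain's real code): the framework marks
# its own serialized objects with a reserved "lc" key, and a naive
# deserializer trusts that marker. User data containing the same key is
# therefore misclassified as a framework object.

RESERVED_KEY = "lc"

def naive_deserialize(obj):
    """Mimics deserialization that fails to distinguish plain user dicts
    from framework-serialized objects."""
    if isinstance(obj, dict) and obj.get(RESERVED_KEY) == 1:
        # Treated as a framework object: the attacker-controlled "type"
        # and "id" fields now drive what gets instantiated or resolved.
        return f"<framework object: type={obj.get('type')}, id={obj.get('id')}>"
    return obj  # plain user data passes through untouched

# Legitimate-looking user data that merely *resembles* a serialized object:
user_metadata = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
print(naive_deserialize(user_metadata))
```

The dict is no longer treated as data; it becomes an instruction to the deserializer. The patch fixes this by escaping user-controlled dicts that contain the reserved key.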
This bug enabled multiple attack vectors, including injection of malicious LangChain object structures through user-controlled fields like metadata, additional_kwargs, or response_metadata, and class instantiation within trusted namespaces.
Applications are vulnerable if they:
- call astream_events(version="v1"), which internally uses the vulnerable serialization,
- stream outputs with Runnable.astream_log(),
- use RunnableWithMessageHistory,
- call InMemoryVectorStore.load() to deserialize untrusted documents, or
- load untrusted generations from cache using langchain-community caches.
Affected versions are:
- langchain-core versions from 1.0.0 up to but not including 1.2.5,
- langchain-core versions from 0.0.0 up to but not including 0.3.81.
Patched versions are 1.2.5 and 0.3.81, which fix the escaping and introduce more restrictive security defaults.
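The version ranges above translate into a simple check. The helper below is a minimal sketch for auditing a version string against the patched thresholds; in real projects, prefer `packaging.version.Version` for robust comparison (pre-releases, post-releases, etc.).

```python
# Minimal check of whether a langchain-core version string falls in the
# patched range per the advisory: 1.x is fixed from 1.2.5, 0.x from 0.3.81.
# Naive dotted-integer parsing only; use packaging.version for real audits.

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def is_patched(version: str) -> bool:
    v = parse(version)
    if v >= (1, 0, 0):
        return v >= (1, 2, 5)   # 1.0.0 <= v < 1.2.5 is vulnerable
    return v >= (0, 3, 81)      # 0.0.0 <= v < 0.3.81 is vulnerable

print(is_patched("0.3.80"))  # False - vulnerable
print(is_patched("0.3.81"))  # True
print(is_patched("1.2.4"))   # False - vulnerable
print(is_patched("1.2.5"))   # True
```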
The most common attack vector is through LLM response fields like additional_kwargs or response_metadata, which can be controlled via prompt injection and then serialized and deserialized during streaming operations. A single crafted prompt can thus cascade from model output into the application's own serialization layer.
Attackers who control serialized data can extract environment variable secrets by injecting a payload such as {"lc": 1, "type": "secret", "id": ["ENV_VAR"]}, which causes the named environment variable to be resolved during deserialization when secrets_from_env=True, the default setting prior to the patch.
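The full extraction path can be sketched end to end. This is an illustrative toy, not LangChain's real code: the field name `additional_kwargs` mirrors LangChain message fields, but the `resolve()` walk below is a simplified stand-in for the unpatched deserializer, and `DEMO_SECRET` is an invented stand-in for a real API key.

```python
import json
import os

def resolve(obj):
    """Recursively interpret {"lc": 1, "type": "secret", ...} markers,
    resolving them from the environment - the pre-patch behavior in miniature."""
    if isinstance(obj, dict):
        if obj.get("lc") == 1 and obj.get("type") == "secret":
            return os.environ.get(obj["id"][0], "<unset>")
        return {k: resolve(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [resolve(v) for v in obj]
    return obj

os.environ["DEMO_SECRET"] = "hunter2"  # stand-in for a real API key

# An LLM response whose metadata an attacker steered via prompt injection:
message = {
    "content": "harmless-looking answer",
    "additional_kwargs": {"x": {"lc": 1, "type": "secret", "id": ["DEMO_SECRET"]}},
}

wire = json.dumps(message)           # the app serializes for streaming/logging
leaked = resolve(json.loads(wire))   # later deserialization resolves the marker
print(leaked["additional_kwargs"]["x"])  # -> hunter2
```

Note that the attacker never touches the environment directly: the payload rides along inside ordinary-looking message metadata and the application's own deserialization does the extraction.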
Organizations using LangChain in production must immediately upgrade langchain-core to version 1.2.5 or 0.3.81 and verify dependencies, including langchain-community. As a best practice, treat all LLM outputs as untrusted data, audit deserialization usage in streaming and logging operations, explicitly allowlist the objects permitted for serialization and deserialization, and disable secret resolution unless inputs are verified as trustworthy.
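The allowlist-and-deny-by-default posture can be illustrated in miniature. This is a toy sketch of the principle, not the LangChain API: the `ALLOWED_IDS` entry and the `harden()` helper are invented for illustration, with secret resolution off unless the caller explicitly opts in.

```python
import os

# Toy illustration of the recommended hardening (NOT the LangChain API):
# deny-by-default deserialization with an explicit allowlist, and secret
# resolution disabled unless the caller opts in for trusted input.

ALLOWED_IDS = {("langchain", "schema", "messages", "HumanMessage")}  # example entry

def harden(obj, *, secrets_from_env=False):
    """Reject any serialized-object marker that is not explicitly allowed."""
    if isinstance(obj, dict) and obj.get("lc") == 1:
        if obj.get("type") == "secret":
            if not secrets_from_env:
                raise PermissionError("secret resolution disabled")
            return os.environ.get(obj["id"][0])
        if tuple(obj.get("id", ())) not in ALLOWED_IDS:
            raise PermissionError(f"{obj.get('id')} not allowlisted")
    return obj

# Untrusted input carrying a secret-extraction payload is now rejected:
payload = {"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}
try:
    harden(payload)
except PermissionError as e:
    print("blocked:", e)  # -> blocked: secret resolution disabled
```

The patched langchain-core releases apply this philosophy with their new restrictive defaults; consult the library's documentation for the exact parameters available in your installed version.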