Critical remote code execution flaw in Ollama AI Server
Take action: If you are running an Ollama AI server, review its internet exposure immediately. Ideally, Ollama should sit behind a firewall and an authenticated reverse proxy or application layer. If it is directly reachable from the internet, patch IMMEDIATELY. If it is protected, still patch, but you have more time to plan the update.
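One quick way to gauge exposure is to ask a host for its Ollama version and compare it against the first patched release. The sketch below assumes Ollama's default port (11434) and its `/api/version` endpoint; the `probe` and `is_vulnerable` helpers are hypothetical names for illustration, not part of any official tooling.

```python
import json
import urllib.request

PATCHED = (0, 1, 34)  # first release containing the fix

def parse_version(v):
    # "0.1.33" -> (0, 1, 33); tolerate suffixes like "0.1.33-rc1"
    return tuple(int(part.split("-")[0]) for part in v.split(".")[:3])

def is_vulnerable(version):
    # Any version strictly below 0.1.34 is affected.
    return parse_version(version) < PATCHED

def probe(host, port=11434, timeout=3.0):
    """Return the version an Ollama server reports, or None if unreachable."""
    url = f"http://{host}:{port}/api/version"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read()).get("version")
    except OSError:
        return None
```

If `probe("your-host")` returns a version for which `is_vulnerable` is true from outside your network, treat the server as exposed and patch it first.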
Learn More
A security flaw, tracked as CVE-2024-37032 (CVSS score 9.1) and dubbed Probllama, has been reported in the Ollama open-source AI infrastructure tool.
Ollama is a widely used tool for running large language models (LLMs) and supports a range of model families, including Meta's Llama, Microsoft's Phi, and models from Mistral.
The issue was patched by the maintainers in version 0.1.34, released on May 7, 2024. Despite the patch, over 1,000 vulnerable instances remain exposed online.
The vulnerability is caused by insufficient validation on the server side of the REST API, which leads to remote code execution (RCE).
The flaw can be exploited via path traversal through the /api/pull API endpoint. An attacker sends a specially crafted HTTP request containing a malicious manifest file whose digest field carries a path traversal payload. This allows the attacker to overwrite arbitrary files on the server, which can be escalated to remote code execution. The issue is particularly severe in Docker installations, where the API server runs with root privileges and listens on 0.0.0.0 by default, making it reachable from the internet.
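Ollama itself is written in Go, so the following Python sketch illustrates the class of bug and its fix, not Ollama's actual code; the `BLOBS_DIR` path and function names are hypothetical. The core problem is that an attacker-controlled digest is joined into a filesystem path without validation, so a digest containing `..` segments resolves to a file outside the blob store.

```python
from pathlib import Path

# Hypothetical blob store location, for illustration only.
BLOBS_DIR = Path("/root/.ollama/models/blobs").resolve()

def blob_path_unsafe(digest):
    # Mirrors the bug class: the digest taken from an attacker-supplied
    # manifest is joined into the path unchecked, so a value like
    # "../../../../etc/ld.so.preload" escapes the blob store entirely.
    return (BLOBS_DIR / digest).resolve()

def blob_path_safe(digest):
    # The fix: resolve the path, then reject anything that escapes
    # the blob store directory.
    path = (BLOBS_DIR / digest).resolve()
    if not path.is_relative_to(BLOBS_DIR):  # Python 3.9+
        raise ValueError(f"path traversal attempt: {digest!r}")
    return path
```

With the unsafe variant, `blob_path_unsafe("../../../../etc/ld.so.preload")` resolves to a system file rather than a blob, which is how arbitrary file overwrite becomes possible.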
Any version of Ollama before 0.1.34 is vulnerable to this exploit.
Administrators are advised to update all Ollama instances to version 0.1.34 or later and to avoid exposing Ollama servers directly to the internet.