Meta and security researchers dispute severity of flaw in Meta's LLM framework: better to patch
Take action: If you are using Meta's llama-stack, update it now. Developers and security researchers disagree about the flaw's severity, but while they debate and you wait, attackers can exploit it. In the meantime, isolate llama-stack servers from untrusted networks, then roll out the patch systematically.
Learn More
A security vulnerability in Meta's Llama large language model framework is the subject of a severity dispute: security researchers consider it critical because it could enable arbitrary code execution on the llama-stack inference server.
The vulnerability is tracked as CVE-2024-50050 (Meta CVSS score: 6.3), though Snyk has rated it critical with a score of 9.3. It resides in the Llama Stack component, affecting the Python Inference API implementation, and stems from unsafe deserialization of Python objects using the pickle format in the reference implementation.
This could allow attackers to achieve remote code execution by sending maliciously crafted serialized objects to an exposed ZeroMQ socket.
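To illustrate why pickle deserialization of untrusted bytes is dangerous, here is a minimal, self-contained sketch (not llama-stack code; the `Malicious` class is invented for illustration). A pickled object can name a callable that `pickle` invokes the moment the bytes are loaded:

```python
import pickle

class Malicious:
    # __reduce__ lets a pickled object specify a callable and
    # arguments that pickle invokes at load time. An attacker
    # would use something like os.system; eval is used here so
    # the effect is visible without side effects.
    def __reduce__(self):
        return (eval, ("6 * 7",))

payload = pickle.dumps(Malicious())

# Merely deserializing the bytes runs the embedded call --
# the receiver never invokes any method on the object.
result = pickle.loads(payload)
print(result)  # -> 42
```

If an attacker can write such bytes to the socket, deserialization itself is the exploit; no further interaction with the loaded object is required.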
Meta addressed this vulnerability on October 10, 2024, with the release of version 0.0.41, switching from pickle to JSON format for socket communication. The fix was also implemented in the pyzmq library.
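The shape of the fix can be sketched as follows: JSON can encode only plain data (strings, numbers, lists, dicts), so deserializing attacker-supplied bytes yields inert data rather than executing code. The message fields below are illustrative, not the actual llama-stack wire format:

```python
import json

# Sender side: encode a request as data-only JSON.
request = {"op": "generate", "prompt": "hello"}
wire = json.dumps(request).encode("utf-8")

# Receiver side: json.loads can only ever produce basic Python
# types; unlike pickle, there is no hook for running a callable
# during deserialization.
decoded = json.loads(wire)
print(decoded == request)  # -> True
```

The receiver should still validate the decoded fields, but the deserialization step itself is no longer a code-execution vector.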
Additionally, a high-severity vulnerability was found in OpenAI's ChatGPT crawler that could enable DDoS attacks. The flaw existed in the "chatgpt.com/backend-api/attributions" API, where unlimited URLs could be submitted in a single request without proper validation, potentially overwhelming target websites through Microsoft Azure IP ranges where the ChatGPT crawler operates.
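A standard mitigation for that class of flaw is to bound and deduplicate the URL list before any fetching happens. The function name and limit below are hypothetical, a sketch of the general technique rather than OpenAI's actual fix:

```python
MAX_URLS_PER_REQUEST = 10  # hypothetical cap, not OpenAI's real value

def validate_urls(urls):
    """Collapse duplicates and reject oversized requests so a
    single API call cannot be amplified into a flood of fetches."""
    unique = list(dict.fromkeys(urls))  # dedupe, preserving order
    if len(unique) > MAX_URLS_PER_REQUEST:
        raise ValueError(f"at most {MAX_URLS_PER_REQUEST} URLs per request")
    return unique

# A request repeating one target collapses to a single fetch.
print(validate_urls(["https://example.com"] * 50))
```

With such a check in front of the crawler, a request listing the same victim thousands of times produces at most one outbound fetch, and a request listing many distinct victims is rejected outright.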
Users of llama-stack are advised to update to the latest version; however the severity debate resolves, a patched server is no longer exposed.