Incident

xAI employee leaked private API key on GitHub exposing internal LLMs

Take action: A clear example of why secrets should never reside in source code, even when the repository is private. Sooner or later the code becomes visible, whether by accident or through a decision by someone who has no idea what secrets are stored there.
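One common alternative is to load credentials from the environment at runtime and fail loudly when they are missing. This is a minimal sketch; the variable name `XAI_API_KEY` is purely illustrative, not an official xAI name:

```python
import os

def get_api_key() -> str:
    """Read the API key from the environment instead of hardcoding it.

    XAI_API_KEY is an illustrative name, not an official one. In practice
    the value would be injected by a secret manager or a local .env file
    that is listed in .gitignore.
    """
    key = os.environ.get("XAI_API_KEY")
    if not key:
        raise RuntimeError(
            "XAI_API_KEY is not set; configure it outside the repository"
        )
    return key
```

Failing fast on a missing key keeps a misconfigured deployment from silently running, and nothing sensitive ever enters version control.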


Learn More

An employee at Elon Musk's artificial intelligence company xAI inadvertently exposed a private API key on GitHub that remained accessible for approximately two months. This security lapse allowed unauthorized individuals to query private xAI large language models (LLMs) that appear to have been custom-developed for processing internal data from Musk's various companies, including SpaceX, Tesla, and Twitter/X.

The exposure was initially discovered and publicized by Philippe Caturegli, who serves as the "chief hacking officer" at security consultancy Seralys. After Caturegli posted about the leak on LinkedIn, researchers at GitGuardian—a company specializing in detecting exposed secrets in code repositories—conducted a deeper investigation into the incident.

According to Eric Fourrier from GitGuardian, the exposed API key provided access to several unreleased versions of Grok, the AI chatbot developed by xAI. Their analysis revealed that the compromised credentials could access approximately 60 fine-tuned and private language models. These included not only public Grok models but also what appeared to be unreleased development versions and private specialized models.

GitGuardian's investigation revealed that it had already alerted the xAI employee about the exposed API key on March 2, nearly two months before the broader disclosure. Despite this early warning, the key remained valid and usable until April 30, when GitGuardian contacted xAI's security team directly. Shortly after this notification, the repository containing the exposed key was removed from GitHub.

Some of the exposed models appeared to be fine-tuned using proprietary data from SpaceX and Tesla. Potential attackers could exploit this access for prompt injection attacks, model manipulation, or attempting to implant malicious code into the supply chain.

The security incident raises additional concerns given Elon Musk's involvement with the so-called Department of Government Efficiency (DOGE), which has reportedly been feeding sensitive government records into AI tools. According to reports from The Washington Post and other sources, DOGE has been deploying AI technologies across various government departments to analyze programs and spending.

The number of individuals who may have accessed these models during the two-month exposure window has not been disclosed. Similarly, no information is available regarding any financial impact or data extraction resulting from the incident.
