OpenAI internal communication systems breached, AI technology details stolen
Take action: Despite assurances that current AI technologies are secure, this incident shows that corporations are eager to process all of your data while being highly selective about transparency around their security practices, incidents, and remediations.
Learn More
OpenAI has experienced a breach of its internal messaging systems. According to a report from the New York Times, a hacker accessed and stole sensitive details from employee discussions on an internal online forum.
The hacker, whose identity remains undisclosed, managed to extract confidential information about the company's latest AI technologies from internal communications among OpenAI employees. The hacker apparently did not penetrate the more secure systems where OpenAI houses and builds its AI models and related data.
OpenAI executives informed their employees about the breach during an all-hands meeting in April 2023. The company also briefed its board of directors on the incident.
OpenAI chose not to publicly disclose the incident, citing the fact that no customer or partner information had been compromised. The executives determined that the breach did not pose a national security threat, attributing the attack to a private individual with no known connections to foreign governments or hacker groups. Based on that determination, federal law enforcement agencies were not notified about the breach.
No details have been disclosed about the attack itself or about the nature and volume of the data that was stolen.
The incident and the lack of transparency have sparked internal concerns at OpenAI. Some employees fear that foreign adversaries, such as China, could exploit similar vulnerabilities to steal AI technology that could potentially threaten U.S. national security. This sentiment was echoed by Leopold Aschenbrenner, a former technical program manager at OpenAI, who argued in a memo to the board that the company was not doing enough to protect its intellectual property from foreign threats. Aschenbrenner, who was later dismissed for leaking information, highlighted the need for stronger security measures to prevent future breaches.
It is also concerning that OpenAI did not share the incident with customers and the public, given that its security measures were evidently lacking. Customers and the public have no way of knowing whether security has since been improved, or whether the threat actor was a lone individual or part of a larger group with significant resources and multiple access channels.
Despite assurances that current AI technologies are secure, this incident shows that corporations are eager to process all of your data while remaining highly selective about transparency around their security practices, incidents, and remediations.