Flaw in ChatGPT API enables reflective DDoS attacks
Take action: There is little a website operator can do about this directly; the main defense is solid DDoS protection in front of your site.
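Beyond upstream DDoS protection, one target-side option is to rate-limit bursts that identify themselves as OpenAI crawlers. The sketch below is a hypothetical application-layer throttle, not a vendor-provided mitigation; the user-agent markers and limits are illustrative assumptions.

```python
# Hypothetical target-side throttle for bursts from OpenAI crawler
# user agents. Markers and thresholds are illustrative assumptions.
import time
from collections import deque

CRAWLER_MARKERS = ("ChatGPT-User", "GPTBot")  # assumed UA substrings

class CrawlerThrottle:
    """Allow at most `limit` crawler requests per `window` seconds."""

    def __init__(self, limit=10, window=1.0):
        self.limit = limit
        self.window = window
        self.hits = deque()  # timestamps of recent crawler requests

    def allow(self, user_agent, now=None):
        if not any(m in user_agent for m in CRAWLER_MARKERS):
            return True  # non-crawler traffic is never throttled here
        now = time.monotonic() if now is None else now
        # Drop hits that have aged out of the sliding window.
        while self.hits and now - self.hits[0] > self.window:
            self.hits.popleft()
        if len(self.hits) >= self.limit:
            return False  # crawler burst exceeds the configured rate
        self.hits.append(now)
        return True
```

A real deployment would key the counter per source IP or IP range rather than globally; this sketch only shows the sliding-window idea.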
Learn More
A security vulnerability has been reported in OpenAI's ChatGPT API that enables malicious actors to conduct reflective Distributed Denial of Service (DDoS) attacks against arbitrary websites. The vulnerability (no CVE assigned, CVSS score 8.6) exists in the API endpoint https://chatgpt.com/backend-api/attributions. The high CVSS score reflects that the attack is network-based and low-complexity, requires no privileges or user interaction, changes scope, and has high impact on availability.
The vulnerability stems from multiple security design flaws in the API's handling of HTTP POST requests:
- No validation of duplicate hyperlinks in the urls parameter
- Absence of maximum limit enforcement on hyperlink submissions
- Lack of connection rate limiting to the same domain
- No restrictions on duplicate requests from the ChatGPT crawler
- No throttling mechanisms for parallel connection attempts
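The first two gaps above are straightforward input-validation failures. The following sketch is a hypothetical server-side check for a urls parameter, not OpenAI's actual code; all limits and names are illustrative assumptions.

```python
# Hypothetical validation for a urls parameter, closing the gaps listed
# above: deduplicate links, cap list size, and cap per-domain fan-out.
# All limits are illustrative, not OpenAI's actual values.
from urllib.parse import urlparse

MAX_URLS = 10        # illustrative cap on hyperlink submissions
MAX_PER_DOMAIN = 2   # illustrative cap on links to a single domain

def validate_urls(urls):
    """Return a cleaned URL list, or raise ValueError on abusive input."""
    deduped = list(dict.fromkeys(urls))  # drop duplicate hyperlinks
    if len(deduped) > MAX_URLS:
        raise ValueError(f"too many URLs (max {MAX_URLS})")
    per_domain = {}
    for url in deduped:
        host = urlparse(url).hostname
        if host is None:
            raise ValueError(f"malformed URL: {url!r}")
        per_domain[host] = per_domain.get(host, 0) + 1
        if per_domain[host] > MAX_PER_DOMAIN:
            raise ValueError(f"too many URLs for domain {host}")
    return deduped
```

With checks like these, a request stuffed with thousands of links to one victim domain would be rejected before any crawler traffic is generated.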
When the API receives a request, OpenAI's servers (hosted on Microsoft Azure) initiate individual HTTP requests for each hyperlink in the urls parameter. The ChatGPT crawler, operating across multiple Azure IP ranges, sends these requests simultaneously. This behavior, combined with the lack of proper validation, enables attackers to trigger massive numbers of concurrent requests to target websites.
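The missing throttling described above can be expressed as a per-domain concurrency cap on the crawler side. This is a hedged design sketch using asyncio semaphores, assuming a hypothetical fetch pipeline; it is not OpenAI's implementation.

```python
# Hypothetical per-domain concurrency limit for a crawler: at most
# `max_concurrent` simultaneous fetches to any one host. The fetch
# callable is a stand-in; the limit value is illustrative.
import asyncio
from collections import defaultdict
from urllib.parse import urlparse

class DomainLimiter:
    def __init__(self, max_concurrent=2):
        # One semaphore per hostname, created lazily on first use.
        self._sems = defaultdict(lambda: asyncio.Semaphore(max_concurrent))

    async def fetch(self, url, do_fetch):
        host = urlparse(url).hostname or ""
        async with self._sems[host]:  # wait while the domain is saturated
            return await do_fetch(url)
```

Had the crawler enforced a cap like this, duplicate links to one domain would have been fetched a few at a time instead of as a simultaneous flood.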
The vulnerability affects system availability but does not compromise data confidentiality or integrity. A proof-of-concept demonstration showed that sending just 50 HTTP requests through OpenAI's servers generated a large number of simultaneous connection attempts from various Azure-based IP addresses to the target domain.
The flaw was discovered in early January 2025, and multiple disclosure attempts were made through BugCrowd, the OpenAI security team, GitHub repository reports, OpenAI's privacy and support channels, and Microsoft's security channels and forms.
As of January 10, 2025, no meaningful response has been received from either OpenAI or Microsoft, and no mitigation steps have been announced.
Update: as of January 21, a GitHub thread comment notes that the issue has been patched.
Neither OpenAI nor Microsoft has publicly acknowledged the security flaw or provided mitigation guidance.