Knowledge

Learning from the Microsoft key breach - Logs, Authentication Pentests, Key Protection

Take action: When building your product, treat access audit logs as a core product feature, manage your cryptographic key security, and test continuously - there is always someone who says "this is fine" hoping someone else will fix it. If you are a Microsoft Cloud customer and you didn't pay for the advanced license that includes audit logging, you can only hope that your data wasn't compromised.


Learn More

On the 13th of July, news broke that Chinese hackers known as Storm-0558 had stolen the Microsoft consumer signing key and used it to compromise email accounts in around 25 organizations, including some belonging to the US Government.

So, how bad is it?

On Friday, the 21st of July, Microsoft made an attempt to explain the cause of the breach. According to the company's post, the breach was the result of three exploited vulnerabilities in either its Exchange Online email service or Azure Active Directory. The attack exploited a now-patched zero-day validation issue in the GetAccessTokenForResource API, which allowed the hackers to forge signed access tokens and impersonate accounts within the targeted organizations.
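The class of flaw matters here: a token whose signature verifies is not automatically a token that should be trusted. The sketch below illustrates this with a hypothetical, simplified token scheme (the key registry, field names, and HMAC construction are illustrative assumptions, not Microsoft's actual implementation): a token signed with a valid consumer key must still be rejected by an enterprise service, because the key is not scoped to that audience. Skipping that scope check is the shape of the validation issue described above.

```python
import base64
import hashlib
import hmac
import json

# Hypothetical key registry: each signing key is scoped to the audience
# it may issue tokens for (consumer vs. enterprise). Names are illustrative.
KEYS = {
    "consumer-key-1": {"secret": b"consumer-secret", "scope": "consumer"},
    "enterprise-key-1": {"secret": b"enterprise-secret", "scope": "enterprise"},
}

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text: str) -> bytes:
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def sign_token(key_id: str, claims: dict) -> str:
    header = b64url(json.dumps({"kid": key_id}).encode())
    body = b64url(json.dumps(claims).encode())
    sig = hmac.new(KEYS[key_id]["secret"],
                   f"{header}.{body}".encode(), hashlib.sha256).hexdigest()
    return f"{header}.{body}.{sig}"

def validate_token(token: str, expected_scope: str) -> bool:
    header_b64, body_b64, sig = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    key = KEYS.get(header["kid"])
    if key is None:
        return False
    expected_sig = hmac.new(key["secret"],
                            f"{header_b64}.{body_b64}".encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected_sig):
        return False
    # The crucial check: a cryptographically valid signature is not enough.
    # The signing key must also be authorized for this audience. Omitting
    # this check is the class of validation flaw exploited in the breach.
    return key["scope"] == expected_scope

# A token signed with a consumer key verifies cryptographically,
# but an enterprise service must still reject it.
forged = sign_token("consumer-key-1", {"sub": "someone@example.gov"})
assert validate_token(forged, "consumer") is True
assert validate_token(forged, "enterprise") is False
```

The design point is that key identity and key authorization are separate questions; a validator that answers only the first will happily accept a forged cross-audience token.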

No really, how bad is it?

While Microsoft initially stated that only Exchange Online and Outlook were affected, exploiting Azure Active Directory (AAD) changes the risk profile for the much worse: AAD is the source of truth for all identity questions, and all applications are connected to it via Single Sign-On - log in once, and your authenticated credentials are valid for every application you have been granted access to.

Threat actors could exploit the compromised Microsoft consumer signing key to impersonate any account within any affected customer tenant or cloud-based Microsoft application. This encompassed managed Microsoft applications such as Outlook, SharePoint, OneDrive, and Teams, as well as customer applications that support "Login with Microsoft".

Finally, Microsoft is still unable to determine how the Chinese hackers stole the Microsoft consumer signing key.

When greed dictates product security, lawyers and PR crisis managers write incident reports

Microsoft's reporting is unclear about which organizations were impacted, and about whether it is even possible to tell if one's own organization was affected - bear in mind that a forged credential looks to the system like a valid user. Microsoft hasn't provided a clear mechanism for identifying the bad actors.

The only reliable mechanism for detecting the attackers so far is the login audit logs, which have until now been part of a "premium" pricing package - in the simplest terms, your own security audit logs are not available unless you pay extra per user.
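To make the detection idea concrete, here is a minimal sketch of what hunting through sign-in audit logs looks like. The record fields (`tenant_type`, `token_key_id`) and the key identifier are hypothetical stand-ins, not the actual Microsoft 365 log schema; the real hunt was for enterprise sign-ins authorized by tokens signed with the compromised consumer key.

```python
import json

# Illustrative identifier for the known-compromised signing key (assumption,
# not the real key ID published by Microsoft).
COMPROMISED_KEY_ID = "msa-consumer-key-2016"

def flag_suspicious_logins(log_lines):
    """Yield audit records where an enterprise sign-in was authorized by a
    token signed with the known-compromised consumer key."""
    for line in log_lines:
        record = json.loads(line)
        if (record.get("tenant_type") == "enterprise"
                and record.get("token_key_id") == COMPROMISED_KEY_ID):
            yield record

# Two hypothetical sign-in records: one normal, one using the bad key.
logs = [
    '{"user": "alice@corp.example", "tenant_type": "enterprise", "token_key_id": "aad-key-7"}',
    '{"user": "bob@corp.example", "tenant_type": "enterprise", "token_key_id": "msa-consumer-key-2016"}',
]
suspicious = list(flag_suspicious_logins(logs))
print([r["user"] for r in suspicious])  # -> ['bob@corp.example']
```

The point of the sketch is simple: this kind of query is only possible if you have the logs in the first place - which is exactly why gating them behind a premium tier is a problem.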

Another interesting aspect: while the breach obviously involved zero-day vulnerabilities in Microsoft cloud services, Microsoft's posts went out of their way to avoid the terms "vulnerability" and "zero-day." Instead, they used vague words like "issue," "error," and "flaw" to describe how nation-state hackers tracked the email accounts of some of their major customers.

Detection and remediation


In response to the security breach, Microsoft took several measures.

  1. They revoked all valid MSA (Microsoft account) signing keys to prevent the threat actors from accessing other compromised keys and generating new access tokens.
  2. Microsoft also moved the signing of newly generated access tokens to the key store used for their enterprise systems.
  3. Under pressure from CISA (Cybersecurity and Infrastructure Security Agency), Microsoft agreed to provide free access to cloud logging data to aid in the detection of similar breach attempts in the future.

Lessons that we can apply to our own products

Not much can really be done for Microsoft's customers who may have been compromised. You may be among them, and if you didn't pay for the advanced license that provides audit logging, you can only hope that your data wasn't compromised.

But there are clear product and process lessons to be learned and practiced:

  1. Enable access audit logging, and provide the logs to your customers. It's part of the core product, not a paid add-on.
  2. Establish good cryptographic key security processes and controls - generate keys inside an HSM and never let them leave it. An authentication signing key shouldn't be floating around. If a key must be exported, always export it encrypted, with at least three people required to decrypt it.
  3. Make it a habit for your security team, and a requirement for all penetration testers, to attempt exploits against your authentication systems.
  4. Implement security controls in multiple layers to avoid the "Swiss cheese" failure model, where small holes line up - for example, the extraction of the key exposing a vulnerability that was probably ignored as "low priority".
  5. When handling an incident, be transparent and learn from the mistakes. Vague legal and PR speak just makes people trust you less. That may be fine for a behemoth like Microsoft, but it's not fine for a smaller outfit whose customers can sue or simply leave.
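Lesson 2's "three people to decrypt" rule can be sketched with a simple 3-of-3 XOR secret split: each custodian holds one share, any subset of shares is statistically useless, and all three are required to reconstruct the key. This is a minimal illustration, not a production scheme - real deployments typically use Shamir's secret sharing so that k-of-n custodians (rather than all of them) can recover the key.

```python
import secrets

def split_key(key: bytes, shares: int = 3) -> list[bytes]:
    """Split a key into `shares` XOR shares; all of them are needed to
    recover it. The first shares are random; the last is the running XOR."""
    parts = [secrets.token_bytes(len(key)) for _ in range(shares - 1)]
    last = key
    for p in parts:
        last = bytes(a ^ b for a, b in zip(last, p))
    return parts + [last]

def recombine(parts: list[bytes]) -> bytes:
    """XOR all shares back together to recover the original key."""
    key = parts[0]
    for p in parts[1:]:
        key = bytes(a ^ b for a, b in zip(key, p))
    return key

# Stand-in for an exported signing key (illustrative, 32 random bytes).
signing_key = secrets.token_bytes(32)
shares = split_key(signing_key)

assert recombine(shares) == signing_key      # all three shares recover the key
assert recombine(shares[:2]) != signing_key  # any two shares reveal nothing
```

The design choice here is that no single share (or pair of shares) carries any information about the key - each missing share is a uniformly random mask - so a single compromised custodian does not become a single compromised key.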