For those who are not aware, or who want some context, Stack Exchange’s network for StackOverflow.com was breached between May 5th and May 11th, 2019. They initially provided notice on their blog on May 16th.
They then posted an update today.
My Thoughts on the Events
I find it fascinating to see how the security operations of other organizations play out. In the case of Stack Exchange, the threat actor, according to Mary Ferguson’s retelling of events, was able to do the following starting on May 5th, undetected:
- Gain access to the development tier of the network due to a bug in a build that was deployed
- Spend time exploring the environment
The attacker was able to spend 5 days roaming the development network without raising any alarms, poking around to see what was out there and what the ideal next move would be. This is the part of a breach every business is terrified of: an unauthorized entity gains access to your network and gets to hang out until they make a misstep noisy enough to trip a security system that the security team notices, if there is even something like that in place to alert the proper channels.
The misstep that got the attacker detected came on May 11th, when they escalated their access to production, which triggered an internal system to flag the activity and put Stack Exchange’s IR plan into action. While it may seem like a long time, it’s incredibly commendable that Stack Exchange was able to identify the threat only 6 days after the breach initially occurred. Oftentimes it takes months or years for an organization to notice an incident has occurred; sometimes they never notice, or only find out when an outside agency tells them there’s a problem on their network.
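A detection like this can be sketched very simply. The log format, account names, and rule below are my own invention for illustration, not anything Stack Exchange has described, but they show the basic idea: alert when an account that has only ever been seen in dev suddenly touches production.

```python
# Hypothetical sketch: flag audit-log entries where a dev-scoped account
# first appears in production. Field names and values are invented.

def find_escalations(audit_log):
    """Return entries where an account previously seen only in dev
    accesses production. Entries are assumed ordered by timestamp."""
    seen_in_dev = set()
    alerts = []
    for entry in audit_log:
        account, env = entry["account"], entry["environment"]
        if env == "dev":
            seen_in_dev.add(account)
        elif env == "production" and account in seen_in_dev:
            alerts.append(entry)  # dev account escalated to prod
    return alerts

log = [
    {"account": "svc-build", "environment": "dev", "action": "login"},
    {"account": "svc-build", "environment": "dev", "action": "browse"},
    {"account": "svc-build", "environment": "production", "action": "login"},
]
alerts = find_escalations(log)
print(alerts)  # the third entry trips the rule
```

A real deployment would run this kind of rule continuously against centralized audit logs and page the security team, rather than scanning a list in memory.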
- Securing development environments matters, especially if there is any possibility of them connecting to production.
This isn’t to say Stack Exchange doesn’t secure their dev environment, but it would appear there is room for improvement in their processes.
- PR can be done right. Stack Exchange, as far as we know today, did a pretty good job of letting the public know about the breach only a few days after they discovered the event. They provided enough clear detail for us to understand the answers to the key questions (what, when, where, how, and who was affected).
- Having an incident response strategy matters. It is clear that they knew what to do once the breach occurred, took it seriously, and jumped into action.
- Knowing their data. While we don’t have many details beyond the last bullet point in the actions they have taken, they didn’t take chances. They understood what data could be at risk given what they knew about the breach, and while they couldn’t confirm whether other parts of the system had been compromised, they still took action, recycling secrets and resetting passwords that could have been exposed.
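Recycling secrets at scale is usually automated. As a minimal sketch, assuming a hypothetical in-memory secret store (a real deployment would use Vault, AWS Secrets Manager, or similar), Python’s standard `secrets` module can generate the replacement values:

```python
import secrets

# Hypothetical in-memory secret store; the keys and interface are
# invented for illustration, not taken from any real system.
secret_store = {
    "db_password": "old-value-1",
    "api_signing_key": "old-value-2",
}

def rotate_all(store):
    """Replace every secret with a fresh random value and return the
    list of rotated keys, so dependent services can be redeployed."""
    rotated = []
    for name in store:
        store[name] = secrets.token_urlsafe(32)  # 256 bits of randomness
        rotated.append(name)
    return rotated

rotated = rotate_all(secret_store)
print(rotated)
```

The important part isn’t the generation step; it’s knowing which keys exist and which services depend on them, which is exactly the “knowing their data” point above.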
While no security organization is perfect, we can always strive to be better. In the public realm we only get a glimpse into how other organizations respond, or don’t.
It’s a good time for companies to reflect on how these incidents unfold and run the same scenario for their organization.
- What would you do if a bug was deployed to a publicly exposed dev environment and someone was able to exploit it and gain access?
- How far could they get before you knew something wasn’t right?
- How would you know how long they had been there?
- How did they gain access initially?
- What data did they have access to?
These questions, and tabletop exercises based on scenarios like this one, give your security organization a good way to learn what to do and where the gaps are. Every event is different and no two companies share the same risk profile, but everyone in the organization should know what their part in the plan is and how to execute it when the need arises.
Kudos to the folks at Stack Exchange for being able to detect, respond to, and communicate this event so quickly.