Cyberattacks happen constantly. Every day organizations are attacked online whether they realize it or not. Most of these attacks are passing affairs. The mere fact that systems are connected to the internet makes them a target of opportunity. For the most part, these attacks are non-events.
Security software, bugs in attack code, and updated applications stop most attacks. With 20 billion+ devices connected to the internet, it’s easy enough for the attacker to move on.
But every couple of weeks there is a big enough attack to draw headlines. You’ve seen a steady stream of them over the past few years. 10 million records here, thousands of systems there, and so on.
When we talk about these attacks, for most people, it’s an abstract discussion. It’s hard to visualize an abstract set of data that lives online somewhere.
Late Thursday, some systems on the Tribune Publishing network were inaccessible. This is not an uncommon experience for anyone working in a large organization.
Technology has brought about many wonders but reliability isn’t typically one of them. When a system is inaccessible, it’s not out of the question to first think, “Ugh, this isn’t working. Call IT.”
Support tickets are often the first place cyberattacks show up…in retrospect. All public signs in the Tribune Publishing attack point this way. Once support realized the extent of the issue and that it involved malware, the event—a support request—turned into an incident. This kicks off an incident response (IR) process.
It’s this process that the teams at Tribune Publishing are dealing with now.
“Who is behind the attack?” is the first question on everyone’s mind. It’s human nature—doubly so at a media organization—to want to understand the “who” and “why” as opposed to the “how”.
The reality is that for the incident response process, that’s a question that wastes time. The goal of the incident response process is to limit damage to the organization and to restore systems as fast as possible.
In that context, the response team only needs to roughly classify their attacker. Is the attacker:
- A low level cybercriminal who got lucky with an automated attack and has few resources to continue or sustain the attack?
- A cybercriminal intent on attacking a specific class of organization or systems?
- A cybercriminal targeting your organization?
Knowing which class of cybercriminal is behind the attack will help dictate the effort required in your response.
For a simple attack, your automated defences should take care of it. Even after an initial infection, a defence in depth strategy will isolate the attack and make recovery straightforward.
If the attack is part of a larger campaign (e.g., WannaCry, NotPetya, etc.), incident response is more complex but the same principles hold true. The third class of attacker—specifically targeting your organization—is what causes a change in the process. Now you are defending against an adversary who is actively changing their approach. That requires a completely different mindset compared to other responses.
Incident response processes generally follow six stages:

- Preparation
- Identification
- Containment
- Eradication
- Recovery
- Lessons learned
On paper the process looks simple. Preparation begins with teams gathering contact information and tools, and writing out—or better yet, automating—procedures.
Once an incident has started, teams work to identify affected systems and the type of attack. They then contain the attack to prevent it from spreading, and work to eradicate any trace of the attack.
Once the attack is over, the work shifts to recovering systems and data to restore functionality. Afterwards, an orderly review is conducted and lessons are shared about what worked and what didn’t.
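The on-paper flow above can be sketched as a simple sequence. This is an illustrative sketch only, not real IR tooling—the `Phase` enum and `run_on_paper` function are hypothetical names, and real response platforms track far more state (affected hosts, indicators, timelines, approvals):

```python
from enum import Enum, auto

class Phase(Enum):
    """The six phases described above, in their on-paper order."""
    PREPARATION = auto()
    IDENTIFICATION = auto()
    CONTAINMENT = auto()
    ERADICATION = auto()
    RECOVERY = auto()
    LESSONS_LEARNED = auto()

def run_on_paper(incident_id):
    """Walk an incident through each phase in sequence and return the
    order executed. Only the textbook flow is captured here."""
    executed = []
    for phase in Phase:
        # Placeholder for the real work of each phase, e.g. isolating
        # hosts during CONTAINMENT or restoring backups during RECOVERY.
        executed.append(phase.name)
    return executed

order = run_on_paper("IR-2018-001")
```

As the next section notes, real incidents rarely move through these phases in such a tidy straight line.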
Any incident responders reading this post can take a minute here to enjoy a good laugh. The next section slams everyone back to the harsh reality of IR.
The six phases of incident response look great on paper but when you’re faced with implementing them in the real world, things never work out so cleanly.
The majority of a response is spent stuck in a near endless loop: identifying new areas of compromise in order to contain the attack, hopefully allowing responders to eradicate any foothold and recover the affected systems.
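That loop can be sketched as follows. Everything here is hypothetical—`discover` stands in for whatever telemetry or forensics reveals additional compromised hosts, and is not a real API:

```python
def contain_and_eradicate(initial_hosts, discover):
    """Hypothetical sketch of the identification/containment loop:
    keep investigating until no new compromise is found."""
    contained = set()
    frontier = set(initial_hosts)
    while frontier:
        host = frontier.pop()
        contained.add(host)  # contain this host
        # Identification again: investigating one host often reveals more.
        frontier |= discover(host) - contained
    # Only once the loop empties can eradication and recovery finish.
    return contained

# Example: investigating the first host reveals a second compromised one.
links = {"web-01": {"db-01"}}
handled = contain_and_eradicate({"web-01"}, lambda h: links.get(h, set()))
```

The key point the sketch makes is the loop condition: recovery can’t safely begin while the frontier of newly discovered compromise keeps growing.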
This is what most organizations struggle with. The time spent preparing is often insufficient because it’s all theoretical. Combine that with the rapid pace of change on the network, and teams struggle to keep up during an active incident.
With an organization like Tribune Publishing, things are even more difficult. By its very nature, it’s a 24/7 business with a wide variety of users around the country. This means there are a lot of systems to consider and each hour of downtime has a very real and significant impact on the bottom line.
As the incident progresses, the response team will make critical decision after critical decision. Shutting down various internal services to protect them. Changing network structures to isolate malicious activity. And a host of other challenges will pop up during the incident.
It’s difficult, hard-driving work, made doubly so with the eyes of senior management, customers, and the general public looking on.
As a CISO or incident response team leader, you need to focus on the IR process, not on attribution. That’s why it’s worrisome to see early attribution during an incident.
In the Tribune Publishing attack, it was publicly reported that the attack came from outside of the United States. This led to speculation around motivation. It’s likely that statement was based on the malware reportedly found and simple IP address information.
Early in the IR process, evidence like this will be found. It’s easily accessible but also highly unreliable. Malware is often sold in the digital underground and IP addresses are easily spoofed or proxied. The response team knows this but pressure from higher up may demand some form of answer…whether or not it helps resolve the situation.
The team must stay focused on resolving the incident, not spend valuable time and energy getting sidetracked. Attribution has its place. It’s definitely not in the middle of the response to an incident.
The one hard truth of incident response is that nothing can substitute for experience. Given the—hopefully obvious—fact that you don’t actually want to be attacked, this leads to the concept of a game day or an active simulation.
Popular in cloud environments—AWS runs game days at its events—these exercises provide hands-on experience. Usually held for the operations team, they are of critical importance to the security team as well.
Security doesn’t operate in a vacuum, especially during an incident. Working with other teams during an incident is key. Practicing that way is a must. This type of work is a huge effort but one that will pay off significantly when an organization is attacked.
Tribune Publishing was hit by a cyberattack with real world impact. This level of visibility is a stark reminder of how challenging these situations can be. The most critical phase of incident response is the first one: preparation.
As a CISO or senior security team member, you need to prepare more than just the incident response plan. With a plan in hand, you need to get other teams on board and make it clear to senior management how this process works. Critical to success is making sure that management knows that the priority is recovery…not attribution.
Combine that with a lot of practice and when the next incident hits, you’ll have put your team in a reasonable position to respond and recover quickly.