From: Securosis Highlights – Defending Against DoS Attacks: The Process

Posted on 2012/10/16


A quick re-post from Securosis Highlights: https://securosis.com/blog/defending-against-dos-attacks-the-process

Defending Against DoS Attacks: The Process by (author unknown)

As we’ve mentioned throughout the series, a strong underlying process is your best defense against a Denial of Service (DoS) attack. The tactics will change, the attack volumes will increase, but if you don’t know what to do when your site goes down, it will be down for a while.

The good news is the DoS Defense process is a close relative of your general incident response process. We’ve already done a ton of research on the topic, so check out both our Incident Response Fundamentals series and our React Faster and Better paper. If your incident handling process isn’t where it needs to be, you should start there.

Building off of the IR process, think about what you need to do as a set of activities before, during, and after the attack:

  • Before: Before the attack you spend the time figuring out what will trigger your response, and set up persistent monitoring so you have both sufficient warning and enough information to identify the root cause of the attack. This must happen before the attack, because you only get one chance to collect that data: while things are happening. In the Before the Attack post we defined a three-step process for these activities: define, discover/baseline, and monitor.

  • During: How can you contain the damage as quickly as possible? By identifying the root cause accurately and remediating effectively. This involves identifying the attack (Trigger and Escalate), identifying and mobilizing the response team (Size Up), and then containing the damage in the heat of battle. The During the Attack post summarizes these steps.

  • After: Once the attack has been contained, focus shifts to restoring normal operations (Mop Up) and making sure it doesn’t happen again (Investigation and Analysis). This involves a forensics process and some introspection, as described in the After the Attack post.

Yet there are key differences when dealing with DoS, so let’s amend the process a bit. We’ve already talked about what needs to happen before the attack, in terms of controls and architectures to maintain availability in the face of DoS attacks. That may involve network-based approaches, or focusing on the application layer. Or more likely both.

Before we jump right into what needs to happen during the attack, let’s talk about the importance of practice. You practice your disaster recovery plan, right? You should practice your incident response plan as well, and even run a subset of that practice specific to DoS attacks. The time to discover the gaping holes in your process is not when the site is melting under a volumetric attack. That doesn’t mean you should blast yourself with 80 Gbps of traffic either. But practice the handoffs with the service provider, practice tuning the anti-DoS gear, and ultimately ensure everyone knows their roles and accountabilities during the real thing.

Trigger and Escalate

There are a number of ways you can detect that a DoS attack is happening. You could see increasing traffic volumes or a spike in DNS traffic. Perhaps your applications get a little flaky and fall down, or the servers show performance issues. You may get lucky and have your CDN alert you to the attack (you’ve set the CDN to alert on anomalous volumes, right?). Or more likely you’ll just lose your site. Increasingly these attacks come out of nowhere as a synchronized series of activities targeting your network, DNS, and applications. We’re big fans of setting thresholds and monitoring everything, but understand that a DoS attack is a bit different, in that you may not see it coming.
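To make the threshold idea a bit more concrete, here is a minimal sketch of rate-based alerting. It assumes you can pull a per-minute request rate from whatever metrics system you already run and page someone when it spikes; the callables are hypothetical placeholders, not any particular product’s API.

```python
# A minimal sketch of threshold-based alerting for traffic anomalies.
# The metric source and alerting hook are passed in as callables, since
# they depend entirely on your monitoring stack -- nothing here is tied
# to a specific product.
import statistics
import time
from typing import Callable, List

def monitor_request_rate(sample_rate: Callable[[], float],
                         alert: Callable[[str], None],
                         window: int = 60,
                         multiplier: float = 3.0,
                         interval_seconds: int = 60) -> None:
    """Alert when the current request rate spikes well above the recent baseline."""
    history: List[float] = []
    while True:
        rate = sample_rate()
        if len(history) >= 10:  # wait until there is a reasonable baseline
            baseline = statistics.median(history)
            if baseline > 0 and rate > multiplier * baseline:
                alert(f"Request rate {rate:.0f}/min is {rate / baseline:.1f}x "
                      f"the recent baseline of {baseline:.0f}/min")
        history.append(rate)
        del history[:-window]  # keep only the most recent window of samples
        time.sleep(interval_seconds)

# Example wiring (both functions are hypothetical placeholders):
#   monitor_request_rate(sample_rate=pull_rate_from_metrics, alert=page_oncall)
```

The same pattern applies to DNS query volumes or connection counts; the point is to compare against a recent baseline rather than a fixed number.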

Size Up

Now your site and/or servers are down, and basically all hell is breaking loose. You need to notify the powers that be, assemble the team, and establish responsibilities and accountabilities. You’ll also have your operations folks digging into the attack. They’ll need to identify the root cause, attack vectors, and adversaries, and figure out the best way to get the site back up.

Restore

At this point there will be quite a bit of variability in what comes next, depending on what network and application mitigations are in place. Optimally your contracted CDN and/or anti-DoS service provider already has a team working on the problem. If it’s an application attack, hopefully a little tuning of your anti-DoS appliance can block the attacks. But hope isn’t a strategy, so you’ll need a Plan B, which usually entails redirecting your traffic to a scrubbing center as we described in the Network Defenses post.

The biggest decision you’ll face is when to actually redirect the traffic. If the site is totally down, that decision is easy. If it’s an application performance issue (caused by either an application or a network attack), you’ll need more information to determine whether the redirection will even help. In many cases it will, since the service provider will then see the traffic, and with their greater expertise they can more effectively diagnose the issue. But remember there will be a lag while the network converges after the changes.
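If the redirection is done via DNS, that convergence lag is easy to watch for yourself. Below is a rough sketch, using the dnspython package, that polls a couple of public resolvers until they return the scrubbing center’s address; the hostname, resolver list, and address are made-up placeholders for illustration rather than anything tied to a real provider.

```python
# A rough sketch of watching for DNS convergence after redirecting a site to
# a scrubbing center. Requires the dnspython package (pip install dnspython).
# The hostname, scrubbing-center address, and resolver list are made-up
# placeholders for illustration only.
import time

import dns.resolver

SITE = "www.example.com"                   # hypothetical site under attack
SCRUBBING_CENTER_IP = "203.0.113.10"       # hypothetical scrubbing-center address
PUBLIC_RESOLVERS = ["8.8.8.8", "1.1.1.1"]  # well-known public resolvers to spot-check

def redirected(resolver_ip: str) -> bool:
    """Return True if this resolver already hands out the scrubbing-center address."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [resolver_ip]
    answers = resolver.resolve(SITE, "A")
    return any(record.address == SCRUBBING_CENTER_IP for record in answers)

if __name__ == "__main__":
    # Poll until every resolver we care about returns the new record.
    while not all(redirected(ip) for ip in PUBLIC_RESOLVERS):
        print("Still waiting for DNS to converge...")
        time.sleep(30)
    print("Redirection is visible at the public resolvers.")
```

Keeping the record’s TTL short ahead of time is what keeps this window manageable, which is yet another reason to handle the “before” steps before you’re under fire.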

Finally, there is the issue of targeted organizations that don’t currently contract with a scrubbing center. In that case, your best bet is to cold call an anti-DoS provider and hope they can help. To be clear, these folks are in the business of fighting DoS, so they’ll likely be able to help, but do you really want to take a chance on that? We didn’t think so, so it makes sense to at least have a conversation with an anti-DoS provider before you are attacked, if only to understand how they would help. Again, talking to a service provider doesn’t mean you have to contract for the service. It means you know who to call and what to do when under fire.

Mop Up

You’ve weathered the storm and your site is operating normally again. In terms of mopping up, you’ll move the traffic back from the scrubbing center and maybe loosen up the anti-DoS appliance/WAF rules. You’ll keep monitoring for more signs of trouble, but you’ll probably want to spend the next 2-3 days catching up on sleep.

Investigate and Analyze

Once you are well rested, don’t fall into the trap of turning the page and moving on. There are tons of lessons to be learned. What worked? What didn’t? Who needs to be added to the team? Who just got in the way? Remember, post-mortems must identify the good, the bad, and the ugly. Some folks may end up with bruised egos. Tell them to get over it. The sacred cows must be slain if you don’t want to relive the nightmare soon enough.

More importantly, dig into the attack. What controls would have dampened the impact? Would running all of your traffic through a CDN have helped? Did the network redirection work effectively? Did you get the proper level of support from the service provider? The more questions you ask, the better.

Then update the process as needed. Implement new controls if they’re warranted. Swap out your service provider if they didn’t get it done. If you aren’t learning from every attack, you’re missing an opportunity to improve your response the next time. And you know there will be a next time; there always is.

So with that we’ve finished up the Defending Against Denial of Service Attacks series. As always, we’ll be packaging up the series into white paper format, but you’ve still got time to weigh in with comments to the posts. What did you like? What didn’t match your experience? Help us make the research better.

And thanks for reading.

– Mike Rothman

Posted in: reading