If everyone is thinking the same, someone isn't thinking

Lori MacVittie




The DPS of a DDoS Has Doubled By @LMacVittie | @DevOpsSummit [#DevOps]

The question you need to ask yourself now is: has the capacity of your Internet pipes doubled in the past two years?

DPS, or damage per second, is a somewhat self-describing term for the amount of damage that can be dealt (by a single person or a group) in one second. It's typically used by players of online games such as World of Warcraft or Diablo. Not that we old-skool tabletop gamers don't calculate the amount of damage we can possibly do in a single round; we just don't have a cool-sounding abbreviation for it.

In any case, the measure of damage output per second uses terminology that's very similar to bandwidth utilization - burst and sustained. The former is how much damage can be dealt in a specified (usually short) period of time; the latter, well, over a more extended one. We use these same terms to describe the bandwidth usage patterns of protocols (FTP is bursty) as well as attacks. For DDoS the measurement isn't in points of damage per se, it's in bits. Gigabits, to be more precise.

According to Verizon's 2014 Data Breach Investigations Report (DBIR), the mean bandwidth of a DDoS attack in 2011 was 4.7 Gbps. In 2012 it was 7.0 Gbps. Last year it reached 10.0 Gbps.

Yes, you read that right. The DPS of a DDoS has doubled in less than two years. And that's the mean. The largest attacks have registered at over 300 Gbps.
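As a quick sanity check on that claim, the arithmetic on the DBIR figures above can be run directly (the year-to-Gbps numbers are the ones quoted in this post):

```python
# DBIR mean DDoS attack bandwidth by year, in Gbps (figures quoted above)
mean_gbps = {2011: 4.7, 2012: 7.0, 2013: 10.0}

# Overall growth from 2011 to 2013
growth = mean_gbps[2013] / mean_gbps[2011]
print(f"2011 -> 2013 growth factor: {growth:.2f}x")  # 2.13x - more than doubled

# Implied compound annual growth rate over the two-year span
cagr = growth ** (1 / 2) - 1
print(f"Implied annual growth rate: {cagr:.0%}")  # 46%
```

At roughly 46% compound annual growth, the mean attack more than doubles every two years - which is exactly the trajectory most organizations' Internet pipes are not on.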

The question you need to ask yourself now is: has the capacity of your Internet pipes doubled in the past two years? Can you handle 300+ Gbps slamming into your data center?

If it hasn't and you can't, then you might be wondering how you're going to withstand what is likely an inevitable attack. The most likely answer is that you're not, unless you provision more capacity now or move to a DDoS protection service and make such an attack Somebody Else's Problem.

Or you might consider a hybrid approach: keep the traditional on-premises protections that work just fine fending off most attacks, and keep in your back pocket a DDoS-protection-as-a-service option for those times when the DPS of an attack threatens to topple that infrastructure.

It's like cloud bursting, in that capacity is increased on-demand by using the capacity of a public service to augment existing capacity on-premise. In the case of DDoS protection, all traffic is rerouted to a service (on the backbone with lots and lots and lots of bandwidth and compute capacity) that scrubs the data, eliminating the bad and sending only the good on to the data center.
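The "burst" decision itself can be sketched as a simple threshold check. This is an illustrative sketch only - the capacity figure, threshold, and function names are assumptions for the example, not any particular product's API:

```python
# Minimal sketch of the hybrid "bursting" trigger. All names and numbers
# here are hypothetical; real deployments reroute via BGP/DNS changes.

ON_PREM_CAPACITY_GBPS = 10.0   # what the local pipes and mitigation can absorb
BURST_THRESHOLD = 0.8          # reroute before the pipe is fully saturated

def should_reroute_to_scrubbing(observed_gbps: float) -> bool:
    """Return True when inbound traffic threatens on-premises capacity,
    signaling that traffic should be sent to the cloud scrubbing service."""
    return observed_gbps >= ON_PREM_CAPACITY_GBPS * BURST_THRESHOLD

# A typical day: local protections handle it on their own.
print(should_reroute_to_scrubbing(3.2))   # False
# An attack ramping toward the 10 Gbps mean: burst to the service.
print(should_reroute_to_scrubbing(9.5))   # True
```

The point of the threshold sitting below full capacity is the same as in cloud bursting generally: the switchover has to happen before the on-premises pipe saturates, not after.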

The advantage of a hybrid, bursting-style approach is that it is more cost-effective than oversubscribing an on-premises option (e.g., provisioning fatter pipes to the Internet) while affording similar confidence in your ability to withstand a significant attack.

The DPS of a DDoS is likely to continue increasing with advances in compute and the growing consumer bandwidth available to attackers through various botnets and malware distributions. There's a point at which the cost of suffering through an occasional attack (including downtime and damage to reputation) just doesn't justify the extra investment in oversubscription on a 24x7x365 basis. But there are solutions that make it possible to be prepared for an onslaught of DDoS DPS without a lot of heavy lifting or breaking your budget.

A hybrid approach to DDoS protection a la F5 Silverline is one of those solutions.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
