
Lori MacVittie




Cloud Lets You Throw More Hardware at the Problem Faster

Will Help In Some Cases, But Not All

Hidden deep within an article on scalability was a fascinating insight. Once you read it, it makes sense, but because cloud computing focuses our attention on the logical (compute resources) rather than the physical (hardware), it’s easy to overlook.

“Cloud computing is actually making this problem a little bit worse,” states Leach [CTO of domain registrar Name.com], “because it is so easy just to throw hardware at the problem. But at the end of the day, you’ve still got to figure, ‘I shouldn’t have to have all this hardware when my site doesn’t get that much traffic.’”

The “problem” is scalability and/or performance. The solution is (but shouldn’t necessarily be) “more hardware.”

Now certainly you aren’t actually throwing more “hardware” at the problem, but when you consider what “more hardware” is intended to provide, you’ll probably quickly come to the conclusion that whether you’re provisioning hardware or virtual images, you’re doing the same thing. The way in which we’ve traditionally approached scale and performance problems is to throw more hardware at the problem in order to increase the compute resources available. The more compute resources available, the better the performance and the higher the capacity of the system. Both vertical and horizontal scalability strategies employ this technique; regardless of the scalability model you’re using (up or out), both increase the amount of compute resources available to the application.

As Leach points out, cloud computing exacerbates this problem by making it even easier to simply increase the amount of compute resources available by provisioning more or larger instances. Essentially the only thing that’s changed here is the time it takes to provision more resources: from weeks to minutes. The result is the same: more compute resources.
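
To put the “weeks to minutes” point in concrete terms, here is a minimal sketch of what “throwing more hardware at the problem” looks like in the cloud. It assumes an AWS Auto Scaling group named “web-asg” (the name is hypothetical) and already-configured boto3 credentials; the point is only that adding compute resources is now a one-line API call rather than a procurement cycle.

```python
# Sketch: scale out by bumping an Auto Scaling group's desired capacity.
# Assumes AWS credentials are configured and a group named "web-asg"
# exists (the name is hypothetical, for illustration only).
import boto3

autoscaling = boto3.client("autoscaling")

# Look up the group's current size.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["web-asg"]
)["AutoScalingGroups"][0]
current = group["DesiredCapacity"]

# "Throw more hardware at the problem": double the instance count.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",
    DesiredCapacity=current * 2,
    HonorCooldown=False,
)
print(f"Scaled out from {current} to {current * 2} instances.")
```

Nothing in that call asks whether more instances are actually the right fix, which is exactly Leach’s point.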


ISN’T THAT the BEST WAY TO ADDRESS SCALABILITY and PERFORMANCE?

Not necessarily. The reason the ease of provisioning can actually backfire on IT is that it doesn’t force us to change the way in which we look at the problem. We still treat capacity and performance issues as wholly dependent on the amount of processing power available to the application. We have blinders on that keep us from examining the periphery of the data center and the rest of the ecosystem in which that application is deployed, an ecosystem that may in fact be having an impact on capacity and performance.

Other factors that may be impacting performance and scalability:

  • Network performance and bandwidth
  • Performance and scalability of integrated systems (data stores, identity management, Web 2.0 integrated gadgets/widgets)
  • Logging processes
  • Too many components in the application delivery chain
  • Ratio of concurrent users to open TCP connections, typically associated with AJAX-based applications (see the sketch after this list)
  • Cowboy application architecture (doing everything in the application: acceleration, security, SSL, access control, rate limiting)
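
To make the connection-count factor concrete, here’s a back-of-envelope sketch; the numbers are assumptions chosen for illustration, not measurements from any particular application.

```python
# Back-of-envelope: concurrent users vs. open TCP connections.
# All numbers below are illustrative assumptions, not measurements.

concurrent_users = 5_000
connections_per_user = 4                # e.g. AJAX widgets polling over open connections
per_instance_connection_limit = 10_000  # assumed OS / app server ceiling

total_connections = concurrent_users * connections_per_user
instances_needed = -(-total_connections // per_instance_connection_limit)  # ceiling division

print(f"Open connections: {total_connections}")
print(f"Instances needed just to hold the connections: {instances_needed}")
# With these assumptions, 5,000 users demand 20,000 connections -- two
# instances' worth of connection capacity even if CPU sits mostly idle.
```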

Any one of these factors, or a combination thereof, can negatively impact the performance and scalability of an application. Simply throwing more hardware (resources) at the problem will certainly help in some cases, but not all. It behooves operations to find the root cause of performance and scalability issues and address it, rather than pushing the “easy” button and spinning up more (costly) instances.

Leveraging application delivery techniques such as offloading TCP connection management, centralizing SSL termination, employing intelligent compression, and applying application acceleration solutions where possible can dramatically improve the capacity and performance of a single instance without dramatically increasing the bottom line. A unified application delivery strategy can be employed across all applications, simultaneously improving performance and capacity while sharing its costs across all applications, which results in lower operational costs universally.
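
As a rough illustration of why offload can substitute for more instances, consider the arithmetic below; the percentages are assumptions chosen only to show the shape of the math, not benchmarks of any product.

```python
# Rough arithmetic: capacity reclaimed by offloading work to an
# application delivery tier. All figures are assumed for illustration.

requests_per_instance = 1_000   # assumed baseline capacity (requests/sec)

# Assumed share of per-request server work that can be offloaded:
ssl_overhead = 0.15             # SSL/TLS termination
tcp_overhead = 0.10             # TCP connection setup/teardown and management
compression_overhead = 0.05     # response compression

offloaded = ssl_overhead + tcp_overhead + compression_overhead
new_capacity = requests_per_instance / (1 - offloaded)

print(f"Capacity per instance after offload: {new_capacity:.0f} requests/sec")
# Under these assumptions each instance handles ~1,429 requests/sec instead
# of 1,000 -- roughly 30% fewer instances for the same load, with no change
# to the application itself.
```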


NICKEL and DIME

Until cloud computing is more mature in terms of its infrastructure service offerings, organizations will have little motivation to move away from the “throw more hardware at the problem” strategy of dealing with scalability and performance. Cloud computing does nothing right now to encourage a move away from such thinking, because at this point cloud computing models are based entirely on the premise that compute is cheap and it’s okay to overprovision if necessary. Certainly a single instance of an application is vastly less expensive to deploy in the cloud today, but in the case of cloud computing the whole is greater than the sum of its parts. Each instance adds up and up and up until, as the old adage goes, your budget has been nickel-and-dimed to death.
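
A quick cost sketch (the hourly rate and instance counts are hypothetical) shows how fast those individually cheap instances compound:

```python
# Sketch: how per-instance charges nickel-and-dime a budget over time.
# The hourly rate and instance counts are hypothetical.

hourly_rate = 0.20          # assumed cost per instance-hour (USD)
hours_per_month = 730       # average hours in a month

for instances in (2, 6, 12):
    monthly = instances * hourly_rate * hours_per_month
    print(f"{instances:>2} instances: ${monthly:,.2f}/month  (${monthly * 12:,.2f}/year)")
# Two instances run $292/month; an overprovisioned dozen runs $1,752/month,
# or roughly $21,000/year -- real money for capacity the site may not need.
```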

IT architects must seriously start considering the architecture of applications from a data center perspective. Architects need to evaluate all the factors that impact capacity and performance, whether those factors originate in the network, the application, or even the end point. Without a holistic view of application development and deployment, cloud computing will continue to promote more of the same problems that have always plagued applications in the data center.

The change brought about by cloud computing and virtualization cannot be simply technological; it must also be architectural. And that means IT, as technologists and architects, must change.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
