Lori MacVittie

The Goldfish Effect

When you combine virtualization with auto-scaling without implementing proper controls you run the risk of scaling yourself silly or worse – broke.

You virtualized your applications. You set up an architecture that supports auto-scaling (on-demand) to free up your operators. All is going well, until the end of the month.

Applications are failing. Not just one, but all of them. After hours of digging into operational dashboards and logs and monitoring consoles you find the problem: one of the applications, which experiences extremely heavy processing demands at the end of the month, has scaled itself out too far and too fast for its environment. One goldfish has gobbled up the food and has grown too large for its bowl.

It’s not as crazy an idea as it might sound at first. If you haven’t implemented the right policies in the right places in your shiny new on-demand architecture, you might just be allowing such a scenario to occur. Whether the trigger is unforeseen legitimate demand or a DoS-style attack, without the right limitations (policies) in place to ensure that an application has scaling boundaries, you might inadvertently cause a denial of service and outages for other applications by consuming the resources they need.
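
To make that concrete, here’s a minimal sketch (in Python, with invented names and thresholds rather than any particular product’s API) of the kind of boundary an auto-scaling policy ought to check before it spins up yet another instance:

from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    app_name: str
    min_instances: int = 1
    max_instances: int = 10          # the "bowl": a hard ceiling on growth
    max_share_of_pool: float = 0.25  # never claim more than 25% of shared capacity

def may_scale_out(policy: ScalingPolicy, app_instances: int, pool_capacity: int) -> bool:
    """Allow another instance only if it stays inside this application's boundaries."""
    if app_instances + 1 > policy.max_instances:
        return False
    return (app_instances + 1) / pool_capacity <= policy.max_share_of_pool

# The month-end batch job may grow, but never beyond its bowl.
month_end = ScalingPolicy("month-end-batch", max_instances=8, max_share_of_pool=0.4)
print(may_scale_out(month_end, app_instances=7, pool_capacity=20))  # True
print(may_scale_out(month_end, app_instances=8, pool_capacity=20))  # False: ceiling reached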

Automating provisioning and scalability is a Good Thing. It shifts the burden from people to technology, and it is often only through codifying the processes IT follows to scale an application in a more static, manual network that inefficiencies can be discovered and subsequently eliminated. But an easily missed variable in this equation is the set of limitations that were once imposed by physical containment. An application can only be scaled out as far as its physical containers, and no further. Virtualization breaks an application free from those physical limitations and allows it to ostensibly scale out across a larger pool of compute resources located in various physical nooks and crannies across the data center.

But when you virtualize resources you need to perform capacity planning in a new way. Capacity planning becomes less about physical resources and more about costs and priorities for processing. It becomes a concerted effort to strike a balance between applications in such a way that resources are used efficiently based on prioritization and criticality to the business rather than on what’s physically available. It becomes a matter of metering and budgets and factoring costs into the auto-scaling process.
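
Here’s a hedged sketch of what folding cost and priority into the scale-out decision might look like; the priority tiers, shares and dollar figures are purely illustrative:

# Hypothetical sketch: folding cost and business priority into the scale-out decision.
# The priority tiers, shares, and dollar figures are invented for illustration.
def approve_scale_out(priority: int, cost_per_instance_hour: float,
                      budget_remaining: float, hours_left_in_period: float) -> bool:
    """Approve another instance only if its projected cost fits the remaining budget,
    with higher-priority applications allowed to consume a larger share of it."""
    projected_cost = cost_per_instance_hour * hours_left_in_period
    allowed_share = {1: 0.8, 2: 0.5, 3: 0.2}.get(priority, 0.1)  # 1 = business-critical
    return projected_cost <= budget_remaining * allowed_share

print(approve_scale_out(priority=1, cost_per_instance_hour=0.50,
                        budget_remaining=40.0, hours_left_in_period=48))  # True: 24.00 <= 32.00
print(approve_scale_out(priority=3, cost_per_instance_hour=0.50,
                        budget_remaining=40.0, hours_left_in_period=48))  # False: 24.00 > 8.00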


A NEW KIND of NETWORK is REQUIRED

From a technical perspective this means you need to have strategic points of control at which such decisions are made and policies enforced. The system controlling provisioning in the auto-scaling process must take into consideration not only Application A and its resource requirements, but Applications B and C and D as well. It must have visibility into the total pool of resources available and be able to make decisions on scaling in real time. It must be able to view the data center from a holistic point of view, treating resources much as an operating system treats a CPU: scheduling discrete workloads based on a variety of parameters.
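
As a toy illustration only (the applications, weights and demands are invented), this is roughly what that holistic, scheduler-like view might look like, with one shared pool split across every application according to business weight rather than each application grabbing whatever it can:

# Toy illustration: one allocator weighing every application's demand against a
# single shared pool, the way a scheduler weighs workloads against a CPU.
def allocate(pool_capacity: int, demands: dict, weights: dict) -> dict:
    """Split capacity in proportion to business weight, capped at each app's demand."""
    allocation = {}
    remaining, pending = pool_capacity, dict(demands)
    while pending and remaining > 0:
        total_weight = sum(weights[app] for app in pending)
        satisfied = []
        for app, demand in pending.items():
            fair_share = int(remaining * weights[app] / total_weight)
            if demand <= fair_share:                 # demand fits: grant it in full
                allocation[app] = demand
                satisfied.append(app)
        if not satisfied:                            # everyone is constrained: grant fair shares
            for app in pending:
                allocation[app] = int(remaining * weights[app] / total_weight)
            break
        for app in satisfied:
            remaining -= pending.pop(app)
    return allocation

# Application A wants more than its share; B, C and D get their full demand first.
print(allocate(100, demands={"A": 70, "B": 20, "C": 15, "D": 10},
               weights={"A": 4, "B": 2, "C": 2, "D": 1}))
# {'B': 20, 'C': 15, 'D': 10, 'A': 55}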

From these variables can be derived the limitations that impose policy on applications and resource consumption. Those limitations must be enforced, but they must also be flexible. The same limitations that exist on the 5th of May are not necessarily applicable on the 31st of May, and they may be applicable only at certain times of the day. The orchestration of the data center is about balancing all the variables and ensuring that the limitations can be increased just as quickly as they can be decreased. A set of heuristics needs to be developed to take into consideration all the variables and solve the riddle: which application gets what resources, and for how long? How can we automatically adjust the network to meet the needs of all applications? We must be able to balance the performance needs of one application against the time-sensitive processing of another. Can we add an acceleration policy to one to reduce its resource consumption and give the freed resources to the other? Are the costs of applying the acceleration policy worth the benefit of meeting both SLAs? Can we degrade functionality in Application Z to reduce its consumption because Application B is experiencing unanticipated demand?
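
A small sketch of how such time-sensitive limitations might be expressed; the applications and calendar rules below are made up, but the point is that the ceiling is a function of the calendar and the clock, not a constant:

# Illustrative only: limits that flex with the calendar and the clock, so the
# ceiling enforced on the 31st of May is not the ceiling enforced on the 5th.
from datetime import datetime

def instance_ceiling(app: str, now: datetime) -> int:
    """Return the maximum instance count allowed for this application right now."""
    base = {"month-end-batch": 4, "storefront": 6}.get(app, 2)
    if app == "month-end-batch" and now.day >= 28:
        return base * 3                      # month-end close: let it grow
    if app == "storefront" and 9 <= now.hour < 18:
        return base * 2                      # business hours: favor the customer-facing app
    return base

print(instance_ceiling("month-end-batch", datetime(2010, 5, 31, 23, 0)))  # 12
print(instance_ceiling("month-end-batch", datetime(2010, 5, 5, 10, 0)))   # 4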

Once the decision is made, it must still be enforced, and that means collaboration and integration.

The orchestration or management system responsible for provisioning must be able to communicate with the infrastructure responsible for delivering those applications to enforce the decisions it has made. When one application is being scaled back, limitations on the number of instances or connections to the instances available should be communicated to the load balancing solution, in real-time, to ensure that the policy is enforced. When an application is being allowed to scale out by adding more instances the same communication must occur, and limitations must be increased or modified to match the new policy.
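
A hypothetical sketch of that collaboration; the endpoint, payload and addresses are invented and don’t reflect any particular load balancer’s API, but the shape of the exchange is the point:

# Hypothetical integration sketch: the orchestrator pushes its decision to the
# load balancer so the new limit is actually enforced on the data path.
# The endpoint, payload, and addresses below are invented for illustration.
import json
import urllib.request

def push_limits(lb_host: str, pool_name: str, max_connections: int, members: list) -> None:
    """Tell the load balancer which instances may serve and how many connections they get."""
    payload = json.dumps({
        "pool": pool_name,
        "max_connections": max_connections,   # cap traffic to the scaled-back application
        "members": members,                   # the instances currently allowed to serve
    }).encode("utf-8")
    request = urllib.request.Request(
        f"https://{lb_host}/api/pools/{pool_name}/limits",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(request) as response:
        response.read()  # a real system would check the status and retry on failure

# Example: the month-end batch application has been scaled back to two instances.
# push_limits("lb.example.com", "month-end-batch", 500,
#             ["10.0.0.11:8080", "10.0.0.12:8080"])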

A new kind of network is needed to support this kind of dynamism: a dynamic infrastructure, a connected infrastructure, a collaborative and interactive infrastructure. An integrated infrastructure.

Hat tip to Brenda Michelson of Elemental Links for offering up the goldfish analogy during a recent Twitter conversation and James Urquhart for his clarity of thought on the subject.
