Lori MacVittie

A Hardware Platform and a Virtual Appliance Walk into a Bar at Interop...

Architecture as Important as Products

Invariably when new technology is introduced it causes an upheaval. When that technology has the power to change the way in which we architect networks and application infrastructure, it can be disruptive but beneficial. When that technology simultaneously requires that you abandon advances and best practices in architecture in order to realize those benefits, that’s not acceptable.

Virtualization at the server level is disruptive, but in a good way. It forces organizations to reconsider the applications deployed in their data center and to turn a critical eye toward the resources available and how they’re partitioned across applications, projects, and departments. It creates an environment in which the very make-up of the data center can be re-examined with the goal of making the network, storage, and application network infrastructure over which those applications are delivered more efficient.

Virtualization at the network layer is even more disruptive. From a network infrastructure perspective there are few changes required in the underlying infrastructure to support server virtualization, because the application and its behavior don’t really change when moving from a physical deployment to a virtual one. But the network, ah, the network does require changes when it moves from a physical to a virtual form factor. The way in which scale, fault-tolerance, and availability of the network infrastructure – from storage to application delivery network – are achieved is impacted by the simple change from physical to virtual. In some cases this impact is a positive one; in others, it’s not so positive. Understanding how to take advantage of virtual network appliances such that core network characteristics like fault-tolerance, reliability, and security are not negatively impacted is one of the key factors in the successful adoption of virtual network technology.

Combining virtualization of “the data center network” with the deployment of applications in a public cloud computing environment brings to the fore the core issues of lack of control and visibility in externalized environments. While the benefits of public cloud computing are undeniable (though perhaps not nearly as world-shaking as some would have us believe), the inclusion of externally controlled environments in the organization’s data center strategy will prove to have its challenges.

Many of these challenges can be addressed thanks to the virtualization of the network (despite the lack of choice and dearth of services available in today’s cloud computing offerings).


IT’S ALL ABOUT the ARCHITECTURE

In order to successfully overcome these challenges, it may be necessary to employ a strategy that combines both physical and virtual network appliances. For example, consistent and predictable application of security, availability, and performance policies across two disparate data centers (the organization’s and the cloud provider’s) is nearly impossible without a strategy that employs virtualized versions of the network and application delivery network infrastructure already relied upon in the data center, as sketched below.
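As a rough illustration, and not any vendor’s actual API (the Appliance class, device names, and policy keys below are hypothetical stand-ins), the sketch defines a single availability, security, and performance policy and pushes the identical definition to a hardware platform in the data center and to a virtual appliance running at the cloud provider:

```python
# Hypothetical sketch: author the policy once, apply it to both form factors.
POLICY = {
    "availability": {"monitor": "http", "interval_s": 5, "max_failures": 3},
    "security": {"min_tls": "1.2", "reject_malformed_http": True},
    "performance": {"compression": True, "connection_reuse": True},
}

class Appliance:
    """Stand-in for a management client; a real deployment would call a device API."""
    def __init__(self, name: str, form_factor: str):
        self.name = name
        self.form_factor = form_factor  # "physical" or "virtual"

    def apply_policy(self, policy: dict) -> None:
        # Placeholder for the actual management call (REST, SSH, etc.).
        print(f"[{self.form_factor}] {self.name}: applied {sorted(policy)}")

fleet = [
    Appliance("adc-dc-01", "physical"),          # hardware platform, on premises
    Appliance("adc-cloud-east-01", "virtual"),   # virtual appliance at the provider
]

for appliance in fleet:
    appliance.apply_policy(POLICY)  # same policy, both environments
```

The point of the exercise is that the policy is authored and versioned once; only the form factor of the device it lands on differs, which is what makes behavior across the two data centers consistent and predictable.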

The ability to leverage a common application delivery platform from the data center to remote cloud computing environments is a powerful one; it enables the continued use of application delivery platforms to provide much of the visibility, security, and scalability that is simply not available from cloud computing providers today. Simple options such as application-aware monitoring and enforcement of protocol-level security are not inherently a part of cloud computing offerings. The ability to deploy alternative scalability solutions in these environments that can be managed and controlled as part of the data center infrastructure ensures flexibility of deployment options without sacrificing core application delivery requirements.
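To make “application-aware monitoring” concrete, here is a minimal sketch in which the /healthz URL and the expected response marker are assumptions for illustration: a health check that inspects the HTTP response itself rather than merely confirming that a TCP port answers, which is roughly the capability an application delivery platform brings with it when deployed as a virtual appliance in a cloud environment.

```python
# Minimal application-aware health check: the instance is only considered
# healthy when the response status and body look correct, not just reachable.
from urllib.request import urlopen
from urllib.error import URLError

def app_is_healthy(url: str, expected_marker: str, timeout: float = 3.0) -> bool:
    try:
        with urlopen(url, timeout=timeout) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
            return resp.status == 200 and expected_marker in body
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    # Hypothetical endpoint and marker -- substitute the application's own.
    if app_is_healthy("http://app.example.com/healthz", "status: ok"):
        print("healthy: keep in rotation")
    else:
        print("unhealthy: remove from rotation")
```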

The ability to deploy these solutions is the easy part. The ability to deploy such solutions in a strategic way that makes sense, however, is not. That requires an architecture based on leveraging both physical and virtual form factors, one that maximizes the strengths of each while minimizing the weaknesses.

That’s why this year at Interop the focus is not so much on new products (though there’ll be plenty of those to ooh and aah over, absolutely) but on architectures, on best practices for deploying existing solutions that have been virtualized to enable organizations to make the most of emerging data center models like cloud computing.

 

 



More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.