If everyone is thinking the same, someone isn't thinking

Lori MacVittie




At the Intersection of Cloud and Control...

Arises the fourth data center architecture tier – application delivery

The battle of efficiency versus economy continues in the division of the cloud market between public and private environments. Public cloud proponents argue, correctly, that private cloud simply does not offer the same economy of scale as public cloud.

But that only matters if economy of scale is more important than the efficiency gains realized through any kind of cloud computing implementation. Cloud for most organizations has been recognized as transformational not necessarily in where the data center lives, but rather in how the data center operates. Private cloud is desired for its ability to transform the operational model of IT from its long reign of static, inefficient architectures to a more dynamic and ultimately efficient architectural model, one able to more rapidly adapt to new, well, everything.

[Figure: IDC data on public versus private cloud adoption]

In many respects the transformative power of cloud computing within the enterprise is not focused on the cost savings but rather on the efficiencies that can be realized through the service-focused design; through the creation of a new “virtual” tier of control in the data center that enables the flexibility of cloud inside the data center.

That tier is necessary to ensure things like SLAs between IT and business organizations – SLAs that are, as Bernard Golden recently pointed out in “Cloud Computing and the Truth About SLAs”, nearly useless in public cloud. There are no guarantees on the Internet; it is a public, commoditized medium designed to tolerate failure, not to guarantee performance or even the uptime of individual nodes. That, as has always been the case, is the purview of those responsible for maintaining the node, i.e. IT.

It’s no surprise, then, that public providers are fairly laid back when it comes to SLAs and that they provide few, if any, tools through which performance and uptime can be guaranteed. It’s not because they can’t – the technology to do so certainly exists, and many organizations use it today in their own data centers – but because the investment required would end up passed on to consumers, many of whom simply aren’t willing to pay to ensure SLAs that, today at least, are not relevant to their organization. Test and development, for example, requires no SLAs. Many startups, while desiring 100% uptime and fantabulous performance, do not (yet) have the impetus to fork over additional cents per instance per hour per megabit transferred to enforce any kind of performance or availability guarantee.

But existing organizations, driven by business requirements and the increasing pressure to “add value”, do have an impetus to ensure performance and uptime. Seconds count in business, and every second of delay – whether from poor performance or downtime – can rack up a hefty bill in lost productivity, lost customers, and lost revenue. Thus, while the SLA may be virtually useless in the public cloud – both in its ability to compensate those impacted by an outage or poor performance and in providers’ ability to enforce and meet SLAs to enterprise-class specifications – SLAs remain important. Important enough, in fact, that many organizations are, as anticipated, turning to private cloud to reap the benefits of both worlds: cloud and control.


And thus we are seeing the emergence of a fourth tier within the data center architecture: a flexible tier in which those aspects of delivering applications – security, performance, and availability – are addressed. This tier is a necessary evolution in data center architecture because as cloud transforms the traditional server (application) tiers into mobile, virtualized containers, it abstracts the application and the servers from the infrastructure, leaving them bereft of the ability to easily integrate with the infrastructure and systems typically used to provide these functions. The topology of a virtualized application infrastructure is necessarily transient and, in order to be more easily developed and deployed, those applications are increasingly relying on external services to provide security, access management, and performance-related functionality.

The insertion of a fourth tier in the architecture affords IT architects and operations the ability to easily manage these services and provide them in an application specific way to the virtualized application infrastructure. It has the added advantage of presenting a unified, consistent interface to the consumer – internal or external – that insulates them from failure as well as changes in service location. This is increasingly important as applications and infrastructure become more mobile and move from not only server to server but data center to data center and cloud to cloud. Insulating the consumers of applications and services is critical to ensuring a consistent experience and to enforcing SLAs.
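To make that insulation concrete, here is a minimal sketch – all names hypothetical, not a reference to any particular product – of a delivery tier that maps stable application names to whatever backends currently serve them. Consumers resolve by name, so an application can move from server to server, or data center to cloud, without the consumer-facing interface ever changing:

```python
class ApplicationDeliveryTier:
    """Maps stable application names to the backends currently serving them."""

    def __init__(self):
        self._backends = {}  # app name -> list of (host, port) currently serving it

    def register(self, app, host, port):
        """A new instance of the application comes online, wherever it lives."""
        self._backends.setdefault(app, []).append((host, port))

    def deregister(self, app, host, port):
        """An instance is retired; consumers are unaffected."""
        self._backends[app].remove((host, port))

    def resolve(self, app):
        """Consumers ask for the application by name, never by address."""
        backends = self._backends.get(app)
        if not backends:
            raise LookupError(f"no backends available for {app!r}")
        return backends[0]


# A workload migrates from the data center to a cloud instance; the
# consumer-facing name "billing" never changes.
tier = ApplicationDeliveryTier()
tier.register("billing", "10.0.1.15", 8443)    # original on-premises location
tier.register("billing", "172.16.9.8", 8443)   # instance spun up in the cloud
tier.deregister("billing", "10.0.1.15", 8443)  # original instance retired
print(tier.resolve("billing"))                  # → ('172.16.9.8', 8443)
```

The design choice this illustrates is simply a level of indirection owned by operations: because the mapping lives in the tier rather than in the consumer, relocation is an operational event, not an application change.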

Consider the simple case of accessing an application. Many access control strategies are topologically constrained, either in implementation or in integration with applications, making them challenging to implement in a dynamic environment. Leveraging an application delivery tier, which focuses on managing applications rather than IP addresses or servers, enables an access management strategy able to deal with changing topology and locations without disruption. This is a more service-focused approach that melds well with the service-oriented design of modern, cloud-based data centers and architectures. The alternative is to return to an agent-based approach, which has its own challenges and has already been tried and, for the most part, rejected as a viable long-term strategy. Unfortunately, cloud computing is driving us back toward this approach and, while effective in addressing many of the current gaps in cloud computing services, it fractures operations and increases operational investment, as two very disconnected sets of management frameworks and processes must be managed simultaneously.
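As a sketch of what “managing applications, not IP addresses” can look like for access control, the hypothetical policy below is keyed entirely to application names and roles. Because no server address or subnet appears anywhere in it, relocating an application – server to server, data center to cloud – changes nothing:

```python
# Hypothetical access policy enforced at the delivery tier: keyed to the
# application, not to its current network location.
ACCESS_POLICY = {
    "billing":   {"finance", "admin"},
    "reporting": {"finance", "admin", "analyst"},
}

def is_authorized(app: str, user_roles: set) -> bool:
    """True if any of the user's roles is permitted for the named application."""
    allowed = ACCESS_POLICY.get(app, set())
    return bool(allowed & user_roles)

print(is_authorized("billing", {"analyst"}))          # → False
print(is_authorized("reporting", {"analyst", "qa"}))  # → True
```

A topologically constrained equivalent would instead key policy to addresses or subnets, and would have to be rewritten every time the application moved – exactly the disruption the application delivery tier avoids.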

An effective application delivery tier, on the other hand, unifies operations while providing the services necessary across multiple environments. This means consistent processes and policies can be applied to applications regardless of location, making it possible to ensure governance and meet business-required SLAs.

This level of control is necessary for enterprise-class services, no matter where those services may actually be deployed. That public providers do not – and indeed today cannot – support enterprise-class SLAs is no surprise; partly because of this, neither should the data showing enterprises gravitating toward private cloud be surprising.

The right data center architecture can deliver both the flexibility and operational benefits of cloud computing and the performance and availability guarantees the business requires.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.