If everyone is thinking the same, someone isn't thinking

Lori MacVittie



Data Center Feng Shui: SSL

The Choice Between Hardware and Virtual Server Is Not Mutually Exclusive

Like most architectural decisions, the choice between hardware and virtual server is not mutually exclusive.

The argument goes a little something like this: The increases in raw compute power available in general purpose hardware eliminate the need for purpose-built hardware. After all, if general purpose hardware can sustain the same performance for SSL as purpose-built (specialized) hardware, why pay for the purpose-built hardware? Therefore, ergo, and thusly it doesn’t make sense to purchase a hardware solution when all you really need is the software, so you should just acquire and deploy a virtual network appliance.

The argument, which at first appears to be a sound one, completely ignores the fact that the same increases in raw compute power for general purpose hardware are also applicable to purpose-built hardware and the specialized hardware cards that provide acceleration of specific functions like compression and RSA operations (SSL). But for the purposes of this argument we’ll assume that performance, in terms of RSA operations per second, is about equal between the two options.
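To get a feel for why RSA operations per second are the relevant metric, the sketch below times 2048-bit modular exponentiation – the operation that dominates an RSA private-key step – using only the Python standard library. All sizes and iteration counts are illustrative assumptions, not a claim about any particular appliance or stack:

```python
import secrets
import time

# Illustrative micro-benchmark: RSA private-key operations are dominated by
# modular exponentiation, so timing a 2048-bit pow() gives a rough feel for
# the per-handshake CPU cost on general purpose hardware. (No CRT shortcut,
# no crypto accelerator -- deliberate simplifying assumptions.)
BITS = 2048
modulus = secrets.randbits(BITS) | (1 << (BITS - 1)) | 1   # odd, full-width
exponent = secrets.randbits(BITS) | (1 << (BITS - 1))      # full-width exponent
base = secrets.randbits(BITS) % modulus

ITERATIONS = 25
start = time.perf_counter()
for _ in range(ITERATIONS):
    pow(base, exponent, modulus)
elapsed = time.perf_counter() - start

print(f"~{ITERATIONS / elapsed:.0f} 2048-bit modular exponentiations/sec")
```

Real SSL stacks use the CRT optimization and hand-tuned assembly, so production numbers will be higher; the point is only that each operation burns measurable general purpose CPU time that an accelerator card would otherwise absorb.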

That still leaves two very good situations in which a virtualized solution is not a good choice.
For many industries, federal government, banking, and financial services among the most common, SSL is a requirement – even internal to the organization. These industries also tend to fall under the requirement that the solution providing SSL be FIPS 140-2 compliant at Security Level 2 or higher. If you aren’t familiar with FIPS 140-2 or the different security “levels” it specifies, then let me sum up: Security Level 2 requires physical security beyond Level 1, which demands only that hardware components be “production grade” – a requirement we assume is covered by the general purpose hardware deployed by cloud providers.

Security Level 2 improves upon the physical security mechanisms of a Security Level 1 cryptographic module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access.

-- FIPS 140-2, Wikipedia

FIPS 140-2 requires specific physical security mechanisms to ensure the security of the cryptographic keys used in all SSL (RSA) operations. The private and public keys used in SSL, and their related certificates, are essentially the “keys to the kingdom”. The loss of such keys is considered quite the disaster because they can be used to (a) decrypt sensitive conversations/transactions in flight and (b) masquerade as the provider by using the keys and certificates to make more authentic phishing sites. More recently, keys and certificates – PKI (Public Key Infrastructure) – have been an integral component of DNSSEC (DNS Security), a means to prevent the DNS cache poisoning and hijacking that has bitten several well-known organizations in the past two years.

Obviously you have no way of ensuring or even knowing whether the general purpose compute upon which you are deploying a virtual network appliance has the proper security mechanisms necessary to meet FIPS 140-2 Level 2 compliance. Therefore, ergo, and thusly if FIPS 140-2 Level 2 or higher compliance is a requirement for your organization or application, then you really don’t have the option to “go virtual” because such solutions cannot meet the physical security requirements.



A second consideration, assuming performance and sustainable SSL (RSA) operations are equivalent, is the resource utilization required to sustain that level of performance. One of the advantages of purpose-built hardware that incorporates cryptographic acceleration cards is that it’s like being able to dedicate CPU and memory resources just for cryptographic functions. You’re essentially getting an extra CPU; it’s just that the extra CPU is automatically dedicated to and used for cryptographic functions. That means the general purpose compute available for TCP connection management and the application of other security and performance-related policies is not consumed performing the cryptographic functions. The utilization of general purpose CPU and memory necessary to sustain X rate of encryption and decryption will be lower on purpose-built hardware than on its virtualized counterpart.

That means while a virtual network appliance can certainly sustain the same number of cryptographic transactions, it may not (and likely won’t) be able to do much else. And the higher the utilization, the bigger the impact on performance in terms of latency introduced into the overall response time of the application.

You can generally think of cryptographic acceleration as “dedicated compute resources for cryptography.” That’s oversimplifying a bit, but when you distill the internal architecture and how tasks are actually assigned at the operating system level, it’s an accurate, if abstracted, description.

Because the virtual network appliance must leverage general purpose compute for what are computationally expensive and intense operations, that means there will be less general purpose compute for other tasks, thereby lowering the overall capacity of the virtualized solution. That means in the end the costs to deploy and run the application are going to be higher in OPEX than CAPEX, while the purpose-built solution will be higher in CAPEX than in OPEX – assuming equivalent general purpose compute between the virtual network appliance and the purpose-built hardware.
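A back-of-envelope capacity model makes the trade-off concrete. Every figure below is invented for illustration (per-handshake CPU costs vary wildly with hardware, key size, and stack); the structure of the arithmetic is the point:

```python
# Hypothetical per-second CPU budget and per-handshake costs, in microseconds.
# These numbers are illustrative assumptions, not measurements.
CPU_BUDGET_US = 1_000_000   # one core, one second
CRYPTO_COST_US = 2_000      # RSA handshake performed in software
OTHER_COST_US = 200         # TCP handling, policy evaluation, etc.

def max_handshakes_per_sec(crypto_offloaded: bool) -> int:
    """Handshakes/sec one core can sustain under this toy model."""
    per_handshake = OTHER_COST_US + (0 if crypto_offloaded else CRYPTO_COST_US)
    return CPU_BUDGET_US // per_handshake

print("software crypto:", max_handshakes_per_sec(False))   # 454
print("offloaded crypto:", max_handshakes_per_sec(True))   # 5000
```

Both configurations “sustain SSL”, but the software-crypto instance has spent most of its CPU budget before it does anything else – exactly the utilization gap described above.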


Can you achieve the same performance gains by running a virtual network appliance on general purpose compute hardware augmented by a cryptographic acceleration module? Probably, but that assumes the cryptographic module is one the virtual network appliance recognizes and can support via hardware drivers. Part of the “fun” of cloud computing and leased compute resources is that the underlying hardware isn’t supposed to be a factor and can vary from cloud to cloud – even from machine to machine within a single cloud environment. So while you could achieve many of the same performance gains if the cryptographic module were installed on the general purpose hardware (in fact, that’s how we used to do it, back in the day), it would complicate the provisioning and management of the cloud computing environment, which would likely raise the cost per transaction – defeating one of the purposes of moving to cloud in the first place.
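You can see the “varies from machine to machine” problem directly. The sketch below is Linux/x86-specific – the flag names and the /proc/cpuinfo layout are platform assumptions – and simply reports which crypto-related CPU features the current instance happens to expose, a property a cloud tenant generally cannot pin down in advance:

```python
from pathlib import Path

# Crypto-adjacent x86 feature flags (an illustrative, non-exhaustive set).
WANTED = {"aes", "pclmulqdq", "sha_ni", "avx2"}

def crypto_flags(cpuinfo_text: str) -> set:
    """Return which WANTED flags appear in /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return WANTED & set(line.split(":", 1)[1].split())
    return set()

try:
    print(crypto_flags(Path("/proc/cpuinfo").read_text()))
except (FileNotFoundError, OSError):
    print("no /proc/cpuinfo on this platform")
```

Instruction-set extensions like AES-NI help the symmetric side of SSL rather than the RSA handshake itself, but the same uncertainty applies to any accelerator card a virtual appliance’s drivers would need to bind to.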

If you don’t need FIPS 140-2 or higher compliance, if performance and capacity (and therefore costs) are not a factor, if you’re simply using the virtualized network appliance as part of test or QA efforts or a proof of concept, certainly – go virtual. If you’re building a hybrid cloud implementation you’re likely to need a hybrid application delivery architecture. The hardware components provide the fault tolerance and reliability required while virtual components offer the means by which corporate policies and application-specific behavior can be deployed to external cloud environments with relative ease. Cookie security and concerns over privacy may require encrypted connections regardless of application location, and in the case where physical deployment is not possible or feasible (financially) then a virtual equivalent to provide that encryption layer is certainly a good option.

It’s important to remember that this is not necessarily a mutually exclusive choice. A well-rounded cloud computing strategy will likely include both hardware and virtual network appliances in a hybrid architecture. IT needs to formulate a strong strategy and guidance regarding what applications can and cannot be deployed in a public cloud computing environment, and certainly the performance/capacity and compliance requirements of a given application in the context of its complete architecture – network, application delivery network, and security – should be part of that decision-making process.

The question whether to go virtual or physical is not binary. The operator is, after all, OR and not XOR. The key is choosing the right form factor for the right environment based on both business and operational needs.






More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.