
If everyone is thinking the same, someone isn't thinking

Lori MacVittie




DevOpsJournal: Blog Feed Post

F5 Friday: Would You Like Some Transaction Integrity with Your Automation?

If you thought integration and collaboration required new networking capabilities, you ain’t seen nothing yet.

Anyone who has ever configured a network anything, or worked with any number of cloud providers’ APIs to configure “auto-scaling” via a load balancing service, recognizes that it isn’t simply point, click, and configure. Certain steps need to be performed in a certain order (an order based entirely on the solution and completely non-standardized across the industry), and handling errors and exceptions is always a pain: if you want a “do over” you have to backtrack through the completed steps, or else leave the system cluttered or, worse, unstable.

Developers and system operators have long understood the importance of a “transaction” in databases and in systems where a series of commands (processed in a “batch”) are “all or nothing”. Concepts like two-phase commit and transaction integrity are nothing new to developers and sysops, and probably not to network folks, either. It’s just that the latter have never had that kind of support and have thus had to engineer some, shall we say, innovative solutions to recreate this concept.
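That “all or nothing” behavior is something developers get for free from any database API. As a quick illustration (a toy sketch, not anything F5-specific; the table and values are made up), here is Python’s built-in sqlite3 rolling back an entire batch when one statement in it fails:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pool_members (address TEXT PRIMARY KEY)")

try:
    # The connection-as-context-manager opens a transaction:
    # it commits on success and rolls back on any exception.
    with conn:
        conn.execute("INSERT INTO pool_members VALUES ('10.2.27.1:80')")
        conn.execute("INSERT INTO pool_members VALUES ('10.2.27.1:80')")  # duplicate key -> error
except sqlite3.IntegrityError:
    pass  # the first (successful) insert is rolled back along with the failed one

# Neither row survives: the batch was all-or-nothing.
print(conn.execute("SELECT COUNT(*) FROM pool_members").fetchone()[0])  # -> 0
```

Network gear has historically offered nothing equivalent: each command takes effect immediately, and “rollback” means a human retracing their steps.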

Infrastructure 2.0 and cloud computing are pushing to the fore the need for transactional integrity and idempotent configuration management. Automation, much like early “batch” processing, requires that a series of commands be executed, each individually configuring one of the many moving pieces of the overarching architecture required to “make the cloud go” and provide the basic support necessary to enable auto-scaling.

Because it is possible that any one command in a sequence of ten, twenty, or more commands that make up an “automation” could fail, you need to handle that failure. You can catch it and try again, of course, but if the problem isn’t easily handled you’d wind up wanting to “roll back” the entire transaction until the situation could be dealt with, perhaps even manually. One way to accomplish this is to enable the ability to package up a set of commands as a transaction; if any command fails, the transaction management system automagically hits the “undo” button and rolls back each command to the very beginning, making it as if it never happened.
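That “undo button” pattern can be sketched in a few lines: pair every step with a compensating action, and on failure unwind the completed steps in reverse order. This is a toy Python example under stated assumptions (the in-memory `state` list stands in for device configuration; no real network API is involved):

```python
def run_transaction(steps):
    """Run (do, undo) pairs; on any failure, undo completed steps newest-first."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):  # roll back in reverse order of execution
            undo()
        raise  # re-raise so the caller knows the transaction failed


state = []  # stands in for a device's configuration


def fail():
    raise RuntimeError("duplicate pool")  # simulated mid-sequence failure


steps = [
    (lambda: state.append("pool a_pool"), lambda: state.remove("pool a_pool")),
    (lambda: state.append("member 10.2.16.1:80"), lambda: state.remove("member 10.2.16.1:80")),
    (fail, lambda: None),  # third step blows up
]

try:
    run_transaction(steps)
except RuntimeError:
    pass

print(state)  # -> [] : as if the transaction never happened
```

The hard part in real infrastructure isn’t this loop, of course; it’s that every device in the sequence must expose a reliable “undo” for every operation.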


IT’S A LONG, LONG WAY TO TIPPERARY…and FULLY AUTOMATED DATACENTERS

We’re not there yet. If you were waiting for me to pronounce “we have arrived,” I’m sorry. We haven’t, but we are at least recognizing that this is a problem that needs solutions. For the above scenario to become reality, every system, every device, every component that is (or could be) part of the transaction must be enabled with such functionality.

Transactions are essential to maintaining the integrity of components across an orchestration in complex systems. That basically means most – if not all – datacenters will eventually need transactions if they go the route of automation to achieve operational efficiency. This is another of those “dev” meets “ops” topics in which dev has an advantage over ops, merely due to the exigencies of working in development with integration, databases, and other development-focused systems that require transactional support. Basically, we’re going to (one day) end up with layers of transactions: transactions at the orchestration layer composed of individual transactions at the component layer. It will be imperative that both layers be able to handle errors, be idempotent (to handle restarts), and provide integrity for the implicitly shared systems that make up a cloud computing environment (that’s fault isolation, by the way).
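Idempotency, in particular, is easy to state and easy to sketch: a restart-safe step checks the desired state before acting, so running it a second time changes nothing. A minimal illustration (hypothetical names; the dict stands in for a device’s running configuration, not any real management API):

```python
config = {}  # stands in for a device's running configuration


def ensure_pool(name, members):
    """Idempotent step: create the pool only if absent, then converge its members.

    Re-running after a restart is harmless because each action checks
    current state first instead of blindly issuing a 'create'.
    """
    pool = config.setdefault(name, set())  # create only if missing
    pool.update(members)                   # adding existing members is a no-op
    return pool


ensure_pool("a_pool", {"10.2.16.1:80"})
ensure_pool("a_pool", {"10.2.16.1:80"})  # safe to re-run: same end state

print(config)  # -> {'a_pool': {'10.2.16.1:80'}}
```

A naive “create pool” step, by contrast, fails on the second run because the pool already exists, which is exactly the error the TMSH example below trips over.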

The necessity of transactional support for Infrastructure 2.0 components will only grow as larger and more business-critical systems become “cloudified”. The new network has to be more dynamic, and that means it must be able to adapt its operating conditions to meet the challenges associated with integration and the dependencies between devices and applications that integration creates.

For F5, we’ve taken a few of the first steps down the road to Tipperary (and toward the eventual, fully transactional automation implementation required to get there) by enabling our BIG-IP shell scripting solution (TMSH) with some basic transactional support. It isn’t the full kitchen sink, it’s just a start, and as you can guess you should “watch this space” for updates, improvements, and broader support for this concept across the entire BIG-IP management space.


TMSH TRANSACTION SUPPORT

If you weren’t aware, a prior release (we called it Park City; you might know it as v10) added a capability called TMSH (TMOS Shell), an action/object-based scripting shell for F5 BIG-IP that allows administrators to script the configuration and management of a BIG-IP in much the same way they might develop system scripts in Bash, Korn shell, PowerShell, or whatever their favorite shell scripting language may be.

What was really cool was the addition of the ability to wrap transactions around a series of TMSH commands. Transactions allow you to bundle a list of commands to be executed as a single atomic operation. Now, we’re not anywhere near the “two-phase commit” capabilities of mature batch processing systems like IBM mainframes and databases, but we are a far sight further along than perhaps people realized. While you’re “in” a transaction, for example, each command is only validated for syntax; it isn’t executed until the transaction is submitted. You can’t undo a submitted transaction, and right now you can’t create “named” transactions that you could queue up for approval or save to execute at a later date. You can, however, recreate a transaction you have deleted by using the CLI history component.

   root@bigip(Active)(tmos)# create transaction
   [batch mode]root@bigip(Active)(tmos)# create ltm pool a_pool members add { 10.2.27.1:80 }
   Command successfully added to the current transaction
   [batch mode]root@bigip(Active)(tmos)# create ltm pool a_pool members add { 10.2.16.1:80 }
   Command successfully added to the current transaction
   [batch mode]root@bigip(Active)(tmos)# submit transaction
   01020066:3: The requested pool (a_pool) already exists in partition Common
   [batch mode]root@bigip(Active)(tmos)# list ltm pool a_pool
   01020036:3: The requested pool (a_pool) was not found.
   [batch mode]root@bigip(Active)(tmos)# list transaction
   1: (tmos)# create ltm pool a_pool members add { 10.2.27.1:80 }
   2: (tmos)# create ltm pool a_pool members add { 10.2.16.1:80 }
   [batch mode]root@bigip(Active)(tmos)# modify transaction delete 1
   [batch mode]root@bigip(Active)(tmos)# submit transaction
   root@bigip(Active)(tmos)# list ltm pool a_pool
   pool a_pool {
       members {
           10.2.16.1:http { }
       }
   }

This is only one (small) piece of a much larger puzzle: the automated, dynamic infrastructure (a la Infrastructure 2.0). And it’s a piece that’s not even wholly complete yet. But it’s a start on what’s going to be necessary if we’re going to automate the data center, realize the potential gains in efficiency, and move even further toward the “automated, self-healing, self-optimizing” networks of the (further out) future. We can’t even get to a true implementation of “cloud” until we can automate the basic processes needed to implement elastic scalability; otherwise we’d just be shifting man hours from racking servers to tapping out commands on routers, switches, and load balancers.

But it’s a step forward on the road that everyone is going to have to travel.







More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.