If everyone is thinking the same, someone isn't thinking

Lori MacVittie


#openflow #sdn Programmability and reliability rarely go hand in hand, especially when complexity and size increase, which creates opportunity for vendors to differentiate with a vetted ecosystem

I’m reading (a lot) on SDN these days. That means reading on OpenFlow, as the two are often joined at the hip. In an ONF white paper on the topic, two “substantial” benefits claimed for an OpenFlow-based SDN caught my eye, as they appear to be in direct conflict with one another.

1. Programmability by operators, enterprises, independent software vendors, and users (not just equipment manufacturers) using common programming environments, which gives all parties new opportunities to drive revenue and differentiation.

2. Increased network reliability and security as a result of centralized and automated management of network devices, uniform policy enforcement, and fewer configuration errors.

I am not sure it’s possible to claim increased network reliability and security and programmability as complementary benefits. Intuitively, network operators know that “messing with” routing and switching stability through programmability is a no-no. Several network vendors discovered this in the past when programmable core network infrastructure was introduced. The notion of programmability for management purposes is acceptable; the notion of programmability for modification of function is not.

Most network operators cannot articulate their unease with such notions, generally because there is a gap between the core foci of developers and of network operators. Developers who’ve plunged into graduate school can, or should be able to, articulate exactly where this unease comes from, and why. It is the reason points 1 and 2 conflict, and why I continue to agree with pundits who predict SDN will become the method of managing dynamic data centers but will not become the method of implementing new functions in core routing and switching via OpenFlow.

Code Complexity and Error Rates

Most folks understand that higher complexity incurs higher risk. This is not only an intuitive understanding that extends from code all the way out to data center architecture; it has also been proven through studies.

According to Steve McConnell in “Code Complete” (a staple of many developers), a study at IBM found “the most error-prone routines were those that were larger than 500 lines of code.” McConnell also notes a study by Lind and Vairavan that “code needed to be changed least when routines averaged 100 to 150 lines of code.” But it is not just the number of lines of code that contributes to the error rate within source code. Cyclomatic complexity also increases the potential for errors: the more conditional paths possible in the logic, the higher the cyclomatic complexity.

The cyclomatic complexity of a section of source code is the count of the number of linearly independent paths through the source code. For instance, if the source code contained no decision points such as IF statements or FOR loops, the complexity would be 1, since there is only a single path through the code. If the code had a single IF statement containing a single condition there would be two paths through the code, one path where the IF statement is evaluated as TRUE and one path where the IF statement is evaluated as FALSE.

-- Wikipedia, Cyclomatic complexity 
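To make the definition concrete, here is a tiny Python sketch (the function names are hypothetical, chosen only to illustrate path counting):

```python
# A function with no decision points has exactly one path: v(G) = 1.
def passthrough(packet: str) -> str:
    return packet

# One IF statement creates two linearly independent paths: v(G) = 2.
def route(destination_known: bool) -> str:
    if destination_known:
        return "forward"   # path taken when the condition is TRUE
    return "flood"         # path taken when the condition is FALSE
```

Each additional decision point adds another independent path, which is why complexity grows so quickly in real packet-handling code.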

An example of this is seen in the OpenFlow wiki tutorial, illustrating some (very basic) pseudocode:

Sample Pseudocode

Your learning switch should learn the port of hosts from packets it receives. This is summarized by the following sequence, run when a packet is received:

if (source mac of the packet is not known) 
    record mac as being bound to input port of packet received
if (destination mac of the packet is known) 
    setup flow to the learned port
else
    send flood packet
This particular pseudocode has a cyclomatic complexity score of 3, as there are three distinct paths through the logic. Needless to say, the actual cyclomatic complexity of a real implementation is likely to be much, much higher as the matching and classification of packets is likely to be far more detailed and require more inspection of data.

The cyclomatic complexity of a piece of code can be correlated to error rates, as detailed by Thomas McCabe and by Mike Chapman and Dan Solomon (both of NASA) in “The Relationship of Cyclomatic Complexity, Essential Complexity and Error Rates”:

[Figure: McCabe Complexity Metrics – probability of error vs. cyclomatic complexity]
In his paper A Complexity Measure, IEEE Transactions on Software Engineering, Dec 1976, Thomas McCabe defined a set of metrics to characterize the complexity of a software module's internal control flow logic. Glen Myers suggested a revision to the calculation in An Extension to the Cyclomatic Measure of Program Complexity, SIGPLAN Notices, Oct 1977.

Cyclomatic Complexity (v(G))

  • Measures - The complexity of a module's decision structure.
  • Calculation - Count of linearly independent paths through a function or set of functions. An Imagix 4D option controls whether the original McCabe calculation or the Myers revision is used.
  • Use - Higher cyclomatic complexities correlate with greater testing and maintenance requirements. Complexities above 20 also correspond to higher error rates.
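For illustration, a crude approximation of the metric can be computed by counting branching constructs in a parsed syntax tree. The sketch below is a simplification of McCabe's calculation (it counts each branching node once and ignores compound conditions), assuming Python source:

```python
import ast

# Node types that introduce an additional independent path.
# This is a simplified stand-in for McCabe's full calculation.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Rough estimate: 1 plus one per branching construct found."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))
```

Straight-line code scores 1; each IF or loop adds one, so a function crossing the error-prone threshold of 20 noted above is easy to reach in packet-classification logic.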

In other words, the gut instincts of network operators are right: code controlling core routing and switching (which will necessarily be high in cyclomatic complexity, given what it must do to perform its tasks) is almost certain to be prone to higher error rates. That reality butts heads with the definition of “reliable,” making the two benefits not very complementary.

This does not mean that OpenFlow controllers are going to be full of errors; it means the higher risk of error (and thus of impairing the reliability of the entire network) is mathematically demonstrable and not just the overactive imagination of network operators.

OpenFlow May Enable a New Ecosystem

This likely means that, unless there are guarantees regarding the quality and thoroughness of testing (and thus the reliability) of OpenFlow controllers, network operators are likely to put up a fight at the suggestion that said controllers be deployed in the network. That may mean the actual use of OpenFlow will be limited to an ecosystem of partners offering “certified” (i.e., guaranteed by the vendor) controllers.

That may sound limiting, as if innovation will be stifled or somehow confined to a vetted ecosystem. I think the opposite: much as consumerization brought app stores, we may see “app stores” of OpenFlow controller implementations, vetted by vendors on their OpenFlow-enabled infrastructure, that enable a new market and new entrants into it. It will encourage innovation, if equipment manufacturers are quick enough on the uptake to see the value in an entire ecosystem of developers innovating for them.

The lessons of Apple and Android should be clear at this point: he who encourages and supports an ecosystem where innovation can take place will eventually take the lead in the market. If that lesson is taken to heart, one day we may see “number of certified OpenFlow controllers” listed as table stakes on core network infrastructure.

The reality is that without support and vetting of controllers, OpenFlow may never gain more than “test lab” status in a significant number of enterprise production environments outside of academia. Whether existing network vendors see OpenFlow as a possible path to differentiating what is increasingly commoditized in the data center remains to be seen.


