What an Increase in East-West Traffic Patterns Really Means

If you know where it's coming from, you'll see some things have got to change in the data center

There's a lot of lip service given to east-west (intra-data center) traffic in emerging data center architectures. We talk about how it's increasing and how it's important, but understanding why it's happening offers some insight into the impact it will have on service placement within next-generation data centers.

Application architectures are decomposing. As predicted years ago when SOA was the buzzword of the day, the notion of building applications from composable services has indeed become the method du jour. Unfortunately for SOA, its supporting standards (SOAP, WSDL, XML) were deemed too complex and heavy-handed for broad adoption. REST plus a dash of JSON, however, meets the need for simplicity while accomplishing many of the same goals as SOA. Namely, it enables the decomposition of monolithic applications into individual and much more focused services. Those services can be shared across multiple applications via an API, ensuring not only consistency in the underlying business logic but also providing a rapid and agile means of addressing vulnerabilities or the need for new service functionality in a centralized location.
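To make that concrete, here's a minimal sketch (Python and Flask, with a hypothetical /api/inventory endpoint that isn't from the original post) of the kind of small, focused REST + JSON service that gets pulled out of a monolith and shared across applications:

```python
# Illustrative sketch only: a focused service that used to be a module
# inside a monolith, now exposed over REST + JSON so any application
# (web storefront, mobile API, batch job) can reuse it. The endpoint
# and data are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

INVENTORY = {"sku-123": 42, "sku-456": 0}

@app.route("/api/inventory/<sku>")
def inventory(sku):
    # One place to fix a vulnerability or add functionality for every consumer.
    return jsonify({"sku": sku, "on_hand": INVENTORY.get(sku, 0)})

if __name__ == "__main__":
    app.run(port=5001)
```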

So what we have now are applications decomposing into multiple services that are likely provided by different "applications" (in old-school terminology, anyway) that reside topologically in an easterly (or westerly) direction from the invoking application or service.

Basically, when an application makes a call to the API endpoint, the logic it executes may actually invoke another service (or several other services) to accomplish its task. Those calls generate east-west traffic, as opposed to the north-south traffic implied by the initial invocation of the API endpoint.
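A minimal sketch of that fan-out, again in Python and Flask with hypothetical internal hostnames (service-a.internal, service-b.internal), might look like this:

```python
# Illustrative sketch only: the client's call arrives north-south at the
# API endpoint; the two requests inside the handler stay east-west,
# inside the data center. Hostnames and paths are hypothetical.
import requests
from flask import Flask, jsonify

api = Flask(__name__)

@api.route("/api/orders/<order_id>")
def get_order(order_id):
    status = requests.get(
        f"http://service-a.internal/orders/{order_id}/status", timeout=2).json()
    items = requests.get(
        f"http://service-b.internal/orders/{order_id}/items", timeout=2).json()
    return jsonify({"order": order_id, "status": status, "items": items})
```

The client sees one request and one response; the traffic between the API tier and Service A and Service B never crosses the perimeter.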

Now, I told you all that to ask a question that's very relevant (for networking- and infrastructure-minded folks, anyway): what's protecting the services invoked by the API endpoint from malicious code passed through?

If you peek at the diagram below, you'll note there's nothing there. And the reason there's nothing there is that, topologically speaking, that's not a place where we deploy "infrastructure" or "network" solutions designed to address things like detection and mitigation of malicious code. We do that closer to the perimeter.

[Diagram: propagation of malicious code]
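For illustration, here's a minimal sketch, assuming a Python/Flask service, of what screening requests next to the service itself might look like; the single pattern check is a hypothetical stand-in for real detection and mitigation services, not a recommendation:

```python
# Illustrative sketch only: a per-service input check deployed with the
# service, so east-west callers get screened too. The one regex rule is
# a hypothetical placeholder for real security services.
import re
from flask import Flask, abort, jsonify, request

service_a = Flask(__name__)

SUSPICIOUS = re.compile(r"<script|union\s+select|\.\./", re.IGNORECASE)

@service_a.before_request
def screen_request():
    body = request.get_data(as_text=True) or ""
    query = request.query_string.decode("utf-8", "ignore")
    # Trust isn't implied by topology; screen internal callers as well.
    if SUSPICIOUS.search(body) or SUSPICIOUS.search(query):
        abort(400)

@service_a.route("/orders/<order_id>/status")
def status(order_id):
    return jsonify({"order": order_id, "status": "shipped"})
```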

The problem is that we can't keep doing it only at the perimeter. Each service is going to need its own set of application services. This example relies heavily on security, but let's pretend there's no need for that (hey, I said PRETEND, try not to have a coronary, okay?). What about scalability of the individual services? Service A may be invoked much more heavily than Service B, and will therefore need to scale more often. And what about the API itself? If it's delivering content to mobile devices (and it probably is; everything does these days, even my toaster), it may need a way to dynamically determine that fact and then apply optimizations like minification and image resizing to ensure a good experience for the user. Service A may not need that, but the API will.
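As a rough illustration of how those needs diverge per service, here's a sketch of a per-tier policy applied at the API, assuming Python/Flask; the policy values and the User-Agent check are hypothetical placeholders, not a real detection or optimization implementation:

```python
# Illustrative sketch only: each tier carries its own mix of application
# services. The thresholds, flags, and mobile check below are made up.
from flask import Flask, g, request

api = Flask(__name__)

POLICIES = {
    "api":       {"scale_threshold": 1000, "mobile_optimize": True},
    "service_a": {"scale_threshold": 200,  "mobile_optimize": False},
    "service_b": {"scale_threshold": 50,   "mobile_optimize": False},
}

@api.before_request
def classify_client():
    ua = request.headers.get("User-Agent", "").lower()
    g.is_mobile = any(token in ua for token in ("mobile", "android", "iphone"))

@api.after_request
def maybe_optimize(response):
    # The API tier cares about mobile optimization; Service A and B don't.
    if POLICIES["api"]["mobile_optimize"] and g.get("is_mobile"):
        # Where minification / image resizing would be applied or signaled.
        response.headers["X-Optimize"] = "minify,image-resize"
    return response
```

Service A's policy, by contrast, leans on the scaling side: it scales out more often than Service B, and it never needs the mobile optimizations the API applies.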

Basically, the change in how applications are architected, along with virtualization, has changed how applications are "put together" and thus dramatically changed traffic patterns in the data center. This, in turn, requires us to rethink how we address service-level needs for security, mobility, performance, and availability.

New models are necessary to enable the economy of scale required to provide application and network services to every service that needs them, no matter where it sits in the data center traffic pattern.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.