If everyone is thinking the same, someone isn't thinking

Lori MacVittie



DevOpsJournal: Blog Feed Post

The New Distribution of the 3-Tiered Architecture Changes Everything

Welcome to the Brave New Web

As the majority of an application’s presentation layer logic moves to the client, it induces changes that impact the entire application delivery ecosystem.

The increase in mobile clients, in demand for rich, interactive web applications, and the introduction of the API as one of the primary means by which information and content is shared across applications on the web is slowly but surely forcing a change back toward a traditional three-tiered architecture, if not in practice then in theory. This change will have a profound impact on the security, delivery, and scalability of the application but it also forces changes in the underlying network and application network infrastructure to support what is essentially a very different delivery model.

What began with Web 2.0 – AJAX, primarily – is continuing to push in what seems a backward direction in architecture as a means to move web applications forward. In the old days the architecture was three-tiered, yes, but those tiers were maintained almost exclusively on the server side of the architecture, with the browser acting only as the interpreter of the presentation layer data that was assembled on the server. Early AJAX applications continued using this model, leveraging the out-of-band (asynchronous) access provided by the XMLHttpRequest object in major browsers as a means to dynamically assemble smaller pieces of the presentation layer. The browser was still relegated to providing little more than rendering support.

Enter Web 2.0 and RESTful APIs, and a subtle change occurred. These APIs returned not presentation layer fragments, but data. The presentation layer logic required to display that data in a meaningful way became the responsibility of the browser. This was actually a necessary evolution in web application architecture to support the increasingly diverse set of end-user devices being used to access web applications. Very few people would vote for maintaining separate presentation layer logic to support mobile devices and richer desktop clients like browsers. By forcing the client to assemble and maintain the presentation layer, that complexity is removed from the server side, and a single, unified set of application logic resources can be delivered to every device without concern for cross-browser, cross-device support being “built in” to the presentation layer logic.
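To make that shift concrete, here is a minimal sketch (in Python, using a made-up product record) contrasting the old model, where the server assembles finished HTML, with the new model, where the server returns only data and leaves rendering to the client:

```python
import json

# Hypothetical product record, for illustration only.
product = {"name": "Widget", "price": 9.99}

def render_server_side(p):
    # Old model: the server assembles the presentation layer
    # and ships finished HTML to the browser.
    return (f"<div class='product'><h2>{p['name']}</h2>"
            f"<span>${p['price']}</span></div>")

def render_api(p):
    # New model: the server returns only data; each client --
    # desktop browser, mobile app, third-party consumer --
    # decides for itself how to present it.
    return json.dumps(p)

html_response = render_server_side(product)
api_response = render_api(product)
```

The same server-side logic now serves every device; only the data crosses the wire, and the presentation logic lives with the client that knows its own capabilities.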

This has a significant impact on the ability to rapidly support emerging clients – mobile and otherwise – that may not support the same robust set of capabilities available on a traditional browser. By reducing the presentation layer assembly on the server side to little more than layout – if that – the responsibility for assembling all the components, displaying them, and routing data to the proper component is laid on the client. This means one server-side application truly can support both mobile and desktop clients with very little modification. It means an API provided by a web application can not only be used by the provider of that API to build its own presentation layer (client), but third-party developers can also leverage that API and the data it provides in whatever way they need or choose.

This is essentially where we are today.


This change may appear innocuous on the surface, but for network, application delivery, and security professionals it should ring some alarm bells. For network and application delivery folks, this model changes the usage patterns of TCP and HTTP. It is no longer about HTML per se but about smaller, more abstract fragments of HTML and, increasingly, little more than data packaged up in XML or JSON or CSV. Where once there was a great disparity between the size of a request and the size of a response, now there is more equality. Where once there was a predictable number of request-response pairs per user connection, now there is a variable number. Where once there was a specific time-frame in which those request-response pairs would be sent and received, now the time-frame for interaction between the client and the server is indeterminable, based solely on the whim of the user and not on the protocol.
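The shift in request/response symmetry is easy to see with some illustrative (entirely made-up) payloads: a classic full-page HTML response dwarfs its request, while a data-only API exchange sends a request and a response of roughly the same size:

```python
# Illustrative payloads only -- the sizes are fabricated to show
# the shape of the change, not measured from any real application.
html_page = ("<html><body>"
             + "<p>rendered content</p>" * 200
             + "</body></html>")                      # classic full-page response
api_request = '{"action": "get_price", "sku": "A-123"}'
api_response = '{"sku": "A-123", "price": 9.99}'

# Old model: the response is orders of magnitude larger than the request.
old_ratio = len(html_page) / len(api_request)

# New model: request and response are roughly the same size.
new_ratio = len(api_response) / len(api_request)
```

When most exchanges look like the second pair, per-message overhead (headers, TCP segments, intermediary processing) becomes a much larger fraction of total traffic than it was when responses were big blocks of HTML.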

Data is now returned in multiple, schema-less formats that might be served by the same API invocation. Data is returned instead of HTML, which intermediaries providing security (web application firewalls), load balancing, and data leak prevention have long been able to protect but now find more difficult to handle. Caching of HTML objects is rendered less useful, as fewer HTML objects are sent to clients; pages are now assembled on the client and data is transmitted instead. TCP sessions are held open longer, consuming resources on intermediaries and servers, and over them more requests are sent more frequently.
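A short sketch of why this is harder for intermediaries: a single hypothetical API endpoint might serve the same record as JSON or CSV depending on the client’s Accept header, so an inspecting device (WAF, DLP) can no longer rely on one HTML grammar – it must normalize every format back to the same values before it can apply policy:

```python
import csv
import io
import json

# Hypothetical record one API endpoint might serve in several
# formats, depending on the client's Accept header.
record = {"user": "alice", "role": "admin"}

def as_json(r):
    return json.dumps(r)

def as_csv(r):
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(r.keys()))
    writer.writeheader()
    writer.writerow(r)
    return buf.getvalue()

def parse_csv(text):
    # An inspecting intermediary must parse each format back to
    # the same logical values before applying a security policy.
    rows = list(csv.DictReader(io.StringIO(text)))
    return rows[0]
```

Two wire formats, one policy decision: the intermediary’s work multiplies with every format the API chooses to speak.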

The changes in the application development and deployment architecture significantly impact the entire application delivery infrastructure, forcing more focus on TCP and HTTP optimization and less on caching and compression. They produce a higher request rate, because requests and replies are exchanged more frequently, and impose a higher per-message burden on the network, because both are smaller.


The redistribution of the three-tiered architecture to rely upon the client as the presentation layer is necessary to remove the complexity inherent in supporting multiple clients with varying presentation capabilities on the server, and it makes server-side processing more efficient. It moves that complexity into the presentation layer, onto the client, but this is operationally more efficient because the burden of execution becomes the client’s and not the server’s. The costs associated with determining the type of client and assembling the presentation layer are thus eliminated from the server and transferred to the end-client.

But this redistribution imposes new burdens and introduces new challenges for the infrastructure that supports it. It can no longer rely upon the HTML specification as a means to enforce security or acceleration or optimization. It must be able to adapt and support the same capabilities for schema-less data exchanges, and be capable of modifying those capabilities based on real-time usage patterns that are likely to be unpredictable over time. This is why the role of devops is emerging – because the very deployment model of an application changes by nature its behavior and how it uses its supporting infrastructure. Network and application network infrastructure tuning and optimization becomes less about bandwidth and more about behavior. Ops needs to understand the application, and dev needs to understand the impact of TCP, longer sessions, and smaller data on the network and its application servers. The two disciplines must communicate and merge to be able to deliver such applications in the most efficient manner possible.

Welcome to the brave new web.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.