

Optimizing IoT and Mobile Communications with TCP Fast Open

By Lori MacVittie

There's a lot of focus on the performance of mobile communications given the incredible rate at which mobile is outpacing legacy PC usage (did you ever think we'd see the day when we called it that?). There's been plenty of research on the topic, ranging from the business impact (you really can lose millions of dollars per second of added delay) to the technical mechanics of how mobile communications are affected by traditional factors like bandwidth and RTT. Spoiler: RTT is a bigger factor than bandwidth when it comes to improving mobile app performance.

The reason behind this isn't just that mobile devices are inherently more resource-constrained, or that mobile networks are oversubscribed, or that mobile communications protocols simply aren't as fast as your mega super desktop connection. It's also because mobile apps (native ones, in particular) tend toward APIs and short bursts of communication: grab this, check for an update on that, do a quick interaction and return. These exchanges are all relatively small in terms of data transmitted, which means the overhead of establishing a connection can actually take more time than the exchange itself. The extra RTT incurred by the three-way handshake slows things down.
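
To put rough, purely illustrative numbers on it: on a mobile link with a 200 ms round trip, a request/response exchange that fits in a single packet costs about two RTTs, or 400 ms, over a fresh connection; fully half of that is spent on the handshake before a single byte of application data moves. Eliminate that round trip and you've roughly halved the latency of these tiny exchanges.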

That same conundrum will be experienced by smart "things" that connect for a quick check-in to grab or submit small chunks of data. Setting up the connection will take longer than the data transmission itself, which seems, well, inefficient, doesn't it?

Apparently other folks thought so too, and hence we now have, in Internet Draft form, a proposed TCP mechanism known as "TCP Fast Open" (TFO) that alleviates the impact of this overhead.

TCP Fast Open Draft @ IETF

This document describes an experimental TCP mechanism TCP Fast Open (TFO). TFO allows data to be carried in the SYN and SYN-ACK packets and consumed by the receiving end during the initial connection handshake, and saves up to one full round trip time (RTT) compared to the standard TCP, which requires a three-way handshake (3WHS) to complete before data can be exchanged. However TFO deviates from the standard TCP semantics since the data in the SYN could be replayed to an application in some rare circumstances. Applications should not use TFO unless they can tolerate this issue detailed in the Applicability section.
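
Concretely (and leaving the option details aside), the round trip the draft is talking about plays out like this for a single small request/response over a fresh connection:

    Standard TCP:
      client  -- SYN ----------------------->  server
      client  <---------------------- SYN-ACK  server
      client  -- ACK + request ------------->  server
      client  <--------------------- response  server   (two round trips before the response arrives)

    With TFO (cookie already cached):
      client  -- SYN + cookie + request ---->  server
      client  <-------------- SYN-ACK + data   server   (one full round trip saved)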

The mechanism relies on a cookie deposited with the client that indicates a readiness and willingness (and perhaps even a demand) to transmit some of the data in the initial SYN and SYN-ACK packets of the TCP handshake. The cookie is generated by the server-side app (or the gateway acting as the TCP endpoint) upon request from the client; that request travels as a TCP option in the SYN of an otherwise ordinary connection, so it seems likely it would be handled during the "initial setup" of a thing. On subsequent communications in which the TFO cookie is present, the magic happens. The app (or gateway) recognizes the cookie, grabs the data, and starts processing it before the initial handshake is even complete.
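
For the curious, here's a minimal sketch of what this looks like to an application on Linux (assuming a kernel with net.ipv4.tcp_fastopen enabled and a recent Python; the address, port and payload are made up for illustration, and the server and client halves would of course run as separate processes):

    import socket

    # --- Server side: advertise willingness to accept data carried in the SYN ---
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 8080))
    # On a listening socket, TCP_FASTOPEN takes the length of the queue of
    # connections that have delivered SYN data but not yet finished the handshake.
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
    srv.listen(128)

    conn, addr = srv.accept()
    # If the client presented a valid TFO cookie, its request is already waiting
    # here, a full round trip earlier than with a standard three-way handshake.
    request = conn.recv(1024)

    # --- Client side: combine connect and send into a single call ---
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # sendto() with MSG_FASTOPEN puts the payload in the SYN when a cookie for
    # this server is already cached; otherwise the kernel quietly falls back to
    # a normal handshake (and picks up a cookie for next time).
    cli.sendto(b'{"thing": "sensor-42", "temp": 21.5}',
               socket.MSG_FASTOPEN, ("203.0.113.10", 8080))
    reply = cli.recv(1024)

The fallback behavior matters here: a thing whose cookie has expired, or whose server doesn't support TFO, still gets a perfectly ordinary TCP connection, just without the saved round trip.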

While the use of 'cookies' is more often associated with HTTP, it is also found within the realm of TCP (SYN cookies are a popular means of attempting to detect and prevent SYN flood attacks).

Needless to say, such a mechanism is of particular interest to service providers, as their networks often act as gateways to the Internet for mobile devices. Reducing the time required for short-burst communications ultimately reduces the number of connections that must be maintained in the mobile network at any given moment, relieving some of the pressure on the number of proxies, virtual or not, required to support the growing number of devices and things needing access.

A word of caution, however: TFO is neither designed for nor meant to be used by every application. The draft clearly spells out that it applies to applications whose initial request from the client fits within the TCP MSS. Otherwise the server still has to wait until after the handshake completes to gather the rest of the data and formulate a response, and any performance benefit is lost. Proceed with careful consideration, therefore, when applying TCP Fast Open, but do consider it, particularly where data sets are small, as may be the case with things reporting in or checking for updates.
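
A quick, admittedly hypothetical sanity check along those lines: compare the size of a thing's typical first request against the MSS the kernel reports for a connection to the service (the address and payload below are placeholders):

    import socket

    request = b'{"thing": "sensor-42", "status": "ok"}'

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("203.0.113.10", 8080))
    # TCP_MAXSEG reports the maximum segment size in effect for this connection;
    # it is only meaningful once the connection has been established.
    mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)
    s.close()

    if len(request) <= mss:
        print("initial request fits in one segment; TFO can save the full RTT")
    else:
        print("request spans multiple segments; the server waits for the rest anyway")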


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.