If everyone is thinking the same, someone isn't thinking

Lori MacVittie



#infosec #cloud #iam Though unfettered access to cloud-deployed applications is touted as a benefit, IT knows that control over access is necessary.

That doesn’t mean that achieving that control is easy.

Well, maybe it’s easier than you might think.

The trick to managing access to cloud-deployed applications is requiring that access be brokered through infrastructure over which IT has control, i.e. through services IT can configure and manage itself to be compliant with corporate policies.

Doing so, especially given “anywhere, any client” access models, requires orchestration of several infrastructure components, starting with DNS.

DNS is (or should be) the gatekeeper to all corporate applications whether they’re deployed locally or in the cloud. By maintaining control over the corporate namespace, IT can ensure that requests for applications – regardless of from where those requests originate – can be managed in a manner consistent with corporate access policies.

Yes, users may want to use their iPad, but if it’s an unmanaged device and the application or resource being requested requires transfer of sensitive corporate data to the device, it may not be acceptable based on security policies (and a number of governmental regulations, as well). By controlling the namespace, IT can implement a first line of access control.

Once it’s been determined that the location is acceptable for the requested resource, DNS can return the appropriate resource to the user. That address is, of course, hosted locally and serviced by the application delivery tier, through which additional security measures can be executed to ensure appropriateness of access.

Such measures might include authentication processes – including additional security questions to verify identity – as well as scanning the endpoint (when possible) to ensure compliance with endpoint access policies, such as requiring particular anti-virus or firewall software. Once it has been determined that the user is authorized to access the requested resource, a redirect can be issued that automatically sends the user on to the desired destination. The redirected request may include assertions, tokens, or other identifying data that the application can use for further validation, if desired.
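The redirect-with-assertion step can be sketched with a simple HMAC-signed token appended to the redirect URL. The shared secret, claim layout, and URL format here are hypothetical; a real deployment would typically carry a standard SAML assertion or OAuth token instead:

```python
# Sketch of issuing a redirect that carries a signed assertion the
# application can validate. Secret, claims, and URL layout are
# hypothetical illustrations of the pattern, not a real protocol.
import base64
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"example-secret"  # assumed shared with the application

def make_redirect(user: str, destination: str) -> str:
    """Build a redirect URL carrying signed, short-lived identity claims."""
    claims = json.dumps({"sub": user, "exp": int(time.time()) + 300})
    sig = hmac.new(SHARED_SECRET, claims.encode(), hashlib.sha256).hexdigest()
    token = base64.urlsafe_b64encode(claims.encode()).decode()
    return f"{destination}?token={token}&sig={sig}"

def verify(token: str, sig: str) -> bool:
    """Application-side check: signature matches and claims not expired."""
    claims = base64.urlsafe_b64decode(token.encode())
    expected = hmac.new(SHARED_SECRET, claims, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, sig)
            and json.loads(claims)["exp"] > time.time())
```

The point is only that the application can validate the identity data without a round trip back to the delivery tier, because it trusts the signature.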


WHAT is a HALF-PROXY?

A “half-proxy” is based on the concept of late, or delayed, binding. Late binding is a programmatic concept associated with object-oriented paradigms, often implemented through polymorphism. In load balancing, delayed binding is the process of postponing a routing decision until after the TCP handshake is completed, at which point the load balancing service has the information required to make that decision. The virtual server (the virtual endpoint hosted on the load balancing system) is analogous to a method defined in a base class (or interface), and the specific application instance is the equivalent of the derived class’s method implementation (or interface implementation, depending on the language).
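The object-oriented analogy can be made concrete in a few lines: callers dispatch through the base interface (the “virtual server”), and which concrete implementation (the “application instance”) actually runs is bound only at call time. The class names below are invented for illustration:

```python
# Toy analogy for late binding: callers see only the base interface;
# the concrete implementation is resolved at runtime, just as a load
# balancer picks the real application instance per connection.

class VirtualServer:
    """Base interface: the advertised endpoint clients know about."""
    def handle(self, request: str) -> str:
        raise NotImplementedError

class AppInstanceA(VirtualServer):
    def handle(self, request: str) -> str:
        return f"A served {request}"

class AppInstanceB(VirtualServer):
    def handle(self, request: str) -> str:
        return f"B served {request}"

def dispatch(server: VirtualServer, request: str) -> str:
    # Which handle() runs is bound late, when the call is made --
    # the caller never names a concrete instance.
    return server.handle(request)
```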

A half-proxy is one that relies on delayed binding techniques: only the initial exchange is “proxied”. After the decision has been made, the actual resource takes control of the connection.
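The decision logic of delayed binding can be sketched as follows: the connection is accepted (so the TCP handshake has completed), the first client bytes are peeked at, and only then is a backend chosen. Backends and the selection rule are hypothetical, and a real half-proxy would then splice the flow so the chosen server owns the connection directly rather than relaying through software:

```python
# Toy illustration of delayed binding: the routing decision is made
# only after the handshake, once the first client bytes are in hand.
# Backends and paths are hypothetical examples.
import socket

BACKENDS = {                      # hypothetical application instances
    b"/api": ("10.0.0.11", 8080),
    b"/app": ("10.0.0.12", 8080),
}
DEFAULT_BACKEND = ("10.0.0.10", 8080)

def choose_backend(first_bytes: bytes) -> tuple:
    """Pick a backend from the first request line (the delayed decision)."""
    try:
        path = first_bytes.split(b" ")[1]
    except IndexError:
        return DEFAULT_BACKEND
    for prefix, backend in BACKENDS.items():
        if path.startswith(prefix):
            return backend
    return DEFAULT_BACKEND

def serve_once(listener: socket.socket) -> tuple:
    """Accept one connection (handshake done), peek, then decide."""
    conn, _ = listener.accept()
    first = conn.recv(1024, socket.MSG_PEEK)  # inspect without consuming
    backend = choose_backend(first)
    conn.close()  # a hardware half-proxy would hand the flow off here
    return backend
```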


The advantages of this architecture go beyond security and into the realm of mobility. By leveraging DNS and a flexible delivery tier, the actual location of the resource can be obscured, allowing for mobility of the resource without negatively impacting users. The resource may live in one cloud today and another tomorrow – or migrate locally – and no one need be the wiser until the moment they actually access the resource.

The disadvantages of this architecture lie in giving up control over delivery of the resource. Opportunities to improve performance or ensure security are often lost and become the responsibility of the cloud provider, which may or may not support the delivery services necessary to meet corporate SLAs or security requirements. Given the concern over performance cited by respondents to surveys on what’s stopping them from adopting cloud, that may be enough to warrant a different architectural approach, one that affords more control over both access to and delivery of cloud-hosted applications.

Still, in a world where 20% of users have “gone outside IT” to provision cloud resources1, such an architecture provides a path to regaining at least some of the control required to remain compliant with policies.

 

1 Avanade Cloud Survey, March - April 2011



More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.