What DevOps Can Do About Cloud's Predictable Provisioning Problem
Go ahead. Name a cloud environment that doesn't include load balancing as the
key enabler of elastic scalability. Exactly.
Load balancing - whether implemented as traditional high
availability pairs or clustering - provides the means by which applications
(and infrastructure, in many cases) scale horizontally. It is load balancing
that is at the heart of elastic scalability models, and that provides a means
to ensure availability and even improve performance of applications.
But simple load balancing alone isn't enough. Too many environments and
architectures are wont to toss a simple, network-based solution at the
problem and call it a day. But rudimentary load balancing techniques that
rely solely on a set of metrics are doomed to fail eventually. Th... (more)
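The distinction the excerpt draws can be sketched in a few lines: plain round-robin versus round-robin that consults backend health before handing out a server. This is a hypothetical illustration, not any product's implementation; the backend names and the health flags are invented for the example.

```python
import itertools


class HealthAwareBalancer:
    """Round-robin over backends, skipping any marked unhealthy.

    Hypothetical sketch: a real load balancer would probe backends
    (health checks, response times) instead of taking a manual flag.
    """

    def __init__(self, backends):
        # Track health per backend; all start healthy.
        self.health = {b: True for b in backends}
        self._cycle = itertools.cycle(backends)

    def mark(self, backend, healthy):
        self.health[backend] = healthy

    def pick(self):
        # Try each backend at most once per call.
        for _ in range(len(self.health)):
            candidate = next(self._cycle)
            if self.health[candidate]:
                return candidate
        raise RuntimeError("no healthy backends")


lb = HealthAwareBalancer(["app1", "app2", "app3"])
lb.mark("app2", False)          # app2 fails its health check
picks = [lb.pick() for _ in range(4)]
```

A naive balancer would keep sending every third request to the failed backend; the health-aware variant simply skips it, which is the minimum bar the excerpt argues for.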
JANUARY 8, 2014 02:00 PM EST
When we talk about the impact of BYOD and BYOA and the Internet of Things, we
often focus on the impact on data center architectures. That's because there
will be an increasing need for authentication, for access control, for
security, for application delivery as the number of potential endpoints
(clients, devices, things) increases. That means scale in the data center.
What we gloss over, what we skip, is that before any of these "things" ever
makes a request to access an application, it has to execute a DNS query.
Every. Single. Thing.
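That resolution step is easy to demonstrate: before a client can open any connection, it has to turn a name into addresses. A quick sketch using the standard library; the hostname and port here are illustrative.

```python
import socket


def resolve(hostname, port=443):
    """Return the distinct addresses a client gets back from name
    resolution, before it can even open a connection to the app."""
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the address string.
    return sorted({info[4][0] for info in infos})


# Using localhost so the sketch runs without network access.
addresses = resolve("localhost")
```

Multiply one such lookup by every device, client, and thing on the network, and the DNS scaling point the excerpt makes becomes obvious.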
Maybe that's ... (more)
Cloud and Things and Big Operational Data
Software-defined architectures are critical for achieving the right mix of
efficiency and scale needed to meet the challenges that will come with the
Internet of Things
If you've been living under a rock (or a rack in the data center), you might not
have noticed the explosive growth of technologies and architectures designed
to address emerging challenges with scaling data centers. Whether considering
the operational aspects (devops) or technical components (SDN, SDDC, Cloud),
software-defined architectures are the future enabler of business... (more)
The MacVittie-Roberts Wall of DOOM
Performance. Speed. Velocity. Quality of experience.
No matter what particular turn of phrase we use to describe it, the reality
is that we'll try a whole lot of things if it promises to improve application
performance. Entire markets have been dedicated to this overriding
unqualified principle: faster is better.
That implies, however, that we know what faster means. Faster is relative to
some baseline; some measurement that's been taken either on our applications
or our competitors'. Faster means improving existing performance, which... (more)
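The point about baselines can be made concrete: "faster" only means something as a delta against a measured baseline. A hypothetical harness; the workloads and run count are invented for the sketch.

```python
import time


def measure(fn, runs=5):
    """Median wall-clock time for fn(), in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[runs // 2]


# Stand-in workloads: the "baseline" and a "candidate" change.
baseline = measure(lambda: sum(range(100_000)))
candidate = measure(lambda: sum(range(50_000)))

# Improvement is only meaningful as a fraction of the baseline.
improvement = (baseline - candidate) / baseline
```

Without the `baseline` measurement, the `candidate` number on its own says nothing, which is exactly the excerpt's argument.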
Kirk Byers at SDN Central writes frequently on the topic of DevOps as it
relates (and applies) to the network, and recently introduced a list of seven
applicable DevOps principles in an article entitled "DevOps and the Chaos
Monkey." On that list is the notion of reducing variation.
This caught my eye because reducing variation is a key goal of Six Sigma; in
fact, its entire methodology is based on measuring the impact of variation on
results. The thought is that by measuring deviation from a desired outcome,
you can immediately recognize whether changes to a process impro... (more)
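Reducing variation is straightforward to quantify: compare the spread, not just the average, of a process outcome before and after a change. A sketch with hypothetical deploy durations (the numbers are invented for illustration).

```python
import statistics

# Hypothetical deploy durations in minutes, before and after
# standardizing the pipeline.
before = [12, 30, 8, 45, 10, 27]
after = [14, 15, 13, 16, 14, 15]


def report(samples):
    """Mean and sample standard deviation of a set of outcomes."""
    return statistics.mean(samples), statistics.stdev(samples)


mean_before, sd_before = report(before)
mean_after, sd_after = report(after)

# A lower standard deviation means less variation, i.e. a more
# predictable process, even if the mean barely moves.
```

This is the Six Sigma intuition in miniature: the deviation numbers, not the averages, tell you whether a process change made outcomes more predictable.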