
Discover The True Meaning Of “Software Defined”

If you’ve not yet heard about the latest trend hitting the IT industry, it’s time to break out your buzzword BINGO board and make some new additions.  Today, “software defined” is trending across the entire IT spectrum, and it doesn’t look like it’s going anywhere soon.  And that is with good reason.  As one begins to peel back some of the hype and hysteria surrounding the software defined movement, it becomes clear that there are some laudable goals and outcomes behind this trend.  If capitalized upon, these goals and outcomes can have a tremendous positive impact on the business and on IT.  Make no mistake: there are clear business benefits to be had in the software defined world.

What Does “Software Defined” Mean?

The unfortunate thing about new buzzwords is just how quickly they are adopted by anyone and everyone with a product they want to shove onto the emerging bandwagon.  Such actions dilute and cloud the message, leading to confusion about what a term really means, particularly when vendors with no business in the space do their best to create a tenuous link to the concept.

We’re at that point with “software defined”.  A lot of people simply check out when confronted with the phrase because it has become nebulous.  With that in mind, I want to make clear how I see “software defined” in the context of the data center.

In a software defined data center (SDDC), all infrastructure is abstracted in some way from the underlying hardware – generally through virtualization – pooled, and the services that operate in the environment are completely managed in software.

You may wonder how this really differs from how legacy data centers have been constructed and managed.  After all, when it comes down to it, those environments were managed with software, too, right?  The sections below outline some of the key differences between traditional data centers and what is intended in the software defined data center.

Software Defined Servers

Back in the early 2000s, VMware introduced the world to the software defined server, more commonly called a virtual machine.  In this scenario, workloads no longer run directly on hardware, but instead run on software-based constructs that sit atop the virtualization/abstraction layer.  Administrators now interact with physical hardware only as a means to install this abstraction layer; pretty much all other administration is handled through tools provided by the hypervisor vendor and by third parties that integrate with the hypervisor.  The hardware has become almost an afterthought: as long as it’s x86-based and can hold as much RAM as we need, it’s pretty much good to go.  Prior to the days of virtualization, administrators went through painstaking server configuration steps every time they bought new systems.

The benefits of having moved many environments to a virtualized state have been incredible.  Once something has been moved entirely into software, it becomes incredibly easy to manipulate.  This is evidenced by tools such as vMotion, which brought to IT a level of flexibility and availability that was difficult to achieve with physical systems.  Further, tools such as vMotion and Hyper-V’s Live Migration power automated workload placement systems that can be configured to improve the overall availability of a computing environment and that provide heretofore-unseen levels of flexibility.  Moreover, this software-driven flexibility has enabled numerous new ways to handle critical disaster recovery and business continuity needs, and an entire market of vendors has sprung up to help companies leverage their virtualization environments to this end.

More importantly, by abstracting servers from the underlying hardware, organizations now have the opportunity to automate more routine tasks, including the creation and configuration of new virtual machines, among many other opportunities.  Such ability begins to provide a strong foundation on which to automate more IT operations and manage resources more centrally and to help IT focus more on business needs.  These are all critical elements and are underlying characteristics of software defined data centers.
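
To make the idea concrete, here is a minimal sketch of what that kind of automation might look like, assuming a hypothetical REST management endpoint exposed by the hypervisor layer; the URL, payload fields, token, and template name are illustrative placeholders rather than any particular vendor’s API.

```python
# Minimal sketch of automating virtual machine creation against a
# hypothetical hypervisor management API. The endpoint, payload fields,
# and token handling are illustrative assumptions, not a specific
# vendor's API.
import requests

MGMT_API = "https://hypervisor-mgmt.example.local/api/v1"  # hypothetical endpoint
API_TOKEN = "replace-with-real-token"                       # hypothetical auth

def create_vm(name: str, cpus: int, memory_gb: int, network: str) -> str:
    """Request a new virtual machine and return its identifier."""
    payload = {
        "name": name,
        "cpus": cpus,
        "memoryGB": memory_gb,
        "network": network,
        "template": "base-linux",  # assumed golden image name
    }
    resp = requests.post(
        f"{MGMT_API}/vms",
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

if __name__ == "__main__":
    # Provision three identical web servers without touching physical hardware.
    for i in range(1, 4):
        vm_id = create_vm(f"web-{i:02d}", cpus=2, memory_gb=8, network="prod-vlan-10")
        print(f"Requested web-{i:02d}: {vm_id}")
```

Because every virtual machine is just an API call away, the same loop that builds three servers could build three hundred, which is exactly the kind of repeatable operation that underpins the software defined data center.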

Software Defined Storage

Software defined storage (SDS) is a bit harder to nail down than software defined servers, but SDS is a key component of the software defined data center, and it’s the real deal: a lot of vendors are working hard to build SDS products that can live up to the hype and expectations.

Some feel that storage’s hardware-centric nature means the resource can never really be considered software-based at all.  Others point to the fact that many purported SDS vendors sell appliance-based hardware.  The thinking goes like this: if the vendor has to sell hardware, it’s automatically not SDS.

The flaw in these arguments is the focus on the underlying hardware.  In most of the hardware/software bundles on the market today, the hardware is based on off-the-shelf commodity components and can be relatively easily swapped out with components from other vendors.  I challenge the reader to attempt to swap out hardware in proprietary storage systems with something not provided by the original vendor.  In most cases, proprietary storage systems have embraced their uniqueness, and there are a lot of custom parts in those solutions.  While that approach worked well for a number of years, as commodity hardware has become more robust, it’s become possible to move into software many of the functions that used to require proprietary custom chips.  This has made some modern storage options much less expensive than systems sold even just a few years ago and has also made these systems much more inclusive when it comes to features.  Now, implementing high-end features, such as deduplication, is a software chore rather than a hardware one, and many vendors automatically include these features in their default configurations.  It’s a major win for the customer.

VMware’s VSAN has become one of the better-known SDS offerings and, recalling the definition of the software defined data center, this product meets the requirement.  With VSAN, the hypervisor – which hosts the kernel-based VSAN management software – aggregates the storage from across all of the vSphere hosts.  From there, the software pools all of that storage together and presents it back to the hosts as a single pool of storage on which virtual machines can run.  The storage layer continuously manages this resource pool and provides the administrator with a complete policy framework so that storage resources can be leveraged in ways that make sense to the business.
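
As a rough illustration of the pooling and policy idea (a conceptual toy, not VSAN’s actual implementation or API), the sketch below aggregates the free capacity of a few hypothetical hosts into one pool and places a VM’s storage according to a simple replica-count policy.

```python
# Toy model of the software defined storage idea: local capacity from each
# host is aggregated into one pool, and a per-VM policy (here just a replica
# count) drives placement. Host names and sizes are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity_gb: int
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class StoragePool:
    hosts: list = field(default_factory=list)

    def total_free_gb(self) -> int:
        return sum(h.free_gb() for h in self.hosts)

    def place(self, vm_name: str, size_gb: int, replicas: int) -> list:
        """Place `replicas` copies of a VM's storage on distinct hosts."""
        candidates = sorted(self.hosts, key=lambda h: h.free_gb(), reverse=True)
        chosen = [h for h in candidates if h.free_gb() >= size_gb][:replicas]
        if len(chosen) < replicas:
            raise RuntimeError(f"Policy for {vm_name} cannot be satisfied")
        for h in chosen:
            h.used_gb += size_gb
        return [h.name for h in chosen]

pool = StoragePool([Host("esx-01", 2000), Host("esx-02", 2000), Host("esx-03", 1000)])
print(pool.place("app-db", size_gb=500, replicas=2))   # e.g. ['esx-01', 'esx-02']
print("Free capacity:", pool.total_free_gb(), "GB")
```

The point of the model is that the administrator expresses intent (size and protection level) while the software layer decides where the bits actually land, which is the essence of policy-driven storage.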

Software Defined Networking

Consider for a moment the traditional network environment.  Distributed throughout the entire organization are devices with varying levels of intelligence, with that intelligence closely corresponding to the OSI model layer at which the device operates.  The higher up the stack a device operates, the more brains it needs to accomplish its duties.  Hubs, which had only enough intelligence to repeat incoming traffic out of every port, operate at layer 1 and are rarely used anymore.  At layer 2, switches gain some intelligent switching capabilities, at least for traffic that originates on and is destined for the local network.  Even at layer 2, though, there are a lot of administrative opportunities to improve overall traffic management.  Layer 3 – where routers reside – adds lots of intelligence regarding traffic, including routing between networks, prioritization, and a lot more depending on the router model.  And the process continues up the stack.

As you look at the network, the distribution layer consists primarily of layer 2 and layer 3 devices that make sure traffic gets to where it needs to go.  Sure, higher layers are important, but those are generally in place for security or application performance reasons, not simply to move traffic from point A to point B.  The part of the network that actually moves traffic through the individual ports is known as the data plane.

Moreover, these layer 2 and 3 devices each have their own brain.  When changes need to be made to the network, each device is reconfigured to accommodate that change.  Such changes might be the addition of a new VLAN or the implementation of some kind of quality of service.  Just as was necessary with physical servers in the old days, whenever a global change needs to be made, it often requires touching each individual affected device.  The policies and configuration that are in place on each individual device are collectively referred to as the control plane.  Often, the intelligence in switches and routers is implemented in custom-engineered chips.

Software defined networking operates by decoupling the data plane and the control plane.  In this context, the devices to which endpoints connect – which are generally switches today – get lobotomized.  In essence, their brains are removed from their bodies, but the body continues to function with the device continuing to pass traffic just like it always did.  The device’s intelligence – the aforementioned control plane – is transferred to a centralized controller which manages all of the devices on the network.  Rather than requiring custom hardware to handle networking decisions, the control plane is implemented in software.  As has been the case for other such hardware to software abstraction technologies, this change brings a number of new capabilities and opportunities.

Software is far more flexible than hardware and can be changed at will. Sure, firmware can be updated in legacy devices, but it’s still a less flexible environment than today’s x86-based server environments provide.  In SDN, the centralized controller handles the management and manipulation of all of the various edge devices.  These edge devices, since they no longer require specialized management hardware to operate, can run on commodity hardware.  In short, edge devices become servers, which are managed from the central controller.  The edge devices – the data plane – simply carry out the orders that originate from the control device – the control plane.  As was the case with server virtualization, software defined networking implements a software layer atop the entire network.  In SDN, the control plane issues commands – basically determining how traffic will flow – and sends orders to the edge devices to manage traffic as prescribed.
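
The sketch below illustrates that split in miniature, with hypothetical Controller and Switch classes standing in for a real SDN controller and its edge devices: the controller owns the policy and pushes simple match/action rules, while the switches merely store and apply them. It is a conceptual sketch, not the API of OpenFlow, OpenDaylight, or any other real controller.

```python
# Conceptual sketch of the control/data plane split: a central controller
# holds the policy (control plane) and pushes simple match/action rules to
# switches, which do nothing but look up and forward (data plane).
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match_dst: str   # destination network, e.g. "10.1.2.0/24"
    out_port: int    # port to forward matching traffic out of

@dataclass
class Switch:
    name: str
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # The switch only stores and applies rules; it makes no decisions itself.
        self.rules.append(rule)

    def forward(self, dst: str):
        for rule in self.rules:
            # Crude prefix match for illustration only.
            if dst.startswith(rule.match_dst.rsplit(".", 1)[0]):
                return rule.out_port
        return None  # no matching rule: punt the decision to the controller

class Controller:
    """Centralized control plane: owns the topology view and the policy."""
    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, dst_net: str, out_port: int) -> None:
        # One change here reconfigures every edge device at once,
        # instead of logging in to each switch individually.
        for sw in self.switches:
            sw.install(FlowRule(dst_net, out_port))

edge = [Switch("edge-01"), Switch("edge-02"), Switch("edge-03")]
ctrl = Controller(edge)
ctrl.apply_policy("10.1.2.0/24", out_port=4)
print(edge[1].forward("10.1.2.55"))  # 4 -- decided centrally, executed locally
```

Note how the policy lives in exactly one place: adding a network or changing a forwarding decision is a single call on the controller rather than a configuration change on every device.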

The Big Software Defined Picture

Today’s IT departments are being forced to be more flexible and adaptable than ever before while, at the same time, operational costs are scrutinized to ensure a reasonable total cost of ownership for all technology systems and assets.  Moreover, businesses want IT to bring operational efficiency to IT’s own operations, just as IT has been trying to do for business units for decades.  During that time, though, many IT departments have built what can often be described as a house of cards that requires endless attention.  Many administrators spend their days on repeatable tasks, such as provisioning LUNs, building physical servers, and modifying network hardware to accommodate new needs.

Virtualization and the rise of software defined resources are bringing – or have brought, in the case of server virtualization – to IT tools that help IT become more efficient and better able to meet critical business requirements in a timely manner.  It no longer requires weeks of waiting to stand up a new server, for example.  This process can be handled in mere minutes and, with the right management tools in place, can be accomplished by end users through self-service portals.  As time goes on, the same ease of management will come to the storage and network resource layers.  It’s software – that abstraction layer – that is enabling these opportunities.

Think about this: Before server virtualization, organizations may have had servers all from the same vendor, but from different generations, with each model requiring special handling and different drivers.  In a 100% virtualized data center, every single virtual machine runs atop an identical abstraction layer.  Even if the underlying hardware isn’t identical, this fact is mostly hidden from the virtual machines.  As such, there is little ongoing special handling that needs to take place to manage individual virtualized workloads.

User self-service is also a hot topic these days.  This is partially due to the ease with which business units can get their needs met with a credit card and a cloud service.  This expectation is forcing IT departments to implement systems that help business units achieve lower time-to-value for new services, hence the need for self-service portals.  These kinds of portals, however, depend entirely on being able to interact with the data center’s software layer.  The portal translates what users have asked for into instructions for the software layer, which carries them out within the confines of policies established by the IT organization.  The more resources that exist in software, the more flexible and user-centric/business-centric the IT department can be.
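
A simplified sketch of that flow appears below; the policy limits, request fields, and approval logic are made-up placeholders meant only to show a portal request being validated against IT-defined policy before the software layer is asked to provision anything.

```python
# Sketch of a self-service request flowing through a policy check before the
# software layer provisions anything. The limits and the provisioning call
# are illustrative placeholders for whatever orchestration tool is in use.
from dataclasses import dataclass

@dataclass
class Policy:
    max_cpus: int = 8
    max_memory_gb: int = 32
    allowed_networks: tuple = ("dev-vlan-20", "test-vlan-30")

@dataclass
class PortalRequest:
    requester: str
    cpus: int
    memory_gb: int
    network: str

def handle_request(req: PortalRequest, policy: Policy) -> str:
    # The portal never touches hardware; it only validates the request and
    # hands it to the software layer (e.g. the create_vm() sketch earlier).
    if req.cpus > policy.max_cpus or req.memory_gb > policy.max_memory_gb:
        return f"Rejected: {req.requester} exceeded the resource policy"
    if req.network not in policy.allowed_networks:
        return f"Rejected: {req.network} is not available for self-service"
    # create_vm(f"{req.requester}-vm", req.cpus, req.memory_gb, req.network)
    return f"Approved: provisioning {req.cpus} vCPU / {req.memory_gb} GB for {req.requester}"

print(handle_request(PortalRequest("marketing", 4, 16, "dev-vlan-20"), Policy()))
print(handle_request(PortalRequest("marketing", 16, 64, "prod-vlan-10"), Policy()))
```

The design choice worth noticing is that IT defines the guardrails once, in software, and the portal enforces them on every request, which is what makes self-service safe to offer at all.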

Summary

As software defined everything continues to gain momentum, and as these self-service orchestration systems mature and become even more tightly coupled to the software layer, IT can more easily transition itself from a department that constantly iterates the same rote tasks to one that acts as a broker of services for the organization, far more focused on the business than on the underlying bits and bytes of the data center.

Comments

  1. -Disclosure: NetApp employee, but this rant is most definitely my own and probably doesn’t reflect the views of my employer-

    —Begin tl;dr RANT—

    While I agree with the summary and the overall thrust of the article there are a number of assertions in here that are in my opinion, just plain wrong.

    This article seems to be feeding a continuation of the “Specialised Software = Good, Specialised Hardware = Bad” theme that seems to be dominating the current industry debate about the design and implementation of the datacenter. While that’s understandable given that the source of the whole “Software Defined” datacenter debate was a software vendor, it should be noted that the concept of “software defined” existed before VMware’s purchase of Nicira. “Software defined” really started with the OpenFlow networking initiative, which was the first clean example of a networking design based on the separation between control and data planes. But it’s worth noting that it depended on programming some very specialised intelligent hardware in the data plane (technically, something called TCAMs) to do what needed to be done.

    If we’re going to use an analogy of ripping brains out of hardware, which forms so much of this debate, then what was removed in the OpenFlow SDN model was (only) the prefrontal cortex. That is the policy part of the brain that does things like evaluating whether internet articles are accurate and deciding how to respond to them. What remained behind in the hardware (the TCAMs that held the flow tables) was the equivalent of the limbic system and the cerebellum – the parts that make lightning-fast emotional decisions and that hold what is called procedural memory, which remembers how to ride a bike or walk. The brain was carefully separated, rather than ripped out, and a lot of intelligence remained behind. Some argue that OpenFlow made some incorrect bets about the expected rate of increase in TCAM capability, which invalidated much of the architectural advantage it was thought to have, and others have pointed out that the “one controller to rule them all” idea simply doesn’t scale in an “internet of things” world. Regardless of the implementation details, though, the main business benefit this gave was the programmability and flexibility of the infrastructure. The subsequent standardisation, and hence increased competition, in the hardware layer was a useful but secondary effect.

    Now, I was going to respond to each of the other points I found personally problematic, but that extended to something longer than the article itself, so here are a few examples I couldn’t help responding to.

    e.g. “implementing high-end features, such as deduplication, is a software chore rather than a hardware one”

    The two main implementations of deduplication (Data Domain and ONTAP), and pretty much every other one I can think of, were implemented entirely in software running on Intel hardware. In short, this has always been a “software chore”. Yes, these functions were typically embedded in a vertically integrated appliance to ease integration and improve quality, but in the case of ONTAP the software is available on a broad variety of non-NetApp hardware, including a VSA.

    “I challenge the reader to attempt to swap out hardware in proprietary storage systems with something not provided by the original vendor.”

    Products like NetApp V-Series, IBM SVC, and HDS USP/VSP all allowed the underlying storage capacity (which is the majority of the cost of the hardware) to be procured from other vendors. It used to be called storage virtualisation back then, so this capability has been commonly available to customers for close to a decade; there isn’t much new here.

    “it’s become possible to move into software many of the functions that used to require proprietary custom chips”

    Again, you make it sound like this is both new and an intrinsically desirable thing. RAID has been implemented in software on the majority of modular storage arrays for about the last decade or so. On the other hand, the flash translation layers (which are basically filesystems) that live in almost every SSD have to be implemented in specialised ASICs for cost, power, and performance reasons. As we increase the amount of flash and decrease the network latency in and between servers, while living with increasingly constrained power and cooling requirements, moving more functions back into hardware makes increasing sense. As evidence of this, look at the increasing use of GPUs and FPGAs in HPC and cloud-scale hardware designs, and the new chips being designed by Intel and IBM.

    Which leads me on to the last and, for me, most problematic assertion in the article:

    “Software is far more flexible than hardware and can be changed at will.”

    I won’t argue the flexibility point (well, not much), but the “changed at will” part is simply untrue, or at least no more true of software than of hardware. Ask people who are locked into a particular database vendor, backup application, or hypervisor/virtualisation technology whether they can “change at will”. There are “legacy” applications that businesses simply can’t move away from, applications that have remained the same while the underlying hardware has changed multiple times. In fact, I’d argue that software is far less easily changed than hardware.

    Overall, though, I think there are a lot of valid points, but the real problem originates in the way the definition was set up in the first place.

    “In a software defined data center (SDDC), all infrastructure is abstracted in some way from the underlying hardware – generally through virtualization – pooled, and the services that operate in the environment are completely managed in software.”

    it’s that last part ….

    “the services that operate in the environment are completely managed in software.”

    That is the bone of contention here. I would personally prefer:

    “the services in the environment are managed by people who are aided by automation policies defined in software and implemented in the most efficient combination of software, firmware and hardware, with the flexibility to rapidly and non-disruptively alter that combination based on changes in technology and business requirements”

    It’s not as simple or pithy as your definition, but good datacenter design isn’t something that lends itself to solutions that can be encapsulated in a pithy definition. I think that’s why I wish people talked about a policy defined datacenter, or a programmable datacenter, as both of those are aligned to business outcomes rather than driving a boatload of marketing around a term (“software defined”) that frames the debate about the future of the datacenter so that its natural winner will inevitably be a software vendor.

    Maybe I’m hung up on details, and maybe we agree more than we disagree, but the current “What is software defined XXX” debate is beginning to annoy the crud out of me, and I felt the need to rant.

    —End RANT—
