
Hyperconvergence: Jack-Of-All-Trades or Master Of None?

During the last week of February 2014, ActualTech Media broadcast an event we called a Megacast, with hyperconverged infrastructure as its topic. During the event, we received a ton of questions from the audience, including this one:

The hyperconverged model seems like a jack-of-all-trades option, but could it also be a master of none? What are we losing by moving from best-of-breed hypervisor, storage, server, networking, management, security, and logging solutions?

This is a fantastic question, and one I hear quite often. After all, conventional wisdom suggests that nothing can be good at everything, so there must be some kind of tradeoff with something like hyperconvergence. And, yes, there are definitely tradeoffs that come with adopting hyperconverged infrastructure solutions.

Linear Scaling

One common complaint about hyperconverged infrastructure revolves around the way it scales. In most cases, with some exceptions, hyperconverged infrastructure is a scale-out solution that requires adding CPU and RAM each time new storage is added. For most organizations, the first resource to be exhausted is storage capacity, and there is a perception that adding more CPU and RAM just to expand storage is wasteful. Further, there is concern that some hyperconverged systems only scale in fixed increments. In essence, you're locked into whatever "step size" the manufacturer dictates, although most manufacturers do provide the ability to configure individual resources to a point.

VSAs Need Resources

So, yes… you do add all resources at once. But step back for a second: this is the very definition of scale-out. When you scale out, you add resources roughly linearly so that the additional resources don't overwhelm existing ones and create performance challenges. And since most hyperconverged solutions leverage virtual storage appliances (VSAs) of some kind, there have to be CPU and RAM resources available on each new appliance to support that need. After all, the VM that runs the VSA needs RAM to hold things like deduplication tables and other metadata, and it needs CPU to carry out storage management functions.
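
To make the step-size tradeoff concrete, here is a minimal sketch in Python. The node specification and VSA overhead figures are entirely hypothetical (real appliance sizes vary by vendor and model); the point is only to show how CPU, RAM, and storage grow in lockstep as nodes are added, with a slice of each node reserved for the VSA:

```python
# Hypothetical node spec and per-node VSA overhead; real figures vary by vendor.
NODE = {"cpu_cores": 24, "ram_gb": 256, "storage_tb": 20}
VSA_OVERHEAD = {"cpu_cores": 4, "ram_gb": 32}  # reserved per node for the VSA

def cluster_resources(node_count):
    """Return total raw resources and what remains for workloads."""
    total = {k: v * node_count for k, v in NODE.items()}
    workload_cpu = total["cpu_cores"] - VSA_OVERHEAD["cpu_cores"] * node_count
    workload_ram = total["ram_gb"] - VSA_OVERHEAD["ram_gb"] * node_count
    return total, workload_cpu, workload_ram

for nodes in (3, 4, 5):
    total, cpu, ram = cluster_resources(nodes)
    print(f"{nodes} nodes: {total['storage_tb']} TB raw storage, "
          f"{cpu} cores / {ram} GB RAM available for workloads")
```

Notice that adding a node for its storage also adds a fixed block of compute, needed or not; that is the "step size" in action.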

The bottom line for scaling: With traditional environments, administrators can, in fact, perfectly tailor resource needs. With hyperconverged infrastructure, administrators get somewhat less control over individual resource scaling, although some hyperconverged vendors do enable more resource customization than others.

The Hypervisor

In a traditional environment, you can run any hypervisor you want, although the vast majority of organizations continue to run VMware vSphere. In a hyperconverged infrastructure scenario, hypervisor choice does become more limited, depending on the selected vendor. Some hyperconverged vendors support only VMware. Some support only KVM. Some support only Hyper-V. And others support various combinations of the three.

Of course, this means there is less hypervisor choice in the world of hyperconvergence, but I believe this situation will be remedied over time, at least to a point. Some hyperconvergence vendors that support only one hypervisor today have roadmaps that include support for additional hypervisors. Others, however, have built their models around a particular hypervisor, so I don't expect every vendor to adopt every hypervisor.

The bottom line on hypervisors: With traditional environments, all options are on the table. With hyperconvergence, you need to be comfortable with the hypervisor choices your selected vendor has made and supports.

More about this later.

Servers

Virtualization has already completely commoditized the server. Sure, we can go out and buy whatever servers we want in traditional environments. With hyperconvergence, we need to use the hardware platform provided by the vendor, although some hyperconverged vendors have partnered with every major server vendor, thus making the argument largely moot in this case.

Of course, there is also the matter of resource configuration that was discussed earlier in this article. Those same facts hold true here.

The bottom line on servers: Individual resource configuration may not be available to the level desired by some, but when it comes to server vendor, does it really matter all that much anymore? Can a job really not get done if the server vendor needs to change to support a hyperconverged solution?

Storage

The discussion around storage is a bit different, since one of the primary goals of hyperconverged infrastructure is to eliminate the SAN and bring storage back to the server. From there, software-defined storage mechanisms abstract and pool the storage and present it back to all members of the cluster. In fact, storage complexity was among the early reasons hyperconverged infrastructure was envisioned at all, so it makes sense that storage in such environments looks very different than in traditional environments.
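
As a rough illustration of that pooling model, here is a small sketch. The capacities and the simple two-way replication scheme are assumptions for illustration only; real products use various replication or erasure-coding schemes, but the shape of the math is similar:

```python
# Hypothetical sketch of software-defined storage pooling across a cluster.
node_local_storage_tb = [20, 20, 20, 20]  # local storage each node contributes
replication_factor = 2                    # each block kept on two different nodes

raw_pool_tb = sum(node_local_storage_tb)          # one cluster-wide pool
usable_tb = raw_pool_tb / replication_factor      # protection costs capacity

print(f"Raw pooled capacity: {raw_pool_tb} TB")
print(f"Usable capacity at RF={replication_factor}: {usable_tb:.0f} TB")
```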

I would question whether there is actually any less flexibility in storage in hyperconverged environments than in traditional ones. Of course, the obvious exception was discussed earlier: scaling storage means scaling other resources as well. With hyperconverged infrastructure, the goal is to simplify the storage paradigm while improving performance, which has often been abysmal in traditional environments.

Other Resources

I'm not going to keep going resource by resource, because there is a larger consideration here. To be clear: yes, there are tradeoffs with hyperconverged infrastructure. That is a fact, and I've identified some of the differences between traditional and hyperconverged infrastructure.

Best of Breed?

Hyperconverged infrastructure is not intended to provide best of breed in every single resource area. It is, however, intended to provide a sufficient experience in all of the areas that it touches. So, for systems that include replication, don’t expect the same level of finesse that you get from a dedicated replication solution.

Here's the skinny on hyperconvergence: it's not about the technology. It's about the outcomes. When done right, hyperconverged infrastructure has the potential to help IT organizations transform their operations. Scaling is easier, and operations can be streamlined. Yes, you may have to buy some extra CPU and RAM to get that ease of scale. You might have to limit your hypervisor selection to get some of that operational improvement. The outcome is that the underlying technical complexity is largely hidden in favor of giving IT an overall easier system to manage.

Now, for those concerned about waste in scaling, compare the potential operational improvements to a bit of "wasted" CPU when adding a new node. There is likely far more to be saved in operational cycles than will be lost on one-time CPU costs. Sure, it's not a perfect solution, but neither is running an IT shop chock-full of complexity.
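
To frame that comparison, here is a back-of-envelope sketch. Every figure in it is a hypothetical placeholder, not a benchmark; substitute your own costs:

```python
# One-time "wasted" hardware vs. recurring operational savings (all numbers
# are hypothetical placeholders for illustration).
excess_hardware_cost = 3000        # extra CPU/RAM bought with a storage-driven node add ($)
admin_hours_saved_per_month = 20   # simpler scaling, provisioning, and storage management
loaded_hourly_rate = 75            # fully loaded administrator cost ($/hour)

monthly_savings = admin_hours_saved_per_month * loaded_hourly_rate
payback_months = excess_hardware_cost / monthly_savings

print(f"Monthly operational savings: ${monthly_savings}")
print(f"Excess hardware cost recouped in about {payback_months:.1f} months")
```

If the operational numbers hold anywhere near that range, the one-time hardware "waste" fades quickly into the noise.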

Change Requires, Well, Change

We've gone through many changes in IT over the decades. From mainframes to distributed networks to virtualization, each successive generation of technology brings with it pros and cons. Organizations must carefully consider these positives and negatives in order to determine whether the new direction carries sufficient benefits to outweigh the potential challenges.

Is Hyperconvergence the Right Solution?

So all of that leads us to the question of whether hyperconvergence is a jack-of-all-trades to the point that it masters nothing. Personally, I believe that hyperconvergence, as a trend and as an architectural option, is here to stay, because the potential benefits can far outweigh any challenges that arise. Organizations considering hyperconvergence do have to make strategic decisions about what they're willing to give up in order to attain those benefits.

I see too much of the discussion focused on inputs and too little on outcomes. The C-suite really and truly doesn't care whether individual resources can scale independently. What they do care about is that IT has the tools and means to quickly and easily expand the environment without creating a budget crisis or incurring downtime. The C-suite doesn't care whether the organization is running vSphere, Hyper-V, or KVM. They do care about whether new business workloads can be spun up as necessary and whether those workloads can be protected.

With a jack-of-all-trades solution like hyperconvergence, the only outcome that needs to be mastered is whether or not the solution meets business needs in a way that makes sense.
