How hyper-converged infrastructure trumps converged

Alan Browning

Friday 27 January 2017

In a world where technology is changing so rapidly, hyper-converged is the only way to architect the new data centre. It proves that simplicity is the ultimate sophistication.

The first disruptive technology since 2004 has finally arrived

Disruption is the new currency in the data centre. Yet since 2004, when the advent of the hypervisor transformed the data centre from an undersubscribed physical server estate into a far more cost-effective virtual one, no transformation of comparable scale has taken place.

Enterprises have continued to deploy the traditional three-tier architecture, connecting compute to storage to networking. As speeds and feeds have improved, this has driven the development of all-flash arrays and software-defined storage subsystems, largely because data is growing at a rate that leaves companies unable to provision storage fast enough to deliver the performance their customers require.

Another factor is that internal IT organisations were never able to solve the storage bottleneck for the long term. As these cumbersome systems grew ever more complex to manage, customers' operational costs went through the roof, driven by the need to employ expensive specialists to run them.

Around 2011, converged systems arrived to take the complexity of managing these high-end systems away from the internal IT operations team and, for an incredibly steep price, manage and maintain them on the customer's behalf. This had value, to an extent, but because of the cost, and because customers refused to let go, the success of converged systems was fleeting at best. Most customers felt they could replicate the build and design themselves, as they often already owned all the building blocks of a converged solution. The deeper issue converged systems never addressed was that they still deployed the traditional three-tier architecture.

How hyper-convergence changed the game

In early 2014, hyper-convergence entered the marketplace and at last upended the tried and true, inverting the paradigm by collapsing storage, networking and compute into a single 2U form factor. Just as significantly, it put the software in control of the hardware. This threw the traditional way of deploying infrastructure out the window and replaced it with a new one-tier architecture that is easy to scale, maintain and manage.
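To make the one-tier idea concrete, here is a minimal sketch of linear scale-out. The node specifications are entirely hypothetical, illustrative figures rather than any vendor's: each 2U node added to the cluster grows compute and storage together, so capacity planning collapses into the single question of how many nodes to buy.

```python
from dataclasses import dataclass

# Hypothetical figures for illustration only; real node specs vary by vendor.
@dataclass
class HCINode:
    cpu_cores: int = 32       # compute contributed by one 2U node
    storage_tb: float = 20.0  # local storage pooled into the cluster

class HCICluster:
    """One-tier scale-out: every node added grows compute AND storage together."""

    def __init__(self) -> None:
        self.nodes: list[HCINode] = []

    def add_node(self, node: HCINode) -> None:
        self.nodes.append(node)  # scaling is simply "add another node"

    @property
    def cpu_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

cluster = HCICluster()
for _ in range(4):  # a typical starting block of four nodes
    cluster.add_node(HCINode())
print(cluster.cpu_cores, cluster.storage_tb)  # 128 cores, 80.0 TB
```

Contrast this with three-tier, where compute and storage sit on separate purchasing and sizing curves and must be grown independently.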

Destroying the silos

The greatest problem hyper-convergence solves is the silos within organisations. The virtualisation administrator becomes the storage, network and compute administrator all rolled into one, able to provision services at the same rate as public cloud providers provision theirs.

Allowing the software to control the hardware means resilience can be built throughout the infrastructure. The operational team required to manage the platform can be greatly reduced, and change windows are no longer required, since all upgrades can be performed during normal business hours. This proves, once again, that simplicity is the ultimate sophistication.
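Why business-hours upgrades are plausible: because the platform replicates data across nodes, any single node can be drained, upgraded and rejoined while the rest of the cluster keeps serving. A minimal rolling-upgrade sketch follows, with hypothetical node names and a stand-in for the actual upgrade step:

```python
import time

# Hypothetical cluster of four nodes; real HCI platforms automate this flow.
nodes = ["node-1", "node-2", "node-3", "node-4"]

def upgrade_rolling(nodes: list[str], pause_s: float = 0.1) -> None:
    """Upgrade one node at a time so the cluster stays in service.

    Assumes data is replicated across at least two nodes, so taking any
    single node offline does not interrupt running workloads.
    """
    for node in nodes:
        print(f"draining {node}")   # migrate workloads off the node
        print(f"upgrading {node}")  # apply the software/firmware update
        time.sleep(pause_s)         # stand-in for the actual upgrade work
        print(f"rejoining {node}")  # node returns to the pool
        # data re-syncs before the next node is touched, so resilience
        # is preserved throughout the rolling upgrade

upgrade_rolling(nodes)
```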

Given how hyper-convergence addresses these business challenges, it is a far superior approach to the old, converged way of managing and deploying infrastructure. The complexity of managing the solution is reduced, the business can react at the speed its customers demand, and the IT organisation ultimately operates at a higher level in the value chain.
