Why hyper-convergence is the only option

Alan Browning

Monday 5 December 2016

Hyper-convergence is a term heard more and more frequently across the IT landscape. It refers to the integration, or convergence, of highly reliable server platforms with storage, networking and virtualisation resources.

Simply put, the hyper-converged model is swiftly becoming the data centre of the future, and the sooner companies align to this strategy, the quicker they will be able to simplify operations while deriving true business value.

Addressing CIO challenges

1.    The clear and present danger of rogue/shadow IT

With the advent of cloud computing, IT-empowered youngsters – used to an always-on, always-connected world – are entering the workplace. The new danger IT departments face is that if they cannot deliver services at the same rate these employees can procure them for themselves, the employees will find a way to provision the services on their own. It is dangerous to employ technologically advanced people and assume they will wait days for IT services to be provisioned. And within large organisations – with their bureaucratic processes and resultant long lead times for IT services – technologically savvy employees will seek workarounds that lead to rogue IT practices, security breaches and potential reputational damage.

2.    Cloud: public, private or hybrid … argh!

There is a massive disconnect between what customers hear about the cloud from OEMs and what they hear from systems integrators. Cloud is being hyped as the “silver bullet” that will solve every IT challenge – present, legacy and future. However, the reality is that while public cloud providers are shouting from the rooftops that this is the best solution for every enterprise customer, customers are resisting because of a number of complexities:

  • Public cloud is expensive.

All public cloud providers work on a basic cost model: a charge per hour or per GB for memory, compute and storage, with each component priced at a seemingly nominal rate. An often overlooked metric is the cost of egress data – ingress data going into the cloud is free, but egress data attracts a charge that can very quickly break the TCO model (a rough illustration follows after this list). Consuming services priced in US dollars adds exchange-rate risk on top of this.

  • Connectivity is expensive.

Most solutions that enterprises deploy are over-architected: the customer builds resiliency across the design, often to the point of building redundancy across telco providers. Connectivity is not cheap in South Africa, and this again blows the cost model out of the water – the only ones benefitting are the connectivity providers.

  • An investment in cloud services means a disinvestment in assets.

Publicly listed companies carry their data centres as fixed assets on their balance sheets. Moving to the public cloud means disinvesting in traditional brick-and-mortar assets and, in so doing, reducing stockholder equity.

  • With public cloud, you either need to be all in, or all out.

In a split environment, where accountability is often lacking, it can happen that the public cloud provider and the internal IT staff both declare their innocence when a problem arises. The critical issue is to diagnose the problem and get the services back online quickly, which becomes complicated when two IT service providers are involved.
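To make the egress point above concrete, here is a minimal back-of-the-envelope sketch in Python. The rates used are purely illustrative assumptions, not any provider’s actual pricing; the point is simply that egress cost scales with the data moved out, not with the size of the deployed footprint.

    # Back-of-the-envelope public cloud cost estimate.
    # All rates are illustrative assumptions in USD, not real provider pricing.

    COMPUTE_PER_VM_HOUR = 0.10   # assumed cost per VM per hour
    STORAGE_PER_GB_MONTH = 0.05  # assumed cost per GB stored per month
    EGRESS_PER_GB = 0.09         # assumed cost per GB of data leaving the cloud
    HOURS_PER_MONTH = 730

    def monthly_cost(vms: int, storage_gb: int, egress_gb: int) -> dict:
        """Split a monthly bill into its compute, storage and egress components."""
        compute = vms * COMPUTE_PER_VM_HOUR * HOURS_PER_MONTH
        storage = storage_gb * STORAGE_PER_GB_MONTH
        egress = egress_gb * EGRESS_PER_GB  # ingress is typically free, egress is not
        return {"compute": compute, "storage": storage, "egress": egress,
                "total": compute + storage + egress}

    # A modest footprint that pushes a lot of data back out to users:
    print(monthly_cost(vms=10, storage_gb=2_000, egress_gb=20_000))
    # With these assumed rates, egress alone (~$1,800) exceeds compute (~$730)
    # and storage (~$100) combined.

If a bill like this is priced in US dollars, currency movement then amplifies whichever component already dominates.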

So how does hyper-convergence solve these challenges?

While the flexibility and scalability of public cloud is definitely the model IT departments should follow when provisioning services to the business, why would a client not put down resilient infrastructure that provides the same benefits as public cloud, but manage it from within their own data centres?

A good example of this is how Supercell handled the growth in popularity of its online game Clash of Clans. The game was launched entirely on a public cloud platform, but as it gained popularity it also became increasingly expensive to run there. To maximise profits, Supercell started to invest in internal infrastructure while bursting to the cloud every time it launched a new update, taking advantage of the scalability a public cloud provider offers. This is a great way to consume public cloud services, and while most enterprise customers are not game developers – so the business model does not map directly onto them – the underlying web 2.0 architecture absolutely does.
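The bursting pattern described above reduces to a simple rule: serve the steady state from owned infrastructure and rent public cloud capacity only for the overflow. A minimal sketch, with purely hypothetical capacity figures:

    def capacity_plan(demand_vms: int, on_prem_capacity: int):
        """Return (vms_on_prem, vms_burst_to_cloud) for a given level of demand."""
        on_prem = min(demand_vms, on_prem_capacity)
        burst = max(0, demand_vms - on_prem_capacity)  # only the overflow is rented
        return on_prem, burst

    # Steady state fits on owned kit; a launch-day spike bursts to public cloud.
    print(capacity_plan(demand_vms=80, on_prem_capacity=100))   # (80, 0)
    print(capacity_plan(demand_vms=350, on_prem_capacity=100))  # (100, 250)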

Typically, when customers procure new infrastructure – whether storage, compute or networking assets – it is architected so that future capacity is catered for over the depreciation life cycle. Should the architect miscalculate the capacity requirements, scaling the architecture without incurring downtime becomes incredibly expensive.

Hyper-convergence solves this challenge by allowing the IT infrastructure to scale in an almost “Lego block” approach, where just enough capacity can be procured as needed in a simple, modular manner. This matches IT budgets really well: vendors and OEMs often want to sell a “Rolls-Royce” product when the client’s budget dictates “just enough IT”. This way of architecting a solution allows customers to scale as their business requirements grow or shrink, mirroring the public cloud consumption model.
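To see why this matches budgets, here is a rough, hypothetical comparison: buying for a five-year forecast on day one versus adding nodes only when demand requires it. The node capacity, node cost and demand forecast below are all invented for illustration.

    # Hypothetical comparison: day-one over-provisioning vs. incremental node purchases.
    NODE_CAPACITY_VMS = 50     # assumed VMs per hyper-converged node
    NODE_COST = 40_000         # assumed cost per node (arbitrary currency units)

    def nodes_needed(vms: int) -> int:
        return -(-vms // NODE_CAPACITY_VMS)  # ceiling division

    yearly_demand = [120, 180, 260, 300, 320]  # illustrative VM forecast per year

    # Traditional approach: size for the year-5 forecast up front.
    upfront_spend = nodes_needed(max(yearly_demand)) * NODE_COST

    # Modular approach: only buy the extra nodes each year actually requires.
    owned = 0
    incremental_spend_by_year = []
    for demand in yearly_demand:
        extra = max(0, nodes_needed(demand) - owned)
        owned += extra
        incremental_spend_by_year.append(extra * NODE_COST)

    print("Up-front spend:", upfront_spend)                  # 280000 in year one
    print("Incremental spend per year:", incremental_spend_by_year)
    # [120000, 40000, 80000, 0, 40000] – the same capacity, spread with demand

The total capital ends up similar, but the spend tracks actual demand rather than a forecast, and a miscalculated forecast no longer forces an expensive, disruptive re-architecture.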

The second challenge that hyper-convergence solves is that it fundamentally tears down silos within a business. As with any disruptive technology, its implementation is no longer a technology-only discussion; it becomes a people and process transformation, and sadly people resist change when they feel threatened. Hyper-convergence demolishes the silos often created within a company’s IT department and empowers the virtualisation administrator to also become the “network guy” and the “storage guy”. This means the business can consume a virtual machine at the same speed it could procure one from a public cloud provider. Concepts such as RAID, LUN masking and zoning of the storage inside the device no longer exist, so these resources within the business can be redeployed to alternative projects, which greatly reduces operational costs. This is a fundamental shift: the software drives the hardware, making the solution scalable almost without limit and enabling companies to reach the nirvana state of a software-defined data centre.

As a rule, managing infrastructure can be boring and quite unfulfilling, but hyper-convergence addresses this too: the form factor has been condensed from the traditional three-tier architecture into a single 1U or 2U appliance. The software can now interact directly with the underlying hardware without ever needing to leave the chassis, so virtual machine performance reaches levels not experienced before.

In the past, virtualisation was deployed on top of traditionally architected infrastructure, with legacy NAS and SAN storage subsystems that became the bottleneck of any virtualised deployment, especially Virtual Desktop Infrastructure (VDI). The general response to a poorly performing environment was to throw more fabric interconnects at it, or to deploy an all-flash array, which proved complex and expensive. Neither brought a permanent solution, so the cycle of deploying ever-larger flash arrays kept repeating itself.

Looking at the traditional “cloud pyramid”, the foundation is Infrastructure-as-a-Service, the middle is Platform-as-a-Service, and the very top is Software-as-a-Service. As one moves up the pyramid, business value increases. Cloud only makes real sense at the PaaS and SaaS layers, but both are reliant on the underlying IaaS layer. Hyper-convergence once again leaps to the fore, as it provides that foundation in a robust, self-healing, easily scalable way.

Thus, the question of the moment is: if you are not deploying hyper-converged systems in your own data centres, what are you doing? The recent listing of one of the major hyper-converged companies was met incredibly well by the market, showing that the market is ready to embrace the technology and that now is the time to jump onto the wave.

 
