Data centre innovations you need to know about

Tikiri Wanduragala

Friday 4 November 2016

Tikiri Wanduragala, Lenovo’s EMEA x86 Server Systems Senior Consultant, rounds up a few of his recent articles, examining the shift to software-defined, and what it means for Lenovo, its customers and the wider industry.

This post aims to bring together a number of ideas that I have touched upon previously so that it’s all in one coherent piece. Looking back over the past few months, it’s clear that what’s central is something that may initially sound insignificant: a name change.

The group I’m a part of at Lenovo is now called the Data Centre Group (DCG), whereas previously we were known as the Enterprise Business Group (EBG). The name changed to keep us focused on, and relevant to, the huge market changes we’re seeing at the moment.

What’s changing?

These days, customers have to keep pace with the explosion in data and information required for their business. And they have to do that within a fixed or falling budget, and a fixed set of staff and skills. That’s the underlying macro-economic condition.

In parallel, lots of new technologies are hitting the market that could have a huge impact. One of these is web-scale technology, pioneered by the likes of Amazon, eBay, Google and Facebook. These technologies can address some of the problems customers are having with data or, to be more specific, with big-data capacity requirements.

On top of that, SSD and Flash storage are coming down the pipeline. The combination of the two – web-scale and Flash – is resulting in what we’re calling software-defined technologies. These move management and provisioning up to the application layer. People are looking at these technologies as the next big thing.

But there are problems. The technology is quite new, so customers have to retrain staff and learn new skills. The biggest issue, however, is breaking down silos: software-defined is a fusion of workloads (or applications), networks, storage and servers.

Rather than thinking about servers and networking and so on, customers now think about the data centre as a whole, as the hub inside their organisation. That’s what’s led to the name change: it’s now about moving beyond a server story to all elements within a data centre and everything that goes on within it.

Lenovo’s role

We have a legacy or heritage in the key building block – the server – and we have a track record of building very high quality servers. Importantly, we also have a legacy in the two areas that are going through the biggest disruption – networking and storage.

There are lots of new players coming into the software-defined networking and storage market. At the same time, the fundamental building block of the data centre has shifted from specialised sub-systems (network, storage and so on) to the server itself.

That’s the way web-scale companies are looking at it – let’s forget the nitty-gritty details, and use a server as a commodity building block. Again, that’s good for us because of our legacy.

This new type of data centre will not be built by one player, and because we’re focused on infrastructure while a lot of these new developments are in software, we can partner with new players without any conflict. Having no conflict also means better integration and the ability to bring products to market faster. Lenovo’s partnerships with Nutanix and Cloudian are examples of this.

For customers, this means it is now possible to dramatically reduce their data centre footprint, that is, the number of machines in the data centre. The management model comes from web-scale companies, which are used to managing large numbers of servers with very few people. A high level of automation and orchestration then comes in, which cuts both operating expense and capital expenditure.
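The web-scale management style mentioned above is often described as declarative, or "desired state": a small team declares what the fleet should look like, and automation reconciles every machine toward that declaration. As a rough illustration only – the class and function names here are hypothetical, not any specific product's API – a toy reconciler might look like this:

```python
# Illustrative sketch of desired-state automation, the model web-scale
# operators use to run large fleets with few staff. All names are made up.

from dataclasses import dataclass


@dataclass
class Server:
    """A minimal stand-in for one machine in the fleet."""
    name: str
    role: str = "unassigned"
    running: bool = False


def reconcile(fleet, desired_roles):
    """Drive every server toward the declared configuration.

    The operator declares intent once; the loop applies it across the
    whole fleet and returns the actions it had to take. A fleet that
    already matches the intent produces no actions at all.
    """
    actions = []
    for server, role in zip(fleet, desired_roles):
        if server.role != role:
            server.role = role
            actions.append(f"configure {server.name} as {role}")
        if not server.running:
            server.running = True
            actions.append(f"start {server.name}")
    return actions


fleet = [Server(f"node{i:03d}") for i in range(3)]
actions = reconcile(fleet, ["web", "storage", "compute"])
# Running reconcile again changes nothing: the fleet already matches intent.
repeat = reconcile(fleet, ["web", "storage", "compute"])
```

The point of the pattern is that effort scales with the size of the declaration, not the size of the fleet, which is why the same handful of people can manage ten servers or ten thousand.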

What time frame are we talking about?

When I talk to people about these new technologies, one question I get asked a lot is how long it will take for all of this to be mainstream. Well, there’s no easy answer to that, but we can use virtualisation as an example.

I started working with virtualisation technologies around 2001, and there was a lot of negative reaction. The technology was young and people didn’t understand it – the typical new-technology problem. But then it started to get adopted in low-risk environments, because people saw the benefit of reducing the number of servers.

Between 2000 and 2005, the product evolved and customer acceptance grew. From then until 2008, it became an accepted technology, and now it’s the norm. So if software-defined is similar in some ways – it virtualises the entire network, storage and server stack as one block – then it should follow a similar path, but with one accelerator built in: we already have the knowledge and understanding of virtualisation, its benefits and how it works.

This new technology has huge potential. Its promise is, to be honest, ridiculous. Even if just 20 per cent of that potential is delivered, it will be phenomenal. It has to be, because the problem is huge: with these web-scale companies, we’re talking billions of users. And these companies, born on the internet, are delivering a totally new way of thinking about infrastructure.