The battle for European data centre supremacy
Tikiri Wanduragala, Lenovo’s EMEA x86 Server Systems Senior Consultant, looks at what the data centre industry is doing to go green and become more energy efficient.
I received a lot of feedback from my 2015 review piece asking me to expand on one of the subjects I touched on briefly: green data centres and energy efficiency. So that’s exactly what I’m going to do.
As I said in that piece, one of the biggest drivers for energy efficiency in the data centre is the cost saving it can provide, as is true across much of the technology industry. Essentially, the data centre used to be run by the IT people, whose responsibility was to the business. They were measured on getting services up and running, and they viewed their costs in terms of infrastructure. They didn’t pay the energy bills.
Now, however, everything is centralised and there is much more awareness of costs. And if you can reduce energy costs, that saving goes straight to the company’s profit. It’s also an instant saving; the vast majority of other projects can take months, years even, to realise a financial benefit.
So, how are we all getting greener and more energy efficient in our data centres? Well, it’s through a number of different means.
Reducing the number of servers
This is of course achieved through server consolidation. We all know Moore’s Law, which in practice means compute capacity roughly doubles every 18 to 24 months. So every 18 to 24 months you could, if you decided to upgrade, dramatically reduce the number of servers you use.
In the past you would hang on to a server for a long time – many years in some cases. But an old server is a very energy-inefficient one; it’s using technology from the past. So server consolidation and keeping up with changing technology can have a dramatic impact.
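As a rough sketch of that consolidation arithmetic (the 18-month doubling period is the loose rule of thumb above, not a Lenovo figure), the number of old servers one new server can replace grows exponentially with the age of the estate:

```python
# Sketch: how many old servers one new server can replace, assuming
# capacity roughly doubles every 18 months (Moore's Law, loosely applied).
def consolidation_ratio(years_since_purchase, doubling_period_years=1.5):
    """Approximate number of old servers one new server can replace."""
    return 2 ** (years_since_purchase / doubling_period_years)

# A five-year-old estate: each new server replaces roughly ten old ones.
print(round(consolidation_ratio(5)))  # -> 10
```

Real consolidation ratios depend on workload type, memory and I/O as well as raw compute, so treat this purely as an illustration of why hanging on to old hardware gets expensive.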
Management and control
Knowing and understanding the problem can help data centres improve their energy efficiency. XClarity is our systems management product, which gathers information from all Lenovo servers and highlights how they’re operating. One very good use of XClarity is in energy: once you can see what’s out there you can utilise that pool of servers more efficiently.
For example, if you have a server running at 50% capacity and you need another 10%, you might bring a second server online. XClarity, however, will point out that it’s better to load the first one more heavily, because you’ll use a lot of energy just getting the second server up and running. The server you already have running has paid that overhead.
If you can monitor your server estate, you can move workloads onto existing operational servers based on what’s actually being used. That means you can switch off or hibernate the servers that aren’t needed – and that will save an awful lot of money.
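The packing decision above can be sketched with a simple power model. The wattage figures below are illustrative assumptions, not measurements from any Lenovo server: the key point is just that a server draws significant power simply by being on, plus a smaller amount that scales with load.

```python
# Illustrative power model: each server draws IDLE_W just being on,
# plus up to PER_LOAD_W more at full utilisation. Figures are assumed.
IDLE_W = 120.0        # assumed idle draw per server, watts
PER_LOAD_W = 180.0    # assumed additional draw at 100% load, watts

def estate_power(loads):
    """Total power draw for servers at the given utilisations (0.0-1.0)."""
    return sum(IDLE_W + PER_LOAD_W * load for load in loads)

# Total work needed: 60% of one server's capacity.
one_server = estate_power([0.6])        # load the first server more heavily
two_servers = estate_power([0.5, 0.1])  # bring a second server online
print(one_server, two_servers)  # -> 228.0 348.0
```

Under this (assumed) model, consolidating onto one server saves the entire idle draw of the second machine, which is exactly the trade-off a management tool can surface.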
Working at the right temperature
There has been a lot of work done to make the servers themselves more efficient in terms of cooling. In the old days data centres used to run at 18C, and in order to do that you had to chill them. So a significant part of the energy wasn’t about computing, it was about cooling. But raising the inlet air temperature at which servers operate, and therefore reducing that cooling requirement, saves energy and potentially a lot of money.
All Lenovo servers are rated for an inlet air temperature of up to 40C, which means you can raise the data centre temperature from 18C to 25C, or even 28C. One customer who has made the move reported saving 40% on their energy bill.
On top of this, developments in water cooling are a big, big step for the industry. One of these developments is called a rear door heat exchanger. This is where the door of the server rack has chilled water flowing through it to cool the servers. It doesn’t interfere with the servers at all; they are still air-cooled. You can do this with any Lenovo server and rack, even the older ones.
Another approach gaining popularity at the moment, particularly in the high performance computing (HPC) industry, is direct water cooling. One great example of this is the Leibniz Supercomputing Centre (LRZ) in Munich, Germany. They take the heated water that has come through the servers and use it to heat their building. This has helped the centre reduce energy costs by 35%.
I consider this to be real cutting-edge use of server cooling technology; it’s a game changer and I think it will become much more common over the next few years. In fact, hosting companies are showing a huge amount of interest in it.
Indeed, customers are showing interest in more efficient data centres in general, not just in the examples mentioned above. They are now looking at the big picture instead of focusing only on cost, capacity or the physical server. They are being smart and asking how much work they can do per watt or per rack, for example.
That’s a much better way of looking at the data centre, and one that I hope will continue.