Software-defined storage doesn’t have to be expensive

Per Overgaard

Wednesday 8 February 2017

Enterprise-class storage arrays have been the go-to data centre solution for decades. Now distributed, server-based systems are rapidly gaining popularity because they offer comparable capability with far better cost-effectiveness. It looks like the traditional storage array’s days are numbered. Here’s why.

In the not-so-distant past, if you wanted somewhere to keep your business’s data, your solution would likely be a big black box. Inside it would be a couple of x86 machines, some rudimentary software and disks to store everything on. This would be connected to servers over a high-speed Fibre Channel network and accessed via a front end.

Storage like this was reliable, but data transfer was slow. All-flash arrays (AFAs), on the other hand, move storage closer to the CPUs, boosting application performance: data moves faster, latency spikes caused by multiple users accessing data at once are reduced, and energy consumption drops.

Flash forward

An AFA increases a business’s competitive advantage by providing a faster service for its customers and staff. It also improves decision-making, because analytics can be performed in near real time.

These advantages usually come with a sizeable price tag. But there is a way to offset premium costs: deploy the same solution on industry-standard Lenovo hardware. It would be scalable, and you could connect to a standard cluster of disks via a Serial Attached SCSI (SAS) fabric. So far, so much more cost-effective.

Taking it a step further, you could remove the SAS connectivity, the cables and the disks. Instead you enable storage pooling across multiple nodes and move everything up to operating system level.

Build in Lenovo’s AFA based on the x3650 M5 rack server – ideally suited to storage-hungry big data workloads – coupled with Windows Server 2016 and you have the best of flash-based and traditional storage methods.

See the jump

By deploying Microsoft’s software-defined storage technology, Storage Spaces Direct (S2D), on two or more Lenovo servers, you’ll get some pretty impressive performance numbers: up to 3PB of data and close to a million input/output operations per second (IOPS) – 900,000 reads and 80,000 writes.

And that’s with a fairly entry-level solution. You don’t need any specialised hardware, and there’s no call for a dedicated SAS fabric – just an Ethernet connection and Lenovo servers, which can be added to the stack one by one.

Once you’ve deployed Windows Server 2016, it takes three steps in the graphical user interface – or three PowerShell commands, which is perfect if you’re encouraging your team to adopt a DevOps mindset – to capture the internal disks and pool them. Then you create a performance tier and a capacity tier, and you’re good to go.
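As a rough sketch, those three steps map onto PowerShell like this. The cluster and node names are placeholders, and the example assumes the Failover Clustering feature is installed on every node and that each server holds a mix of SSDs (performance tier) and HDDs (capacity tier):

```powershell
# 1. Form a cluster from the nodes, without any shared storage
New-Cluster -Name S2D-Cluster -Node Node01,Node02 -NoStorage

# 2. Claim all eligible internal drives and pool them; this also
#    sets up the default Performance (SSD) and Capacity (HDD) tiers
Enable-ClusterStorageSpacesDirect -CimSession S2D-Cluster

# 3. Carve out a tiered, cluster-shared volume across both tiers
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance,Capacity `
    -StorageTierSizes 1TB,9TB
```

The tier sizes here are illustrative; in practice you would size them to your drives and your workload’s hot/cold data split.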

If you need more capacity, simply add another server with one command. S2D will detect the new drives in seconds, add them to the pool and redistribute data.
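Sketched in PowerShell, again with placeholder names, that scale-out really is a single command:

```powershell
# Join a new, identically configured server to the existing cluster;
# S2D detects its eligible drives and adds them to the pool automatically
Add-ClusterNode -Cluster S2D-Cluster -Name Node03

# Optionally trigger an immediate rebalance rather than waiting for
# the scheduled optimisation job to redistribute data across drives
Optimize-StoragePool -FriendlyName "S2D*"
```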

For example, you might want to move from hard disk drives to an all solid-state solution. Non-volatile memory express (NVMe) drives need less processing power to drive I/O, and the CPU cycles saved can be redirected to the application.

Heads in the cloud

SE Cloud Factory is doing exactly that. A next-generation data centre based in Denmark, it specialises in cloud solutions. Its infrastructure combines Lenovo servers with Intel processors, Windows Server 2016 and S2D to create a high-performance, hyper-converged solution.

As Flemming Riis, chief technology officer at SE Cloud Factory says: “The silos have now been well and truly torn down, and that has given us the chance to build a new solution that focuses on performance. We can now guarantee a very stable high speed for all our customers, and for that reason they can now start migrating heavier applications to the cloud, such as large databases. The solution has exceeded our expectations, despite the fact that it takes up much less space and also uses less energy for power, ventilation and cooling.”

Chief executive officer Jacob V Schmidt is just as impressed with the solution. “We can now send even very heavy applications to the cloud with complete confidence, as we can offer a huge jump in performance. And when it comes to capacity, yes, in principle it can be scaled up endlessly – and, it should be noted, with basic servers as well as with the old, well-known server structure.”

We at Lenovo see ourselves as the experts in such software-defined solutions. Not only that, we want to help you take the best bits of traditional storage and add them to the speed, stability and cost-effectiveness of software-defined storage.
