Blog courtesy of Samsung and Dell Tech Center:
As virtual environments become more widespread and applications require greater performance, you may find that traditional hard drives are becoming a bottleneck. Shifts toward private clouds, in-memory computing, and highly demanding transactional workloads require leading-edge server performance.
There are three components that can significantly boost server performance: the processor(s), the memory, and the storage. Processors and memory have been speeding up every couple of years for decades. Hard disk drives (HDDs)… not nearly as much.
To circumvent this problem, you could pool dozens or even hundreds of HDDs, combining the performance of many drives to meet requirements. Pooling many HDDs will boost performance, but it also brings headaches in terms of cost, maintenance, power consumption, cooling, and space.
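To put rough numbers on why pooling spinning disks gets out of hand, here is a minimal sketch. The IOPS figures are illustrative assumptions, not from the article: roughly 200 random IOPS for a 15K RPM SAS HDD and several hundred thousand for a single NVMe SSD.

```python
import math

# Hypothetical, order-of-magnitude figures (assumptions, not vendor specs):
# a 15K RPM SAS HDD delivers roughly 200 random IOPS,
# while a single NVMe SSD can deliver hundreds of thousands.
HDD_RANDOM_IOPS = 200
SSD_RANDOM_IOPS = 400_000

def hdds_needed(target_iops, iops_per_hdd=HDD_RANDOM_IOPS):
    """Number of HDDs required to pool enough random IOPS."""
    return math.ceil(target_iops / iops_per_hdd)

# Matching one SSD's random I/O with spinning disks alone:
print(hdds_needed(SSD_RANDOM_IOPS))  # -> 2000 drives
```

Under these assumed figures, matching one NVMe SSD's random I/O would take on the order of two thousand spinning disks, which is exactly the cost, power, and space problem described above.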
Servers weren’t designed to hold hundreds of hard drives, so those faced with high-performance workloads and saddled with HDDs have been forced to buy external storage expansion, driving up costs. Hard drives also use a lot of power and generate considerable heat, increasing power consumption and cooling costs. Finally, adding dozens of hard drives decreases reliability and increases management complexity.
There is an easy fix for these problems. Clearly, you will benefit from storage technology that can access data almost immediately (improved latency) and move large amounts of data quickly (increased throughput). You need storage that won’t consume much power, doesn’t produce much heat, and doesn’t cause cooling problems. Furthermore, you will greatly benefit from storage that’s highly reliable and doesn’t require extra management.
Not surprisingly, many companies like yours are now deploying flash-based solid-state drives (SSDs) that provide significantly faster access to data, resulting in increased performance and lower latency.
Unlike hard drives, which read and write data on spinning magnetic platters, SSDs have no moving parts. SSDs are typically built on NAND flash, which, unlike RAM, retains data for long periods without power. Enterprise SSDs were designed around enterprise application I/O requirements, and their primary attributes are performance and reliability.
An SSD is probably the most cost-effective way to boost server performance. SSDs also ease power problems: a Samsung SSD, for example, consumes half the power of a typical hard drive. Further, SSDs generate very little heat, and since one SSD can provide the performance of many HDDs, they also help reduce electricity consumption and cooling loads in data centers.
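The power savings compound: each SSD draws less, and you need far fewer of them. A quick sketch, using an assumed 10 W per enterprise HDD (the article only states that the SSD draws half as much, so both wattages here are illustrative) and the 24-HDD vs. 8-SSD drive counts from the benchmark discussed below:

```python
# Illustrative wattages (assumptions, not vendor specs):
# a typical enterprise HDD draws about 10 W under load;
# per the article, a Samsung SSD draws roughly half that.
HDD_WATTS = 10.0
SSD_WATTS = HDD_WATTS / 2

def array_power(drive_count, watts_per_drive):
    """Total steady-state power draw of a drive array, in watts."""
    return drive_count * watts_per_drive

hdd_array = array_power(24, HDD_WATTS)  # 240 W
ssd_array = array_power(8, SSD_WATTS)   # 40 W
print(f"HDD array: {hdd_array:.0f} W, SSD array: {ssd_array:.0f} W")
print(f"Savings: {1 - ssd_array / hdd_array:.0%}")  # -> Savings: 83%
```

Under these assumptions, the SSD configuration draws about one sixth the power of the HDD array, before even counting the reduced cooling load.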
Moreover, SSDs help to minimize maintenance issues as they’re considerably more reliable than hard drives. They go into servers and storage arrays without additional hardware or management tools. And by sharply reducing drive counts, SSDs reduce complexity.
To give you a sense of the performance opportunities provided by SSDs, consider a recent analysis completed by Principled Technologies. Two PowerEdge R920 servers running Oracle with an OLTP TPC-C-like workload were tested. The first was configured with standard SAS hard drives, the second with Samsung NVMe PCIe SSDs. The performance delta between the two was quite significant.
While the performance of the PowerEdge server with the HDD configuration was good, the upgraded configuration with PCIe SSDs delivered 14.9x the database performance of its peer (meaning that it could complete nearly 15 times as much “work” as the standard configuration). This was accomplished with only one-third the number of total drives (8 SSDs vs. 24 HDDs).
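Normalizing the benchmark result by drive count makes the gap even starker. Using the figures cited above (14.9x overall throughput, 8 SSDs vs. 24 HDDs):

```python
# Figures from the Principled Technologies comparison cited above:
overall_speedup = 14.9  # SSD config vs. HDD config, total database throughput
hdd_count = 24
ssd_count = 8

# Dividing out the drive counts gives the per-drive advantage:
per_drive_speedup = overall_speedup * (hdd_count / ssd_count)
print(f"Each SSD does ~{per_drive_speedup:.1f}x the work of one HDD")
# -> Each SSD does ~44.7x the work of one HDD
```

In other words, on a per-drive basis each SSD in this test did roughly 45 times the work of a SAS hard drive.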
Unquestionably, SSDs deliver the best cost-benefit for I/O-intensive workloads, including transaction processing and data warehousing. They are also well suited to other use cases, such as virtual desktop infrastructure (VDI) and high-performance computing (HPC).
When a workload requires very high capacity, a large number of high-capacity HDDs may still make more economic sense. Even in these situations, however, many servers now include a few SSDs to maximize boot, swap-file, and random-access performance, while using HDDs for capacity optimization. Dell servers support an SSD and an HDD in the same chassis, at the same time.
Samsung SSDs on Dell PowerEdge servers clearly benefit data center administrators and end-users. Buyers benefit by driving down acquisition and operating costs, while gaining more from reduced complexity and increased reliability.
But the major benefit comes from performance. Administrators will see immediate boosts in performance by deploying SSDs, and they'll be able to take on performance-intensive workloads that were impractical to run on hard drives. End-users will experience better performance and increased uptime for current as well as new IT services.
The deployment of SSDs in enterprise environments is rapidly accelerating because the benefits are so clear-cut. No other technology has a better potential to transform your server (and data center) experience.
For more information about Samsung SSDs, please visit: http://www.samsung.com/global/business/semiconductor/product/flash-ssd/overview
For more information about the Dell PowerEdge R920 featuring the Samsung NVMe SSD, please visit: http://www.dell.com/us/business/p/poweredge-r920/pd