Sometimes it is the applications that drive a technology. We can look at ‘Big Data’ as one of the reasons why the spotlight is back on solid-state storage technology. Solid-state storage is not new; this silicon-based electronic method of data storage has been around for a long time now. As ‘memory’ to speed up transactions between processors and hard disk drives, and later as ‘storage’ in smaller devices like mobile phones, solid-state storage has proved its usefulness. It has also been used in mission-critical applications like defence, where speed is imperative. However, with the surge in Big Data brought about by Internet of Things (IoT) devices and enterprise applications, the need for faster storage is felt more than ever, giving rise to the development of mega-sized solid-state drives (SSDs) and increased research on how to make larger SSDs at a lower cost. While we might have thought it impossible some years ago, online marketplaces show that it is now possible to buy a two-terabyte (TB) peripheral component interconnect express (PCIe) based SSD for less than US$ 3000, and a 1TB one for US$ 500-1000. So we might not be entirely wrong in hoping for a future of data centres populated entirely with SSDs!
Terabyte-scale storage sees uptrend (literally)
The industry has been steadily increasing the capacity of flash memory by reducing space wastage, but there is only so much you can store on a piece of silicon! However, even at a stage when it is difficult to reduce the cells’ pitch size further, the capacity of SSDs continues to go up, thanks to technologies that enable cells to be stacked one above the other. In 2015, industry majors including Intel Corporation, Micron Technology, SanDisk, Toshiba and Samsung unveiled prototypes that involved stacking flash cells.
One of the ways in which flash memory has evolved is in the number of bits stored per cell. The earliest were single-level cells (SLCs) that stored one bit per cell, followed by multi-level cells (MLCs) that stored two bits per cell, and then triple-level cells (TLCs) that store three bits per cell. Although storing more bits in a cell made flash memory cheaper and capable of storing more data, it reduced reliability. Overall, SLCs were found to be faster, more durable, more reliable and more power-efficient for large enterprise-scale applications. Last year, Intel and Micron revealed a new three-dimensional NAND (3D NAND) technology that uses MLC and TLC technology to bring down the cost of storage, while also promising enough reliability and power-saving for large applications.
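The trade-off described above can be seen with some back-of-the-envelope arithmetic. In the sketch below, the number of cells per die is a purely hypothetical figure chosen for illustration; the key point is that capacity grows linearly with bits per cell, while the number of distinct charge levels each cell must reliably hold grows exponentially, which is why reliability drops.

```python
# Back-of-the-envelope look at how bits-per-cell affects flash capacity.
# CELLS_PER_DIE is a hypothetical figure, chosen only for illustration.

CELLS_PER_DIE = 2**35  # assumed number of flash cells on one die

capacities_gb = {}
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    levels = 2**bits  # distinct charge levels each cell must reliably hold
    capacities_gb[name] = CELLS_PER_DIE * bits / 8 / 2**30  # GB per die
    print(f"{name}: {bits} bit(s)/cell, {levels} charge levels, "
          f"{capacities_gb[name]:.0f} GB per die")
```

Note that moving from SLC to TLC triples the capacity of the same silicon (4GB to 12GB per die here), but the cells must now distinguish eight charge levels instead of two.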
Here, flash cells are stacked vertically in 32 layers to achieve 256-gigabit (32GB) MLC and 384-gigabit (48GB) TLC dies that fit within a standard package. According to the press release, this enables three times the capacity of existing 3D technology—up to 48GB of NAND per die—enabling three-fourths of a terabyte to fit in a single fingertip-size package. So a small gum-stick-size SSD can hold more than 3.5TB of storage and a standard 6.35cm (2.5-inch) SSD can accommodate more than 10TB. Since the growth in capacity is achieved by stacking the cells and not by bringing down their dimensions, both performance and endurance are increased, making even the TLC designs suitable for data centre storage.
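The figures quoted above hang together arithmetically. The sketch below reproduces them from the 384-gigabit TLC die; the dies-per-package and packages-per-drive counts are assumptions made only to match the quoted totals, not figures from the press release.

```python
# Reproducing the capacity figures quoted above from the per-die numbers.
# Dies-per-package and packages-per-SSD are assumptions for illustration.

TLC_DIE_GBIT = 384                    # 384-gigabit TLC die from the text
tlc_die_gb = TLC_DIE_GBIT / 8         # 48 GB per die (8 bits per byte)

DIES_PER_PACKAGE = 16                 # assumed die stack per package
package_gb = tlc_die_gb * DIES_PER_PACKAGE  # 768 GB, about 3/4 of a terabyte

GUMSTICK_PACKAGES = 5                 # assumed package count on a gum-stick SSD
gumstick_tb = package_gb * GUMSTICK_PACKAGES / 1000  # ~3.84 TB

print(f"{tlc_die_gb:.0f} GB/die, {package_gb:.0f} GB/package, "
      f"{gumstick_tb:.2f} TB per gum-stick SSD")
```

Under these assumptions, one 16-die package holds 768GB (the "three-fourths of a terabyte" figure), and five such packages give 3.84TB, consistent with "more than 3.5TB" on a gum-stick-size drive.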
Another innovation that helps increase the performance and reliability of this 3D NAND flash memory is the use of a floating gate cell design. In this architecture, the cells’ transistors have a second insulated gate that retains electrons until a strong external voltage is applied. This is a time-tested method frequently used in 2D NAND flash, which Intel has extended to its 3D design. Other features like a sleep mode are also incorporated in the design to enable further power savings.