WTF is Flash Storage?

An Intel 910-series PCI-E SSD card. The 910 is an excellent example of 2nd generation flash storage.
Editor’s Note: One of my jobs at GGV is to be our resident tech nerd. In addition to advising our investment team on cool new computers to buy (ZOMGNEWMACPRO), I also try to explain how arcane aspects of disruptive technology work and why they matter. I’ve decided to start a blog series about some of these disruptive technologies, explaining how they work and why they matter in the same colloquial/overly informal/geekily unprofessional manner I use at work. 
One of the hot new trends in enterprise and infrastructure tech is flash storage. Flash is widely believed to be the panacea for many of the performance issues that traditional monolithic storage (read: Kanye-esque “Racks on Racks on Racks” of controllers linked to spinning disk arrays) faces when dealing with modern performance workloads like big data computing and multitenant cloud.
But if you listen to all of the hype, both in the press and on Sand Hill Road, flash seems like the cure for everything wrong with modern networked storage. It’s cost effective – somehow. It maximizes IOPS while minimizing scale out array costs. It also mystically solves bad software infrastructure issues.  It slices. It dices. It does your homework for you and gives you great foot massages.
While it’s true that this arcane technology heralds a new age for computer storage – both consumer direct attached and enterprise network storage – flash is not the cure-all for everything. In fact, if you implement a flash storage architecture incorrectly, you run the risk of dramatically increasing the cost of a solution for only marginal performance gains.
This is, needless to say, very frowntown.

Sluice gates running out of China’s Three Gorges dam. Water running out of these sluice gates is a lot like electricity running out of a flash memory cell.
So WTF is flash storage anyway?
Let’s back up for a moment and talk a bit about what flash storage actually is. Flash storage is based on flash memory cells, a way of storing data that is a radical departure from the traditional spinning disk architecture we’ve been using for the last 20-30 years.
Flash sort of works like water running through a dam. Imagine that electricity is water, and that you want to represent binary digits (the basic unit of digital information) by whether or not water can run through certain parts of the dam. If water runs through a sluice gate, it means the doors to that gate are open. This could be represented as a 1. If the doors are closed, water can’t run from the source to the bottom of the dam. This could represent a 0.
Flash works basically the same way. In a flash memory cell, electricity is designed to flow from a source (the river where the water comes from) to a drain (where the water comes out of the dam). Between the source and the drain is a floating gate that connects the two and ultimately completes an electrical circuit.
When the circuit isn’t completed, electricity can’t flow from the source to the drain. This represents the binary state of 0. But when you apply a certain amount of electricity to the floating gate, the gate locks and the circuit closes. With the circuit closed, electricity is able to flow from the source to the drain. Some electricity bleeds off the circuit onto a detector, which registers as the binary state of 1.
This process happens nearly instantaneously, and unlike RAM, the state of the circuit – either 1 or 0 – is maintained even after you turn off the computer. Because you physically change the charge stored in each cell every time you program its floating gate, you don’t need to constantly “run water” through each cell to hold state.
An extremely nerdy diagram of how a NOR flash cell works. In this example, voltage is applied to the floating gate to connect the Source and Drain together, switching the circuit “On” and setting the data in this cell from a 0 to a 1.
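If you like the dam analogy in code form, here’s a toy sketch of the logic described above. To be clear: this is the simplified 0/1 convention from the analogy, not real device physics, and the `FlashCell` class and its method names are entirely made up for illustration:

```python
class FlashCell:
    """Toy model of a single floating-gate flash cell (logic only, no physics)."""

    def __init__(self):
        # A fresh cell has no charge trapped on the floating gate:
        # the "sluice gate" is shut and the cell reads as 0.
        self.gate_charged = False

    def program(self):
        # Applying a programming voltage traps charge on the floating gate.
        # Crucially, that charge stays put with no power applied -- this is
        # what makes flash non-volatile, unlike RAM.
        self.gate_charged = True

    def erase(self):
        # An erase pulse pulls the charge back off the gate.
        self.gate_charged = False

    def read(self):
        # Reading just senses whether current can flow from source to drain.
        return 1 if self.gate_charged else 0


cell = FlashCell()
print(cell.read())  # 0: circuit open, no current reaches the detector
cell.program()
print(cell.read())  # 1: circuit closed, detector sees current
```

Unplug the computer, come back tomorrow, and `gate_charged` (the trapped charge) is still there – that persistence is the whole trick.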
Wacky physics and electrical engineering black magic aside, flash’s architecture means that it’s wickedly fast for reading data. Unlike a magnetic spinning disk, which stores a file across different sectors and has to seek to different parts of the disk to read one file, flash can store data in consecutive rows of memory cells. When you do that, reading a file is as easy as running electricity down the row of cells and reading the output of 1s and 0s. This is orders of magnitude faster than reading from a spinning drive, and no modern spinning disk hard drive can come close to flash for read performance.
So why is read performance so important?
Good question. To answer that, you only need to think about what you do online. According to a research report published by UC San Diego, the average American consumes over 35GB of data per day online. This data takes the form of both structured data (mobile app data, things retrieved from databases, etc.) as well as unstructured data (images, videos, etc.).
Mobile is a key driver here. Smartphones are becoming a huge culprit of data consumption, and despite lots of compression and “magic” done to shrink mobile files and videos, Americans are expected to consume over 6GB of data per month just from their cell phones by 2017.
In addition to being data hogs, we consumers are also data prima donnas. We want that data in great quantities, and we want that sh#$t now. Data storage is frequently the bottleneck for performance-intensive applications like games or HD video, and virtualization makes things worse because it confuses cloud architectures about where the data each user or app is requesting actually lives (for more on this see my post on the IO Blender problem).
All of this data gluttony means two things:

  1. Data storage needs to be cheap: Because we store so much data, we need to pay less per GB (or soon TB) than we traditionally did when we had less to store. Downward pricing pressure in enterprise-scale storage is extremely disruptive, and it’s one of the reasons why GGV invested in Gridstore.
  2. Data storage (particularly read performance) needs to be fast: Data can no longer be the bottleneck in performance-intensive applications like web servers, databases, and mobile apps. Thanks to the proliferation of cheap, incredibly fast computing technology, consumers demand a certain minimum level of performance and regard “lag” of any kind as an incredibly frowntown experience. The biggest area where this matters is read performance – particularly read performance for unstructured files such as photos or videos.

Alright, I get it. Flash solves the performance problem. But what about the “cost” problem with data?
Nailed it. Despite flash being a big win for performance, we’re still in the early days of manufacturing flash memory. As such, flash is still very expensive compared to cheap spinning disks on a cost-per-gigabyte basis.
As Jay-Z says in his new song Tom Ford: “numbers don’t lie, check the scoreboard.” Setting aside deduplication, compression, and other software gains that depend a lot on the structure of the underlying data, Skyera posts some of the best flash $/GB in the enterprise world at around $2/GB. This is way higher than standard spinning disk cost/GB, which comes in under $0.50.
Given that most enterprise storage deployments run around 15TB or more, these cost differences can add up big time.
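To see how fast it adds up, run the numbers from above against a hypothetical 15TB deployment (the deployment size is just an illustrative assumption):

```python
# Illustrative cost comparison using the rough $/GB figures cited above.
FLASH_PER_GB = 2.00   # ~best-case enterprise flash cost per GB
DISK_PER_GB = 0.50    # commodity spinning disk cost per GB

deployment_gb = 15 * 1000  # a hypothetical 15TB deployment

flash_cost = deployment_gb * FLASH_PER_GB
disk_cost = deployment_gb * DISK_PER_GB

print(f"All-flash: ${flash_cost:,.0f}")          # $30,000
print(f"All-disk:  ${disk_cost:,.0f}")           # $7,500
print(f"Premium:   ${flash_cost - disk_cost:,.0f}")  # $22,500
```

A 4x-per-GB price gap on one array is annoying; multiplied across a data center, it’s a budget line item nobody can ignore.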
Because of these big differences in price, most enterprise-scale users of networked storage employ a “hybrid” approach to flash. They operate their storage in tiers where hot data (data that is more frequently accessed) is held in high performance flash memory. Cold data such as archival data (medical records, transaction records, etc.) is held in cheaper spinning disk or even in ultra cheap magnetic tape. The popularity of the hybrid approach makes a ton of sense, and it’s one of the reasons why we invested in Nimble Storage.
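The hot/cold idea can be sketched in a few lines of Python. This is strictly a toy illustration of the tiering policy described above – the class, the threshold, and the promotion rule are all invented, and real arrays use far more sophisticated heuristics:

```python
class HybridStore:
    """Toy hot/cold tiering: frequently read keys get promoted to 'flash'."""

    HOT_THRESHOLD = 3  # reads before a key counts as "hot" (arbitrary choice)

    def __init__(self):
        self.flash = {}  # fast, expensive tier
        self.disk = {}   # slow, cheap tier
        self.reads = {}  # per-key access counts

    def write(self, key, value):
        # New data lands on cheap disk by default.
        self.disk[key] = value

    def read(self, key):
        self.reads[key] = self.reads.get(key, 0) + 1
        if key in self.flash:
            return self.flash[key]
        value = self.disk[key]
        # Promote hot data to flash so subsequent reads are fast.
        if self.reads[key] >= self.HOT_THRESHOLD:
            self.flash[key] = self.disk.pop(key)
        return value


store = HybridStore()
store.write("medical_record_001", "...")  # cold archival data
store.write("cat_video.mp4", "...")       # about to get popular
for _ in range(3):
    store.read("cat_video.mp4")
print("cat_video.mp4" in store.flash)      # True: promoted after repeated reads
print("medical_record_001" in store.disk)  # True: cold data stays on cheap disk
```

The design choice doing the work here: you only pay flash prices for the small fraction of data that’s actually hot, which is the entire economic argument for hybrid arrays.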
Alright, so I can’t rely on flash for everything. But if I just throw flash drives into any storage array it’ll run faster right?
Storage companies are interesting beasts. They’re secretly software companies that pretend they’re hardware companies. The key problem for every good storage company comes down to software; controllers and disk arrays are basically just the same Intel server motherboards and RAM whether it’s EMC or NetApp. The secret sauce is all in the software: how you handle deduplication, compression, and most importantly how you handle where you actually store those 1s and 0s on disk / in memory.
This is the job of the file system, the software intelligence that handles how you store and present data. To use flash properly in a hybrid deployment, a file system needs to be flash aware: it needs to know that flash is faster than spinning disk, it needs to know how best to store files across both flash and spinning disk storage, and it needs to write data to flash in a way that won’t wear down the floating gates too quickly.
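That last point – not wearing down the floating gates – is called wear leveling: floating gates degrade a little with every program/erase cycle, so flash-aware software spreads erases evenly across blocks instead of hammering the same ones. A minimal sketch of the idea (the function and data layout are invented; real flash-aware file systems and controllers are vastly more sophisticated):

```python
def pick_block(erase_counts):
    """Wear leveling in one line: always write to the least-erased block.

    erase_counts maps block id -> how many times that block has been
    erased so far. Spreading erases evenly extends the device's life.
    """
    return min(erase_counts, key=erase_counts.get)


# Simulate 1000 writes spread across 4 flash blocks.
erase_counts = {0: 0, 1: 0, 2: 0, 3: 0}
for _ in range(1000):
    block = pick_block(erase_counts)
    erase_counts[block] += 1

print(erase_counts)  # all blocks evenly worn: {0: 250, 1: 250, 2: 250, 3: 250}
```

Without this, a naive file system that rewrote the same “hot” block over and over would burn out those cells while the rest of the drive sat nearly new.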
File system design is one of the hardest problems in computing, and the pool of developers who know how to build good file systems is incredibly small. How you build a file system to take advantage of flash in a performance- and cost-efficient manner can make or break a storage company. It’s so important that I usually dedicate an entire section of my notes in meetings with storage companies to deep diving on how their file system works and who builds it.
Final question: what’s with this PCI-E stuff?
The new hotness in flash storage is PCI-E flash storage.
This second generation of flash storage is different from the previous generation in that it connects to the motherboard through a PCI-Express interface and manages storage allocation on the card itself. Compare this to previous generations that used SATA or SAS controllers on the motherboard, which use a slower interface and have to arbitrate with the drives before data can be sent or saved, and you can see why PCI-E is inherently faster.
Solid. So if there are 3 things I should take away from this, what are they?

  1. Flash makes read performance faster because it’s faster to read from a memory cell than it is to seek over a disk.
  2. Flash does NOT make things cheaper, but if you use flash in a hybrid deployment with traditional spinning disk you can get the performance without paying a ton more money.
  3. PCI-E Flash is exciting because the PCI-E interface and unified management design is way better than a separate controller.