frankdenneman Frank Denneman is the Machine Learning Chief Technologist at VMware. He is an author of the vSphere host and clustering deep dive series, as well as podcast host for the Unexplored Territory podcast. You can follow him on Twitter @frankdenneman

Memory-Like Storage Means File Systems Must Change – My Take


I’m an avid reader of thenextplatform.com. They always provide great insights into new technology. This week they published the article “Memory-Like Storage Means File Systems Must Change” and, as usual, it’s full of good stuff. The focus of the article is the upcoming non-volatile memory technologies that leverage the memory channel to provide incredible amounts of bandwidth to the storage medium. I can’t wait to see this happen so we can start to build systems with performance characteristics that weren’t conceivable half a decade ago.
The article mentions 3D XPoint; Intel Apache Pass is the codename for 3D XPoint in DIMM format. It could be NVDIMM, it could be something else; we don’t know yet. The article argues that storage systems need to change, and I fully agree. If you consider the current performance overhead of the recently released PCIe NVMe 3D XPoint devices, it is clear that the system and the software have the largest impact on latency. The device characteristics are pretty much solved; it’s now the PCIe bus and the software stack that delay the I/O. Moving to the memory bus makes sense: less overhead and almost five times the bandwidth. For example, four-lane PCIe 3.0 provides a theoretical bandwidth of close to 4 GB/s, while 2400 MHz memory has a peak transfer rate of close to 19 GB/s.
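As a quick sanity check on those numbers, here is a back-of-the-envelope calculation. It's a minimal sketch; the encoding overhead and channel-width figures are my assumptions, not taken from the article.

```python
# Back-of-the-envelope bandwidth comparison (illustrative assumptions):
# - PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
# - DDR4-2400 transfers 2400 MT/s over a 64-bit (8-byte) wide channel.

pcie3_lane_gbps = 8 * (128 / 130)        # effective gigabits per second per lane
pcie3_x4_gbs = pcie3_lane_gbps * 4 / 8   # four lanes, converted to gigabytes per second

ddr4_2400_gbs = 2400e6 * 8 / 1e9         # 2400 MT/s * 8 bytes per transfer

print(f"PCIe 3.0 x4 : ~{pcie3_x4_gbs:.1f} GB/s")   # ~3.9 GB/s
print(f"DDR4-2400   : ~{ddr4_2400_gbs:.1f} GB/s")  # ~19.2 GB/s
```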
This sounds great and very promising, but I do wonder how it will impact memory operations. The key is to deliver an additional level of memory hierarchy, increasing capacity while abstracting the behavior of the new media.
It’s key to understand that memory is accessed after an L3 miss, and the CPU can spend a lot of time waiting on DRAM. A number often heard is that it can spend 19 out of every 20 instruction slots waiting on data from memory. This figure seems accurate, as the latency of an instruction operating on a CPU register is about one ns while memory latency is close to 15 ns. Each core requires memory bandwidth, and this impacts the average memory bandwidth available per core. Introducing a medium that is orders of magnitude slower than DRAM can negatively affect overall system performance: more cycles are wasted waiting on the memory medium.
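To make that concrete, here is a small sketch of how a slower tier shifts the average memory access latency. The non-volatile memory latency below is a hypothetical figure chosen for illustration, not a measured value.

```python
# Illustrative effect of a slower memory tier on average access latency.
# The latency figures are assumptions for this sketch, not measurements.

DRAM_LATENCY_NS = 15   # DRAM access after an L3 miss (figure used in the text)
NVM_LATENCY_NS = 300   # hypothetical non-volatile memory on the memory bus

def average_latency_ns(fraction_from_nvm: float) -> float:
    """Weighted average latency when a fraction of accesses hit the slower tier."""
    return (1 - fraction_from_nvm) * DRAM_LATENCY_NS + fraction_from_nvm * NVM_LATENCY_NS

for f in (0.0, 0.05, 0.20):
    print(f"{f:>4.0%} of accesses from NVM -> ~{average_latency_ns(f):.0f} ns average")
    # 0%  -> ~15 ns, 5% -> ~29 ns, 20% -> ~72 ns: even a small share of
    # slower accesses can multiply the average latency the cores see.
```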
Please remember that not every workload is storage I/O bound. Great system design is not only about making I/O faster; it’s about removing bottlenecks in a balanced manner. It’s essential that storage I/O does not interrupt DRAM traffic.
An analogy would be a car that can go 65 MPH stuck behind a car driving 55 MPH. By moving to another lane, the slower car no longer interferes, and the driver can go as fast as he wants. The problem is that in this lane, cars typically drive 200 MPH.
The key point for both NVDIMM and Intel Apache Pass is that adding storage to the memory bus to improve I/O latency should not interfere with DRAM operations.
This content is an excerpt of the upcoming vSphere 6.5 Host Resources Deep Dive book.
