Storage I/O requirements of SAP HANA on vSphere 5.5
During VMworld, a lot of attention went to the support of SAP HANA on vSphere 5.5 in production environments. SAP HANA is an in-memory database platform for running real-time analytics and real-time applications.
In its white paper “Best practices and recommendations for Scale-Up Deployments of SAP HANA on VMware vSphere,” VMware states that vMotion, DRS and HA are supported for virtualized SAP HANA systems. This is amazing and very exciting. Being able to run this type of database platform virtualized is a big deal. You can finally leverage the mobility and isolation benefits provided by the virtual infrastructure and get rid of the rigid physical landscapes that are costly to maintain and a pain to support.
When digging deeper into the architecture of the SAP HANA platform, you discover that SAP HANA has to write to disk even though it is an in-memory database platform. Writing to disk allows HANA to provide ACID guarantees for the database. ACID stands for Atomicity, Consistency, Isolation, and Durability, and these properties guarantee that database transactions are processed reliably.
On a side note, the release of SAP HANA support triggered me to dive into database structures and architectures. Luckily for me, our director of Products has an impressive track record in database design, so I spent a few hours with him to learn more about this. That information will be shared in a short series soon. But I digress.
The document “SAP HANA - Storage Requirements,” available at saphana.com, provides detailed insight into the storage I/O behavior of the platform. On page 4 the following statement is made: SAP HANA uses storage for several purposes:
Data: SAP HANA persists a copy of the in-memory data, by writing changed data in the form of so-called save point blocks to free file positions, using I/O operations from 4 KB to 16 MB (up to 64 MB when considering super blocks) depending on the data usage type and number of free blocks. Each SAP HANA service (process) separately writes to its own save point files, every five minutes by default.
Redo Log: To ensure the recovery of the database with zero data loss in case of faults, SAP HANA records each transaction in the form of a so-called redo log entry. Each SAP HANA service separately writes its own redo-log files. Typical block-write sizes range from 4 KB to 1 MB.
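To get a feel for why these block sizes matter, here is a minimal Python sketch (not an SAP tool; the file path and block counts are arbitrary assumptions) that times synchronous, fsync'd writes at the two ends of the quoted redo-log range. Every redo-log entry must be durable before the transaction commits, so per-write latency at small block sizes directly gates transaction throughput:

```python
import os
import time
import tempfile

def time_sync_writes(block_size, count, path):
    """Write `count` blocks of `block_size` bytes, calling fsync after
    each write to mimic a redo-log style durable write. Returns the
    total elapsed time in seconds."""
    buf = b"\x00" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.write(fd, buf)
            os.fsync(fd)
        return time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)

# Compare the two ends of the quoted redo-log block-size range.
path = os.path.join(tempfile.gettempdir(), "redo_log_probe.bin")
small = time_sync_writes(4 * 1024, 100, path)      # 4 KB blocks
large = time_sync_writes(1024 * 1024, 100, path)   # 1 MB blocks
print(f"4 KB x 100: {small:.3f}s, 1 MB x 100: {large:.3f}s")
```

On spinning disk or remote storage the 4 KB case is dominated by per-write latency rather than bandwidth, which is exactly the cost that flash (or, as discussed below, server-side acceleration) is meant to reduce. Absolute numbers will of course vary wildly with the underlying device.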
So it makes sense to use a fast storage platform that can process various block sizes quickly, meaning low latency and high throughput, which server-side resources can provide very easily.
In the document “SAP HANA Guidelines for being virtualized with VMware vSphere,” available on saphana.com, the following statement is made in section 4.5.3, Storage Requirement:
SAP and VMware recommend following the VMware Best Practices for SAP HANA virtualized with VMware vSphere with regards to technical storage configuration in VMware. Especially virtual disks created for log volumes of SAP HANA should reside on local SSD or PCI adapter flash devices if present. Central enterprise storage may be used in terms of the SAP HANA tailored data center integration approach.
It is an interesting standpoint. SAP recommends using flash, and it makes sense: what is the point of running such a platform in memory when your storage platform is slow? However, using local flash storage reintroduces a static workload into your virtual infrastructure. SAP HANA supports the use of enhanced vMotion, which migrates a VM between two hosts and two datastores simultaneously, but at the time of writing DRS does not leverage enhanced vMotion for load-balancing operations. This results in the loss of automatic load balancing and potentially reduces the ability of vSphere High Availability to recover the virtual machine.
Instead of introducing rigid and siloed architectures, it makes sense to use PernixData FVP. FVP, supported by VMware, allows for clustered and fault-tolerant I/O acceleration using flash or (soon) memory. By virtualizing these acceleration resources into one seamless pool, VMs can migrate to any host in the cluster while still being able to retrieve their data throughout the cluster.
SAP HANA accelerates operations by keeping data in memory, while FVP accelerates the writes by leveraging the available acceleration resources. In vSphere 5.5, a SAP HANA virtual machine is limited to 1 TB of memory due to the maximum virtual machine configuration, while vSphere 5.5 supports a host memory configuration of up to 4 TB. With the soon-to-be-released FVP 2.0 and its memory support, FVP allows you to leverage the remaining host memory to accelerate the writes as well, making it a true in-memory platform.