Write-Back and Write-Through policies in FVP
Fault-tolerant write acceleration is one of the key features of FVP. Once you select a virtual machine to be accelerated by FVP, you can choose to apply a write-through policy or a write-back policy with a configurable number of replicas for the virtual machine. Before expanding upon replicas, let's review the basics of write-through and write-back policies.
Please note that the flash device itself does not act as a persistent datastore; it is a repository of active I/O. When the application issues a write I/O, the data is committed to the flash device, but it must always be written to the storage system as well. The timing of that write operation to the storage system is controlled by the write policy.
Write through policy
When a virtual machine issues an I/O operation, FVP determines whether it can serve it from flash. With a write I/O operation, the write goes straight to the storage system and the data is copied to the flash device. FVP acknowledges completion of the write operation to the application only after it receives the acknowledgement from the storage system.
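The ordering described above, commit to the storage system first and acknowledge afterwards, can be sketched in a few lines of Python. This is a minimal illustration of the general write-through concept only; the `WriteThroughCache` class and its dictionary-backed devices are hypothetical stand-ins, not FVP's actual implementation:

```python
class WriteThroughCache:
    """Illustrative write-through sketch (not FVP internals)."""

    def __init__(self, storage):
        self.storage = storage   # backing datastore (dict as stand-in)
        self.flash = {}          # flash footprint: active I/O only

    def write(self, block, data):
        # The write goes to the storage system, and the flash copy is
        # updated alongside it.
        self.storage[block] = data
        self.flash[block] = data
        # Completion is acknowledged to the application only after the
        # storage system has confirmed the write.
        return "ack"

    def read(self, block):
        # Subsequent reads on accelerated data are served from flash.
        if block in self.flash:
            return self.flash[block]
        # First read: fetch from storage and populate flash.
        data = self.storage[block]
        self.flash[block] = data
        return data
```

Note that the read path also models the "first read" behavior covered in the next section: a miss falls through to the storage system and leaves a copy on flash for subsequent reads.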
Write I/O operations are not accelerated by the flash devices under this write policy, but all subsequent read operations on that data are served from flash. Write operations still benefit indirectly, because read operations on the flash footprint no longer traverse the network to hit the storage system. This reduces the number of requests hitting the storage system and lowers bandwidth consumption, so lower latencies can be expected in virtual infrastructures running mixed workloads; the exception is an environment that primarily runs write-intensive workloads.
First reads / false writes
Not all data is written by a virtual machine before it is read; read operations can precede a write operation, for example when loading an operating system or opening a file. Such an operation is called a false write. A false write itself cannot be accelerated by either write policy, and the access time of this first read is subject to the performance of the storage system. However, FVP copies all incoming I/O from the storage system to the flash device, providing acceleration for subsequent reads.
Write back policy
The write-back policy accelerates both read and write operations. When an application issues a write I/O operation, FVP forwards the command to the flash device. The flash device acknowledges the write to the application first and completes the write operation to the storage system in the background. As a result, the application sees flash-level latencies while FVP deals with the latency and throughput levels of the storage system.
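The write-back flow can be sketched the same way. Again, this is a simplified illustration of write-back semantics in general, acknowledge from flash, destage to storage later; the class and the explicit `destage()` call are hypothetical, not how FVP schedules its background destaging:

```python
from collections import deque

class WriteBackCache:
    """Illustrative write-back sketch (not FVP internals)."""

    def __init__(self, storage):
        self.storage = storage   # backing datastore (dict as stand-in)
        self.flash = {}          # flash footprint
        self.delayed = deque()   # delayed writes awaiting destage

    def write(self, block, data):
        # Commit to flash and acknowledge immediately: the application
        # sees flash-level latency.
        self.flash[block] = data
        self.delayed.append((block, data))
        return "ack"

    def destage(self):
        # Background task: flush delayed writes to the storage system
        # in order.
        while self.delayed:
            block, data = self.delayed.popleft()
            self.storage[block] = data
```

The `delayed` queue is exactly the window of exposure discussed in the next section: between `write()` returning and `destage()` running, the data exists only on flash.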
Flash replicas to protect delayed writes
When using write back, delayed writes occur. A delayed write is data that resides on flash but has not yet been written (destaged) to the storage system. In the time window between committing data to flash and destaging it to the storage system, there is a risk that the ESXi host goes down or the flash device fails. The problem is that when the virtual machine is restarted on another host, it expects the data to be available, because before the outage it received acknowledgements for all previous writes.
To protect against these failures and to guarantee that all data remains correct, FVP provides flash replicas. The write-back policy offers the option of 0, 1 or 2 flash replicas for a given virtual machine. Please note that FVP allows you to configure a write policy per virtual machine; you can have an environment where some virtual machines run in write-through mode while others run in write-back mode with 0, 1 or 2 replicas.
When a virtual machine is configured with a write-back policy with 1 replica, FVP forwards the write to the local flash device and a remote flash device. Once both the local and the remote flash device acknowledge, FVP acknowledges write completion to the application.
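The acknowledgement ordering with a replica can be captured in a small sketch: completion is reported to the application only once every flash copy has committed the write. The function and device names here are hypothetical, purely to illustrate the semantics:

```python
def write_with_replica(local_flash, remote_flash, block, data):
    """Commit a write to the local and remote flash device, then ack.

    Illustrative only: real replication would issue the writes in
    parallel over the network and handle device failures.
    """
    acks = []
    for device in (local_flash, remote_flash):
        device[block] = data   # commit to each flash copy
        acks.append(True)      # device acknowledgement
    # Acknowledge to the application only when all copies confirmed.
    return "ack" if all(acks) else "pending"
```

With 2 replicas the loop would simply cover one more remote device; the application-level acknowledgement still waits for all copies.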
It is the responsibility of the source host running the virtual machine to destage the delayed writes to the array. If the flash device, the connection to the datastore, or the entire source host goes down, one of the hosts containing a replica takes over the job of destaging the delayed writes. In case you are wondering: FVP leverages the vMotion network to transfer data to the replica host.
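The failover of destaging responsibility amounts to a simple rule: the source host flushes its own delayed writes, and if it is gone, a replica host flushes the copy it holds. A sketch of that decision, with hypothetical names and dictionary stand-ins for the devices:

```python
def destage_delayed_writes(storage, source_pending, replica_pending, source_alive):
    """Flush delayed writes to the storage system.

    Illustrative sketch: 'source_pending' and 'replica_pending' are
    the lists of (block, data) delayed writes held by the source host
    and a replica host. Returns which host performed the destage.
    """
    if source_alive:
        owner, pending = "source", source_pending
    else:
        # Source host (or its flash device) failed: a replica host
        # takes over and destages its copy of the delayed writes.
        owner, pending = "replica", replica_pending
    for block, data in pending:
        storage[block] = data
    return owner
```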
Clustering technology is necessary
Although it might not be obvious, accelerating writes in a virtual infrastructure is a big challenge. The solution needs to provide data acceleration on behalf of the virtual machines in front of a clustered file system where data can be modified from any host connected to the datastore. In addition, the solution is subject to clustered operations such as vMotion while ensuring that the data is always correct and available. To solve this, FVP is a fully clustered solution that makes a virtual machine's flash footprint available to all participating hosts. Remote flash footprints and the FVP clustering technology will be the focus of the next article.