Recently I received a question about migrating virtual machines with Storage vMotion between two storage arrays, more specifically whether VAAI is leveraged by Storage vMotion in this process. Unfortunately not: VAAI is an internal, array-based feature. The Clone Blocks (Full Copy) VAAI primitive that Storage vMotion leverages is only used to copy and migrate data within the same physical array.
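As a quick way to check whether an array actually advertises VAAI support to the host, the following pyVmomi sketch lists the hardware-acceleration status the VMkernel detected per SCSI device. It is a minimal example, assuming a reachable vCenter; the hostname and credentials are placeholders and certificate verification is disabled for brevity.

```python
# Minimal pyVmomi sketch: report per-device VAAI (hardware acceleration)
# status. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # skip cert checks; lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    for lun in host.configManager.storageSystem.storageDeviceInfo.scsiLun:
        # vStorageSupport reports vStorageSupported, vStorageUnsupported
        # or vStorageUnknown, matching the Hardware Acceleration column.
        print(host.name, lun.canonicalName, lun.vStorageSupport)

Disconnect(si)
```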
Datamovers
How does Storage vMotion work between two arrays? Storage vMotion uses a VMkernel component called the datamover. This component moves the blocks from the source datastore to the destination datastore; to be more precise, it handles the read I/O from the source datastore and the write I/O to the destination datastore.
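Conceptually, a software datamover is nothing more than a loop that reads a block from the source and writes it to the destination. The sketch below is purely illustrative, with plain Python file I/O standing in for datastore I/O; it is not VMkernel code.

```python
# Conceptual sketch of a software datamover: plain file I/O stands in
# for the datastore read/write path. Not VMkernel code.
BLOCK_SIZE = 64 * 1024  # illustrative transfer size

def software_datamover(src, dst):
    """Copy src to dst block by block through the host."""
    while True:
        block = src.read(BLOCK_SIZE)   # read I/O against the source
        if not block:
            break
        dst.write(block)               # write I/O against the destination

with open("source.vmdk", "rb") as src, open("destination.vmdk", "wb") as dst:
    software_datamover(src, dst)
```

With the hardware offload, the copy runs inside the array instead, and the blocks never travel through the host.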
The VMkernel used in vSphere 4.1 and up contains three datamovers: two software datamovers (FSDM and FS3DM) and a hardware-offloading datamover (FS3DM with hardware offload). The most efficient datamover is the FS3DM hardware offload, followed by FS3DM, and finally the legacy datamover FSDM. FS3DM operates at the kernel level, while FSDM operates at the application level; the shorter the communication path, the faster the operation. In essence, Storage vMotion works its way through the stack of datamovers, trying the most efficient first before reverting to a less optimal choice. To get an idea of the difference in performance, please read the article “Storage vMotion performance difference” on Yellow-Bricks.com.
Traversing the datamover stack
When a data movement operation is invoked (e.g. Storage vMotion) and the VAAI hardware offload operation is enabled, the datamover will first attempt to use the hardware offload. If the hardware offload operation fails, the datamover reverts to the software datamovers: first FS3DM, then FSDM. As you are migrating between arrays, hardware offloading will fail and the VMkernel selects the software datamover FS3DM. If the block sizes of the source and destination datastores are not identical, Storage vMotion has to revert to the FSDM datamover. If you are migrating data between NFS datastores, Storage vMotion immediately reverts to the FSDM datamover.
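The fallback order can be summarized in a few lines of code. Note that this models the runtime try-and-revert behaviour as up-front checks; the condition names are illustrative, not the actual VMkernel implementation.

```python
def select_datamover(vaai_enabled, same_array, vmfs_both_sides,
                     same_block_size):
    """Illustrative model of the datamover fallback order."""
    if vmfs_both_sides and vaai_enabled and same_array:
        return "FS3DM - hardware offload"  # VAAI Clone Blocks, inside one array
    if vmfs_both_sides and same_block_size:
        return "FS3DM"                     # software datamover, kernel level
    return "FSDM"                          # legacy datamover, application level

# Storage vMotion between two arrays with mismatched VMFS block sizes:
print(select_datamover(vaai_enabled=True, same_array=False,
                       vmfs_both_sides=True, same_block_size=False))  # FSDM
```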
Impact on Storage DRS datastore cluster design
Keep this in mind when designing Storage DRS datastore clusters. Storage DRS does not keep historical data on Storage vMotion lead times, and thus it cannot incorporate these metrics when generating migration recommendations. Although no performance loss will occur within the virtual machine, migrating between arrays can create overhead on the supporting infrastructure. If possible, design your datastore clusters to contain datastores within the same array, using identical block sizes (if VMFS is used).
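That design rule is easy to encode. Below is a small sketch that groups candidate datastores so each group can back one datastore cluster; the Datastore record and its fields are hypothetical, for illustration only.

```python
from collections import defaultdict
from typing import NamedTuple

class Datastore(NamedTuple):        # hypothetical record for illustration
    name: str
    array_id: str
    vmfs_block_size_mb: int

def cluster_candidates(datastores):
    """Group datastores by (array, block size) so Storage vMotions inside
    a datastore cluster stay on one array with identical block sizes."""
    groups = defaultdict(list)
    for ds in datastores:
        groups[(ds.array_id, ds.vmfs_block_size_mb)].append(ds.name)
    return dict(groups)

print(cluster_candidates([
    Datastore("ds01", "arrayA", 1),
    Datastore("ds02", "arrayA", 1),
    Datastore("ds03", "arrayB", 8),
]))
# {('arrayA', 1): ['ds01', 'ds02'], ('arrayB', 8): ['ds03']}
```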
Hi Frank,
your statement “As you are migrating between arrays, hardware offloading will fail…” is not correct in every case. If you use multiple NetApp boxes in cluster mode, VAAI will offload the Storage vMotion task, if no linked clones are involved. I have done several tests during a PoC for a customer and we have seen double the throughput with VAAI offload between two boxes (in cluster mode) compared to the same task without VAAI. All tests were done with vSphere 5.0 and 5.1, ONTAP 8.1.1 (on FAS 6240 boxes) and the NFS VAAI plugin version “VMW-ESX-5.0.0-NetAppNasPlugin-1.0”.
regards,
Birk
Hi Birk,
Thanks for the feedback. You are right: in your case the clustered arrays present themselves as one array to the vSphere layer, so Storage vMotion can leverage the FS3DM hardware offload module.
Thanks for sharing the info.
How does the datamover guarantee data integrity on the destination? Checksums or some other method?