Frank Denneman is the Chief Technologist for AI at VMware by Broadcom. He is an author of the vSphere host and clustering deep dive series, as well as a podcast host for the Unexplored Territory podcast. You can follow him on Twitter @frankdenneman

VAAI hw offload and Storage vMotion between two Storage Arrays

1 min read

Recently I received a question about migrating virtual machines with Storage vMotion between two storage arrays, and more specifically whether VAAI is leveraged by Storage vMotion in this process. Unfortunately, VAAI is an internal array-based feature: the Clone Blocks primitive that Storage vMotion leverages only copies and migrates data within the same physical array.
Datamovers
How does Storage vMotion work between two arrays? Storage vMotion uses a VMkernel component called the datamover. This component moves the blocks from the source to the destination datastore; to be more precise, it handles the block read I/O from the source datastore and the block write I/O to the destination datastore.
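To make the datamover's role concrete, here is a minimal Python sketch of what a software datamover conceptually does: read the source in fixed-size chunks and write each chunk to the destination. This is my own illustration of the read-and-write loop, not actual VMkernel code.

```python
# Conceptual illustration of a software datamover's read/write loop.
# Real datamovers run inside the VMkernel; this is only a sketch.

CHUNK_SIZE = 64 * 1024  # 64 KB transfer size, an arbitrary choice for this example

def copy_blocks(src_path: str, dst_path: str) -> int:
    """Copy src to dst chunk by chunk; return the number of bytes moved."""
    moved = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:  # end of source reached
                break
            dst.write(chunk)
            moved += len(chunk)
    return moved
```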
The VMkernel used in vSphere 4.1 and later contains three different datamovers: two software datamovers (FSDM and FS3DM) and a hardware-offloading datamover (FS3DM with hardware offload). The most efficient datamover is the FS3DM hardware offload, followed by the software FS3DM, and last the legacy FSDM datamover. FS3DM operates at the kernel level, while FSDM operates at the application level; the shorter the communication path, the faster the operation. In essence, Storage vMotion traverses the stack of datamovers, trying the most efficient one first before reverting to a less optimal choice. To get an idea of the difference in performance, please read the article “Storage vMotion performance difference” on Yellow-Bricks.com.
Traversing the datamover stack
When a data movement operation is invoked (e.g. Storage vMotion) and the VAAI hardware offload operation is enabled, the datamover first attempts to use the hardware offload. If the hardware offload operation fails, the datamover reverts to the software datamovers: first FS3DM, then FSDM. As you are migrating between arrays, hardware offloading will fail and the VMkernel selects the software FS3DM datamover. If the block sizes of the source and destination datastores are not identical, Storage vMotion has to revert to the FSDM datamover. If you are migrating data between NFS datastores, Storage vMotion immediately reverts to the FSDM datamover.
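The fallback order can be summarized in a short sketch. The Datastore class and the select_datamover function below are hypothetical names I use for illustration, not a real vSphere API; the code simply encodes the decision logic described above.

```python
from dataclasses import dataclass

@dataclass
class Datastore:
    type: str        # "VMFS" or "NFS"
    array: str       # identifier of the backing physical array
    block_size: int  # VMFS block size in MB (ignored for NFS)

def select_datamover(src: Datastore, dst: Datastore, vaai_enabled: bool) -> str:
    # NFS migrations skip the VMFS datamovers and go straight to legacy FSDM.
    if src.type == "NFS" or dst.type == "NFS":
        return "FSDM"
    # Hardware offload (Clone Blocks) only works within one physical array.
    if vaai_enabled and src.array == dst.array:
        return "FS3DM (hardware offload)"
    # The software FS3DM requires identical VMFS block sizes on both sides.
    if src.block_size == dst.block_size:
        return "FS3DM (software)"
    # Otherwise revert to the legacy application-level datamover.
    return "FSDM"

# Example: migrating between two arrays with different VMFS block sizes.
src = Datastore("VMFS", "array-A", block_size=1)
dst = Datastore("VMFS", "array-B", block_size=8)
print(select_datamover(src, dst, vaai_enabled=True))  # -> FSDM
```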
Impact on Storage DRS datastore cluster design
Keep this in mind when designing Storage DRS datastore clusters. Storage DRS does not keep historical data on Storage vMotion lead times, and thus it cannot incorporate these metrics when generating migration recommendations. Although no performance loss will occur within the virtual machine, migrating between arrays can create overhead on the supporting infrastructure. If possible, design your datastore clusters to contain datastores within the same array and use identical block sizes (if VMFS is used).
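As a quick sanity check of that design rule, a helper like the hypothetical one below (reusing the Datastore class from the earlier sketch) could flag datastore-cluster designs that force Storage vMotion onto the slower datamovers.

```python
def validate_datastore_cluster(datastores) -> list:
    """Return warnings for designs that push Storage vMotion to slower datamovers."""
    warnings = []
    # Datastores backed by different arrays rule out VAAI hardware offload.
    if len({ds.array for ds in datastores}) > 1:
        warnings.append("Datastores span multiple arrays: VAAI hardware offload "
                        "is unavailable for migrations between them.")
    # Mixed VMFS block sizes force the legacy FSDM datamover.
    vmfs_block_sizes = {ds.block_size for ds in datastores if ds.type == "VMFS"}
    if len(vmfs_block_sizes) > 1:
        warnings.append("Mixed VMFS block sizes: Storage vMotion falls back to "
                        "the legacy FSDM datamover.")
    return warnings

# Example: same array, but two different VMFS block sizes.
cluster = [Datastore("VMFS", "array-A", 1), Datastore("VMFS", "array-A", 8)]
for warning in validate_datastore_cluster(cluster):
    print(warning)
```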

4 Replies to “VAAI hw offload and Storage vMotion between two Storage…”

  1. Hi Frank,
    your statement “As you are migrating between arrays, hardware offloading will fail…” is not correct in every case. If you use multiple NetApp boxes in cluster mode, VAAI will offload the Storage vMotion task if no linked clones are involved. I have done several tests during a PoC for a customer, and we have seen double the throughput with VAAI offload between two boxes (in cluster mode) compared to the same task without VAAI. All tests were done with vSphere 5.0 and 5.1, ONTAP 8.1.1 (on FAS 6240 boxes) and the NFS VAAI plugin version “VMW-ESX-5.0.0-NetAppNasPlugin-1.0”.
    regards,
    Birk

    1. Hi Birk,
      Thanks for the feedback. You are right; in your case the clustered arrays present themselves as one array to the vSphere layer, so Storage vMotion can leverage the FS3DM hardware offload module.

  2. How does the datamover guarantee data integrity on the destination? Checksums or some other method?
