Multi-NIC vMotion support in vSphere 5.0

There are some fundamental changes to vMotion scalability and performance in vSphere 5.0, and one of the most visible is multi-NIC support. In vSphere 5.0, vMotion can use multiple NICs concurrently to decrease the lead time of a vMotion operation. With multi-NIC support, even a single vMotion can leverage all of the configured vMotion NICs, whereas previous ESX releases used only a single NIC.

Allocating more bandwidth to the vMotion process will result in faster migration times, which in turn affects the DRS decision model. DRS evaluates the cluster and recommends migrations based on demand and cluster balance state. This process is repeated each invocation period. To minimize CPU and memory overhead, DRS limits the number of migration recommendations per DRS invocation period. Ultimately, there is no advantage in recommending more migrations than can be completed within a single invocation period. On top of that, demand could change after an invocation period, rendering the previous recommendations obsolete.

vCenter calculates the limit per host based on the average time per migration, the number of simultaneous vMotions and the length of the DRS invocation period (PollPeriodSec).
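VMware does not publish the exact formula, but the relationship between these three inputs can be sketched as follows. This is an illustrative model only, not vCenter's actual algorithm; the function name and the idea of multiplying sequential "slots" by the concurrency level are assumptions.

```python
# Illustrative sketch only: the exact formula vCenter uses is not published.
# It models the idea that the per-host recommendation limit scales with the
# invocation period, the observed average migration time, and the number of
# vMotions a host can run concurrently.

def migration_limit(poll_period_sec, avg_migration_time_sec, max_concurrent):
    """Rough estimate of how many migrations fit in one DRS invocation period."""
    if avg_migration_time_sec <= 0:
        raise ValueError("average migration time must be positive")
    # Number of sequential "slots" in the period, times the concurrency level.
    return int(poll_period_sec // avg_migration_time_sec) * max_concurrent

# Example: 300-second period, 60-second average migration, 4 concurrent vMotions
print(migration_limit(300, 60, 4))  # -> 20
```

The sketch makes the trade-offs below concrete: a shorter PollPeriodSec or a longer average migration time both shrink the number of migrations that fit in one cycle.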

PollPeriodSec: By default, PollPeriodSec – the length of a DRS invocation period – is 300 seconds, but can be set to any value between 60 and 3600 seconds. Shortening the interval will likely increase the overhead on vCenter due to additional cluster balance computations. This also reduces the number of allowed vMotions due to a smaller time window, resulting in longer periods of cluster imbalance. Increasing the PollPeriodSec value decreases the frequency of cluster balance computations on vCenter and allows more vMotion operations per cycle. Unfortunately, this may also leave the cluster in a longer state of cluster imbalance due to the prolonged evaluation cycle.

Estimated total migration time: DRS considers the average migration time observed from previous migrations. The average migration time depends on many variables, such as source and destination host load, active memory in the virtual machine, link speed, available bandwidth and latency of the physical network used by the vMotion process.

Simultaneous vMotions: Similar to vSphere 4.1, vSphere 5 allows you to perform 8 concurrent vMotions on a single host with 10GbE capabilities. For 1GbE, the limit is 4 concurrent vMotions.

Design considerations
When designing a virtual infrastructure leveraging converged networking or Quality of Service to impose bandwidth limits, please remember that vCenter determines the vMotion limits based on the reported link speed of the physical NIC used as the vMotion uplink. In other words, if the physical NIC reports a link speed of at least 10GbE, vCenter allows 8 concurrent vMotions; if the physical NIC reports less than 10GbE but at least 1GbE, vCenter allows a maximum of 4 concurrent vMotions on that host.
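The mapping from reported link speed to concurrency limit can be sketched in a few lines. The thresholds come from the article itself; the behavior below 1GbE is an assumption here (vMotion requires at least a 1GbE link), as is the function name.

```python
# Sketch of the concurrency limit described above. The thresholds come from
# the article: a reported link speed of at least 10GbE allows 8 concurrent
# vMotions, and at least 1GbE allows 4. Behavior below 1GbE is an assumption
# in this sketch, since vMotion requires a 1GbE link as a minimum.

def concurrent_vmotion_limit(reported_link_speed_mbps):
    if reported_link_speed_mbps >= 10000:   # 10GbE or faster
        return 8
    if reported_link_speed_mbps >= 1000:    # at least 1GbE
        return 4
    return 0  # below the supported minimum (assumption)

# An HP Flex NIC capped at 8Gb still reports less than 10GbE:
print(concurrent_vmotion_limit(8000))  # -> 4
```

Note that only the reported link speed matters here, which is exactly why the bandwidth-capped Flex NICs in the next paragraph never reach the 8-vMotion limit.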

For example, HP Flex technology sets a hard limit on the FlexNICs, so the reported link speed is equal to or less than the bandwidth configured at the Flex Virtual Connect level. I've come across many Flex environments configured with more than 1Gb of bandwidth, ranging between 2Gb and 8Gb. Although this offers more bandwidth per vMotion process, it does not increase the number of concurrent vMotions.

Therefore, when designing a DRS cluster, take the possibilities of vMotion into account and how vCenter determines the concurrent number of vMotion operations. By providing enough bandwidth, the cluster can reach a balanced state more quickly, resulting in better resource allocation (performance) for the virtual machines.

**disclaimer: this article contains out-takes of our book: vSphere 5 Clustering Technical Deepdive**

Frank Denneman


10 Responses

  1. NiTRo says:

    Hi Frank, FYI I made some tests with 9 vmnics in nested ESX, reaching 6GB/s:

  2. Eric Gray says:

    This is one of my favorite new features in vSphere 5!


  3. Conrad says:

    This is exciting stuff for sure!

  4. Is it still possible to override the 4 concurrent vMotion limit in the vCenter config?

  5. Yes, it is possible, but I doubt it's supported, and we (VMware PSO) generally do not recommend applying settings outside the UI.

  6. I found an option to enable/disable a specific mode for vMotion while looking for something else 😉

    Host -> Configuration -> Software/Advanced Settings -> Migrate.BindToVmknic [2] (Default)
    Bind the vmotion socket to a specific vmknic. 0 for never, 1 to bind only with FT, or 2 to bind with FT or for multi-vmknic support.

  1. July 18, 2011

    [...] 5 new Networking features (My Virtual Cloud) LLDP support added in vSphere 5 (Rickard Nobel) Multi-NIC vMotion Support In vSphere 5.0 (Frank Denneman) Leveraging the vSphere 5.0 NetFlow support to monitor and report traffic data in a [...]

  2. July 18, 2011

    [...] We can finally use multiple vmnics for vMotions... [...]

  3. July 22, 2011

    [...] Multi-NIC vMotion support in vSphere 5.0 [...]

  4. January 24, 2012

    [...] datastore clusters Mem minfreepct sliding scale function Upgrading vmfs datastores and Storage DRS Multi NIC vMotion support in vSphere 5.0 Contention on lightly Utilized Hosts Restart vCenter results in DRS load balancing IP-HASH versus [...]