Category: vMotion

Disable vMotion for a single VM

This question pops up regularly on the VMTN forums and on reddit. It’s a valid question, but the admins who request this feature usually don’t want to break Maintenance Mode or any other feature that helps them manage large-scale environments. When you drill down, you discover that they only want to prevent a manual vMotion triggered by an administrator.

Instead of configuring complex DRS rules, connecting the VM to a unique portgroup, or using a bus-sharing configuration, you just have to add an extra permission to the VM.

The key is context and permission structures. When a host enters Maintenance Mode, the move of a virtual machine is done under a different context (System) than when the VM is manually migrated by an administrator. As vCenter honors the most restrictive permission, you can still execute a Maintenance Mode operation on a host while being unable to migrate a specific VM manually.

Here is how you disable vMotion for a single VM via the Web Client:

Step 1: Add a new role, let’s call it No-vMotion

  1. Log in as a vCenter administrator
  2. Go to the home screen
  3. Select Roles in the Administration screen
  4. Select the Create Role action (green plus icon)
  5. Enter the role name (No-vMotion)
  6. Select All Privileges
  7. Scroll down to Resource
  8. Deselect the following Privileges:
  • Migrate powered off virtual machine
  • Migrate powered on virtual machine
  • Query vMotion

[Screenshot: Edit role No-vMotion]
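
If you prefer scripting over clicking through the Web Client, here is a minimal pyVmomi (vSphere Python SDK) sketch of the same step. The hostname and credentials are placeholders, and the three privilege IDs are the ones I expect to map to the privileges listed above; verify them against authorizationManager.privilegeList in your own environment.

```python
# Sketch: create the No-vMotion role with pyVmomi (vSphere Python SDK).
# Assumed: the privilege IDs below correspond to the three "Resource"
# privileges deselected in the Web Client; hostname/credentials are examples.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
authz = si.content.authorizationManager

# Privileges to withhold from the new role
excluded = {"Resource.ColdMigrate",    # Migrate powered off virtual machine
            "Resource.HotMigrate",     # Migrate powered on virtual machine
            "Resource.QueryVMotion"}   # Query vMotion

# Start from every privilege vCenter knows about, minus the three above
all_privs = [p.privId for p in authz.privilegeList]
no_vmotion_privs = [p for p in all_privs if p not in excluded]

role_id = authz.AddAuthorizationRole(name="No-vMotion", privIds=no_vmotion_privs)
print("Created role No-vMotion with id", role_id)

Disconnect(si)
```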

Step 2: Restrict the user’s privileges on the VM

  1. Select the “Hosts and Clusters” or “VMs and Templates” view, whichever you feel comfortable with.
  2. Select the VM and click on the Manage tab
  3. Select Permissions
  4. Click on “Add Permissions” (green plus icon)
  5. Click on Add and select the user or group you want to restrict.
  6. In my example I selected the user FrankD and clicked Add, then OK.
  7. On the right side of the screen, select the role “No-vMotion” in the pulldown menu and click OK.

[Screenshot: Add Permission]

Ensure that the role is applied to “This object”.

[Screenshot: This object]
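
A matching pyVmomi sketch for Step 2: it assigns the No-vMotion role to a user on a single VM with propagate set to False, which corresponds to applying the role to “This object” only. It assumes the connection si and the role_id from the previous snippet; the VM name, the account name, and the small find-by-name helper are made-up examples.

```python
# Sketch: assign the No-vMotion role to a user on one VM only.
# propagate=False matches "This object" in the Web Client.
from pyVmomi import vim

def find_vm(content, name):
    """Walk the inventory and return the first VM with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next((vm for vm in view.view if vm.name == name), None)
    finally:
        view.DestroyView()

content = si.content
vm = find_vm(content, "Web-VM-01")          # example VM name

perm = vim.AuthorizationManager.Permission(
    principal="VSPHERE.LOCAL\\FrankD",      # example user or group to restrict
    group=False,                            # True if the principal is a group
    roleId=role_id,                         # id returned by AddAuthorizationRole
    propagate=False)                        # apply to this object only

content.authorizationManager.SetEntityPermissions(entity=vm, permission=[perm])
print("No-vMotion applied to", vm.name)
```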

FrankD is a member of the vCenterAdmins group, which has Administrator privileges propagated through the virtual datacenter and all its children.
However, FrankD has the additional role “No-vMotion” on this object. Let’s check if it works. Log in with the user ID you restricted and right-click the VM. As shown, the Migrate option is greyed out. The VM is running on host ESX01.

[Screenshot: Migrate option greyed out]

The Maintenance Mode option is still available for host ESX01.

[Screenshot: Enter Maintenance Mode]

Click on “More Tasks” in the Recent Tasks window; here you can verify that FrankD is the initiator of the Maintenance Mode operation, while System migrated the virtual machine.

[Screenshot: Task context]

vSphere 5.1 update 1 release fixes Storage vMotion rename “bug”

vSphere 5.1 Update 1 was released today; it contains several updates and bug fixes for both ESXi and vCenter Server 5.1.

This release contains the return of the much-requested ability to rename VM files by using Storage vMotion. Renaming a virtual machine within vCenter did not automatically rename its files, but in previous versions Storage vMotion renamed the files and folder to match the virtual machine name, a nice trick to keep the file structure aligned with the vCenter inventory. However, engineering considered it a bug and “fixed” the problem. Duncan and I pushed hard to get the behavior back, but it was the strong voice of the community (thanks to all who submitted a feature request) that helped the engineers and product managers understand that this bug was actually a very useful feature. The fix was introduced in 5.0 Update 2 at the end of last year, and it is now included in this update for vSphere 5.1.

Here are the details of the bug fix:

vSphere 5 Storage vMotion is unable to rename virtual machine files on completing migration
In vCenter Server, when you rename a virtual machine in the vSphere Client, the VMDK disks are not renamed following a successful Storage vMotion task. When you perform a Storage vMotion task for the virtual machine to have its folder and associated files renamed to match the new name, the virtual machine folder name changes, but the virtual machine file names do not change.

This issue is resolved in this release. To enable this renaming feature, you need to configure the advanced settings in vCenter Server and set the value of the provisioning.relocate.enableRename parameter to true.
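
For reference, here is a minimal pyVmomi sketch that sets this advanced setting programmatically; it assumes an existing connection si to a vCenter Server that contains the fix (5.0 Update 2 or 5.1 Update 1). You can of course also set the parameter through the vCenter Server advanced settings in the client.

```python
# Sketch: set the vCenter advanced setting from the release notes via pyVmomi.
# Assumes an existing service instance connection "si" to vCenter Server.
from pyVomi import vim  # noqa: typo guard removed below
from pyVmomi import vim

option_manager = si.content.setting   # vCenter advanced settings (OptionManager)

rename_option = vim.option.OptionValue(
    key="provisioning.relocate.enableRename",
    value="true")

option_manager.UpdateOptions(changedValue=[rename_option])
print("provisioning.relocate.enableRename set to true")
```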

Read the rest of the vCenter Server 5.1 Update 1 release notes and the ESXi 5.1 Update 1 release notes to discover the other bug fixes.

Migrating datastore clusters by changing storage profiles in a vCloud

vCloud Director 5.1 supports the use of both storage profiles and Storage DRS. One of the coolest, yet unfortunately relatively unknown, features is the ability to live-migrate virtual machines between datastore clusters by changing the storage profile in the vCloud Director portal.

In my lab I’ve set up a provider vDC that contains two compute clusters. Each compute cluster connects to two datastore clusters. The datastore cluster “vCloud-DSC-Gold” is compatible with the VM storage profile “vCloud-Gold-Storage”, while the datastore cluster “vCloud-DSC-Silver” is compatible with the VM storage profile “vCloud-Silver-Storage”.


When creating a vApp, the default storage profile of the organization vDC is applied to the vApp and all its virtual machines. In this case, the VM storage profile “vCloud-Gold-Storage” is applied to all the virtual machines in the vApp.

You can determine which VM Storage Profile is associated with the virtual machine by selecting the properties of the virtual machine in the “My Cloud” tab. Please note that vCloud Director does not show the VM Storage Profile at the vApp level!


The drop-down box displays all storage profiles that are associated with the organization vDC.


When you select the storage profile “vCloud-Silver-Storage”, vCloud Director determines that the virtual machine is stored on a datastore that is not compatible with the newly associated storage profile. In other words, the current configuration violates the storage-level policy.

To correct this violation, vCloud Director instructs vSphere to migrate the virtual machine via Storage vMotion to a datastore that is compatible with the VM storage profile. In this case the datastore cluster “vCloud-DSC-Silver” is selected as the destination. Storage DRS determines the most suitable datastore by using its initial placement algorithm, selecting the datastore with the most free space and the lowest I/O load.
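
To make the ranking idea concrete, here is a deliberately simplified Python sketch. This is not the actual Storage DRS algorithm: it simply prefers the datastore with the most free space and breaks ties on I/O load. The datastore names and numbers are made-up example values.

```python
# Simplified illustration of the initial-placement idea described above:
# prefer the datastore with the most free space, break ties on I/O load.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    free_gb: float         # free capacity in GB (example values)
    io_latency_ms: float   # observed I/O load, lower is better (example values)

def pick_initial_placement(datastores):
    # Rank primarily on free space (descending), tie-break on latency (ascending)
    return max(datastores, key=lambda ds: (ds.free_gb, -ds.io_latency_ms))

silver_cluster = [
    Datastore("nfs-f-vcloud05", free_gb=420.0, io_latency_ms=6.1),
    Datastore("nfs-f-vcloud06", free_gb=810.0, io_latency_ms=3.4),
]
print(pick_initial_placement(silver_cluster).name)   # nfs-f-vcloud06
```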

To demonstrate the feature, I selected the virtual machine “W2K8_R2-SP1”. The VM storage profile “vCloud-Gold-Storage” is applied and Storage DRS determined that the datastore “nfs-f-vcloud03” of the datastore cluster “vCloud-DSC-Gold” was the most suitable location.


By changing the storage profile to “vCloud-Silver-Storage”, vCloud Director instructed vSphere to migrate the virtual machine to the datastore cluster that is compatible with the newly associated VM storage profile.


When logging in to the vCenter Server managing the ESXi hosts, the following task is running:


After the task is complete, vCenter shows that the virtual machine is now stored on datastore “nfs-f-vcloud06” in the datastore cluster “vCloud-DSC-Silver”.


The power of abstraction
The abstraction layer of vCloud Director makes this possible. When you change the storage profile directly at the vSphere layer, nothing happens: vSphere will not migrate the virtual machine to the appropriate datastore cluster that is compatible with the selected VM storage profile.

Useful for stretched clusters?
The reason I was looking into this feature in my lab was a conversation with my esteemed colleagues Lee Dilworth and Aidan Dalgleish. We were looking into an alternative scenario for a stretched cluster. By leveraging the elastic vDC feature of vCloud Director, a separate DRS cluster is created in each site. Due to the automatic initial placement engine at the compute level, we needed a construct that provides a more deterministic method of virtual machine placement. We immediately thought of the VM storage profile feature: create two datastore clusters, one per site, and associate a storage profile based on the site name with the respective datastore cluster.


When creating the vApp, just select the site-related storage profile to place the virtual machine in a specific site. Due to the compatibility check, vCloud Director determines that, in order to be compliant with the storage profile, it must place the virtual machine on the compute cluster in the same site. For example, if you want to place a virtual machine in site 1, select the VM storage profile “site 1”. vCloud Director determines that the virtual machine needs to be stored in datastore cluster “DSC-Site-1”. The compute cluster Site-1 is the only compute cluster connected to that datastore cluster; therefore both the compute and the storage configuration of the virtual machine end up in site 1.

This configuration works perfectly if you want to simplify initial placement when you have multiple sites/locations and always want to keep the virtual machine in the same site. However, this solution might not be optimal for a stretched cluster configuration where failover to another site is necessary.

Connectivity to all datastores necessary
Because this feature uses Storage vMotion instead of combined cross-host/datastore vMotion, the compute cluster needs to be connected to both datastore clusters.

When you select a different storage profile, the storage state is migrated to another datastore cluster; however, the compute state of the virtual machine does not move. This means that storage is moved to site B while the compute state remains in site A. vCloud Director does not provide an option to migrate the virtual machine to a different compute cluster within the provider vDC. You can either solve this by logging in to the vCenter Server that manages the ESXi hosts and manually vMotioning the virtual machine to the cluster in site B, or by powering off the virtual machine in vCloud Director, changing the storage profile, and powering the virtual machine back on. Neither “solution” is very enterprise-grade, therefore I think this is not yet suitable for a stretched cluster configuration.

vMotion over layer 3?

This question regularly pops up on Twitter and the community forums. And yes, it works, but VMware does not support vMotion interfaces in different subnets.

The reason is that this can break functionality in higher-level features that rely on vMotion to work.

If you think Routed vMotion (vMotion interfaces in different subnets) is something that should be available in the modern datacenter, please fill out a feature request. The more feature requests we receive, the more priority can be applied to the development of the feature.

Why is vMotion using the management network instead of the vMotion network?

On the community forums I’ve seen some questions about the use of the management network by vMotion operations. The two most common scenarios are explained below; please let me know if you notice this behavior in other scenarios.

Scenario 1: Cross-host and non-shared datastore migration
vSphere 5.1 provides the ability to migrate a virtual machine between hosts and non-shared datastores simultaneously. If the virtual machine is stored on a local or non-shared datastore, vMotion uses the vMotion network to transfer the data to the destination datastore. When monitoring the VMkernel NICs, however, some traffic can be seen flowing over the management NIC instead of the VMkernel NIC enabled for vMotion.

When migrating a virtual machine, vMotion distinguishes between hot data and cold data. Virtual disks or snapshots that are actively used are considered hot data, while the underlying snapshots and the base disk are cold data. Let’s use a virtual machine with 5 snapshots as an example. The active data is the most recent snapshot; this is sent across the vMotion network, while the base disk and the 4 older snapshots are migrated via a network file copy operation across the first VMkernel NIC (vmk0).

The reason vMotion uses separate networks is that the vMotion network is reserved for migrating performance-related content. If the vMotion network were used for network file copies of cold data, it could saturate the network with non-performance-related content and thereby starve traffic that depends on that bandwidth. Please remember that everything sent over the vMotion network directly affects the performance of the migrating virtual machine.

During a vMotion, the VMkernel mirrors the active I/O between the source and the destination host. If vMotion were to pump the entire disk hierarchy across the vMotion network, it would steal bandwidth from the I/O mirror process, and this would hurt the performance of the virtual machine.

If the virtual machine does not contain any snapshots, the VMDK is considered active and is migrated across the vMotion network. The other files in the virtual machine’s directory are copied across the network of the first VMkernel NIC.

Scenario 2: Management network and vMotion network sharing the same IP range/subnet
If the management network (actually the first VMkernel NIC) and the vMotion network share the same subnet (the same IP range), vMotion sends traffic across the network attached to the first VMkernel NIC. It does not matter whether you create the vMotion network on a different standard switch or distributed switch, or assign different NICs to it; vMotion defaults to the first VMkernel NIC if the same IP range/subnet is detected.
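
A quick way to check whether you are hitting this scenario is to compare the subnets of the first VMkernel NIC and the vMotion-enabled VMkernel NIC. The sketch below uses only Python’s ipaddress module; the IP addresses and netmasks are example values, substitute the ones configured on your own vmknics.

```python
# Quick check for the condition described above: do the management (vmk0) and
# vMotion VMkernel interfaces sit in the same subnet?
import ipaddress

def same_subnet(ip_a, mask_a, ip_b, mask_b):
    net_a = ipaddress.ip_network(f"{ip_a}/{mask_a}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask_b}", strict=False)
    return net_a == net_b

mgmt = ("192.168.10.11", "255.255.255.0")      # vmk0, management (example)
vmotion = ("192.168.10.21", "255.255.255.0")   # vmk1, vMotion-enabled (example)

if same_subnet(*mgmt, *vmotion):
    print("Shared subnet detected: the source host will send vMotion traffic via vmk0")
else:
    print("Dedicated vMotion subnet detected")
```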

Please be aware that this behavior is only applicable to traffic that is sent by the source host. The destination host receives incoming vMotion traffic on the vMotion network!

I’ve been conducting an online poll, and more than 95% of the respondents use a dedicated IP range for vMotion traffic. Nevertheless, I would like to remind you that it’s recommended to use a separate network for vMotion. The management network is considered an insecure network, and therefore vMotion traffic should not use it. You might see this behavior in POC environments where a single IP range is used for all virtual infrastructure management traffic.

If the host is configured with a Multi-NIC vMotion configuration using the same subnet as the management network/1st VMkernel NIC, then vMotion respects the vMotion configuration and only sends traffic through the vMotion-enabled VMkernel NICs.

If you have an environment that uses a single IP range for the management network and the vMotion network, I recommend creating a Multi-NIC vMotion configuration. If you have a limited number of NICs, you can assign the same NIC to both VMkernel NICs; although you do not leverage the load-balancing functionality, you force the VMkernel to use the vMotion-enabled networks exclusively.
