Frank Denneman is the Machine Learning Chief Technologist at VMware. He is an author of the vSphere host and clustering deep dive series, as well as podcast host for the Unexplored Territory podcast. You can follow him on Twitter @frankdenneman

Impact of Intra VM affinity rules on Storage DRS


By default, Storage DRS applies an Intra-VM affinity rule to every new virtual machine in the datastore cluster. The Intra-VM affinity rule keeps the virtual machine files, such as the VMX file, log files, vSwap file, and VMDK files, together on one datastore.
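If you want to check whether this default is currently active on a particular datastore cluster, you can read the setting straight from the vSphere API with PowerCLI. This is a minimal sketch; “Shared Datastores” is a placeholder for your own datastore cluster name:

# Returns True when new virtual machines receive the Intra-VM ("keep VMDKs together") rule by default
(Get-DatastoreCluster "Shared Datastores").ExtensionData.PodStorageDrsEntry.StorageDrsConfig.PodConfig.DefaultIntraVmAffinity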

Keeping all files together on one datastore allows for ease of troubleshooting. However, the Storage DRS load-balancing algorithms may benefit from distributing the virtual machine across datastores. Let’s zoom in on how Storage DRS handles a virtual machine with multiple disks when the Intra-VM affinity rule is removed from the virtual machine.
DrmDisk
Storage DRS uses the construct “DrmDisk” as the smallest entity it can migrate. A DrmDisk represents a consumer of datastore resources, which means that Storage DRS creates a DrmDisk for each VMDK belonging to the virtual machine. The interesting part is the collection of system files and the swap file belonging to the virtual machine. Storage DRS creates a single DrmDisk for all the system files. If an alternate swapfile location is specified, the vSwap file is represented as a separate DrmDisk, and Storage DRS is disabled on that swap DrmDisk. More info about alternate swapfile locations can be found here. For example, for a virtual machine with three VMDKs and no alternate swapfile location configured, Storage DRS creates four DrmDisks:
• A separate DrmDisk for each virtual machine disk file (VMDK)
• A DrmDisk for the system files (VMX file, swap file, logs, etc.)
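To make the mapping concrete, here is a small illustrative PowerCLI helper that estimates the number of DrmDisks for a virtual machine, assuming no alternate swapfile location is configured (so the system files and swap file share one DrmDisk). The function name Get-EstimatedDrmDiskCount and the VM name are hypothetical:

# Hypothetical helper: one DrmDisk per VMDK, plus one DrmDisk for the system files
function Get-EstimatedDrmDiskCount {
    param(
        [parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [PSObject]$VM
    )
    process {
        $vmdkCount = ($VM | Get-HardDisk | Measure-Object).Count
        $vmdkCount + 1
    }
}

# A virtual machine with three VMDKs returns 4, matching the example above
Get-VM "MyVM" | Get-EstimatedDrmDiskCount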

The initial placement recommendation will look similar to this screenshot when the Intra-VM affinity rule is disabled. Notice the separate recommendation for the “virtual machine configuration file”? This is the DrmDisk containing the system files.

Initial placement and space load balancing
Initial placement and space load balancing benefit tremendously from this increased granularity. Instead of searching for a suitable datastore that can fit the virtual machine as a whole, Storage DRS is able to seek an appropriate datastore for each DrmDisk separately. Recently I wrote an article about datastore cluster fragmentation and Storage DRS’s ability to issue prerequisite migrations. You can imagine that, due to the increased granularity, datastore cluster fragmentation is less likely to happen, and if prerequisite migrations are required, far fewer migrations are expected.
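To see why this helps, you can list the datastores in a cluster together with the space they can still absorb before crossing the space utilization threshold. This is a rough sketch, assuming the default 80% threshold and a datastore cluster named “Shared Datastores”:

# Free space per datastore, and the headroom left before the space utilization threshold is crossed
$threshold = 0.80
Get-DatastoreCluster "Shared Datastores" | Get-Datastore | ForEach-Object {
    New-Object PSObject -Property @{
        Datastore  = $_.Name
        CapacityGB = [math]::Round($_.CapacityGB)
        FreeGB     = [math]::Round($_.FreeSpaceGB)
        HeadroomGB = [math]::Round($_.FreeSpaceGB - (1 - $threshold) * $_.CapacityGB)
    }
} | Sort-Object HeadroomGB -Descending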

I/O load balancing
Similar to initial placement and space load balancing, I/O load balancing benefits from this deeper level of detail. It can find a better fit for each workload generated by the VMDK files. The system-file DrmDisk will not be migrated very often, as it is small in size and does not generate much I/O. Storage DRS analyzes the workload and generates a workload model for each DrmDisk; it then decides on which datastore to place each DrmDisk to keep the load balanced within the datastore cluster while offering enough performance for each DrmDisk. You can imagine this becomes a lot harder when Storage DRS is required to keep all the VMDK files together. Usually the chosen datastore is the one that provides the best performance for the most demanding workload AND is able to store all the virtual machine disk files and system files. Now let’s dig into this a little deeper: the virtual machine used in the previous example has two DrmDisks generating heavy workloads, while the DrmDisks containing the system files and VMDK2 are “cold”.

If Intra-VM affinity rules are used, space load balancing is required to find a datastore that has 350+ GB of free space without exceeding the space utilization threshold. If I/O load balancing is enabled, this datastore also needs to provide enough performance to keep the latency below the I/O latency threshold (by default 15 ms) after placing the four DrmDisks. You can imagine it’s a lot less complicated when space and I/O load balancing are allowed to place each DrmDisk on a datastore that suits its needs.
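For reference, both thresholds are set per datastore cluster and can be inspected, and changed, with standard PowerCLI cmdlets. A short sketch; the 20 ms value is only an example and “Shared Datastores” is again a placeholder:

# Show the current Storage DRS automation level, I/O latency threshold and space utilization threshold
Get-DatastoreCluster "Shared Datastores" |
    Select-Object Name, SdrsAutomationLevel, IOLatencyThresholdMillisecond, SpaceUtilizationThresholdPercent

# Example: raise the I/O latency threshold from the default 15 ms to 20 ms
# Get-DatastoreCluster "Shared Datastores" | Set-DatastoreCluster -IOLatencyThresholdMillisecond 20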
How to change the default datastore cluster behavior?
As mentioned before, a datastore cluster defaults to applying an Intra-VM affinity rule to each new virtual machine. Recently Duncan published an article on how to change the affinity rules on active virtual machines. Unfortunately there is no user-interface option available to disable this default behavior, so I turned to my good friend and colleague Alan Renouf and he created some nice PowerCLI code to solve this problem:
As I’m not a PowerCLI user at all, I’m relaying Alan’s instructions:
First you need to run the code below to put the function into memory:

# Sets the DefaultIntraVmAffinity option ("keep VMDKs together") on a datastore cluster.
# Pass -Enabled to switch the rule back on; omit it to disable the default rule.
function Set-DatastoreClusterDefaultIntraVmAffinity {
    [CmdletBinding()]
    param(
        [parameter(Position = 0, Mandatory = $true, ValueFromPipeline = $true)]
        [PSObject]$DSC,
        [Switch]$Enabled
    )
    process {
        $SRMan = Get-View StorageResourceManager
        # Accept the datastore cluster as a name or as a DatastoreCluster object
        if ($DSC.GetType().Name -eq "string") {
            $DSC = Get-DatastoreCluster -Name $DSC | Get-View
        }
        elseif ($DSC.GetType().Name -eq "DatastoreClusterImpl") {
            $DSC = Get-DatastoreCluster -Name $DSC.Name | Get-View
        }
        # Build a Storage DRS config spec that only changes the intra-VM affinity default
        $spec = New-Object VMware.Vim.StorageDrsConfigSpec
        $spec.podConfigSpec = New-Object VMware.Vim.StorageDrsPodConfigSpec
        $spec.podConfigSpec.DefaultIntraVmAffinity = $Enabled
        # Apply the spec to the datastore cluster (storage pod)
        $SRMan.ConfigureStorageDrsForPod($DSC.MoRef, $spec, $true)
    }
}

Once this has been run, you can use the function:

Get-DatastoreCluster “Shared Datastores” | Set-DatastoreClusterDefaultIntraVmAffinity

“Shared Datastores” is the name of the datastore cluster; change it to the name of your own datastore cluster. Or, if you have multiple datastore clusters and want to disable the rule for all of them at once, simply omit the datastore cluster name.
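For completeness, a couple of usage examples based on the function above (the cluster name is again a placeholder):

# Disable the default Intra-VM affinity rule on every datastore cluster at once
Get-DatastoreCluster | Set-DatastoreClusterDefaultIntraVmAffinity

# Switch the default rule back on for a single datastore cluster, using the -Enabled switch
Get-DatastoreCluster "Shared Datastores" | Set-DatastoreClusterDefaultIntraVmAffinity -Enabled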
If ease of troubleshooting is not your first concern, then it might be beneficial to the performance of Storage DRS to disable the default Intra-VM affinity rule on the virtual machines in the datastore cluster. However, I’m interested in reasons why you wouldn’t want to disable the default affinity rule, besides the troubleshooting effort.
Note:
Unfortunately I’m unaware why VMware decided to use the Intra-VM affinity rule as the default, and I do not know if a future release of vSphere will provide a UI setting to change the affinity rule behavior of the datastore cluster. Please leave a comment if you would like this option included in a new version of vSphere. All I can do is relay this to the appropriate product manager.



10 Replies to “Impact of Intra VM affinity rules on Storage DRS”

  1. I think some of the decision on whether to leave enabled or to disable Intra-VM affinity depends on the type of back end storage used. Do the datastores tie back to a specific set of spindles or disk groups or do they tie back to a fully virtualized storage array, or an array with an abstraction layer that services IO across a larger pool of spindles which is shared by other physical and virtual machines. There are other storage characteristics tied to the datastores which could influence this decision such as data deduplication, compression, backup, and replication, but all of these capabilities should be factored in when pooling datastores within a datastore cluster. I gravitate towards the “ease of management” discussion. I’ve never been one to disperse virtual disks across multiple datastores unless absolutely required for performance or other underlying storage capability reasons. From an operational standpoint, doing so at scale would drive me a little crazy but like all things I suppose it’s something a person can get used to.

  2. Is there any point at all in using I/O load balancing on a virtualized array like a P4000 or EQL? Then the only reason to disable the affinity rule is to satisfy initial placement, and you should have datastores that are big enough to support your VMs.
    I would rather create a new datastore or extend one, if I can; only as a last resort would I split the VM.

  3. Personally I think it can be set as the default when a datastore cluster is created, but it should be possible to switch it off later via the user interface at a global level, not only per VM (it doesn’t make any sense to have the option to disable it per VM but not globally at the datastore level; so if you need to disable it, you currently have to take some action, whereas this action could be pre-set by switching off the default keep-together rule, and you’d have one thing less to care about). Such a feature would definitely be helpful.

  4. Sure… apologies. I realise you didn’t write it, but I’m trying to use the script you posted by Alan Renouf. It doesn’t change the default affinity rule on a per-datastore-cluster basis for me, so I’m wondering if anyone actually got this working… or if perhaps it only works on a 5.1 vCenter?
    The function does seem to run, and work, when I run Get-DatastoreCluster “my datastore cluster” | Set-DatastoreClusterDefaultIntraVmAffinity
    But whenever a machine is built and added into the datastore cluster, it still has the VMDK affinity rule ticked to “keep VMDKs together”, which none of the other VMs have.
    I’ve logged a case with VMware support for this issue, prior to finding this post. I’m using PowerCLI 5.1 against a 5.0 U2 vCenter.

  5. I have the same issue. I am now running vCenter 5.1 U1 and the option is available in the web interface. The script above and the web interface agree when the setting is changed, but it has no effect: VMs deployed to the clusters still have “Keep VMDKs together” checked, and functionally we can’t deploy a VM whose disks together exceed the largest free space available without adding one disk at a time.
