Manual storage vMotion migrations into a datastore cluster

January 8, 2013 by frankdenneman

Frequently I receive questions about the impact of a manual migration into a datastore cluster, especially about the impact on the virtual machine disk file layout. Will Storage DRS take the initial disk layout into account, or will it be changed? The short answer is that the virtual machine disk layout will be changed by the default affinity rule configured on the datastore cluster. This article describes several scenarios of migrating “distributed” and “centralized” disk layout configurations into datastore clusters configured with different affinity rules.
Test scenario architecture
For the test scenarios I built two virtual machines, VM1-centralized and VM2-distributed. Both virtual machines have an identical configuration; only the datastore locations differ. VM1-centralized has a “centralized” configuration, storing all VMDKs on a single datastore, while VM2-distributed has a “distributed” configuration, storing its VMDKs on separate datastores.

| Hard disk | Size | VM1 datastore | VM2 datastore |
|---|---|---|---|
| Working directory | 8GB | FD-X4 | FD-X4 |
| Hard disk 1 | 60GB | FD-X4 | FD-X4 |
| Hard disk 2 | 30GB | FD-X4 | FD-X5 |
| Hard disk 3 | 10GB | FD-X4 | FD-X6 |

Two datastore clusters exist in the virtual infrastructure:

| Datastore cluster | Default affinity rule | VMDK rule applied on VM |
|---|---|---|
| Tier-1 VMs and VMDKs | Do not keep VMDKs together | Intra-VM anti-affinity rule |
| Tier-2 VMs and VMDKs | Keep VMDKs together | Intra-VM affinity rule |

Test 1: VM1-centralized to Datastore Cluster Tier-2 VMs and VMDKs
Since the virtual machine is stored on a single datastore, it makes sense to start off by migrating the virtual machine to the datastore cluster that applies a VMDK affinity rule, keeping the virtual machine disk files together on a single datastore in the datastore cluster. Select the virtual machine, right-click it to display the submenu, and select the option “Migrate…”. The first step is to select the migration type; select “Change datastore”.
[Screenshot: select migration type]
The second step is to select the destination datastore. As we are planning to migrate the virtual machine to a datastore cluster, it is necessary to select the datastore cluster object.
[Screenshot: select datastore]
After clicking Next, the user interface displays the Review Selections screen; notice that the datastore cluster applied the default cluster affinity rule.
[Screenshot: review selections]
Storage DRS evaluates the current load of the datastore cluster and the configuration of the virtual machine, and concludes that datastore nfs-f-05 is the best fit given the virtual machine, the existing virtual machines in the datastore cluster, and the load-balance state of the cluster. By clicking “more recommendations”, other destination datastores are presented.
Test result: Intra-VM affinity rule applied and all virtual machine disk files are stored on a single datastore.
Selecting the Datastore cluster object
The user interface provides two options: select the datastore cluster object, or select a datastore that is part of the datastore cluster; for the latter option you explicitly need to disable Storage DRS for this virtual machine. By selecting the datastore cluster, you fully leverage the strength of Storage DRS. Storage DRS initiates its algorithms and evaluates the current state of the datastore cluster. It reviews the configuration of the new virtual machine and is aware of the I/O load of each datastore as well as the space utilization. Storage DRS weighs both metrics and weighs either space or I/O load heavier when its utilization is higher.
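The same flow can be driven through the vSphere API. Below is a minimal pyVmomi sketch, assuming an authenticated ServiceInstance `si` and that the `vm` and `pod` (datastore cluster) objects were already located in the inventory; treat it as an illustration of the recommendation flow, not a hardened implementation.

```python
# Minimal pyVmomi sketch: ask Storage DRS for relocation recommendations
# against a datastore cluster (storage pod) and apply the best one.
# Assumes 'si' is an authenticated ServiceInstance and 'vm'/'pod' were
# looked up beforehand.
from pyVmomi import vim

def svmotion_into_pod(si, vm, pod):
    srm = si.content.storageResourceManager

    # Placement spec of type 'relocate' that targets the datastore
    # cluster instead of an individual datastore. The empty RelocateSpec
    # lets Storage DRS pick the destination datastore itself.
    spec = vim.storageDrs.StoragePlacementSpec(
        type='relocate',
        vm=vm,
        podSelectionSpec=vim.storageDrs.PodSelectionSpec(storagePod=pod),
        relocateSpec=vim.vm.RelocateSpec(),
    )

    # Storage DRS weighs space and I/O load and returns ranked
    # recommendations, best fit first.
    result = srm.RecommendDatastores(storageSpec=spec)
    if not result.recommendations:
        raise RuntimeError('Storage DRS returned no recommendations')

    # Applying the recommendation key performs the actual Storage vMotion.
    return srm.ApplyStorageDrsRecommendation_Task(
        key=[result.recommendations[0].key])
```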
Disable Storage DRS for this virtual machine
By default it is not possible to select a specific datastore that is part of a datastore cluster during the second step, “Select Datastore”. In order to do that, one must tick the option box “Disable Storage DRS for this virtual machine”. By doing so, the datastores in the lower part of the screen become available for selection. However, this means that the virtual machine is excluded from any Storage DRS load-balancing operation. Not only does this affect the virtual machine itself, it also impacts other Storage DRS operations such as maintenance mode and datastore cluster defragmentation. As Storage DRS is not allowed to move the virtual machine, it cannot migrate the virtual machine to find an optimal load-balance state when Storage DRS needs to make room for an incoming virtual machine. For more information about cluster defragmentation, read the following article: Storage DRS initial placement and datastore cluster defragmentation.
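For completeness, the tick box corresponds to a per-VM override in the datastore cluster configuration. A hedged pyVmomi sketch of that override follows, again assuming existing `si`, `pod`, and `vm` objects:

```python
# Sketch: the API equivalent of ticking "Disable Storage DRS for this
# virtual machine" - a per-VM override stored on the datastore cluster.
from pyVmomi import vim

def disable_sdrs_for_vm(si, pod, vm):
    # 'add' creates the override; use 'edit' if one already exists.
    vm_override = vim.storageDrs.VmConfigSpec(
        operation='add',
        info=vim.storageDrs.VmConfigInfo(vm=vm, enabled=False),
    )
    pod_spec = vim.storageDrs.ConfigSpec(vmConfigSpec=[vm_override])
    # modify=True merges this override into the existing SDRS config.
    return si.content.storageResourceManager.ConfigureStorageDrsForPod_Task(
        pod=pod, spec=pod_spec, modify=True)
```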
Test 2: VM1-centralized to Datastore Cluster Tier-1 VMs and VMDKs
Migrating a virtual machine stored on a single datastore to a datastore cluster with anti-affinity rules enabled results in a distribution of the virtual machine disk files:
[Screenshot: VM1-centralized migrated to datastore cluster Tier-1]
Test result: Intra-VM anti-affinity rule applied and the virtual machine disk files are placed on separate datastores.
Working directory and default anti-affinity rules
Please note that in the previous scenario the configuration file (working directory) is placed on the same datastore as Hard disk 3. Storage DRS does not forcefully attempt to place the working directory on a different datastore; it weighs the load-balance state of the cluster heavier than separation of the working directory from the VMDK files.
Test 3: VM2-distributed to Datastore Cluster Tier-1 VMs and VMDKs
Following the example of VM1, I started off by migrating VM2-distributed to Tier-1, as this datastore cluster is configured to mimic the initial state of the virtual machine, that is, to distribute the virtual machine across as many datastores as possible. After selecting datastore cluster Tier-1 VMs and VMDKs, Storage DRS provided the following recommendation:
[Screenshot: VM2-distributed migrated to datastore cluster Tier-1]
Test result: Intra-VM anti-affinity rule applied on VM and the virtual machine disk files are stored on separate datastores.
A nice tidbit: as every virtual disk file is migrated between two distinct datastores, this scenario leverages the parallel disk migration functionality introduced in vSphere 5.1.
Test 4: VM2-distributed to Datastore Cluster Tier-2 VMs and VMDKs
What happens if you migrate a distributed virtual machine to a datastore cluster configured with a default affinity rule? After selecting datastore cluster Tier-2 VMs and VMDKs, Storage DRS provided the following recommendation:
[Screenshot: affinity rule applied to VM2-distributed]
Test result: Intra-VM affinity rule applied on the VM and the virtual machine disk files are placed on a single datastore.
Test 5: VM2-distributed to Multiple Datastore clusters
A common use case is to distribute a virtual machine across multiple tiers of storage to provide performance while taking economics into account. This test simulates the exercise of placing the working directory and guest OS disk (Hard disk 1) on datastore cluster Tier-2 and the database and logging hard disks (Hard disk 2 and Hard disk 3) on datastore cluster Tier-1.
In order to configure the virtual machine to use multiple datastore clusters, click the Advanced button during the second step of the migration:
[Screenshot: Advanced button]
This screen shows the current configuration; by selecting the current datastore of a hard disk, a browse menu appears:
[Screenshot: per-disk datastore configuration]
Select the appropriate datastore cluster for each hard disk and click Next to receive the destination datastore recommendation from Storage DRS.
The working directory of the VM and Hard disk 1 are stored on datastore cluster Tier-2, while Hard disk 2 and Hard disk 3 are stored in datastore cluster Tier-1.
[Screenshot: recommendations across multiple datastore clusters]
As datastore cluster Tier-2 is configured to keep the virtual machine files together, both the working directory (designated as Configuration file in the UI) and Hard disk 1 are placed on datastore nfs-f-05. A default anti-affinity rule is applied to all new virtual machines in datastore cluster Tier-1; therefore Storage DRS recommends placing Hard disk 2 on nfs-f-07 and Hard disk 3 on datastore nfs-f-01.
Test result: Intra-VM anti-affinity rule applied on the VM. The files stored in Tier-2 are placed on a single datastore, while the virtual machine disk files stored in the Tier-1 datastore cluster are located on different datastores.
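Under the covers, the Advanced screen ultimately produces a relocate spec with per-disk destinations. The sketch below shows that per-disk shape with pyVmomi; for brevity it targets the recommended datastores directly rather than re-driving RecommendDatastores, and the datastore objects and disk ordering are assumptions for illustration.

```python
# Sketch of the per-disk relocate spec behind the Advanced screen.
# Targets datastores directly for brevity; 'ds_*' objects and the
# assumption that device order matches Hard disk 1/2/3 are illustrative.
from pyVmomi import vim

def relocate_per_disk(vm, ds_tier2, ds_tier1_a, ds_tier1_b):
    # Collect the device keys of the VM's virtual disks.
    disk_keys = [d.key for d in vm.config.hardware.device
                 if isinstance(d, vim.vm.device.VirtualDisk)]

    spec = vim.vm.RelocateSpec(
        datastore=ds_tier2,  # working directory destination
        disk=[
            vim.vm.RelocateSpec.DiskLocator(diskId=disk_keys[0],
                                            datastore=ds_tier2),    # Hard disk 1
            vim.vm.RelocateSpec.DiskLocator(diskId=disk_keys[1],
                                            datastore=ds_tier1_a),  # Hard disk 2
            vim.vm.RelocateSpec.DiskLocator(diskId=disk_keys[2],
                                            datastore=ds_tier1_b),  # Hard disk 3
        ],
    )
    return vm.RelocateVM_Task(spec=spec)
```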

| Initial VM configuration | Cluster default affinity rule | Result | Configured on |
|---|---|---|---|
| Centralized | Affinity rule | Centralized | Entire VM |
| Centralized | Anti-affinity rule | Distributed | Entire VM |
| Distributed | Anti-affinity rule | Distributed | Entire VM |
| Distributed | Affinity rule | Centralized | Entire VM |
| Distributed | Affinity rule | Centralized | Working directory + Hard disk 1 |
| Distributed | Anti-affinity rule | Distributed | Hard disk 2 and Hard disk 3 |

All types of migrations through the UI lead to a successful integration with the datastore cluster. Every migration results in the application of the correct affinity or anti-affinity rule, as set by the default affinity rule of the cluster.

Filed Under: Storage DRS, vMotion Tagged With: Storage DRS, vMotion

SIOC on datastores backed by a single datapool

December 6, 2012 by frankdenneman

Duncan posted an article today in which he brings up the question: should I use many small LUNs or a couple of large LUNs for Storage DRS? In this article he explains the differences between Storage I/O Control (SIOC) and Storage DRS and why they work well together. To re-emphasize: the goal of Storage DRS load balancing is to fix long-term I/O imbalances, while SIOC addresses short-term bursts and loads. SIOC is all about managing the queues, while Storage DRS is all about intelligent placement and avoiding bottlenecks.
Julian Wood makes an interesting remark, one that both Duncan and I hear frequently when discussing SIOC. Don’t get me wrong, I’m not picking on Julian; I’m merely noting that he makes a frequently used argument.

“There is far less benefit in using Storage IO Control to load balance IO across LUNs ultimately backed by the same physical disks than load balancing across separate physical storage pools. “

Well, when you look at the way SIOC works, I tend to disagree with this statement. As stated before, SIOC manages queues: the queues to the datastores used by the virtual machines in the virtual datacenter. Typically these virtual machines differ in workload type, in peak moments, and in importance to the organization. With the use of disk shares, important virtual machines can be assigned a higher priority within the disk queue. When contention occurs, and this is important to realize, these business-critical virtual machines get prioritized over other virtual machines. Not all important virtual machines generate a constant stream of I/O, while other virtual machines, perhaps with a lower priority, do generate a constant stream of I/O. Disk shares allow the high-priority, low-I/O virtual machines to get a foot in the door and get those I/Os to the datastore and back. Without SIOC and disk shares, you need to start thinking about increasing the queue depth of each host and about smart placement of virtual machines (both high and low I/O load) to avoid the high I/O loads landing on the same host. These placement adjustments might impact DRS load-balancing operations, possibly affecting other virtual machines along the way. Investing time in creating and managing a matrix of possible VM-to-datastore placements is not the way to go in this era of rapidly expanding datacenters.
Because SIOC is a datastore-wide scheduler, it determines the queue depth of the ESX hosts connected to the datastore and running virtual machines on it. Hosts with higher-priority virtual machines get deeper queue depths to the datastore, while hosts with lower-priority virtual machines receive shallower queue depths. To be more precise, SIOC calculates the datastore-wide latency, and each local host scheduler determines the queue depth for its queues to the datastore.
But remember, queue-depth changes only occur when there is contention, that is, when the datastore exceeds the SIOC latency threshold. For more info about SIOC latency, read “To which Host level latency statistic is the SIOC threshold related”.
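To make the mechanism concrete, here is a simplified model of share-proportional queue-depth division. This is my own illustration of the behavior described above, not VMware’s actual algorithm, and all the numbers are made up:

```python
# Simplified illustration (not the actual SIOC algorithm): divide a
# datastore-wide queue depth across hosts in proportion to the aggregate
# disk shares of the VMs they run, but only under contention.
def per_host_queue_depth(host_shares, array_queue_depth,
                         observed_latency_ms, threshold_ms,
                         default_depth=32):
    """host_shares: dict mapping host name -> sum of disk shares of its VMs."""
    if observed_latency_ms <= threshold_ms:
        # No contention: every host keeps its default device queue depth.
        return {host: default_depth for host in host_shares}

    total = sum(host_shares.values())
    return {host: max(1, round(array_queue_depth * shares / total))
            for host, shares in host_shares.items()}

# Example: esx01 runs the business-critical VMs (2000 shares), esx02 runs
# low-priority VMs (500 shares); 35 ms observed latency exceeds a 30 ms
# threshold, so esx01 ends up with the deeper queue.
print(per_host_queue_depth({'esx01': 2000, 'esx02': 500}, 64, 35, 30))
# -> {'esx01': 51, 'esx02': 13}
```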
Coming back to the argument: I firmly believe that SIOC has benefits in a shared disk-pool structure, as a lot of queues exist between the VMM and the datastore.
[Diagram: vSphere 5.1 VMObservedLatency]
Because SIOC takes the average device latency of all hosts connected to the datastore into account, it understands the overall picture when determining the correct queue depth for the virtual machines. Keep in mind, queue-depth changes occur only during contention. Now, the best part of SIOC in 5.1 is the automatic latency threshold computation. By leveraging the SIOC injector, it learns the peak value of a datastore and adjusts the SIOC threshold accordingly. The SIOC threshold is set to 90% of the peak value, giving SIOC an excellent understanding of the performance capability of the datastore. This is done on a regular basis, so it keeps the actual workload in mind. This dynamic system will give you far more performance benefit than statically setting the queue depth and DSNRO for each host.
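As a back-of-the-envelope reading of that computation (my simplified interpretation, not the actual injector algorithm): the injector characterizes throughput versus latency, and the threshold lands at the latency observed where the datastore reaches 90% of its measured peak. Sample numbers below are made up:

```python
# Simplified interpretation of the 5.1 automatic threshold computation;
# not the actual injector logic, and the sample numbers are invented.
def sioc_threshold_ms(samples):
    """samples: (throughput_iops, latency_ms) pairs measured by the injector."""
    peak_iops = max(iops for iops, _ in samples)
    target = 0.9 * peak_iops
    # Threshold: the latency observed where throughput first reaches
    # 90% of the measured peak.
    return min(lat for iops, lat in samples if iops >= target)

samples = [(2000, 4.0), (6000, 9.0), (7000, 11.0), (7500, 14.0)]
print(sioc_threshold_ms(samples))  # -> 11.0
```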
One of the main reasons for creating multiple datastores backed by a single disk pool is to create a multi-pathing environment. Together with advanced multi-pathing policies and LUN-to-controller-port mappings, you can get the most out of your storage subsystem. With SIOC, you can manage your queue depths dynamically and automatically, based on actual performance levels, while retaining the ability to prioritize at the virtual machine level.

Filed Under: SIOC Tagged With: DRS, SIOC, Storage DRS

How to create a "New Storage DRS recommendation generated" alarm

November 30, 2012 by frankdenneman

It is recommended to configure Storage DRS in manual mode when you are new to Storage DRS. This way you become familiar with the decision matrix Storage DRS uses, and you are able to review the recommendations it provides. One of the drawbacks of manual mode is the need to monitor the datastore cluster on a regular basis to discover whether new recommendations have been generated. As Storage DRS is invoked every 8 hours and doesn’t provide insight into when the next invocation run is scheduled, it becomes a bit of a guessing game whether a new load-balancing operation has occurred.
To solve this problem, it is recommended to create a custom alarm and configure the alarm to send a notification email when new Storage DRS recommendations are generated. Here’s how you do it:
Step 1: Select the object where the alarm object resides
If you want to create a custom rule for a specific datastore cluster, select the datastore cluster; otherwise, select the Datacenter object to apply the rule to each datastore cluster. In this example, I’m defining the rule on the datastore cluster object.
Step 2: Go to Manage and select Alarm Definitions
Click on the green + icon to open the New Alarm Definition wizard.
[Screenshot: datastore cluster object]
Step 3: General Alarm options
Provide the name of the alarm, as this name will be used by vCenter as the subject of the email. Provide an adequate description so that other administrators understand the purpose of this alarm.
In the Monitor drop-down box, select the option “Datastore Cluster” and select the option “specific event occurring on this object, for example VM Power On”. Click Next.
[Screenshot: new alarm definition]
Step 4: Triggers
Click on the green + icon to select the event this alarm should be triggered by. Select “New Storage DRS recommendation generated”. The other fields can be left blank, as they are not applicable for this alarm. Click Next.
[Screenshot: new recommendation trigger]
Step 5: Actions
Click on the green + icon to create a new action. You can select “Run a command”, “Send a notification email”, or “Send a notification trap”. For this exercise I selected “Send a notification email”. Specify the email address that will receive the messages warning that Storage DRS has generated a migration recommendation. Configure the alarm so that it sends a mail once when the state changes from green to yellow and from yellow to red. Click Finish.
[Screenshot: email notification configuration]
The custom alarm is now listed among the pre-defined alarms. As I chose to define the alarm on this particular datastore cluster, vCenter lists the alarm as defined on “This Object”. This particular alarm is therefore not displayed at the Datacenter level and cannot be applied to other datastore clusters in this vCenter Datacenter.
[Screenshot: new recommendation alarm listed]
Please note that you must configure a mail server when using the option “Send a notification email”, and configure a valid SNMP receiver when using the option “Send a notification trap”. To configure a mail or SNMP server, select the vCenter Server object in the inventory list, select Manage, then Settings, and click Edit. Go to Mail and provide a valid mail server address and an optional mail sender.
[Screenshot: mail server configuration in vCenter general settings]
To test the alarm, I moved a couple of files onto a datastore to violate the datastore cluster space utilization threshold. Storage DRS ran and displayed the following notifications on the datastore cluster summary screen and in the “triggered alarms” view:
[Screenshot: triggered alerts on the datastore cluster]
The moment Storage DRS generated a migration recommendation I received the following email:
[Screenshot: email message generated by vCenter]
As depicted in the screenshot above, the subject of the email generated by vCenter contains the name of the alarm you specified (notice the exclamation mark), the event itself, “New Storage DRS recommendation generated”, and the datastore cluster in which the event occurred.
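For repeatable setups, the same alarm can also be created programmatically. Here is a hedged pyVmomi sketch, assuming `si` and the datastore cluster `pod` exist; the eventTypeId string is an assumption on my part, so verify the exact event id in your environment before relying on it:

```python
# Sketch: create the "New Storage DRS recommendation generated" alarm
# via the API instead of the wizard. The eventTypeId below is an
# assumption - verify the exact id in your environment.
from pyVmomi import vim

def create_sdrs_alarm(si, pod, mail_to):
    expression = vim.alarm.EventAlarmExpression(
        eventType=vim.event.EventEx,
        eventTypeId='vim.event.StorageDrsRecommendationGenerated',  # assumed id
        objectType=vim.StoragePod,
        status='yellow',
    )
    email = vim.alarm.AlarmTriggeringAction(
        action=vim.action.SendEmailAction(
            toList=mail_to,
            subject='New Storage DRS recommendation generated',
            body='Storage DRS generated a migration recommendation.',
        ),
        # Mail once on green->yellow and once on yellow->red, as in the
        # wizard walkthrough above.
        transitionSpecs=[
            vim.alarm.AlarmTriggeringAction.TransitionSpec(
                startState='green', finalState='yellow', repeats=False),
            vim.alarm.AlarmTriggeringAction.TransitionSpec(
                startState='yellow', finalState='red', repeats=False),
        ],
    )
    spec = vim.alarm.AlarmSpec(
        name='New Storage DRS recommendation generated',
        description='Notify when Storage DRS generates recommendations',
        enabled=True,
        expression=vim.alarm.OrAlarmExpression(expression=[expression]),
        action=vim.alarm.GroupAlarmAction(action=[email]),
    )
    return si.content.alarmManager.CreateAlarm(entity=pod, spec=spec)
```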

Filed Under: Storage DRS Tagged With: Migration recommendation, Storage DRS, vCenter alarm

How to create a "New Storage DRS recommendation generated" alarm

November 30, 2012 by frankdenneman

It is recommended to configure Storage DRS in manual mode when you are new to Storage DRS. This way you become familiar with the decision matrix Storage DRS uses and you are able to review the recommendations it provides. One of the drawbacks of manual mode is the need to monitor the datastore cluster on a regular basis to discover if new recommendations are generated. As Storage DRS is generated every 8 hours and doesn’t provide insights when the next invocation run is scheduled, it’s becomes a bit of a guessing game when the next load balancing operation has occurred.
To solve this problem, it is recommended to create a custom alarm and configure the alarm to send a notification email when new Storage DRS recommendations are generated. Here’s how you do it:
Step 1: Select the object where the alarm object resides
If you want to create a custom rule for a specific datastore cluster, select the datastore cluster otherwise select the Datacenter object to apply this rule to each datastore cluster. In this example, I’m defining the rule on the datastore cluster object.
Step 2: Go to Manage and select Alarm Definitions
Click on the green + icon to open the New Alarm Definition wizard
Storage DRS datastore cluster object
Step 3: General Alarm options
Provide the name of the alarm as this name will be used by vCenter as the subject of the email. Provide an adequate description so that other administrators understand the purpose of this alarm.
In the Monitor drop-down box select the option “Datastore Cluster” and select the option “specific event occurring on this object, for example VM Power On”. Click on Next.
Storage DRS vCenter new alarm definition
Step 4: Triggers
Click on the green + icon to select the event this alarm should be triggered by. Select “New Storage DRS recommendation generated”. The other fields can be left blank, as they are not applicable for this alarm. Click on next.
Storage DRS new recommendation trigger
Step 5: Actions
Click on the green plus icon to create a new action. You can select “Run a Command”, “Send a notification email” and “Send a notification trap”. For this exercise I have selected “Send a notification email”. Specify the email address that will receive the messages containing the warning that Storage DRS has generated a migration recommendation. Configure the alarm so that it will send a mail once when the state changes from green to yellow and yellow to red. Click on Finish.
Storage DRS new recommendation alarm email notification configuration
The custom alarm is now listed between the pre-defined alarms. As I chose to define the alarm on this particular datastore cluster, vCenter list that the alarm is defined on “this Object”. This particular alarm is therefor not displayed at Datacenter level and cannot be applied to other datastore clusters in this vCenter Datacenter.
Storage DRS new recommendation alarm listed
Please note that you must configure a Mail server when using the option “send a notification email” and configure an valid SNMP receiver when using the option “Send a notification trap”. To configure a mail or SNMP server, select the vCenter server option in the inventory list, select manage, settings and click on edit. Go to Mail and provide a valid mail server address and an optional mail sender.
Configure a mail server in vCenter general settings
To test the alarm, I moved a couple of files onto a datastore to violate the datastore cluster space utilization threshold. Storage DRS ran and displayed the following notifications on the datastore cluster summary screen and at the “triggered alarm” view:
vCenter shows the following triggered alerts on the Storage DRS datastore cluster
The moment Storage DRS generated a migration recommendation I received the following email:
email message generated by vCenter Storage DRS new recommendation alarm
As depicted in the screenshot above, the subject of the email generated by vCenter contains the name of the alarm you specified (notice the exclamation mark), the event itself – New Storage DRS recommendation generated” and the datastore cluster in which the event occurred.

Filed Under: Storage DRS Tagged With: Migration recommendation, Storage DRS, vCenter alarm

VAAI hw offload and Storage vMotion between two Storage Arrays

November 6, 2012 by frankdenneman

Recently I received a question about migrating virtual machines with Storage vMotion between two storage arrays, more specifically whether VAAI is leveraged by Storage vMotion in this process. Unfortunately, VAAI is an internal array-based feature; the Clone Blocks VAAI primitive that Storage vMotion leverages is only used to copy and migrate data within the same physical array.
Datamovers
How does Storage vMotion work between two arrays? Storage vMotion uses a VMkernel component called the datamover. This component moves the blocks from the source to the destination datastore; to be more precise, it handles the block read and write I/O from and to the source and destination datastores.
The VMkernel in vSphere 4.1 and later contains multiple datamovers: the software datamovers FSDM and FS3DM, and FS3DM with hardware offloading. The most efficient is FS3DM with hardware offload, followed by FS3DM, and last the legacy datamover FSDM. FS3DM operates at the kernel level, while FSDM operates at the application level; the shorter the communication path, the faster the operation. In essence, Storage vMotion walks the stack of datamovers, trying the most efficient first before reverting to a less optimal choice. To get an idea of the difference in performance, please read the article “Storage vMotion performance difference” on Yellow-Bricks.com.
Traversing the datamover stack
When a data movement operation is invoked (e.g., Storage vMotion) and the VAAI hardware offload operation is enabled, the datamover will first attempt to use the hardware offload. If the hardware offload operation fails, the datamover reverts to the software datamovers, first FS3DM, then FSDM. As you are migrating between arrays, hardware offloading will fail and the VMkernel selects the software datamover FS3DM. If the block sizes of the datastores are not identical, Storage vMotion has to revert to the FSDM datamover. If you are migrating data between NFS datastores, Storage vMotion immediately reverts to the FSDM datamover.
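The selection logic reads like a simple fallback chain. Sketched below as a behavioral model of the description above, not actual VMkernel code:

```python
# Behavioral model of the datamover fallback chain described above
# (an illustration, not VMkernel code).
def select_datamover(same_array, vaai_enabled, same_blocksize, nfs):
    if nfs:
        return 'FSDM'  # NFS migrations go straight to the legacy datamover
    if same_array and vaai_enabled:
        return 'FS3DM-hardware offload'  # the array copies the blocks itself
    if same_blocksize:
        return 'FS3DM'  # software datamover operating at kernel level
    return 'FSDM'       # mismatched VMFS block sizes: application-level datamover

# Storage vMotion between two arrays with identical VMFS block sizes:
print(select_datamover(same_array=False, vaai_enabled=True,
                       same_blocksize=True, nfs=False))  # -> FS3DM
```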
Impact on Storage DRS datastore cluster design
Keep this in mind when designing Storage DRS datastore clusters. Storage DRS does not keep historical data of Storage vMotion lead times, and thus it cannot incorporate these metrics when generating migration recommendations. Although no performance loss will occur within the virtual machine, migrating between arrays can create overhead on the supporting infrastructure. If possible, design your datastore clusters to contain datastores within the same array and use identical block sizes (if VMFS is used).

Filed Under: Storage DRS, vMotion Tagged With: datamover, Storage DRS, Storage vMotion
