
Avoiding VMDK level over-commitment while using Thin disks and Storage DRS

October 1, 2012 by frankdenneman

The behavior of thin provisioned VMDKs in a datastore cluster is quite interesting. Storage DRS supports the use of thin provisioned disks and is aware of both the configured size and the actual data usage of the virtual disk. When determining the placement of a virtual machine, Storage DRS verifies the disk usage of the files stored on the datastore. To avoid being caught out by sudden data growth of the existing thin disk VMDKs, Storage DRS adds buffer space to each thin disk. This buffer zone is determined by the advanced setting “PercentIdleMBinSpaceDemand”. This setting controls how conservative Storage DRS is when determining the available space on the datastore for load balancing and initial placement operations of virtual machines.
IdleMB
The main element of the advanced option “PercentIdleMBinSpaceDemand” is the amount of IdleMB a thin-provisioned VMDK file contains. When a thin disk is configured, the user determines the maximum size of the disk. This configured size is referred to as “provisioned space”. When a thin disk is in use, it contains a certain amount of data. The size of the actual data inside the thin disk is referred to as “allocated space”. The space between the allocated space and the provisioned space is identified as the IdleMB. Let’s use this in an example. VM1 has a single VMDK on Datastore1. The total configured size of the VMDK is 6GB. VM1 has written 2GB to the VMDK, which means the amount of IdleMB is 4GB.

PercentIdleMBinSpaceDemand
The PercentIdleMBinSpaceDemand setting defines the percentage of IdleMB that is added to the allocated space of a VMDK during the free space calculation of the datastore. The default value is 25%. Applying this to the previous example, the percentage is applied to the 4GB of unallocated space: 25% of 4GB = 1GB.
Entitled Space Use
Storage DRS adds the result of the PercentIdleMBinSpaceDemand calculation to the allocated space to determine the “entitled space use”. In this example, the entitled space use is 2GB + 1GB = 3GB.

Calculation during placement
The size of Datastore1 is 10GB. VM1’s entitled space use is 3GB, which means Storage DRS determines that Datastore1 has 7GB of available free space.
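
To make the arithmetic concrete, here is the same calculation in a few lines of Python. This is an illustrative sketch only, not Storage DRS code; the function name is mine, and percent_idle mirrors the PercentIdleMBinSpaceDemand setting.

    # Worked example of the entitled space use calculation described above.
    # Illustrative sketch only; not Storage DRS internals.
    def entitled_space_gb(provisioned_gb, allocated_gb, percent_idle=25):
        """Allocated space plus a percentage of the idle (unallocated) space."""
        idle_gb = provisioned_gb - allocated_gb      # IdleMB in GB: 6 - 2 = 4
        buffer_gb = idle_gb * percent_idle / 100     # 25% of 4GB = 1GB
        return allocated_gb + buffer_gb              # 2GB + 1GB = 3GB

    datastore_gb = 10
    vm1 = entitled_space_gb(provisioned_gb=6, allocated_gb=2)
    print(vm1)                  # 3.0 -> entitled space use of VM1
    print(datastore_gb - vm1)   # 7.0 -> free space Storage DRS sees on Datastore1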
Changing the PercentIdleMBinSpaceDemand default setting
Any value from 0% to 100% is valid, and the setting is applied at the datastore cluster level. There can be multiple reasons to change the default percentage. With 0%, Storage DRS uses only the allocated space, allowing a high consolidation ratio. This might be useful in environments with static or extremely slow data growth.
There are multiple use cases for setting the percentage to 100%, effectively disabling over-commitment at the VMDK level. Setting the value to 100% forces Storage DRS to use the full size of the VMDK in its space usage calculations. Many customers are comfortable managing over-commitment of capacity only at the storage array layer; this change allows them to use thin disks on thin-provisioned datastores.
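Using the numbers from the earlier example, the sketch above makes the difference easy to verify: entitled_space_gb(6, 2, percent_idle=0) returns 2GB, so the 10GB datastore shows 8GB free, while percent_idle=100 returns the full 6GB provisioned size, leaving only 4GB free.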
Use case 1: NFS datastores
One use case is NFS datastores. The default behavior of vSphere is to create thin disks when a virtual machine is placed on an NFS datastore, forcing the customer to accept a risk of over-commitment at the VMDK level. By setting the value to 100%, Storage DRS uses the provisioned space instead of the allocated space during free space calculations.
Use case 2: Safeguard to protect against unintentional use of thin disks
This setting can also be used as a safeguard against unintentional use of thin disks. Many customers have multiple teams managing the virtual infrastructure: one team manages the architecture, while another is responsible for provisioning the virtual machines. The architecture team does not want over-commitment at the VMDK level, but depends on the provisioning team to follow the guidelines and use only thick disks. By setting “PercentIdleMBinSpaceDemand” to 100%, the architecture team is assured that Storage DRS calculates datastore free space based on provisioned space, simulating “thick disks only” behavior.
Use case 3: Reducing Storage vMotion overhead while avoiding over-commitment
By setting the percentage to 100%, no over-commitment is allowed on the datastore, yet the efficiency advantage of thin disks remains. Storage DRS uses the allocated space to calculate the risk and the cost of a migration recommendation when a datastore violates its I/O or space utilization threshold. This allows Storage DRS to select the VMDK that generates the lowest amount of overhead. vSphere only needs to move the used data blocks instead of all the zeroed-out blocks, reducing CPU cycles, and the overhead on the storage network is reduced as only used blocks need to traverse it.
Get notified of these blog postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Storage DRS Tagged With: Storage DRS, Thin disk

Apply User-defined Storage Capabilities to multiple datastores at once

October 1, 2012 by frankdenneman

To get a datastore cluster to surface a (user-defined) storage capability, all datastores inside the datastore cluster must be configured with the same storage capability.

When creating storage capabilities, the UI does not offer a view for associating a storage capability with multiple datastores at once. That does not mean the web client lacks the ability to do so, however: just use the multi-select function of the web client. (A scripted sketch of the same idea follows the steps below.)
Go to Storage, select the datastore cluster, select Related Objects and go to the Datastores view. To select all datastores, click the first datastore, hold Shift and click the last datastore. Right-click and select “Assign Storage Capabilities”.

Select the appropriate storage capability and click on OK.

The Datastore Cluster summary tab now shows the user-defined Storage Capability.
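
For bulk operations outside the UI, the multi-select trick translates to a simple loop over the datastores in the cluster. The sketch below is purely illustrative: assign_capability() is a hypothetical stand-in, not a real vSphere API call.

    # Hypothetical sketch: apply one user-defined storage capability to every
    # datastore in a datastore cluster. assign_capability() is a stand-in for
    # whatever call your automation toolkit exposes; it is not a real vSphere API.
    def assign_capability(datastore, capability):
        # A real implementation would call the profile-driven storage service here.
        print(f"Assigned capability '{capability}' to datastore '{datastore}'")

    for ds in ["Datastore1", "Datastore2", "Datastore3"]:
        assign_capability(ds, "Gold")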

Get notified of these blog postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Storage DRS Tagged With: assign multiple user-defined storage capabilities, Storage Profiles

Storage DRS demo available on VMware TV

September 27, 2012 by frankdenneman

If you haven’t seen Storage DRS in action, check out the Storage DRS demo I’ve created for VMwareTV.

Get notified of these blog postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Storage DRS

How to create VM to Host affinity rules using the web client

September 21, 2012 by frankdenneman

This article shows you how to create a VM to Host affinity rule using the new web client. (If you prefer scripting, minimal pyVmomi sketches of the equivalent calls follow the group-creation and rule-creation steps below.)
1. Select Hosts and Clusters in the home screen.
2. Select the appropriate cluster.
3. Select the Manage tab and click on Settings.

4. Click on the >> to expand the Cluster settings menu.

5. Select DRS Groups.
6. Click on Add to create a DRS Group.
The dropdown box provides the ability to create a VM DRS group and a Host DRS group. The behavior of this window is a little tricky. When you create a group, you need to click on OK to actually create the group. If you create a VM DRS group first and then select the Host DRS group in the dropdown box before you click OK, the VM DRS group configuration is discarded.

7. Create the VM DRS Group and give the VM group a meaningful name.
8. Click on “Add” to select the virtual machines.
9. Click on OK to add the virtual machines to the group.

10. Review the configuration and click on OK to create the VM DRS Group.
11. Click on “Add” again to create the Host DRS Group.
12. Select Host DRS Group in the dropdown box and provide a name for the Host DRS Group.
13. Click on “Add” to select the hosts that participate in this group.
14. Click on OK to add the hosts to the group.

15. Review the configuration and click on OK to create the Host DRS Group.
16. The DRS Groups view displays the different DRS groups in a single view.
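
As mentioned in the introduction, the group creation can also be scripted. Below is a minimal pyVmomi sketch of steps 5 through 16, assuming an already-connected session in which cluster, vms and hosts are managed object references you have looked up; the group names are illustrative.

    # Minimal pyVmomi sketch of steps 5-16: create a VM DRS group and a Host
    # DRS group in a single cluster reconfiguration. Group names are examples.
    from pyVmomi import vim

    def create_drs_groups(cluster, vms, hosts):
        vm_group = vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.VmGroup(name="VM-Group-Web", vm=vms),
        )
        host_group = vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.HostGroup(name="Host-Group-RackA", host=hosts),
        )
        spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group, host_group])
        # modify=True merges this change into the existing cluster configuration
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)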

The groups are created; now it’s time to create the rules.
17. Select DRS Rules in the Cluster settings menu.

18. Click on “Add” to create the rule.

19. Provide a name for this rule and verify that the rule is enabled (it is enabled by default).
20. Select the “Virtual Machines to Hosts” rule in the Type dropdown box.
21. Select the appropriate VM Group and the corresponding Host Group.
22. Select the type of affinity rule. For more information about the difference between should and must rules, read the article “Should or Must VM-Host affinity rules?”. In this example I’m selecting the should rule.
23. Click on OK to create the rule.
24. Review your configuration in the DRS Rules screen.
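
The rule itself can be scripted the same way. The sketch below mirrors steps 17 through 23 and reuses the group names from the previous sketch; note that mandatory=False corresponds to a “should” rule and mandatory=True to a “must” rule.

    # Minimal pyVmomi sketch of steps 17-23: create a "should" VM-Host
    # affinity rule linking the two DRS groups created earlier.
    from pyVmomi import vim

    def create_vm_host_should_rule(cluster):
        rule = vim.cluster.RuleSpec(
            operation="add",
            info=vim.cluster.VmHostRuleInfo(
                name="Web-on-RackA",
                enabled=True,        # step 19: the rule is enabled
                mandatory=False,     # False = "should" rule, True = "must" rule
                vmGroupName="VM-Group-Web",
                affineHostGroupName="Host-Group-RackA",
            ),
        )
        spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule])
        return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)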

Get notified of these blog postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Uncategorized

Technical paper: “VMware vCloud Director Resource Allocation Models” available for download

September 20, 2012 by frankdenneman

Today the technical paper “VMware vCloud Director Resource Allocation Models” has been made available for download on VMware.com.
This whitepaper covers the allocation models used by vCloud Director 1.5 and how they interact with the vSphere layer. It helps you correlate the vCloud allocation model settings with the vSphere resource allocation settings; for example, what happens at the vSphere layer when you set a guarantee on an Org vDC configured with the Allocation Pool model. It provides insight into the distribution of resources at both the vCloud and vSphere layers and illustrates the impact of various allocation model settings on vSphere admission control. The paper contains a full chapter on allocation models in practice and demonstrates the effect of using various combinations of allocation models within a single provider vDC.
Please note that this paper is based on vCloud Director 1.5.
http://www.vmware.com/resources/techresources/10325

Filed Under: VMware
