
frankdenneman.nl


Elastic vDC and how to span a provider vDC across multiple DRS clusters

March 29, 2013 by frankdenneman

vCloud Director 5.1 provides the ability to create an elastic vDC, which allows an organization vDC to consume resources from multiple DRS clusters. By having the provider vDC abstract the resources of multiple DRS clusters, it's simpler to grow capacity when needed. Before elastic vDC, a new provider vDC and new org vDCs had to be created when an org vDC wanted to grow beyond the capacity of its provider vDC. With an elastic vDC you just add new clusters when needed and allow the provider vDC to manage initial placement of vApps.
While researching elastic vDCs I discovered that the way to span a provider vDC isn't that intuitive. To save you some time, here are the steps to create a provider vDC that spans multiple DRS clusters.
Create a provider vDC, give it a name and select the highest supported hardware version. If you run a homogeneous environment with solely ESX 5.1 hosts, I highly recommend changing it to Hardware Version 9. If the clusters run different ESX versions, lower the hardware version to the appropriate supported level.
[Screenshot: 00-hardware-version-provider-vdc]
Please note that the provider vDC is responsible for initial placement of the vApp. It will place the vApp on the cluster that contains the most available “unreserved” compute resources and storage resources. It is possible that vApps of the same organization run on different ESX versions.
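The placement behavior described above can be sketched in a few lines. This is purely an illustration of the idea, not the actual vCloud Director placement engine; the cluster names, field names, and the combined scoring are my own assumptions.

```python
# Sketch of the initial placement heuristic: pick the cluster with the most
# available unreserved compute and storage resources. The real algorithm's
# internals and weighting are not public; this just illustrates the concept.

def pick_cluster(clusters):
    """clusters: list of dicts with unreserved CPU/memory and free storage."""
    def score(cluster):
        # Naive combined score; the actual engine evaluates compute and
        # storage resources separately.
        return (cluster["unreserved_cpu_mhz"]
                + cluster["unreserved_mem_mb"]
                + cluster["free_storage_gb"])
    return max(clusters, key=score)

clusters = [
    {"name": "vCloud-Cluster1", "unreserved_cpu_mhz": 8000,
     "unreserved_mem_mb": 16384, "free_storage_gb": 500},
    {"name": "vCloud-Cluster2", "unreserved_cpu_mhz": 12000,
     "unreserved_mem_mb": 32768, "free_storage_gb": 300},
]
print(pick_cluster(clusters)["name"])  # vCloud-Cluster2
```

The takeaway is simply that placement is per-vApp and capacity-driven, which is why vApps of the same organization can end up on clusters running different ESX versions.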
Select a resource pool. This screen is a little bit ambiguous: the user interface talks about resource pools, but that doesn't mean you cannot select a complete DRS cluster for consumption by the provider vDC. A DRS cluster is in essence a resource pool, the root resource pool of all its child resource pools. So don't worry if you want to select an entire cluster; as a matter of fact, when you select the vCenter it shows the DRS clusters as well as the resource pools.
In this example, the vCenter contains two DRS clusters; vCloud-Cluster1 and vCloud-Cluster2. The DRS cluster vCloud-Cluster2 contains a resource pool called RP1. Unfortunately the user interface does not use any icons to differentiate between clusters and resource pools, but shows a vCenter path notation. As RP1 is the child resource pool of vCloud-Cluster2, the vCenter path is as follows: vCloud-Cluster2/RP1.
Unfortunately the interface only allows you to select a single resource pool or cluster; therefore I select vCloud-Cluster1 and click Next.
[Screenshot: 01-select-drs-cluster]
Select an appropriate storage profile and click Next. The Ready to Complete screen displays an overview of your selected configuration. Click Finish to create the provider vDC.
[Screenshot: 02-Ready-to-Complete-Provider-vDC]
At this point in time, the provider vDC maps to only one DRS cluster. To add additional clusters, go to the Manage and Monitor tab and select Provider vDCs.
[Screenshot: 03-Provider-vDCs-overview]
Click on the provider vDC and select the Resource Pools tab.
[Screenshot: 04-Provider-vDC-menu]
Click on the green plus icon to add another DRS cluster. The Attach Resource Pool window is displayed, and you can select another cluster from the same vCenter as the primary cluster. Please note that a provider vDC can only span clusters managed by the same vCenter Server. Click Finish to add the DRS cluster to the provider vDC.
[Screenshot: 05-Select-resource-pool]
The provider vDC is now able to provide resources from multiple DRS clusters. In vCloud Director 5.1, org vDCs using either the Pay-As-You-Go or the Allocation Pool model are able to consume resources from an elastic vDC. Changes were needed to allow the Allocation Pool model to leverage an elastic vDC; Massimo Re Ferrè wrote an extensive post about the changes to the different allocation models in vCloud Director 5.1.

Filed Under: VMware

Would you be interested in Storage-level reservations?

March 26, 2013 by frankdenneman

In today's world it's quite common to virtualize higher-priority / tier-1 applications and services. These applications and services are usually subject to service level agreements that typically include requirements for strong performance guarantees. For the compute resources (CPU and memory) we rely on the virtualization layer to provide that resource allocation solution by setting reservations, shares and limits. You might want to ensure that the storage requirements of these virtual machines are met as well, and that these workloads are not impacted when contention for storage resources occurs.
Today vSphere offers Storage I/O Control (SIOC), which allocates I/O resources based on virtual machine priority when datastore latency exceeds a threshold. Shares identify priority while limits restrict the number of IOPS for a virtual machine. Although these are useful controls, they do not provide a method to define a minimum number of IOPS that is available to the application at all times. Providing lots of shares to these virtual machines can help to meet the SLA; however, continuously calculating the correct share value in a highly dynamic virtual datacenter is a cumbersome and complex job.
Storage level reservations
Therefore we are working on Storage level reservations. A storage reservation allows you to specify a minimum number of IOPS that should be available to the virtual machine at all times. This allows the virtual machine to make minimum progress in order to comply with the service level agreement.
In a relatively closed environment such as the compute layer it's fairly easy to guarantee a minimum level of resource availability, but when it comes to a shared storage platform new challenges arise. The hypervisor owns the compute resources and distributes them to the virtual machines it hosts. In a shared storage environment we are dealing with multiple layers of infrastructure, each susceptible to congestion and contention. And then there is the possibility of multiple external storage resource consumers, such as non-virtualized workloads using the same array, impacting both the availability of resources and the control over distributing them. These challenges must be taken into account when developing storage reservations, and we must understand how stringent you want the guarantee to be.
One of the questions we are dealing with is whether you would like strict admission control or relaxed admission control. With strict admission control, a virtual machine power-on operation is denied when vSphere cannot guarantee the storage reservation (similar to compute reservations). Relaxed admission control turns storage reservations into a share-like construct, defining relative priority at times when not enough IOPS are available at power-on. For example: the storage reservation on VM1 = 800 and on VM2 = 200. At boot 600 IOPS are available; therefore VM1 gets 80% of 600 = 480 IOPS, while VM2 gets 20%, i.e. 120 IOPS. When the array is able to provide more IOPS, the correct number of IOPS is distributed to the virtual machines in order to satisfy the storage reservations.
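The relaxed admission control arithmetic above can be written out as a small sketch. The function name and signature are mine for illustration only; this is not a vSphere API, just the proportional split the example describes.

```python
# Sketch of relaxed admission control: when the datastore cannot deliver the
# sum of the reservations, each VM receives IOPS in proportion to its
# reservation, so reservations degrade into a share-like construct.

def distribute_iops(reservations, available):
    """reservations: {vm_name: reserved_iops}; available: IOPS the array offers."""
    total = sum(reservations.values())
    if available >= total:
        # Enough IOPS available: every reservation is satisfied outright.
        return dict(reservations)
    # Shortfall: split the available IOPS proportionally to the reservations.
    return {vm: available * res / total for vm, res in reservations.items()}

print(distribute_iops({"VM1": 800, "VM2": 200}, 600))   # {'VM1': 480.0, 'VM2': 120.0}
print(distribute_iops({"VM1": 800, "VM2": 200}, 1200))  # {'VM1': 800, 'VM2': 200}
```

The first call reproduces the worked example: with only 600 IOPS at boot, VM1's 800-IOPS reservation translates into 80% of what is available.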
In order to decide which features to include and to define the behavior of storage reservations, we are very interested in your opinion. We have created a short list of questions, and by answering them you can help us define our priorities during the development process. I intentionally kept the questions to a minimum so that completing the survey would not take more than 5 minutes of your time.
Disclaimer
As always, this article provides information about a feature that is currently under development. This means the feature is subject to change, and neither VMware nor I in any way promise to deliver on any features mentioned in this article or survey.
Any other ideas about storage reservations? Please leave a comment below.
The survey is closed; thanks for your interest in participating.

Filed Under: VMware Tagged With: Storage reservation

Hello world! Again

March 25, 2013 by frankdenneman

During my holiday, frankdenneman.nl got some unwanted attention.
I’m currently in the process of rebuilding the site.
Stay tuned for new updates!

Filed Under: Uncategorized

WOW, voted number 2 of top virtualization blogs!

March 12, 2013 by frankdenneman

Voted number 2 of top virtualization blogs
As many other IT addicts do, the first thing I do is pick up my phone to see what's new on Twitter, Google+ and Facebook, and to my surprise I received a lot of direct messages and mentions congratulating me on taking the second spot on the top 25 virtualization blog list. WOW, talk about excitement! From drowsy to uber-hyped in under a millisecond.
Thanks for voting for me! I really appreciate the recognition. I love to blog and write articles, and when I'm not researching I'm thinking of topics I can cover. Reaching the number 2 spot proves I'm doing something you all like. But actually I want to thank you for taking the time to vote on any of the top 25 blogs. Everybody spends a great deal of time researching and writing articles; getting votes is a great way to receive acknowledgement for your hard work.
A big thank you goes out to Eric for organizing this competition again. Awesome work, and thanks for putting in all the effort. The stats show that this event is becoming more and more of an industry event, organized by community members for community members. Great stuff. John, David, Simon: similar to last year, a great vChat. A delight to watch! BTW, thank you for the compliments! It's always cool to hear some background details on the top 25 bloggers. I encourage you to watch the special vChat; it's great entertainment!
Congrats to Duncan for taking the number 1 spot. Well deserved! I know how much effort you put into the blog. Outstanding stuff. Congrats to the rest of the top 25, and a special congrats goes out to Cormac. Well deserved to enter the top 10. If you are on Twitter, make sure you follow each and every one of the top 25. These guys are a special bunch, all passionate about virtualization and a great bunch of people in general. Here is the list of the top 25 on Twitter:

Rank Name Twitter
01 Duncan Epping @DuncanYB
02 Frank Denneman @FrankDenneman
03 Scott Lowe @scott_lowe
04 Eric Sloof @ESloof
05 Chad Sakac @SakacC
06 William Lam @LamW
07 Mike Laverick @Mike_Laverick
08 Alan Renouf @AlanRenouf
09 Cormac Hogan @VMwareStorage
10 Eric Siebert @EricSiebert
11 Jason Boche @JasonBoche
12 Chris Wahl @Wahlnetwork
13 Vaughn Stewart @vStewed
14 Andre Leibovici @AndreLeibovici
15 Luc Dekens @LucD
16 Vladan Seget @vladan
17 Nick Howell @that1guynick
18 Stephen Foskett @SFoskett
19 Gabrie van Zanten @gabvirtualworld
20 Tommy Trogden @vtexan
21 Michael Webster @vcdxnz001
22 Kendrick Coleman @KendrickColeman
23 Simon Seagrave @kiwi_si
24 Derek Seaman @vDerekS
25 Brian Madden @BrianMadden

Filed Under: Miscellaneous

There is a new fling in town: DRMdiagnose

February 28, 2013 by frankdenneman

This week the DRMdiagnose fling was published. It was produced by the resource management team and, just in case you are wondering, DRM stands for Distributed Resource Manager, the internal code name for DRS. Download DRMdiagnose at the VMware fling site. Please note that this fling only works on vSphere 5.1 environments.
Purpose of DRMdiagnose
This tool was created to help you understand the impact on a virtual machine's own performance, and on the other virtual machines in the cluster, when the resource allocation settings of a virtual machine are changed. DRMdiagnose compares the current resource demand of the virtual machine and suggests changes to the resource allocation settings to achieve the appropriate performance. This tool can assist you in meeting service level agreements by providing feedback on the desired resource entitlement. Although you might know what performance you want for a virtual machine, you might not be aware of the impact or consequences an adjustment might have on other parts of the resource environment or on cluster policies. DRMdiagnose provides recommendations that meet the resource allocation requirements of the virtual machine with the least amount of impact. A DRMdiagnose recommendation could look like this:

Increase CPU size of VM Webserver by 1
Increase CPU shares of VM Webserver by 4000
Increase memory size of VM Database01 by 800 MB
Increase memory shares of VM Database01 by 2000
Decrease CPU reservation of RP Silver by 340 MHz
Decrease CPU reservation of VM AD01 by 214 MHz
Increase CPU reservation of VM Database01 by 1000 MHz

How does it work
DRMdiagnose reviews the DRS cluster snapshot. This snapshot contains the current cluster state and the resource demand of the virtual machines. The cluster snapshot is stored on the vCenter server; the snapshot files can be found at:

  • vCenter server appliance: /var/log/vmware/vpx/drmdump/clusterX/
  • vCenter server Windows 2003: %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\drmdump\clusterX\
  • vCenter server Windows 2008: %ALLUSERSPROFILE%\VMware\VMware VirtualCenter\Logs\drmdump\clusterX\

The fling can be run in three modes:

  1. Default: Given a link to a drmdump, it lists all the VMs in the cluster, and their current demands and entitlements.
  2. Guided: Given a link to a drmdump, and a target allocation for the VM, generates a set of recommendations to achieve it.
  3. Auto: Given a link to a drmdump, generates a recommendation to satisfy the demand of the most distressed VM (the VM for which the gap between demand and entitlement is the highest).

Two things to note:
One: the fling does not have to run on the vCenter server itself. Just install the fling on your local Windows or Linux system, copy over the latest drmdump file and run the fling. Two: the drmdump file is zipped (GZ); unzip the file first and run DRMdiagnose against the .dump file. A “normal” dump file should look like this:
[Screenshot: 00-drmdump]
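The copy-and-unzip step can be sketched as follows. The file name is a stand-in (a real dump comes from the vCenter paths listed above), and the DRMdiagnose invocation at the end is assumed rather than verified; check the fling's readme for the exact syntax.

```shell
# Create a stand-in gzipped dump purely for illustration; on a real system you
# would copy the latest file from the drmdump directory on the vCenter server.
printf 'cluster snapshot placeholder' > proto_drmdump.dump
gzip -f proto_drmdump.dump             # vCenter stores the dumps gzipped like this
gunzip -f proto_drmdump.dump.gz        # unzip before feeding the .dump to the fling
# drmdiagnose proto_drmdump.dump > report.txt   # invocation assumed, not verified
```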
How to run:
Open a command prompt in Windows:
[Screenshot: 01-command-prompt]
This command provides the default output: a list with CPU and memory demand as well as entitlement for each virtual machine. Instead of showing it on screen, I chose to redirect it to a file as the output contains a lot of data.
A future article will expand on auto-mode and guided-mode use of DRMdiagnose. In the meantime, I suggest downloading DRMdiagnose and reviewing your current environment.

Filed Under: DRS

