
frankdenneman.nl


Want to have a vSphere 5.1 clustering deepdive book for free?

October 17, 2012 by frankdenneman

vSphere 5.1 clustering deepdive CloudPhysics cover
Want to have a vSphere 5.1 Clustering Deepdive book for free? CloudPhysics is giving away a number of vSphere 5.1 Clustering Deepdive books. Do the following if you want to receive a copy:
Action required

  • Email info@cloudphysics.com with a subject of “Book”. No message is needed.
  • Register at http://www.cloudphysics.com/ by clicking “SIGN UP”.
  • Install the CloudPhysics Observer vApp to activate your dashboard.

Eligibility rules

  • You are a new CloudPhysics user.
  • You fully install the CloudPhysics ‘Observer’ vApp in your vSphere environment.

The first 150 users get a free book. But what’s even better, the CloudPhysics service gives you great insight into your current environment. For more info, read the following blog posts:
CloudPhysics in a nutshell and VM reservations and limits card – a closer look

Filed Under: Uncategorized

HA admission control is not a capacity management tool.

October 17, 2012 by frankdenneman

I receive a lot of questions about why HA admission control doesn’t seem to work when virtual machines are not configured with VM-level reservations. If no VM-level reservations are used, the cluster will indicate a failover capacity of 99%, ignoring the CPU and memory configuration of the virtual machines. Usually my reply is that HA admission control is not a capacity management tool, and I noticed I have been using this statement more and more lately. As explaining it on a per-customer basis doesn’t scale well, it seemed like a good idea to write a blog article about it.
The basics
Sometimes it’s better to review the basics again and understand where the perception of HA and the actual intended purpose of the product part ways.
Let’s start with what HA admission control is designed for. In the Availability Guide the following two statements can be found. Quote 1:

“vCenter Server uses admission control to ensure that sufficient resources are available in a cluster to provide failover protection and to ensure that virtual machine resource reservations are respected.”

Let’s dive into the first quote, and especially this statement: “to ensure that sufficient resources are available in a cluster”. The key element is the word sufficient (resources). What sufficient means for customer A does not mean sufficient for customer B. As HA does not have an algorithm that decodes the meaning of the word sufficient for each customer, HA relies on the customer to set vSphere resource allocation settings to indicate the importance of resource availability for the virtual machine during resource contention scenarios.
As we are going back to the basics, let’s have a quick look at the resource allocation settings that are used in this case: reservations and shares. A reservation indicates the minimum level of resources available to the virtual machine at all times. This reservation guarantees – or protects might be a better word – the availability of physical resources to the virtual machine, regardless of the level of contention. No matter how high the contention in the system, the reservation restricts the VMkernel from reclaiming those particular CPU cycles or memory pages.
This means that when a VM with a reservation is powered on, admission control needs to verify that the host can provide these resources at all times. As the VMkernel cannot reclaim those resources, admission control makes sure that when it lets the virtual machine in, it can keep its promise of providing these resources all the time, and it also checks that admitting the VM won’t introduce problems for the VMkernel itself or for other virtual machines with a reservation. This is the reason why I like to call admission control the virtual bouncer.
Besides reservations we have shares, and shares indicate the relative priority of resource access during contention. A better term to describe this behavior is “opportunistic access”. As the virtual machine is not configured with a reservation, it gives the VMkernel a more relaxed approach to resource distribution. When resource contention occurs, the VMkernel does not need to provide the configured resources at all times, but can distribute resources based on activity and on the relative priority defined by the shares of the virtual machines requesting the resources. Virtual machines configured only with shares simply receive what they can get; there is no restrictive setting for the VMkernel to worry about when running out of resources. Basically, these virtual machines just get what’s left.
In the case of shares, it is the VMkernel that decides which VM gets how many resources, in a relaxed and very social way, whereas virtual machines configured with a reservation DEMAND to have their reservations available at all times and do not care about the needs of others.
In other words, the VMkernel MUST provide the resources to the virtual machines with reservations first and then divvy up the rest amongst the virtual machines that opted for opportunistic distribution (shares).
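The reservations-first, shares-afterwards behavior can be sketched in a few lines of Python. This is a simplified model of my own, not actual VMkernel code; all names and numbers are illustrative:

```python
def admit_vm(host_capacity, vmkernel_overhead, existing_reservations, new_reservation):
    """The 'virtual bouncer': only admit a VM if the host can guarantee its
    reservation at all times, on top of the VMkernel's own needs and the
    reservations already promised to other VMs."""
    unreserved = host_capacity - vmkernel_overhead - sum(existing_reservations)
    return new_reservation <= unreserved

def divide_leftover(leftover, shares):
    """Opportunistic access: whatever is left after all reservations are
    honored is divided among share-only VMs in proportion to their shares."""
    total_shares = sum(shares.values())
    return {vm: leftover * s / total_shares for vm, s in shares.items()}
```

For example, with 32 GB of host memory, 2 GB of VMkernel overhead and 12 GB already reserved, a VM asking for a 16 GB reservation is admitted (18 GB is still unreserved), while a 20 GB request is turned away; the remaining memory is then split among share-only VMs in proportion to their share values.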
How does this tie in with HA admission control?
The second quote gives us this insight:

“vSphere HA: Ensures that sufficient resources in the cluster are reserved for virtual machine recovery in the event of host failure.”

We know that admission control checks whether enough resources are available to satisfy the VM-level reservation without interfering with VMkernel operations or the VM-level reservations of other virtual machines running on that host. As HA is designed to provide an automated method of host failure recovery, we need to make sure that once a virtual machine is up and running, it can continue to run on another host in the cluster if the current host fails. Therefore, the purpose of HA admission control is to regulate and check whether enough resources are available in the cluster to satisfy the virtual machine-level reservations after a host failure occurs.
Depending on the admission control policy, it calculates the capacity required for a failover based on the available resources, while still complying with the VMkernel resource management rules. Therefore it only needs to look at VM-level reservations, as shares will follow the opportunistic access method.
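To illustrate why a cluster without VM-level reservations reports a failover capacity near 99%, here is a rough sketch of the percentage-based policy. The 32 MHz default for unreserved VMs is my assumption of the documented default value, and the real calculation also accounts for memory and per-VM overhead; treat this purely as a model:

```python
DEFAULT_CPU_MHZ = 32  # assumed default used for VMs without a CPU reservation

def cpu_failover_capacity(cluster_capacity_mhz, vm_reservations_mhz):
    """Fraction of cluster CPU capacity not claimed by reservations.
    VMs without a reservation only count for a tiny default value, so the
    reported capacity stays near 100% no matter how large the VMs are."""
    reserved = sum(r if r > 0 else DEFAULT_CPU_MHZ for r in vm_reservations_mhz)
    return (cluster_capacity_mhz - reserved) / cluster_capacity_mhz
```

Twenty VMs without reservations on a 50 GHz cluster claim only 640 MHz in this model, so the reported capacity rounds to 99%, regardless of how many vCPUs those VMs are configured with.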
Semantics of sufficient resources in a shares-only design
In essence, if you use shares, HA relies on you to determine whether the virtual machine will receive the resources you think are sufficient. The VMkernel is designed to allow for memory overcommitment while providing performance. HA is just the virtual bouncer that counts the number of heads before it lets the virtual machine into “the club”. If you are on the list for a table, it will get you that table; if you don’t have a reservation, HA does not care if you end up sitting at a 4-person table with 10 other people fighting for your drinks and food. HA relies on the waiters (resource management) to get you (enough) food as quickly as possible. If you want good service and some room at your table, it’s up to you to reserve.
Get notification of these blogs postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: VMware Tagged With: admission control, HA

Partially connected datastore clusters – where can I find the warnings and how do I solve them via the web client?

October 15, 2012 by frankdenneman

During my Storage DRS presentation at VMworld I talked about datastore cluster architecture and covered the impact of partially connected datastore clusters. In short – when a datastore in a datastore cluster is not connected to all hosts of the connected DRS cluster, the datastore cluster is considered partially connected. This situation can occur when not all hosts are configured identically, or when new ESXi hosts are added to the DRS cluster.
The problem
I/O load balancing does not support partially connected datastores in a datastore cluster, and Storage DRS disables I/O load balancing for the entire datastore cluster – not only for that single partially connected datastore, but for the whole cluster, effectively degrading a complete feature set of your virtual infrastructure. Therefore, having a homogeneous configuration throughout the cluster is imperative.
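The all-or-nothing behavior can be expressed as a simple connectivity check. This is an illustrative model of my own, not actual Storage DRS code:

```python
def io_load_balancing_supported(datastore_connections, cluster_hosts):
    """Storage DRS keeps I/O load balancing enabled only if every datastore
    in the datastore cluster is connected to every host of the DRS cluster;
    a single partially connected datastore disables it for all of them."""
    required = set(cluster_hosts)
    return all(required <= set(hosts) for hosts in datastore_connections.values())
```

In this model, one datastore missing one host connection flips the result to False for the entire datastore cluster, which mirrors the feature-wide impact described above.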
Warning messages
An entry is listed in the Storage DRS Faults window. In the vSphere web client:
1. Go to Storage.
2. Select the datastore cluster.
3. Select Monitor.
4. Select Storage DRS.
5. Select Faults.
Storage DRS IO load balancing component was not run due to an unsupported configuration or insufficient information about datastores or virtual disks
The Connectivity menu option shows the Datastore Connection Status; in the case of a partially connected datastore, the message “Datastore Connection Missing” is listed.
Storage DRS datastore is missing
When clicking on the entry, the details are shown in the lower part of the view:
Storage DRS missing datastore connection
Returning to a fully connected state
To solve the problem, you must connect or mount the datastores to the newly added hosts. In the web client this is considered a host operation; therefore, select the datacenter view and select the Hosts menu option.
1. Right-click on a newly added host
2. Select New Datastore
3. Provide the name of the existing datastore
Storage DRS add new datastore
4. Click on Yes when the warning “Duplicate NFS Datastore Name” is displayed.
vSphere 5.1 Duplicate NFS Datastore Name
5. As the UI uses the existing information, click Next until you reach Finish.
6. Repeat steps for other new hosts.
After connecting all the new hosts to the datastore, check the Connectivity view in the Monitor menu of the datastore cluster.
Storage DRS all datastores connected
Get notification of these blogs postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Storage DRS Tagged With: Partially connected, Storage DRS

vSphere 5.1 DRS advanced option LimitVMsPerESXHost

October 10, 2012 by frankdenneman

During the Resource Management group discussion here at VMworld Barcelona, a customer asked me about limiting the number of VMs per host. vSphere 5.1 contains an advanced option on DRS clusters to do this. If the advanced option “LimitVMsPerESXHost” is set, DRS will not admit or migrate more VMs to a host than that number. For example, when setting LimitVMsPerESXHost to 40, each host allows up to 40 virtual machines.
No correction for existing violation
Please note that DRS will not correct any existing violation if the advanced option is set while virtual machines are active in the cluster. This means that if you set LimitVMsPerESXHost to 40 while 45 virtual machines are running on an ESXi host, DRS will not migrate virtual machines off that host. However, it does not allow any more virtual machines on the host: DRS will not allow any power-ons or migrations to the host, whether manual (by the administrator) or automatic (by DRS).
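The admit-but-don’t-correct semantics can be summarized in two small functions. Again, this is my own illustrative model of the behavior described above, not product code:

```python
def placement_allowed(host_vm_count, limit):
    """Any power-on or migration to the host - manual or by DRS - is only
    allowed while the host stays below LimitVMsPerESXHost."""
    return host_vm_count < limit

def violation_corrected(host_vm_count, limit):
    """An existing violation is left in place: DRS never migrates VMs off
    a host just because it already exceeds the limit."""
    return False
```

A host already running 45 VMs against a limit of 40 keeps its 45 VMs, but does not accept a 46th.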
High Availability
As this is a DRS cluster setting, HA will not honor it during a host failover operation. This means that HA can power on as many virtual machines on a host as it deems necessary. This avoids a denial of service caused by not allowing virtual machines to power on when LimitVMsPerESXHost is set too conservatively.
Impact on load balancing
Please be aware that this setting can impact VM happiness, as it can restrict DRS in finding a balance with regards to CPU and memory distribution.
Use cases
This setting is primarily intended to contain the failure domain. A popular analogy to describe this setting is “limiting the number of eggs in one basket”. As virtual infrastructures are generally dynamic, try to find a value that restricts the impact of a host failure without restricting growth of the virtual machines.
I’m really interested in feedback on this advanced setting: whether you are considering implementing it, your use case, and whether you would like to see this setting developed further.
Get notification of these blogs postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: DRS

From the archives – An old Isometric diagram

October 5, 2012 by frankdenneman

While searching for a diagram I stumbled upon an old one I made in 2007. I think this diagram started my whole obsession with diagrams and with adding “cleanness” to them.

This diagram depicts a virtual infrastructure located in two datacenters with replication between them. This infrastructure is no longer in use, but to be absolutely sure, I changed the device names into generic text labels such as ESX host, array, SW switch, etc. Back then I really liked to draw in an isometric style. Now I’m more focused on block diagrams and on minimizing the number of components in a diagram. In essence I follow the words of Colin Chapman: simplify, then add lightness. But applied to diagrams 🙂
The fact that this diagram is still stored on my system tells me that I’m still very proud of it. So that made me wonder: which diagram did you design that you are proud of?
Get notification of these blogs postings and more DRS and Storage DRS information by following me on Twitter: @frankdenneman

Filed Under: Miscellaneous

