The previous article outlines the multiple admission controls active in a virtual infrastructure. The one that has always interested me in particular is the admission control feature that verifies resource availability. With the introduction of vCloud Director, another level of resource constructs was introduced. Along with the provider virtual datacenter (vDC) and organization vDCs, allocation models were introduced. An allocation model defines how resources are allocated from the provider vDC. An organization vDC must be configured with one of the following three allocation models: “Pay As You Go”, “Allocation Pool” or “Reservation Pool”. Describing all three models is out of scope for this article; please visit Chris Colotti’s blog or Yellow Bricks to read more about allocation models.
As mentioned before, the distinction between the allocation models is how resources are consumed. Depending on the chosen allocation model, reservations and limits will be set at the resource pool level, the virtual machine level, or both. One of the most interesting allocation models is the Allocation Pool model, as it sets reservations on both the resource pool level and the virtual machine level simultaneously. During configuration of the Allocation Pool model, an amount of guaranteed resources can be specified. (Guaranteed is the vCloud term for a vSphere reservation.) The question I was given is: does lowering the default value of 100% guaranteed memory allow more virtual machines to run inside the organization vDC? The answer lies in the workings of vSphere admission control.
Allocation Pool model settings
By default the Allocation Pool model sets a 100% memory reservation on both the resource pool level and the virtual machine level. Lowering the default guarantee allows for opportunistic memory allocation on both the resource pool level and the virtual machine level. Creating this burstable space (resources available for opportunistic access) usually provides a higher consolidation ratio of virtual machines; however, due to the simultaneous configuration of reservations on both the resource pool and virtual machine level, this is not the case here.
Virtual machine level reservation
During a power-on operation, admission control checks whether the resource pool can satisfy the virtual machine level reservation. Because expandable reservation is disabled in this model, the resource pool is not able to allocate any additional resources from the provider vDC. Therefore the virtual machine memory reservation can only be satisfied by the resource pool level reservation of the organization vDC itself. When a virtual machine is using memory protected by a virtual machine level reservation, this memory is withdrawn from the resource pool level reservation. If the resource pool does not have enough available memory to guarantee the virtual machine reservation, the power-on operation fails. The sketch below illustrates this check; the scenario that follows walks through the numbers.
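As a minimal sketch of that admission control check (this is illustrative Python, not vSphere code; the class and method names are made up for this example, and the disabled expandable reservation is modeled simply as the absence of any fallback to the parent pool):

class OrgVdcResourcePool:
    """Toy model of an organization vDC resource pool with expandable reservation disabled."""

    def __init__(self, reservation_gb):
        self.available_reserved_gb = reservation_gb   # pool-level memory reservation

    def power_on(self, vm_reservation_gb):
        # No expandable reservation: the pool cannot borrow from the provider vDC,
        # so the VM-level reservation must fit within the pool's own reservation.
        if vm_reservation_gb > self.available_reserved_gb:
            raise RuntimeError("Admission control failed: insufficient reserved memory")
        # Memory protected by the VM-level reservation is withdrawn from the pool.
        self.available_reserved_gb -= vm_reservation_gb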
Scenario
An organization vDC is created with the Allocation Pool model; the memory allocation is set to 20GB and the memory guarantee is set to 50%. These settings result in a resource pool memory limit of 20GB and a memory reservation of 10GB. When powering on a 2GB virtual machine, 1GB of reserved resources is allocated to that virtual machine and withdrawn from the available reserved memory pool.
Admission control allows virtual machines to power on until the reserved memory pool is reduced to zero. Following the previous example, virtual machine 2 is powered on. The resource pool providing resources to the organization vDC has 9GB available in its pool of reserved memory. Admission control allows the power-on operation of the virtual machine, as this pool can provide the reserved resources specified by the virtual machine level reservation.
During each power-on operation, 1GB of reserved memory is withdrawn from the reserved memory pool available to the organization vDC, resulting in admission control allowing ten virtual machines to power on. When attempting to deploy virtual machine 11, admission control fails the power-on operation, as the organization vDC has no available reserved memory left to satisfy the virtual machine level reservation.
Note: This scenario excludes the impact of the memory overhead reservation of each virtual machine. Under normal circumstances, the number of virtual machines that could be powered on would be closer to 8 than 10, as the reserved pool available to the organization vDC is used to satisfy the memory overhead reservation of each virtual machine as well.
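To make the arithmetic explicit, a quick back-of-the-envelope calculation in Python (the numbers match the scenario above; the 0.1GB per-VM overhead value mentioned in the comment is purely illustrative, as actual overhead reservations depend on the virtual machine configuration):

pool_reserved_gb = 20 * 0.50      # 20GB allocation at 50% guarantee -> 10GB pool reservation
vm_reservation_gb = 2 * 0.50      # each 2GB virtual machine carries a 1GB reservation
overhead_gb = 0.0                 # set to e.g. 0.1 to approximate memory overhead reservations

powered_on = 0
while pool_reserved_gb >= vm_reservation_gb + overhead_gb:
    pool_reserved_gb -= vm_reservation_gb + overhead_gb
    powered_on += 1

print(powered_on)                 # 10 without overhead; fewer once overhead is included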
Because the guarantee setting of the Allocation Pool model configures the resource pool and virtual machine memory reservation settings simultaneously, the supply and demand of reserved memory resources scale by the same factor regardless of the configured percentage. Therefore, offering opportunistic access to resources inside the organization vDC does not increase the number of virtual machines that fit inside the organization vDC.
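Running the same calculation for different guarantee percentages shows why the percentage does not matter: both the pool reservation and the per-VM reservation scale by the same factor, so it cancels out (again with illustrative numbers):

allocation_gb = 20   # organization vDC memory allocation
vm_memory_gb = 2     # configured memory per virtual machine

for guarantee in (0.25, 0.50, 0.75, 1.00):
    pool_reservation = allocation_gb * guarantee   # resource pool level reservation
    vm_reservation = vm_memory_gb * guarantee      # virtual machine level reservation
    print(guarantee, int(pool_reservation // vm_reservation))   # always 10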
The next question then arises: why would you lower the percentage of guaranteed resources? Providing burstable space increases the number of organization vDCs that fit inside the provider vDC.
Resource pool memory reservation
Upon creation, a resource pool instantly claims and withdraws its configured reserved resources from its parent. This memory cannot be provided or distributed to other organization vDCs, regardless of the utilization of these resources.
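A hypothetical example of the effect on the provider vDC (the 100GB figure below is made up for illustration): lowering the guarantee lowers the reservation each organization vDC claims at creation, so more organization vDCs fit inside the same provider vDC.

provider_reserved_gb = 100   # reservable memory of the provider vDC (hypothetical)
org_allocation_gb = 20       # memory allocation of each organization vDC

for guarantee in (1.00, 0.50, 0.25):
    org_reservation_gb = org_allocation_gb * guarantee    # claimed from the provider vDC at creation
    print(guarantee, int(provider_reserved_gb // org_reservation_gb))   # 5, 10, 20 organization vDCs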
Although new resource constructs are introduced in a vCloud environment, consolidation ratios and resource management still leverage traditional vSphere resource management constructs and rules. Chris Colotti and I are currently working on a technical paper describing the allocation models in detail and the way they interact with vSphere resource management. We hope to see it published soon.
I/O Analyzer v1.1
I/O Analyzer v1.1 is now live on the Flings site:
http://labs.vmware.com/flings/io-analyzer
I/O Analyzer is a virtual appliance tool for measuring storage performance. This version of I/O Analyzer adds the ability to run trace replay, a function which allows a user to replay an I/O trace that was captured elsewhere (with vscsiStats) on the target test system. This version also has cool data visualization charts, both for the characteristics of an imported trace and for the performance results on the test system.
This is really cool stuff, go check it out.
Cyber Monday deal!
We have long been fascinated by the whole Black Friday and Cyber Monday craze in the USA. Unfortunately, we do not celebrate Thanksgiving in the Netherlands, and none of the shops participate in anything similar to Black Friday.
This year we thought it would be a great idea to participate in some form, and what better way than to offer our vSphere 5 Clustering Technical Deepdive e-book for a price you cannot resist? We just changed the price of the vSphere 5 Clustering Technical Deepdive to $ 4.99, and 3.99 for our European friends. Yes, that is correct… less than 5 dollars for over 350 pages of deepdive material.
What better way to recover from the madness of Black Friday than to just sit back, relax and read this amazing piece of work? This is most definitely the deal of the year for all virtualization fanatics! Keep in mind that this is a limited offer; on Tuesday the 29th the price will be back to “normal” again.
US – ebook – $ 4.99
UK – ebook – £ 3.99
DE – ebook – € 3.99
FR – ebook – € 3.99
Pick it up, tell your friends / colleagues / family about it… Here are some snippets from the Amazon reviews; with 15 extremely positive reviews, all of them 5 out of 5, you know you can’t go wrong:
“If you’re serious about VMware virtualization this book is a must have. Regardless of you responsibilities with a virtual infrastructure administrative, or from a architecture design stand point this book is for you. The level of knowledge and depth which Frank and Duncan cover in this book about the new clustering changes in vSphere 5 is priceless. The design tips and illustrations through the book are truly invaluable. There is no other book that gets into the core of all the different vSphere 5 cluster technologies like this one, ”
“Whether you are longing to know about the transition from AAM to FDM, best practices for DRS and DPM, or are just curious to know what those acronyms are this is a great book! The technical detail, practical advice, and comparative analysis throughout make this book one of the most thorough yet concise technical books available.”
“The book is clearly written, a special emphasis has been made on making it understandable even for professionals like me who use vSphere daily yet do not manage huge production environments. The book goes to great lengths to explain all possible scenarios and I found answers to all my questions. Not only sections cover HOW the technology works, but the authors go as far as explaining the way the algorithms are working, which will satisfy the curiosity of everyone.”
“The complete explanations provide the reader all of the information needed to make informed decisions about their environment with excellent diagrams to provide strong visual reinforcements.”
Please remember that we are offering the book for the prices listed above; depending on your location, Amazon might charge an additional cost!
New job role
For the last two years I have enjoyed working as an architect within the PSO organization of VMware, designing and reviewing some of the most interesting virtual infrastructures in Europe. However, today I signed my new contract, accepting a position within the Technical Marketing team.
Starting in December, I will focus on resource management and disaster avoidance technologies. My new role allows me to collaborate with the product managers and the R&D organization on products such as DRS, Storage DRS, vMotion, Storage vMotion and FT. My main tasks will be developing best practices, white papers, documentation and technical presentations, and educating the field organizations and, of course, our customers.
Although I enjoyed working within the PSO organization, I can’t wait to get started. Thanks to all the people who made my move possible and offered me such an opportunity!
FDM in mixed ESX and vSphere clusters
Over the last couple of weeks I’ve been receiving questions about the vSphere HA FDM agent in a mixed cluster. When upgrading vCenter to 5.0, each HA cluster will be upgraded to the FDM agent: a new FDM agent will be pushed to each ESX server. The new HA version supports ESX(i) 3.5 through ESXi 5.0 hosts. Mixed clusters are supported, so not all hosts have to be upgraded immediately to take advantage of the new FDM features. Although mixed environments are supported, we do recommend keeping the time you run different versions in a cluster to a minimum.
The FDM agent will be pushed to each host, even if the cluster contains identically configured hosts; for example, a cluster containing only vSphere 4.1 Update 1 hosts will still be upgraded to the new HA version. The only time vCenter will not push the new FDM agent to a host is if the host in question is a 3.5 host without the required patch.
When using clusters containing 3.5 hosts, it is recommended to apply patch ESX350-201012401-SG (ESX 3.5) or ESXe350-201012401-I-BG (ESXi 3.5) first, before upgrading vCenter to vCenter 5.0. If you still get the following error message:
Host ‘
Visit VMware Knowledge Base article 2001833.