frankdenneman.nl
Distribution of resources based on shares in a Resource pool environment

February 28, 2013 by frankdenneman

Unfortunately, resource pools seem to have a bad rep; pair them with the word shares and we might as well call down death and destruction on our virtual infrastructure while we're at it. In reality, shares and resource pools are an excellent way of maintaining a free flow of resources to the virtual machines that require them. Some articles, and the examples I use in the book, are meant to illustrate the worst-case scenario, but unfortunately those examples are perceived to be the default mode of operation. Let me use an example:
[Figure: logical diagram of the cluster configuration]
In a cluster, two resource pools exist: resource pool Gold is used for production and is configured with a high share level; resource pool Bronze is used for development and test and is configured with a low share level. That means the share ratio is 4:1. This environment, however, contains an 8:1 ratio when it comes to virtual machines: the Gold resource pool contains 320 virtual machines and the Bronze resource pool contains 40 virtual machines. The cluster contains 200 GB of memory and 200 GHz of CPU, which means that each virtual machine in the Gold resource pool has access to 0.5 GHz and 0.5 GB, right? Well yes, BUT… (take a deep breath because this will be one long sentence)… only in the scenario where all the virtual machines in the environment are 100% utilized (CPU and memory), where the ESXi hosts can provide enough network bandwidth and storage bandwidth to back the activity of the virtual machines, where no other operations are active in the environment, and where all virtual machines are configured identically in size and operating system. In all other scenarios, a more dynamic distribution of resources takes place.
The distribution process
Now let's deconstruct the distribution process. First of all, let's refresh some basic resource management behavior and determine the distinction between shares and reservations. A share is a relative weight, identifying the priority of the virtual machine during contention. It is only relative to its peers and only relative to other active shares. Using the previous scenario, this means that the resource pool shares compete against each other and that the virtual machine shares inside a single resource pool compete against each other. It's important to note that only active shares are used when determining distribution. This prevents resource hoarding based on shares: if you do not exercise your shares, you lose the right to compete in the bidding for resources.
Reservations are the complete opposite: the resource is protected by a reservation the moment you use it. Basically, the virtual machine "owns" that resource and cannot be pressured to relinquish it. Reservations can therefore be seen as the complete opposite of shares: a basic mechanism to hoard resources.
Back to the scenario, what happens in most environments?
[Figure: logical diagram of the cluster configuration]
First of all, demand is driven from the bottom up: virtual machines ask their parent whether they can have the resources they demand, and the resource pool in turn asks the cluster for resources.
[Figure: demand flows upward from virtual machines to resource pools to the cluster]
The distribution goes in the opposite direction, from top to bottom, and that's where activity and shares come into play. If both resource pools ask for more resources than the cluster can supply, the cluster needs to decide which resource pool gets the resources. As resource pool (RP) Gold contains a lot more virtual machines, it's safe to assume that RP Gold demands more resources than RP Bronze. The total demand of the virtual machines in RP Gold is 180 GB, while the virtual machines in RP Bronze demand a total of 25 GB. In total the two RPs demand 205 GB, while the cluster can only provide 200 GB. Notice that I split the demand request into two levels: VMs to RP, and RP to cluster.
The cluster takes multiple passes to distribute the resources. In the first pass the resources are distributed according to the relative share values, in this case 4:1. That means RP Gold is entitled to 160 GB of memory (4/5 of 200) and RP Bronze to 40 GB (1/5 of 200).
[Figure: distribution pass 1]
While RP Bronze is awarded 40 GB, it only requests 25 GB, so the excess 15 GB of memory is returned to the cluster. (Remember: if you don't use it, you lose it.)
[Figure: return of excess resources to the cluster]
As the cluster now has a "spare" 15 GB to distribute, it executes a second distribution pass, and since there are no other resource consumers in the cluster it awards these 15 GB of memory to the outstanding claim of RP Gold.
[Figure: distribution pass 2]
This leads to a distribution of 175 GB of memory to resource pool Gold and 25 GB to resource pool Bronze. Please note that in this scenario I broke the sequence down into multiple passes; in reality these passes are contained within a single (extremely fast) operation. The moment resource demand changes, a new distribution of resources occurs, allowing the cluster resources to satisfy demand in the most dynamic way.
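To make the mechanics concrete, here is a minimal Python sketch of the distribution logic described above. This is my own illustration, not actual DRS code: capacity is handed out per pass according to the share ratio of the pools that are still demanding resources, any allocation a pool does not demand flows back, and the passes repeat until demand is satisfied or the capacity runs out.

```python
def distribute(capacity, pools):
    """Distribute cluster capacity over resource pools by share ratio,
    handing back any allocation a pool does not demand.

    pools: dict of name -> {"shares": int, "demand": float}
    Returns dict of name -> granted capacity.
    """
    granted = {name: 0.0 for name in pools}
    remaining = capacity
    # Pools still competing for resources (demand not yet satisfied).
    active = {name for name, p in pools.items() if p["demand"] > 0}

    while remaining > 1e-9 and active:
        total_shares = sum(pools[n]["shares"] for n in active)
        if total_shares == 0:
            break
        next_active = set()
        for name in active:
            # Entitlement for this pass, proportional to active shares.
            entitlement = remaining * pools[name]["shares"] / total_shares
            want = pools[name]["demand"] - granted[name]
            granted[name] += min(entitlement, want)
            if granted[name] < pools[name]["demand"] - 1e-9:
                next_active.add(name)  # still hungry, competes in the next pass
        remaining = capacity - sum(granted.values())
        active = next_active

    return granted

# The scenario from this article: 200 GB of cluster memory, 4:1 share ratio.
pools = {
    "Gold":   {"shares": 4, "demand": 180.0},
    "Bronze": {"shares": 1, "demand": 25.0},
}
print(distribute(200.0, pools))  # {'Gold': 175.0, 'Bronze': 25.0}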
The same sequence takes place inside the resource pool itself: virtual machines receive their resources based on their activity and their share value, thereby distributing the resources "owned" by the resource pool to the most important and active virtual machines within the pool.
If no custom share values are configured on the virtual machine itself, the virtual machine's CPU and memory configuration, together with the configured share level, determine the number of shares the virtual machine possesses. For example, a virtual machine configured with a normal share level, 2 vCPUs and 2 GB of memory possesses 2000 shares of CPU and 20480 shares of memory. For more information about share calculation, please consult the VMware vSphere 5.1 Resource Management Guide, table 2-1 on page 12. (Share values have not changed since their introduction, so this applies to ESX and all vSphere versions.)
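The calculation itself is a simple per-unit multiplication. The snippet below is my own sketch of it, using the default per-unit values from the Resource Management Guide (500/1000/2000 CPU shares per vCPU and 5/10/20 memory shares per MB for low/normal/high); it reproduces the example above.

```python
# Default per-unit share values (low / normal / high), per the
# vSphere Resource Management Guide referenced above.
CPU_SHARES_PER_VCPU = {"low": 500, "normal": 1000, "high": 2000}
MEM_SHARES_PER_MB   = {"low": 5,   "normal": 10,   "high": 20}

def vm_shares(vcpus, memory_mb, level="normal"):
    """Return (cpu_shares, memory_shares) for a VM at the given share level."""
    return (vcpus * CPU_SHARES_PER_VCPU[level],
            memory_mb * MEM_SHARES_PER_MB[level])

print(vm_shares(2, 2048))  # (2000, 20480) -- the 2 vCPU / 2 GB example above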
Key takeaway
I hope that this scenario makes it clear that shares do not hoard resources. The most important thing to understand is that it all comes down to activity. Supply follows demand: whenever demand changes, a new distribution of resources is executed. And although the number of virtual machines might not match the share ratio of the resource pools, it's the activity that drives the dynamic distribution.
Mixing multiple resource allocation settings
In theory an unequal distribution of resources is possible; in reality, the presence of more virtual machines equals more demand. Architecting an environment can be done in many ways, and a popular method is to design for the worst-case scenario. Great designs usually do not rely on a single element, and therefore a configuration that uses multiple resource allocation settings (reservations, shares and limits) might provide the required level of performance throughout the cluster.
If you are using a cluster design as described in the scenario and you want to ensure that load and smoke testing do not interfere with the performance levels of the virtual machines in RP Gold, then a mix of resource pool reservations and shares might be a solution. Determine the amount of resources that needs to be permanently available to your production environment and configure a reservation on RP Gold, thereby creating a pool of guaranteed resources and a pool for burstability, and allowing the remaining resources to be allocated by both resource pools on a dynamic and opportunistic basis. You can restrict the use of physical resources by RP Bronze even further by setting a limit on the resource pool.
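If you want to automate such a configuration, a rough pyVmomi sketch could look like the following. It assumes you already have a connection and have looked up the resource pool objects (rp_gold, rp_bronze are placeholders); the values are illustrative only, and memory reservation/limit units are MB with -1 meaning unlimited. Verify against the vSphere API documentation before using anything like this.

```python
from pyVmomi import vim

def pool_spec(shares_level, mem_reservation_mb=0, mem_limit_mb=-1):
    """Build a ResourceConfigSpec with a shares level plus an optional
    memory reservation and limit (MB, -1 = unlimited). Sketch only."""
    mem = vim.ResourceAllocationInfo(
        reservation=mem_reservation_mb,
        expandableReservation=False,
        limit=mem_limit_mb,
        shares=vim.SharesInfo(level=shares_level, shares=0))  # shares value ignored unless level is custom
    cpu = vim.ResourceAllocationInfo(
        reservation=0, expandableReservation=True, limit=-1,
        shares=vim.SharesInfo(level=shares_level, shares=0))
    return vim.ResourceConfigSpec(cpuAllocation=cpu, memoryAllocation=mem)

# rp_gold and rp_bronze are assumed to be vim.ResourcePool objects you already
# retrieved through an existing pyVmomi connection (lookup omitted here):
# rp_gold.UpdateConfig(config=pool_spec(vim.SharesInfo.Level.high,
#                                       mem_reservation_mb=100 * 1024))  # guarantee 100 GB
# rp_bronze.UpdateConfig(config=pool_spec(vim.SharesInfo.Level.low,
#                                         mem_limit_mb=40 * 1024))       # cap Bronze at 40 GB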
Longing for SDDC? Start with resource pools!
It's too bad resource pools got a bad rep, and maybe I have been a part of that by only describing worst-case scenarios. Once you understand resource pools, you recognize that they are a crucial element of the Software-Defined Datacenter. By using the correct mix of resource allocation settings you can provide an abstraction layer that is able to isolate resources for specific workloads or customers. Resources can be flexibly added, removed, or reorganized in resource pools as business needs and priorities change. All of this is available to you without tinkering with low-level settings on virtual machines or using PowerCLI scripts to adjust the shares on resource pools.

Filed Under: DRS

Do you use vApps?

February 26, 2013 by frankdenneman

We’re interested in learning more about how you use vApps for workload provisioning today and how you envision it evolving in the future.
If you have a couple of spare minutes, please fill out these 15 questions: http://www.surveymethods.com/EndUser.aspx?FFDBB7AEFDB5AAAAFB
Thanks!

Filed Under: DRS

Reserve all guest memory (all locked)

February 21, 2013 by frankdenneman

Some applications do not perform well when memory is reclaimed from the virtual machine. Most users set a virtual machine memory reservation to prevent memory reclamation and to ensure stable performance levels.
Memory reservation settings are static, meaning that when you change the memory configuration of the virtual machine itself, the memory reservation remains the same. If you want to keep the reservation equal to the virtual machine memory configuration, the UI (in both the vSphere client and the web client) offers the setting "Reserve all guest memory (all locked)".
This setting is linked to the virtual machine memory configuration: the memory reservation is readjusted immediately when the memory configuration changes. Increase the memory size and the memory reservation is automatically increased as well; reduce the memory size of a virtual machine and the reservation is immediately reduced.
This behavior is extremely useful when using the vSphere client as a management tool. Within the vSphere client the memory configuration and the memory reservation settings do not share the same screen, so while changing the memory configuration one can easily forget to adjust the memory reservation.
[Figure: virtual machine memory configuration in the vSphere client]
[Figure: "Reserve all guest memory (all locked)" setting in the vSphere client]
The web client has been redesigned and shows the memory configuration and reservation in a single screen. Yet having a setting that automates and controls the alignment of memory configuration and reservation reduces the chance of human error.
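The same setting can also be toggled programmatically. Below is a minimal pyVmomi sketch, assuming you already have a connected session and a vm object (vim.VirtualMachine); it simply flips the memoryReservationLockedToMax flag that backs the UI checkbox.

```python
from pyVmomi import vim

def lock_guest_memory(vm, locked=True):
    """Enable or disable 'Reserve all guest memory (all locked)' on a VM.

    Sketch only: assumes `vm` is a vim.VirtualMachine obtained through an
    existing pyVmomi connection; returns the reconfiguration task.
    """
    spec = vim.vm.ConfigSpec()
    spec.memoryReservationLockedToMax = locked
    return vm.ReconfigVM_Task(spec=spec)

# Example (VM lookup omitted):
# task = lock_guest_memory(vm, locked=True)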
 
[Figure: "Reserve all guest memory (all locked)" setting in the web client]

Filed Under: Memory

PernixData Flash Virtualization Platform will revolutionize virtual infrastructure ecosystem design

February 20, 2013 by frankdenneman

A couple of weeks ago I was fortunate enough to attend a tech preview of PernixData Flash Virtualization Platform (FVP). Today PernixData exited stealth mode, so we can finally talk about FVP. Duncan already posted a lengthy article about PernixData and FVP and I recommend you read it.
At this moment a lot of companies are focusing on flash-based solutions. PernixData distinguishes itself in today's flash-focused world by providing a new flash-based technology that is neither a storage-array-based solution nor a server-bound service. I'll expand on what FVP does in a bit; first, let's take a look at the aforementioned solutions and their drawbacks. A storage-array-based flash solution is plagued by common physics: the distance between the workload and the fast medium (flash) generates higher latency than when the flash is placed near the workload. Placing the flash inside a server provides the best performance, but it must be shared between the hosts in the cluster to become a true enterprise solution. If the solution breaks important functions such as DRS and vMotion, the use case of this technology remains limited.
FVP solves these problems by providing a flash-based data tier that becomes a cluster-wide resource. FVP virtualizes server-side flash devices such as SSD drives or PCIe flash devices (or both) and pools these resources into a data tier that is accessible to all the hosts in the cluster. One feature that stands out is remote access. By allowing access to remote devices, FVP allows the cluster to migrate virtual machines around while still offering performance acceleration. Therefore, cluster features such as HA, DRS and Storage DRS are fully supported when using FVP.
Unlike other server-based flash solutions, FVP accelerates both read and write operations, turning the flash pool into a "data-in-motion" tier. All hot data lives in this tier, turning the compute layer into an all-IOPS-providing platform. Data that is at rest is moved to the storage array, turning that layer into the capacity platform. By keeping the I/O operations as close to the source (the virtual machines) as possible, performance is increased while the traffic load on the storage platform is reduced. By filtering out read I/Os, the traffic pattern to the array changes as well, allowing the array to focus more on the writes.
Another great option is the ability to configure multiple protection levels when using write-back: data is synchronously replicated to remote devices. During the tech preview, Satyam and Poojan provided some insights into the available protection levels, but I'm not sure if I'm allowed to share those publicly. For more information about FVP, visit Pernixdata.com.
The beauty of FVP is that it's not a virtual appliance and that it does not require any agents installed in the guest OS; FVP is embedded inside the hypervisor. For me, this is the key reason to believe that this "data-in-motion" tier is only the beginning for PernixData. By having insight into the hypervisor and understanding the data flow of the virtual machines, FVP can become a true platform that accelerates all types of IOPS. I do not see any reason why FVP would not be able to replicate/encrypt/duplicate any type of input and output of a virtual machine. 🙂
As you can see, I'm quite excited about this technology. I believe FVP is as revolutionary and disruptive as vMotion. It might not be as "flashy" (forgive the pun) as vMotion, but it sure is exciting to know that the limit on use cases is really only the limit of your imagination. I truly believe this technology will revolutionize virtual infrastructure ecosystem design.

Filed Under: Miscellaneous

Voting for the 2013 top virtualization blogs – A year in review

February 20, 2013 by frankdenneman

When Eric Siebert opens up the voting for the top VMware & virtualization blogs, you know another (blogging) year has passed. First of all I want to thank Eric for organizing this, year in, year out. I know he spends an awful lot of time on it. Thanks Eric!
It's amazing to see that there are more than 200 blogs dedicated to virtualization and that new blogs appear each month. Unfortunately I don't have the time to read them all, but I do want to show my appreciation for the blog sites I usually visit. Best newcomer is an easy one: Cormac Hogan. The content is absolutely great and he should be in the top 10. Then we have the usual suspects, my technical marketing colleagues and buddies: Alan Renouf, Rawlinson Rivera and William Lam. I start off the day by making coffee, checking my email and logging into yellow-bricks.com. It's the de facto standard of the virtualization blogs; Duncan's blog provides not only technical in-depth articles, but also insights into the industry. Who else? Eric Sloof of course! Always nice to find out that your white paper has been published before you get the official word through company channels. 😉 Two relatively unknown blog sites with quality content: Erik Bussink and Rickard Nobel. These guys create awesome material. One blog that I'm missing in the list is the one from Josh Odgers. Great content. Hope to be able to vote for him next year.
When reviewing content from others you end up reviewing the stuff you did yourself, and 2012 was a very busy year for me. During the year I published and co-authored a couple of white papers, such as the vSphere Metro Cluster Case Study, the Storage DRS Interoperability Guide and vCloud Director Resource Allocation Models.
I presented at a couple of VMUGs and at both VMworld San Francisco and Europe. The resource pool best practices session was voted one of the top 10 presentations of VMworld. And of course Duncan and I released the vSphere 5.1 Clustering Deepdive, also known as 50 Shades of Orange. ☺ I believe it's the best one of the series.
[Figure: cover of the vSphere 5.1 Clustering Deepdive, "50 Shades of Orange"]
In the meantime I ended up writing for the vSphere blog, appearing on a couple of podcasts and writing a little over 100 blog articles on frankdenneman.nl. I tend to focus on DRS, Storage DRS, SIOC and vMotion, but once in a while I like to write about something that gives a little peek into my life, such as the whiteboard desk or the documentaries I like to watch. It seems you like those articles as well, as they are frequently visited.
In my articles I try to give insight into the behavior of vSphere features, to help you understand their impact. Understanding the behavior allows you to match your design to the requirements and constraints of the project or virtual infrastructure you're working on. During my years in the field I was always looking for this type of information; by providing this material I hope to help out my fellow architects.
When you publish more than 100 articles you tend to like some more than others. While it's very difficult to pick individual articles, I enjoyed spending time writing series of articles on the same topic, such as Architecture and design of datastore clusters (5 posts) and Designing your (Multi-NIC) vMotion network (5 posts). But I also like these individual posts:
• vSphere 5.1 vMotion Deepdive
• A primer on Network I/O Control
• vSphere 5.1 Storage DRS load balancing and SIOC threshold enhancements
• HA admission control is not a capacity management tool
• Limiting the number of storage vMotions
I hope you can spare a couple of minutes to cast your vote and show your appreciation for the effort these bloggers put into their work. Instead of picking the customary names, please look back and review last year: think about the cool articles you read that helped you or sparked your interest to dive into the technology yourself. Thanks!
I can’t wait to watch the Top 25 countdown show Eric, John and Simon did in the previous years.

Filed Under: Miscellaneous

