frankdenneman.nl

DRS 4.1 Adaptive MaxMovesPerHost

August 27, 2010 by frankdenneman

Another reason to upgrade to vSphere 4.1 is the adaptive MaxMovesPerHost behavior of DRS. The MaxMovesPerHost setting determines the maximum number of migrations per host that DRS recommends for load balancing. DRS evaluates the cluster and recommends migrations; by default this evaluation happens every 5 minutes. DRS limits how many migrations it recommends per interval per ESX host, because there is no advantage in recommending more migrations than can be completed by the next re-evaluation, by which time demand could have changed anyway.
Be aware that there is no limit on the number of moves per host when a host enters maintenance or standby mode; the limit only applies to load-balancing migrations. The limit can (but usually shouldn't) be changed by setting the DRS advanced option "MaxMovesPerHost". The default value is 8 and it is set at the cluster level. Remember, although MaxMovesPerHost is a cluster setting, it caps the number of migrations from a single host on each DRS invocation. This means you can still see 30 or 40 vMotion operations in the cluster during a single DRS invocation.
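As an aside, such a cluster-level DRS advanced option can also be set programmatically. Below is a minimal pyVmomi sketch, assuming a modern pyVmomi environment; the vCenter address, credentials and cluster name are placeholders, and as noted above the default of 8 should rarely be changed:

```python
# Hypothetical sketch: set the DRS advanced option "MaxMovesPerHost" on a cluster.
# Connection details and the cluster name are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret")
content = si.RetrieveContent()

# Walk the inventory for the cluster object (name is a placeholder).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")
view.Destroy()

# Build a reconfiguration spec that only touches the DRS advanced options.
spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo()
spec.drsConfig.option = [vim.option.OptionValue(key="MaxMovesPerHost", value="8")]

# modify=True merges this spec with the existing cluster configuration.
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```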
In ESX/ESXi 4.1, the limit on moves per host is dynamic, based on how many moves DRS estimates can be completed within one evaluation interval. DRS adapts to the frequency at which it is invoked (pollPeriodSec, default 300 seconds) and to the average migration time observed during previous migrations. In addition, DRS takes into account the new maximum number of concurrent vMotion operations per host, which depends on the network speed (1GbE: 4 concurrent vMotions, 10GbE: 8 concurrent vMotions).
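VMware has not published the exact algorithm, but as a back-of-envelope illustration of the idea, the dynamic limit could be estimated along these lines; the formula and names below are my assumption, not VMware's implementation:

```python
# Illustrative only: my assumption of how an adaptive move limit could be
# derived, not VMware's actual DRS implementation.

def estimate_moves_per_host(poll_period_sec=300,
                            avg_migration_sec=60,
                            concurrent_vmotions=4):
    """Estimate how many migrations fit in one DRS evaluation interval.

    poll_period_sec:     DRS invocation frequency (pollPeriodSec, default 300s)
    avg_migration_sec:   average observed duration of previous vMotions
    concurrent_vmotions: per-host concurrency limit (4 on 1GbE, 8 on 10GbE)
    """
    # Sequential migration "slots" in one interval, times how many vMotions
    # can run in parallel per host.
    slots = poll_period_sec // avg_migration_sec
    return slots * concurrent_vmotions

# A host on 10GbE with 30-second average migrations:
print(estimate_moves_per_host(avg_migration_sec=30, concurrent_vmotions=8))  # 80
```

With 30-second migrations on 10GbE this estimate yields 80 moves per interval, far above the old static default of 8, which illustrates why a static limit could leave a cluster unbalanced for several passes.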
Due to the adaptive nature of the algorithm, the name of the setting is quite misleading, as it is no longer a hard maximum. The "MaxMovesPerHost" parameter still exists, but DRS may exceed its value. By leveraging the increased number of concurrent vMotion operations per host and the observed duration of previous migrations, DRS is able to rebalance the cluster in fewer passes. With fewer passes, virtual machines receive their entitled resources much sooner, which should positively affect virtual machine performance.

Filed Under: DRS Tagged With: DRS, MaxMovesPerHost, VMware

vSphere 4.1 – HA and DRS deepdive book

August 25, 2010 by frankdenneman

This is a complete repost of the article written by Duncan. As all publications about this book must be checked by VMware Legal, he wrote one article and speaks for both of us.
URL: http://www.yellow-bricks.com/2010/08/24/soon-in-a-bookstore-near-you-ha-and-drs-deepdive
Over the last couple of months Frank Denneman and I have been working really hard on a secret project. Although we have spoken about it a couple of times on Twitter, the topic was never revealed. Months ago I was thinking about what a good topic would be for my next book. As I had already written a lot of articles on HA, it made sense to combine these and do a deepdive on HA. However, a VMware cluster is not just HA; when you configure a cluster, something else is usually enabled as well, and that is DRS. As Frank is the Subject Matter Expert on Resource Management / DRS, it made sense to ask Frank if he was up for it or not… Needless to say, Frank was excited about this opportunity, and that was when our new project was born: VMware vSphere 4.1 – HA and DRS deepdive.
As both Frank and I are VMware employees we contacted our management to see what the options were for releasing this information to market. We are very excited that we have been given the opportunity to be the first official publication as part of a brand new VMware initiative, codenamed Rome. The idea behind Rome along with pertinent details will be announced later this year.
Our book is currently going through the final review/editing stages. For those wondering what to expect, a sample chapter can be found here. The primary audience for the book is anyone interested in high availability and clustering. No prerequisite knowledge is needed to read the book. It will consist of roughly 220 pages with all the detail you want on HA and DRS. It will not be a “how to” guide; instead it will explain the concepts and mechanisms behind HA and DRS, like Primary Nodes, Admission Control Policies, Host Affinity Rules and Resource Pools. On top of that, we will include basic design principles to support the decisions that will need to be made when configuring HA and DRS.
I guess it is unnecessary to say that both Frank and I are very excited about the book. We hope that you will enjoy reading it as much as we did writing it. Stay tuned for more info, the official book title, and the URL to order the book.
Frank and Duncan

Filed Under: DRS

VCDX tip: The application form

August 23, 2010 by frankdenneman

Last week I reviewed some recently submitted designs, and it appears that the requirements stated in the application form are too ambiguous. During this year I have seen many application forms, and the same errors are made by many candidates. Let's go over the sections which contain the most errors and try to remove any doubt for future candidates.
The VMware VCDX Handbook and application form are subject to change, so this article is based on version 1.0.5. The application form is available to candidates enrolled in the VCDX program.
Section 4 Project References
What deliverables were provided? (This should represent a comprehensive design package and include, at a minimum, the design, blueprints, test plan, assembly and configuration guide, and operations guide.)
OK, so this requirement is not clearly understood by some. To meet this requirement you MUST submit at least:
1. the VMware VI 3.5 or vSphere design document;
2. blueprints (Visio drawings of the physical and logical layout);
3. a documented test plan;
4. an assembly and configuration guide;
5. an operations guide.
This means you are required to submit all five listed documents; otherwise your application is rejected (bad) or returned for rework (still bad, but it doesn't cost you 300 bucks and you might have a chance to defend during the upcoming defense panels).
Section 5 Design Development Activities
This section requires you to list five requirements, assumptions and constraints that had to be followed within this design.
This means you must submit at least five requirements, five assumptions and five constraints you encountered when working on the design. I have seen some application forms with requirements such as enough power, enough floor space and enough cables, which are all genuine requirements if you are a project manager. We are requesting a list of requirements, assumptions and constraints which you, as a virtual infrastructure architect, had to deal with. The submitted design needs to align with and address the requirements and constraints listed in the application form.
Design Deliverable Documentation:
A small error made by many; no big deal if you miss this, but it makes our lives much easier if you do it correctly. This section requires you to list the page numbers where the diagrams can be found, not how many pages the document has.
Design Decisions
In this section you must provide four decision criteria for each of the decision areas; if you leave one field empty, the application will be rejected.
It is really simple: your application form is NOT complete when a field is empty, and incomplete forms get rejected.
Application form does not equal design document
The application form is not a substitute for the design document. It is a part of the VCDX certification program, not a part of the VMware virtual infrastructure design; the two are not interchangeable. Everything stated in the application form must also be included in the design document or one of the other documents. Just remember you are submitting a design you have delivered to a real or imaginary customer! Ask yourself: have you ever submitted a VCDX application form to your customer during a design project?

Filed Under: VCDX Tagged With: Application form, VCDX

DRS-FT integration

July 22, 2010 by frankdenneman

Another new feature of vSphere 4.1 is the DRS-Fault Tolerance integration. vSphere 4.1 allows DRS not only to perform initial placement of Fault Tolerance (FT) virtual machines, but also to migrate the primary and secondary virtual machines during DRS load-balancing operations. In vSphere 4.0, DRS is disabled on the FT primary and secondary virtual machines. When FT is enabled on a virtual machine in 4.0, the existing virtual machine becomes the primary virtual machine and is powered on on its registered host; the newly spawned virtual machine, called the secondary virtual machine, is automatically placed on another host. DRS refrains from generating load-balancing recommendations for both virtual machines.
The new DRS integration removes both the initial-placement and the load-balancing limitation. DRS is able to select the most suitable host for initial placement and to generate migration recommendations for the FT virtual machines based on the current workload inside the cluster. This results in a better balanced cluster, which likely has a positive effect on the performance of the FT virtual machines. In vSphere 4.0, an anti-affinity rule prohibited the FT primary and secondary virtual machines from running on the same ESX host; vSphere 4.1 offers the possibility to create a VM-Host affinity rule ensuring that the FT primary and secondary virtual machines do not run on ESX hosts in the same blade chassis, if the design requires this. For more information about VM-Host affinity rules, please visit this article.
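To make the blade-chassis example concrete, here is a minimal pyVmomi sketch of creating such a VM-Host affinity rule; the group names and the vm/host variables are placeholder assumptions, not part of the original article:

```python
# Hypothetical sketch: pin the FT primary to the hosts of one blade chassis via
# a VM group, a host group and a mandatory VM-Host rule. A matching rule would
# pin the secondary VMs to the other chassis. Names and objects are placeholders.
from pyVmomi import vim

def build_vm_host_rule_spec(vm, chassis_a_hosts):
    """Return a ClusterConfigSpecEx that adds a VM group, a host group and a
    mandatory 'must run on' VM-Host rule binding them together."""
    vm_group = vim.cluster.VmGroup(name="FT-Primary-VMs", vm=[vm])
    host_group = vim.cluster.HostGroup(name="Chassis-A-Hosts", host=chassis_a_hosts)

    rule = vim.cluster.VmHostRuleInfo(
        name="FT-Primary-on-Chassis-A",
        enabled=True,
        mandatory=True,                      # "must run on" (vs. preferential)
        vmGroupName="FT-Primary-VMs",
        affineHostGroupName="Chassis-A-Hosts")

    spec = vim.cluster.ConfigSpecEx()
    spec.groupSpec = [
        vim.cluster.GroupSpec(operation="add", info=vm_group),
        vim.cluster.GroupSpec(operation="add", info=host_group),
    ]
    spec.rulesSpec = [vim.cluster.RuleSpec(operation="add", info=rule)]
    return spec

# cluster.ReconfigureComputeResource_Task(build_vm_host_rule_spec(vm, hosts), modify=True)
```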
Not only does the DRS-FT integration have a positive impact on the performance of the FT-enabled virtual machines, and arguably of all other VMs in the cluster, it also reduces the impact of FT-enabled virtual machines on the virtual infrastructure. For example, DPM is now able to move the FT virtual machines to other hosts if DPM decides to place the current ESX host in standby mode; in vSphere 4.0, DPM needed to be disabled on at least two ESX hosts because of the DRS-disable limitation which I mentioned in this article.
Because DRS is able to migrate the FT-enabled virtual machines, DRS can evacuate all the virtual machines automatically when the ESX host is placed into maintenance mode. The administrator does not need to manually select an appropriate ESX host and migrate the virtual machines to it; DRS automatically selects a suitable host to run the FT-enabled virtual machines. This reduces the need for manual operations and for creating very “exciting” operational procedures on how to deal with FT-enabled virtual machines during the maintenance window.
DRS-FT integration requires EVC to be enabled on the cluster. Many companies do not enable EVC on their ESX clusters, based either on FUD about performance loss or on the argument that they do not intend to expand their clusters with new types of hardware, keeping the clusters homogeneous. The advantages and improvements DRS-FT integration offers, in both performance and reduced complexity of cluster design and operational procedures, shed some new light on the discussion about enabling EVC on a homogeneous cluster. If EVC is not enabled, vCenter reverts to the vSphere 4.0 behavior and disables DRS on the FT virtual machines.
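Checking the prerequisite is straightforward; a minimal pyVmomi sketch, assuming the cluster object was looked up as in the earlier example:

```python
# Minimal sketch: check whether EVC is enabled before relying on DRS-FT integration.
# Assumes `cluster` is a vim.ClusterComputeResource obtained as shown earlier.

evc_mode = cluster.summary.currentEVCModeKey  # None/empty when EVC is disabled
if evc_mode:
    print(f"EVC enabled, baseline: {evc_mode} - DRS-FT integration available")
else:
    print("EVC disabled - DRS falls back to vSphere 4.0 behavior for FT VMs")
```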

Filed Under: DRS Tagged With: DRS, FT, integration, VMware

Disable DRS and VM-Host rules

July 22, 2010 by frankdenneman

vSphere 4.1 introduces DRS VM-Host affinity rules and offers two types of rules: mandatory (must run on / must not run on) and preferential (should run on / should not run on). When creating mandatory rules, all ESX hosts not contained in the specified ESX host DRS group are marked as “incompatible” hosts, and DRS/vMotion tasks are rejected if an incompatible ESX host is selected.
A colleague of mine ran into the problem that mandatory VM-Host affinity rules remain active after disabling DRS; the product team explained why:
By design, mandatory rules are considered very important, and the intended use case, licensing compliance, is believed to be so important that VMware decided to apply these restrictions to non-DRS operations in the cluster as well.
If DRS is disabled while mandatory VM-Host rules still exist, the mandatory rules remain in effect and the cluster continues to track, report and alert on them. If a vMotion would violate a mandatory VM-Host affinity rule, the cluster still rejects the vMotion, even after DRS is disabled.
Mandatory rules can only be disabled if the administrator explicitly does so. If the administrator intends to disable DRS, remove the mandatory rules first, before disabling DRS.
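That order of operations could look like the following pyVmomi sketch; the cluster object is assumed as before, and removing rules by key in this way is my assumption of a reasonable approach, not an official procedure:

```python
# Sketch: remove mandatory VM-Host rules first, then disable DRS, as two
# separate reconfigurations. Assumes `cluster` is a vim.ClusterComputeResource
# obtained as shown earlier.
from pyVim.task import WaitForTask
from pyVmomi import vim

# Step 1: remove every mandatory VM-Host rule.
rule_spec = vim.cluster.ConfigSpecEx()
rule_spec.rulesSpec = [
    vim.cluster.RuleSpec(operation="remove", removeKey=rule.key)
    for rule in cluster.configurationEx.rule
    if isinstance(rule, vim.cluster.VmHostRuleInfo) and rule.mandatory
]
WaitForTask(cluster.ReconfigureComputeResource_Task(rule_spec, modify=True))

# Step 2: only now disable DRS.
drs_spec = vim.cluster.ConfigSpecEx()
drs_spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=False)
WaitForTask(cluster.ReconfigureComputeResource_Task(drs_spec, modify=True))
```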

Filed Under: DRS Tagged With: Disable DRS, VM-Host affinity rule, VMware
