DUTCH VBEERS

Simon Long of The SLOG is introducing vBeers to Holland. I’ve copied the text from his vBeers blog article. Every month Simon Seagrave and I try to organise a social get-together of like-minded virtualization enthusiasts, held in a pub in central London (and Amsterdam). We like to call it vBeers. Before I go on, I would just like to state that although it’s called vBeers, you do NOT have to drink beer or any other alcohol for that matter. This isn’t just an excuse to get blind drunk. We came up with the idea whilst on the Gestalt IT Tech Field Day back in April. We were chatting and we both recognised that we don’t get together enough to catch up, mostly due to busy work schedules and private lives. We felt that if we had a set date each month, the likelihood of us actually making that date would be higher than in previous attempts. So the idea of vBeers was born.

HA AND DRS TECHNICAL DEEPDIVE AVAILABLE

After spending almost a year on writing, drawing and editing, the moment Duncan and I waited for finally arrived… Our new book, VMware vSphere 4.1 HA and DRS Technical Deepdive, is available on CreateSpace and Amazon.com. Early this year Duncan approached me and asked me if I was interested in writing a book together on HA and DRS; without hesitation I accepted the honor. Before discussing the contents of the book I would like to take the opportunity to thank our technical reviewers for their time, their wisdom and their input: Anne Holler (VMware DRS Engineering), Craig Risinger (VMware PSO), Marc Sevigny (VMware HA Engineering) and Bouke Groenescheij (Jume.nl). And a very special thanks to Scott Herold for writing the foreword! But most of all I would like to thank Duncan for giving me the opportunity to work together with him on creating this book. The in-depth discussions we had are without a doubt the most difficult I have ever experienced and were very interesting, but most of all fun! Thanks!

Now let’s take a look at the book. Please note that we are still working on an electronic version of the book and we expect to finish this early 2011. This is the description of the book that is up on CreateSpace:

About the authors: Duncan Epping (VCDX 007) is a Consulting Architect working for VMware as part of the Cloud Practice. Duncan works primarily with Service Providers and large Enterprise customers. He is focused on designing Public Cloud Infrastructures and specializes in BC-DR, vCloud Director and VMware HA. Duncan is the owner of Yellow-Bricks.com, the leading VMware blog. Frank Denneman (VCDX 029) is a Consulting Architect working for VMware as part of the Professional Services Organization. Frank works primarily with large Enterprise customers and Service Providers. He specializes in Resource Management, DRS and storage. Frank is the owner of frankdenneman.nl, which has recently been voted number 6 worldwide on vsphere-land.com.

VMware vSphere 4.1 HA and DRS Technical Deepdive zooms in on two key components of every VMware-based infrastructure and is by no means a “how to” guide. It covers the basic steps needed to create a VMware HA and DRS cluster, but more importantly explains the concepts and mechanisms behind HA and DRS, which will enable you to make well-educated decisions. This book will take you into the trenches of HA and DRS and will give you the tools to understand and implement, for example, HA admission control policies, DRS resource pools and host affinity rules. On top of that, each section contains basic design principles that can be used for designing, implementing or improving VMware infrastructures. Coverage includes:

• HA node types
• HA isolation detection and response
• HA admission control
• VM Monitoring
• HA and DRS integration
• DRS imbalance algorithm
• Resource Pools
• Impact of reservations and limits
• CPU Resource Scheduling
• Memory Scheduler
• DPM

We hope you will enjoy reading it as much as we did writing it. Thanks!

Eric Sloof received a proof copy of the book and shot a video about it.

SHOULD OR MUST VM-HOST AFFINITY RULES?

VMware vSphere 4.1 introduces a new affinity rule, called “Virtual Machines to Hosts” (VM-Host), which I described in the article “VM to Host affinity rule”. A short recap: VM-Host affinity rules are available in two flavors: “Must run” rules (mandatory) and “Should run” rules (preferential). These two options raise a new question for the administrator/architect: when is a mandatory rule needed and when is it desirable to use a preferential rule? I think it all depends on the risk and limitations introduced by each rule. Let’s review the difference between the rules, the behavior of each rule and the impact they have on cluster services and maintenance mode. What is the difference between a mandatory and a preferential rule?
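As an illustration of the two flavors, here is a minimal PowerCLI sketch, assuming a PowerCLI release that ships the DRS group and VM-Host rule cmdlets (New-DrsClusterGroup, New-DrsVMHostRule); the cluster, VM and host names are hypothetical examples, not part of the article:

```powershell
# Sketch only: create a VM group, a host group and a VM-Host affinity rule (names are examples).
$cluster = Get-Cluster -Name "Cluster01"

$vmGroup   = New-DrsClusterGroup -Name "Oracle-VMs"   -Cluster $cluster -VM (Get-VM "ora01","ora02")
$hostGroup = New-DrsClusterGroup -Name "Oracle-Hosts" -Cluster $cluster -VMHost (Get-VMHost "esx01","esx02")

# Mandatory rule: the VMs MUST run on the hosts in the host group
New-DrsVMHostRule -Name "Oracle-Must" -Cluster $cluster -VMGroup $vmGroup -VMHostGroup $hostGroup -Type MustRunOn

# Preferential rule: the VMs SHOULD run on the hosts in the host group
# New-DrsVMHostRule -Name "Oracle-Should" -Cluster $cluster -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn
```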

DISALLOWING MULTIPLE VM CONSOLE SESSIONS

Currently I’m involved in a high-security virtual infrastructure design and we are required to reduce the number of entry points to the virtual infrastructure. One of the requirements is to allow only a single session to the virtual machine console. Due to the increasing awareness of and demand for security in virtual infrastructures, more organizations might want to apply this security setting.

1. Turn off the virtual machine.
2. Open the Configuration Parameters of the VM to edit the advanced configuration settings.
3. Add RemoteDisplay.maxConnections with a value of 1.
4. Power on the virtual machine.

Update: Arne Fokkema created a PowerCLI function to automate configuring this setting throughout your virtual infrastructure. You can find the PowerCLI function on ICT-freak.nl.
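For a quick idea of what this looks like when scripted, here is a minimal PowerCLI sketch using standard cmdlets (not Arne’s function); the VM name is a hypothetical example:

```powershell
# Sketch only: limit the console of a single VM to one concurrent session.
$vm = Get-VM -Name "SecureVM01"           # hypothetical VM name

Shutdown-VMGuest -VM $vm -Confirm:$false  # power off the VM first (wait until it is actually off)

# Add the advanced configuration parameter
New-AdvancedSetting -Entity $vm -Name "RemoteDisplay.maxConnections" -Value 1 -Confirm:$false

Start-VM -VM $vm                          # power the VM back on
```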

DISABLE BALLOONING?

Recently, Paul Meehan submitted this question via a comment on the “Memory reclamation, when and how” article: Hi, we are currently considering virtualising some pretty significant SQL workloads. While the VMware best practices documents for SQL server inside VMware recommend turning on ballooning, a colleague who attended a deep dive with a SQL Microsoft MVP came back and the SQL guy strongly suggested that ballooning should always be turned off for SQL workloads. We have 165 SQL instances, some of which will need 5-10000 IOPS so performance and memory management is critical. Do you guys have a view on this from experience? Thx, Paul
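For context on what “turning ballooning off” means in practice: the balloon driver can be capped per VM through the sched.mem.maxmemctl advanced parameter (a value of 0 disables it). Below is a minimal, hedged PowerCLI sketch with a hypothetical VM name; keep in mind that removing ballooning takes away the most graceful reclamation technique and can push the VMkernel toward hypervisor swapping instead.

```powershell
# Sketch only: cap/disable the balloon driver for one VM (value is in MB; 0 = disabled).
# Treat this as an exception for specific workloads, not as a default policy.
$vm = Get-VM -Name "SQL-DB01"   # hypothetical VM name
New-AdvancedSetting -Entity $vm -Name "sched.mem.maxmemctl" -Value 0 -Confirm:$false
```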

THE IMPACT OF QOS NETWORK TRAFFIC ON VM PERFORMANCE

A lot of interesting material has been written about configuring Quality of Service (QoS) on 10GbE (converged) networks in virtual infrastructures. With the release of vSphere 4.1, VMware introduced a network QoS mechanism called Network I/O Control (NetIOC). The two most popular blade systems, HP with Flex-10 technology and Cisco UCS, both offer traffic-shaping mechanisms at the hardware level. Both NetIOC and Cisco UCS approach network Quality of Service from a sharing perspective, guaranteeing a minimum amount of bandwidth, as opposed to HP Flex-10 technology, which isolates the available bandwidth and dedicates a fixed amount of bandwidth to a specified NIC. When allocating bandwidth to the various network traffic streams, most admins try to stay on the safe side and over-allocate bandwidth to virtual machine traffic. Obviously it is essential to guarantee enough bandwidth to virtual machines, but bandwidth is finite, resulting in less bandwidth available to other types of traffic such as vMotion. Unfortunately, reducing the bandwidth available for vMotion traffic can ultimately have a negative effect on the performance of the virtual machines.

MaxMovesPerHost
In vSphere 4.1 DRS uses an adaptive technique called MaxMovesPerHost. This technique allows DRS to decide the optimum number of concurrent vMotions per ESX host for load-balancing operations. DRS will adapt the maximum number of concurrent vMotions per host (up to 8) based upon the average migration time observed from previous migrations. Decreasing the bandwidth available for vMotion traffic can result in a lower number of allowed concurrent vMotions. In turn, the number of allowed concurrent vMotions affects the number of migration recommendations generated by DRS. DRS will only calculate and generate the number of migration recommendations it believes it can complete before the next DRS invocation. It limits the number of generated migration recommendations, as there is no advantage in recommending migrations that cannot be completed before the next DRS invocation. During the next re-evaluation cycle, virtual machine resource demand may have changed, rendering the previous recommendations obsolete. Limiting the amount of bandwidth available to vMotion can therefore decrease the maximum number of concurrent vMotions per host and risks leaving the cluster imbalanced for a longer period of time.

Both NetIOC and Cisco UCS Class of Service (CoS) Quality of Service can be used to guarantee a minimum amount of bandwidth available to vMotion during contention. Both techniques allow vMotion traffic to use all the available bandwidth if no contention occurs. HP uses a different approach, isolating and dedicating a specific amount of bandwidth to an adapter and thereby possibly restricting specific workloads. Brad Hedlund wrote an article explaining the fundamental differences in how bandwidth is handled between HP Flex-10 and Cisco UCS: Cisco UCS intelligent QoS vs. HP Virtual Connect rate limiting.

Recommendations for Flex-10
Due to the restrictive behavior of Flex-10, it is recommended to specifically take the adaptive nature of DRS into account and not to restrict vMotion traffic too much when shaping network bandwidth for the configured FlexNICs. It is recommended to monitor the bandwidth requirements of the virtual machines and adjust the rate limits for virtual machine traffic and vMotion traffic accordingly, reducing the possibility of delaying DRS from reaching a steady state when a significant load imbalance exists in the cluster.
Recommendations for NetIOC and UCS QoS
Fortunately, the sharing nature of NetIOC and UCS allows other network streams to allocate bandwidth during periods without bandwidth contention. Despite this “plays well with others” nature, it is recommended to assign a guaranteed minimum amount of bandwidth for vMotion traffic (NetIOC) or a custom Class of Service to the vMotion vNICs (UCS). Chances are that if virtual machines saturate the network, the virtual machines are experiencing a high workload and DRS will try to provide the resources the virtual machines are entitled to.
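As a side note, you can check whether a cluster is running with a manual MaxMovesPerHost override through its DRS advanced options. Below is a minimal PowerCLI sketch, assuming a PowerCLI build where Get-AdvancedSetting/New-AdvancedSetting expose cluster DRS options; the cluster name is a hypothetical example. In general, leave the adaptive value alone and fix the bandwidth allocation instead.

```powershell
# Sketch only: look for a MaxMovesPerHost override on the cluster (hypothetical cluster name).
$cluster = Get-Cluster -Name "Cluster01"
Get-AdvancedSetting -Entity $cluster | Where-Object { $_.Name -eq "MaxMovesPerHost" }

# Overriding the adaptive behaviour is rarely a good idea; shown commented out for completeness.
# New-AdvancedSetting -Entity $cluster -Type ClusterDRS -Name "MaxMovesPerHost" -Value 8 -Confirm:$false
```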

VSWITCH FAILBACK AND HIGH AVAILABILITY

One setting that catches most admins off-guard is the vSwitch Failback setting in combination with HA. If the management network vSwitch is configured with Active/Standby NICs and the HA isolation response is set to “Shutdown VM” or “Power off VM”, it is advised to set the vSwitch Failback mode to No. If left at the default (Yes), all the ESX hosts in the cluster or even the entire virtual infrastructure might trigger an isolation response if one of the management network physical switches is rebooted. Here’s why. Just a quick rehash:

Active/Standby
One NIC (vmnic0) is assigned as active to the management/service console portgroup, and the second NIC (vmnic1) is configured as standby. The vMotion portgroup is configured with the first NIC (vmnic0) in standby mode and the second NIC (vmnic1) as active.

(Figure: Active/Standby setup of the management network on vSwitch0)

Failback
The Failback setting determines whether the VMkernel will return the uplink (NIC) to active duty after recovery of a downed link or failed NIC. If the Failback setting is set to Yes, the NIC will return to active duty; when Failback is set to No, the failed NIC is assigned the standby role and the administrator must manually reconfigure the NIC to the active state.

Effect of the Failback Yes setting on the environment
When using the default Failback setting, unexpected behavior can occur during maintenance of a physical switch. Most switches, like those from Cisco, initialize the port after boot, so-called lights-on. The port is active but is still unable to receive or transmit data. The process from lights-on to forwarding mode can take up to 50 seconds; unfortunately ESX is not able to distinguish between lights-on status and forwarding mode, and therefore treats the link as usable and will return the NIC to active status again. High Availability will proceed to transmit heartbeats and expect to receive heartbeats. After missing heartbeats for 13 seconds, HA will try to ping its isolation address and, due to the specified isolation response, will shut down or power off the virtual machines two seconds later to allow other ESX hosts to power up the virtual machines. But because it is common – recommended even – to configure each host in the cluster in an identical manner, the active NIC used by the management network of every ESX host connects to the same physical switch. Due to this design, once the switch is rebooted, a cluster-wide isolation response occurs, resulting in a cluster-wide outage. To allow switch maintenance, it’s better to set the vSwitch Failback mode to No. Selecting this setting introduces an increase in manual operations after a failure or certain maintenance operations, but will reduce the chance of “false positives” and cluster-wide isolation responses.
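A minimal PowerCLI sketch of this recommendation, assuming the management network lives on vSwitch0; the cluster and vSwitch names are examples. It sets Failback to No on that vSwitch for every host in the cluster:

```powershell
# Sketch only: disable Failback on vSwitch0 of every host in the cluster (names are examples).
Get-Cluster -Name "Cluster01" | Get-VMHost | ForEach-Object {
    Get-VirtualSwitch -VMHost $_ -Name "vSwitch0" |
        Get-NicTeamingPolicy |
        Set-NicTeamingPolicy -FailbackEnabled $false
}
```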

BEST PRACTICES

Last week at VMworld and the VCDX defense panels I heard the term “best practices” a lot. The term best practice makes me feel happy, shudder and laugh at the same time. When it comes to applying best practices I always use the analogy of crossing the road: I was born and raised in the Netherlands, and the best practice is to look left first, then to the right and finally check left again before crossing the road. This best practice served me well and helped me avoid being hit by a car/truck/crazy people on bikes and even trams and trolleys. But I ask you: does this best practice still apply when I try to cross the street in London? Don’t get me wrong, best practices are useful and very valuable, but applying a best practice blindly, while not as lethal as in my analogy, can get you into a lot of trouble.

VMWORLD VCLOUD DIRECTOR LABS

Yesterday the VMworld Labs opened up to the public and if you want to take vCloud Director for a spin I recommend doing the following labs:

Private Cloud – Management:
Lab 13: VMware vCloud Director Install and Config
Lab 18: VMware vCloud Director Networking

Private Cloud – Security:
Lab 20: VMware vShield

It’s best to complete Lab 18 (vCloud Director Networking) before doing the VMware vShield lab (Lab 20), because the terms and knowledge gained in Lab 18 will prepare you for Lab 20. Today the VMworld 2010 speaker sessions started and I strongly recommend Duncan’s session “BC7803 - Planning and Designing an HA Cluster that Maximizes VM Uptime” and Kit Colbert’s “TA7750 - Understanding Virtualization Memory Management Concepts”. Go check them out.

NUMA, HYPERTHREADING AND NUMA.PREFERHT

I received a lot of questions about Hyperthreading and NUMA in ESX 4.1 after writing the ESX 4.1 NUMA scheduling article. A common misconception is that Hyperthreading is ignored and therefore not used on a NUMA system. This is not entirely true: due to the improved Hyperthreading code on Nehalem processors, the CPU scheduler is programmed to use the HT feature more aggressively than in previous releases of ESX. The main reason this misconception exists, I think, is the way the NUMA load balancer handles vCPU placement of vSMP virtual machines. Before continuing, let’s align our CPU element nomenclature; I’ve created a diagram showing all the elements:
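Separate from the nomenclature diagram, a quick pointer on the numa.preferHT behaviour mentioned in the title: it can be requested per virtual machine through an advanced parameter, so the NUMA scheduler counts logical (HT) processors when sizing the NUMA client. A minimal PowerCLI sketch with a hypothetical VM name:

```powershell
# Sketch only: ask the NUMA scheduler to prefer Hyperthreads for this VM, so its vCPUs
# are kept within a single NUMA node rather than being spread across nodes by core count.
$vm = Get-VM -Name "vSMP-VM01"   # hypothetical VM name
New-AdvancedSetting -Entity $vm -Name "numa.vcpu.preferHT" -Value "TRUE" -Confirm:$false
```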