
VMWARE TOOLS DISK TIMEOUT VALUE LINUX GOS

After I posted the “VMtools increases TimeOutValue” article, I received a lot of questions asking whether the VMware Tools automatically adjust the timeout value for Linux machines as well. The VMware Tools versions shipped with ESX 3.5 Update 5 and ESX 4.0 install a udev rule file on Linux operating systems with a kernel version equal to or greater than 2.6.13. This rule file changes the default timeout value of VMware virtual disks to 180 seconds, which helps the guest operating system survive a SAN failure and keeps the Linux system disk from becoming read-only. Because the rule depends on udev functionality introduced in the 2.6.13 kernel, the SCSI timeout value in older Linux kernels is not touched by the installation of VMware Tools and the default value remains active. The two major Linux kernel versions each have a different default timeout value:
• Linux 2.4 - 60 seconds
• Linux 2.6 - 30 seconds
You can set the timeout value manually via /sys/block/disk/device/timeout. The problem is the distinction VMware Tools makes between certain Linux kernels; if you are not aware of this caveat you might end up with a Linux environment that is not configured consistently, which can lead to different behaviour during a SAN outage. Standardization is key when managing virtual infrastructure environments, and a uniform environment eases troubleshooting. A while ago Jason wrote an excellent article about the values and the benefit of increasing the guest OS timeout.
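For guests where VMware Tools does not adjust the value (for example, kernels older than 2.6.13), the same effect can be applied by hand. The following is a minimal Python sketch of what the udev rule effectively does, not the actual rule file VMware Tools installs; the device matching is simplified to all sd* disks.

```python
# Minimal sketch: set the SCSI command timeout to 180 seconds for all
# /sys/block devices that expose a device/timeout attribute.
# Illustrative only -- the real udev rule shipped with VMware Tools
# additionally matches on the VMware vendor/model strings.
import glob

TIMEOUT_SECONDS = "180"

for path in glob.glob("/sys/block/sd*/device/timeout"):
    try:
        with open(path, "w") as f:
            f.write(TIMEOUT_SECONDS)
        print("set %s to %s seconds" % (path, TIMEOUT_SECONDS))
    except IOError as err:
        # Requires root privileges; sysfs attributes are not always writable.
        print("could not update %s: %s" % (path, err))
```

Keep in mind that values written to sysfs do not persist across reboots or device rescans, which is exactly why a udev rule is the preferred mechanism.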

ESX4 ALUA, TPGS AND HP CA

In my blog post “HP CA and the use of LUN balancing scripts” I tried to cover the possible impact of using HP Continuous Access EVA on the LUN path load-balancing scheme in ESX 3.x. I received a lot of questions about this, so I want to address some issues again and try to clarify them. Let’s begin with a recap of the HP CA article. The impact of CA on the load-balancing scheme is due to the fact that an EVA is an asymmetric active-active array that uses the Asymmetric Logical Unit Access (ALUA) protocol. ESX3 is not ALUA aware and does not recognize the different access characteristics of the array’s target ports. VMware addressed this shortcoming and added ALUA support in the new storage stack of ESX4. ALUA support is a great feature of the new storage architecture; it removes a lot of the extra manual steps needed to create a properly load-balanced environment. But how exactly does ALUA identify which path is optimized, and will HP Continuous Access still have an impact on ESX4 environments as well?

IDENTIFY STORAGE PERFORMANCE ISSUES

VMware has recently updated the KB article “Using esxtop to identify storage performance issues” (KB1008205). The KB article provides information about how to use esxtop to determine the latency statistics across various devices. The article contains easy-to-follow, step-by-step instructions on how to set up esxtop to monitor storage performance per HBA, LUN and virtual machine. It also lists generic acceptable values to put your measured values in perspective. It’s a great article, bookmark it for future reference. If you want to learn about the thresholds of certain metrics in esxtop, please check out the esxtop metric bible featured on Yellow-bricks.com. esxtop is a great tool to view and measure certain criteria in real time, but sometimes you want to collect metrics for later reference. If this is the case, the tool vscsiStats might be helpful. vscsiStats is a tool to profile your storage environment; it collects information such as outstanding IO, seek distance and many more. Check out Duncan’s excellent article on how to use vscsiStats. Because vscsiStats collects data in a .csv file you can create diagrams; Gabe has written an article on how to convert the vscsiStats data into Excel charts.
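If you prefer to chart the collected data without Excel, the exported .csv can also be plotted directly. The sketch below is a minimal example that assumes a simple two-column export of histogram bucket limits and counts; the file name and column layout are assumptions for illustration, not the documented vscsiStats output format.

```python
# Minimal sketch: plot a vscsiStats histogram exported to CSV.
# Assumes a simple two-column layout (bucket limit, frequency);
# adjust the parsing to the actual columns of your export.
import csv
import matplotlib.pyplot as plt

buckets, counts = [], []
with open("vscsistats_iolength.csv") as f:          # hypothetical file name
    for row in csv.reader(f):
        if len(row) < 2 or not row[1].strip().isdigit():
            continue                                 # skip headers and labels
        buckets.append(row[0])
        counts.append(int(row[1]))

plt.bar(range(len(counts)), counts)
plt.xticks(range(len(buckets)), buckets, rotation=45)
plt.xlabel("IO length bucket (bytes)")
plt.ylabel("Number of IOs")
plt.title("vscsiStats IO length histogram")
plt.tight_layout()
plt.savefig("iolength_histogram.png")
```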

VCDX TIP: VMTOOLS INCREASES TIMEOUTVALUE

This is just a small heads-up post for all the VCDX candidates. Almost every VCDX application I read mentions the need to increase the Disk TimeOutValue (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk) to 60 seconds on Windows machines. The truth is that the VMware Tools installation (ESX version 3.0.2 and up) changes this registry value automatically. You might want to check your operational procedures documentation and update it! VMware KB 1014
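To verify that the value is actually set on a given Windows guest, the registry key can be read programmatically. A minimal Python sketch using the standard winreg module (run inside the guest, Windows only):

```python
# Minimal sketch: read the Disk TimeOutValue on a Windows guest.
# winreg is part of the Python standard library on Windows.
# Raises an error if the value has not been created yet.
import winreg

key_path = r"SYSTEM\CurrentControlSet\Services\Disk"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    value, value_type = winreg.QueryValueEx(key, "TimeOutValue")

print("Disk TimeOutValue = %d seconds" % value)
if value < 60:
    print("Lower than the 60 seconds set by the VMware Tools installer")
```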

REMOVING ORPHANED NEXUS DVS

During a test of the Cisco Nexus 1000V the customer deleted the VSM first, without removing the DVS using commands from within the VSM, and ended up with an orphaned DVS. One could delete the DVS directly from the vCenter database, but there are a bunch of rows in multiple tables that need to be deleted. This is risky and may leave the database in an inconsistent state if an error is made while deleting the rows. Luckily there is a more elegant way to remove an orphaned DVS without hacking, and possibly breaking, the vCenter database. A little background first: when installing the Cisco Nexus 1000V VSM, the VSM uses an extension key for identification. During the configuration process the VSM spawns a DVS and configures it with the same extension key. Because of the matching extension keys (extension session) the VSM essentially owns the DVS, and only a VSM with the same extension key as the DVS can delete it. So to be able to delete a DVS, a VSM registered with the same extension key must exist. If you deleted the VSM and are stuck with an orphaned DVS, the first thing to do is to install and configure a new VSM. Use a different switch name than the first (deleted) VSM. The new VSM will spawn a new DVS matching the switch name configured within the VSM. The first step is to remove this newly spawned DVS, and to do this the proper way using commands from within the VSM virtual machine.

DRS RESOURCE DISTRIBUTION CHART

A customer of mine wanted more information about the new DRS Resource Distribution Chart in vCenter 4.0, so I thought: after writing the text for the customer, why not share it? The DRS Resource Distribution Chart was overhauled in vCenter 4.0 and is quite an improvement over the resource distribution chart featured in vCenter 2.5. Not only does it use a better format, the new charts also provide more in-depth information.

RESOURCE POOLS AND AVOIDING HA SLOT SIZING

Virtual machines configured with large amounts of memory (16GB+) are not uncommon these days. Most of the time these “heavy hitters” run mission-critical applications, so it’s not unusual to set memory reservations to guarantee the availability of memory resources. If such a virtual machine is placed in an HA cluster, these significant memory reservations can lead to a very conservative consolidation ratio, due to their impact on the HA slot size calculation. (For more information about slot size calculation, please review the HA deep dive page on yellow-bricks.com.) There are options to avoid the creation of large slot sizes, such as not setting reservations, disabling strict admission control, using the new vSphere admission control policy “percentage of cluster resources reserved”, or creating a custom slot size by altering the advanced setting das.vmMemoryMinMB. But what if you are still using ESX 3.5, must guarantee memory resources for that specific VM, do not want to disable strict admission control and don’t like tinkering with the custom slot size setting? Maybe the resource pool workaround can be an option.
Resource pool workaround
During a conversation with my colleague Craig Risinger, author of the very interesting article “The resource pool priority pie paradox”, we discussed the lack of relation between resource pool reservation settings and High Availability. As Craig so eloquently put it:

IMPACT OF HOST LOCAL VM SWAP ON HA AND DRS

On a regular basis I come across NFS-based environments where the decision is made to store the virtual machine swap files on local VMFS datastores. Using host-local swap can affect DRS load balancing and HA failover in certain situations. So when designing an environment using host-local swap, some areas must be focused on to guarantee HA and DRS functionality.
VM swap file
Let’s start with some basics. By default a VM swap file is created when a virtual machine starts; the formula to calculate the swap file size is: configured memory – memory reservation = swap file size. For example, a virtual machine configured with 2GB and a 1GB memory reservation will have a 1GB swap file. Reservations guarantee that the specified amount of virtual machine memory is (always) backed by ESX machine memory. Swap space must be reserved on the ESX host for the virtual machine memory that is not guaranteed to be backed by ESX machine memory. For more information on memory management of the ESX host, please read the article on the impact of memory reservation. During start-up of the virtual machine, the VMkernel will pre-allocate the swap file blocks to ensure that all pages can be swapped out safely. A VM swap file is a static file and will not grow or shrink no matter how much memory is paged. If there is not enough disk space to create the swap file, host admission control will not allow the VM to be powered on.
Note: If the local VMFS does not have enough space, the VMkernel tries to store the VM swap file in the working directory of the virtual machine. You need to ensure enough free space is available in the working directory, otherwise the VM is still not allowed to be powered on, never mind the fact that you initially didn’t want the VM swap stored on the shared storage in the first place. This rule also applies when migrating a VM configured with a host-local VM swap file, as the swap file needs to be created on the local VMFS volume of the destination host. Besides creating a new swap file, the swapped-out pages must be copied to the destination host. It’s not uncommon for a VM to have pages swapped out, even if there is no memory pressure at that moment. ESX does not proactively return swapped pages back into machine memory. Swapped pages always stay swapped; the VM needs to actively access the page in the swap file for it to be transferred back to machine memory, and this only occurs if the ESX host is not under memory pressure (more than 6% free physical memory). Copying host-local swap pages between the source and destination host is a disk-to-disk copy process, which is one of the reasons why VMotion takes longer when host-local swap is used.
Real-life scenario
A customer of mine was not aware of this behavior and had discarded multiple warnings about full local VMFS datastores on some of their ESX hosts. All the virtual machines were up and running and all seemed well. Certain ESX servers seemed to be low on resource utilization and had only a few active VMs, while other hosts were highly utilized. DRS was active on all the clusters, fully automated and with the default (3 stars) migration threshold. It looked like we had a major DRS problem.
DRS
If DRS decides to rebalance the cluster, it will migrate virtual machines to lightly utilized hosts. The VMkernel tries to create a new swap file on the destination host during the VMotion process. In my scenario the host did not have any free space left in its local VMFS datastore, so DRS could not VMotion any virtual machine to that host because of the lack of free space.
But the host CPU active and host memory active metrics were still monitored by DRS to calculate the load standard deviation used for its recommendations to balance the cluster. (More info about the DRS algorithm can be found on the DRS deepdive page.) The lack of disk space on the local VMFS datastores influenced the effectiveness of DRS and limited its options to balance the cluster.
High availability failover
The same applies when an HA isolation response occurs: when not enough space is available to create the virtual machine swap files, no virtual machines are started on the host. If a host fails, the virtual machines will only power on on hosts that have enough free space on their local VMFS datastores. It is possible that virtual machines will not power on at all if not enough free disk space is available.
Failover capacity planning
When using the host-local swap setting to store the VM swap files, the following factors must be considered (a capacity-check sketch follows the list):
• Number of ESX hosts inside the cluster.
• HA configured host failover capacity.
• Number of active virtual machines inside the cluster.
• Consolidation ratio (VMs per host).
• Average swap file size.
• Free disk space on the local VMFS datastores.
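To make that planning concrete, here is a minimal Python sketch that estimates whether the free space on a host’s local VMFS datastore can absorb the swap files of the VMs it may need to host after a failover. The per-VM swap size follows the formula above (configured memory minus reservation); all input numbers are hypothetical.

```python
# Minimal sketch: estimate local VMFS space needed for VM swap files after a failover.
# Swap file size per VM = configured memory - memory reservation (in GB).

def swap_file_size_gb(configured_gb, reservation_gb):
    return max(configured_gb - reservation_gb, 0)

# Hypothetical VMs that could land on this host after an HA failover:
# (configured memory GB, memory reservation GB)
incoming_vms = [(4, 0), (8, 2), (2, 1), (16, 8)]

required_gb = sum(swap_file_size_gb(c, r) for c, r in incoming_vms)
free_local_vmfs_gb = 20        # hypothetical free space on the local VMFS datastore

print("Swap space required: %d GB, free on local VMFS: %d GB"
      % (required_gb, free_local_vmfs_gb))
if required_gb > free_local_vmfs_gb:
    print("Not enough free space: some VMs will fail to power on after a failover")
else:
    print("Local VMFS can hold the swap files of the incoming VMs")
```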

VCDX NUMBER 029

On Monday the 8th of February I was scheduled to participate in the defend session of the VCDX panel in Las Vegas. For people not familiar with the VCDX program, the defend panel is the final part of the extensive VCDX program. My defend session was the first session of the week, so my panel members were fresh and eager to get started. Besides the three panel members, an observer and a facilitator were also present in the room. The session consisted of three parts:
• Design defend session (75 minutes)
• Design session (30 minutes)
• Troubleshooting session (15 minutes)
During the design defend session you are required to present your design. I used a twelve-slide presentation and included all blueprints\Visio drawings as an appendix. This helped me a lot; as I am not a native English speaker, using diagrams helped me to explain the layout. There is no time limit on the duration of the presentation, but it is wise to keep it as brief as possible. During the session, the panel will try to address a number of sections, and if they cannot address these sections this can impact your score. During the design and troubleshooting sessions you need to show that you are able to think on your feet. One of the goals is to understand your thought process. Thinking out loud and using the whiteboard will help you a lot. So how was my experience? After meeting my panel members I started to get really nervous, as one of the storage gurus within VMware was on my panel. The other two panel members have an extremely good track record inside the company as well, so basically I was being judged by an all-star panel. I thought my presentation went well, but a word of advice: re-read your submitted documentation on a regular basis before entering the defend panel, as the smallest details can be asked about. After completing the design defend panel, I was asked to step outside. After a short break the design session and troubleshooting scenarios were next. I did not solve the design and troubleshooting scenarios, but that is really not the goal of those sections. Thinking out loud in English can be challenging for non-native English speakers, so my advice is to practice this as much as possible. I did a test presentation for a couple of friends and discovered some areas to focus on before doing the defend part of the program. After completing my defend panel, I was scheduled to participate as an observer in the remaining defend panel sessions for the rest of the week. After multiple sessions as an observer and receiving the news that I passed the VCDX defend panel, I participated as a panel member in a defend session. Hopefully I will be on a lot more panels in the upcoming year, because sitting on the other side of the table is so much better than standing in front of it sweating like a pig. :)

SIZING VMS AND NUMA NODES

Note: This article describes NUMA scheduling on the ESX 3.5 and ESX 4.0 platforms. vSphere 4.1 introduced wide NUMA nodes; information about this can be found in my new article: ESX4.1 NUMA scheduling. With the introduction of vSphere, VM configurations with 8 CPUs and 255 GB of memory are possible. While I haven’t seen that many VMs with more than 32GB, I receive a lot of questions about 8-way virtual machines. With today’s CPU architecture, VMs with more than 4 vCPUs can experience a decrease in memory performance when run on NUMA-enabled systems. While the actual percentage of performance decrease depends on the workload, avoiding performance decrease must always be on the agenda of any administrator.
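The underlying sizing concern is simple: a VM that needs more vCPUs than a NUMA node has cores, or more memory than a node holds locally, cannot be placed entirely within one node, so part of its memory accesses become remote. A minimal Python sketch of that sizing check, using hypothetical host values, looks like this:

```python
# Minimal sketch: check whether a VM fits within a single NUMA node.
# Host values are hypothetical; on a real host they correspond to the
# number of cores and the amount of memory per NUMA node.

def fits_numa_node(vm_vcpus, vm_memory_gb, cores_per_node, memory_per_node_gb):
    """Return True if the VM can be placed entirely inside one NUMA node."""
    return vm_vcpus <= cores_per_node and vm_memory_gb <= memory_per_node_gb

# Hypothetical two-socket quad-core host with 32GB per NUMA node.
CORES_PER_NODE = 4
MEMORY_PER_NODE_GB = 32

for vcpus, mem_gb in [(2, 8), (4, 32), (8, 64)]:
    verdict = ("fits" if fits_numa_node(vcpus, mem_gb, CORES_PER_NODE, MEMORY_PER_NODE_GB)
               else "does not fit (expect remote memory access)")
    print("VM with %d vCPUs / %dGB: %s a single NUMA node" % (vcpus, mem_gb, verdict))
```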