DO YOU USE VAPPS?

We’re interested in learning more about how you use vApps for workload provisioning today and how you envision them evolving in the future. If you have a couple of spare minutes, please fill out these 15 questions: http://www.surveymethods.com/EndUser.aspx?FFDBB7AEFDB5AAAAFB Thanks!

RESERVE ALL GUEST MEMORY (ALL LOCKED)

Some applications do not perform well when memory is reclaimed from the virtual machine. Most users set a virtual machine memory reservation to prevent memory reclamation and to ensure stable performance levels. Memory reservation settings are static, meaning that when you change the memory configuration of the virtual machine itself, the memory reservation remains the same. If you want to keep the reservation equal to the virtual machine memory configuration, the UI (in both the vSphere client and the web client) offers the setting “Reserve all guest memory (all locked)”. This setting links the memory reservation to the virtual machine memory configuration: the memory reservation is immediately readjusted when the memory configuration changes. Increase the memory size and the memory reservation is automatically increased as well; reduce the memory size of a virtual machine and the reservation is immediately reduced.

This behavior is extremely useful when using the vSphere client as your management tool. Within the vSphere client, the memory configuration and the memory reservation settings do not share the same screen, so while changing the memory configuration one can easily forget to adjust the memory reservation. The web client has been redesigned and shows the memory configuration and reservation on a single screen. Yet having a setting that automates and enforces the alignment of memory configuration and reservation reduces the chance of human error.
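For those who prefer scripting over clicking, the same setting is exposed in the vSphere API as the memoryReservationLockedToMax property of the virtual machine configuration. Below is a minimal pyVmomi sketch; it assumes you have already connected to vCenter and retrieved a vim.VirtualMachine object named vm (the connection and lookup code is omitted).

```python
# Minimal pyVmomi sketch: enable "Reserve all guest memory (all locked)".
# Assumes an authenticated session and an existing vim.VirtualMachine
# handle named `vm`, obtained elsewhere (e.g. via a container view).
from pyVmomi import vim

def lock_all_guest_memory(vm):
    """Tie the VM's memory reservation to its configured memory size."""
    spec = vim.vm.ConfigSpec()
    # memoryReservationLockedToMax backs the "Reserve all guest memory
    # (all locked)" checkbox: the reservation now follows the memory size.
    spec.memoryReservationLockedToMax = True
    return vm.ReconfigVM_Task(spec=spec)  # returns a task to wait on
```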

PERNIXDATA FLASH VIRTUALIZATION PLATFORM WILL REVOLUTIONIZE VIRTUAL INFRASTRUCTURE ECOSYSTEM DESIGN

A couple of weeks ago I was fortunate enough to attend a tech preview of the PernixData Flash Virtualization Platform (FVP). Today PernixData exited stealth mode, so we can finally talk about FVP. Duncan already posted a lengthy article about PernixData and FVP and I recommend you read it.

At this moment a lot of companies are focusing on flash-based solutions. PernixData distinguishes itself in today’s flash-focused world by providing a new flash-based technology that is neither a storage-array-based solution nor a server-bound service. I’ll expand on what FVP does in a bit; first, let’s take a look at the aforementioned solutions and their drawbacks. A storage-array-based flash solution is plagued by plain physics: the distance between the workload and the fast medium (flash) generates higher latency than when the flash is placed near the workload. Placing the flash inside a server provides the best performance, but it must be shared between the hosts in the cluster to become a true enterprise solution. If the solution breaks important functions such as DRS and vMotion, then the use case of this technology remains limited.

FVP solves these problems by providing a flash-based data tier that becomes a cluster-wide resource. FVP virtualizes server-side flash devices such as SSD drives or PCIe flash devices (or both) and pools these resources into a data tier that is accessible to all the hosts in the cluster. One feature that stands out is remote access. By allowing access to remote devices, FVP lets the cluster migrate virtual machines around while still offering performance acceleration. Therefore cluster features such as HA, DRS and Storage DRS are fully supported when using FVP. Unlike other server-based flash solutions, FVP accelerates both read and write operations, turning the flash pool into a “data-in-motion tier”. All hot data lives in this tier, turning the compute layer into an all-IOPS-providing platform, while data at rest is moved to the storage array level, turning that layer into the capacity platform. By keeping the I/O operations as close to the source (the virtual machines) as possible, performance is increased while the traffic load on the storage platform is reduced. By filtering out read I/Os, the traffic pattern to the array changes as well, allowing the array to focus more on the writes.

Another great option is the ability to configure multiple protection levels when using write-back: data is synchronously replicated to remote devices. During the tech preview Satyam and Poojan provided some insights on the available protection levels, but I’m not sure if I’m allowed to share these publicly. For more information about FVP, visit Pernixdata.com.

The beauty of FVP is that it’s not a virtual appliance and that it does not require any agents installed in the guest OS. FVP is embedded inside the hypervisor. For me, this is the key reason to believe that this “data-in-motion tier” is only the beginning for PernixData. By having insight into the hypervisor and understanding the data flow of the virtual machines, FVP can become a true platform that accelerates all types of IOPS. I do not see any reason why FVP would not be able to replicate/encrypt/duplicate any type of input and output of a virtual machine. :)

As you can see I’m quite excited about this technology. I believe FVP is as revolutionary and disruptive as vMotion. It might not be as “flashy” (forgive the pun) as vMotion, but it sure is exciting to know that the limit on use cases is really the limit of your imagination. I truly believe this technology will revolutionize virtual infrastructure ecosystem design.

VOTING FOR THE 2013 TOP VIRTUALIZATION BLOGS - A YEAR IN REVIEW

When Eric Siebert opens up the voting for the top VMware & virtualization blogs, you know another (blogging) year has passed. First of all I want to thank Eric for organizing this year in, year out. I know he spends an awful lot of time on this. Thanks Eric! It’s amazing to see that there are more than 200 blogs dedicated to virtualization and that new blogs appear each month. Unfortunately I don’t have the time to read them all, but I do want to show my appreciation for the blog sites I usually visit. Best newcomer is an easy one: Cormac Hogan. The content is absolutely great and he should be in the top 10. Then we have the usual suspects, my technical marketing colleagues and buddies: Alan Renouf, Rawlinson Rivera and William Lam. I start off the day by making coffee, checking my email and logging into yellow-bricks.com. It’s the de facto standard of the virtualization blogs. Duncan’s blog provides not only technical in-depth articles, but also insights into the industry. Who else? Eric Sloof of course! Always nice to read his site and find out that your white paper is published before you get the official word through company channels. ;) Two relatively unknown blog sites with quality content: Erik Bussink and Rickard Nobel. These guys create awesome material. One blog that I’m missing in the list is the one from Josh Odgers. Great content. Hope to be able to vote for him next year.

When reviewing content from others you end up reviewing the stuff you did yourself, and 2012 was a very busy year for me. During the year I published and co-authored a couple of white papers, such as the vSphere Metro Cluster Case Study, the Storage DRS interoperability guide and vCloud Director Resource Allocation Models. I presented at a couple of VMUGs and at both VMworld San Francisco and Europe. The resource pool best practices session was voted one of the top 10 presentations of VMworld. And of course Duncan and I released the vSphere 5.1 Clustering Deepdive, also known as 50 shades of Orange. ☺ I believe it’s the best one of the series. In the meantime I ended up writing for the vSphere blog, appearing on a couple of podcasts and writing a little over 100 blog articles on frankdenneman.nl. I tend to focus on DRS, Storage DRS, SIOC and vMotion, but once in a while I like to write about something that gives a little peek into my life, such as the whiteboard desk or the documentaries I like to watch. It seems you like these articles as well, as they are frequently visited. In my articles I try to give insight into the behavior of vSphere features, to help you understand their impact. Understanding the behavior allows you to match your design to the requirements and constraints of the project or virtual infrastructure you’re working on. During my years in the field I was always looking for this type of information; by providing this material I hope to help out my fellow architects. When you publish more than 100 articles you tend to like some more than others. While it’s very difficult to choose individual articles, I enjoyed spending time on writing series of articles on the same topic, such as Architecture and design of datastore clusters (5 posts) and Designing your (Multi-NIC) vMotion network (5 posts).

But I also like these individual posts:

• vSphere 5.1 vMotion Deepdive
• A primer on Network I/O Control
• vSphere 5.1 Storage DRS load balancing and SIOC threshold enhancements
• HA admission control is not a capacity management tool
• Limiting the number of storage vMotions

I hope you can spare a couple of minutes to cast your vote and show your appreciation for the effort these bloggers put into their work. Instead of picking the customary names, please look back and review last year: think about the cool articles you read that helped you or sparked your interest to dive into the technology yourself. Thanks! I can’t wait to watch the Top 25 countdown show Eric, John and Simon did in previous years.

IMPLICIT ANTI-AFFINITY RULES AND DRS PLACEMENT BEHAVIOR

Yesterday I had an interesting conversation with a colleague about affinity rules and whether DRS reviews the complete state of the cluster and all affinity rules when placing a virtual machine. The following scenario was used to illustrate the question. These affinity rules are defined:

1. VM1 and VM2 must stay on the same host
2. VM3 and VM4 must stay on the same host
3. VM1 and VM3 must NOT stay on the same host

If VM1 and VM3 are deployed first, everything will be fine: VM1 and VM3 will be placed on two different hosts, and VM2 and VM4 will also be placed accordingly. However, if VM1 is deployed first and then VM4, there isn’t an explicit rule saying these two need to be on separate hosts; it is implied by the dependencies between the three rules above (the sketch below makes the implication explicit). Would DRS be intelligent enough to recognize this? Or will it place VM1 and VM4 on the same host, creating a clear deadlock by the time VM3 needs to be placed?
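To make the implied rule concrete, here is a toy sketch (purely illustrative; this is not how DRS is implemented). VMs joined by must-stay-together rules collapse into groups, and an anti-affinity rule between two VMs then separates their entire groups, which is exactly why VM4 must avoid VM1’s host:

```python
# Hypothetical illustration of deriving the implied anti-affinity between
# VM4 and VM1 from the three rules in the scenario above.

must_together = [("VM1", "VM2"), ("VM3", "VM4")]
must_apart = [("VM1", "VM3")]

# Union-find: VMs joined by must-stay-together rules end up in one group.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for a, b in must_together:
    union(a, b)

# An anti-affinity rule between two VMs separates their whole groups.
apart_groups = {frozenset((find(a), find(b))) for a, b in must_apart}

def implied_apart(a, b):
    return frozenset((find(a), find(b))) in apart_groups

print(implied_apart("VM1", "VM4"))  # True: VM4 travels with VM3, which must avoid VM1
```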

HA PERCENTAGE BASED ADMISSION CONTROL FROM A RESOURCE MANAGEMENT PERSPECTIVE – PART 1

Disclaimer: This article contains references to the words master and slave. I recognize these as exclusionary words. The words are used in this article for consistency because they are currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment.

HA admission control is quite challenging to understand, as it interacts with multiple layers of resource management. In the upcoming series of articles I want to focus on HA percentage-based admission control and how it interacts with vCenter and host management. Let’s cover the basics first before diving into percentage-based admission control.
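As a starting point, the core of the percentage-based check can be expressed in a few lines. The sketch below is a simplified worked example based on the publicly documented formula; the real check runs separately for CPU and memory and also accounts for virtual machine overhead, and all numbers here are hypothetical.

```python
# Simplified sketch of the percentage-based admission control check: a
# power-on is rejected if it would drop the remaining unreserved capacity
# below the configured failover percentage. Numbers are made up.

total_mem_gb = 100.0            # aggregate memory of all hosts in the cluster
reserved_failover_pct = 25.0    # "percentage of cluster resources reserved"

current_reservations_gb = 60.0  # sum of powered-on VM reservations (+ overhead)
new_vm_reservation_gb = 20.0    # reservation of the VM asking to power on

def admits(total, failover_pct, reserved, request):
    """Return True if the power-on keeps enough unreserved capacity spare."""
    remaining_pct = (total - reserved - request) / total * 100.0
    return remaining_pct >= failover_pct

print(admits(total_mem_gb, reserved_failover_pct,
             current_reservations_gb, new_vm_reservation_gb))
# False: only 20% would remain unreserved, below the 25% threshold
```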

HAVE YOU SIGNED UP FOR THE BENELUX SOFTWARE DEFINED DATACENTER ROADSHOW YET?

In less than three weeks’ time, the Benelux Software Defined Datacenter Roadshow starts. Industry-recognized experts from both IBM and VMware share their vision and insights on how to build a unified datacenter platform that provides automation, flexibility and efficiency to transform the way you deliver IT. Not only can you attend their sessions and learn how to abstract, pool and automate your IT services, the SDDC roadshow also offers you the opportunity to meet the experts, sit down and discuss technology. The speakers and their fields of expertise:

VMware
• Frank Denneman – Resource Management Expert
• Cormac Hogan – Storage Expert
• Kamau Wanguhu – Software Defined Networking Expert
• Mike Laverick – Cloud Infrastructure Expert
• Ton Hermes – End User Computing Expert

IBM
• Tikiri Wanduragala – IBM PureSystems Expert
• Dennis Lauwers – Converged Systems Expert
• Geordy Korte – Software Defined Networking Expert
• Andreas Groth – End User Computing Expert

The roadshow is held in three different countries:
• Netherlands – IBM forum in Amsterdam – March 5th, 2013
• Belgium – IBM forum in Brussels – March 7th, 2013
• Luxembourg – March 8th, 2013

The Software Defined Datacenter Roadshow is a full-day event and, best of all, it is free! Sign up now!

USING REMOTE DESKTOP CONNECTION ON A MAC? SWITCH TO CORD

One of the benefits of working for VMware technical marketing is that you have your own lab. Luckily my lab is hosted in an external datacenter, which helps me avoid a costly power bill at home each month :) However, that means I need to connect to my lab remotely. As a Mac user I used Remote Desktop Connection for Mac from Microsoft. One of the limiting factors of this RDP client for Mac is the limited resolution of 1400 x 1050 px. The screens at home have a minimum resolution of 2560 x 1440 px. This first world problem bugged me until today! Today I found CoRD - http://cord.sourceforge.net/. CoRD allows me to connect to my servers at a resolution of 2500 x 1600, using the full potential of my displays at home. Another great option is the hotkey function: using a key combination I spin up a remote desktop connection. I love these kinds of shortcuts that help me reduce the time spent navigating through the UI. If you are using a Mac and often RDP into your lab, I highly recommend downloading CoRD. Btw, it’s free ;)

VCD AND INITIAL PLACEMENT OF VIRTUAL DISKS IN A STORAGE DRS DATASTORE CLUSTER

Recently a couple of consultants brought some unexpected behavior of vCloud Director to my attention. If the provider vDC is connected to a datastore cluster and a virtual disk or vApp is placed in it, vCD displays an error when the individual datastores do not have enough free space available. Last year I wrote an article (Storage DRS initial placement and datastore cluster defragmentation) describing the Storage DRS initial placement engine and its ability to move virtual machines around the datastore cluster if individual datastores do not have enough free space to store the virtual disk. I couldn’t figure out why Storage DRS did not defragment the datastores in order to place the vApp, so I asked the engineers about this behavior. It turns out that this behavior is by design. When creating vCloud Director, the engineers optimized the initial placement engine of vCD for speed. When deploying a virtual machine, defragmenting a datastore cluster can take some time. To avoid the wait, vCD reports a not-enough-free-space error and relies on the vCloud administrator to manage and correct the storage layer. In other words, Storage DRS initial placement datastore cluster defragmentation is disabled in vCloud Director. I can understand the choice the vCD engineers made, but I also believe in the benefit of datastore cluster defragmentation. I’m interested in your opinion: would you trade initial placement speed for reduced storage management?
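To illustrate the difference in behavior, here is a hypothetical sketch (not actual vCD or Storage DRS code) of the two placement strategies: fast placement errors out as soon as no single datastore fits the disk, while defragmentation-aware placement first checks whether Storage vMotions could create room.

```python
# Toy model of the two initial placement strategies described above.

datastores = {"ds1": 30, "ds2": 40, "ds3": 35}   # free space per datastore, GB
new_disk_gb = 50

def place_fast(free, size):
    """vCD-style: error out immediately if nothing fits as-is."""
    for ds, gb in free.items():
        if gb >= size:
            return ds
    raise RuntimeError("not enough free space available")

def place_with_defrag(free, size):
    """Storage DRS-style: a fit may still exist after Storage vMotions."""
    try:
        return place_fast(free, size)
    except RuntimeError:
        if sum(free.values()) >= size:
            return "placeable after defragmentation (Storage vMotions required)"
        raise

print(place_with_defrag(datastores, new_disk_gb))
```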

EXPANDABLE RESERVATION ON RESOURCE POOLS, HOW DOES IT WORK?

It seems that the expandable reservation setting of a resource pool is shrouded in mystery. How does it work, what is it for, and what does it really expand? The expandable reservation setting allows a resource pool to allocate physical resources (CPU/memory) protected by a reservation from its parent, in order to satisfy the reservations of its child objects. Let’s dig a little deeper into this.

Parent-child relation

A resource pool provides resources to its child objects. A child object can be either a virtual machine or another resource pool. This is what is called the parent-child relationship. If a resource pool (A) contains a resource pool (B), which contains a resource pool (C), then C is the child of B. B is the parent of C but the child of A, and A is the parent of B. There is no terminology for the relation A-C, as A only provides resources to B; it does not care whether B provides any resources to C. When a virtual machine is placed into a resource pool, the virtual machine becomes a child object of the resource pool, and it is the responsibility of the resource pool to provide the resources the virtual machine requires. If a virtual machine is configured with a reservation, it requests the physical resources from its parent resource pool. Remember that a reservation guarantees the resources it protects: they cannot be reclaimed by the VMkernel, even during memory pressure. Therefore the reservation of the virtual machine is directed to its parent, and the parent must exclusively provide these resources to the virtual machine. It can only provide them from its own pool of protected resources; a resource pool can only distribute the resources it has obtained itself.

Protected or reserved resources?

I’m deliberately calling a resource claimed by a reservation a protected resource, as the VMkernel cannot reclaim it. When a resource pool is configured with a reservation, it immediately claims this memory from its parent. This goes on all the way up to the cluster level. The cluster is the root resource pool, and all the resources provided by the ESXi hosts are owned by the root resource pool and protected by a reservation. Therefore the cluster – the root resource pool – contains and manages the protected pool of resources. For example, the cluster has 100GB of resources, meaning that the root resource pool consists of 100GB of protected memory. Resource pool A is configured with a 50GB reservation, consuming this 50GB from the root resource pool. Resource pool B is configured with a 30GB reservation, immediately claiming 30GB of the resources protected by the reservation of resource pool A and leaving resource pool A with only 20GB of protected resources for itself. Resource pool C is configured with a 20GB memory reservation, which it claims from its parent, resource pool B, leaving B with 10GB of protected resources for itself. (A quick arithmetic check of this claim chain follows below.)

But what happens if the resource pool runs out of protected resources, or is not configured with a reservation at all? In other words, if the child objects in the resource pool are configured with reservations that exceed the reservation set on the resource pool, the resource pool needs to request protected resources from its parent. This can only be done if expandable reservation is enabled. Please note that the resource pool requests protected resources; it will not accept resources that are not protected by a reservation.
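Here is that arithmetic check, a minimal sketch of the bookkeeping in the example above:

```python
# Quick arithmetic check of the claim chain above: every reservation is
# claimed from the parent's protected pool, so each pool keeps its own
# reservation minus whatever its children have claimed from it.

cluster_gb = 100                 # root resource pool: all protected memory
rp_a, rp_b, rp_c = 50, 30, 20    # reservations: A under root, B under A, C under B

print("root keeps:", cluster_gb - rp_a)  # 50 GB
print("A keeps   :", rp_a - rp_b)        # 20 GB
print("B keeps   :", rp_b - rp_c)        # 10 GB
print("C keeps   :", rp_c)               # 20 GB (no child reservations yet)
```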
Now in this scenario, the five virtual machines in resource pool C are each configured with a 5GB memory reservation, totaling 25GB. Resource pool C is configured with a 20GB memory reservation. Therefore resource pool C is required to request 5GB of protected memory resources, on behalf of the virtual machines, from its parent, resource pool B. If resource pool B does not have the protected resources itself, it can request these protected resources from its parent. This can only occur when the resource pool is configured with expandable reservation enabled. The last stop is the cluster itself, the root resource pool. What can stop this river of requests? Two things: a resource limit, or a disabled expandable reservation. If a resource pool has expandable reservation disabled, it will try to satisfy the reservation itself; if it is unable to do so, it will deny the reservation request. If a resource pool is set with a limit, the resource pool is limited to that amount of physical resources. For example, if the parent resource pool has a reservation and a limit of 20GB, a reservation on behalf of its child needs to be satisfied from its own protected pool, otherwise the resource request is denied.

Now let’s use a more complex scenario: resource pool B is configured with expandable reservation enabled, a 30GB reservation and a limit of 35GB. Resource pool C is requesting an additional 10GB on top of the 20GB it has already been granted. Resource pool B is running two VMs with a total reservation of 10GB. This means the protected pool of resource pool B is servicing a 20GB resource request from resource pool C and 10GB for its own virtual machines. Its protected pool is depleted, so the additional 10GB request from resource pool C is denied: granting it would raise the protected pool of resource pool B to a total of 40GB of memory, which exceeds the 35GB limit. (A sketch of this request flow follows at the end of this article.)

Virtual machine memory overhead

Please remember that running a virtual machine requires a small amount of extra memory resources for the VMkernel, regardless of the reservation you configure. This is called the virtual machine memory overhead, and it is claimed as a reservation as well. To be able to run a virtual machine inside a resource pool, either expandable reservation should be enabled or a sufficient memory reservation must be configured on the resource pool.
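To tie the rules together, here is an illustrative model of the request flow in the complex scenario above. This is hypothetical code, not how the VMkernel implements it: it simply encodes the two stop conditions (a limit and a disabled expandable reservation) and replays the numbers from the scenario.

```python
# Illustrative model of the upward request flow for protected resources.

class Pool:
    def __init__(self, name, reservation, limit=None, expandable=False, parent=None):
        self.name, self.parent = name, parent
        self.limit, self.expandable = limit, expandable
        self.granted = reservation    # protected memory (GB) this pool holds
        self.used = 0                 # handed out to child pools and VMs
        if parent is not None and not parent.allocate(reservation):
            raise ValueError(f"parent cannot protect {name}'s reservation")

    def allocate(self, amount):
        """A child object asks this pool for `amount` GB of protected memory."""
        if self.used + amount <= self.granted:
            self.used += amount                    # satisfied from our own pool
            return True
        if not self.expandable or self.parent is None:
            return False                           # expandable disabled: deny
        extra = self.used + amount - self.granted
        if self.limit is not None and self.granted + extra > self.limit:
            return False                           # limit stops the request
        if self.parent.allocate(extra):            # expand by asking our parent
            self.granted += extra
            self.used += amount
            return True
        return False

root = Pool("cluster (root)", 100)                 # root owns all protected memory
b = Pool("B", 30, limit=35, expandable=True, parent=root)
c = Pool("C", 20, expandable=True, parent=b)       # claims 20 GB from B

print(b.allocate(10))   # True:  B's own two VMs fit in B's 30 GB pool
print(c.allocate(20))   # True:  C's first 20 GB of VM reservations
print(c.allocate(10))   # False: would grow B to 40 GB, past its 35 GB limit
```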