Using VM Storage Policies with PernixData FVP Datastore Write Policies

Profile-Driven Storage, introduced in vSphere 5.0, provides rapid and intelligent placement of virtual machines based on pre-defined storage profiles. vSphere 5.5 enhanced Storage Profiles and, along the way, renamed the feature to VM Storage Policies.

The new architecture looks slightly different from the old one. The kingpin of a VM Storage Policy is the rule-set. A rule-set is a group of rules that describe the storage characteristics of a datastore. These characteristics are provided either by vendor-specific capabilities (through a VASA provider) or by user-defined tags. A rule-set can contain multiple rules, including a combination of vendor-specific capability rules and tag-based rules. This article focuses on using VM Storage Policies with rules based on tags. The schematic overview provides insight into the relationship between the objects:

01-vSphere VM Storage Policy

Configuring VM Storage Policy
The vSphere UI does not provide a single point of configuration for a VM Storage Policy. The VM Storage Policy UI allows you to define a rule-set and add tags to the rule-set. However, tags must be defined and associated with vSphere objects before the rule-set is created. Creating and assigning tags cannot be done from the VM Storage Policy UI.

Creating Tags
Tags can be created in several ways. Storage-related tags are best created either via the Tags option in the Home menu, or via the Tags option on the Manage tab of the datastore cluster or the datastore itself. A tag must be assigned to a category. Categories define the cardinality of the tag and which vSphere objects the tag can be associated with.

Please note that you can edit the category and add associable objects at any time; however, once set, you cannot remove an object. If required, the category needs to be deleted and recreated with the correct set of associable objects.

Cardinality allows you to define whether an object accepts one tag or multiple tags from that category. In this scenario I will use the tag to define which FVP write policy is assigned to the datastore. When accelerating a datastore in FVP, you can assign a default write policy. All virtual machines in the vSphere cluster that are configured to use that particular datastore will be accelerated accordingly. As datastore write policies are mutually exclusive, two policies cannot exist on the same datastore at the same time.

02-Overview of Tags

With that in mind, the cardinality of “One tag per object” aligns perfectly with the exclusivity of the FVP write policy. Once an administrator assigns the FVP Write Back mode tag to a datastore, vSphere will not allow the administrator to also assign the FVP Write Through mode tag to that datastore.
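To make the cardinality rule concrete, below is a minimal, purely illustrative Python sketch (it does not call any vSphere API; all names are made up) of a category with “One tag per object” cardinality rejecting a second tag from the same category:

# Illustrative model of tag categories and "One tag per object" cardinality.
class Category:
    def __init__(self, name, single_cardinality=True):
        self.name = name
        self.single_cardinality = single_cardinality

class Tag:
    def __init__(self, name, category):
        self.name = name
        self.category = category

class Datastore:
    def __init__(self, name):
        self.name = name
        self.tags = []

    def assign_tag(self, tag):
        # Reject a second tag from a single-cardinality category,
        # mirroring how vSphere rejects a second FVP write policy tag.
        if tag.category.single_cardinality and any(
                t.category is tag.category for t in self.tags):
            raise ValueError(f"{self.name} already has a tag from category {tag.category.name}")
        self.tags.append(tag)

fvp_policy = Category("FVP Write Policy")
write_back = Tag("FVP-Write-Back", fvp_policy)
write_through = Tag("FVP-Write-Through", fvp_policy)

ds = Datastore("Cryo-SYN1")
ds.assign_tag(write_back)
ds.assign_tag(write_through)   # raises ValueError: the policies are mutually exclusive

The same check is what stops a datastore from carrying both the Write Back and Write Through tags at the same time.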

Assigning tags is a bit tedious; the tag-centric workflows do not provide the option to assign a tag to multiple datastores at once.

Please note that when assigning a tag to a datastore cluster, the tag does not waterfall down to the members of the datastore cluster; it has to be assigned to the individual datastores manually.

I prefer to create the tags and then go to the Datastores view in vCenter. Multi-select (Shift-click) the appropriate datastores, right-click the selection, and choose Assign Tag…

03-Multi-select datastores
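If you prefer scripting over clicking, bulk assignment can also be approximated with the vSphere Automation SDK for Python (available for more recent vSphere versions than the ones discussed here). The sketch below assumes a tag named FVP-Write-Back already exists; the server, credentials and datastore names are placeholders for your own environment:

# Sketch: attach an existing tag to several datastores in one go.
# Requires the vSphere Automation SDK for Python; names are placeholders.
from vmware.vapi.vsphere.client import create_vsphere_client
from com.vmware.vapi.std_client import DynamicID

client = create_vsphere_client(server='vcenter.lab.local',
                               username='administrator@vsphere.local',
                               password='********')

# Look up the tag ID by name.
tag_id = None
for tid in client.tagging.Tag.list():
    if client.tagging.Tag.get(tid).name == 'FVP-Write-Back':
        tag_id = tid
        break

# Attach the tag to every datastore whose name starts with "Cryo-SYN".
for ds in client.vcenter.Datastore.list():
    if ds.name.startswith('Cryo-SYN'):
        client.tagging.TagAssociation.attach(
            tag_id=tag_id,
            object_id=DynamicID(type='Datastore', id=ds.datastore))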

Once the tags have been created and assigned to the vSphere storage objects (datastore cluster and datastores), a VM Storage Policy can be created. Go to the Home view and select VM Storage Policies. After providing a name and description, the rule-set is created. A rule-set can contain multiple rules; however, you have to add them per category. After selecting the tags, the workflow shows a list of compatible datastores.

In this scenario I have four datastores, Cryo-SYN1 through Cryo-SYN4, all of which are replicated. Depending on the service level agreement, I accelerated two datastores with Write-Through mode and two datastores with Write-Back mode. I created two VM Storage Policies and assigned the following tags to their rule-sets.

04-Overview VM storage Policies
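Behind the scenes, tag-based compatibility boils down to a simple set comparison: a datastore is compatible with a policy when it carries every tag referenced by the policy's rule-set. A small illustrative sketch of that idea (the tag names and the second policy name are assumed for this example; this is not the actual SPBM implementation):

# Illustrative only: how tag-based rule-sets map to compatible datastores.
datastore_tags = {
    'Cryo-SYN1': {'Replicated-RPO-15', 'FVP-Write-Back'},
    'Cryo-SYN2': {'Replicated-RPO-15', 'FVP-Write-Back'},
    'Cryo-SYN3': {'Replicated-RPO-15', 'FVP-Write-Through'},
    'Cryo-SYN4': {'Replicated-RPO-15', 'FVP-Write-Through'},
}

policies = {
    'RPO-15-R-DS-FVP-WB': {'Replicated-RPO-15', 'FVP-Write-Back'},
    'RPO-15-R-DS-FVP-WT': {'Replicated-RPO-15', 'FVP-Write-Through'},
}

def compatible_datastores(rule_set_tags):
    # A datastore is compatible when it carries every tag in the rule-set.
    return [ds for ds, tags in datastore_tags.items() if rule_set_tags <= tags]

for policy, rule_set in policies.items():
    print(policy, '->', compatible_datastores(rule_set))
# RPO-15-R-DS-FVP-WB -> ['Cryo-SYN1', 'Cryo-SYN2']
# RPO-15-R-DS-FVP-WT -> ['Cryo-SYN3', 'Cryo-SYN4']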

Once the VM Storage Policies were created, I enabled VM Storage Policies on the compute cluster. The schematic overview provides insight into the relationship between all objects:

05-VM-Storage-Policy-WB-WT

If a customer wants to deploy a virtual machine that has an RPO of 15 minutes defined in its service level agreement, the administrator selects the VM Storage Policy RPO-15-R-DS-FVP-WB. The vSphere provisioning process then displays the compatible datastores that are safe to use for a virtual machine with an RPO requirement of 15 minutes.

06-Provisioning-new-vm

Before configuring the VM Storage Policies, I accelerated the datastores within FVP.
07-FVP-Datastore-Acceleration

With the help of VM Storage Policies and FVP write policies at the datastore level, the virtual machine is placed on a replicated datastore with Write-Back enabled. The VM summary page confirms this:
08-VM provisioned

MS Word style formatting shortcut keys for Mac

Recently I started to spend a lot of time in MS Word again, and as a stickler for details I dislike a mishmash of font types throughout my document. I spend a lot of time configuring the styles of the document, yet when I paste something from other documents, MS Word tends to ignore them. Correcting the formatting burns a lot of time and simply annoys the crap out of me.

To avoid this in the future, I started to dig around for some font- and style-related shortcut keys. Yesterday I tweeted the shortcut key to apply the Normal style, and judging by the retweets, many of you are facing the same challenge.

Below is a short list of shortcut keys that I use. There are many more; share the ones you use most. As I use a Mac, I listed the Mac shortcut combinations. Replace CMD with CTRL if you are using MS Word on a Windows machine.

Select text:
Select all: CTRL+A
Select sentence: CMD + click
Select word: Double click
Select paragraph: Triple click

Formatting:
Clear formatting: CTRL+spacebar
Apply Normal Style: Shift+CMD+N
Header 1: CMD+ALT+1
Header 2: CMD+ALT+2
Header 3: CMD+ALT+3
Change Case: CMD+Option+C (repeat combination to cycle through options)
Indent paragraph: CTRL+Shift+M
Remove indent: CMD+Shift+M

Find and replace: F5

Future direction of disabling TPS by default and its impact on capacity planning

Eric Sloof's tweet alerted me to the following announcement of TPS being disabled by default in upcoming vSphere releases.

In short, TPS will no longer be enabled by default, due to security concerns, starting with the following releases:

ESXi 5.5 Update release – Q1 2015
ESXi 5.1 Update release – Q4 2014
ESXi 5.0 Update release – Q1 2015
The next major version of ESXi

More information here: Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing (2080735)
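The KB describes an advanced host setting, Mem.ShareForceSalting, that controls the new behavior on patched builds. If you want to check where your hosts stand once the updates land, a pyVmomi sketch along the following lines should work (hostnames and credentials are placeholders; the option only exists on builds that ship the change):

# Sketch: report the page-sharing salting setting on every host via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.lab.local', user='administrator@vsphere.local',
                  pwd='********', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    try:
        opt = host.configManager.advancedOption.QueryOptions('Mem.ShareForceSalting')
        print(host.name, 'Mem.ShareForceSalting =', opt[0].value)
    except vim.fault.InvalidName:
        print(host.name, 'option not present (pre-patch build)')

Disconnect(si)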

After reading this announcement, I hope architects review the commonly (mis)used over-commitment ratios during capacity planning exercises. It was always one of my favorite topics to discuss at VCDX defense sessions.

It’s common to see a 20 to 30% over-commitment ratio in a vSphere design attributed to TPS. In reality these ratios are rarely observed, thanks to the monitoring processes of IT organizations. Why? Because TPS is no longer used with the same frequency as in older pre-vSphere infrastructures (ESX 2.x and 3.x). vSphere has effectively done away with the out-of-the-box over-commitment ratios: it only leverages TPS when certain memory usage thresholds are exceeded, and architects typically do not design their environments to run at the 96% memory usage threshold.
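To illustrate what such a planning assumption does in practice (the numbers below are made up for the example):

# Quick illustration of the planning math with made-up numbers.
physical_ram_gb = 256
planned_overcommit = 1.30                 # the "30% thanks to TPS" assumption
planned_capacity_gb = physical_ram_gb * planned_overcommit   # 332.8 GB on paper

# TPS only starts collapsing pages once the high memory-usage thresholds are
# reached, so in day-to-day operations the usable capacity is roughly the
# physical RAM itself.
realistic_capacity_gb = physical_ram_gb
print(round(planned_capacity_gb - realistic_capacity_gb, 1),
      "GB of planned headroom may never materialize")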

Large pages and processor architectures
When AMD and Intel introduced hardware-assisted memory virtualization features (RVI and EPT), VMware engineers quickly discovered that these features increased virtual machine performance while reducing the memory consumption of the kernel. There was some overhead involved, however, which could be solved by using large pages. A normal memory page is 4KB in size; a large page is 2MB.

However, large pages could not be combined with TPS because of the overhead introduced by scanning these 2MB regions. The low probability of finding identical large pages made the engineers realize that the overhead was not worth the limited potential for memory savings. The performance increase was calculated at around 30%, while the impact of the lost sharing was perceived as minimal, as memory footprints of physical machines tend to increase every year. Therefore, virtual machines provisioned on vSphere use a hardware MMU, leveraging the CPU's hardware-assisted memory virtualization features.

Although vSphere uses large pages, TPS is still active. It scans and hashes all small pages inside a large page so that they can be shared to relieve memory pressure when a memory threshold is reached. During my time at VMware I wrote an article on the VMkernel memory thresholds in vSphere 5.x. Another interesting aspect of large pages is the kernel's tendency to favor the best performance: it will split up large pages and share the small pages during memory pressure, but when no memory pressure is present, new incoming pages are stored in large pages again, potentially creating a cyclical process of constructing and deconstructing large pages.
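To put numbers on this: a 2MB large page covers 512 small pages of 4KB, and it is those small pages that TPS hashes and can eventually share once the large page is broken up. A toy illustration of the idea (this is not the VMkernel algorithm, just the arithmetic and the hash-then-compare principle):

# Toy illustration: hashing the 4KB pages inside a 2MB large page.
import hashlib

SMALL_PAGE = 4 * 1024            # 4KB
LARGE_PAGE = 2 * 1024 * 1024     # 2MB
PAGES_PER_LARGE_PAGE = LARGE_PAGE // SMALL_PAGE   # 512

large_page = bytes(LARGE_PAGE)   # pretend guest memory, all zeroes

# Hash each 4KB page; identical hashes are sharing candidates after a full
# bit-by-bit comparison (as in TPS, the hash alone is only a hint).
hashes = {}
for i in range(PAGES_PER_LARGE_PAGE):
    chunk = large_page[i * SMALL_PAGE:(i + 1) * SMALL_PAGE]
    hashes.setdefault(hashlib.sha1(chunk).hexdigest(), []).append(i)

print(PAGES_PER_LARGE_PAGE, "small pages,", len(hashes), "unique page contents")
# 512 small pages, 1 unique page contents -> all 512 could collapse into one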

NUMA
Another impact on the memory sharing potential is the NUMA processor architecture. NUMA provides the best memory performance by storing memory pages as close to the CPU as possible. TPS memory sharing could reduce performance when pages are shared between two separate NUMA nodes. For more info about NUMA and TPS, please read the article: “Sizing VMs and NUMA nodes”.

Capacity planning impact
Therefore, the impact of disabling TPS by default will not be as big as some might expect. What I do find interesting is the attention to security. I absolutely agree that out-of-the-box security is crucial, but looking at probability, I would rather perform a man-in-the-middle attack on the vMotion network, reading clear-text memory across the network, than wait for TPS to collapse memory pages. Which leads me to wonder when to expect encryption for vMotion traffic.

99 cents Promo to celebrate a major milestone of the vSphere Clustering Deepdive series

This week Duncan was looking at the sales numbers of the vSphere Clustering Deep Dive series and noticed that we hit a major milestone in September. In September 2014 we passed 45,000 distributed copies of the vSphere Clustering Deep Dive. Duncan and I never expected this or even dared to dream of hitting this milestone.

vSphere-clustering-books

When we first started writing the 4.1 book we had discussions around what to expect from a sales point of view, and we placed a bet: I would be happy if we sold 100 books, while Duncan was more ambitious with 400 books. Needless to say, we have reset our expectations many times since then… We didn't really follow the numbers closely over the last 12-18 months, and as we were discussing a potential update of the book today, we figured it was time to look at them again just to get an idea. 45,000 copies distributed (ebook + printed) is just remarkable.

We’ve noticed that the ebook is still very popular, so we decided to do a promo. As of Monday the 13th of October, the 5.1 ebook will be available for only $0.99 for 72 hours; after that, the price will go up to $3.99 for another 72 hours, and then it will return to the normal price. So make sure to get it while it is low priced!

Pick it up here on Amazon.com! The only other Kindle store we could open the promotion up for was Amazon.co.uk, so that is also an option!

Multi-FVP cluster design – using RAM and FLASH in the same vSphere cluster

A frequently asked question is whether RAM and flash resources can be mixed in the same FVP cluster. In FVP 2.0, hosts can provide both RAM and flash to FVP, so it's time to provide some design considerations for FVP clusters.

One host resource per cluster
An FVP cluster accepts only a single type of acceleration resource from a host. If a host contains both RAM and flash, you can decide which resource is assigned to that particular cluster. When you select one type of resource, FVP automatically removes the option of selecting the other resource available in the host.

01-Add acceleration resource

RAM and Flash in a single FVP cluster
An FVP cluster can be comprised of different acceleration resources. You can have an FVP cluster configuration in which one host provides RAM as an acceleration resource while another host provides flash as an acceleration resource to the same FVP cluster.

02-Multiple acceleration resources in FVP Cluster

Symmetry equals predictability
A common architectural best practice is symmetry in resource design. Identical host, component and software configurations reduce management operations, simplify troubleshooting and, above all, provide consistent and predictable performance. Although an FVP cluster can contain RAM and flash resources from multiple hosts, I would recommend a mixed configuration only as a transition state while migrating to a new acceleration resource standard in the FVP cluster (moving from flash to RAM or vice versa).

One vSphere cluster, multiple FVP clusters
To leverage multiple acceleration resources, FVP allows you to create multiple FVP clusters within the same vSphere cluster. This allows you to create multiple acceleration tiers: assign the memory resources to a separate FVP cluster, for example “FVP Memory Cluster”, and assign the flash resources to an “FVP Flash Cluster”. As the atomic level of acceleration is the virtual machine, a virtual machine can only be part of a single FVP cluster.

03-multi-fvp-clusters
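FVP does not expose a public API for this workflow, but the two constraints described above are easy to capture in a purely illustrative model: a host contributes one type of acceleration resource per FVP cluster, and a virtual machine belongs to at most one FVP cluster within the vSphere cluster. The host, cluster and VM names below are made up:

# Purely illustrative model of the FVP cluster membership constraints.
class FVPCluster:
    def __init__(self, name):
        self.name = name
        self.host_resources = {}   # host -> "RAM" or "Flash"
        self.vms = set()

    def add_host_resource(self, host, resource):
        # A host contributes only one type of acceleration resource per cluster.
        if host in self.host_resources and self.host_resources[host] != resource:
            raise ValueError(f"{host} already contributes {self.host_resources[host]}")
        self.host_resources[host] = resource

def add_vm(vm, cluster, all_clusters):
    # The atomic level of acceleration is the VM: one FVP cluster per VM.
    if any(vm in c.vms for c in all_clusters):
        raise ValueError(f"{vm} is already a member of another FVP cluster")
    cluster.vms.add(vm)

mem_cluster = FVPCluster("FVP Memory Cluster")
flash_cluster = FVPCluster("FVP Flash Cluster")
mem_cluster.add_host_resource("esx01", "RAM")
flash_cluster.add_host_resource("esx02", "Flash")

clusters = [mem_cluster, flash_cluster]
add_vm("vm-sql01", mem_cluster, clusters)
add_vm("vm-sql01", flash_cluster, clusters)   # raises ValueError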

Per VM-level Stats
One cool thing about FVP is its retention of stats. FVP collects stats on a per-VM basis and retains them regardless of FVP cluster membership. This means that if you create a multi-FVP-cluster design, you can easily track the difference in performance. As FVP's primary goal is to provide non-disruptive services, you can move virtual machines between FVP clusters without having to reboot them. Everything can be done on the fly without impacting service uptime.

04-Add VM to cluster

One great use case is to set up a monitor FVP cluster, which contains no acceleration resources and allows FVP to monitor the I/O operations of a particular application.

00-Before-and-After

Once you decide which acceleration resource provides the best performance, you can easily move this virtual machine to the appropriate FVP cluster. To learn more about Monitor mode, please read the article: “Investigate your application performance by using FVP monitor capabilities”.