frankdenneman Frank Denneman is the Machine Learning Chief Technologist at VMware. He is an author of the vSphere host and clustering deep dive series, as well as podcast host for the Unexplored Territory podcast. You can follow him on Twitter @frankdenneman

Thin or thick disks? – it’s about management not performance


This is my contribution to the debate "Zero or Thick disks – debunking the performance myth."
Over the last couple of years, VMware engineers have worked hard to reduce the performance difference between thin disks and thick disks, and performance engineers have written many white papers explaining the improvements made to thin disks. Today, therefore, the question of whether to use thin-provisioned disks or eager-zeroed thick disks is not about the difference in performance but the difference in management.
When using thin-provisioned VMDKs, you need a clearly defined process. What do you do when the datastore holding the thin-provisioned disks is getting full? You need to define a consolidation ratio, understand which operational processes might be dangerous to your environment (think Patch Tuesday), and decide what space-utilization threshold should trigger migrating thin-provisioned disks to other datastores.
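The consolidation ratio and space-utilization threshold mentioned above can be sketched with a few lines of Python. This is a minimal illustration only; the capacities, usage figures, and the 75% threshold are all hypothetical:

```python
# Sketch (hypothetical values): checking datastore over-commitment and a
# space-utilization threshold before deciding to migrate thin disks.

def overcommit_ratio(provisioned_gb: float, capacity_gb: float) -> float:
    """Ratio of provisioned VMDK capacity to physical datastore capacity."""
    return provisioned_gb / capacity_gb

def needs_migration(used_gb: float, capacity_gb: float,
                    threshold: float = 0.75) -> bool:
    """True when actual space usage crosses the defined threshold."""
    return used_gb / capacity_gb >= threshold

# Example: 2 TB datastore, 3.5 TB provisioned as thin disks, 1.6 TB in use.
ratio = overcommit_ratio(3500, 2000)   # 1.75x over-committed
alert = needs_migration(1600, 2000)    # 0.80 >= 0.75 -> time to act
print(f"over-commit ratio: {ratio:.2f}, migrate: {alert}")
```

The point is not the arithmetic but that these numbers must be defined and monitored as part of the operational process.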
Today, Storage DRS can help you with many of the aforementioned challenges. For more information, please read the article: Avoiding VMDK level over-commitment while using Thin-provisioned disks and Storage DRS.
If Storage DRS is not used, thin-provisioned disks require seamless collaboration between the virtualization teams (provisioning and architecture) and the storage administrators. When this is not possible due to organizational or cultural differences, thin provisioning is more a risk than a blessing.
Zero-out process: Eager-zeroed thick disks might provide a marginal performance increase in some (corner) cases, but the costs involved can outweigh the perceived benefits. First of all, eager-zeroed thick disks need to be zeroed out during creation. When your array doesn't support the VAAI primitives, this zeroing hurts performance and extends provisioning time. With terabyte-sized disks becoming more common, this impacts provisioning time immensely.
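As a back-of-the-envelope illustration of that provisioning hit, here is a minimal sketch; the 400 MB/s sequential write rate is an assumed figure, not a measurement:

```python
# Sketch: estimating eager-zeroed thick provisioning time when the array
# cannot offload the zeroing via VAAI (throughput figure is an assumption).

def zero_out_seconds(disk_gb: float, write_mb_s: float) -> float:
    """Time to write zeros across the entire disk at a given throughput."""
    return (disk_gb * 1024) / write_mb_s

# A 2 TB disk at an assumed 400 MB/s sequential write rate:
seconds = zero_out_seconds(2048, 400)
print(f"{seconds / 3600:.1f} hours to zero out")  # roughly 1.5 hours
```

Double the disk size and the provisioning time doubles with it, which is exactly why terabyte-sized disks make this cost hard to ignore.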
Waste of space: Most virtualized environments contain virtual machines configured with oversized OS disks and over-specced data disks, resulting in wasted space full of zeros. Thin-provisioned disks only occupy the space used for storing actual data, not zeros.
Migration: Storage vMotion copies every last bit of a virtual disk, which means it must copy every zeroed-out block as well. Combined with oversized disks, this creates unnecessary overhead on your hosts and storage subsystem, copying and verifying the integrity of zeroed-out blocks. Migrating thin disks only requires moving the user data, resulting in faster migration times and less overhead on hosts and the storage subsystem.
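The difference in copy volume can be shown with a trivial sketch; the 500 GB/120 GB figures are made up for illustration:

```python
# Sketch: data Storage vMotion must copy for an eager-zeroed thick disk
# (every block, zeros included) versus a thin disk (user data only).

def svmotion_copy_gb(provisioned_gb: float, used_gb: float,
                     thin: bool) -> float:
    """Gigabytes moved during a Storage vMotion of this disk."""
    return used_gb if thin else provisioned_gb

# Hypothetical 500 GB disk containing 120 GB of actual user data:
thick_copy = svmotion_copy_gb(500, 120, thin=False)  # 500 GB copied
thin_copy = svmotion_copy_gb(500, 120, thin=True)    # 120 GB copied
print(f"thick: {thick_copy} GB, thin: {thin_copy} GB")
```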
In essence, thin-provisioned disks versus eager-zeroed thick disks is a trade-off between resource and time savings on one hand and risk avoidance on the other. Choose wisely.


2 Replies to “Thin or thick disks? – it’s about management not…”

  1. My provisioning strategy for a long time has been: on storage systems with thin-provisioning support, I use lazy-zeroed thick VMDKs (the only time I use eager zero is when it's required by, say, FT). This way I can be sure the LUN will never be over-provisioned. If I don't happen to come close to the utilization of the LUN before VMware thinks it is out of space, that is fine; it can stay under-provisioned – there is no waste there.
    My typical datastore does maybe 100-200 IOPS (avg. 1-5 IOPS/VM); anything that might be intensive (say, databases) gets put on RDMs, mainly for array-based snapshot support, but also for finer-grained performance and space management (via array tools) and the ability to swing the volumes (if needed) back and forth between physical and virtual hosts.
    For arrays without TP support, or for local storage, I always use TP VMDKs, though space management can sometimes be a bit annoying compared to the above scenario.
    Would the migration of blocks filled with zeros via Storage vMotion be mostly (or totally) offset by arrays with VAAI support and zero detection enabled? I would expect so, but am not sure.

  2. I think in some cases it's also reasonable to consider the I/O utilization of a datastore when deciding between thin and thick disks.
    I have seen a customer who saved a lot of space on a datastore by using only thin disks, but he couldn't make use of the saved space because the datastore was already at its maximum I/O throughput during peak times. So there was no way to put more VMs on that datastore. In cases like this, I think thin disks are not worth the additional management and monitoring effort.
