Mem.MinFreePct sliding scale function

One of the cool “under the hood” improvements vSphere 5 offers is the sliding scale function of the Mem.MinFreePct.

Before diving into the sliding scale function, let’s take a look at the Mem.MinFreePct setting itself. MinFreePct determines the amount of memory the VMkernel should keep free. This threshold is subdivided into several memory state thresholds (High, Soft, Hard and Low) and is introduced to prevent performance and correctness issues.
The threshold for the Low state is required for correctness; in other words, it protects the VMkernel layer from PSODs resulting from memory starvation. The Soft and Hard thresholds are about virtual machine performance and memory starvation prevention. The VMkernel triggers progressively more drastic memory reclamation techniques as it approaches the Low state.

If the amount of free memory drops just below the MinFreePct threshold, the VMkernel applies ballooning to reclaim memory. Ballooning introduces the least performance impact on the virtual machine because it cooperates with the guest operating system inside the virtual machine; however, there is some latency involved. Memory compression helps avoid hitting the Low state without impacting virtual machine performance, but if memory demand is higher than the VMkernel’s ability to reclaim, a more drastic measure is taken to avoid memory exhaustion: swapping. Swapping introduces VM performance degradation, and for that reason this reclamation technique is only used when desperate times call for desperate measures. For more information about reclamation techniques I recommend reading the “disable ballooning” article.

vSphere 4.1 allowed the user to change the default MinFreePct value of 6% to a different value, and introduced dynamic thresholds for the Soft, Hard and Low states to set appropriate limits and prevent virtual machine performance issues while protecting VMkernel correctness. By default, the vSphere 4.1 thresholds were set to the following values:

Free memory state   Threshold            Reclamation mechanism
High                6%                   None
Soft                64% of MinFreePct    Balloon, compress
Hard                32% of MinFreePct    Balloon, compress, swap
Low                 16% of MinFreePct    Swap
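To make the relationship between MinFreePct and the state thresholds concrete, here is a minimal Python sketch of the vSphere 4.1 model, where a flat percentage is applied to the whole host memory range. The function and variable names are mine for illustration, not VMkernel internals:

```python
def thresholds_41(host_mem_mb, min_free_pct=0.06):
    """Derive the vSphere 4.1-style memory state thresholds (in MB)
    from a flat MinFreePct applied to total host memory."""
    high = host_mem_mb * min_free_pct   # High state: MinFreePct of host memory
    return {
        "high": high,
        "soft": high * 0.64,  # 64% of MinFreePct
        "hard": high * 0.32,  # 32% of MinFreePct
        "low":  high * 0.16,  # 16% of MinFreePct
    }

# Example: a 32GB (32768MB) host with the default 6% MinFreePct
print(thresholds_41(32 * 1024))
```

On a 32GB host this yields a High threshold of 1966.08MB, with the Soft, Hard and Low thresholds scaled down from it.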

Using a default MinFreePct value of 6% can be inefficient in times where 256GB or 512GB systems are becoming more and more mainstream. A 6% threshold on a 512GB system results in more than 30GB sitting idle most of the time. However, not all customers use large systems; some prefer to scale out rather than scale up, and in that scenario a 6% MinFreePct might be suitable. To get the best of both worlds, ESXi 5 uses a sliding scale to determine its MinFreePct threshold.

Free memory threshold   Range
6%                      0-4GB
4%                      4-12GB
2%                      12-28GB
1%                      Remaining memory
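The sliding scale applies each percentage only to its own slice of host memory and sums the results. A minimal Python sketch of that calculation, assuming the tier boundaries from the table above (the function name is mine):

```python
def sliding_min_free_mb(host_mem_mb):
    """Approximate the ESXi 5 sliding-scale MinFreePct threshold in MB."""
    # (upper bound of tier in MB, percentage applied within that tier)
    tiers = [(4 * 1024, 0.06), (12 * 1024, 0.04), (28 * 1024, 0.02)]
    total, prev = 0.0, 0
    for upper, pct in tiers:
        # Only the memory that falls inside this tier contributes at this rate
        total += (min(host_mem_mb, upper) - min(host_mem_mb, prev)) * pct
        prev = upper
    total += max(host_mem_mb - prev, 0) * 0.01  # 1% of the remaining memory
    return total

print(sliding_min_free_mb(96 * 1024))  # the 96GB example below
```

For a 96GB host this produces 1597.44MB, matching the worked example that follows.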

Let’s use an example to explore the savings of the sliding scale technique. On a server configured with 96GB RAM, the MinFreePct threshold will be set at 1597.44MB, as opposed to 5898.24MB if 6% were used for the complete 96GB range.

Free memory state   Threshold   Range              Result
High                6%          0-4GB              245.76MB
                    4%          4-12GB             327.68MB
                    2%          12-28GB            327.68MB
                    1%          Remaining memory   696.32MB
Total High threshold                               1597.44MB

Due to the sliding scale, the MinFreePct threshold will be set at 1597.44MB, resulting in the following Soft, Hard and Low thresholds:

Free memory state   Threshold            Reclamation mechanism       Threshold in MB
Soft                64% of MinFreePct    Balloon                     1022.36
Hard                32% of MinFreePct    Balloon, compress           511.18
Low                 16% of MinFreePct    Balloon, compress, swap     255.59
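The Soft, Hard and Low values are simply fixed fractions of the sliding-scale total. A short sketch deriving them from the 96GB example’s 1597.44MB (function name mine, rounding to two decimals for display):

```python
def state_thresholds(min_free_mb):
    """Derive the Soft/Hard/Low state thresholds (MB) from MinFreePct."""
    return {
        "soft": round(min_free_mb * 0.64, 2),  # 64% of MinFreePct
        "hard": round(min_free_mb * 0.32, 2),  # 32% of MinFreePct
        "low":  round(min_free_mb * 0.16, 2),  # 16% of MinFreePct
    }

print(state_thresholds(1597.44))
```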

Although this optimization isn’t as sexy as Storage DRS or some of the other new features introduced in vSphere 5, it is one that helps you drive your environments to higher consolidation ratios.


  1. Is the First Table wrong in the book?

  2. Very cool article Frank. Also, this KB recommends setting 2% on ESXi 4.1 with more than 64GB:

  3. Hi Ron,

    Thanks for spotting this error. No, the book is correct; the table in the article isn’t.
    I corrected the mistake in the table.

  4. Andrew Fidel

    July 27, 2011 at 15:33

    Frank, when does the breaking down of large page tables to enable TPS take place on Nehalem systems? If this is at high I would think it would be advantageous to have a buffer between High and Soft.

  5. Hi,
    has there been any change to TPS regarding when (physical RAM utilization) large pages are broken down to small pages?


  6. Hi Frank,

    thanks for this entry. It helped me a lot. Initially I addressed the request to Duncan



  7. Frank, thanks for posting this useful piece of information. However, I noticed that the last 2 tables were not accurate (Thanks to YP from Kingston). Please refer to my blog @ for a clarification.

  8. swapnendu CCIE

    August 1, 2012 at 20:06

    Very good article.

    One correction required :

    Table is still incorrect
    2% 12-28GB 696.32MB

    The correct value should be 327.68MB (=16384×2/100)


  9. Hi, in all the examples I’ve found, the 6 % value is used for the Mem.MemMinFreePct-setting. But what if you change this value? With the 6 %, the other percentages of the ranges are 4 %, 2 % and 1 %… But what if you change the 6 % to 10 % (for example!) ? Will the other percentages change too? I assume that the 4, 2 and 1 % are an outcome of 64 % of 6 % (which is 4 %), 32 % of 6 % (which is 2 %) and 16 % of 6 % (which is 1 %)…

    So changing the Mem.MemMinFreePct-setting to 10 % would, following the above assumption, result in:
    Range %
    4-12 GB => 6 % (64 % of 10 %)
    12-28 GB => 3 % (32 % of 10 %)
    remaining => 1.6 ~ 2 % (16 % of 10 %)

    Is this assumption correct? Or how does it work?

