
Adjust timeout value ESXi Embedded Host Client

April 12, 2016 by frankdenneman

I love to use the ESXi Embedded Host Client next to vCenter in my lab. It’s quick, it provides most of the functionality, and best of all it has a functioning VM console when accessed from a Mac. The ESXi Embedded Host Client time-out defaults to 15 minutes, but you can adjust this setting.
[Image: ESXi Embedded Host Client time-out setting]
On the right side of the menu bar there is a drop down menu next to the IP-address or DNS name of your ESXi server. Open it and go to:

  1. Settings
  2. Application timeout
  3. Select the appropriate timeout value

As I use it in my lab, I select the option Off, but if you use this in other environments I expect you will pick a different value.
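If you prefer to script this, newer ESXi builds also expose the Host Client session timeout as an advanced host setting. The option name below is an assumption on my part, so verify it exists on your build before relying on it; setting it to 0 disables the timeout, matching the Off option above.

esxcli system settings advanced list -o /UserVars/HostClientSessionTimeout
esxcli system settings advanced set -o /UserVars/HostClientSessionTimeout -i 0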

Filed Under: VMware

DVD Store, the perfect homelab workload tool

March 31, 2016 by frankdenneman

DVD Store 2.1 is a magnificent tool for all aspiring VCP/VCAP candidates, a great tool for home lab enthusiasts to understand performance metrics, and a fantastic tool to understand the behavior of an application stack in a virtual datacenter.
WHAT IS DVD STORE?
According to the official site the DVD Store Version 2.1 (DS2) is a complete open source online e-commerce test application, with a backend database component, a web application layer, and driver programs. The goal in designing the database component as well as the midtier application was to utilize many advanced database features (transactions, stored procedures, triggers, referential integrity) while keeping the database easy to install and understand. The DS2 workload may be used to test databases or as a stress tool for any purpose.
Thanks to Todd Muirhead and Dave Jaffe for creating this! However, there is a slight challenge in installing it properly. You can install it on Windows or on Linux and use many different database programs; I like to use Windows for this. Unfortunately, the instruction video on YouTube I tried to follow was lacking some crucial details to get it deployed successfully. Therefore I started to document the steps involved to get it deployed on a Windows 2012 system using SQL 2014 SP1. Please note that you can run DVD Store on Linux as well, and it might be even better (more lean and mean than a Windows install) for home labs. If you have a detailed, 100% reproducible write-up of a working DVD Store deployment on Linux, please share the link to your article in the comments.


DVDSTORE ARCHITECTURE
As described above, DVD Store is an application stack that can run on a single virtual machine or across multiple virtual machines. By using multiple virtual machines, you can test various components and layers in your virtual datacenter. As this is my goal, I’m creating one VM that will run the database and another VM that generates the workload.
[Image: DVD Store architecture]
Requirements
I’m listing the software I’ve used in order to create a working environment. Many variations are possible. If you can create a lightweight version of this build, or a complete community edition (license free), please share the URL of your article in the comments.

  • Two virtual machines
  • Windows 2008 R2 and Windows 2012
  • Windows 2008 R2 SP1
  • DVD Store 2.1 ds21.tar.gz
  • DVD Store 2.1 ds21_sqlserver.tar.gz
  • Winzip
  • SQL 2014 SP1*
  • ActiveState ActivePerl Community Edition

DATABASE VM
In this exercise I’m going to install and configure a 20GB database on a Windows 2012 VM. If you are using templates, check whether you have enough space for the DVD Store on your C: drive. During the first stage the temporary files are stored on the C: drive, so provide free space at least equal to the DB size. The database hard disk needs to be twice the size of the DB in order to successfully import the data. Post-configuration optimizations can reduce the consumed space of the database, but don’t be too frugal when configuring the hard disks. Play around with the compute settings depending on your lab equipment. I noticed that Windows 2012 uses 5.4 GB of memory to run its OS and SQL Express when idling, but during installation it consumed close to 11GB.
Windows 2012 configuration
Update Windows 2012 with all the latest patches, update VMware Tools, and enable Remote Desktop if you don’t want to use the VM console. Disable the firewall; as I run an air-gapped lab I don’t want to spend too much time on firewall rules. SQL requires Microsoft .Net Framework 3.5 SP1 and Microsoft .Net Framework 4.0. .Net Framework 4.0 is already part of the Windows 2012 OS, therefore you only have to enable 3.5 by executing the following steps (a command-line alternative is shown after the list):

  1. Go to Server Manager
  2. Add roles and features
  3. Next
  4. Role-based or feature-based installation
  5. Click Next until you reach Features
  6. Select .Net Framework 3.5 Features
  7. Click Install

Extracting DVD Store
The DVD Store kit is available at linux.dell.com/dvdstore. Download the files ds21.tar.gz and ds21_sqlserver.tar.gz. Both include scripts that were created on a Unix-based machine and lack the proper CR/LF line endings for a Windows system. WinZip converts files to the proper Windows format while extracting, therefore I recommend using WinZip. Alternatively, you can use a tool such as unix2dos to convert the files if you don’t want to use WinZip (an example is shown after the directory listing). Extract both files to the C:\ drive, creating a directory structure as follows:
[Image: DS21 directory structure]
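If you extracted the kit with a tool that does not convert line endings, a conversion pass with unix2dos could look like the sketch below. It assumes unix2dos is on your PATH; adjust the file masks to your needs.

rem Convert the Perl and SQL scripts in the DS2 tree to Windows (CR/LF) line endings
cd /d C:\ds2
for /r %f in (*.pl *.sql) do unix2dos "%f"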
Install ActivePerl
The installation of DVD Store is done via a Perl script, and Windows 2012 doesn’t ship with a Perl interpreter. One of the recommended Perl utilities is ActiveState ActivePerl Community Edition; you can download it here. As I’m using Windows 2012, I need to download the x64 MSI version. The install is straightforward, no specific options need to be selected, basically a next-next-finish install.
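A quick way to verify that the install succeeded and that perl.exe ended up on the PATH is to open a new command prompt and check the version:

perl -v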


SQL 2014
DVD Store can leverage either the full version or the Express version of SQL Server. Microsoft allows you to evaluate their products for 180 days. If you maintain a VM configuration for more than 180 days, you can use the free version of SQL 2014 Express. Please be aware that you need SQL Server Express with Advanced Services, as it includes the full version of SQL Server 2014 Management Studio plus Full-Text Search and Reporting Services; both features are required to run DVD Store. For more info on SQL 2014 versions go here: https://www.microsoft.com/en-us/download/details.aspx?id=42299. Download SQL 2014 Express ADV SP1 here: https://www.microsoft.com/en-us/download/details.aspx?id=46697
If you are going to use the Express version, adjust your VM configuration. Unfortunately, SQL Express has some CPU limitations for the database engine (limited to the lesser of 1 socket or 4 cores) and a 10 GB database size limitation. Therefore a 4 vCPU configuration would be 1 virtual socket with 4 cores per socket. For more info about virtual sockets and cores please read this article: http://frankdenneman.nl/2013/09/18/vcpu-configuration-performance-impact-between-virtual-sockets-and-virtual-cores/
Install SQL 2014 Express ADV SP1
Run Install and select the following options:

  1. New SQL Server stand-alone installation
  2. Accept the license terms
  3. Check “Use MS Update to check for updates”
  4. Database Engine Configuration: Mixed Mode (SQL Server Authentication and Windows Authentication) (provide password)
  5. Reporting Services Native Mode: Install and Configure

Install SQL 2014 SP1
Download the eval version of SQL 2014 SP1 here: http://technet.microsoft.com/evalcenter/dn205290.aspx
Run Install and select the following options:

  1. New SQL Server stand-alone installation
  2. Select Evaluation
  3. Accept the license terms
  4. Setup Role: Select All Features using default values for service accounts*
  5. Database Engine Configuration: Mixed Mode (SQL Server Authentication and Windows Authentication) (provide password)
  6. Analysis Services Configuration: Add current User
  7. Reporting Services Configuration: Install and Configure
  8. Distributed Replay Controller: Add Current User
  9. Install

During the install it can happen that the install process freezes on a step called “Install_WatsonX86_Cpu32_Action”. To resolve this, open up Task Manager and end all “extra” processes called “Windows Installer (32 bit)”, leaving only a single Windows Installer process.
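If you prefer the command line over Task Manager, you can list the running Windows Installer processes first and then end the extra ones by PID; the PID below is obviously a placeholder.

rem List all Windows Installer processes
tasklist /FI "IMAGENAME eq msiexec.exe"
rem End one of the extra instances (replace 1234 with a PID from the list above)
taskkill /PID 1234 /F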
I’m sure you can improve and optimize the SQL installation, but I haven’t really looked into this. For more information I recommend David Klee’s blog (http://www.davidklee.net/) and Michael Webster’s book “Virtualizing SQL Server with VMware” (http://longwhiteclouds.com/).


INSTALLING DVD STORE
Once SQL is installed, you can begin installing DVD Store. The process consists of executing two scripts: the Install_DVDStore.pl script and the SQL script it generates.
Install_DVDStore.pl script
The Install_DVDStore.pl script generates the database content (such as users and products) by creating CSV files, and it generates a SQL script that allows MSSQL to create the DS2 user and the databases and to import the CSV content files. In order to correctly generate these files, you must create the directories where the MSSQL database files will be stored. I’m using a single drive for all databases, therefore I create a directory SQL\DBfiles on the E: drive (E:\SQL\DBfiles).
Please note that the workload CSV files are generated in the C:\DS2 folder! That means that if you are going to generate a 20GB database, you need at least 20GB of free space on your C:\ drive as well to temporarily store the CSV files.
Once SQL is installed, you can run the Install_DVDStore script in the C:\DS2 folder. I prefer to open up a command prompt to run the script; the window remains open after the script has completed, allowing me to do other stuff in the meantime (a launch example is shown after the list of answers below). If you have more trust in scripts than I do, go right ahead and double-click the Perl script from Windows Explorer.
Run C:\ds2\Install_DVDStore.pl. In order to create a 20GB DB in the directory E:\SQL\DBFiles, I’m going to answer the questions as follows:

  • Database size: 20
  • Database size is in MB or GB: GB
  • Database type: MSSQL
  • System type: WIN
  • Path where Database files will be stored: E:\SQL\DBFiles\ *

* Please note the trailing \ in E:\SQL\DBFiles\; this is required, otherwise the script will fail.
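For reference, launching the script from a command prompt could look like the sketch below. It assumes ActivePerl registered perl.exe on the PATH and that the E: drive is where you want the database files.

rem Create the target directory for the MSSQL database files
mkdir E:\SQL\DBFiles
rem Run the DVD Store install script and answer the prompts as listed above
cd /d C:\ds2
perl Install_DVDStore.pl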
[Image: Install_DVDStore script run]
Creating the custom CSV and SQL script files took my system roughly 20 minutes. The CSV files are stored in the directory structure under C:\DS2\Data_files. The SQL script is stored in the directory C:\DS2\sqlserverds2\. The Install_DVDStore script generated the following script: sqlserverds2_create_all_20GB. That’s the script we want to run in order to get the DB loaded with the records.
[Image: Generated SQL script]
Edit the SQL script
David Klee (@kleegeek), the SQL MVP, discovered a slight error in the script. To fix it, edit the script in Notepad or SQL Management Studio. Go to line 91 (or use Find) and change the (1) of GENDER VARCHAR(1) into (2), resulting in GENDER VARCHAR(2). Save and exit.
[Image: SQL script, GENDER VARCHAR line]
It seems the DS2 scripts use the SA account with a blank password. You can do two things: go through all the scripts, or change the SA password on your SQL server. If someone knows the location of the SA user in the scripts, please leave a comment. To change the SA password, open up SQL Server 2014 Management Studio (go to Start, Apps, SQL Server 2014 Management Studio). Select “SQL Server Authentication” and use the SA user with the password you entered during the installation of SQL. Go to Security \ Logins and select the SA account, go to Properties, and deselect the option “Enforce password policy”. Now remove the password and click OK. Yes, you are sure you want to continue, so click Yes 😉 Exit Management Studio.
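If you would rather not click through Management Studio, roughly the same change can be made with sqlcmd from a command prompt. This is a sketch: the instance name and the current password are placeholders, and a blank SA password is only acceptable in an isolated lab like this one.

rem Disable the password policy for SA and set a blank password (lab use only)
sqlcmd -S localhost -U sa -P "CurrentSaPassword" -Q "ALTER LOGIN sa WITH PASSWORD = '', CHECK_POLICY = OFF;"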
Execute the SQL script
Go to the C:\DS2\sqlserverds2 directory and double-click the sqlserverds2_create_all_20GB script. This opens SQL Server 2014 Management Studio and you need to authenticate again. This is a good time to check whether the SA account is using a blank password: use the SA user account and click Connect.
Management Studio shows the script; press F5 to execute it, or go to the Query menu and click Execute. The bottom left corner will show “Executing query”. Select the Messages tab to monitor the progress of the script. It took my system 1 hour and 5 minutes to complete the script, so it might be a good time to start working on the “workload” VM that’s going to generate the queries in the meantime. After the script finishes, it’s time to run a SQL maintenance task. Although the script creates a 20GB database, 37 GB of space is consumed on the hard disk.
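Alternatively, the generated script can be executed unattended with sqlcmd instead of Management Studio. This is a sketch: the instance name and the .sql extension of the generated file are assumptions, so adjust them to what Install_DVDStore.pl produced on your system.

rem Run the generated build script against the local SQL Server instance and log the output
sqlcmd -S localhost -U sa -P "" -i C:\DS2\sqlserverds2\sqlserverds2_create_all_20GB.sql -o C:\DS2\create_all_20GB.log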
[Image: Disk space consumption]
SQL2014 Maintenance Plan
The DVD Store documentation recommends running a maintenance plan to optimize performance. The SQL Server Agent service is turned off by default in SQL 2014. Start this service by opening a command prompt and typing the command net start sqlserveragent, otherwise the following error is presented when attempting to create a maintenance plan in SQL Management Studio:
[Image: Agent XP error]
Open SQL Server 2014 Management Studio and follow these steps (a sqlcmd shortcut is shown after the list):

  1. Go to Object Explorer and expand the database server tree.
  2. Under the server tree, expand Management and right-click Maintenance Plans.
  3. Click the “Maintenance Plan Wizard” option.
  4. In the wizard, click Next and enter “ds2” as the name of the plan.
  5. Click Next, check the “Update Statistics” checkbox, and click Next again.
  6. Click Next, choose the DS2 database, and click OK.
  7. Ensure “All existing statistics” is selected and “Sample by” is set to 18 percent.
  8. Once the above step is done, click Next twice to create a task under Maintenance Plans, under the Management object in the SQL Server tree.
  9. Now right-click the “ds2” task created in the steps above.
  10. Click Execute to update the statistics on all tables in the DS2 database.
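If you just want refreshed statistics without building a maintenance plan, a sqlcmd one-liner can do roughly the same job. Note that sp_updatestats uses its default sampling rather than the 18 percent sample the wizard steps above configure, so treat this as a rough shortcut; the instance name is again a placeholder.

rem Update statistics on all tables in the DS2 database
sqlcmd -S localhost -U sa -P "" -Q "USE DS2; EXEC sp_updatestats;"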

Visit the sites of the SQL experts to learn more about optimizing SQL databases if you want to get more performance out of this database. At this point, the database VM configuration is complete and we can start generating some workload by running the ds2sqlserverdriver program on the workload VMs.


DS2 WORKLOAD VM
Unfortunately, the DS2webdriver kept crashing on a Windows 2012 system, complaining about invalid registry settings, therefore I’m using a Windows 2008 system. The configuration of the VM is straightforward: ensure that the workload VM can connect to the database VM across the network and can run the ds2sqlserverdriver program.
Workload VM configuration

  • OS: Windows 2008
  • CPU config:
  • Number of virtual sockets: 2
  • Number of cores per socket: 1
  • Memory 12GB
  • Harddisk 1: 40 GB
  • SCSI controller 0: LSI Logic SAS
  • Network Adapter: VMXNET 3

Windows 2008 configuration

  • Update Windows 2008 with all the latest patches and service packs, and update VMware Tools.
  • Download SP1 here: https://www.microsoft.com/en-us/download/details.aspx?id=5842
  • Disable the firewall. *
  • Enable remote desktop if you don’t want to use the VM console
  • Enable .Net 3.5 if you want to install SQL management studio

* As I run an air-gapped lab, I don’t want to spend too much time on firewall rules.
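For reference, disabling the Windows firewall on all profiles can be done in one line from an elevated command prompt; only do this in an isolated lab.

rem Turn off the Windows firewall for all profiles (air-gapped lab only)
netsh advfirewall set allprofiles state off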
DS2SQLSERVERDRIVER
Extract ds21.tar.gz and ds21_sqlserver.tar.gz to C:\, just as on the database VM.
Open a command prompt, go to c:\ds2\sqlserverds2\, and run ds2sqlserverdriver.exe. This will show the options:
[Image: ds2sqlserverdriver options]
An example script (by David Klee):
c:\ds2\sqlserverds2\ds2sqlserverdriver.exe --target=192.168.0.132 --run_time=60 --db_size=20GB --n_threads=4 --ramp_rate=10 --pct_newcustomers=0 --warmup_time=0 --think_time=0.085
[Image: Running script]
This program allows you to customize nearly every aspect of the workload. The parameter I like most is the think time: the amount of time a simulated user would ‘think’ before clicking again. It allows you to create a more realistic workload that differs from any synthetic benchmark tool out there. You can spawn multiple virtual machines running differently configured workloads against a single database: adjust the think time, adjust the average number of searches and orders per customer. The application stack allows you to investigate the complete stack. You can run multiple workload VMs and the DB VM on a single host, allowing you to understand CPU or memory contention. You can distribute the workload across multiple hosts, allowing you to dive into the impact of networking and possibly DRS. Move VMs onto a single datastore and monitor the storage path and the impact of SIOC. The possibilities are endless. Genuinely a tool that can help anyone at any level understand virtualization and IT infrastructures better.
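To illustrate the point about think time, the sketch below runs two differently behaving user populations from two workload VMs against the same database. All parameters come from the example above; the exact values are arbitrary and just meant to show the contrast between an aggressive and a relaxed user profile.

rem Workload VM 1: aggressive users, short think time, more threads
c:\ds2\sqlserverds2\ds2sqlserverdriver.exe --target=192.168.0.132 --run_time=60 --db_size=20GB --n_threads=8 --ramp_rate=10 --pct_newcustomers=0 --warmup_time=1 --think_time=0.02
rem Workload VM 2: relaxed users, long think time, fewer threads
c:\ds2\sqlserverds2\ds2sqlserverdriver.exe --target=192.168.0.132 --run_time=60 --db_size=20GB --n_threads=2 --ramp_rate=10 --pct_newcustomers=0 --warmup_time=1 --think_time=0.5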

Filed Under: Home Lab, VMware

You do not have permissions to view this object error after updating VCSA to 6.0 Update 1b

March 1, 2016 by frankdenneman

Today I’ve updated my vCenter Server Appliance with the VC-6.0.0U1b-Appliance.ISO in my lab. After rebooting I was surprised to see the error “You do not have permissions to view this object” on almost every object in the inventory screen.
[Image: VCSA permissions error]
Unfortunately a reboot of the DC (home lab, I do not run an elaborate AD here) didn’t help. Time to google, and it seems that a lot of other people have hit this bug. After googling some more I found the VMware KB article: KB 2125229.
Problem is, this is solely focused on the Windows version of vCenter and not on solving the problem occurring on the VCSA. Although I can log in and see the inventory when using my admin account (Lab\vAdmin), I can’t access the objects. Maybe a permission problem? When checking the global permissions, the vAdmin user is still listed as an administrator.
[Image: Global Permissions]
However, administrators should be able to access all objects; as I found out, a refresh of the permission is required. Here is how I solved it:
1. Log out of vC and login with the default admin account “administrator@vsphere.local”
2. In the Home view, select “Administration” from the menu
3. Go to Global Permissions, remove the user (In my case vAdmin)
4. Click on “Add Permission”
5. Select your AD domain and select the correct user
[Image: Global Permissions, Add Permission]
6. Click on Ok
7. Check the list to see whether your user is added with the correct role (administrator).
8. Logout and login with the correct AD user.
9. Back to work.
[Image: Inventory view]
Time for me to power on these servers again.
Follow Frank on twitter @frankdenneman

Filed Under: VMware

Insights into VM density

February 15, 2016 by frankdenneman

For the last 3 months my main focus within PernixData has been (and still is) the PernixCloud program. In short, PernixData Cloud is the next logical progression of PernixData Architect and provides visibility into and analytics of virtual datacenters, their infrastructure, and their applications. By providing facts on the various elements of the virtual infrastructure, architects and administrators can design their environment in a data-driven way.
The previous article, “Insights into CPU and Memory configurations of ESXi hosts”, zoomed in on the compute configuration of 8,000 ESXi hosts and helped us understand which is the most popular system in today’s datacenter running VMware vSphere. The obvious next step was to determine the average number of virtual machines on these systems. Since that time the dataset has expanded and it now contains data from more than 25,000 ESXi hosts.
An incredible dataset to explore, I can tell you, and it’s growing each day. Learning how to deal with these vast quantities of data is incredibly interesting. Extracting various metrics from a dataset this big is challenging; most commercial tools are not designed to cope with this amount of data, thus you have to custom build everything. And with the dataset growing at a rapid pace, you are constantly exploring the boundaries of what’s possible with software and hardware.
VM Density
After learning what system configurations are popular, you immediately wonder how many virtual machines are running on that system. But what level of detail do you want to know? Will this info be usable for architects and administrators to compare their systems, and practical enough to help them design their new datacenter?
One of the most sought-after metrics is the virtual CPU to physical CPU ratio. A very interesting one, but unfortunately, to get a result that is actually meaningful you have to take multiple metrics into account. Sure, you can map out the vCPU-to-pCPU ratio, but how do you deal with the oversizing of virtual machines that has been happening since the birth of virtualization? What about all these countless discussions on whether the system needs a single or a double vCPU because it’s running a single-threaded program? How many times have you heard the remark that the vendor explicitly states that the software requires at least 8 CPUs? Therefore you need to add CPU utilization to get an accurate view, which in turn leads to the question of what timeframe you need to use to understand whether the VM is accurately sized or whether the vCPUs are just idling most of the time. You are now mixing static data (inventory) and transient data (utilization). The same story applies to memory.
Consequently, I focused just on the density of virtual machines per host. The whole premise of virtualization is to exploit the variation in activity of applications; combined with distribution mechanisms such as DRS and VMTurbo, you can argue that virtual and physical compute configurations will be matched properly. Therefore it’s interesting to see how far datacenters stretch their systems and understand the consolidation ratio of virtual machines. Can we determine a sweet spot for the number of virtual machines per host?
The numbers
As discovered earlier, dual socket systems are the most popular system configuration in virtual datacenters, therefore I focused on these systems only. With the dataset now containing more than 25,000 ESXi hosts, it’s interesting to see what the popular CPU types are.
The popular systems contained 12, 16, 20 and 24 cores in total. Therefore the popular CPUs of today have 6, 8, 10 and 12 cores. But since we typically see a host as a “closed” system and trust the host-local CPU scheduler to distribute the vCPUs amongst the available pCPUs, all charts use the total cores per system instead of a per-CPU basis. For example, a 16-core system is an ESXi host containing two 8-core CPUs.
Before selecting a subset of CPU configurations let’s determine the overall distribution of VM density.
Interesting to see that it’s all across the board, with VM density ranging from 0-10 VMs per host up to more than 250. There were some outliers, but I haven’t included them; one system runs over 580 VMs while containing 16 cores and 192 GB. Let’s dissect the VM density per CPU config.
Dissecting it per CPU configuration
Instead of focusing on all dual socket CPU configurations, I narrowed it down to three popular configurations: the 16-core config as it’s the most popular today, and the 20- and 24-core configs as I expect these to be the default choice for new systems this year. This allows us to compare the current systems in today’s datacenter to the average numbers and helps you understand what VM density possible future systems run.
Memory
Since host memory is an integral part of providing performance to virtual machines, it’s only logical to determine VM density based on CPU and Memory configurations. What is the distribution of memory configuration of dual socket systems in today’s virtual datacenters?
Memory config and VM density of 16-core systems
30% of all 16-core ESXi hosts are equipped with 384GB of memory. Within this configuration, 21 to 30 VMs is the most popular VM density.
Memory config and VM density of 20-core systems
50% of all 20-core ESXi hosts are equipped with 256GB of memory. Within this configuration, 31 to 40 VMs is the most popular VM density.
Interesting to see that these systems, on average, have to cope with less memory per core than the 16-core systems (24GB per core for the 16-core systems versus 12.8GB per core here).
Memory config and VM density of 24-core systems
39% of all 24-core ESXi hosts are equipped with 384GB of memory. Within this configuration, 101 to 150 VMs is the most popular VM density.
101 to 150 VMs sounds like VDI platform usage. Are these systems the sweet spot for virtual desktop environments?
Conclusion
Not only do we have actual data on VM density now, other interesting facts were discovered as well. When I was crunching these numbers, one thing that stood out to me was the memory configurations used. Most architects I speak with tend to configure the hosts with as much memory as possible and swap out the systems when their financial lifespan has ended. However, I saw some interesting facts, for example memory configurations such as 104 GB or 136 GB per system. How do you even get 104GB of memory in such a system; did someone actually find a 4GB DIMM lying around and decide to stick it in the system? More memory = better performance, right? Please read my Memory Deep Dive series on how this hurts your overall performance. But I digress. Another interesting fact is that 4% of all 24-core systems in our database are equipped with 128GB of memory. That is an average of 5.3 GB per core, or 64GB per NUMA node, which immediately raises questions such as average host memory per VM or VM density per NUMA node. The more we look at the data, the more questions arise. Please let me know what questions you have!

Filed Under: Miscellaneous

Insights into CPU and Memory configuration of ESXi Hosts

December 23, 2015 by frankdenneman

Recently Satyam Vaghani wrote about PernixData Cloud. In short, PernixData Cloud is the next logical progression of PernixData Architect and provides visibility into and analytics of virtual datacenters, their infrastructure, and applications.
As a former architect I love it. The most common question asked by customers around the world was how other companies are running and designing their virtual datacenters. Which systems do they use and how do these systems perform with similar workloads? Many architects struggle with justifying their bill of materials when designing their virtual infrastructure, or even worse, with getting the budget. Who hasn’t heard the reply when suggesting their hardware configuration: “you want to build a Ferrari, a Mercedes is good enough”. With PernixData Cloud you will be able to show trends in the datacenter, the popularity of particular hardware, and application details. It lets you start ahead of the curve, aligned with the current datacenter trends instead of trailing them. Of course I can’t go into detail as we are still developing the solution, but I can occasionally provide a glimpse of what we are seeing so far.
For the last couple of days I’ve been using a part of the dataset and queried 8,000 hosts on their CPU, memory, and ESXi build configuration to get insight into the popularity of particular host configurations.
CPU socket configuration
I was curious about the distribution of CPU socket configurations. After analyzing the dataset, it is clear that dual socket CPU configurations are the most popular setup. Although single CPU socket configurations are more common than quad CPU socket configurations in the dataset, quad socket systems are more geared towards running real-world workloads, while single CPU configurations are typically test/dev/lab servers. Therefore the focus will primarily be on dual CPU socket systems and partially on quad CPU socket systems. The outliers of this dataset are the 8-socket servers. Interestingly enough, some of these are chock-full of options; some were equipped with 15-core CPUs. 120 CPU cores per host, talk about CPU power!
[Image: Number of CPU sockets in ESXi hosts]
CPU core distribution
What about the CPU core popularity? The most popular configuration is 16 cores per ESXi host, but without the context of CPU sockets one can only guess which CPU configuration is the most popular.
[Image: Total number of CPU cores in ESXi hosts]
Core distribution of dual CPU socket ESXi hosts
When zooming in on the dataset of dual CPU socket ESXi hosts, it becomes clear that 8-core CPUs are the most popular. I compared it with an earlier dataset, and quad- and six-core systems are slowly declining in popularity. Six-core CPUs were introduced in 2010, so presumably most will be up for a refresh in 2016. I intend to track the CPU configurations to provide trend analysis on popular CPU configurations in 2016.
[Image: Number of cores per CPU in dual CPU socket ESXi hosts]
Core count quad socket CPU systems
What about quad socket CPU systems? Which CPU configuration is the most popular? It turns out that CPUs containing 10 cores are the sweet spot when it comes to configuring a quad socket CPU system.
[Image: Number of cores per CPU in quad CPU socket ESXi hosts]
Memory configuration
Getting insight into the memory configuration of the servers gives us a clear picture of the compute power of these systems. What is the most popular memory configuration of a dual socket server? As it turns out, 256 and 384 GB are the most popular memory configurations. Today’s servers are getting beefy!
[Image: Memory configuration of dual socket ESXi hosts]
Zooming into the dataset and querying the memory configuration of dual socket 8-core servers, the distribution is as follows:
[Image: Memory in GB in dual socket 8-core servers]
What about the memory configuration of quad CPU servers?
[Image: Memory configurations of quad CPU socket ESXi hosts]
NUMA
512 GB is the most popular memory configuration for quad CPU socket ESXi hosts. Assuming the servers are configured properly, this configuration provides the same amount of memory to each NUMA node of the system. The most popular NUMA node is configured with 128 GB in both dual and quad CPU socket systems.

ESXi version distribution

I was also curious about the distribution of ESXi versions amongst the dual and quad CPU socket systems. It turns out that 5.1.0 is the most popular ESXi version for dual CPU systems, while most quad CPU socket machines have ESXi version 5.5 installed.
[Image: ESXi versions]
More To Come
Satyam and I hope to publish more results from our dataset in the coming months. The dataset is expanding rapidly, increasing our insight into datacenters around the globe, and we hope to cover other dimensions like applications and the virtualization layer itself. Please feel free to send me interesting questions you might have about the planet’s datacenters and we’ll see what we can do. Follow me on Twitter @frankdenneman

Filed Under: Miscellaneous
