NTPQ -P CONNECTION REFUSED ERROR MESSAGE
Sometimes a small misconfiguration can cause havoc in a complex distributed system. It becomes really annoying when no proper output is provided by log files and status reports. While investigating time issues in my lab, I ran into the following error message while executing the ntpq -p command.

TL;DR: The NTP client is disabled; enable it via the GUI.

The standard NTP query program (ntpq) is one of the quickest ways to verify that the Network Time Protocol daemon (ntpd) is up and running. The command ntpq -p prints a list of peers known to the ESXi host as well as a summary of their state. Running the command on another ESXi host provided the following output. Requesting the status of ntpd on the host with the weird time issues shows it is not running. No proper feedback is provided by the command line other than that it is starting; no failure code is returned. Management service initialization, such as an ntpd start, is logged in the file /var/log/syslog.log in ESXi 5.1 and later. Unfortunately, nothing useful is logged in this file either. I couldn't find a command that accurately reports whether the NTP client is enabled. Time to open up the web client. Host time configuration can be found by selecting the ESXi host, Manage, Time Configuration. Apparently NTP was not enabled. A simple problem to fix; unfortunately, there is no simple command-line function that lets you verify whether the NTP client is enabled (sans PowerCLI).
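When ntpd is running and synchronized, ntpq -p marks the peer currently used for synchronization with an asterisk in the first column. A minimal sketch of checking for that tally code; the sample output and server names below are made up and stand in for a live `ntpq -p` run on an ESXi host:

```shell
# Sample 'ntpq -p' billboard; a leading '*' marks the selected sync peer.
# Server names and addresses are fictional examples.
sample_output='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.example.com 192.0.2.10       2 u   35   64  377    0.512    0.081   0.034
+ntp2.example.com 192.0.2.11       2 u   12   64  377    0.734   -0.120   0.051'

# On a live host you would pipe 'ntpq -p' into the same grep.
if printf '%s\n' "$sample_output" | grep -q '^\*'; then
  echo "synced"
else
  echo "no sync peer"
fi
```

If ntpd is stopped or the client is disabled, ntpq never produces a `*` line (or the connection is refused outright), so the grep also serves as a crude health check in scripts.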
NO NETWORK CONNECTION AFTER RE-REGISTERING VCSA USING THE I'VE COPIED IT ANSWER
Paulo Coelho once stated, "Life moves very fast. It rushes from Heaven to Hell in a matter of seconds." Well, I think he perfectly described a day working in the lab and rushing through a migration. I'm upgrading the lab and I moved the vCenter Server Appliance (VCSA) to its new home. While trying to do a million things all at once, I didn't pay attention to the question whether I moved the virtual machine or whether I copied it. I selected the option "I copied it". And that's when the fun started: vCenter down.

TL;DR: Selecting "I copied it" implies that this machine is a duplicate and that a new identity should be generated. This means that the VM gets a new UUID and a new MAC address. SUSE Linux Enterprise Server 11 detects this new MAC address and views it as a new Ethernet device. The VCSA does not allow the creation of a new Ethernet controller. Rename the 70-persistent-net.rules file and reboot to have SUSE auto-generate a new 70-persistent-net.rules file with the correct MAC address, which allows you to restore network connectivity via the console.

Troubleshooting the problem
Both the web client and the VCSA config web page are unreachable, time to open up the VM console (Alt-F1). When logging in and pinging the gateway, the system returns the error message "Network is unreachable". Before tinkering with the configuration files, I like to restart the services and see if the status report exposes interesting information: "No configuration found for eth1". The VCSA is configured with a single NIC, and SUSE Linux Enterprise Server 11, which is the OS for the appliance, assigns the label eth0 to the first Ethernet adapter. VCSA networking is configured through the Virtual Appliance Management Interface (VAMI).
Executing the command /opt/vmware/share/vami/vami_config_net allows you to retrieve the current network configuration. When selecting option 6, "IP Address Allocation for eth1", VAMI reveals that it cannot read the interface files for 'eth1'. The networking interface files are stored in the directory /etc/sysconfig/networking/devices. When listing the files (ls), only ifcfg-eth0 shows up. Reviewing the ifcfg-eth0 file with cat shows that the correct networking configuration is still applied to eth0. It looks like the problem occurs due to the way SUSE handles persistent device naming, as described in the SUSE documentation.
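The TL;DR fix can be sketched as follows. This simulates the rename in a temporary directory with a made-up udev rule; on the actual VCSA the file is /etc/udev/rules.d/70-persistent-net.rules, and the regeneration with the new MAC address happens on the next boot:

```shell
# Simulated in a temp directory; on the real appliance operate on
# /etc/udev/rules.d/70-persistent-net.rules instead.
rulesdir=$(mktemp -d)
cat > "$rulesdir/70-persistent-net.rules" <<'EOF'
# Stale rule pinning eth0 to the OLD MAC address (example value)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:50:56:aa:bb:cc", NAME="eth0"
EOF

# Rename the stale rules file; udev rebuilds it on the next boot and
# maps the VM's new MAC address to eth0 again.
mv "$rulesdir/70-persistent-net.rules" "$rulesdir/70-persistent-net.rules.bak"
# reboot   # on the real appliance
ls "$rulesdir"
```

After the reboot, eth0 is bound to the new MAC address and network connectivity can be restored via the console.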
MONITORING POWER CONSUMPTION OF YOUR HOME LAB WITH A SMART PLUG
Home labs are interesting beasts: on one hand you would love to have all the compute, storage, and network power available; on the other hand you do not want a power bill similar to a Google data center's. I have a decent setup, with four Xeon servers, two Cisco 1Gb switches, a 10Gb switch, and three Synology systems, but I don't keep everything on all the time. One server acts as the management server, running a Windows DC, the vCenter appliance, the PernixMS server, and some other stuff. These machines are always on, not only to save time when I want to use my lab but for increased stability as well. Because of this, my network gear and storage systems are also on, which made me wonder how much the need for availability and stability will cost me on a yearly basis. The big Xeon rigs equipped with multiple PCIe devices are usually shut down after tests because I expect them to consume lots of power. Time to stop guessing and start monitoring. As always, Home Lab Sensei Erik Bussink pointed me to a simple solution: the Edimax SP-2101W Smart Plug Switch. Please leave a comment if you are using a different solution that is a better alternative to this device.

The device
Nothing much to add about the device itself; it is sleek enough that it will not eat up multiple power outlets. The device is managed via an Apple or Android app. The following screenshots are taken from an Apple device; you can monitor it with either your iPhone or iPad. You can manage multiple smart plugs from one device. As I've spread my lab over two power groups, I've installed two smart plugs to monitor my home lab. Unfortunately, the app doesn't allow displaying two smart plugs simultaneously; you have to open each individually. The monitor page shows the real-time power consumption registered by the plug. It displays amps and watts.
Quite cool to see what happens when you power on devices or even a virtual machine: this monitored server generates a spike of 30 watts when powering on a VM, though it quickly returns to a steady state. Fun to see that ESXi hosts do not consume a steady high level of power. The Now button shows the real-time power consumption and the total power consumption registered today, this week, and this month. By providing the price of energy, it additionally calculates the total cost. Unfortunately, I haven't found an option to change the currency sign, so you are stuck with the dollar sign. Selecting the Usage button provides a chart to view the power consumption of that day. The app allows you to analyze power consumption trends of your home lab by providing an overview based on 24 hours of data, a week, a month, and a full year.

Conclusion
The smart plugs are a great addition to my home lab. They provide me insight into consumption and, for me personally, have removed the reluctance to leave my full lab on. The answer to the question whether you need a smart plug if you run a home lab is, in my opinion, a straight and simple no. You can estimate the cost or you can just ignore it and pay the bill when it arrives. I'm just curious about these things and it helps to clear my conscience.
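The yearly-cost question raised earlier reduces to simple arithmetic once the plug tells you the steady-state wattage. A back-of-the-envelope sketch; the 150 W draw and the 0.22 per kWh tariff are assumptions, so substitute your own plug readings and energy price:

```shell
# Assumed values: an always-on management setup drawing 150 W,
# at a tariff of 0.22 per kWh (currency of your choice).
watts=150
price_per_kwh=0.22
awk -v w="$watts" -v p="$price_per_kwh" 'BEGIN {
  kwh_year = w * 24 * 365 / 1000        # watts -> kWh per year
  printf "%.0f kWh/year, cost %.2f/year\n", kwh_year, kwh_year * p
}'
# -> 1314 kWh/year, cost 289.08/year
```

The same formula applied to the app's daily or weekly totals lets you sanity-check the cost figure it calculates for you.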
TRACKING DOWN NOISY NEIGHBORS
A big part of resource management is the sizing of virtual machines. Right-sizing the virtual machines allows IT teams to optimize their resource utilization. Right-sizing has become a tactical tool for enterprise IT teams to ensure maximum workload performance and efficient use of the physical infrastructure. Another big part of resource management is keeping track of resource utilization; some of these processes are part of the daily operational tasks performed by specialized monitoring teams or the administrators themselves. Service providers usually cannot influence the right-sizing element, therefore they focus more on the monitoring part. What is almost universal across virtual infrastructure owners is the incidental nature of tracking down 'noisy neighbor' VMs. Noisy neighbor VMs generate workload in such a way that they monopolize resources and have a negative impact on the performance of other virtual machines. Service providers and enterprise IT teams have to deal with these consumer outliers in order to meet the SLAs of existing workloads and be able to satisfy the SLA requirements of new workloads. It's interesting that noisy neighbor tracking is an incidental activity, as it can be so detrimental to the performance of the virtual datacenter. Tools such as vSphere Storage IO Control (short-term focus) and vSphere Storage DRS (long-term focus) help alleviate the infrastructure from the burden of noisy neighbors, but attacking this problem structurally is necessary to ensure consistent and predictable performance from your infrastructure. In the long term, noisy neighbor VMs impact the projected consolidation ratio, which in turn influences the growth rate of the infrastructure. I've seen plenty of knee-jerk reactions, creating server and storage infrastructure sprawl due to the introduction of these outlier workloads.
Identifying noisy neighbors can become a valuable tool in both the strategic and tactical playbooks of the IT organization. Having insight into which VMs are monopolizing the resources allows IT teams to act appropriately. Similar to real life, the behavior of a noisy neighbor can often be changed, but sometimes that's the nature of the beast and you just have to live with it. In that situation noisy neighbors become outliers of conduct and one has to make external adjustments. This insight allows IT teams to respond along the entire vertical axis of the virtual datacenter, from application to infrastructure choice. With the correct analysis, the IT team can provide insights to the application owner, allowing them to adjust accordingly. It helps the IT team understand whether the environment can handle the workload and make adjustments to the infrastructure where necessary. Sometimes the intensity of the workload is just what it is, and hosting that workload is necessary to support the business. In that case the IT team has to understand whether the infrastructure is suitable to support the application. As most IT organizations have access to multiple platforms, accurate insight into the characteristics (and requirements) of the workload allows them to identify the correct platform. Virtual datacenters are difficult to monitor. They are comprised of a disparate stack of components. Every component logs and presents data differently. Different granularity of information, different time frames, and different output formats make it extremely difficult to correlate data. In addition, you need to be able to correctly identify the workload characteristics and interpret the impact they have on the shared environment. We no longer live in a world where we have to deal with isolated technology stacks. Applications typically no longer run on a single box connected to a single, isolated RAID array.
Today everything within the infrastructure is shared, and the level of hardware resource distribution dilutes further with each introduction of new hardware. Where we used to run a single application in a VM on top of a server with ten other VMs, sharing a couple of NICs and HBAs, we slowly moved towards converged network platforms. In the last 10 years we shared more and more; the only monolith remaining is the application in the VM, and that is rapidly changing as well with the popularity of containers and microservices. Yet most of our testing mechanisms and monitoring efforts are still based on the architecture we left behind 10 years ago. Virtual datacenters require continuous analytics that fully comprehend the context of the environment, with the ability to zoom in and focus on outliers if necessary. In the upcoming series I'm going to focus on how to explore cluster-level workloads, progressively zooming into specific workloads based on IOPS, block size, throughput, and unaligned IOs.
MANAGING YOUR VIRTUAL DATACENTER AND HOME LAB WITH A MAC
The majority of virtual datacenters are managed from Windows systems. When I started with virtualization I also used a Windows system, but when I joined VMware I received a MacBook, and this was the beginning of the end. Soon every Windows device in my home was replaced with an Apple device. The problem was that I still needed to manage my home lab. To circumvent this, I created a Windows admin VM and installed all my trusted Windows apps, such as PuTTY, the vSphere client, and WinSCP. Works great! Until you want to rebuild your lab or restructure the environment. It always felt like a burden, and on top of that I didn't want to spend CPU cycles and waste memory of my home lab on an admin VM. Throughout the years I discovered tools for Mac OS that replaced their trusted Windows equivalents, and with the new release of the HTML5 web client fling the dependency on the Client Integration Plugin (CIP) is removed. Here is the list of programs, tips, and tricks I use on my Mac to manage my home lab.

PuTTY > iTerm2
PuTTY is an SSH and telnet client for the Windows platform that gives you access to the command line of the ESXi server. For the Mac platform I recommend iTerm2. Although Mac OS has a native terminal application, iTerm2 has a couple of cool features that I absolutely love. It can run multiple sessions, each in its own tab. With profiles you can configure the connection settings to your ESXi host, and with a simple shortcut key combination (for example, Control-Command-1) you open a tab to the ESXi host. Download iTerm2 here.

Remote Desktop > Royal TSX
MS Remote Desktop is available for Mac OS, but the one remote desktop application you want to get is Royal TSX. The free version allows up to ten remote desktop connections, typically more than enough for the majority of home labs. I bought a licensed version as I'm using more than ten profiles and like to separate the workload part of the lab in a separate configuration document from the management part of the lab.
One of the cool features is the tabbed layout, allowing you to switch between remote desktops quite easily. The screens at home have a minimum resolution of 2560 x 1440; Royal TSX allows for any resolution, even native Retina resolution. I like to use the smart zoom and the resolution set by the virtual machine, giving you a proper environment to work in without hitting the time-consuming scroll bars. If security isn't a big concern for you, you can specify the user and password for the connection at multiple levels: either on the remote desktop connection profile itself or on the 'connection document' for the entire environment. A nice time saver! If you are the complete opposite and need higher levels of security, such as Network Level Authentication (NLA), Royal TSX is the application to get. NLA is enabled by default and you can configure it to use Transport Layer Security (TLS) as well. Download Royal TSX here.

WinSCP > Cyberduck
WinSCP and Veeam Backup Free Edition (previously Veeam FastSCP) are the most popular secure FTP applications that allow you to copy files directly onto the ESXi host. Unfortunately, the once-announced macOS port of WinSCP never came to fruition, and therefore I looked for alternatives. There are plenty of SFTP clients; the one I use and like is Cyberduck. It allows for creating connection profiles called bookmarks, letting you connect to the correct folder directly. It also supports various encryption ciphers and authentication algorithms if you operate in a secure environment. Cyberduck, like all the other listed tools, is free, but they occasionally ask for a donation. Download Cyberduck here.

VMware ESXi Embedded Host Client fling
The ESXi Embedded Host Client fling allows you to manage the ESXi host directly through a web client. It's fast, it's easy to install, and it provides most of the functionality you need when you are building your lab before deploying the vCenter Appliance.
One of the great assets of this tool is the integrated VM console. It's directly accessible within the browser and does not require any additional plugins or installers, solving the annoying Client Integration Plugin problem most Mac users faced when connecting to vCenter via the web client. The fling currently only supports ESXi 6.0; however, William published a workaround for ESXi 5.x, found here: http://www.virtuallyghetto.com/2015/08/new-html5-embedded-host-client-for-esxi.html Download the VMware ESXi Embedded Host Client fling here.

vSphere HTML5 Web Client Fling v1.2 (h5client)
This fling was released this week and allows you to connect to the vCenter server with an HTML5-based web client. Be aware that this client is designed for managing vCenter only! This release focuses on removing the dependency on the Client Integration Plugin, allowing administrators to connect to the VM console via the web client and do the basic stuff. Combine that with the normal web client and you can execute the majority of operations needed to set up and deploy your home lab / virtual datacenter. The client is deployed as a VIB on one of the ESXi hosts. For detailed install instructions visit the VMware vSphere blog. Download the HTML5 Web Client Fling v1.2 here.

Function keys
Not a tool, but sometimes you are required to press a function key, such as F11 when installing ESXi. No problem when installing physical boxes, but a challenge when installing a nested ESXi system using a remote (VM) console. In order to send the correct key, press Fn-Cmd-F11. This works for most function keys and other non-alphanumeric keys. Please leave a comment if you want to share your favorite tool or handy tips and tricks to save time.
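As a footnote to the iTerm2 tip: connection profiles pair nicely with entries in ~/.ssh/config, so typing `ssh esx01` is all you need. A small sketch that writes an example stanza to a temporary file rather than your real config; the host name and address are made up:

```shell
# Example SSH config stanza for a lab ESXi host. Written to a temp file
# here; in practice append it to ~/.ssh/config. Host name and IP are
# fictional placeholders.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host esx01
    HostName 192.168.1.101
    User root
EOF
# With this in place, 'ssh esx01' resolves the name, user, and address.
grep -c '^Host ' "$cfg"
```

An iTerm2 profile whose command is `ssh esx01` then gets you to the ESXi shell with one keystroke combination.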
ADJUST TIMEOUT VALUE ESXI EMBEDDED HOST CLIENT
I love to use the ESXi Embedded Host Client next to vCenter in my lab. It's quick, it provides most of the functionality, and best of all it has a functioning VM console when accessing it from a Mac. The ESXi Embedded Host Client time-out default is set to 15 minutes, but you can adjust this setting. On the right side of the menu bar there is a drop-down menu next to the IP address or DNS name of your ESXi server. Open it and go to:
DVD STORE, THE PERFECT HOMELAB WORKLOAD TOOL
DVD Store 2.1 is a magnificent tool for all aspiring VCP/VCAP candidates, a great tool for home lab enthusiasts to understand performance metrics, and a fantastic tool to understand the behavior of an application stack in a virtual datacenter.

WHAT IS DVD STORE?
According to the official site, DVD Store Version 2.1 (DS2) is a complete open-source online e-commerce test application, with a backend database component, a web application layer, and driver programs. The goal in designing the database component as well as the mid-tier application was to utilize many advanced database features (transactions, stored procedures, triggers, referential integrity) while keeping the database easy to install and understand. The DS2 workload may be used to test databases or as a stress tool for any purpose. Thanks, Todd Muirhead and Dave Jaffe, for creating this! However, there is a slight challenge in installing it properly. You can install it on Windows or on Linux and use many different database programs. I like to use Windows for this. Unfortunately, the instruction video on YouTube that I tried to follow was lacking some crucial details to get it deployed successfully. Therefore I started to document the steps involved to get it deployed on a Windows 2012 system using SQL Server 2014 SP1. Please note that you can run DVD Store on Linux as well, and it might be even better (more lean and mean than a Windows install) for home labs. If you have a detailed write-up (100% reproducible) of a working DVD Store deployment on Linux, please share the link to your article in the comments.
YOU DO NOT HAVE PERMISSIONS TO VIEW THIS OBJECT ERROR AFTER UPDATING VCSA TO 6.0 UPDATE 1B
Today I updated my vCenter Server Appliance with the VC-6.0.0U1b-Appliance.ISO in my lab. After rebooting I was surprised to see the error "You do not have permissions to view this object" on almost every object in the inventory screen. Unfortunately, a reboot of the DC (home lab, I do not run an elaborate AD here) didn't help. Time to google, and it seems that a lot of other people have hit this bug. After googling some more I found the VMware KB article KB 2125229. Problem is, this is solely focused on the Windows version of vCenter and not on solving the problem occurring on the VCSA. Although I can log in and see the inventory when using my admin account (Lab\vAdmin), I can't access the objects. Maybe a permission problem? When checking the global permissions, the (vAdmin) user is still listed as an administrator. However, administrators should be able to access all objects; as I found out, a refresh is required. Here is how I solved it:

1. Log out of vCenter and log in with the default admin account "administrator@vsphere.local".
2. In the Home view, select "Administration" from the menu.
3. Go to Global Permissions and remove the user (in my case vAdmin).
4. Click on "Add Permission".
5. Select your AD domain and select the correct user.
6. Click OK.
7. Check the list to see whether your user is added with the correct role (administrator).
8. Log out and log in with the correct AD user.
9. Back to work. Time for me to power on these servers again.

Follow Frank on twitter @frankdenneman
INSIGHTS INTO VM DENSITY
For the last three months my main focus within PernixData has been (and still is) the PernixCloud program. In short, PernixData Cloud is the next logical progression of PernixData Architect and provides visibility into, and analytics of, virtual datacenters, their infrastructure, and their applications. By providing facts on the various elements of the virtual infrastructure, architects and administrators can design their environment in a data-driven way. The previous article, "Insights into CPU and Memory configurations of ESXi hosts", zoomed in on the compute configuration of 8,000 ESXi hosts and helped us understand which is the most popular system in today's datacenter running VMware vSphere. The obvious next step was to determine the average number of virtual machines on these systems. Since that time the dataset has expanded and it now contains data on more than 25,000 ESXi hosts. An incredible dataset to explore, I can tell you, and it's growing each day. Learning how to deal with these vast quantities of data is incredibly interesting. Extracting various metrics from a dataset this big is challenging. Most commercial tools are not designed to cope with this amount of data, so you have to custom-build everything. And with the dataset growing at a rapid pace, you are constantly exploring the boundaries of what's possible with software and hardware.

VM Density
After learning which system configurations are popular, you immediately wonder how many virtual machines are running on those systems. But what level of detail do you want to know? Will this info be usable for architects and administrators to compare their systems, and practical to help them design their new datacenter? One of the most sought-after questions is the virtual CPU to physical CPU ratio. A very interesting one, but unfortunately, to get a result that is actually meaningful you have to take multiple metrics into account.
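The raw vCPU:pCPU ratio itself is a one-liner to compute; the hard part is interpreting it without utilization data. A sketch with assumed numbers (96 provisioned vCPUs on a dual-socket, 16-core host; both values are examples, not figures from the dataset):

```shell
# Assumed inventory values for one host: total provisioned vCPUs across
# all VMs, and total physical cores. Substitute your own counts.
vcpus=96
pcpus=16
awk -v v="$vcpus" -v p="$pcpus" 'BEGIN { printf "vCPU:pCPU ratio = %.1f:1\n", v / p }'
# -> vCPU:pCPU ratio = 6.0:1
```

Whether 6:1 is healthy or alarming depends entirely on how busy those vCPUs actually are, which is exactly why the ratio alone is not meaningful.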
Sure, you can map out the vCPU to pCPU ratio, but how do you deal with the oversizing of virtual machines that has been happening since the birth of virtualization? What about all these countless discussions whether the system only needs a single or double CPU because it's running a single-threaded program? How many times have you heard the remark that the vendor explicitly states that the software requires at least 8 CPUs? Therefore you need to add CPU utilization to get an accurate view, which in turn leads to the question of what timeframe you need to use to understand whether the VM is accurately sized or whether the vCPUs are just idling most of the time. You are now mixing static data (inventory) and transient data (utilization). The same story applies to memory. Consequently, I focused just on the density of virtual machines per host. The whole premise of virtualization is to exploit the variation in activity of applications; combined with distribution mechanisms such as DRS and VMTurbo, you can argue that virtual and physical compute configurations will be matched properly. Therefore it's interesting to see how far datacenters stretch their systems and to understand the consolidation ratio of virtual machines. Can we determine a sweet spot for the number of virtual machines per host?

The numbers
As discovered earlier, dual-socket systems are the most popular system configuration in virtual datacenters, therefore I focused on these systems only. With the dataset now containing more than 25,000 ESXi hosts, it's interesting to see what the popular CPU types are. The popular systems contained 12, 16, 20, and 24 cores in total, so the popular CPUs of today have 6, 8, 10, and 12 cores. But since we typically see a host as a "closed" system and trust the host-local CPU scheduler to distribute the vCPUs amongst the available pCPUs, all charts use the total cores per system instead of a per-CPU basis.
For example, a 16-core system is an ESXi host containing two 8-core CPUs. Before selecting a subset of CPU configurations, let's determine the overall distribution of VM density. Interesting to see that it's all across the board, with VM density ranging from 0-10 VMs per host up to more than 250. There were some outliers, but I haven't included them. One system runs over 580 VMs; this system contains 16 cores and 192 GB. Let's dissect the VM density per CPU config.

Dissecting it per CPU configuration
Instead of focusing on all dual-socket CPU configurations, I narrowed it down to three popular configurations: the 16-core config as it's the most popular today, and the 20- and 24-core configs as I expect these to be the default choice for new systems this year. This allows us to compare the current systems in today's datacenter to the average numbers and helps you understand what VM density possible future systems will run.

Memory
Since host memory is an integral part of providing performance to virtual machines, it's only logical to determine VM density based on CPU and memory configurations. What is the distribution of memory configurations of dual-socket systems in today's virtual datacenters?

Memory config and VM density of 16-core systems
30% of all 16-core ESXi hosts are equipped with 384 GB of memory. Within this configuration, 21 to 30 VMs is the most popular VM density.

Memory config and VM density of 20-core systems
50% of all 20-core ESXi hosts are equipped with 256 GB of memory. Within this configuration, 31 to 40 VMs is the most popular VM density. Interesting to see that these systems, on average, have to cope with less memory per core than the 16-core systems (24 GB per core versus 12.8 GB per core).

Memory config and VM density of 24-core systems
39% of all 24-core ESXi hosts are equipped with 384 GB of memory. Within this configuration, 101 to 150 VMs is the most popular VM density. 101 to 150 VMs sounds like VDI platform usage.
Are these systems the sweet spot for virtual desktop environments?

Conclusion
Not only do we have actual data on VM density now, other interesting facts were discovered as well. When I was crunching these numbers, one thing that stood out to me was the memory configurations used. Most architects I speak with tend to configure the hosts with as much memory as possible and swap out the systems when their financial lifespan has ended. However, I have seen some interesting facts, for example memory configurations such as 104 GB or 136 GB per system. How do you even get 104 GB of memory in such a system? Did someone actually find a 4 GB DIMM lying around and decide to stick it in the system? More memory = better performance, right? Please read my memory deep dive series on how this hurts your overall performance. But I digress. Another interesting fact is that 4% of all 24-core systems in our database are equipped with 128 GB of memory. That is an average of 5.3 GB per core, 64 GB per NUMA node. Which immediately raises questions such as average host memory per VM or VM density per NUMA node. The more we look at data, the more questions arise. Please let me know what questions you have!
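The per-core and per-NUMA-node figures quoted above reduce to simple division: host memory over total cores, and host memory over socket count (assuming a balanced DIMM population across sockets). A sketch using the 24-core, 128 GB example:

```shell
# Values from the 24-core example above: 128 GB host memory,
# two sockets of 12 cores each.
mem_gb=128
cores=24
sockets=2
awk -v m="$mem_gb" -v c="$cores" -v s="$sockets" 'BEGIN {
  printf "%.1f GB per core, %d GB per NUMA node\n", m / c, m / s
}'
# -> 5.3 GB per core, 64 GB per NUMA node
```

Running the same division against your own hosts is a quick way to spot unbalanced or undersized memory configurations before they show up as performance problems.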
INSIGHTS INTO CPU AND MEMORY CONFIGURATION OF ESXI HOSTS
Recently Satyam Vaghani wrote about PernixData Cloud. In short, PernixData Cloud is the next logical progression of PernixData Architect and provides visibility and analytics around virtual datacenters, their infrastructure, and their applications. As a former architect, I love it. The most common question asked by customers around the world was how other companies are running and designing their virtual datacenters. Which systems do they use and how do these systems perform with similar workloads? Many architects struggle with justifying their bill of materials when designing their virtual infrastructure, or, even worse, with getting the budget. Who hasn't heard the reply when suggesting their hardware configuration: "you want to build a Ferrari, a Mercedes is good enough"? With PernixData Cloud you will be able to show trends in the datacenter, the popularity of particular hardware, and application details. It lets you start ahead of the curve, aligned with current datacenter trends instead of trailing them. Of course I can't go into detail as we are still developing the solution, but I can occasionally provide a glimpse of what we are seeing so far. For the last couple of days I've been using a part of the dataset and queried 8,000 hosts on their CPU, memory, and ESXi build configuration to get insight into the popularity of particular host configurations.

CPU socket configuration
I was curious about the distribution of CPU socket configurations. After analyzing the dataset, it is clear that dual-socket CPU configurations are the most popular setup. Although single-socket configurations are more common than quad-socket in the dataset, quad-socket systems are more geared towards running real-world workloads, while single-socket configurations are typically test/dev/lab servers. Therefore the focus will primarily be on dual-socket systems and partially on quad-socket systems. The outlier of this dataset is the 8-socket servers. Interestingly enough, some of these are chock-full of options.
Some of them were equipped with 15-core CPUs. 120 CPU cores per host, talk about CPU power!

CPU core distribution
What about CPU core popularity? The most popular configuration is 16 cores per ESXi host, but without the context of CPU sockets one can only guess which CPU configuration is the most popular.

Core distribution of dual-socket ESXi hosts
When zooming in on the dataset of dual-socket ESXi hosts, it becomes clear that 8-core CPUs are the most popular. I compared it with an earlier dataset, and quad- and six-core systems are slowly declining in popularity. Six-core CPUs were introduced in 2010; presumably most will be up for a refresh in 2016. I intend to track the CPU configurations to provide trend analysis on popular CPU configurations in 2016.

Core count of quad-socket CPU systems
What about quad-socket CPU systems? Which CPU configuration is the most popular? It turns out that CPUs containing 10 cores are the sweet spot when it comes to configuring a quad-socket CPU system.

Memory configuration
Getting insight into the memory configuration of the servers provides us a clear picture of the compute power of these systems. What is the most popular memory configuration of a dual-socket server? As it turns out, 256 and 384 GB are the most popular memory configurations. Today's servers are getting beefy! Zooming in on the dataset and querying the memory configuration of dual-socket 8-core servers, the memory configuration distribution is as follows: What about the memory configuration of quad-socket servers? 512 GB is the most popular memory configuration for quad-socket ESXi hosts. Assuming the servers are configured properly, this configuration provides the same amount of memory to each NUMA node of the system. The most popular NUMA node is configured with 128 GB in both dual- and quad-socket systems.

ESXi version distribution
I was also curious about the distribution of ESXi versions amongst the dual- and quad-socket systems.
It turns out that 5.1.0 is the most popular ESXi version for dual-socket systems, while most quad-socket machines have ESXi version 5.5 installed.

More To Come
Satyam and I hope to publish more results from our dataset in the coming months. The dataset is expanding rapidly, increasing our insight into the datacenters around the globe. And we hope to cover other dimensions like applications and the virtualization layer itself. Please feel free to send me interesting questions you might have for the planet's datacenters and we'll see what we can do. Follow me on twitter @frankdenneman