vSphere 5.5 Home lab

For a while I’ve been using three Dell R610 servers in my home lab. The machines’ specs are quite decent: each server is equipped with two Intel Xeon 5530 CPUs, 48GB of memory and four 1GbE NICs. With a total of 24 cores (48 HT threads) and 144GB of memory, the cluster has more than enough compute power.

However, from a bandwidth perspective they are quite limited: 3Gb/s SATA and 1GbE networking are not exactly pushing the envelope. These limitations do not allow me to properly understand what a customer can expect when running FVP software. In addition, I don’t have proper cooling to keep the machines cool, and their power consumption is troubling.

Time for something new, but where to begin?

CPU
Looking at the current lineup of CPUs doesn’t make it easier. Within a single vendor’s product line multiple CPU socket types exist, and multiple processor series offer comparable performance levels. I think I spent most of my time figuring out which processor to select. Some selection criteria were quite straightforward: I want a single-CPU system with at least 6 cores and Hyper-Threading technology. The CPU must have a high clock speed, preferably above 3GHz.

Intel ARK (Automated Relational Knowledgebase) provided me the answer. Two candidates stood out: the Intel Core i7-4930 and the Intel Xeon E5-1650 v2. Both are 6-core, both HT-enabled, and both support advanced technologies such as VT-x, VT-d and EPT. http://ark.intel.com/compare/77780,75780

The difference between the two CPUs that matters most to me is the larger amount of memory the Intel Xeon E5 supports. Then again, the i7-4930 supports 64GB, which should be enough for a long time. In the end, the motherboard provided the answer.

Motherboard
Contrary to the variety of choices at the CPU level, there is currently one motherboard that stands out for me. It looks almost too good to be true, and I’m talking about the SuperMicro X9SRH-7TF. This thing has it all, and for an unbelievable price. The most remarkable features are the on-board Intel X540 dual-port 10GbE NIC and the LSI 2308 SAS controller. Eight DIMM slots, the Intel C602J chipset and a dedicated IPMI LAN port complete the story. And the best part is that its price is similar to that of a PCIe version of the Intel X540 dual-port 10GbE NIC alone. The motherboard only supports Intel E5 Xeons, therefore the CPU selection is narrowed down to one choice: the Intel Xeon E5-1650 v2.

CPU Cooler
The SuperMicro X9SRH-7TF uses an Intel LGA2011 socket with Narrow ILM (Independent Loading Mechanism) mounting, which requires a cooler designed to fit this narrower mounting. The goal is to create silent machines, and the listed maximum acoustical noise of 17.6 dB(A) of the Noctua NH-U9DX i4 “sounds” promising.

Memory
The server will be equipped with 64GB: four 16GB DDR3-1600 modules, leaving room for a future memory upgrade. The full product name: Kingston ValueRAM KVR16R11D4/16HA.

Network
Although two 10GbE NICs provide more than enough bandwidth, I need to test scenarios where 1GbE is used. Unfortunately vSphere 5.5 does not support the 82571 chipset used by the Intel PRO/1000 PT Dual Port Server Adapters currently inserted in my Dell servers, so I need to find an alternative 1GbE NIC; recommendations are welcome.

Power supply
I prefer a power supply that is low noise and fully modular, so I selected the Corsair RM550. Besides a noise-reducing fan, the PSU has a Zero RPM Fan Mode, which keeps the fan from spinning until the unit is under heavy load, reducing the overall noise level of my lab when I’m not stressing the environment.

Case
The case of choice is the Fractal Design Define R4: a simple but elegant design, enough space inside and some sound-reducing features. Instead of the standard black, I decided to order it in titanium grey.

SSD
Due to the PernixDrive program I have access to many different SSD devices. Currently my lab contains Intel DC S3700 100GB and Kingston SSDNow E100 200GB drives. Fusion-io, currently not (yet) in the PernixDrive program, was kind enough to lend me a 3.2TB ioDrive; unfortunately I need to return it to Fusion-io someday.

Overview

Component | Type | Cost
CPU | Intel Xeon E5-1650 v2 | 540 EUR
CPU cooler | Noctua NH-U9DX i4 | 67 EUR
Motherboard | SuperMicro X9SRH-7TF | 482 EUR
Memory | Kingston ValueRAM KVR16R11D4/16HA | 569 EUR
SSD | Intel DC S3700 100GB | 203 EUR
SSD | Kingston SSDNow E100 200GB | 579 EUR
Power supply | Corsair RM550 | 90 EUR
Case | Fractal Design Define R4 | 95 EUR
Price per server (without disks) | 1843 EUR
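As a quick sanity check, the component prices in the table do add up to the listed per-server total once the SSDs are excluded; a throwaway snippet to verify:

```python
# Component prices from the overview table, in EUR.
# SSDs excluded, matching the "price per server without disks" line.
components = {
    "CPU (Intel Xeon E5-1650 v2)": 540,
    "CPU cooler (Noctua NH-U9DX i4)": 67,
    "Motherboard (SuperMicro X9SRH-7TF)": 482,
    "Memory (Kingston ValueRAM KVR16R11D4/16HA)": 569,
    "Power supply (Corsair RM550)": 90,
    "Case (Fractal Design Define R4)": 95,
}

total = sum(components.values())
print(f"Price per server (without disks): {total} EUR")  # 1843 EUR
```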

To start, two of these machines are being built for my new lab; later this year more will be added. I would like to thank Erik Bussink for his recommendations and feedback on the component selection of my new vSphere 5.5 home lab. I’m sure he will post an article about his new lab soon.

Installing Exchange Jetstress without full installation media.

I believe in testing environments with the applications that will be used in the infrastructure itself. Pure synthetic workloads, such as Iometer, are useful to push hardware to its theoretical limit, but that’s about it. Using a real-life workload, common to your infrastructure, will give you a better understanding of the performance and behavior of the environment you are testing. However, it can be cumbersome to set up the full application stack to simulate that workload, and it might be difficult to simulate future workloads.

Simulators made by the application vendor, such as SQLIO Disk Subsystem Benchmark Tool or Exchange Server Jetstress, provide an easy way to test system behaviour and simulate workloads that might be present in the future.

One of my favourite workload simulators is MS Exchange Server Jetstress; however, it’s not a turn-key solution. After installing Exchange Jetstress you are required to install the ESE binary files from an Exchange server, and it can happen that you don’t have the MS Exchange installation media available or a live MS Exchange system installed.

01-Missing files - jetstress 2010

Microsoft recommends downloading the trial version of Exchange, installing the software and then copying the files from its directory. Fortunately you can save a lot of time by skipping these steps and extracting the ESE files straight from an Exchange service pack. Added bonus: you immediately know you have the latest versions of the files.

I want to use Jetstress 2010, therefore I downloaded Microsoft Exchange Server Jetstress 2010 (64-bit) and Microsoft Exchange Server 2010 Service Pack 3 (SP3).

To extract the files directly from the .exe file, I use the 7-Zip file archiver.

The ESE files are located in the following directory:

File | Path
ese.dll | \setup\serverroles\common
eseperf.dll | \setup\serverroles\common\perf\amd64
eseperf.hxx | \setup\serverroles\common\perf\amd64
eseperf.ini | \setup\serverroles\common\perf\amd64
eseperf.xml | \setup\serverroles\common\perf\amd64

Copy the ESE files into the Exchange Jetstress installation folder. By default, this folder is “C:\Program Files\Exchange Jetstress”.
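The copy step can also be scripted, which is handy when you rebuild Jetstress hosts regularly. Below is an illustrative Python sketch; the function name and both paths are my own assumptions, it expects the service pack to be extracted already, and it simply mirrors the table above:

```python
import shutil
from pathlib import Path

# Relative locations of the ESE files inside the extracted service pack.
# Forward slashes are used here; they correspond to the
# \setup\serverroles\... paths listed in the table.
ESE_FILES = [
    "setup/serverroles/common/ese.dll",
    "setup/serverroles/common/perf/amd64/eseperf.dll",
    "setup/serverroles/common/perf/amd64/eseperf.hxx",
    "setup/serverroles/common/perf/amd64/eseperf.ini",
    "setup/serverroles/common/perf/amd64/eseperf.xml",
]

def copy_ese_files(extracted_sp: Path, jetstress_dir: Path) -> list:
    """Copy the ESE files from an extracted service pack into the Jetstress folder."""
    copied = []
    for rel in ESE_FILES:
        src = extracted_sp / rel
        dst = jetstress_dir / src.name          # flatten into the Jetstress folder
        shutil.copy2(src, dst)                  # copy2 preserves file metadata
        copied.append(dst.name)
    return copied

# Example call (both paths are assumptions, adjust to your system):
# copy_ese_files(Path(r"C:\Temp\E2010SP3"),
#                Path(r"C:\Program Files\Exchange Jetstress"))
```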

Be aware that you need to run Jetstress as an administrator. Although you might log in to your system using your local or domain admin account, Jetstress will be kind enough to throw the following error:

The MSExchange Database or MSExchange Database ==> Instances performance counter category isn’t registered

Just right-click the Jetstress shortcut, select “Run as administrator” and you are ready for action.

Happy testing!

vSphere 5.5 vCenter server inventory 0

After logging into my brand-spanking-new vCenter 5.5 server I was treated to a vCenter server inventory count of 0. Interesting, to say the least, as I installed vCenter on a new Windows 2008 R2 machine connected to a fresh MS Active Directory domain. I installed vCenter with a user account that is a domain admin, a local admin and has all the appropriate local rights (member of the Administrators group, Act as part of the operating system, and Log on as a service). The install process went like a breeze, no error messages whatsoever, and yet the vCenter Server object was mysteriously missing after I logged in. A mindbender! Being able to log into the vCenter server and finding no trace of this object whatsoever felt like someone answering the door and saying he’s not home.

I believed I had done my due diligence: I read the topic “Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server” and followed every step. However, it appeared I did not RTFM enough.

administrator@vsphere.local only
Apparently vSphere will only attach the permissions and assign the administrator role to the default account administrator@vsphere.local, and you have to log on with this account after the installation is complete. See “How vCenter Single Sign-On Affects Log In Behavior” for the following quote:

After installation on a Windows system, the user administrator@vsphere.local has administrator privileges to both the vCenter Single Sign-On server and to the vCenter Server system.

It threw me off balance by allowing me to log in with the account that I used to install vCenter; this made me assume the account automatically received the appropriate rights to manage the vCenter server. To gain access to the vCenter database you must manually assign the administrator role to the AD group or user account of your liking. As an improvement over 5.1, vCenter 5.5 adds the Active Directory domain as an identity source, but it will not assign any administrator rights, ignoring the user account used for installing the product. Follow these steps to use your AD accounts to manage vCenter.

1. Verify AD domain is listed as an Identity Source
Log in with administrator@vsphere.local and select Configuration in the home menu tree. vCenter will show the Single Sign-On menu option only when you are logged in as an SSO administrator. Select Single Sign-On | Configuration and verify that the AD domain is listed.

1-SSO configuration identity sources

2. Add Permissions to top object vCenter
Go back to Home, select the menu option vCenter, then vCenter Servers, and then the vCenter Server object. Select the menu option Manage, then Permissions.

2-vCenter permissions

3. Add User or Group to vCenter
Click on the green + icon to open the add permission screen. Click on the Add button located at the bottom.

4. Select the AD domain
Select the AD domain and then the user or group. In my example I selected the AD group “vSphere-admins”. I’m using groups to keep the vCenter configuration as low-touch as possible: when I need to grant additional users administrator rights, I can simply do this in my AD Users and Computers tool. Traditionally, auditing is at a higher level in AD than in vCenter.

3-Select AD domain and Group

5. Assign Administrator Role
To manage the vCenter server, all privileges need to be assigned to that user. By selecting the administrator role, all privileges are assigned and propagated to all the child objects in the database.

4-Assign Administrator Role to AD group

6. Log in with your AD account
Log out the user administrator@vsphere.local and log in with your AD account. Click on vCenter to view the vCenter inventory lists; vCenter Servers should now list the new vCenter server.

5-vCenter Server Inventory List

VCDX defend clinic: Choosing between Multi-NIC vMotion and LBT

A new round of VCDX defenses will kick off soon, and I want to wish everyone participating in the panel sessions good luck. Usually when VCDX panels are near, I receive questions on how to prepare for a panel, and one recommendation I usually provide is:

“Know why you used a specific configuration of a feature and especially know why you haven’t used the available alternatives”.

Let’s have some fun with this and go through a “defend clinic”. The point of this clinic is to provide you with an exercise model that you can use for any configuration, not only a vMotion configuration. It helps you understand the relationships between the pieces of information you provide throughout your documentation set, and helps you explain how you arrived at every decision in the design.

To give you some background: when a panel member is provided a participant’s documentation set, he enters a game of connecting the dots. This set of documents is his only view into your world while you were creating the design and dealing with your customer. He needs to take your design and compare it to the requirements of the customer, the uncertainties you dealt with in the form of assumptions, and the constraints that were given. Reviewing the design for technical accuracy is only a small portion of the process; that’s basically checking whether you used your tools and materials correctly. The remaining part is to understand whether you built the house to the specification of the customer while dealing with regional laws and the available space and layout of the land. Building a 90,000-square-foot single-floor villa might provide the most easily accessible space, but if you want to build that thing in downtown Manhattan you’re gonna have a bad time. ;)

Structure of the article
This exercise lists the design goal and its influencers: requirements, constraints and assumptions. The normally printed text is the architect’s (technical) argument, while the paragraphs displayed in italics can be seen as questions or thoughts of a reviewer/panel member.

Is this a blueprint on how to beat the panel? No! It’s just an exhibition of how to connect and correlate certain statements made in various documents. Now let’s have some fun exploring and connecting the dots in this exercise.

Design goal and influencers
Your design needs to contain a vMotion network as the customer wants to leverage DRS load balancing, maintenance mode and overall enjoy the fantastic ability of VM mobility. How will you design your vMotion network?

In your application form you have stated that the customer wants to see a design that reduces complexity, increases scalability and provides the best performance possible. The financial budget and the number of available IP addresses are constraints, and the level of expertise of the virtualization management team is an assumption.

Listing the technical requirements
Since you are planning to use vSphere 5.x, you have the choice to create a traditional single vMotion-enabled VMkernel NIC (VMKnic), a Multi-NIC vMotion setup, or a vMotion configuration that uses the “Route based on physical NIC load” load-balancing algorithm (commonly known as LBT) to distribute vMotion traffic amongst multiple active NICs. As the customer prefers not to use link aggregation, IP hash-based/EtherChannel configurations are not valid.

First, let’s review the newer vMotion configurations and how they differ from the traditional vMotion configuration, where you have one single VMKnic with a single IP address, connected to a single portgroup that is configured with one active and one standby NIC.

Multi-NIC vMotion
• Multiple VMKnics required
• Multiple IP-addresses required
• Consistent configuration of NIC failover order required
• Multiple physical NICs required

Route based on physical NIC load
• Distributed vSwitch required
• Multiple physical NICs required

It goes without saying that you want to provide the best performance possible, which leads you to consider using multiple NICs to increase bandwidth. But which one will be better? A simple performance test will determine that.

VCDX application form: Requirements
In your application document you stated that one of the customer requirements was “Reducing complexity”. Which of the two configurations do you choose now, what are your arguments? How do you balance or prioritize performance over complexity reduction?

If Multi-NIC vMotion beats LBT configuration in performance, leading to faster maintenance mode operations, better DRS load balance operations and overall reduction in lead time of a manual vMotion process, would you still choose the simpler configuration over the complex one?

Simplicity is LBT’s forte: just enable vMotion on a VMKnic, add multiple uplinks, set them to active, and you’re good to go. Multi-NIC vMotion consists of more intricate steps to get a proper configuration up and running. Multiple vMotion-enabled VMKnics are necessary, each with its own IP configuration. Secondly, vMotion requires deterministic path control, meaning that it wants to know which path it selects to send traffic across.

As the vMotion load-balancing process sits higher up in the stack, NIC failover orders are transparent to vMotion. It selects a VMKnic and assumes it represents a different physical path than the other available VMKnics. That means it’s up to the administrator to provide these unique and deterministic paths.

Are they capable of doing this? You mentioned the level of expertise of the admin team as an assumption; how do you guarantee that they can execute this design, properly manage it for a long period and expand it without the use of external resources?

Automation to the rescue
Complexity of technology by itself should not pose a problem; it’s how you (are required to) interact with it that can lead to challenges. As mentioned before, Multi-NIC vMotion requires multiple IP addresses to function. As a side note, this can put pressure on the IP ranges, as all vMotion-enabled VMKnics inside the cluster need to be part of the same network; unfortunately routed vMotion is not supported yet. Every vMotion VMKnic needs to be configured properly. Pair this with availability requirements, and the active and standby NIC configuration of each VMKnic can cause headaches if you want a consistent and identical network configuration across the cluster. PowerCLI and Host Profiles can help tremendously in this area.
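To illustrate the kind of scripted consistency this calls for, here is a small Python sketch of a deterministic addressing convention for Multi-NIC vMotion VMKnics. The subnet, host numbering and two-vmknic layout are assumptions for illustration only; in a real design you would feed the resulting addresses into your PowerCLI scripts or Host Profiles:

```python
import ipaddress

def vmotion_ips(subnet: str, host_id: int, vmknics: int = 2) -> list:
    """Derive deterministic vMotion VMkernel IPs for a host.

    Convention (an assumption, not a VMware requirement): each host
    consumes one address per vmknic, in consecutive blocks, so host N
    always ends up with the same addresses regardless of who runs the
    script.
    """
    net = ipaddress.ip_network(subnet)
    hosts = list(net.hosts())               # usable addresses in the subnet
    base = (host_id - 1) * vmknics          # block offset for this host
    return [str(hosts[base + i]) for i in range(vmknics)]

# Host 1 and host 2 on a dedicated (hypothetical) vMotion subnet:
print(vmotion_ips("192.168.50.0/24", 1))  # ['192.168.50.1', '192.168.50.2']
print(vmotion_ips("192.168.50.0/24", 2))  # ['192.168.50.3', '192.168.50.4']
```

Because the mapping from host number to addresses is pure arithmetic, every administrator who runs the script gets the same configuration, which is exactly the consistency Multi-NIC vMotion demands.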

Supporting documents
Now have you included these scripts in your documentation? Have you covered the installation steps on how to configure vMotion on a distributed switch? Make sure that these elements are included in your supporting documents!

What about the constraints and limitations?

Licensing
Unfortunately LBT is only available on distributed vSwitches, resulting in a top-tier licensing requirement if LBT is selected. The LBT configuration might be preferred over the Multi-NIC vMotion configuration because it introduces the least amount of additional complexity over the traditional configuration.

How does this intersect with the listed budget constraint if the customer is not able – or willing – to invest in enterprise licenses?

IPv4 pressure
One of the listed constraints in the application form is the limited number of IP addresses in the range destined for the virtual infrastructure. This could impact your decision on which configuration to select. Would you “sacrifice” IP addresses to get better vMotion performance and all the related improvements in the dependent features, or are scalability and future expansion of your cluster more important? Remember, scalability is also listed in the application form as a requirement.
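To make the trade-off concrete, a quick back-of-the-envelope calculation shows how Multi-NIC vMotion halves the number of hosts a constrained range can carry (the /26 subnet here is an assumption for illustration):

```python
import ipaddress

def max_hosts(subnet: str, vmknics_per_host: int) -> int:
    """Maximum hosts that fit in the subnet with N vMotion vmknics per host."""
    # Subtract the network and broadcast addresses to get usable IPs.
    usable = ipaddress.ip_network(subnet).num_addresses - 2
    return usable // vmknics_per_host

# A /26 carved out for vMotion offers 62 usable addresses:
print(max_hosts("10.0.0.0/26", 1))  # 62 hosts with a single vmknic
print(max_hosts("10.0.0.0/26", 2))  # 31 hosts with Multi-NIC vMotion (2 vmknics)
```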

Try this at home!
These are just examples of questions that can be asked during a defense. Try to find the answers when preparing for your VCDX panel. When finalizing the document set, try to do this exercise. Even better, find a group of your peers and review each other’s designs alongside the application form and the supporting set of documents. At the Nordic VMUG, Duncan and I spoke with a group of people who are setting up a VCDX study group; I think this is a great way not only to prepare for a VCDX panel but also to learn and improve a skill set you can use in your daily profession.

Send F11 key to nested ESXi on Mac

I only use a Mac at home; most of the time it’s great, sometimes it’s not.
An example of the latter: installing or configuring your remote lab. I have a Windows server installed on a virtual machine that runs vCenter and the vSphere client.

When I’m installing a new nested ESXi server, I connect with a remote desktop session to the Windows machine and use the VMware vSphere client.
During the ESXi install process, you are required to press the F11 key to continue. However, F11 isn’t mapped by the vSphere client automatically, and there isn’t a menu option in the vSphere client to send it to the guest.

Fortunately I found the combination, so I’m writing it down here as I’m bound to forget it.
Press Fn-Cmd-F11 to send the key to the install screen of ESXi.

Happy installing!