
VCDX Defence: Are you planning to use a fictitious design?

February 10, 2014 by frankdenneman

This week the following tweet caught my eye:

“@wonder_nerd Fictitious designs in a #VCDX defense have a failure rate of 90%. #VMwarePEX” < interesting statistic

— Mark Snook (@VirtualSnook) February 8, 2014


Apparently Marc Brunstad (VCDX program manager) stated this during the PEX VCDX workshop. But what does this statistic mean, and to what extent should you take it into account when submitting your own design?
During my days as a panel member, I saw only a handful of fictitious designs, and although they were technically sound, the reasoning and defense were usually not that strong. Be aware that the VCDX program wasn't created to find the best design ever. It determines whether the candidate has aligned the technical functionality with the customer's requirements, the constraints imposed by the environment, and the assumptions the team made about, for example, future workloads or organizational growth.
But does that mean you shouldn't use any fictitious element in your design? Are fictitious elements inherently bad? I don't think so. Speaking from my own experience, I made some adjustments to the design I submitted.
My submitted design was largely based on an environment I had worked on for a couple of years. At that time the customer used rack-based systems, while my design contained a blade architecture. I changed this because it allowed me to demonstrate my knowledge of the HA stack featured in vSphere 4.1. Some might argue that I deliberately made my design more complex, but I was comfortable enough to defend my choices and explain the High Availability primary and secondary node interaction and how to mitigate risk.
Moreover, it allowed me to demonstrate the pros and cons of such a design on various levels, such as the impact it had on operational processes, the influence on scalability, and the alignment of availability policies with org-defined failure domains. Did I have these discussions in real life? Yes, with many other customers, just not with the specific customer this design was based on.
And that's why completely fictitious designs fail and why most of the reasoning is incomplete: the candidate focuses only on the alignment of technical specs and workload, not on the "softer" side of things.
Arguing that a design element was just the wish of a customer doesn't cut it. Sure, we have all met customers who were set on having a particular setting configured the way they saw fit, but it's your responsibility to explain to the panel which steps you took to inform the customer about the risk and potential impact of that setting. Explain which setting you would have used and why. Demonstrate your knowledge of feasible alternatives.
My recommendation to future candidates: when incorporating a specific fictitious design element in your design, make sure you have had a conversation with a customer about that element at least once. You can easily align it with the main design, and it helps you recollect the specifics during your defense.

Filed Under: VCDX

Installing Exchange Jetstress without full installation media.

February 5, 2014 by frankdenneman

I believe in testing environments with the applications that will be used in the infrastructure itself. Pure synthetic workloads, such as IOmeter, are useful to push hardware to its theoretical limit, but that's about it. Using a real-life workload, common to your infrastructure, will give you a better understanding of the performance and behavior of the environment you are testing. However, it can be cumbersome to set up the full application stack to simulate that workload, and it might be difficult to simulate future workloads.
Simulators made by the application vendor, such as the SQLIO Disk Subsystem Benchmark Tool or Exchange Server Jetstress, provide an easy way to test system behaviour and simulate workloads that might be present in the future.
One of my favourite workload simulators is MS Exchange Server Jetstress; however, it's not a turnkey solution. After installing Exchange Jetstress you are required to install the ESE binary files from an Exchange server, and it can happen that you don't have the MS Exchange installation media available or a live MS Exchange system installed.
01-Missing files - jetstress 2010
Microsoft recommends downloading the trial version of Exchange, installing the software and then copying the files from its directory. Fortunately, you can save a lot of time by skipping these steps and extracting the ESE files straight from an Exchange service pack. As an added bonus, you immediately know you have the latest versions of the files.
I want to use Jetstress 2010, and therefore I downloaded Microsoft Exchange Server Jetstress 2010 (64 bit) and Microsoft Exchange Server 2010 Service Pack 3 (SP3).
To extract the files directly from the .exe file, I use the 7-Zip file archiver.
The ESE files are located in the following directories inside the service pack:

File          Path
ese.dll       \setup\serverroles\common
eseperf.dll   \setup\serverroles\common\perf\amd64
eseperf.hxx   \setup\serverroles\common\perf\amd64
eseperf.ini   \setup\serverroles\common\perf\amd64
eseperf.xml   \setup\serverroles\common\perf\amd64



Copy the ESE files into the Exchange Jetstress installation folder. By default, this folder is “C:\Program Files\Exchange Jetstress”.
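If you prefer to script the extraction and copy, the steps above can be wrapped in a few lines. The sketch below assumes 7-Zip is available on the PATH; the SP3 package name, the temporary extraction folder and the Jetstress folder are placeholder paths, so adjust them to your own environment.

import shutil
import subprocess
from pathlib import Path

# Assumed locations for this sketch; adjust to your own download and
# installation folders.
SP3_PACKAGE = r"C:\Downloads\Exchange2010-SP3-x64.exe"
EXTRACT_DIR = r"C:\Temp\ExchangeSP3"
JETSTRESS_DIR = r"C:\Program Files\Exchange Jetstress"

ESE_FILES = [
    r"setup\serverroles\common\ese.dll",
    r"setup\serverroles\common\perf\amd64\eseperf.dll",
    r"setup\serverroles\common\perf\amd64\eseperf.hxx",
    r"setup\serverroles\common\perf\amd64\eseperf.ini",
    r"setup\serverroles\common\perf\amd64\eseperf.xml",
]

# Extract only the ESE files from the self-extracting SP3 package with 7-Zip.
# "x" preserves the folder structure, "-o" sets the output directory.
subprocess.run(["7z", "x", SP3_PACKAGE, f"-o{EXTRACT_DIR}", *ESE_FILES], check=True)

# Copy the extracted files into the Jetstress installation folder.
for rel_path in ESE_FILES:
    shutil.copy2(Path(EXTRACT_DIR) / rel_path, JETSTRESS_DIR)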
Be aware that you need to run Jetstress as an administrator. Although you might log in to your system using your local or domain admin account, Jetstress will be kind enough to throw the following error:

The MSExchange Database or MSExchange Database ==> Instances performance counter category isn’t registered

Just right-click the Jetstress shortcut, select “Run as administrator”, and you are ready for action.
Happy testing!

Filed Under: VMware

vSphere 5.5 vCenter server inventory 0

January 16, 2014 by frankdenneman

After logging into my brand spanking new vCenter 5.5 server, I was treated to a vCenter Server inventory count of 0. Interesting, to say the least, as I installed vCenter on a new Windows 2008 R2 machine connected to a fresh MS Active Directory domain. I installed vCenter with a user account that is a domain admin, a local admin, and has all the appropriate local rights (member of the Administrators group, Act as part of the operating system, and Log on as a service). The install process went like a breeze, no error messages whatsoever, and yet the vCenter Server object was mysteriously missing after I logged in. A mindbender! Being able to log into the vCenter server and finding no trace of this object whatsoever felt like someone answering the door and saying he’s not home.
I believed I had done my due diligence: I read the topic “Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server” and followed every step. However, it appeared I did not RTFM enough.
administrator@vsphere.local only
Apparently vSphere will only attach the permissions and assign the administrator role to the default account administrator@vsphere.local, and you have to log on with this account after the installation is complete. See “How vCenter Single Sign-On Affects Log In Behavior” for the following quote:

After installation on a Windows system, the user administrator@vsphere.local has administrator privileges to both the vCenter Single Sign-On server and to the vCenter Server system.

It threw me off balance that I could log in with the account I used to install vCenter; this made me assume the account had automatically received the appropriate rights to manage the vCenter server. To gain access to the vCenter inventory you must manually assign the administrator role to the AD group or user account of your liking. As an improvement over 5.1, vCenter 5.5 adds the Active Directory domain as an identity source, but it will not assign any administrator rights, ignoring the user account used to install the product. Follow these steps to use your AD accounts to manage vCenter (a scripted alternative is sketched after the steps).
1. Verify AD domain is listed as an Identity Source
Log in with administrator@vsphere.local and select Configuration in the home menu tree. Only when you are logged in with an SSO administrator account will vCenter show the Single Sign-On menu option. Select Single Sign-On | Configuration and verify that the AD domain is listed.
1-SSO configuration identify sources
2. Add Permissions to top object vCenter
Go back to Home, select the menu option vCenter, then vCenter Servers, and then the vCenter Server object. Select the menu option Manage, Permissions.
2-vCenter permissions
3. Add User or Group to vCenter
Click on the green + icon to open the add permission screen. Click on the Add button located at the bottom.
4. Select the AD domain
Select the AD domain and then the user or group. In my example I selected the AD group “vSphere-admins”. I’m using groups to keep the vCenter configuration as low-touch as possible. When I need to grant additional users administrator rights, I can simply do this in the AD Users and Computers tool. Traditionally, auditing is done at a higher level in AD than in vCenter.
3-Select AD domain and Group
5. Assign Administrator Role
In order to manage the vCenter server, all privileges need to be assigned to that user or group. By selecting the administrator role, all privileges are assigned and propagated to all the child objects in the inventory.
4-Assign Administrator Role to AD group
6. Log in with your AD account
Log out the user administrator@vsphere.local and log in with your AD account. Click on vCenter to view the vCenter inventory list. vCenter Servers should now list the new vCenter Server.
5-vCenter Server Inventory List
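For those who prefer to script this instead of clicking through the Web Client, the same permission assignment can be done through the vSphere API. Below is a minimal sketch using Python and pyVmomi; the vCenter hostname, the credentials and the “LAB\vSphere-admins” group are placeholders I made up for this example, and certificate validation is disabled only because this is a lab.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with your own vCenter and credentials.
context = ssl._create_unverified_context()   # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=context)

auth_mgr = si.content.authorizationManager

# Look up the built-in Administrator role (its internal name is "Admin").
admin_role_id = next(r.roleId for r in auth_mgr.roleList if r.name == "Admin")

# Grant the AD group the Administrator role on the root folder and
# propagate it to all child objects.
permission = vim.AuthorizationManager.Permission(
    principal="LAB\\vSphere-admins",   # assumed DOMAIN\group
    group=True,
    roleId=admin_role_id,
    propagate=True,
)
auth_mgr.SetEntityPermissions(entity=si.content.rootFolder, permission=[permission])

Disconnect(si)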

Filed Under: VMware

VCDX defend clinic: Choosing between Multi-NIC vMotion and LBT

January 7, 2014 by frankdenneman

A new round of VCDX defenses will kick off soon, and I want to wish everyone participating in the panel sessions good luck. Usually when VCDX panels are near, I receive questions on how to prepare for a panel, and one recommendation I usually provide is:

“Know why you used a specific configuration of a feature and especially know why you haven’t used the available alternatives”.

Let’s have some fun with this and go through a “defend clinic”. The point of this clinic is to provide you with an exercise model that you can use for any configuration, not only a vMotion configuration. It helps you understand the relationship between the pieces of information you provide throughout your documentation set and helps you explain how you arrived at every decision in the design.
To give you some background: when a panel member is provided with the participant’s documentation set, he enters a game of connecting the dots. This set of documents is his only view into your world while you were creating the design and dealing with your customer. He needs to take your design and compare it to the requirements of the customer, the uncertainties you dealt with in the form of assumptions, and the constraints that were given. Reviewing the design on technical accuracy is only a small portion of the process. That’s basically just checking whether you used your tools and materials correctly; the remaining part is understanding whether you built the house to the specifications of the customer while dealing with regional laws and the available space and layout of the land. Building a 90,000-square-foot single-floor villa might provide you the largest amount of easily accessible space, but if you want to build that thing in downtown Manhattan you’re gonna have a bad time. 😉
Structure of the article
This exercise lists the design goal and its influencers: requirements, constraints and assumptions. The normally printed text is the architect’s (technical) argument, while the paragraphs displayed in italics can be seen as questions or thoughts of a reviewer/panel member.
Is this a blueprint on how to beat the panel? No! It’s just an exhibition of how to connect and correlate certain statements made in various documents. Now let’s have some fun exploring and connecting the dots in this exercise.
Design goal and influencers
Your design needs to contain a vMotion network, as the customer wants to leverage DRS load balancing and maintenance mode and overall enjoy the fantastic ability of VM mobility. How will you design your vMotion network?
In your application form you have stated that the customer wants to see a design that reduces complexity, increases scalability, and provides the best performance possible. The financial budget and the number of available IP addresses are constraints, and the level of expertise of the virtualization management team is an assumption.
Listing the technical requirements
Since you are planning to use vSphere 5.x, you have the choice to create a traditional single vMotion-enabled VMKnic, a Multi-NIC vMotion setup, or a vMotion configuration that uses the “Route based on physical NIC load” load balancing algorithm (commonly known as LBT) to distribute vMotion traffic among multiple active NICs. As the customer prefers not to use link aggregation, IP hash-based/EtherChannel configurations are not an option.
First, let’s review the newer vMotion configurations and how they differ from the traditional vMotion configuration, where you have one single VMKnic with a single IP address, connected to a single portgroup that is configured with an active and a standby NIC.
Multi-NIC vMotion
• Multiple VMKnics required
• Multiple IP-addresses required
• Consistent configuration of NIC failover order required
• Multiple physical NICs required
Route based on physical NIC load
• Distributed vSwitch required
• Multiple physical NICs required
It goes without saying that you want to provide the best performance possible, which leads you to consider using multiple NICs to increase bandwidth. But which one will be better? A simple performance test will determine that.
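Such a test can be as simple as timing the same vMotion under each configuration. A rough sketch with Python and pyVmomi, assuming you already have a connected session and managed objects for the VM and target host (the function and its arguments are placeholders for this example):

import time
from pyVim.task import WaitForTask
from pyVmomi import vim

def time_vmotion(vm, target_host):
    """Return the duration in seconds of a vMotion of vm to target_host.

    Run this several times per configuration (Multi-NIC vMotion vs. LBT)
    and compare the averages.
    """
    start = time.time()
    task = vm.MigrateVM_Task(host=target_host,
                             priority=vim.VirtualMachine.MovePriority.highPriority)
    WaitForTask(task)
    return time.time() - start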
VCDX application form: Requirements
In your application document you stated that one of the customer requirements was “reducing complexity”. Which of the two configurations do you choose now, and what are your arguments? How do you balance or prioritize performance against complexity reduction?
If Multi-NIC vMotion beats the LBT configuration in performance, leading to faster maintenance mode operations, better DRS load balancing operations and an overall reduction in the lead time of a manual vMotion process, would you still choose the simpler configuration over the complex one?
Simplicity is LBT’s forte: just enable vMotion on a VMKnic, add multiple uplinks, set them to active and you’re good to go. Multi-NIC vMotion consists of more intricate steps to get a proper configuration up and running. Multiple vMotion-enabled VMKnics are necessary, each with their own IP configuration. Secondly, vMotion requires deterministic path control, meaning that it wants to know which path it selects to send traffic across.
As the vMotion load balancing process sits higher up in the stack, NIC failover orders are transparent to vMotion. It selects a VMKnic and assumes it represents a different physical path than the other available VMKnics. That means it’s up to the administrator to provide these unique and deterministic paths.
Are they capable of doing this? You mentioned the level of expertise of the admin team as an assumption; how do you guarantee that they can execute this design, properly manage it over a long period, and expand it without the use of external resources?
Automation to the rescue
Complexity of technology by itself should not pose a problem; it’s how you (are required to) interact with it that can lead to challenges. As mentioned before, Multi-NIC vMotion requires multiple IP addresses to function. On a side note, this could put pressure on the IP ranges, as all vMotion-enabled VMKnics inside the cluster need to be part of the same network. Unfortunately, routed vMotion is not supported yet. Every vMotion VMKnic needs to be configured properly. Pair this with the availability requirements, and the active and standby NIC configuration of each VMKnic can cause headaches if you want to have a consistent and identical network configuration across the cluster. PowerCLI and Host Profiles can help tremendously in this area, as sketched below.
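Purely as an illustration of that kind of repeatable configuration, the same task can also be scripted against the vSphere API. A minimal sketch in Python with pyVmomi, assuming a connected host object and existing port groups whose active/standby failover order has already been set per host (names and addresses are placeholders):

from pyVmomi import vim

def add_vmotion_vmknic(host, portgroup_name, ip_address, netmask):
    """Create a VMkernel adapter on an existing port group and tag it for vMotion."""
    net_sys = host.configManager.networkSystem

    # Static IP configuration for the new VMkernel interface.
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip_address, subnetMask=netmask)
    )

    # AddVirtualNic returns the device name of the new interface, e.g. "vmk1".
    device = net_sys.AddVirtualNic(portgroup=portgroup_name, nic=nic_spec)

    # Tag the new VMkernel adapter for vMotion traffic.
    host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", device)
    return device

# Example: two vMotion VMKnics per host, each on its own port group with a
# deterministic failover order (assumed to be configured already).
# add_vmotion_vmknic(host, "vMotion-01", "10.0.10.11", "255.255.255.0")
# add_vmotion_vmknic(host, "vMotion-02", "10.0.11.11", "255.255.255.0")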
Supporting documents
Now, have you included these scripts in your documentation? Have you covered the installation steps on how to configure vMotion on a distributed switch? Make sure these elements are included in your supporting documents!
What about the constraints and limitations?
Licensing
Unfortunately, LBT is only available on distributed vSwitches, resulting in a top-tier licensing requirement if LBT is selected. The LBT configuration might be preferred over the Multi-NIC vMotion configuration because it introduces the smallest increase in complexity over the traditional configuration.
How does this intersect with the listed budget constraint if the customer is not able – or willing – to invest in enterprise licenses?
IPv4 pressure
One of the listed constraints in the application form is the limited number of IP addresses in the range destined for the virtual infrastructure. This could impact your decision on which configuration to select. Would you “sacrifice” IP addresses to get better vMotion performance and all the related improvements to the dependent features, or are scalability and future expansion of your cluster more important? Remember that scalability is also listed in the application form as a requirement.
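To make the trade-off concrete, a quick back-of-the-envelope calculation helps (the host count and subnet size below are made-up numbers, not taken from any design):

# Multi-NIC vMotion consumes one IP address per vMotion VMKnic per host,
# and all of them must sit in the same (non-routed) vMotion network.
hosts = 32
vmknics_per_host = 2                      # e.g. two uplinks dedicated to vMotion
addresses_needed = hosts * vmknics_per_host

usable_in_slash_26 = 62                   # a /26 subnet offers 62 usable addresses
print(addresses_needed, "addresses needed vs", usable_in_slash_26, "usable in a /26")
# 64 needed vs 62 usable: the cluster no longer fits, so either grow the subnet
# or accept a scalability limit.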
Try this at home!
These are just examples of questions that can be asked during a defense. Try to find the answers when preparing for your VCDX panel. When finalizing the document set, try to do this exercise. Even better, find a group of your peers and review each other’s designs against the application form and the supporting set of documents. At the Nordic VMUG, Duncan and I spoke with a group of people who are setting up a VCDX study group; I think this is a great way not only to prepare for a VCDX panel but also to learn and improve a skill set you can use in your daily profession.

Filed Under: VCDX, VMware

My lab and the birth of the portable Ikea lack 19” datacenter rack

December 16, 2013 by frankdenneman

Currently, topics about labs are hot, and when meeting people at the VMUGs or other tech conferences, I get asked a lot about my lab configuration. I’m a big fan of labs, and I think everybody who works in IT needs a lab, whether it’s at home or in a centralized location.

At PernixData, we have two major labs. One on the east coast and one on the west coast of the U.S. Both these labs are shared, so you cannot do everything you like. However, sometimes you want to break stuff. You want to pull cables and disks and kill an entire server or array. To see what happens. For these reasons having a lab that is 4000 miles away doesn’t work. Enough reasons to build a small lab at home.

Nested or physical hardware?
To nest or not to nest, that’s not even the question. Nesting is amazing, and VMware spends a lot of energy and time on nested environments (think HOL). Recently the VMware Tools for Nested ESXi fling was released, and I assume more nested ESXi flings will follow after seeing the attention it received from the community.

But to run nested ESXi, you need physical hardware. Thanks to a generous donation, I received six Dell R610s, which covered my compute-level requirements. But sometimes you only want to test the software, and in those cases you do not need to fire up an incredibly loud semi-datacenter rig. For those situations, I created an ESXi host that is near silent when running at full speed. This ESXi server also hosts a nested ESXi environment and is just a white box with a simple ASUS mobo, 24GB of memory, and an Intel 1Gb Ethernet port. Once this machine is due for renewal, a white box following the Baby Dragon design will replace it.

To test software at the enterprise level, you require multiple levels of bandwidth, sometimes the bare minimum and sometimes copious amounts of it. The R610 sports 4 x 1GbE connections, allowing me to test scenarios that can happen in a bandwidth-constrained environment. Usually, compelling cases happen when you have a lot of restrictions to deal with, and these 1GbE NICs are perfect for this. 10GbE connections are on my wish list, but to test adequately you still need to invest more than 1000 bucks in a nice setup.

A little bit over the top for my home lab, but the community came to the rescue and provided me with a solution: the InfiniBand hack. A special thanks goes out to Raphael Schitz and Erik Bussink for providing me with the software and the information to run my lab at 10Gbps and to deliver incredibly low latencies to my virtual machines. With the InfiniBand setup, I can test scenarios where bandwidth is not a restriction and investigate specific setups and configurations. For more info, listen to the vBrownbag tech talk where Erik Bussink dives into the topic “InfiniBand in the Lab“

The storage layer is provided by some virtual storage appliances, each backed by a collection of different SSD disks and WD Black Caviar 750GB disks. Multiple solutions allow me to test various scenarios such as all-flash arrays, hybrid, and all magnetic disk arrays. If I need to understand the specific dynamics of an array, I log in to one of the two US-based labs.

Home office
My home office is designed to be an office and not a data center. So where do you place 19″ rack servers without ruining the aesthetics of your minimalistically designed home office? ;) Well, you create a 19″ rack on wheels so you can roll it out of sight and place it wherever you want. Introducing the portable Ikea Lack 19″ datacenter rack.
ikea portable rack-1
Regular readers of my blog or Twitter followers know I’m a big fan of hacking IKEA furniture. I created a whiteboard desk that got the attention of multiple sites, and ikeahackers.net provided me with a lot of ideas on how to hack the famous Lack side table.

I bought two Lack tables, a couple of L-shaped brackets, four wheels, and some nuts and bolts. The first Lack table provides the base platform; only the tabletop is used. The legs are set aside and act as a backup in case I make a mistake during drilling.

I didn’t test the center of the tabletop, but the corners are solid and can be used to install the wheels. I used heavy-duty ball-bearing wheels with an offset swivel caster design that permits ease of directional movement. Simple 5mm nuts and bolts keep the L-shaped brackets in place, but beware: the table legs are not made of solid wood, they are hollow! Only the first few centimeters at the top of the leg are solid, to hold the screw that connects the tabletop and the leg. To avoid having the server pull the screw through the leg due to its weight, I used washers to keep the screws in place.

What’s next?
From a hardware perspective, 10GbE is still high on my wish list. When looking at the software layer, I want to create a more automated way of deploying and testing the PernixData FVP software. One of the things I’m looking into is incorporating Auto Deploy in the lab. But that’s another blog post.

Filed Under: Uncategorized

