For a while I’ve been using three Dell R610 servers in my home lab. The machines’ specs are quite decent: each server is equipped with two Intel Xeon 5530 CPUs, 48GB of memory and four 1GbE NICs. With a total of 24 cores (48 HT threads) and 144GB of memory, the cluster has more than enough compute power.
However, from a bandwidth perspective the machines are quite limited: 3Gb/s SATA and 1GbE networking are not exactly pushing the envelope. These limitations prevent me from properly understanding what a customer can expect when running FVP software. In addition, I don’t have proper cooling to keep the machines cool, and their power consumption is troubling.
Time for something new, but where to begin?
Looking at the current lineup of CPUs doesn’t make it easier. Within a single vendor’s product line, multiple CPU socket types exist, and multiple processor series offer comparable performance levels. I think I spent most of my time figuring out which processor to select. Some selection criteria were quite straightforward: I want a single-CPU system with at least 6 cores and Hyper-Threading technology, and the CPU must have a high clock speed, preferably above 3GHz.
Intel ARK (Automated Relational Knowledge base) provided the answer. Two candidates stood out: the Intel Core i7-4930 and the Intel Xeon E5-1650 v2. Both are 6-core, both are HT-enabled, and both support advanced technologies such as VT-x, VT-d and EPT. http://ark.intel.com/compare/77780,75780
The main difference between the two CPUs that matters most to me is the larger amount of memory supported by the Intel Xeon E5. However, the i7-4930 supports 64GB, which should be enough for a long time. In the end, the motherboard provided the answer.
Contrary to the variety of choices at the CPU level, there is currently one motherboard that stands out for me. It looks almost too good to be true, and I’m talking about the SuperMicro X9SRH-7TF. This board has it all, at an unbelievable price. The most remarkable features are the on-board Intel X540 dual-port 10GbE NIC and the LSI 2308 SAS controller. Eight DIMM slots, the Intel C602J chipset and a dedicated IPMI LAN port complete the story. And the best part is that its price is similar to that of a PCIe version of the Intel X540 dual-port 10GbE NIC alone. The motherboard only supports Intel E5 Xeons, therefore the CPU selection is narrowed down to one choice: the Intel Xeon E5-1650 v2.
The SuperMicro X9SRH-7TF uses an Intel LGA2011 socket with Narrow ILM (Independent Loading Mechanism) mounting, which requires a cooler designed to fit this narrow socket. The goal is to build silent machines, and the listed maximum acoustical noise of 17.6 dB(A) for the Noctua NH-U9DX i4 “sounds” promising.
The server will be equipped with 64GB of memory: four 16GB DDR3-1600 modules, leaving four DIMM slots free for a future memory upgrade. The full product name: Kingston ValueRAM KVR16R11D4/16HA.
Although two 10GbE NICs provide more than enough bandwidth, I need to test scenarios where 1GbE is used. Unfortunately, vSphere 5.5 does not support the 82571 chipset used by the Intel PRO/1000 PT Dual Port Server Adapters currently installed in my Dell servers. I need to find an alternative 1GbE NIC; recommendations are welcome.
I prefer a power supply that is low-noise and fully modular, therefore I selected the Corsair RM550. Besides a noise-reducing fan, the PSU has a Zero RPM Fan Mode, which keeps the fan stationary until the PSU is under heavy load, reducing the overall noise level of my lab when I’m not stressing the environment.
The case of choice is the Fractal Design Define R4: a simple but elegant design with enough space inside and some sound-dampening features. Instead of the standard black, I decided to order the titanium grey version.
Thanks to the PernixDrive program I have access to many different SSD devices. Currently my lab contains Intel DC S3700 100GB and Kingston SSDNow E100 200GB drives. Fusion-io, currently not (yet) in the PernixDrive program, was kind enough to lend me a 3.2TB ioDrive; unfortunately, I will have to return it someday.
| Component | Product | Price |
|---|---|---|
| CPU | Intel Xeon E5-1650 v2 | 540 EUR |
| CPU Cooler | Noctua NH-U9DX i4 | 67 EUR |
| Motherboard | SuperMicro X9SRH-7TF | 482 EUR |
| Memory | Kingston ValueRAM KVR16R11D4/16HA | 569 EUR |
| SSD | Intel DC S3700 100GB | 203 EUR |
| SSD | Kingston SSDNow E100 200GB | 579 EUR |
| Power Supply | Corsair RM550 | 90 EUR |
| Case | Fractal Design Define R4 | 95 EUR |
| **Price per server (without disks)** | | **1843 EUR** |
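As a sanity check on the bill of materials, the per-server total (disks excluded, since the SSDs come from the PernixDrive program) can be verified with a quick script. The prices are taken straight from the table above:

```python
# Component prices in EUR, copied from the bill of materials above.
components = {
    "CPU (Intel Xeon E5-1650 v2)": 540,
    "CPU cooler (Noctua NH-U9DX i4)": 67,
    "Motherboard (SuperMicro X9SRH-7TF)": 482,
    "Memory (4x16GB Kingston ValueRAM)": 569,
    "Power supply (Corsair RM550)": 90,
    "Case (Fractal Design Define R4)": 95,
}
disks = {
    "Intel DC S3700 100GB": 203,
    "Kingston SSDNow E100 200GB": 579,
}

per_server = sum(components.values())  # excludes the SSDs
print(f"Price per server (without disks): {per_server} EUR")  # 1843 EUR
print(f"Price per server (with disks): {per_server + sum(disks.values())} EUR")  # 2625 EUR
```

So the 1843 EUR figure in the table checks out; with both SSDs a fully populated server would come to 2625 EUR.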
In total, two of these machines are being built as the start of my new lab; more will be added later this year. I would like to thank Erik Bussink for his recommendations and feedback on the component selection of my new vSphere 5.5 home lab. I’m sure he will post an article about his new lab soon.
12 Replies to “vSphere 5.5 Home lab”
Hi Frank, I would recommend getting another Intel X540-T2 dual-port 10G NIC. They are pretty affordable, and they support running at 1Gb; you can run them at that speed for as long as you like for your testing, then move to 10G when you need to test scenarios where 4 x 10G ports would be useful. This is what I would do if I were upgrading my home lab to something like that. This way you also know the drivers are solid.
Hi Frank. Is there any way for IT professionals to test full ESXi functionality in labs at home without requesting 30-day trials over and over, similar to Microsoft’s TechNet licensing? Maybe they could put some limit on VMs, memory or CPU so that it could never effectively be used in a company’s server environment, but enough to try and gauge things at home? Or, as someone tightly integrated with VMware, do you have access to licensing the regular pros out there don’t have (which would be understandable)?
I’d recommend the Intel I340-T4: four 1GbE ports, 82580 chipset. It costs about half as much as an Intel X540-T2, although I don’t have the impression that cost really matters with your nice new home lab 🙂
I was on a budget and picked up some used HP NC365T cards from eBay, which are just OEM versions of the I340, for less than €100 each.
Just posting here to get notified of the answer to DITGUY’s question. Carry on, nothing to see.
Hey Frank. What does a setup like you described cost? (US$)
I exclusively use Noctua CPU coolers and case fans. You will not be disappointed with that choice, both for cooling performance and for noise levels.
Ahh, finally some nice posts about homelab design. I’m starting to like this motherboard, which does “have it all”; Erik Bussink got two for his lab. It’s great that it includes an I/O controller (the LSI 2308) which is on the VMware HCL for VSAN.
It’s a shame you haven’t mentioned the cost; I think most homelab builders would be interested, as the WAF (wife acceptance factor) concerning the budget plays a crucial role in design scenarios :-).
For the moment, I finally decided to upgrade (not replace yet) only two of my lab whitebox hosts (they’re from 2011), which however max out at 24GB RAM. I ordered three Dell PERC H310 8-port SAS/SATA 6Gb/s controller cards from eBay for $79 each. I’ll have three hosts for VSAN to start with, with a fourth in a separate cluster for management.
Homelabs are getting more and more expensive, though. Many folks might just try to keep their labs as long as they can (if an upgrade still works). This small upgrade will allow me to spend some more $$ on SSD storage (for VSAN and for testing PernixData), and I also need a bigger switch.
Vladan (& Frank)
I’ve also been looking for a while for a new, more powerful homelab (for home) that scales past the limits I currently have. I had great success last year with the Supermicro X9SRL-F motherboard for my home NAS, so I know I love the Supermicro X9S series. Thanks to the Intel C200/C600 series chipsets, you can break the 32GB barrier you find on most motherboards (the X79 chipset otherwise allows you up to 64GB). As time passes and you see new product solutions coming out (Horizon View, vCAC, DeepSecurity, ProtectV, Veeam VBR), the memory requirements are just exploding; you need more and more memory. I’m done with homelabs where you need to upgrade just because you’ve hit the memory ceiling.
With the Supermicro X9SRH-7TF/X9SRL-F you can easily go up to 128GB (8x16GB) for now. It’s really just a $$$ choice. 256GB (8x32GB) is still out of reach, but that might change in two years.
I attempted to install PernixData FVP 1.5 on my Shuttle XH61V at home, but the combination of the motherboard, AHCI and Realtek NIC makes for an unstable ESXi 5.5. Sometimes the PernixData FVP Management Server sees the SSD on my host, then it loses it. I did work with PernixData engineers (and Satyam), but my homelab is just not stable, and having received the PernixPro invitation doesn’t give me the right to use hours and hours of PernixData engineers’ time to solve my homelab issues. This has made the choice for my two X9SRH-7TF boxes much easier.
The choice of the X9SRH-7TF is great because of the integrated management (the F in X9SRH-7TF); it’s a must these days. Having the dual X540 Intel network card will allow me to start using the board with dual 1000Base-T, and when I have the budget for a Netgear XS708E or XS712T it will scale to dual 10GBase-T. In the meantime I can also run a single point-to-point 10GbE link between my two X9SRH-7TF boxes for vMotion and PernixData data synchronization. The third component that comes on the X9SRH-7TF is the integrated LSI 2308 SAS2 HBA. This will allow me to build a great VSAN lab once I go from two to three nodes (once I have some budget set aside again this summer).
For the cases, I have gone, just like Frank, with the Fractal Design Define R4 (Black). I used a Fractal Design Arc Midi R2 for my home NAS last summer, and I really liked the case’s flexibility. I removed the two default Fractal Design Silent R2 12cm cooling fans and replaced them with two Noctua NF-A14 FLX fans, which are even quieter and are mounted with rubber holders so they vibrate even less. It’s all about having a quiet system. The home NAS is in the guest room, and people sleep next to it without noticing it.
For the CPU cooler, I ordered two Noctua NH-U12DX i4 coolers, which support the Narrow ILM socket. It’s a bit bigger than the NH-U9DX i4 that Frank ordered, so we will be able to compare.
For the power supply, I invested in an Enermax last year, so this time I chose an Enermax Revolution X’t 550W power supply that is very efficient, supports ATX v2.4 (it can drop to 0.5W on standby) and uses the same modular connectors as my other power supplies. These smaller ~550W power supplies are most efficient when running at 20% to 50% load. This should also be a very quiet PSU.
For the memory, I’m going to reuse what I purchased last year for my home NAS, so each box will receive 4x16GB Kingston modules for now. I will also reuse some of the large Samsung 840 Pro 512GB SSDs I purchased last year.
Sorry Frank, I didn’t mean to write a book on your blog… I need to take some of what I just wrote and put it on my own blog with pictures, as soon as the equipment starts arriving.
@Erik – I’m guessing your Shuttle issues were probably due to duplicate UUIDs. If so, you should be able to use the AMIDMI utility to update them and make them unique. Feel free to reach out if I can help.
@Frank & @Erik – I also noticed you both chose power supplies with Active PFC. Not sure if you’ve tested with your UPS, but you may want to, since many older UPSes aren’t compatible due to less stringent PWM sine-wave standards.
Thanks for the light-bulb moment about the duplicate UUIDs! I sold my Shuttle XH61V today to some colleagues, so it’s no longer an issue. But thanks a lot.
Concerning a UPS: I did a cost analysis a few years ago. In the area I live in, I can expect a single unexpected power outage every two years. Having a UPS would cost more in $$$, heat and noise than the risk of rebuilding 3-4 VMs in the event of a crash or corruption. This is a homelab.
I didn’t know about the UPS and Active PFC issue; I’m reading up on it now. It’s like all the ripple-stability concerns with PSUs: not something I have focused on. But thanks again for the heads-up.
Frank, I just wanted to thank you for including an actual bill of materials. Too few people do that, something I’ve griped about on blogs/Twitter in the past. Truly appreciated, thank you.