The servers in my home lab are dying on a daily basis. After four years of active duty, I think they have the right to retire. So I need something else. But what? I can’t rent lab space as I work with unreleased ESXi code. I’ve been waiting for the Intel Xeon D 21xx Supermicro systems, but I have the feeling that Elon will reach Mars before we see these systems widely available. The system that I have in mind is the following:
- Intel Xeon Silver 4108 – 8 Core at 1.8 GHz (85W TDP)
- Supermicro X11SPM-TF (6 DIMMs, 2 x 10 GbE)
- 4 x Kingston Premier 16GB 2133
- Intel Optane M.2 2280 32 GB
CPU
Intel Xeon Silver 4108 8 Core. I need a healthy number of cores in my system to run some test workloads, primarily to understand host and cluster scheduling. I do not need to run performance tests, thus no need for screaming-fast CPU cores. The TDP is 85W. I know there is a 4109T with a TDP of 70W, but they are very hard to get in the Netherlands.
Motherboard
Supermicro X11SPM-TF. Rock-solid Supermicro quality, 2 x Intel X722 10GbE NICs onboard, and IPMI.
Memory
Kingston Premier 4 x 16 GB 2133 MHz. DDR4 pricing is nearing HP printer ink prices, 2133 MHz is fast enough for my testing, and I don’t need to test 6 channels of RAM at the moment. The motherboard is equipped with 6 DIMM slots, so if memory prices drop, I can expand the system.
Boot Device
Intel Optane M.2 32 GB. ESXi still needs a boot device, and there’s no need to put in a 256 GB SSD.
This is the config I’m considering. What do you think? Any recommendations or alternate views?
Hello, Frank.
Why do you want an Intel configuration, I mean MB & CPU?
Why not the AMD Opteron series?
I’d consider the Dell PowerEdge R7415, a 16-core AMD EPYC-based system.
Hi Deja, good question. I’ve heard positive news about the AMD EPYC systems; however, they have a very high TDP, and I want to keep the operating cost to a minimum.
AMD EPYC systems start at a 120W TDP for 16 cores, which is less than the 190W TDP for 16 cores of the equivalent Xeon.
AMD have done great work on power management and consumption in the Zen core.
Yeah, on a relative scale it beats the Xeon; however, on an absolute scale I can’t have 4 machines consuming 190W each. The heat and power consumption of the rest of the infrastructure would kill me.
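To make that concrete, here is a quick back-of-the-envelope sketch; the €0.22/kWh rate and the assumption that every CPU runs at full TDP 24/7 are mine, not figures from this thread, and CPU TDP understates whole-system draw:

```python
# Back-of-the-envelope annual power cost for a 4-node lab.
# Assumptions (not from this thread): CPUs at full TDP 24/7,
# electricity at EUR 0.22/kWh; real system draw exceeds CPU TDP alone.
RATE_EUR_PER_KWH = 0.22
HOURS_PER_YEAR = 24 * 365

def annual_cost(tdp_watts: float, nodes: int = 4) -> float:
    """CPU-only worst-case cost if every node runs at full TDP all year."""
    kwh = tdp_watts * nodes * HOURS_PER_YEAR / 1000
    return kwh * RATE_EUR_PER_KWH

print(f"4 x 85W  (Xeon Silver 4108): ~EUR {annual_cost(85):.0f}/year")
print(f"4 x 190W (16-core Xeon):     ~EUR {annual_cost(190):.0f}/year")
```

Roughly €655/year versus €1,465/year at that rate, before you even count the rest of the hardware or the cooling.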
I also know some of the chaps here are running their Home Labs on NUC.
Very low power consumption.
NUCs don’t have onboard 10GbE or 64GB RAM support; otherwise, I would have taken the plunge already.
Why not consider a Dell Precision Workstation and load that up with RAM?
I temporarily have a Xeon D-2100 based SYS-E300-9D in my home lab (which I’m far from this week, at VeeamON in Chicago w/ Duncan), but there are a lot of limitations on that form factor: no M.2 on the mobo for NVMe awesomeness, and it’s quite loud. What kind of storage plans do you have? I’ll be blogging about my informal tests soon, and I’m not sure when it will be shipping in volume.
Thanks for the feedback, Paul! Luckily they aren’t running inside the house anymore, so I’m not worried about the sound. The missing M.2 option is a bummer. From a storage plan perspective, I have a 10GbE AFA Synology 1817+; in addition, I have Intel P3700 and Micron 9100 PCIe devices for vSAN.
I wish Supermicro provided some insight on expected availability!
The 10GbE issue on the NUC can be handled by Thunderbolt 3 based Ethernet devices, but sadly, no 64GB RAM yet.
Maybe just buy twice as many of the NUCs?
They’re only little.
https://www.sonnettech.com/product/twin10g-thunderbolt3.html
That’s just my 2c.
I’m very interested to see what you eventually do, especially as we pay VERY high power prices here in Ireland; my monthly bill is about €300.
Frank, check this: http://www.asrockrack.com/general/productdetail.asp?Model=E3C236D2I#CPU
It’s a very low power consumption motherboard at a low cost.
Thx, unfortunately E3 CPUs only provide 4 cores.
Hello Frank, take a look at my new build at home; I’m finding it quite good for what you need and within the budget, I would say.
https://anksos.wordpress.com/2018/05/07/new-home-lab-in-the-hood-specifications-part-1/
If core count and power consumption are priorities, but CPU performance isn’t that important, perhaps consider a home lab based on the Intel Atom C3000 (codename: Denverton) series CPU. For my new home lab I’m considering this CPU; a Xeon CPU has more computing performance than the Atom, but that performance comes with a downside, which is higher power consumption. Most of the time my current home lab is running near idle.
For example:
Supermicro A2SDi-H-TF (https://www.supermicro.nl/products/motherboard/atom/A2SDi-H-TF.cfm)
It has 8 cores @ 2.2 GHz and 2 NICs @ 10 Gbit. The biggest downside of the board is that it sports only one PCIe slot; at the moment there are no boards with more than one PCIe slot available. Going with this board does require a storage redesign: either free up the M.2 slot by using a SATA DOM or a cheap SATA SSD as the boot device and replace the PCIe SSD with an M.2 SSD, or use a SATA SSD as the storage device instead of a PCIe SSD.
Interesting. I’m running the following off 1 PC I built with my own little Scottish hands here in Bangkok, Thailand, where I work, and it runs very smoothly:
VCSA 6.7, ESXi 6.7 x3 (PROD cluster + vSAN, ESXi x1 MGMT cluster), vRealize Operations Manager (vROps) 6.7, vRealize Log Insight 4.6.0, Veeam 9.5 on a Windows Server 2016 domain managed by a Windows Server 2016 Domain Controller… ALL on VMware Workstation Pro 14.
PC Spec is:
Intel i7-7700 Kaby Lake 14nm @ 3.60GHz, 64GB high-performance RAM (16GB x4 modules – cost a fair bit though), Gigabyte Z270X-UD5, NVMe OS drive, 4x 2TB WD Black drives 10K SATA in RAID 0, 6GB GeForce GTX 1060 OC graphics (for gaming after lab 🙂 ) via a 32″ LG TV operating at full 1080p.
Hi Frank,
this is my lab, just for an alternative view… I also considered a Threadripper 1920X when I was building it, but the cost did not change much…
I love white boxes for labs; all the parts are more affordable and can be changed easily in case of failure (or added for future expansion). Some X399 motherboards can host 3 M.2 drives… also, the Threadripper platform has 64 PCIe 3.0 lanes and it’s quad-channel with 8 sticks.
https://nestedvlabs.com/home-lab/
See U
Benja
PS. The onboard 10 Gbit NIC on my X399 doesn’t work (I also tried to compile the driver with the Open Source Disclosure packages).
I use USB sticks for my ESXi installation drives. Very cheap, and it’s easy to keep multiple installations and just swap sticks when I want to swap installs.
Frank – did you go with this build? I am considering building with the Silver 4114 + X11SPM-TF, but I am having a hard time finding a CPU fan for the LGA 3647 socket… what did you use?
I postponed the build as I bought new NICs and a new 10GbE switch. The onboard Intel NICs are getting freaking hot. During a test for the new book, Niels fried the onboard NICs; they were continually running at 115+ degrees Celsius. By moving away from the onboard NICs to Mellanox SFP NICs, the overall system heat went down tremendously. No more weird behavior. The systems have been running fine for a couple of weeks now.
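For anyone who wants to keep an eye on sensor temperatures the same way, here is a minimal sketch that shells out to ipmitool against the board’s IPMI interface; the BMC address and credentials are placeholders for your own setup, and ipmitool needs to be installed:

```python
# Minimal IPMI temperature poll via ipmitool (assumed installed).
# BMC address and credentials are placeholders, not real values.
import subprocess

BMC_HOST = "192.168.1.100"  # hypothetical IPMI/BMC address
BMC_USER = "ADMIN"
BMC_PASS = "ADMIN"

def read_temperatures() -> str:
    """Return the temperature rows from the BMC's sensor data repository."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
        "sdr", "type", "Temperature",
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(read_temperatures())
```

Polling this periodically (or just running it during a load test) would have flagged those 115+ degree NIC readings long before anything fried.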