Category: Miscellaneous (page 1 of 9)

Exploring the Core Motivation of Writing a Book

More than a week ago, Niels and I released the VMware vSphere 6.5 Host Resources Deep Dive, and the community has welcomed it with open arms. The book is finding its way across the globe, from Argentina to New Zealand. Seeing the massive number of tweets praising the book brings us pride and joy.

Over the last couple of days, I have received many inquiries about what it takes to write a book and whether I could provide some hints and tips. I thought it might make an interesting blog post. There are three questions you need to ask yourself:

  1. What is my core motivation for wanting to write a book?
  2. What am I willing to give up to pursue this goal?
  3. Do I have the platform to launch the book and to capture and maintain attention for the book/brand?

Core motivation
The first question you need to ask yourself is why exactly you want to write a book. The answer you often hear is not money or fame; it's typically a charitable cause, such as educating the community, or a stepping-stone for one's career.

The last reason is by far the most likely one to provide you a return on investment. People respond differently to you once they find out that you wrote a book. It shows dedication, it hints at mastery of a subject, and it differentiates you from the rest. Naysayers will automatically point to the option of self-publishing, but they forget that the leading word in self-publishing is SELF. You have to do it all by yourself.

If money is your answer, then I need to burst that bubble. Chances are that the same amount of time spent working at the local supermarket will be more profitable. What I've learned from publishing five books is that the creation of a single page takes approximately 90 minutes.

The act of writing 300 to 500 words alone does not take 90 minutes. It's the second-guessing, the formatting, and the phrasing that take a lot of time. Once you write something down, thoughts start to flow; they lead to more questions, and they lead to second-guessing your initial idea. This sends you back to vendor collateral, academic papers, or testing in your lab. And you will hit writer's block.

90 minutes is a good number to work with when you are in the planning phase of the book. We wrote 569 pages. 569 pages times 90 minutes equals 51,210 minutes. That is 853.5 hours.

This number leads to additional questions. But first, let's answer the money question, as it intertwines with the platform question.

How many books does one sell? Duncan and I sold over 75,000 copies of the vSphere Clustering Deep Dive series. I know that the early bible of virtualization, VMware ESX Server: Advanced Technical Design Guide by Scott Herold and Ron Oglesby, sold approximately 30,000 copies. Both are exceptions to the rule. Most successful self-published books sell between 500 and 1,500 copies. Let's say you earn 5 dollars per book and you sell 1,500 copies: you receive 7,500 dollars before tax. If you spend 700 hours on the book, you make a little over 10 dollars an hour.

Self-published books provide more revenue to the author than books released through a publishing house. Books published by VMware Press or any other publishing house will get you into bookstores. Unfortunately, this will eat into your royalties.

Those 700 hours need to come from somewhere, and because you are writing a tech book, you are bound to the lifespan of the software version. It doesn't make sense to publish a book about the previous version of the software, so you typically have one year of writing. If you have a daytime job, you need to spend evening and weekend hours. Let's say you keep the weekends for your friends, family, and household chores. That leaves you with five evenings a week to write. If you write 3 hours a night, you are spending 233 consecutive workdays writing your book. Ask yourself whether you and your loved ones will have that stamina.
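The back-of-the-envelope math above fits in a few lines of Python. The figures are the ones used in this post (700 writing hours, 5 dollars royalty, 1,500 copies, five 3-hour evenings a week); swap in your own numbers when planning your project:

```python
# Back-of-the-envelope book planning, using the figures from this post.
ROYALTY_PER_COPY = 5.0    # dollars earned per self-published copy
COPIES_SOLD = 1500        # upper end for a successful self-published book
WRITING_HOURS = 700       # total hours invested (our 569-page book took ~853)
HOURS_PER_EVENING = 3     # weekday evenings only, weekends kept free
EVENINGS_PER_WEEK = 5

dollars_per_hour = ROYALTY_PER_COPY * COPIES_SOLD / WRITING_HOURS
evenings = WRITING_HOURS / HOURS_PER_EVENING
weeks = evenings / EVENINGS_PER_WEEK

print(f"hourly rate: ${dollars_per_hour:.2f} before tax")  # ~$10.71
print(f"{evenings:.0f} evenings, roughly {weeks:.0f} weeks of writing")
```

Running the same numbers against a supermarket wage makes the point of the paragraph above rather quickly.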

What are you willing to give up
So that brings me to the second question: what are you willing to give up? I'm not saying divorce your spouse, but if you want to keep your family happy, you need to take the hours from something else. Gym time, drinking time, game time, or sleep time. Typically all of the above, because sometimes you will get sick, or other responsibilities will get in the way. Going back to question one: is this worth the ~10K you will probably make? If you want to use it as a stepping stone for your career, the money is just a nice bonus.

If you want to educate the community, or you want to get more exposure, then you need to have a platform already in place. This platform can be a successful blog, a popular Twitter account, or a regular spot on a podcast. Due to the maturity of this particular industry (virtualization), you need to be a regular in the community before people accept your wisdom. There are a lot of people sharing their knowledge, some not always as correct as they believe. So ask yourself: why would anyone want to buy your book? Why should they believe you? Are you seen by the community as an authority on a particular subject? Just getting a book out and expecting people to buy it because of the subject is, unfortunately, a thing of the past. There are a lot of books about virtualization on Amazon, and people need an extra level of confirmation before they spend their money.

Find your niche and share your knowledge! But before spending a lot of time writing a book and getting disappointed by the sales results, ask yourself: am I an authority on a certain topic, and does the community share that perception? A book can certainly help build this image, but in general, people already need to understand that you know what you are talking about. How can you become an authority? Publishing articles on your blog or LinkedIn will help. Appear on podcasts such as the Virtually Speaking Podcast or vBrownBag. And speak! Speak a lot at local and neighboring VMUGs. Hone your skill, so that you can shine at VMworld.

You need to harvest that popularity and keep riding that wave. That's why you need to have that platform; otherwise, it will be a short 15 minutes of fame. Popularity has momentum. People forget, and the attention on your new book will soon shift to the next new thing. You need a platform that can maintain that momentum. Write about the book, publish sections of the book on your blog, and speak about it on podcasts. This allows you to create better and bigger things after your first book, maybe a second one.

As with everything in life, nothing is self-contained; it's always intertwined with other elements. This blog post is not meant to discourage you from writing a book, but hopefully it helps you prepare to launch a successful one. There is a lot of work that needs to be done before releasing your knowledge in the form of a book. Don't let my words discourage you. To paraphrase Nike: just don't quit!

Host Deep Dive Stickers and More

Last week we released the VMware vSphere 6.5 Host Resources Deep Dive book, and Twitter and Facebook exploded. We've seen some pretty bad-ass pictures on our Twitter feeds, such as this one by Jamie Girdwood (@creamcookie).

It’s always nice to hear some praise after spending more than 800 hours on something. (When writing and self-publishing a book, expect to spend over 90 minutes per page.) Thanks!

The three most frequently asked questions were:

  1. When will you release an ebook version?
  2. Do you have any stickers?
  3. When is Niels joining VMware?

When will you release an ebook version?
We hope to get the ebook finalized after VMworld. Vacation time is coming up, and we also need to prep for VMworld (vSphere 6.5 Host Resources Deep Dive: Part 2 [SER1872BU]). It might happen sooner, but that depends on the process of creating an eBook itself. Unfortunately, it’s not as easy as sharing a PDF online. Please stay tuned.

Do you have any stickers?
We’ve got you covered. We met up with our designer and explained our wishes. We received a lot of comments on the depth of the book, such as this one from Duncan’s article Must have book: Host Resources Deep Dive:

As most of you know, I wrote the Clustering Deepdive series together with Frank, which means I kinda knew what to expect in terms of level of depth. Kinda, as this is a whole new level of depth. I don’t think I have ever seen (for example) topics like NUMA or NIC drivers explained at this level of depth. If you ask me, it is fair to say that Frank and Niels redefined the term “deep dive”.

So instead of snorkeling and hovering just below sea level, we help you get into the depths of the material. What better way to express this than a diver’s helmet? We will bring 250 stickers to VMworld. First come, first served. If you can’t wait, download the 800 DPI PNG here and create one for yourself.

White Background

Transparent Background

I think the design rocks, so much so that Niels and I decided to put it on some t-shirts as well. We are not backed by a vendor, so we can’t give away shirts. Similar to the book, we kept the price low. We created two campaigns, one for the US and one for the EU. This allows you to get your order as fast as possible. The shirts and hoodies come in various colors.

When is Niels joining VMware?
I don’t know, he should though!

VMworld Geek Whisperers Podcast – Choosing Titles You Want To Have

Amy Lewis asked me to appear on the Geek Whisperers Live podcast at VMworld 2016 in Las Vegas. As always, I had a blast discussing various topics with Amy, Matt, and John. In this talk, we spoke about becoming an evangelist, what the challenges of the role are, and why you shouldn’t pick the title of evangelist yourself.


Of course, while interacting with this magnificent group of people, you tend to talk about a lot more things. So go on and check it out.

Top 5 vBlog Again, Thanks!!!!

Yesterday the top 25 vBlogs were announced and once again I’m in the top 5. I would like to thank all who have voted for me! It’s great to see that the content is appreciated.

The broadcast:

Looking forward, there is a lot of content getting ready to be published, and I hope to release my fifth book this year: the vSphere 6.x Host Resources Deep Dive. I’m excited about the content I’m working on, and I hope you will be too!



Insights into VM density

For the last three months, my main focus within PernixData has been (and still is) the PernixCloud program. In short, PernixData Cloud is the next logical progression of PernixData Architect: it provides visibility into, and analytics of, virtual datacenters, their infrastructure, and their applications. By providing facts on the various elements of the virtual infrastructure, architects and administrators can design their environments in a data-driven way.

The previous article, “Insights into CPU and Memory configurations of ESXi hosts”, zoomed in on the compute configuration of 8,000 ESXi hosts and helped us understand which systems are the most popular in today’s datacenters running VMware vSphere. The obvious next step was to determine the average number of virtual machines on these systems. Since then, the dataset has expanded, and it now contains data on more than 25,000 ESXi hosts.

It’s an incredible dataset to explore, I can tell you, and it’s growing each day. Learning how to deal with these vast quantities of data is incredibly interesting. Extracting various metrics from a dataset this big is challenging. Most commercial tools are not designed to cope with this amount of data, so you have to custom-build everything. And with the dataset growing at a rapid pace, you are constantly exploring the boundaries of what’s possible with software and hardware.

VM Density
After learning which system configurations are popular, you immediately wonder how many virtual machines are running on those systems. But what level of detail do you want to know? Will this info be usable for architects and administrators to compare their systems, and practical enough to help them design their next datacenter?

One of the most sought-after metrics is the virtual-CPU-to-physical-CPU ratio. A very interesting one, but unfortunately, to get a result that is actually meaningful, you have to take multiple metrics into account. Sure, you can map out the vCPU-to-pCPU ratio, but how do you deal with the oversizing of virtual machines that has been happening since the birth of virtualization? What about all those countless discussions about whether a system needs one or two vCPUs because it’s running a single-threaded program? How many times have you heard the remark that the vendor explicitly states the software requires at least 8 CPUs? Therefore, you need to add CPU utilization to get an accurate view, which in turn leads to the question of what timeframe to use to understand whether the VM is accurately sized or whether its vCPUs are just idling most of the time. You are now mixing static data (inventory) and transient data (utilization). The same story applies to memory.

Consequently, I focused on the density of virtual machines per host. The whole premise of virtualization is to exploit the variation in application activity; combined with distribution mechanisms such as DRS and VMTurbo, you can argue that virtual and physical compute configurations will be matched properly. Therefore, it’s interesting to see how far datacenters stretch their systems and to understand the consolidation ratio of virtual machines. Can we determine a sweet spot for the number of virtual machines per host?

The numbers
As discovered earlier, dual-socket systems are the most popular configuration in virtual datacenters, so I focused on these systems only. With the dataset now containing more than 25,000 ESXi hosts, it’s interesting to see what the popular CPU types are.


The popular systems contained 12, 16, 20, and 24 cores in total; therefore, the popular CPUs of today have 6, 8, 10, and 12 cores. But since we typically see a host as a “closed” system and trust the host-local CPU scheduler to distribute the vCPUs among the available pCPUs, all charts use the total cores per system instead of a per-CPU count. For example, a 16-core system is an ESXi host containing two 8-core CPUs.

Before selecting a subset of CPU configurations, let’s determine the overall distribution of VM density.
It’s interesting to see that it’s all across the board, with VM density ranging from 0-10 VMs per host up to more than 250. There were some outliers, but I haven’t included them; one system runs over 580 VMs on 16 cores and 192 GB of memory. Let’s dissect the VM density per CPU config.

Dissecting it per CPU configuration

Instead of focusing on all dual-socket CPU configurations, I narrowed it down to three popular ones: the 16-core config, as it’s the most popular today, and the 20- and 24-core configs, as I expect these to be the default choice for new systems this year. This allows us to compare the current systems in today’s datacenters to the average and helps you understand what VM density possible future systems will run.

Since host memory is an integral part of providing performance to virtual machines, it’s only logical to determine VM density based on both CPU and memory configurations. What is the distribution of memory configurations of dual-socket systems in today’s virtual datacenters?

Memory config and VM density of 16 cores systems
30% of all 16-core ESXi hosts are equipped with 384 GB of memory. Within this configuration, 21 to 30 VMs is the most popular VM density.


Memory config and VM density of 20 cores systems
50% of all 20-core ESXi hosts are equipped with 256 GB of memory. Within this configuration, 31 to 40 VMs is the most popular VM density.


It’s interesting to see that these systems, on average, have to cope with less memory per core than the 16-core systems (12.8 GB per core versus 24 GB per core).

Memory config and VM density of 24 cores systems
39% of all 24-core ESXi hosts are equipped with 384 GB of memory. Within this configuration, 101 to 150 VMs is the most popular VM density.

101 to 150 VMs sounds like VDI platform usage. Are these systems the sweet spot for virtual desktop environments?

Not only do we now have actual data on VM density, other interesting facts were discovered as well. When I was crunching these numbers, one thing that stood out to me was the memory configurations used. Most architects I speak with tend to configure hosts with as much memory as possible and swap out the systems when their financial lifespan has ended. However, I saw some interesting outliers, for example memory configurations of 104 GB or 136 GB per system. How do you even get 104 GB of memory in such a system? Did someone actually find a 4 GB DIMM lying around and decide to stick it in the system? More memory = better performance, right? Please read my Memory Deep Dive series on how unbalanced configurations hurt your overall performance. But I digress. Another interesting fact is that 4% of all 24-core systems in our database are equipped with 128 GB of memory. That is an average of 5.3 GB per core, or 64 GB per NUMA node, which immediately raises questions such as the average host memory per VM or the VM density per NUMA node. The more we look at the data, the more questions arise. Please let me know what questions you have!
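The per-core and per-NUMA-node figures in this section follow from simple division; here is a quick sketch using the three configurations discussed above (it assumes one NUMA node per socket on these dual-socket hosts):

```python
# Memory-per-core and per-NUMA-node math for the dual-socket configs above.
SOCKETS = 2  # dual-socket hosts: one NUMA node per socket

configs = [
    # (total cores, host memory in GB)
    (16, 384),  # most popular 16-core config
    (20, 256),  # most popular 20-core config
    (24, 128),  # the 4% outlier discussed above
]

for cores, mem_gb in configs:
    per_core = mem_gb / cores
    per_numa_node = mem_gb / SOCKETS
    print(f"{cores} cores / {mem_gb} GB: "
          f"{per_core:.1f} GB per core, {per_numa_node:.0f} GB per NUMA node")
```

This reproduces the 24, 12.8, and 5.3 GB-per-core figures mentioned in the text.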


© 2017
