
What’s Your Favorite Tech Novel?

July 30, 2020 by frankdenneman

Today I was discussing some great books to read with Duncan. Disconnecting fully from work is difficult for me, so typically the books I read are tech-related. I have read some brilliant books that I want to share with you, but mostly I want to hear your recommendations for other excellent tech novels.

Cyberwarfare

Cyberwarfare intrigues me, so any book covering Operation Olympic Games or Stuxnet interests me. One of the best books on this topic is “Countdown to Zero Day” by Kim Zetter. The book is filled with footnotes and references to the corresponding research.

David Sanger, the NYT reporter who broke the Olympic Games story, wrote another brilliant piece on the future of cyberwarfare, “The Perfect Weapon: War, Sabotage, and Fear in the Cyber Age”. This book is the basis of an upcoming HBO documentary.

“No Place to Hide” tells the story of Glenn Greenwald (the Guardian journalist) and his encounters with Snowden right after Snowden walked away with highly classified material. Greenwald explores some of the NSA technology uncovered by the Snowden leak.

Tech History

If you are interested in the inception of the internet, then “Where Wizards Stay Up Late” should be on your bookshelf or part of your digital library. It describes how the computer moved from being a giant calculator to a communication device.

“Command and Control” explores the systems used to manage the American nuclear arsenal. It tells stories about near misses. If you think you’re behind on patching and updating your systems, do yourself a favor and read this book 😉

Technothriller

Old-timers know Mark Russinovich from the advanced system utility toolset Sysinternals; the new generation knows him as CTO of Microsoft Azure. It turns out that Mark is a gifted author as well. He published three tech novels that are highly entertaining to read: “Zero Day”, “Rogue Code”, and “Trojan Horse”.

Little Brother by Cory Doctorow (thanks to Mark Brookfield for recommending this) tells an entertaining story of how a young hacker takes on the Department of Homeland Security.

Hacking

“Ghost in the Wires” reads like a technothriller but tells the story of the hunt for Kevin Mitnick. A must-read.

“Kingpin: How One Hacker Took Over the Billion-Dollar Cybercrime Underground” is written by Kevin Poulsen, who is often referred to in Ghost in the Wires, and tells the story of one of the most notorious hackers focusing on credit card fraud. An exciting and quick read.

What’s in your top 5?

Filed Under: Uncategorized

New whitepaper available on vSphere 7 DRS Load Balancing

July 22, 2020 by frankdenneman

vSphere 7 contains a new DRS algorithm that differs tremendously from the old one. The performance team put the new algorithm to the test and published a whitepaper presenting their findings.

Read the white paper: Load Balancing Performance of DRS in vSphere 7.0.

Filed Under: DRS, Uncategorized

5 Things to Know About Project Pacific

August 26, 2019 by frankdenneman

During the keynote on the first day of VMworld 2019, Pat unveiled Project Pacific. In short, Project Pacific transforms vSphere into a unified application platform. By deeply integrating Kubernetes into the vSphere platform, developers can deploy and operate their applications through a well-known control plane. Additionally, containers are now first-class citizens, enjoying all the operations generally available to virtual machines.

Although it might seem that the acquisitions of Heptio and Pivotal kickstarted Project Pacific, VMware has been working on it for nearly three years! Jared Rosoff, the initiator of the project and overall product manager, told me that over 200 engineers are involved, as it affects almost every component of the vSphere platform.

Lengthy technical articles will be published over the following days. With this article, I want to highlight the five key takeaways from Project Pacific.

1: One Control Plane to Rule Them All

By integrating Kubernetes into the vSphere platform, we can expose the Kubernetes control plane to allow both developers and operations teams to interact with the platform. Instead of making you go through the hassle of installing, configuring, and maintaining Kubernetes clusters, each ESXi host acts as a Kubernetes worker node. Every cluster runs a Kubernetes control plane that is lifecycle-managed by vCenter. We call this Kubernetes cluster the supervisor cluster, and it runs natively inside the vSphere cluster. This means that Kubernetes functionality, just like DRS and HA, is a toggle switch away.
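
To make this tangible, here is a minimal sketch using the upstream Kubernetes Python client (the kubernetes package, chosen purely for illustration; nothing Pacific-specific is assumed). Pointed at a supervisor cluster, the nodes it prints would be the ESXi hosts acting as worker nodes; pointed at any other Kubernetes cluster, it simply prints that cluster's nodes.

```python
# Minimal sketch: list the worker nodes of whatever cluster your kubeconfig points to.
# Requires the upstream Python client: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()          # load credentials from your local kubeconfig
core = client.CoreV1Api()

for node in core.list_node().items:
    name = node.metadata.name
    kubelet = node.status.node_info.kubelet_version
    print(f"{name}: kubelet {kubelet}")
```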

2: Unified Platform = Simplified Operational Effort

As containers are first-class citizens, multiple teams can now interact with them. Running them natively on vSphere means they are visible to all your monitoring, log analytics, and change management operations as well. This allows IT teams to move away from dual-stack environments. Many IT teams that have been investing in Kubernetes over the last few years ended up building a full operational stack beside the stack used to manage, monitor, and operate the virtualization environment. Running independent, separate stacks next to each other is a challenge in itself.

However, most modern application landscapes are not siloed in either one of these stacks. They are a mix of containers, virtual machines, and sometimes even functions. Getting the same view across multiple operational stacks is nearly impossible. Project Pacific provides a unified platform where developers and operations share the same concepts. Each team can see all the objects across the compute, storage, and network layers of the SDDC. The platform provides a universal view with common naming and organization methods while offering a unified view of the complete application landscape.

3: Namespaces Providing Developer Self-service and Simplifying Management

Historically, vSphere was designed with the administrator group in mind as the sole operator. By exposing the Kubernetes API, developers can now deploy and manage their applications directly. As mentioned earlier, modern applications are a collection of containers and VMs, and therefore the vSphere Kubernetes API has been extended to support virtual machines, allowing developers to use the Kubernetes API to deploy and manage both containers and virtual machines.

To guide the deployment of applications by developers, Project Pacific uses namespaces. Within Kubernetes, namespaces allow for resource allocation requirements and restrictions, and for grouping objects such as containers and disks. Within Project Pacific, a namespace is much more than that: it also allows the IT ops team to apply policies to it. For example, in combination with Cloud-Native Storage (CNS), a storage policy can be attached to the namespace, providing persistent volumes with the appropriate service levels. For more info on CNS, check out Myles Gray’s session: HCI2763BU Technical Deep Dive on Cloud Native Storage for vSphere
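
To illustrate the plain Kubernetes side of this, here is a minimal sketch with the upstream Python client that creates a namespace and attaches a resource quota to it. The namespace name and limits are hypothetical, and the Pacific-specific pieces, such as attaching a storage policy from vCenter, are not part of this generic example.

```python
# Minimal sketch: a namespace with resource restrictions, using upstream Kubernetes only.
# Requires: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

ns_name = "team-web"  # hypothetical tenant namespace
core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name=ns_name)))

# Cap what the namespace may request from the cluster (example values).
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name=f"{ns_name}-quota", namespace=ns_name),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "16", "requests.memory": "64Gi", "pods": "50"}
    ),
)
core.create_namespaced_resource_quota(namespace=ns_name, body=quota)
```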

Besides the benefits for developers, as the supervisor cluster is subdivided into namespaces, they become a unit of tenancy and isolation. In essence, they become a unit of management within vCenter, allowing IT ops to perform resource allocation, policy management, and diagnostics and troubleshooting at the namespace and workload level. As the namespace is now a native component within vCenter, it is intended to group every workload, whether VMs, containers, or guest clusters, and allow operators to manage them as a whole.

4: Guest Clusters

The supervisor cluster is meant to enrich vSphere, providing integrations with cloud-native storage and networking. However, the supervisor cluster is not an upstream-conformant Kubernetes cluster. Guest clusters are, and they use the upstream Kubernetes Cluster API for lifecycle management. It is an open system that works with the whole Kubernetes ecosystem.

5: vSphere Native Pods providing lightweight containers with the isolation of VMs

Just as we have almost squashed the incorrect belief that ESXi is a Linux OS, we are now stating that containers are first-class citizens. Is ESXi, after all, a Linux OS, since you need Linux to run containers? No, ESXi is still not Linux. To run containers, Project Pacific uses a new container runtime called CRX.

Extremely simplified, a vSphere Native Pod is a virtual machine. We took out all the unnecessary components and run a lightweight Linux kernel and a small container runtime (CRX). Leveraging our years of experience with paravirtualization, we optimized CRX in such a way that it outperforms containers running on traditional platforms. As Pat mentioned in the keynote, it is 30% faster than a traditional Linux VM and 8% faster than bare-metal Linux.

The beauty of using a VM construct is that these vSphere Native Pods are isolated at the hypervisor layer. Unlike pods running on the same Linux host, which share the same Linux kernel and virtual hardware (CPU and memory), vSphere Native Pods have a completely separate Linux kernel and virtual hardware, and hence much stronger isolation from a security and resource consumption perspective. This simplifies security and ensures proper isolation models for multi-tenancy.

Modern IT Centers Around Flexibility

It’s all about using the right tool for the job. The current focus of the industry is on reaching cloud-native nirvana. However, cloud-native can be great for some products, while other applications benefit from a more monolithic approach. Most applications are a hybrid form of microservices mixed with stateful data collections. Project Pacific allows the customer to use the correct tool for the job, all managed and operated from a single platform.

VMware Breakouts to Attend or Watch

HBI4937BU – The future of vSphere: What you need to know now by Kit Colbert. Monday, August 26, 01:00 PM – 02:00 PM | Moscone West, Level 3, Room 3022

More to follow

Where Can I Sign Up for a Beta?

We call this initiative a project because it is not tied to a particular release of vSphere. Because it’s in tech preview, we do not have a beta program running at the moment. As this project is a significant overhaul of the vSphere platform, we want to collect as much direct feedback from customers as we can. You can expect us to make plenty of noise when the beta program for Project Pacific starts.

Stay tuned!

Filed Under: Uncategorized

Allen, McKeown, and Kondo

April 24, 2019 by frankdenneman

The title is a reference to one of the most interesting books I have ever read, Gödel, Escher, Bach. Someone described it as, “Read this book if you like to think about thinking, as well as to think about thinking about thinking”. The three books I want to share my thoughts on, in a sense, feed and shape the behavior that allows you to clear your mind and focus more on the task at hand.

The three books that I’m referring to are Getting Things Done (Allen), Essentialism (McKeown), and the KonMari method (Kondo). They are written by three different authors, from three different continents, shaped by three different cultures, in different years. Seemingly they have nothing to do with each other, but they complement each other so perfectly it’s downright amazing. After reading all three and re-reading them, you start to discover hooks where these individual books mesh together.

The three books are instrumental in how I live my life. I can imagine other people in IT with a similar travel lifestyle can benefit from reading these books as well. When you travel a lot, you need to get everything in order, as you have to ensure you have the essentials with you. You sometimes have little time to decompress from your last trip and prepare for the next one. You have to keep track of meetings and obligations both in your personal life and your professional one. And above all, you want to avoid wasting time on mundane or trivial tasks while at home and spend your precious time as optimally as possible. These three books have changed my mindset, and they help guide my decisions. They provide clarity and streamline day-to-day tasks.

When this topic comes up in conversation, many of the people I talk to end up buying one (or more) of these books, and it seems they catch the same bug: optimizing life and streamlining their behavior. I thought it might be interesting to more people, so let’s write an article about something other than hardcore CPU and memory resource management. Let’s focus on how to manage time and, to some extent, energy.

Getting Things Done
The overall theme of Getting Things Done (GTD) is helping you manage focus, and therefore time. The main premise of the book is order: any time, everywhere. In your mind, but also in your surroundings. Instead of getting distracted by things that you need to do, the main rule is to do it immediately or write it down so you can do it when appropriate. Writing it down and classifying the tasks helps to clear your mind; it helps you to focus on the task at hand. The author stresses getting rid of context switching.

A perfect example is the junk drawer. Every time you walk past the junk drawer, it reminds you that you need to sort it out. You need to sift through the junk and see what you can use and what can be tossed away. That’s the context switch. Here you are, walking around your house thinking about your big project, and there’s that junk drawer again, giving you that annoying feeling that you really need to sort it out. You don’t want that; you want your mind focused on bigger things, with no guilt trips when walking around the house. That’s where the other two books come into play.

The same applies to the GTD method of categorizing tasks. To have oversight of the tasks at hand, you need clear and tidy surroundings. You can’t keep efficient track of things if you have to go through a lot of junk to find the relevant to-do list. A scene in the movie Limitless is a perfect example. The protagonist is a writer who happens to be excellent at procrastinating, which results in no finished goals and an untidy house. When taking the cognition-expanding drug, he wants to finish his lifelong goal of writing a book, but before he begins, he realizes that he cannot deal with any distraction and wants order around him, resulting in a big cleanup of the house. That’s what GTD wants you to do as well, sans drugs of course.

Essentialism
When cleaning the house, you typically end up throwing things away. A time-consuming job that never seems to finish. Sometimes you come across something you can’t let go of, but you also don’t know what to do with it. In the end, it generates a conflicting feeling, introducing a context switch every time you see it, hooking back into GTD. Essentialism allows you to prevent this by restructuring your behavior when buying new things and helps you to understand the role of your current belongings. Essentialism is not a lot different from minimalism. However, there is one significant difference, and that is the factor of happiness. With essentialism, you rate your belongings on the scale of happiness and usefulness. Does it make you happy, or is it useful in day-to-day activities? If the answer is yes, keep it. The interesting thing is that the book starts to reshape your decision making, or better said, the selection criteria when buying something new. After reading it, I began to buy fewer of the things that I was eyeing because they just didn’t meet both criteria completely.

The acquisition process for an item takes longer, as you start to look for the object that provides the most happiness while delivering the required functionality. You begin to research the available options more thoroughly, and it’s not uncommon to conclude that it’s better to approach the “problem” differently. You start to drive towards the essence of the problem: what am I solving here? Is there a better way? This ties in with the mindset introduced by Michael Hammer’s book, Reengineering the Corporation. A fantastic book about redesigning processes, but I’ll cover that book another time. Another benefit of the elaborate purchase process is the occurrence of (re)buying a similar product, or rather the lack thereof. We’ve all bought a second, similar object because the first one wasn’t living up to expectations or wasn’t functioning properly. As you do your due diligence, you analyze the problem and research the best “tools” available. This can go as far as understanding your preference for the tactile feel of your cutlery. Trust me, you can go very far applying this pattern of behavior. As a result, you surround yourself with a minimal set of objects that satisfy your needs perfectly. The stuff you have makes you very happy while decluttering your home as much as possible.

Another example is my collection of Air Jordan shoes. Completely unnecessary, but they bring me joy. I collected these from the period when I played basketball myself. In the beginning, it was almost like a free-for-all: get the next version that is released, buying it because you can (almost must). After reading Essentialism, I reviewed my collection. Yes, collecting specific models makes me happy, but most of the ones I had did not meet the criteria the way some of the special ones do. As a result, I reduced my collection by 70% and sold them so others can enjoy them, while reducing the “footprint” of the collection in the house. I applied focus to the collection. To this day, with everything that I buy, I ask myself: Do I need it? And is this the best I can obtain? What I learned is that the majority of objects acquired after reading Essentialism have a longer lifespan than the first thing you come across when discovering the need for it. It improves the sustainability of your household tremendously. In short, you end up with a lot less stuff in your house, making it easier to keep it organized and clean, increasing or maintaining your focus on the chores at hand.

The Life-changing Magic of Tidying Up (KonMari Method)
This book took the world by storm; I discovered that the author, Marie Kondo, now has a show on Netflix. Before you wonder, I do not talk to my socks and thank them for the day’s work. 😉 The key takeaway I had from reading this book is that junk is stuff that does not have a permanent place in your home. Everything that keeps moving through the house is junk. It generates context switching. To reduce junk, you have to learn some techniques for efficiently storing things. Some things have exceeded their purpose and can be let go of. This ties back to the essentialism part. Does it make me happy, or is it useful? These are excellent criteria for reviewing all your belongings while cleaning up the house. By ending up with less stuff, you free up room in your home to find permanent places for the things that matter. And with a permanent location, less time is spent searching for things. Fewer context switches, as the junk drawer is now the drawer that houses x, y, and z. I store my phone, wallet, and keys in one particular place. When leaving the house, I do not waste time finding them. I can maintain my focus while grabbing the necessities. The time to pack for a trip is significantly reduced; I just have to understand the weather and the purpose of the trip, and I know exactly where everything is stored.

These three books helped me tremendously; maybe they can be of help to you as well, so give them a try. Please leave a comment about the books that structurally changed your perception of how to deal with these types of things; hopefully, it expands the must-read book list of others and me.

Filed Under: Uncategorized

Tracking down noisy neighbors

May 3, 2016 by frankdenneman

A big part of resource management is sizing the virtual machines. Right-sizing the virtual machines allows IT teams to optimize their resource utilization. Right-sizing has become a tactical tool for enterprise IT teams to ensure maximum workload performance and efficient use of the physical infrastructure. Another big part of resource management is keeping track of resource utilization; some of these processes are part of the daily operational tasks performed by specialized monitoring teams or the administrators themselves. Service providers usually cannot influence the right-sizing element, therefore they focus more on the monitoring part. What is almost universal across virtual infrastructure owners is the incidental nature of tracking down ‘noisy neighbor’ VMs. Noisy neighbor VMs generate workload in such a way that it monopolizes resources and has a negative impact on the performance of other virtual machines. Service providers and enterprise IT teams have to deal with these consumer outliers in order to meet the SLAs of existing workloads and to be able to satisfy the SLA requirements of new workloads.
It’s interesting that noisy neighbor tracking is an incidental activity, as it can be so detrimental to the performance of the virtual datacenter. Tools such as vSphere Storage IO Control (short-term focus) and vSphere Storage DRS (long-term focus) help alleviate the infrastructure from the burden of noisy neighbors, but attacking this problem structurally is necessary to ensure consistent and predictable performance from your infrastructure. In the long term, noisy neighbor VMs impact the projected consolidation ratio, which in turn influences the growth rate of the infrastructure. I’ve seen plenty of knee-jerk reactions creating server and storage infrastructure sprawl due to the introduction of these outlier workloads.
Identifying noisy neighbors can become a valuable tool in both the strategic and tactical playbooks of the IT organization. Having insight into which VMs are monopolizing the resources allows IT teams to act appropriately. Similar to real life, the behavior of a noisy neighbor can often be changed, but sometimes that’s the nature of the beast and you just have to live with it. In that situation, noisy neighbors become outliers of conduct and one has to make external adjustments. This insight allows IT teams to respond along the entire vertical axis of the virtual datacenter, from application to infrastructure choice. With the correct analysis, the IT team can provide insights to the application owner, allowing them to adjust accordingly. It helps the IT team understand whether the environment can handle the workload and make the necessary adjustments to the infrastructure. Sometimes the intensity of the workload is just what it is, and hosting that workload is necessary to support the business. In that case, the IT team has to understand whether the infrastructure is suitable to support the application. As most IT organizations have access to multiple platforms, accurate insight into the characteristics (and requirements) of the workload allows them to identify the correct platform.
Virtual datacenters are difficult to monitor. They are composed of a disparate stack of components, and every component logs and presents data differently. Different granularity of information, different time frames, and different output formats make it extremely difficult to correlate data. In addition, you need to be able to correctly identify the workload characteristics and interpret the impact they have on the shared environment. We no longer live in a world where we have to deal with isolated technology stacks. Applications typically no longer run on a single box connected to a single, isolated RAID array. Today everything within the infrastructure is shared, and the level of hardware resource distribution is diluting with each introduction of new hardware. Where we used to run a single application in a VM on top of a server with ten other VMs, sharing a couple of NICs and HBAs, we slowly moved towards converged network platforms. In the last 10 years, we shared more and more; the only monolith remaining is the application in the VM, and that is rapidly changing as well with the popularity of containers and microservices. Yet most of our testing mechanisms and monitoring efforts are still based on the architecture we left behind 10 years ago. Virtual datacenters require continuous analytics that fully comprehend the context of the environment, with the ability to zoom in and focus on outliers if necessary.
[Image: Noisy Neighbor]
In the upcoming series, I’m going to focus on how to explore cluster-level workloads and progressively zoom in on specific workloads based on IOPS, block size, throughput, and unaligned IOs.
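As a taste of that kind of analysis, here is a deliberately simple sketch that flags potential noisy-neighbor candidates from per-VM IOPS samples. The sample data and the “3x the cluster median” rule are assumptions made purely for the example; in practice the samples would come from your monitoring stack and the threshold from your own baselines.

```python
# Illustrative sketch only: flag VMs whose average IOPS far exceeds the cluster median.
# Sample data and threshold are made up for the example.
from statistics import mean, median

iops_samples = {
    "vm-web-01": [310, 290, 305, 320],
    "vm-db-01":  [4200, 3900, 4600, 5100],   # hypothetical heavy hitter
    "vm-app-02": [150, 170, 160, 155],
    "vm-app-03": [420, 450, 430, 410],
}

averages = {vm: mean(samples) for vm, samples in iops_samples.items()}
threshold = 3 * median(averages.values())    # example rule: 3x the cluster median

noisy = [vm for vm, avg in averages.items() if avg > threshold]
print("Noisy neighbor candidates:", noisy or "none")
```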

Filed Under: Uncategorized

