frankdenneman

  Frank Denneman is the Chief Technologist for AI at VMware by Broadcom. He is an author of the vSphere host and clustering deep dive series, as well as a host of the Unexplored Territory podcast. You can follow him on Twitter @frankdenneman.




441 Stories by frankdenneman

Could not initialize plugin ‘libnvidia-vgx.so’ – Check SR-IOV in the BIOS

I was building a new lab with some NVIDIA A30 GPUs in a few hosts, and after installing the NVIDIA driver onto the ESXi...
1 min read

Sub-NUMA Clustering

I’m noticing a trend that more ESXi hosts have Sub-NUMA Clustering enabled. Typically, this setting is used in the High-Performance Computing space or Telco...
10 min read

VMware Sessions at NVIDIA GTC

Overcome Your AI/ML Challenges with VMware + NVIDIA AI-Ready Enterprise Platform (Presented by VMware, Inc.) Tuesday, Sep 20, 8:00 PM – 9:00 PM...
31 sec read

vSphere 8 and vSAN 8 Unexplored Territory Podcast Double Header

This week we released two episodes covering the vSphere 8 and vSAN 8 releases. Together with Feidhlim O’Leary, we discover all the new functions...
18 sec read

Unexplored Territory – VMware Explore USA Special

This week Duncan and I attended VMware Explore to co-present the session “60 Minutes of Virtually Speaking Live: Accelerating Cloud Transformation” with William Lam...
22 sec read

New vSphere 8 Features for Consistent ML Workload Performance

vSphere 8 is full of enhancements. Go to blogs.vmware.com or yellow-bricks.com for more extensive overviews of the vSphere 8 release. In this article, I want to highlight two...
2 min read

Training vs Inference – Network Compression

This training versus inference workload series provides platform architects and owners with insights into ML workload characteristics. Instead of treating deep neural networks as black...
8 min read

How to Write a Book – Show Up Daily

During the Belgium VMUG, I talked with Jeffrey Kusters and the VMUG leadership team about the challenges of writing a book. Interestingly enough, since...
2 min read

Training vs Inference – Numerical Precision

Part 4 focused on the memory consumption of a CNN and revealed that neural networks require parameter data (weights) and input data (activations) to generate...
9 min read

Training vs Inference – Memory Consumption by Neural Networks

This article dives deeper into the memory consumption of deep learning neural network architectures. What exactly happens when an input is presented to a...
11 min read