Basic Terminologies of Large Language Models
Many organizations are in the process of deploying large language models for their use cases. Publicly available Large Language Models (LLMs), such as ChatGPT, are trained on publicly available data through September...
My Sessions at VMware Explore 2023 Las Vegas
Next week we are back in Las Vegas. Busy times ahead: meeting customers and old friends, making new friends, and presenting a few sessions. Next week I will present...
Aug 17, 2023
1 min read
#47 – How VMware accelerates customers achieving their net zero carbon emissions goal
In episode 047, we spoke with Varghese Philipose about VMware’s sustainability efforts and how they help our customers meet...
May 30, 2023
26 sec read
#46 – VMware Cloud Flex Compute Tech Preview
We’re extending the VMware Cloud Services overview series with a tech preview of the VMware Cloud Flex Compute service....
May 15, 2023
39 sec read
VMware Cloud Services Overview Podcast Series
Over the last year, we’ve interviewed many guests on the Unexplored Territory Podcast show, and we wanted to provide...
Apr 17, 2023
1 min read
Research and Innovation at VMware with Chris Wolf
In episode 042 of the Unexplored Territory podcast, we talk to Chris Wolf, Chief Research and Innovation Officer of...
Mar 27, 2023
19 sec read
vSphere ML Accelerator Spectrum Deep Dive – NVIDIA AI Enterprise Suite
vSphere allows assigning GPU devices to a VM using VMware’s (Dynamic) Direct Path I/O technology (Passthru) or NVIDIA’s vGPU technology. The...
May 23, 2023
10 min read
vSphere ML Accelerator Spectrum Deep Dive – GPU Device Differentiators
The last two parts reviewed the capabilities of the platform. vSphere can offer fractional GPUs to multi-GPU setups, catering to the...
May 16, 2023
3 min read
vSphere ML Accelerator Spectrum Deep Dive for Distributed Training – Multi-GPU
The first part of the series reviewed the capabilities of the vSphere platform to assign fractional and full GPUs to workloads....
May 12, 2023
12 min read
Training vs Inference – Network Compression
This training versus inference workload series provides platform architects and owners insights about ML workload characteristics. Instead of treating deep neural...
Aug 26, 2022
8 min read
How to Write a Book – Show Up Daily
During the Belgium VMUG, I talked with Jeffrey Kusters and the VMUG leadership team about the challenges of writing a book....
Aug 1, 2022
2 min read
Training vs Inference – Numerical Precision
Part 4 focused on the memory consumption of a CNN and revealed that neural networks require parameter data (weights) and input data...
Jul 26, 2022
9 min read
Training vs Inference – Memory Consumption by Neural Networks
This article dives deeper into the memory consumption of deep learning neural network architectures. What exactly happens when an input is...
Jul 15, 2022
11 min read
Unexplored Territory Podcast Episode 19 – Discussing NUMA and Cores per Sockets with the main CPU engineer of vSphere
Richard Lu joined us to talk about the basics of NUMA, Cores per Socket, and why modern Windows and Mac systems have a default...
Jul 1, 2022
25 sec read
Machine Learning on VMware Platform – Part 3 – Training versus Inference
Machine Learning on VMware Cloud Platform – Part 1 covered the three distinct phases: concept, training, and deployment; part 2 explored the...
Jun 30, 2022
7 min read
Unexplored Territory Podcast Episode 18 – Not just artificially intelligent featuring Mazhar Memon
In this week’s Unexplored Territory Podcast, we have Mazhar Memon as our guest. Mazhar is one of the founders of VMware...
Jun 13, 2022
45 sec read
Machine Learning on VMware Platform – Part 2
Resource Utilization Efficiency: Machine learning, especially deep learning, is notorious for consuming large amounts of GPU resources during training. However, as...
Jun 8, 2022
5 min read
Machine Learning on VMware Platform – Part 1
Machine Learning is reshaping modern business. Most VMware customers look at machine learning to increase revenue or decrease cost. When talking...
May 25, 2022
7 min read
Solving vNUMA Topology Mismatch When Migrating between Dual Socket Servers and Quad Socket Servers
I recently received a few questions from customers migrating between clusters with different CPU socket footprints. The challenge is not necessarily...
Mar 11, 2022
3 min read