
vSphere Storage Area Network Traffic system network resource pool - NetIOC

February 12, 2013 by frankdenneman

After posting the Network I/O Control primer, I received a couple of questions about the vSphere Storage Area Network Traffic system network resource pool, such as:

What’s the “vSphere Storage Area Network Traffic” system network resource pool for?
I tried to further investigate by searching practically everywhere, but I didn’t manage to find any detailed description…

The vSphere Storage Area Network Traffic pool is a system network resource pool designed for a future vSphere storage feature that has not been released yet. Unfortunately, Network I/O Control already exposes this system network resource pool in vSphere 5.1.
[Image: 02-network-pools-overview]
Although it is defined as a system network resource pool, the vSphere client lists the network pool as user-defined, giving the impression that this pool can be assigned to other streams of traffic. Unfortunately, this is not possible. The pool is a system network resource pool and is therefore only available to traffic that is specifically tagged by the VMkernel.
I also received the question whether this network pool could be assigned to a third-party NIC or an FCoE card. As mentioned, network resource pools only manage traffic that carries the appropriate tag. Tagging of traffic is done exclusively by the VMkernel, and this functionality is not exposed to the user.
Although it is exposed in the user interface, this system network resource pool has no function and will not have any effect on other network streams. It can safely be ignored.

Filed Under: Uncategorized

How to enable SIOC stats only mode?

February 11, 2013 by frankdenneman

Today on Twitter, David Chadwick, Cormac Hogan and I were discussing SIOC stats only mode. SIOC stats only mode gathers statistics to provide you insight into the I/O utilization of the datastore. Please note that stats only mode does not enable the datastore-wide scheduler and will not enforce throttling. Stats only mode is disabled by default due to the (significant) increase of data logged to the vCenter database.
SIOC stats only mode is available as of vSphere 5.1 and can be enabled via the web client. To enable SIOC stats only mode, go to:

  1. Storage view
  2. Select the datastore
  3. Select Manage
  4. Select Settings

[Image: 01-SIOC-disabled]
By default, both SIOC and SIOC stats only mode are disabled. Click the Edit button at the right side of the screen, un-tick the check box “Disable Storage I/O statistics collection (applicable only if Storage I/O Control is disabled)”, and click OK.
[Image: 02-enable-SIOC-stats-only-mode]
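If you want to verify from the ESXi shell that the host-side SIOC service is running and collecting data, a quick sketch such as the one below can help. The storageRM status action, the storagerm.log location and the presence of the .iorm.sf directory are assumptions based on my vSphere 5.1 lab, so verify them on your own build; "datastore" is a placeholder for the datastore name.

# Check the status of the storageRM (SIOC) service on the host
/etc/init.d/storageRM status
# The .iorm.sf directory at the root of the datastore holds the SIOC slots/stats files
ls -l /vmfs/volumes/datastore/.iorm.sf/
# Follow the SIOC log to confirm stats collection is active
tail -f /var/log/storagerm.log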
To test whether there is any difference, I used a datastore that had SIOC enabled. I disabled SIOC and un-ticked the “Disable Storage I/O statistics collection (applicable only if Storage I/O Control is disabled)” option. I then opened the performance view and selected the “Realtime” Time Range:

  1. Storage view
  2. Select the datastore
  3. Select Monitor
  4. Select Performance
  5. Select “Realtime” Time range

[Image: 03-SIOC-Time-Range]
At 15:35 I disabled SIOC, which explains the dip. At 15:36 SIOC stats only mode was enabled, and it took vCenter roughly a minute to start displaying the stats again.
[Image: 04-running stats only mode]
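As a cross-check of the vCenter charts, you can also watch the device latencies directly on the host with esxtop. This is purely an observation aid and not required for stats only mode; the keystroke and counter names below are from the standard esxtop disk device view.

# Start esxtop on the host, then press 'u' for the disk device view;
# DAVG/cmd, KAVG/cmd and GAVG/cmd show device, kernel and guest latency per device
esxtop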
Like all new vSphere 5.1 features, SIOC stats only mode can only be enabled via the vSphere web client.

Filed Under: SIOC Tagged With: SIOC

Error -1 in opening & reading the slot file error in storageRM.log (SIOC)

February 11, 2013 by frankdenneman

The problem
Recently I noticed that my datastore cluster was not providing latency statistics during initial placement. The datastore recommendation during initial placement displayed space utilization statistics, but displayed 0 in the I/O Latency Before column:
[Image: 01-Datastore-recommendations]
The performance statistics of my datastores showed that there was I/O activity on the datastores.
[Image: 02-Datastore-latency]
However, the SIOC statistics showed no I/O activity on the datastore:
[Image: 03-SIOC-activity]
The SIOC log file (storagerm.log) showed the following error:
Open /vmfs/volumes/ /.iorm.sf/slotsfile (0x10000042, 0x0) failed: permission denied
Giving UP Permission denied Error -1 opening SLOT file /vmfs/volumes/datastore/.iorm.sf/slotsfile
Error -1 in opening & reading the slot file
Couldn’t get a slot
Successfully closed file 6
Error in opening stat file for device: datastore. Ignoring this device

The following permissions were applied on the slotsfile:
[Image: 04-slotsfile-before]
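To inspect the permissions yourself, list the file from the ESXi shell; "datastore" below is a placeholder for the actual datastore name.

# Show the current permissions on the slotsfile
ls -l /vmfs/volumes/datastore/.iorm.sf/slotsfile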
The Solution
Engineering explained to me that these permissions were not the defaults: the default permissions are read and execute access for everyone and write access for the owner of the file (755). The following command sets the correct permissions on the slotsfile:
chmod 755 /vmfs/volumes/datastore/.iorm.sf/slotsfile
Checking the permissions shows that they have been applied:
[Image: 05-slotsfile-after]
The SIOC statistics started to show the I/O activity on the datastore:
[Image: 06-SIOC-activity-after]
Before changing the permissions on the slotsfile, I stopped the SIOC service on the host with the command: /etc/init.d/storageRM stop
However, I believe this isn't necessary; changing the permissions without stopping SIOC on the host should work.
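Put together, the fix I applied looks like the sketch below. Stopping and restarting storageRM is what I did in my lab and, as noted above, may well be unnecessary; "datastore" is again a placeholder.

# Optionally stop the SIOC service on the host
/etc/init.d/storageRM stop
# Restore the default permissions (rwxr-xr-x) on the slotsfile
chmod 755 /vmfs/volumes/datastore/.iorm.sf/slotsfile
# Verify that the new permissions are applied
ls -l /vmfs/volumes/datastore/.iorm.sf/slotsfile
# Start the SIOC service again if you stopped it
/etc/init.d/storageRM start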
Cause
We are not sure what causes this problem; support and engineering are troubleshooting this error. In my case, I believe it has to do with the frequent restructuring of my lab: vCenter and ESXi servers are reinstalled regularly, but I have never reformatted my datastores. I do not expect this error to appear in stable production environments. Please check the current permissions on the slotsfile if Storage DRS does not show I/O utilization on the datastore (the VMs must be running and the I/O metric on the datastore cluster must be enabled, of course).
I expect the knowledge base article to be available soon.

Filed Under: SIOC


Why is vMotion using the management network instead of the vMotion network?

February 7, 2013 by frankdenneman

On the community forums, I’ve seen some questions about the use of the management network by vMotion operations. The two most common scenarios are explained below; please let me know if you notice this behavior in other scenarios.

Scenario 1: Cross host and non-shared datastore migration
vSphere 5.1 provides the ability to migrate a virtual machine between hosts and non-shared datastores simultaneously. If the virtual machine is stored on a local or non-shared datastore, vMotion uses the vMotion network to transfer the data to the destination datastore. However, when monitoring the VMkernel NICs, some traffic can be seen flowing over the management NIC instead of the VMkernel NIC enabled for vMotion.
When migrating a virtual machine, vMotion distinguishes between hot data and cold data. Virtual disks or snapshots that are actively used are considered hot data, while the cold data consists of the underlying snapshots and the base disk. Let’s use a virtual machine with 5 snapshots as an example: the active data is the most recent snapshot, which is sent across the vMotion network, while the base disk and the 4 older snapshots are migrated via a network file copy operation across the first VMkernel NIC (vmk0).

The reason why vMotion uses separate networks is that the vMotion network is reserved for the migration of performance-related content. If the vMotion network were used for network file copies of cold data, it could saturate the network with non-performance-related content and thereby starve traffic that depends on that bandwidth. Please remember that everything sent over the vMotion network directly affects the performance of the migrating virtual machine.

During a vMotion, the VMkernel mirrors the active I/O between the source and the destination host. If vMotion were to pump the entire disk hierarchy across the vMotion network, it would steal bandwidth from the I/O mirror process and hurt the performance of the virtual machine.
If the virtual machine does not contain any snapshots, the VMDK is considered active and is migrated across the vMotion network. The other files in the virtual machine’s directory are copied across the network of the first VMkernel NIC.
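To see this split for yourself during a migration, you can watch the per-VMkernel-NIC traffic on the source host. The sketch below is just an observation aid; it assumes vmk0 is the first (management) VMkernel NIC and that a separate VMkernel NIC is enabled for vMotion.

# List the VMkernel NICs, their port groups and IP configuration
esxcfg-vmknic -l
# Start esxtop and press 'n' for the network view; during the migration the
# vMotion-enabled VMkernel NIC carries the hot data while vmk0 carries the file copies
esxtop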

Scenario 2: Management network and vMotion network sharing same IP-range/subnet
If the management network (actually the first VMkernel NIC) and the vMotion network share the same subnet (the same IP range), vMotion sends traffic across the network attached to the first VMkernel NIC. It does not matter whether you create the vMotion network on a different standard switch or distributed switch, or assign different NICs to it; vMotion will default to the first VMkernel NIC if the same IP range/subnet is detected.
Please be aware that this behavior only applies to traffic sent by the source host. The destination host receives incoming vMotion traffic on the vMotion network!
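A quick way to check whether you are in this situation is to compare the IP configuration of the first VMkernel NIC and the vMotion-enabled VMkernel NIC from the ESXi shell. The interface names below (vmk0 for management, vmk1 for vMotion) are assumptions based on a typical configuration.

# Show the IPv4 address and netmask of every VMkernel NIC
esxcli network ip interface ipv4 get
# If vmk0 (management) and vmk1 (vMotion) report the same subnet, the behavior
# described above applies; test reachability from a specific interface with vmkping
vmkping -I vmk1 <destination vMotion IP>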

I’ve been conducting an online poll, and more than 95% of the respondents use a dedicated IP range for vMotion traffic. Nevertheless, I would like to remind you that it is recommended to use a separate network for vMotion. The management network is considered an unsecured network, and therefore vMotion traffic should not use it. You might see this behavior in proof-of-concept environments where a single IP range is used for virtual infrastructure management traffic.

If the host is configured with a Multi-NIC vMotion configuration that uses the same subnet as the management network/first VMkernel NIC, vMotion respects the vMotion configuration and only sends traffic through the vMotion-enabled VMkernel NICs.

If you have an environment that uses a single IP range for the management network and the vMotion network, I would recommend creating a Multi-NIC vMotion configuration. If you have a limited number of NICs, you can assign the same NIC to both VMkernel NICs; although you do not leverage the load-balancing functionality, you force the VMkernel to use the vMotion-enabled networks exclusively.

Filed Under: vMotion
