Storage I/O Control (SIOC) and the VMware Disk Scheduler

Several fundamentals are worth covering to understand how VMware SIOC really works. Automatically throttling I/O based on load is a great feature in itself, but what actually triggers this behavior, and what mechanisms make up this feature?

In the storage world, one enemy of performance outweighs all the rest and causes major issues for data access: latency. Simply put, Storage I/O Control is the prioritization of I/O on every ESX server that shares a common datastore, combined with detection of SAN bottlenecks. It accomplishes this through the virtual storage stack, giving priority to the VMs with higher disk shares.

VMware Virtualized Storage Stack

The virtualized storage stack within vSphere has two disk schedulers: the host-level disk scheduler and the datastore disk scheduler.

Host-Level Disk Scheduler: Prioritizes I/O traffic among virtual machines that reside on the same ESX node, but only when the local host bus adapter (HBA) becomes overloaded. This scheduler has been around since the ESX 3.x days and supports configurable limits on I/O throughput.
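To make the idea concrete, here is a minimal sketch of how a proportional-share scheduler could divide a saturated HBA's queue slots among local VMs. The function name, share values, and slot counts are all hypothetical illustrations, not VMware's actual implementation.

```python
def allocate_hba_slots(vms, total_slots):
    """Divide a saturated HBA's queue slots among local VMs in proportion
    to their share values, honoring each VM's optional I/O limit.
    Illustrative sketch only -- not VMware's internal algorithm."""
    total_shares = sum(vm["shares"] for vm in vms)
    allocation = {}
    for vm in vms:
        # Each VM gets a slice of the queue proportional to its shares.
        slots = total_slots * vm["shares"] // total_shares
        limit = vm.get("limit")  # the configurable per-VM throughput limit
        if limit is not None:
            slots = min(slots, limit)
        allocation[vm["name"]] = slots
    return allocation

vms = [
    {"name": "db01",   "shares": 2000},
    {"name": "web01",  "shares": 1000},
    {"name": "test01", "shares": 1000, "limit": 100},
]
print(allocate_hba_slots(vms, 1024))
# → {'db01': 512, 'web01': 256, 'test01': 100}
```

Note that under contention db01 (2000 shares) gets twice the slots of web01, and test01 is clipped by its configured limit regardless of its shares.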

Datastore (SAN) Disk Scheduler: Added in vSphere 4.1, this scheduler operates only on block-based storage (i.e., Fibre Channel or iSCSI) and performs two functions:

1) I/O prioritization across all ESX nodes in the cluster.

2) SAN path contention analysis/calculations.

Both of these functions are driven by the distributed disk scheduler's analysis of each virtual machine's share value.

Remember, SIOC kicks in only when the latency threshold (30 ms by default) has been breached. I also put together a very basic flowchart of this operation so you can see where the logic is injected in this process.
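The decision logic can be sketched roughly as follows, assuming the 30 ms default threshold from above. The function, host layout, and queue-depth numbers are hypothetical illustrations of the idea (throttle per-host device queue depth in proportion to aggregate shares), not VMware's actual code.

```python
DEFAULT_THRESHOLD_MS = 30.0  # default SIOC congestion threshold

def sioc_step(datastore_latency_ms, hosts, device_queue_depth,
              threshold_ms=DEFAULT_THRESHOLD_MS):
    """If observed datastore latency breaches the threshold, return a
    per-host device queue depth proportional to the total shares of the
    VMs each host runs on this datastore; otherwise leave hosts
    unthrottled. Illustrative sketch only."""
    if datastore_latency_ms <= threshold_ms:
        # Below the threshold: no throttling, everyone keeps full depth.
        return {host: device_queue_depth for host in hosts}
    cluster_shares = sum(sum(vm_shares) for vm_shares in hosts.values())
    return {
        host: max(1, device_queue_depth * sum(vm_shares) // cluster_shares)
        for host, vm_shares in hosts.items()
    }

# Shares of each VM on the datastore, grouped by the host running it.
hosts = {"esx01": [2000, 1000], "esx02": [1000]}
print(sioc_step(25.0, hosts, 64))  # → {'esx01': 64, 'esx02': 64}
print(sioc_step(35.0, hosts, 64))  # → {'esx01': 48, 'esx02': 16}
```

The second call shows the cluster-wide behavior: once latency crosses the threshold, esx01's VMs hold 3000 of the 4000 total shares, so esx01 keeps three quarters of the queue depth.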


As an added benefit, SIOC allows for the maximum density per ESX host while maintaining maximum performance on each cluster node.

SIOC Support:

One thing to consider is the threshold setting for SIOC on automatic tiering arrays. It must be adjusted per the vendor's recommendation so that the SAN is not adversely affected by SIOC.

At the time of this writing, there are four things that are not supported by SIOC:

  1. NAS-based arrays (NFS in particular)
  2. RDMs (Raw Device Mappings)
  3. Datastores that have multiple extents
  4. Datastores that are managed by multiple vCenter servers

Look for a subsequent posting on my blog about implementation considerations.

vSphere Management Assistant (vMA) Review

After my last post, I decided to hang out in the vMA for a while to get acclimated with it. I’ve been using it off and on for quite a while now, but over the last few days I have been living in it. I found a few things that I think are pertinent to anyone who uses this.

First off, I should start with the basics. The vSphere Management Assistant comes from VMware in OVF format for easy insertion into your virtual infrastructure. The management station supports any version of ESX or ESXi from 3.5 Update 2 to present. It consumes 512 MB of memory from the host(s) and has a standard 5 GB vDisk attached to it. It ships with two users already set up: the vi-admin account for administrative operations, and the vi-user account with read-only rights for executing vCLI commands. You can even join the vMA to an AD domain if you want to.

I’ve deployed about five of them so far, and here are a few things that I do to these appliances:

1) Give it a static address and a proper DNS name so it is easy to reach and available when you need it. This can be done during the setup phase, or you can run a pre-built script located in /opt/vmware/vma/bin/ – note that you need to become root to run it, so see the next step to reset the password.

2) Change the root password of the appliance (since it is necessary to ‘su’ to run certain scripts on it). You can do this by running the following command: # sudo passwd root

Here are some handy commands and capabilities that you can use for the vMA:

  • Scan for updates to the vMA: # vma-update scan
  • Update the vMA (after you’ve scanned for updates): # vma-update update

Neat capabilities:

  • You can run custom code that forces proprietary software or hardware components to work with ESXi.
  • Developers can use the integrated APIs within the VmaTargetLib library to connect to vMA targets via Java or Perl.
  • Administrators can add vCenter servers as well as ESX/ESXi nodes as potential “targets” on which to run scripts. This allows for a single sign-on type of mechanism.
  • You can even re-use your old service console scripts that you may have used on older ESX systems.