R.D.K holdings S.A

Sunday, 26 January 2014

Changes to vMotion in VMware vSphere 5


Some fundamental changes were made to vMotion in vSphere 5.0 to improve scalability and performance:

- Multi-NIC vMotion support

- Stun During Page Send (SDPS)

- Higher Latency Link Support

- Improved error reporting

Multi-NIC vMotion support

vMotion is now capable of using multiple NICs concurrently to decrease the amount of time required for a vMotion operation. That means that even a single vMotion can leverage all of the configured vMotion NICs.

The following list shows the maximum number of NICs currently supported for multi-NIC vMotion:

- 1GbE – 16 NICs supported
- 10GbE – 4 NICs supported
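Setting up multi-NIC vMotion amounts to creating one vMotion-enabled VMkernel port per physical NIC. A minimal sketch using esxcli on an ESXi 5.x host (the portgroup name, vmk number, and IP addressing below are example values, not from this article):

```shell
# Create an additional VMkernel interface on a dedicated vMotion portgroup
# (portgroup "vMotion-02" and interface vmk2 are hypothetical names).
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-02

# Give it a static IP on the vMotion network (example addressing).
esxcli network ip interface ipv4 set --interface-name=vmk2 \
    --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static

# Enable vMotion traffic on the new interface.
vim-cmd hostsvc/vmotion/vnic_set vmk2
```

Each vMotion portgroup should be teamed so that a different physical NIC is active (with the others as standby); that way a single vMotion can push traffic down all NICs in parallel.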


Stun During Page Send (SDPS)

The “stun” in Stun During Page Send (SDPS) refers to the vCPU of the virtual machine that is being vMotioned. vMotion tracks the rate at which the guest pages are changed, or as the engineers prefer to call it, “dirtied.” This rate is compared to the vMotion transmission rate. If the rate at which the pages are dirtied exceeds the transmission rate, the source vCPUs are placed into a brief sleep state to slow down the dirtying of pages and allow the vMotion process to complete.

To disable SDPS for all virtual machines on a particular host:

1. In the vSphere Client, select the host and click the Configuration tab.
2. Under Software, click Advanced Settings.
3. Click Migrate on the left side, then scroll down to Migrate.SdpsEnabled on the right side.
4. Change the value of Migrate.SdpsEnabled to 0.
5. Click OK.
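The same setting can also be changed from the command line. A sketch using esxcli on an ESXi 5.x host (the option path matches the Advanced Settings name, with dots replaced by slashes):

```shell
# Check the current value of the SDPS setting (default is 1 = enabled).
esxcli system settings advanced list -o /Migrate/SdpsEnabled

# Disable SDPS for this host (0 = disabled).
esxcli system settings advanced set -o /Migrate/SdpsEnabled -i 0
```

As with the GUI method, this applies per host, so it would need to be repeated on every host in the cluster.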

SDPS kicks in when the rate at which pages are dirtied exceeds the rate at which the pages can be transferred to the other host.

Higher Latency Link Support


When discussing long-distance vMotion, the biggest constraint is usually latency. As of vSphere 5.0, the maximum supported round-trip latency has been doubled from 5 ms to 10 ms for environments using Enterprise Plus licensing (a feature referred to as Metro vMotion).
