VMware VMotion Improvements in VMware vSphere 5

When vSphere 5 arrived, most bloggers chose to cover the improvements to VMware HA, as it was totally revamped. I have decided instead to cover the improvements to VMware VMotion in vSphere 5. VMware VMotion gained as much in vSphere 5 as VMware HA did, but it seems to have been forgotten by most bloggers. Below I summarize some of the nice improvements VMware VMotion has gained with the release of vSphere 5.

– In vSphere 4, up to 4 simultaneous VMotions per host were supported with 1Gbps NICs and up to 8 with 10Gbps NICs. vSphere 5 extends this by supporting multiple NICs for VMotion, which increases the number of simultaneous VMotions allowed. A maximum of four 10Gbps NICs or sixteen 1Gbps NICs can now be used for VMotion in vSphere 5. Doing the math shows that you can evacuate an ESXi 5 host very quickly with that many simultaneous VMotions. Further, even if you are not running as many simultaneous VMotions as allowed between your hosts, vSphere 5 will balance the VMotion traffic across the free adapters to speed up the process (see the first sketch after this list).

– As memory must be copied during any live migration, including VMware VMotion, busier VMs with high memory activity used to take longer to migrate in earlier versions of vSphere. In fact, if the memory load was too high, the VMotion could fail to converge and the process would be safely rolled back. In vSphere 5, VMotion is taken to the next level: it can now slow the VM down for a very short period of time that is not noticeable to applications or users, but just enough to ensure the copying of memory pages completes in time. This feature is called Stun During Page Send (SDPS). SDPS ensures that VMotion can complete for almost any VM, no matter how heavy its memory workload is, and that the migration completes much faster than ever before, with the final switchover normally taking less than 1 second (see the second sketch after this list).

– vSphere 5 improves the integration between VMware VMotion and resource pools. In earlier versions, after a VM was VMotioned it would be left in the root resource pool for a while, until DRS was invoked and moved it back into its correct resource pool. In vSphere 5, VMotion can place the VM directly into the correct resource pool without having to wait for DRS.

– Logging in vSphere 5 has been dramatically improved, and error messages are clearer, making VMotion easier to debug.

– Metro VMotion: the vSphere 5 Enterprise Plus edition can now tolerate up to 10ms of round-trip latency, which allows VMotion over higher-latency networks. Editions below Enterprise Plus are still limited to the same 5ms round-trip latency as vSphere 4.1.
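
To make the multi-NIC point a bit more concrete, here is a minimal, hedged pyVmomi sketch of how extra VMkernel adapters could be tagged for VMotion programmatically. The vCenter address, host name, credentials, and vmk device names are all placeholder assumptions, and certificate handling is omitted; the same result is normally achieved in the vSphere Client by simply enabling vMotion on each VMkernel port.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (address and credentials are assumed placeholders).
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator",
                      pwd="password")
    content = si.RetrieveContent()

    # Find the ESXi 5 host to configure (host name is an assumption).
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.local")

    # Tag each VMkernel adapter for the "vmotion" NIC type; with more than one
    # tagged adapter, vSphere 5 can spread VMotion traffic across all of them.
    nic_mgr = host.configManager.virtualNicManager
    for vmk in ("vmk1", "vmk2"):        # assumed vMotion VMkernel devices
        nic_mgr.SelectVnicForNicType("vmotion", vmk)

    Disconnect(si)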
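
And to illustrate the reasoning behind SDPS, below is a purely illustrative toy model in Python (this is not VMware's implementation, and all numbers are made-up assumptions). It shows why pre-copy migration needs to slow a very busy guest: if pages are dirtied faster than the VMotion link can send them, the remaining copy set never shrinks, while a slightly slowed guest converges in a few passes.

    def precopy_iterations(memory_mb, dirty_rate_mb_s, link_rate_mb_s,
                           switchover_threshold_mb=64, max_iters=20):
        """Return how many pre-copy passes are needed before the leftover
        dirty memory is small enough to switch over, or None if the copy
        never converges within max_iters."""
        remaining = memory_mb
        for i in range(1, max_iters + 1):
            send_time = remaining / link_rate_mb_s       # seconds to send this pass
            remaining = dirty_rate_mb_s * send_time      # memory re-dirtied meanwhile
            if remaining <= switchover_threshold_mb:
                return i
        return None

    # A busy 8 GB VM dirtying 200 MB/s over a 1 Gbps link (~125 MB/s) never converges:
    print(precopy_iterations(8192, dirty_rate_mb_s=200, link_rate_mb_s=125))  # -> None

    # Briefly slowing the guest (the idea behind SDPS) cuts the dirty rate below
    # the link rate, and the copy converges in a handful of passes:
    print(precopy_iterations(8192, dirty_rate_mb_s=60, link_rate_mb_s=125))   # -> 7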

I hope this article helps show some of the great improvements vSphere 5 has added to VMotion.

Comments

  1. I’ve created two videos showing the new and enhanced vMotion features in vSphere 5
    Video – Metro vMotion in vSphere 5.0 – http://www.ntpro.nl/blog/archives/1868-Video-Metro-vMotion-in-vSphere-5.0.html
    Video – Running vMotion on multiple-network adaptors – http://www.ntpro.nl/blog/archives/1859-Video-Running-vMotion-on-multiplenetwork-adaptors.html

  2. Thanks Eric for pointing the videos out :).
