Don’t use VMware Raw Device Mapping (RDM) for performance, but…

Until today, everywhere I looked recommended using RDM over VMFS for performance (when higher I/O is required). Today, though, while studying for my VMware Design Exam, I got my hands on the “Performance Characterization of VMFS and RDM Using a SAN” document by VMware. The interesting executive summary of the document is copied to the letter below:

=======

Executive Summary:

The main conclusions that can be drawn from the tests described in this study are:

  • For random reads and writes, VMFS and RDM yield a similar number of I/O operations per second.
  • For sequential reads and writes, performance of VMFS is very close to that of RDM (except on sequential reads with an I/O block size of 4K). Both RDM and VMFS yield a very high throughput in excess of 300 megabytes per second depending on the I/O block size.
  • For random reads and writes, VMFS requires 5 percent more CPU cycles per I/O operation compared to RDM.
  • For sequential reads and writes, VMFS requires about 8 percent more CPU cycles per I/O operation compared to RDM.

=======

Reading the above executive summary, I was kind of shocked at first glance. Looking further at the graphs showing the test numbers, they were close enough that I felt the difference between RDM & VMFS might not be noticeable. The conclusion I have drawn from that paper, which is clearly stated in its summary as well, is below:

You should limit the usage of RDM to the few special cases that require the use of raw disks. Backup applications that use such inherent SAN features as snapshots or clustering applications (for both data and quorum disks) require raw disks. RDM is recommended for these cases. We recommend use of RDM for these cases not for performance reasons but because these applications require lower-level disk control.

My final recommendation is to avoid using VMware Raw Device Mapping if you don’t need it for any of the special cases mentioned above, as it does not give you much of a performance improvement & would restrict you from using many VMware ESX capabilities, including the following (see the inventory sketch after the list):

  • No migrating VMs with physical mode RDMs if the migration involves copying the disk (Storage VMotion)
  • No VMotion with physical mode RDMs when any clustering software is used.
  • No VMware snapshots with physical mode RDMs
  • No VCB support with physical mode RDMs, because VCB requires VMware snapshots
  • No cloning VMs that use physical mode RDMs
  • No converting VMs that use physical mode RDMs into templates
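
If you are not sure whether (or where) you are using RDMs today, a quick inventory helps before acting on this recommendation. Below is a minimal sketch using VMware’s pyVmomi Python SDK; the vCenter hostname, credentials, and the unverified-SSL shortcut are placeholders of mine, not anything from the VMware paper:

```python
# Minimal inventory sketch (pyVmomi): list each VM disk and whether it is an
# RDM, and in which compatibility mode. Hostname/credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab shortcut: skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    rdm_backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo
    for vm in view.view:
        if vm.config is None:  # skip VMs whose config is not readable
            continue
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualDisk):
                continue
            if isinstance(dev.backing, rdm_backing):
                # physicalMode disks are the ones hit by the restrictions above
                print(vm.name, dev.deviceInfo.label,
                      "RDM:", dev.backing.compatibilityMode)
            else:
                print(vm.name, dev.deviceInfo.label, "VMFS-backed")
    view.DestroyView()
finally:
    Disconnect(si)
```

Any disk reported with compatibilityMode set to physicalMode is subject to the restrictions in the list above.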

I feel this post will start a nice argument & would like to hear other experts’ opinions about the subject in the comments area below. I appreciate any feedback about my findings.

Comments

  1. Jeff Snavely says

    I know it probably didn’t affect the outcome, but their choice of disk configuration strikes me as really odd. I am left wondering why you would choose that for your test configuration.

  2. Benjamin Troch says

    Hi, VMotion is possible with both physical and virtual RDMs. You might want to change that in the post.
    Cheers, Ben

  3. Hi Ben,

    Thanks for the note. I am not sure how I missed that & have fixed it now as follows:

    * No migrating VMs with physical mode RDMs if the migration involves copying the disk (Storage VMotion)
    * No VMotion with physical mode RDMs when any clustering software is used.

    Regards,
    Eiad

  4. Ashraf Zorkani says

    Hi, this was a good conclusion for me, as I am studying the feasibility of hosting Oracle RAC over ESX 4.1. I am trying to build a workable scenario of how this will be done, taking into consideration all other VMware features such as server vMotion and other stuff!

  5. Hi Ashraf,

    I thought it was worth mentioning that Oracle RAC is officially supported on vSphere. If you would like any more info, just e-mail me.

    By the way, I am one of the Systems Engineers covering your region, so feel free to contact me if you need any official VMware help around this & I can hook you up with the right contacts.

    Regards,
    Eiad Al-Aqqad

  6. Thanks for this post. I’ve referenced it too many times to count. Do the recommendations change with VMFS5?

  7. I am glad it was helpful, errsta. VMFS 5 performance has improved quite a bit & is now so close to what RDM gives that the difference is hardly noticeable.

  8. Hello –

    Nice write-up. I’m interested in migrating some VMs using RDMs to standard virtual disks. Do you know of any snazzy tools out there that can do this in the background? I had a vague recollection that this could be done via svmotion, but I’m finding no proof of that now. I have used guest OS solutions to sync data from the old to the new volume in the past, but there’s always a brief outage at cut-over with these solutions. Would love to figure out a way to do it live.

    Thanks!

    Brent

  9. Hi Brent,

    svmotion should do it, & it asks you whether you want to convert to VMDK files or not. If it’s physical RDMs, then converting to VMDK files is actually the only option it gives; see the sketch below.
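
    Here is a rough pyVmomi sketch of scripting that conversion (untested as-is; the VM name, target datastore, and connection details are placeholders of mine):

    ```python
    # Hedged sketch: ask Storage VMotion to convert a VM's RDM disks to thin
    # VMDKs using pyVmomi. All names and credentials below are placeholders.
    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                      sslContext=ctx)
    content = si.RetrieveContent()

    def find_by_name(vimtype, name):
        """Naive inventory lookup by name."""
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.DestroyView()

    vm = find_by_name(vim.VirtualMachine, "my-vm")
    ds = find_by_name(vim.Datastore, "target-datastore")

    spec = vim.vm.RelocateSpec(datastore=ds)
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                dev.backing,
                vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
            # Re-declare the backing as a flat thin VMDK; vSphere copies the data.
            spec.disk.append(vim.vm.RelocateSpec.DiskLocator(
                diskId=dev.key,
                datastore=ds,
                diskBackingInfo=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
                    fileName="",            # empty: let vSphere pick the path
                    diskMode="persistent",
                    thinProvisioned=True)))

    task = vm.RelocateVM_Task(spec=spec)  # runs as an ordinary Storage VMotion
    Disconnect(si)
    ```

    Virtual mode RDMs convert live this way; physical mode RDMs need the VM powered off first, per the restriction in the post.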

    Regards,
    Eiad

  10. I’ve been running performance benchmarks on VMFS and on both physical and virtual RDM on ESXi 5.1, and I have to disagree.
    My benchmark results show RDM (there is not much difference between virtual and physical mode) performing more than twice as well (VMFS vs RDM):
    Total IO/s: 1743 vs 4644
    Avg IO response: 2.29ms vs 0.85ms
    Throughput MB/s: ~230 vs ~630

    Tests were conducted using 4 worker threads:
    -4k 50% read write random
    -64k 100% read seq
    -64k 100% write seq
    -64k 50% read write seq

  11. Hi Tom,

    This is quite interesting, as your results seem to contradict most benchmarks done by many others, including people at VMware who have published articles on it; you can find both the articles and the results in the following post: http://blogs.vmware.com/vsphere/2013/01/vsphere-5-1-vmdk-versus-rdm.html. Like many others, I would be interested to know more about your test configuration: what type of storage? Storage configuration? Which servers? What kind of HBAs? And so on.
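
    In the meantime, sharing exact workload definitions would help others reproduce your numbers. Here is a rough Python wrapper around fio that approximates your four patterns (a sketch only: it assumes fio is installed in the guest, and the device path and runtime are my assumptions):

    ```python
    # Hypothetical sketch: drive the four Iometer-style patterns above with fio.
    # WARNING: the write patterns destroy any data on DEVICE.
    import subprocess

    DEVICE = "/dev/sdb"  # disk under test: an RDM or a VMFS-backed VMDK

    PATTERNS = [
        ("randrw-4k", ["--rw=randrw", "--rwmixread=50", "--bs=4k"]),
        ("read-64k",  ["--rw=read",   "--bs=64k"]),
        ("write-64k", ["--rw=write",  "--bs=64k"]),
        ("rw-64k",    ["--rw=rw",     "--rwmixread=50", "--bs=64k"]),
    ]

    for name, args in PATTERNS:
        cmd = ["fio", "--name=" + name, "--filename=" + DEVICE,
               "--numjobs=4",          # matches the 4 worker threads
               "--direct=1",           # bypass the guest page cache
               "--ioengine=libaio",
               "--runtime=60", "--time_based",
               "--group_reporting"] + args
        subprocess.run(cmd, check=True)
    ```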

    Thanks,
    Eiad

  12. Ravinder Bahadur says

    Hi Eiad,

    We are doing a RAC setup on Cisco servers running vSphere 5.5. We are facing a challenge in configuring the disks to be visible to both RAC node VMs. The site uses EMC storage and RDMs.

    Any help would be appreciated.

    Thanks & Regards
