How to successfully Virtualize MS Exchange – Part 11 – Types of Datastores

Datastores are a logical construct which allows DAS, SAN or NAS storage to be presented to ESXi. In the case of SAN and NAS storage, this is generally “shared storage”, which enables virtualization features such as HA, DRS and vMotion.

When storage is presented to ESXi from DAS or SAN (block-based) storage, it is formatted with VMFS (Virtual Machine File System), whereas file-based storage (NFS) is presented to ESXi as an NFS mount.

Regardless of whether datastores are presented via block (iSCSI, FC, FCoE) or file-based (NFS) protocols, both host VMDKs (Virtual Machine Disks), which are block-based storage. In the case of NFS, the SCSI commands are emulated by the hypervisor. This process is explained in Emulation of the SCSI Protocol and can be compared to Hyper-V SMB 3.0 (file) storage with VHDX, which also emulates SCSI commands over file (SMB 3.0) storage.

The following diagram is courtesy of http://pubs.vmware.com and shows “host1” and “host2” running VMs across VMFS (block) and NFS (file) datastores. Note the VMs residing on datastore1 and datastore2 all have .vmx and .vmdk files and operate in the exact same way from the perspective of the VM, Guest OS and applications.

[Diagram: host1 and host2 accessing VMs on a VMFS datastore and an NFS datastore, courtesy of pubs.vmware.com]

The next paragraph is controversial and may be hotly debated, but to the best of my knowledge, and that of the countless industry experts (from several different vendors) I have investigated this with over the last year, including VMware’s formal position, it is completely true, and I welcome any credible and detailed evidence to the contrary! (I even asked this question of Microsoft here.)

Using either VMFS or NFS datastores meets the technical requirements for Exchange, namely Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands. Because drives within Windows are formatted with NTFS, which is a journalling file system, the requirement to protect against Torn I/O is also met.

With that being said, Microsoft currently do not support Exchange running in VMDKs on NFS datastores.

The below is a quote from Exchange 2013 storage configuration options outlining the storage support statement for MS Exchange, with the final sentence being the part that applies to NFS datastores.

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

If you’re interested in finding out more about MS Exchange running in VMDKs on NFS datastores, see the links at the end of this post.

Now let’s discuss the limitations of datastores, what impact they have on vSphere environments with MS Exchange deployments, and why.

Number of LUNs / NFS Mounts : 256

This can be a significant constraint when using one or more datastores per Exchange MBX/MSR VM; however, in my opinion this should not be necessary, nor is it recommended.

Generally, Exchange VMDKs can be mixed with other VMs in the same datastore providing there is no performance constraint. As such, keep high I/O VMs (including other Exchange VMs) in different datastores.
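To make the point about mixing workloads more concrete, here is a minimal sketch of the kind of sanity check I mean. The helper function, the per-VM IOPS figures and the per-datastore IOPS budget are all assumptions for illustration; they are not values from this post or from any VMware tool.

```python
# Hypothetical check: does a mixed-workload datastore have enough I/O headroom
# left to take on an Exchange VMDK? All numbers are illustrative only.
def can_host_exchange_vmdk(existing_vm_iops, exchange_vmdk_iops, datastore_iops_budget):
    """Return True if the datastore can absorb the Exchange VMDK's expected IOPS."""
    return sum(existing_vm_iops) + exchange_vmdk_iops <= datastore_iops_budget

# Three low I/O VMs already on the datastore, with a 5,000 IOPS budget: safe to mix.
print(can_host_exchange_vmdk([300, 450, 200], 1500, 5000))   # True

# The datastore already hosts another high I/O (e.g. Exchange) VM: place elsewhere.
print(can_host_exchange_vmdk([3500, 450, 200], 1500, 5000))  # False
```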

As discussed in Part 10, if legacy per-LUN, snapshot-based backup solutions are being used, then in-Guest iSCSI may have to be used, but for new deployments, especially where new storage will be purchased, per-LUN solutions should not be considered!

Number of Paths per ESXi host : 1024

If using VMFS datastores, the simple fact is that 4 paths per LUN is the maximum you can use if you plan to reach, or come near, the limit of 256 datastores (1024 paths / 256 LUNs = 4 paths per LUN). This is not a performance-limiting factor with any enterprise-grade storage solution.

Number of Paths per LUN : 32

If you configure 32 paths per LUN, you immediately restrict yourself to 32 LUNs per ESXi host (and vSphere cluster), so don’t do it! As mentioned earlier, 4 paths per LUN is the maximum if you plan to reach 256 datastores. Do the math and this limit is not a problem.
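For those who like to see the math, here is a trivial sanity check of the maximums quoted above; the constants are simply the vSphere configuration maximums already referenced in this post.

```python
# vSphere configuration maximums referenced in this post
MAX_PATHS_PER_HOST = 1024
MAX_DATASTORES_PER_HOST = 256
MAX_PATHS_PER_LUN = 32

# Paths per LUN you can afford while still scaling to 256 datastores per host
print(MAX_PATHS_PER_HOST // MAX_DATASTORES_PER_HOST)  # 4

# LUNs per host if every LUN is configured with the 32-path maximum
print(MAX_PATHS_PER_HOST // MAX_PATHS_PER_LUN)        # 32
```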

Number of Paths per NFS mount : N/A

NFS mounts connect over IP to the storage controllers via vNetworking, so there is no maximum as such, although with NFS v3 only one physical NIC will be used at a time per IP subnet. This topic will be covered in a future post on vNetworking for MS Exchange.

VMFS Datastore Maximum Size : 64TB

256 LUNs x 64TB = 16,384TB per vSphere HA cluster. This is not a problem.

NFS Datastore Maximum Size : Varies depending on vendor

The limit depends on the vendor, but it’s typically higher than the 64TB VMFS limit, with some vendors having no limit at all.

So you’re safe to assume >= 16,384TB per cluster (256 mounts x 64TB), but always check with your current or potential storage vendor.

ESXi hosts per Volume (Datastore) : 64 (Note: HA cluster limit of 32)

As a vSphere cluster is currently limited to 32 hosts, this limitation isn’t really an issue. With vSphere 6.0 it is expected the cluster size will increase to 64, but ESXi hosts per volume is not a maximum I have ever heard of being reached.

Recommendations:

1. If you require a fully supported configuration use VMFS datastores.
2. Maximum 4 paths per LUN to ensure maximum scalability (if required).
3. Consider the underlying storage configuration and type of a datastore before deploying MS Exchange.
4. Do not deploy MS Exchange VMDKs onto datastores with other high I/O workloads.
5. When mixing workloads on a datastore, enable SIOC to ensure fairness between workloads in the event of storage contention.
6. Spread Exchange VMDKs across multiple datastores for maximum performance and resiliency, e.g.: 12 VMDKs per Exchange MBX/MSR VM across 4 mixed-workload datastores (see the sketch after this list).
7. Do not use dedicated datastore/s per MS Exchange database or VM. (This is unnecessary from a performance perspective)
8. If choosing to use NFS datastores, purchase Premier Support from Microsoft and negotiate support for NFS. Microsoft do provide support to many Premier Support customers running Exchange on NFS datastores, although it is not their preference.
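As a trivial illustration of recommendation 6 above, the sketch below round-robins the 12 VMDKs of a single Exchange MBX/MSR VM across 4 mixed-workload datastores. The datastore and VMDK names are made up purely for the example.

```python
# Round-robin 12 Exchange VMDKs across 4 mixed-workload datastores.
# Datastore and VMDK names are hypothetical, for illustration only.
datastores = ["DS-Mixed-01", "DS-Mixed-02", "DS-Mixed-03", "DS-Mixed-04"]
vmdks = [f"EXCH-MBX-01_Disk{n:02d}.vmdk" for n in range(1, 13)]  # 12 VMDKs

placement = {ds: [] for ds in datastores}
for index, vmdk in enumerate(vmdks):
    placement[datastores[index % len(datastores)]].append(vmdk)

for ds, disks in placement.items():
    print(ds, "->", disks)  # 3 VMDKs per datastore
```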

On a final note, in future posts I will be discussing the underlying storage in detail from a performance and availability perspective, with Database Availability Groups in mind.

Thank you to @mattliebowitz for reviewing this post. I highly recommend his book Virtualizing Business Critical Applications by VMware Press. I purchased and reviewed this book in mid-2014; it is well worth a read!

Back to the Index of How to successfully Virtualize MS Exchange.

Articles on MS Exchange running in VMDK on NFS datastores

1. Support for Exchange Databases running within VMDKs on NFS datastores

2. Microsoft Exchange Improvements Suggestions Forum – Exchange on NFS/SMB

3. What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?

4. Integrity of I/O for VMs on NFS Datastores Series

Part 1 – Emulation of the SCSI Protocol
Part 2 – Forced Unit Access (FUA) & Write Through
Part 3 – Write Ordering
Part 4 – Torn Writes
Part 5 – Data Corruption

How to successfully Virtualize MS Exchange – Part 10 – Presenting Storage direct to the Guest OS

Let’s start by listing three common storage types which can be presented direct to a Windows OS:

1. iSCSI LUNs
2. SMB 3.0 shares
3. NFS mounts

Next let’s discuss these 3 options.

iSCSI LUNs are a common way of presenting storage direct to the Guest OS, even in vSphere environments, and can be useful for environments using storage array level backup solutions (which will be discussed in detail in an upcoming post).

The use of iSCSI LUNs is fully supported by VMware and Microsoft, as iSCSI meets the technical requirements for Exchange, namely Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands. iSCSI LUNs presented to Windows are then formatted with NTFS, a journalling file system, which also protects against Torn I/O.

In vSphere environments nearing the configuration maximum of 256 datastores per ESXi host (and therefore HA/DRS cluster), presenting iSCSI LUNs directly to applications such as Exchange can help ensure scalability even where vSphere limits may have been reached.

Note: I would recommend reviewing the storage design and trying to optimize VMs per LUN etc. first, before resorting to iSCSI LUNs presented to VMs.

The problem with iSCSI LUNs is that they result in additional complexity compared to using VMDKs on datastores (discussed in Part 11). The complexity is not insignificant: typically multiple LUNs need to be created per Exchange VM, and things like iSCSI initiators and LUN masking need to be configured. Then, when the iSCSI initiator driver is updated (say via Windows Update), you may find your storage disconnected and need to troubleshoot iSCSI driver issues. You also need to consider the vNetworking implications, as the VM now needs IP connectivity to the storage network.

I wrote this article (Example VMware vNetworking Design w/ 2 x 10GB NICs for IP Storage) a while ago showing an example vNetworking design that supports IP storage with 2 x 10GB NICs.

The above article shows NFS in the dvPortGroup name, but the same configuration is also optimal for iSCSI. Each Exchange VM would then need a second vmNIC connected to the iSCSI portgroup (or dvPortgroup), ideally with a static IP address.
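For those who want to automate that step, below is a rough sketch using pyVmomi (the vSphere Python SDK) which hot-adds a second vmxnet3 vmNIC to an Exchange VM and attaches it to an iSCSI port group on a standard vSwitch. The vCenter address, credentials, VM name and port group name are placeholders I have made up, and a distributed port group would require DistributedVirtualPortBackingInfo rather than the NetworkBackingInfo shown here. Treat it as a starting point, not a hardened script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details - replace with your own (lab only: cert checks disabled).
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type whose name matches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "EXCH-MBX-01")  # hypothetical Exchange VM
pg = find_by_name(vim.Network, "PG-iSCSI")            # hypothetical standard port group

# Build a new vmxnet3 NIC backed by the iSCSI port group.
nic = vim.vm.device.VirtualVmxnet3()
nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
    network=pg, deviceName=pg.name)
nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
    startConnected=True, connected=True)

# Hot-add the NIC via a reconfigure task; the static IP is then set inside the Guest OS.
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)])
vm.ReconfigVM_Task(spec=spec)

Disconnect(si)
```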

IP addressing is another complexity added by presenting storage direct to VMs rather than using VMDKs on datastores.

Many system administrators, architects and engineers might scoff at the suggestion that iSCSI is complex. While I don’t find iSCSI at all difficult to design, install, configure and use, in my opinion it is significantly more complex, and has many more points of failure, than using a VMDK on a datastore.

One of the things I have learned and seen benefit countless customers over the years is keeping things as simple as possible while meeting the business requirements. With that in mind, I recommend only considering the use of iSCSI direct to the Guest OS in the following situations:

1. When using a backup solution which triggers a storage-level snapshot which is not VM or VMDK based, i.e.: where snapshots are only supported at the LUN level (older storage technologies).
2. Where ESXi scalability maximums are going to be reached and creating a separate cluster is not viable (technically and/or commercially) following a detailed review and optimization of storage for the vSphere environment.
3. When using legacy storage architecture where performance is constrained at a datastore level. e.g.: Where increasing the number of VMs per Datastore impacts performance due to latency created from queue depth or storage controller contention.
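To illustrate the third situation, here is a back-of-the-envelope latency estimate using Little’s Law (average latency ≈ outstanding I/Os ÷ IOPS). The queue depths and IOPS figures are assumptions chosen purely to show the shape of the problem, not measurements from any particular array.

```python
def avg_latency_ms(outstanding_ios, iops):
    """Approximate average I/O latency (ms) via Little's Law: latency = outstanding / throughput."""
    return outstanding_ios / iops * 1000

# One VM driving 32 outstanding I/Os against a backend delivering 10,000 IOPS.
print(avg_latency_ms(32, 10_000))      # 3.2 ms

# Four similar VMs sharing the same LUN queue and the same 10,000 IOPS backend.
print(avg_latency_ms(4 * 32, 10_000))  # 12.8 ms - latency grows with queue depth contention
```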

Next let’s discuss SMB 3.0 / CIFS shares.

SMB 3.0 or CIFS shares are commonly used to present storage for Hyper-V and also for file servers. However, presenting SMB 3.0 directly to Windows is not a supported configuration for MS Exchange, because SMB 3.0 presented to the Guest OS directly does not meet the technical requirements for Exchange, such as Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands.

However, SMB 3.0 is supported for MS Exchange when presented to Hyper-V, with the Exchange database files residing within a VHD which emulates the SCSI commands over the SMB file protocol. This will be discussed in the upcoming Hyper-V series.

The below is a quote from Exchange 2013 storage configuration options outlining the storage support statement for MS Exchange.

All storage used by Exchange for storage of Exchange data must be block-level storage because Exchange 2013 doesn’t support the use of NAS volumes, other than in the SMB 3.0 scenario outlined in the topic Exchange 2013 virtualization. Also, in a virtualized environment, NAS storage that’s presented to the guest as block-level storage via the hypervisor isn’t supported.

The above statement is pretty confusing in my opinion, but what Microsoft mean is that SMB 3.0 is supported when presented to Hyper-V, with Exchange running in a VM and its databases housed within one or more VHDs. However, to be clear, presenting SMB 3.0 direct to Windows for Exchange files is not supported.

NFS mounts can be used to present storage to Windows, although this is not that common. It’s important to note that presenting NFS directly to Windows is not a supported configuration for MS Exchange and, as with SMB 3.0, presenting NFS directly to Windows does not meet the technical requirements for Exchange, namely Write Ordering, Forced Unit Access (FUA) and SCSI abort/reset commands (unlike iSCSI LUNs, which are formatted with NTFS, a journalling file system which also protects against Torn I/O).

As such I recommend not presenting NFS mounts to Windows for Exchange storage.

Note: Do not confuse presenting NFS to Windows with presenting NFS datastores to ESXi, as these are different. NFS datastores will be discussed in Part 11.

Summary:

iSCSI is the only supported storage protocol to present storage direct to Windows for storage of Exchange databases.

Let’s now discuss the pros and cons of presenting iSCSI storage direct to the Guest OS.

PROS

1. The ability to reduce the overheads of legacy LUN-based, snapshot-based backup solutions by having MS Exchange use dedicated LUN/s, therefore reducing the delta changes that need to be captured/stored (e.g.: NetApp SnapManager for Exchange).
2. Does not impact ESXi configuration maximums for LUNs per ESXi host, as storage is presented to the Guest OS and not the hypervisor.
3. Dedicated LUN/s per MS Exchange VM can potentially improve performance depending on the underlying storage capabilities and design.

CONS

1. Complexity, e.g.: having to create, present and manage LUN/s per Exchange MBX/MSR VM.
2. Having to manage and potentially troubleshoot iSCSI drivers within a Guest OS
3. Having to design for IP storage traffic to access VMs directly, which requires additional vNetworking considerations relating to performance and availability.

Recommendations:

1. When choosing to present storage direct to the Guest OS, only iSCSI is supported.
2. Where no requirements or constraints exist that require the use of storage presented to the Guest OS directly, use the VMDKs on datastores option, which is discussed in Part 11.
3. Use a dedicated vmNIC on the Exchange VM for iSCSI traffic
4. Use NIOC to ensure sufficient bandwidth for iSCSI traffic in the event of network congestion. Recommended share values along with justification can be found in Example Architectural Decision – Network I/O Control Shares/Limits for ESXi Host using IP Storage.
5. Use a dedicated VLAN for iSCSI traffic
6. Do NOT present SMB 3.0 or NFS direct to the Guest OS and use for Exchange Databases!

Back to the Index of How to successfully Virtualize MS Exchange.

How to successfully Virtualize MS Exchange – Part 9 – Raw Device Mappings (RDMs)

A Raw Device Mapping or “RDM” allows a VM to access a volume (or LUN) on the physical storage via either Fibre Channel or iSCSI.

When discussing Raw Device Mappings, it is important to highlight there are two types of RDM modes, Virtual Compatibility Mode and Physical Compatibility Mode.

See the following article for a detailed breakdown of the Difference between Physical compatibility RDMs and Virtual compatibility RDMs (2009226).

So how does an RDM compare to a VMDK on a Datastore?

VMware released a white paper called Performance Characterization of VMFS and RDM Using a SAN in 2008, which debunked the myth that RDMs gave significantly higher performance than VMDKs on datastores.

So RDMs have NO performance advantage over VMDKs on a Datastore.

With that in mind, what advantages (if any) do RDMs have today?

VMware released their Microsoft Exchange 2010 on VMware Best Practices Guide, which has the following table on page 14 showing the trade-offs between VMFS and RDMs.

[Table: VMFS vs RDM trade-offs, from the Microsoft Exchange 2010 on VMware Best Practices Guide, page 14]

I have highlighted one advantage that RDMs still have over VMDKs on datastore-style deployments, which is the ability to migrate from a physical Exchange server using centralized SAN storage to a VM without data migration.

However, I find the most common way to migrate from a physical deployment to a virtual deployment is by performing Mailbox migrations to virtualized Exchange servers in an ESXi environment. This avoids the complexities of RDMs and ensures no capacity on the shared storage is wasted (i.e.: Siloed).

The table also lists one other advantage for RDMs, being support for up to 64TB drives, whereas virtual disks were limited to 2TB on VMFS; however, this limitation has since been lifted, with VMDKs supporting up to 62TB as of vSphere 5.5.

Recommendation: Do not use RDMs for MS Exchange deployments.

As with the local storage discussed in Part 7, RDM deployments have more downsides (mainly around inefficiency and complexity) than upsides, and I would recommend considering other storage options for virtualized Exchange deployments.

Other options along with my recommended options will be discussed in the next 2 parts of this series and in upcoming posts on Storage performance and resiliency.

Back to the Index of How to successfully Virtualize MS Exchange.