How to successfully Virtualize MS Exchange – Part 8 – Local Storage

As discussed in Part 7, Local Storage is probably the most basic form of storage we can present to ESXi and use for Exchange MBX/MSR VMs.

The below screen shot shows what local storage can look like to an ESXi host.

LocalStorage

As we can see above, the highlighted datastore is simply an SSD formatted with VMFS5. In this case it is a single drive with no RAID, so in the event of the drive failing, any data on the drive would be permanently lost.

Note: The above image is simply an example. In reality, multiple drives (most likely SAS or SATA) would be used, as SSD is unnecessary for Exchange.

In some ways this is very similar to a physical Exchange deployment on JBOD storage, so I would like to echo the recommendation Microsoft gives for JBOD deployments in the Exchange 2013 storage configuration options guide and say that for JBOD deployments, I strongly recommend at least three database copies.
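For those who prefer to script it, adding the extra copies is one line per copy in the Exchange Management Shell. The sketch below is a minimal example only; the database name (DB01), the server names (MBX02, MBX03) and the activation preferences are placeholders.

# Exchange Management Shell - add two additional copies of DB01 so the database
# exists on three servers in total (names and preferences are placeholders).
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX02" -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX03" -ActivationPreference 3

# Check the health of all copies of the database.
Get-MailboxDatabaseCopyStatus -Identity "DB01" | Format-Table Name,Status,CopyQueueLength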

As per the recommendation in Part 4 (DRS), MS Exchange MBX/MSR VMs should always run on separate ESXi hosts to ensure a single host failure does not potentially cause an issue for the DAG. This is especially important because if two Exchange servers shared the same ESXi host and local storage, a single ESXi host outage could cause data loss and downtime for part or all of the Exchange environment.

The below is a screen shot from the Exchange 2013 storage configuration options guide showing the recommendations based on RAID or JBOD deployments. In my opinion these recommendations also apply to virtualized Exchange deployments on Local storage.

JBODexchange

Another option is to use Local Storage in a RAID configuration to eliminate the Single Point of Failure (SPOF) of a single drive failure.

Again, I agree with Microsoft’s recommendations and suggest at least two database copies when using a RAID configuration and again, each Exchange VM must run on its own ESXi host on dedicated physical disks.

Note: The RAID controller itself is still a SPOF, which is why multiple copies are recommended from both an availability and data protection perspective.

Let’s now discuss the pros and cons for using Local Storage with JBOD for your Virtualized Exchange Deployment.

PROS

1. Generally lower cost per GB than centralized storage (e.g.: SAN)
2. Higher usable capacity per drive compared to local RAID or centralized storage configurations using RAID or other proprietary data protection techniques.
3. Local JBOD Storage formatted with VMFS is a fully supported configuration

CONS

1. No protection from data loss in the event of a JBOD drive failure. Note: For non-DAG deployments, RAID and 3rd party backups should always be used!
2. Performance/Capacity in JBOD deployments is limited to the capabilities of a single drive.
3. Loss of Virtualization functionality such as HA / DRS and vMotion (without performing a Storage vMotion every time)
4. Can be difficult/costly to scale when nearing capacity.
5. Increased Management (Operational) overheads managing decentralized storage
6. At least 3 database copies are recommended, requiring more Exchange MBX/MSR servers.
7. Little/no protection against data corruption which may lead to all DAG copies suffering corruption. Note: If the corruption is not discovered in time, LAGGED copies can also be compromised.
8. Capacity cannot be shared between ESXi hosts which may lead to inefficient use of the available capacity.

Next here are some pros and cons for using Local Storage with RAID for your Virtualized Exchange Deployment.

PROS

1. Generally lower cost per GB than centralized storage (e.g.: SAN)
2. A single drive failure will not cause data loss or a DAG failover
3. Performance is not limited to a single drive's capabilities
4. Local Storage with RAID formatted with VMFS is a fully supported configuration
5. As there is no data loss with a single drive failure, fewer database copies are required (2 instead of >=3 for JBOD)

CONS

1. Increased Management (Operational) overheads managing decentralized storage
2. Performance/Capacity is limited to the capabilities of a single host's local RAID controller and drives
3. Loss of Virtualization functionality such as HA / DRS and vMotion (without performing a Storage vMotion every time)
4. Little/no protection against data corruption which may lead to all DAG copies suffering corruption. Note: If the corruption is not discovered in time, LAGGED copies can also be compromised.
5. Capacity cannot be shared between ESXi hosts which may lead to inefficient use of the available capacity
6. Performance is constrained by a single RAID controller / set of drives and can be difficult/costly to scale when nearing capacity.

For more information about data corruption for JBOD or RAID deployments, see “Data Corruption“.

Recommendations:

1. When using local storage (JBOD or RAID), as per Part 4, run only one Exchange MBX/MSR VM per ESXi host (see the sketch following these recommendations)
2. Use dedicated physical disks for Exchange MBX/MSR VMs (i.e.: Do not share the same disks with other workloads)
3. Store the Windows OS / Exchange application VMDK on local storage which is configured with RAID to ensure a single drive failure does not cause an outage for the VM.
4. Ensure ESXi itself is installed on local storage configured with RAID (and not a USB key), as the Exchange VM is dependent on that host and is not protected by vSphere HA, nor is it easily/quickly portable due to the storage not being shared.
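The host separation in recommendation 1 can also be enforced via DRS. The following PowerCLI sketch creates a simple VM anti-affinity rule to keep the Exchange VMs on separate hosts; note this is a plain anti-affinity rule rather than the VM to Host "should" rules discussed in Part 4, and the cluster and VM names are placeholders.

# PowerCLI - keep the Exchange MBX/MSR VMs on separate ESXi hosts
# (hypothetical cluster and VM names).
$cluster = Get-Cluster -Name "Exchange-Cluster"
$exchangeVMs = Get-VM -Name "EXCH-MBX01","EXCH-MBX02","EXCH-MBX03"
New-DrsRule -Cluster $cluster -Name "Separate-Exchange-VMs" -KeepTogether $false -VM $exchangeVMs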

Summary:

Using Local Storage in either a JBOD or RAID configuration is fully supported by Microsoft and is a valid option for MS Exchange deployments.

In my opinion Local Storage deployments have more downsides than upsides and I would recommend considering other storage options for Virtualized Exchange deployments.

Other options along with my recommended options will be discussed in the next 3 parts of this series.

Back to the Index of How to successfully Virtualize MS Exchange.

~ Post Updated January 2nd 2015 Thanks to feedback from @zerszenyi ~

How to successfully Virtualize MS Exchange – Part 7 – Storage Options

When virtualizing Exchange, we not only have to consider the Compute (CPU/RAM) and Network, but also the storage to provide both the capacity and IOPS required.

However before considering IOPS and capacity, we need to decide how we will provide storage for Exchange as storage can be presented to a Virtual Machine in many ways.

This post will cover the different ways storage can be presented to ESXi and used for Exchange while subsequent posts will cover in detail each of the options discussed.

First, let's discuss Local Storage.

What I mean by Local Storage is SSDs/HDDs within a physical ESXi host that are not shared (e.g.: not accessible by other hosts).

This is probably the most basic form of storage we can present to ESXi and apart from the Hypervisor layer could be considered similar to a physical Exchange deployment.

UseLocalStorage

Next, let's discuss Raw Device Mappings.

Raw Device Mappings or “RDMs” are where shared storage from a SAN is presented through the hypervisor to the guest as a native SCSI device, enabling the Guest OS to access the LUN as if it were directly attached.

RDMs

For more information about Raw Device Mappings, see: About Raw Device Mappings
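As a rough illustration only, presenting a LUN to a VM as a physical mode RDM can be done with a couple of lines of PowerCLI; in the sketch below the VM name, host name and device identifier are placeholders.

# PowerCLI - list candidate LUNs on the host, then attach one to the VM as a
# physical mode (pass-through) RDM. Names and the naa identifier are placeholders.
Get-ScsiLun -VMHost (Get-VMHost "esxi01.lab.local") -LunType disk | Select-Object CanonicalName,ConsoleDeviceName
New-HardDisk -VM (Get-VM "EXCH-MBX01") -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx"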

The next option is Presenting Storage direct to the Guest OS.

It is possible, and sometimes advantageous, to present SAN/NAS storage directly to the Guest OS via NFS, iSCSI or SMB 3.0, bypassing the hypervisor altogether.

DirectInGuest
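As a simple illustration, an iSCSI LUN can be connected directly from within a Windows Guest OS using the in-box iSCSI initiator cmdlets; the sketch below is minimal and the target portal address is a placeholder.

# Windows PowerShell inside the Guest OS - connect directly to an iSCSI target
# (the target portal address is a placeholder).
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true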

The final option we will discuss is “Datastores“.

Datastores are probably the most common way to present storage to ESXi. Datastores can be Block or File based, and as of vSphere 5.5 can be presented via iSCSI, NFS or FCP (FC / FCoE).

Datastores are basically just LUNs or NFS mounts. If the datastore is backed by a LUN, it will be formatted with Virtual Machine File System (VMFS) whereas NFS datastores are simply NFS 3 mounts with no formatting done by ESXi.

ViaDatastore

For more information about VMFS see: Virtual Machine File System Technical Overview.
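As a simple example, an NFS datastore can be mounted on every host in a cluster with a few lines of PowerCLI; the cluster name, NFS server address and export path below are placeholders.

# PowerCLI - mount an NFS export as a datastore on every host in the cluster
# (cluster name, NFS server and export path are placeholders).
Get-Cluster "Exchange-Cluster" | Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Nfs -Name "EXCH-NFS-01" -NfsHost "192.168.20.10" -Path "/vol/exchange01"
}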

What do all the above options have in common?

Local storage, RDMs, storage presented to the Guest OS directly and Datastores can all be protected by RAID or be JBOD deployments with no data protection at the storage layer.

Importantly, none of the four options on their own guarantee data protection or integrity, that is, prevent data loss or corruption. Protecting from data loss or corruption is a separate topic which I will cover in a non-Exchange-specific post.

So regardless of the way you present your storage to ESXi or the VM, how you ensure data protection and integrity needs to be considered.

In summary, there are four main ways (listed below) to present storage to ESXi which can be used for Exchange, each with different considerations around Availability, Performance, Scalability, Cost, Complexity and Support.

1. Local Storage (Part 8)
2. Raw Device Mappings  (Part 9)
3. Direct to the Guest OS (Part 10)
4. Datastores (Part 11)

In the next four parts, each of these storage options for MS Exchange will be discussed in detail.

Back to the Index of How to successfully Virtualize MS Exchange.

How to successfully Virtualize MS Exchange – Part 6 – vMotion

Having a virtualized Exchange server opens up the ability to perform vMotion and migrate the VM between ESXi hosts without downtime. This is a handy feature that enables hardware maintenance, upgrades or replacement with no downtime and, importantly, no loss of resiliency to the application.

In this article, I am talking only about vMotion, not Storage vMotion.

Let's first discuss vMotion's requirements and configuration maximums.

vMotion requirements:

1. A VMkernel interface enabled for vMotion (see the sketch after this list)
2. A minimum of 1 x 1Gb NIC
3. Shared storage between source and destination ESXi hosts (recommended).
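As a minimal PowerCLI sketch of requirement 1, enabling vMotion on an existing VMkernel adapter looks like the following; the host and adapter names are placeholders.

# PowerCLI - enable vMotion on an existing VMkernel adapter
# (host and adapter names are placeholders).
Get-VMHostNetworkAdapter -VMHost "esxi01.lab.local" -VMKernel -Name "vmk1" |
    Set-VMHostNetworkAdapter -VMotionEnabled $true -Confirm:$false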

vMotion Configuration Maximums:

Concurrent vMotion operations per host (1Gb/s network):  4
Concurrent vMotion operations per host (10Gb/s network):  8
Concurrent vMotion operations per datastore: 128

As discussed in Part 4, I recommend using DRS “VM to Host” should rules to ensure DRS does not vMotion Exchange VMs unnecessarily while keeping the cluster load balanced.

However, it is still important to design your environment to ensure Exchange VMs can vMotion as fast as possible and with the lowest impact during the syncing of the memory and during the final cutover.

So that brings us to our first main topic, Multi-NIC vMotion.

Multi-NIC vMotion:

Multi-NIC vMotion is a feature introduced in vSphere 5.0 which allows vMotion traffic to be sent concurrently down multiple physical NICs to increase available bandwidth and speed up vMotion activity. This effectively lowers the impact of vMotion and enables larger VMs with very high memory change rates to be vMotioned.

For those who are not familiar with the feature, it is described in depth in VMware KB : Multiple-NIC vMotion in vSphere 5 (2007467) as is the process to set it up on Virtual Standard Switches (VSS) and Virtual Distributed Switches (VDS).
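For illustration only, a compressed PowerCLI version of the Virtual Standard Switch process might look like the sketch below; the host, vSwitch, uplink, port group and IP details are assumptions, so refer to the KB for the authoritative steps.

# PowerCLI - Multi-NIC vMotion on a Standard vSwitch: two vMotion-enabled
# VMkernel ports, each pinned to a different active uplink (all names/IPs are placeholders).
$vmhost  = Get-VMHost "esxi01.lab.local"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0"

New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-01" -IP "192.168.30.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vMotion-02" -IP "192.168.31.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true

# Pin each port group to a different active uplink, with the other uplink as standby.
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-01" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic1
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-02" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic0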

From an Exchange perspective, the larger the MBX/MSR VM’s vRAM, and more importantly the more “active” the memory, the longer the vMotion can take. If vMotion detects the memory change rate is higher than the available bandwidth, the hypervisor will insert micro “stuns” of the VM’s CPU over time until the change rate is low enough to complete the vMotion. This generally has minimal impact on VMs, including Exchange, but the less it is needed, the better.

So using Multi-NIC vMotion helps as more bandwidth can be utilized which means vMotion activity is either faster, or can support more active memory with a low impact.

vMotion “Slot size”:

A vMotion slot size can be thought of as the compute and RAM capacity required to perform a vMotion of a VM between two hosts. So for a VM with 96GB of vRAM and a matching memory reservation, the destination host requires 96GB of physical RAM to be available to even qualify to begin a vMotion.

The larger the VM, the more of a factor this can become in the design of a vSphere cluster.

For example, the diagram below shows a four ESXi host HA cluster with several large VMs, including several which are assigned 96GB of vRAM, as is common with Exchange MBX/MSR VMs.

In this scenario the Exchange VMs are represented by VMs #13, #15 and #16, each with 96GB of RAM.

ClustervMotionSlotSizeBad

The issue here is that there is insufficient memory on any host to accommodate a vMotion of any of the Exchange VMs. This leads to complexity during maintenance periods as well as during a HA event.

In fact in the above example, if an ESXi host crashed, HA would not be able to restart any of the Exchange VMs.

This goes back to the point I made in Part 5 about always ensuring an N+1 (minimum) configuration for the cluster, as this should in most cases avoid this issue.

This is in addition to the recommendation in Part 4 about using VM to Host DRS “should” rules to ensure only one Exchange VM runs per host.

Enhanced vMotion Compatibility:

Enhanced vMotion Compatibility, or EVC, is used to ensure vMotion compatibility for all the hosts within a cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. The end result is that configuring EVC prevents vMotion from failing because of incompatible CPUs.

The knowledge base article Enhanced vMotion Compatibility (EVC) processor support (1003212) from VMware explains the EVC modes and compatible CPU models. Note: EVC does not support Intel to AMD or vice versa.

Contrary to popular belief, EVC does not “slow down” the CPU, it only masks processor features that affect vMotion compatibility. The full speed of the processor is still utilized; the only potential performance degradation is where an application is specifically written to take advantage of masked CPU features, in which case that workload may see some performance loss. However, this is not the case with MS Exchange, and as a result I recommend EVC always be enabled to ensure the cluster is future proofed and Exchange VMs can be migrated to newer hardware seamlessly via vMotion.
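Enabling EVC on an existing cluster is a single line of PowerCLI; in the sketch below the cluster name and EVC baseline are placeholders, so choose the highest baseline your CPUs actually support.

# PowerCLI - enable EVC on a cluster (cluster name and EVC mode are placeholders).
Get-Cluster "Exchange-Cluster" | Set-Cluster -EVCMode "intel-sandybridge" -Confirm:$false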

For more details on why you should enable EVC, review the Example Architectural Decision – Enhanced vMotion Compatibility.

Jumbo Frames:

Using Jumbo frames helps improve vMotion throughput by reducing the number of packets and therefore interrupts required to migrate the same Exchange VM between two hosts.

Michael Webster @vcdxnz001 (VCDX #66) wrote the following great article showing that the benefit of Jumbo Frames for vMotion can be up to 19% in Multi-NIC vMotion environments: Jumbo Frames on vSphere 5

So we know there is a significant performance benefit, but what about the downsides of Jumbo Frames?

The following two Example architectural decisions cover the pros and cons of Jumbo Frames, along with justification for using and not using Jumbo Frames for IP Storage. The same concepts are true for vMotion, so I recommend you review both decisions and choose which one best suits your requirements/constraints.

Note: Neither decision is “right” or “wrong” but if your environment is configured correctly for Jumbo Frames, you will get better vMotion performance with Jumbo Frames.

  1. Jumbo Frames for IP Storage (Do not use Jumbo Frames)
  2. Jumbo Frames for IP Storage (Use Jumbo Frames)
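If you do choose Jumbo Frames, the sketch below shows one way to set a 9000 byte MTU on the vMotion vSwitch and VMkernel ports and then verify it end to end; the host, switch and IP details are placeholders, and remember the physical network must also be configured for jumbo frames end to end.

# PowerCLI - set a 9000 byte MTU on the vMotion vSwitch and on all
# vMotion-enabled VMkernel ports (host and switch names are placeholders).
$vmhost = Get-VMHost "esxi01.lab.local"
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel | Where-Object { $_.VMotionEnabled } |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false

# From the ESXi shell, verify jumbo frames pass end to end with the
# do-not-fragment flag and an 8972 byte payload (IP is a placeholder):
#   vmkping -d -s 8972 192.168.30.12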

vMotion Security:

vMotion traffic is unencrypted; as a result, anyone with access to the network can sniff the traffic. To avoid this, vMotion traffic should be placed on a dedicated, non-routable VLAN.

For more information see: Example Architectural Decision : Securing vMotion & Fault Tolerant Traffic in IaaS/Cloud Environments.

Note: This post is relevant to all environments, not just IaaS/Cloud/Multi-tenant.

Performing a vMotion or entering Maintenance Mode:

As per Part 4, I recommended using VM to Host DRS “should” rules to ensure only one Exchange VM runs per host. This also ensures only one Exchange VM is potentially impacted by vMotion when a host enters maintenance mode.

However, simply entering maintenance mode can kick off up to 8 concurrent vMotion activities when using 10Gb networking for vMotion. In this situation, the length of the vMotion for the Exchange VM will increase and potentially impact performance for a longer period.

As such, I recommend manually vMotioning the Exchange VM onto another host not running any other Exchange VMs (and ideally no other large vCPU/vRAM VMs) and waiting for this to complete before entering the host into maintenance mode.

The benefit of this will depend on the size of your Exchange VMs and the performance of your environment but this is an easy way to minimize the chance of performance issues.
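A minimal PowerCLI sketch of this workflow follows; the VM and host names are placeholders.

# PowerCLI - migrate the Exchange VM first, then place the source host into
# maintenance mode (VM and host names are placeholders).
Move-VM -VM "EXCH-MBX01" -Destination (Get-VMHost "esxi04.lab.local")
Get-VMHost "esxi01.lab.local" | Set-VMHost -State "Maintenance"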

DAG Failovers during vMotion?

DAG failovers can occur during vMotion because even a momentary network drop, or the quiesce of the VM during the final stage of the migration, can exceed the default Windows cluster heartbeat thresholds.

With vMotion set up correctly, and ideally using Multi-NIC vMotion, this should not occur; however, the issue can be further mitigated by increasing the cluster heartbeat time-outs to help prevent unnecessary DAG failovers.

To increase the cluster heartbeat timeout see: Tuning Failover Cluster Network Thresholds
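For reference, the cluster heartbeat settings can be viewed and changed from PowerShell on any DAG member; the values in the sketch below are purely illustrative, so use the values recommended in the Microsoft article above for your Windows version.

# Windows PowerShell on a DAG member - view the current heartbeat settings,
# then relax them (the values below are illustrative only).
Import-Module FailoverClusters
Get-Cluster | Format-List *Subnet*

(Get-Cluster).SameSubnetThreshold  = 10
(Get-Cluster).CrossSubnetThreshold = 20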

Recommendations for vMotion:

1. Ensure vMotion is Active on 10Gb (or higher) adapters
2. Enable Multi-NIC vMotion across 2 x 10Gb adapters in environments with Exchange VMs larger than 64GB of RAM
3. Enable Enhanced vMotion Compatibility (EVC) to the highest supported level in your cluster
4. Use Jumbo Frames for vMotion Traffic
5. Ensure sufficient cluster capacity to migrate Exchange VMs
6. Use DRS rules to separate Exchange VMs to ensure vMotion is not prevented (as per Part 4)
7. When evacuating ESXi hosts running Exchange VMs, vMotion the Exchange VM first, and once it has succeeded, put the hosts into maintenance mode.
8. Use Network I/O Control (NIOC) to ensure a minimum level of bandwidth to vMotion (Further details in an upcoming post)
9. Do not Route vMotion Traffic
10. Put vMotion traffic on a dedicated non-routable VLAN (i.e.: no gateway)
11. Increase cluster heartbeat time-outs for Windows failover clustering to the maximums outlined in Tuning Failover Cluster Network Thresholds.

Back to the Index of How to successfully Virtualize MS Exchange.