How to successfully Virtualize MS Exchange – Part 16 – Virtual Disk Provisioning Types

Once you have made the decision on a storage platform, and assuming you have chosen to use VMFS or NFS datastores, the next decision is how your VMDKs should be provisioned.

The VMware Exchange 2013 Best Practice Guide does not mention disk provisioning options, nor does it make any recommendations. However, you’re in luck, as we will cover all the options along with their pros and cons here.

For Exchange 2010, Microsoft state in Understanding Exchange 2010 Virtualization:

Virtual disks that dynamically expand aren’t supported by Exchange.

Virtual disks that use differencing or delta mechanisms (such as Hyper-V’s differencing VHDs or snapshots) aren’t supported.

However, I have been unable to find confirmation of whether or not this has changed for Exchange 2013. The Exchange 2013 storage configuration options document does state that thin provisioning for Storage Spaces is supported, but it does not say whether any other form of thin provisioning is or is not supported.

While technically not supported for 2010, there are plenty of experts who understand and recommend thin provisioning, including Exchange MCM and MVP Dustin Smith, who in this video talks about some of the considerations and benefits of thin provisioning for Exchange 2010.

Now on to the topic at hand:

When creating a Virtual Machine, VMDKs can be provisioned in one of three ways (see the sketch after the list for how each maps to a VMDK’s backing settings):

1. Thick Provisioned Lazy Zeroed
2. Thick Provisioned Eager Zeroed
3. Thin Provisioned
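
For those who want to check how existing disks are provisioned, below is a minimal pyVmomi (Python) sketch showing how the three options map onto a VMDK’s backing flags. It assumes an existing pyVmomi connection and a vim.VirtualMachine object, and the function names are purely illustrative:

```python
from pyVmomi import vim


def disk_provisioning_type(disk: vim.vm.device.VirtualDisk) -> str:
    """Classify a VMDK by its backing flags: Thin, Lazy Zeroed or Eager Zeroed."""
    backing = disk.backing
    if isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
        if backing.thinProvisioned:
            return "Thin Provisioned"
        if backing.eagerlyScrub:
            return "Thick Provisioned Eager Zeroed"
        return "Thick Provisioned Lazy Zeroed"
    return "Other backing type (e.g. RDM)"


def report_vm_disks(vm: vim.VirtualMachine) -> None:
    """Print the provisioning type of every virtual disk attached to a VM."""
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            print(f"{device.deviceInfo.label}: {disk_provisioning_type(device)}")
```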

Starting with Thick Provisioned Lazy Zeroed: the VMDK is thick provisioned, but blocks are only zeroed in a just-in-time fashion as they are first written.

The advantages of Thick Provisioned Lazy Zeroed VMDKs include:

1. Faster VM creation time than Eager Zeroed Thick (the difference is minimal if the storage supports the VAAI Write Same primitive)
2. The entire VMDK’s capacity is reserved, making capacity planning easier than with Thin Provisioning

The disadvantages of Thick Provisioned Lazy Zeroed VMDKs include:

1. Slower provisioning than Thin Provisioning (although the difference is generally minimal)
2. The entire VMDK’s capacity is reserved and unavailable for use by other virtual machines.

With Thick Provisioned Eager Zeroed (EZT), the VMDK is thick provisioned and all blocks are zeroed at the time of creation. Eager Zeroed Thick VMDKs are supported on all VMFS datastores and on NFS datastores which support the VAAI-NAS Reserve Space primitive.

The advantages of EZT VMDKs these days are minimal, but include:

1. Supporting Oracle RAC and VMware Fault Tolerance (neither being applicable to Exchange)
2. Increased performance versus Lazy and Thin Provisioned VMDKs (but more on this topic later).

However there are a number of downsides to this method which include:

1. Slower VM creation times. The time depends on the size of the VMDK(s) being created and the speed of your storage, as every GB needs to be zeroed, just like performing a Full (not Quick) format on your physical server.

Note: Storage arrays that support VAAI with the “Write Same” primitive can offload the zeroing to the storage array to reduce the load on the ESXi host and speed up provisioning time dramatically.

2. Increased potential for wasted capacity on a datastore.

3. Free space within VMDKs cannot be shared with other VMs, which means every VMDK needs its own free space (generally >10% is recommended) to ensure the VM does not run out of space.

Lastly there is Thin Provisioning, which means the VMDK only consumes the space that data has actually been written to, and each block must be zeroed before it is first written.

The advantages of Thin Provisioning VMDKs include:

1. You can create larger VMDKs with no space utilization penalty making capacity planning and growth easier.
2. Reduces wasted or unused space on the storage
3. Allows for disk space to be overcommitted ensuring maximum utilization and flexibility.
4. Free space in VMDKs is not wasted on the datastore reducing capacity requirements compared to Eager and Lazy Zeroed VMDKs.
5. The performance impact (increased latency) of SCSI reservations when thin provisioned virtual machines (VMDKs) grow (VMFS datastores ONLY) is no longer an issue, as the VAAI Atomic Test & Set (ATS) primitive alleviates the need for SCSI reservations.
6. Thin provisioned VMs reduce the overhead for Storage vMotion, Cloning and Snapshot activities. E.g. for Storage vMotion it eliminates the requirement for Storage vMotion (or the array, when offloaded by the VAAI XCOPY primitive) to relocate “white space”. Note: Storage vMotion should rarely if ever be required for Exchange VMs.
7. Thin provisioning leaves maximum available free space on the physical spindles which should improve performance of the storage subsystem as a whole.

The disadvantages of thin provisioning include:

1. Increased risk of running out of space on a datastore or underlying storage array.
2. Additional write penalty of zeroing a block before writing to it. (again more on performance later in this post).
3. Increased importance of monitoring storage capacity utilization.
4. Not supported for Exchange 2010. Note: there is no technical inhibitor to using Thin Provisioning, but supported options are obviously preferable.

All in all, @FrankDenneman (VCDX #29) sums it up perfectly with his article Thin or thick disks? – it’s about management not performance. I would also suggest considering all other workloads in the environment, not just Exchange, when making decisions about Thin Provisioning, as it can be very beneficial and a huge cost saving (especially CAPEX) when purchasing new equipment.

Which brings us to our next topic, Thin Vs Thick Provisioning Performance!

There have been many recommendations not to use Thin Provisioning due to the performance impact of zeroing a block before writing to it. This recommendation has been around for a long time and, like the VMDK on NFS debate, appears to have strong opinions on both sides.

Now for the facts!

From a performance perspective, most people are surprised to learn there is no significant performance advantage to using Thick Provisioned (Eager or Lazy Zeroed) VMDKs compared to Thin Provisioned disks.

In addition, with I/O reduced by around 50% from Exchange 2007 to 2010, and by roughly another 50% from 2010 to 2013, Exchange is no longer the huge storage I/O heavy monster it once was.

VMware conducted a Performance Study of VMware vStorage Thin Provisioning back in the ESXi 4.0 days (~2009) which I will briefly summarize.

On page 6 of the performance study, the following graph shows the difference in performance between Thin and Thick VMDKs during zeroing and post-zeroing.

As you can see the performance is almost identical.

[Image: Thin vs Thick VMDK performance scaling, during zeroing and post-zeroing]

The next chart, also from page 6, shows a comparison of throughput between thin and thick VMDKs. Again we see the difference is insignificant.

[Image: Aggregate throughput, Thick vs Thin VMDKs]

As there is no significant performance impact from using Thin Provisioning, performance should no longer be considered an objection to using it!

I recommend taking advantage of the flexibility of using Thin Provisioning and creating larger Thin Provisioned VMDKs which can help simplify capacity management from a VM/OS and application perspective as well as making growth easier for Exchange as mailbox sizes increase over time.

[Image: Thin Provisioning]

When using thin provisioning, always ensure alerting is properly set up, with early warning on both your vSphere environment AND the underlying storage, to advise when capacity of a datastore, underlying LUN/NFS mount or the storage itself is running low so this can be remediated.
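
vCenter alarms and your storage vendor’s own monitoring should be the primary mechanisms, but as an illustration of the kind of early-warning check I mean, here is a hedged pyVmomi sketch. It assumes an existing connected ServiceInstance (its content object is passed in), and the threshold and function name are arbitrary:

```python
from pyVmomi import vim


def warn_on_low_datastore_space(content, warn_pct: float = 20.0) -> None:
    """Print a warning for any datastore with less than warn_pct percent free space."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    try:
        for ds in view.view:
            s = ds.summary
            free_pct = (s.freeSpace / s.capacity) * 100
            status = "WARNING" if free_pct < warn_pct else "OK"
            print(f"{status}: {s.name} has {free_pct:.1f}% free "
                  f"({s.freeSpace / 1024**3:.0f} GiB of {s.capacity / 1024**3:.0f} GiB)")
    finally:
        view.DestroyView()
```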

In an upcoming post I will discuss the underlying storage, including provisioning type for LUNs and NFS mounts (i.e.: Thin on Thick / Thin on Thin / Thick on Thick and Thick on Thin).

Recommendations for VMDK provisioning:

1. Check with your storage vendor, and unless they have solid justification for not using Thin Provisioning, OR you have an operational constraint preventing it, use Thin Provisioned VMDKs. (The pros outweigh the cons in my opinion.)
2. When using Thin Provisioning create larger VMDKs to simplify capacity management at the VM and OS/Application layer.
3. When using Thick or Thin provisioning, ensure you test performance using Jetstress and LoadGen with the same provisioning type.
4. Ensure alerting is configured and working to monitor capacity utilization especially when using thin provisioned VMDKs.


More Information on VMDK and Datastore provisioning options:

1. Example Architectural Decision – Datastore (LUN) and Virtual Disk Provisioning (Thin on Thin)

2. Example Architectural Decision – Datastore (LUN) and Virtual Disk Provisioning (Thin on Thick)

Back to the Index of How to successfully Virtualize MS Exchange.

Cloning VMs – Why less (I/O & throughput) is better!

I’ve seen the picture below floating around Twitter and LinkedIn which shows a 32GB VM being cloned in just 7 seconds on an All Flash Array (AFA), and it has got a lot of attention.

The AFA peaked at over 7000MB/s during this time, showing it is capable of some serious throughput!

[Image: 32GB VM cloned in 7 seconds on an All Flash Array]

At this stage some people may be thinking I’m talking about Nutanix, so I would like to point out the above AFA is not a Nutanix NX-9000 All Flash Node.

So why did I write this post?

I am still surprised that technical people find this sort of test and result impressive. To me, the fact the AFA used 7000MB/s of bandwidth to perform the clone means it has not performed the clone intelligently: the process has consumed additional capacity while potentially having a high impact on the other workloads using the storage.

At this stage I guess I should explain what I mean by an intelligent clone.

An intelligent clone in my mind is where:

a) The clone takes a few seconds to occur
b) The clone is offloaded to the storage layer
c) Uses almost zero I/O & bandwidth to perform the clone
d) Uses almost zero additional space

So in the above example, the solution has cloned the VM in a few seconds, so a) has been satisfied. Since there is no information provided, I’m going to give it the benefit of the doubt and say the clone was offloaded to the storage layer, so I’m assuming (rightly or wrongly) that b) is also satisfied.

But what about c) and d)?

If the clone uses 7000MB/s of bandwidth that must have some impact (if not a significant impact) on other workloads running on the storage, even if it is only for 7 seconds.

The clone was also writing data throughout the 7 seconds, so it was also duplicating the data.

So the net result is a fast yet high impact (capacity / performance) clone.

Back in 2012, when I worked at IBM, I wrote this post (Netapp Edge VSA – Rapid Cloning Utility) about intelligent cloning, as a customer was suffering terrible VDI recompose times due to using a big dumb storage solution which had no intelligent cloning capabilities. The post shows that even an old IBM x3850 M2 with slow old 4 core processors, running a Virtual Storage Appliance backed by 3 pieces of spinning rust (146GB SAS disks), still completes the task in just 4.73 seconds per clone, in full compliance with the 4 items I identified as aspects of intelligent cloning (below).

a) The clone takes a few seconds to occur
b) The clone is offloaded to the storage layer
c) Uses almost zero I/O & bandwidth to perform the clone
d) Uses almost zero additional space

The reason intelligent cloning is so much faster is that there is no need to duplicate the VM’s data: the intelligent cloning process simply creates pointers back to the original file (which remains read-only) and only uses I/O & capacity when new data is created.

The process is actually mostly dependent on vCenter registering the new VM, which is why it takes a couple of seconds, as almost no time is spent at the storage layer. The size of the VM being cloned is irrelevant. (Note: In my post from 2012 it was a 10GB VM, although again the size has no impact on the speed of an intelligent clone.)
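
To make the point concrete, the hedged pyVmomi sketch below (illustrative function name, assuming an existing vCenter connection) issues a standard CloneVM_Task. The API call is identical whether the result is a dumb full copy or an intelligent pointer-based clone; the difference in time, I/O and capacity is decided entirely by whether the storage layer can offload the work.

```python
from pyVmomi import vim


def clone_vm(source_vm: vim.VirtualMachine, clone_name: str) -> vim.Task:
    """Clone a VM alongside the source (same folder, resource pool and datastore).

    Whether the copy is offloaded to the array (VAAI XCOPY on block storage,
    VAAI-NAS file cloning on NFS) or performed block-by-block by the ESXi host
    is decided below this API call.
    """
    relocate_spec = vim.vm.RelocateSpec(pool=source_vm.resourcePool)
    clone_spec = vim.vm.CloneSpec(location=relocate_spec,
                                  powerOn=False, template=False)
    return source_vm.CloneVM_Task(folder=source_vm.parent,
                                  name=clone_name, spec=clone_spec)
```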

In the post from 2012, I made the following observation:

Even if you have the world’s fastest array (insert your favorite vendor here), storage connectivity and the biggest and most powerful ESXi hosts, the process of cloning a large number of virtual machines will still:

1. Take more time to complete than an intelligent cloning process like RCU

2. Impact the performance of your ESXi hosts and more than likely production VMs

3. Impact the performance of your storage network & array (and anything that uses it, physical or virtual).

So fast forward to 2015: we have lots of really fast All-Flash storage solutions, but for tasks like cloning, even these super fast all-flash solutions can’t outperform a single controller (2vCPU) Virtual Storage Appliance on an old IBM x3850 M2 server in my test lab using intelligent cloning back in 2012.

I also wrote this article (Is VAAI beneficial with Virtual Storage Appliance (VSA) based solutions?) recently, explaining the benefits of VAAI-NAS and how VAAI-NAS supports intelligent cloning even with Virtual Storage Appliance solutions.

In Summary:

I find a clone taking a few seconds and using next to no throughput and capacity to be impressive. This is a perfect example of less I/O and throughput (to perform the same task) being better!

It’s great if a storage array has the capability to drive many GB/s of throughput, but it’s totally unnecessary for cloning and only demonstrates the lack of intelligent cloning capabilities of the storage solution.

In my opinion it’s much better for a storage solution to use its high performance capability for driving I/O to virtual machines servicing business applications than for tasks like cloning, which can be done intelligently.

To show off more real world performance capabilities of a storage solution (especially an All-Flash array), the example really has to include multiple workloads with different I/O characteristics. This is something the storage industry (all vendors) continues to fail to provide, and it’s something I would like to be a part of changing, as things like “peak” performance are nowhere near as important as “consistent” performance.

Back on topic though: if cloning is something you or your customers require, say for a VDI or Cloud deployment, or just for rapid provisioning of test & development VMs, consider a storage solution which has intelligent cloning capabilities such as VAAI-NAS, which integrates with products like Horizon View (VCAI Clones) and vCloud Director (FAST Provisioning).

How to successfully Virtualize MS Exchange – Part 6 – vMotion

Having a virtualized Exchange server opens up the ability to perform vMotion and migrate the VM between ESXi hosts without downtime. This is a handy feature enabling hardware maintenance, upgrades or replacement with no downtime and, importantly, no loss of resiliency for the application.

In this article, I am talking only about vMotion, not Storage vMotion.

Let’s first discuss vMotion’s requirements and configuration maximums.

vMotion requirements:

1. A VMkernel port enabled for vMotion (a scripted check for this is sketched after this list)
2. A minimum of 1 x 1Gb NIC
3. Shared storage between source and destination ESXi hosts (recommended).
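
If you want to script a check of requirement 1 (and see whether a host has enough vMotion-enabled vmknics for Multi-NIC vMotion, covered later), something like the following pyVmomi sketch works. It assumes an existing vCenter connection and vim.HostSystem objects, and the function names are only illustrative:

```python
from pyVmomi import vim


def vmotion_vmknics(host: vim.HostSystem) -> list:
    """Return the VMkernel adapters on an ESXi host that are enabled for vMotion."""
    net_config = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    selected_keys = set(net_config.selectedVnic or [])
    return [vnic.device for vnic in net_config.candidateVnic
            if vnic.key in selected_keys]


def check_vmotion_enabled(host: vim.HostSystem) -> None:
    """Flag hosts with no vMotion vmknic; two or more allows Multi-NIC vMotion."""
    vmknics = vmotion_vmknics(host)
    if not vmknics:
        print(f"{host.name}: no VMkernel adapter enabled for vMotion!")
    else:
        print(f"{host.name}: vMotion enabled on {', '.join(vmknics)}")
```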

vMotion Configuration Maximums:

Concurrent vMotion operations per host (1Gb/s network):  4
Concurrent vMotion operations per host (10Gb/s network):  8
Concurrent vMotion operations per datastore: 128

As discussed in Part 4, I recommend using DRS “VM to Host” should rules to ensure DRS does not vMotion Exchange VMs unnecessarily while keeping the cluster load balanced.

However, it is still important to design your environment to ensure Exchange VMs can vMotion as fast as possible and with the lowest impact during the syncing of the memory and during the final cutover.

So that brings us to our first main topic, Multi-NIC vMotion.

Multi-NIC vMotion:

Multi-NIC vMotion is a feature introduced in vSphere 5.0 which allows vMotion traffic to be sent concurrently down multiple physical NICs to increase available bandwidth and speed up vMotion activity. This effectively lowers the impact of vMotion and enables larger VMs with very high memory change rates to be vMotioned.

For those who are not familiar with the feature, it is described in depth in VMware KB : Multiple-NIC vMotion in vSphere 5 (2007467) as is the process to set it up on Virtual Standard Switches (VSS) and Virtual Distributed Switches (VDS).

From an Exchange perspective, the larger the MBX/MSR VM’s vRAM, and more importantly the more “active” the memory, the longer the vMotion can take. If vMotion detects the memory change rate is higher than the available bandwidth, the hypervisor will insert micro “stuns” to the VM’s CPU over time until the change rate is low enough to complete the vMotion. This generally has minimal impact on VMs, including Exchange, but the more it can be avoided the better.

So using Multi-NIC vMotion helps, as more bandwidth can be utilized, which means vMotion activity either completes faster or can support more active memory with a lower impact.

vMotion “Slot size”:

A vMotion slot size can be thought of as the compute and RAM capacity required to perform a vMotion of a VM between two hosts. So for a VM with 96GB of vRAM and the same memory reservation, the destination host requires 96GB of physical RAM to be available to even qualify to begin a vMotion.

The larger the VM, the more of a factor this can become in the design of a vSphere cluster.

For example, the diagram below shows a four ESXi host HA cluster with a number of large VMs, including several which are assigned 96GB of vRAM, as is common with Exchange MBX / MSR VMs.

In this scenario the Exchange VMs are represented by VMs #13, #15 and #16, each with 96GB RAM.

[Image: Four host cluster where no host has enough free memory to accept a vMotion of a 96GB Exchange VM]

The issue here is that there is insufficient memory on any host to accommodate a vMotion of any of the Exchange VMs. This leads to complexity during maintenance periods as well as during a HA event.

In fact in the above example, if an ESXi host crashed, HA would not be able to restart any of the Exchange VMs.

This goes back to the point I made in Part 5 about always ensuring an N+1 (minimum) configuration for the cluster, as this should in most cases avoid this issue.

This is in addition to the recommendation in Part 4 to use VM to Host DRS “should” rules to ensure only one Exchange VM runs per host.
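
A rough way to sanity check this ahead of a maintenance window is to compare the Exchange VM’s configured (and, per earlier parts of this series, fully reserved) memory with the free physical RAM on each host. The following is a hedged pyVmomi sketch with illustrative function names, assuming an existing vCenter connection; proper N+1 sizing and HA admission control remain the real safety net:

```python
from pyVmomi import vim


def host_free_memory_gb(host: vim.HostSystem) -> float:
    """Approximate free physical RAM on an ESXi host in GB."""
    total_gb = host.hardware.memorySize / 1024**3
    used_gb = host.summary.quickStats.overallMemoryUsage / 1024  # reported in MB
    return total_gb - used_gb


def vmotion_candidates(vm: vim.VirtualMachine, hosts: list) -> list:
    """Hosts (other than the VM's current host) with enough free RAM to accept it."""
    required_gb = vm.config.hardware.memoryMB / 1024
    return [h for h in hosts
            if h != vm.runtime.host and host_free_memory_gb(h) >= required_gb]
```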

Enhanced vMotion Compatibility:

Enhanced vMotion Compatibility, or EVC, is used to ensure vMotion compatibility for all the hosts within a cluster. EVC ensures that all hosts in a cluster present the same CPU feature set to virtual machines, even if the actual CPUs on the hosts differ. The end result is that configuring EVC prevents vMotion from failing because of incompatible CPUs.

The knowledge base article Enhanced vMotion Compatibility (EVC) processor support (1003212) from VMware explains the EVC modes and compatible CPU models. Note: EVC does not support vMotion between Intel and AMD CPUs.

Contrary to popular belief, EVC does not “slow down” the CPU; it only masks processor features that affect vMotion compatibility. The full speed of the processor is still utilized. The only potential performance degradation is where an application is specifically written to take advantage of masked CPU features, in which case that workload may have some performance loss. However, this is not the case with MS Exchange, and as a result I recommend EVC always be enabled to ensure the cluster is future proofed and Exchange VMs can be migrated to newer hardware seamlessly via vMotion.

For more details on why you should enable EVC, review the Example Architectural Decision – Enhanced vMotion Compatibility.
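
To quickly confirm whether EVC is enabled on your clusters, and at what baseline, a small pyVmomi check like the one below can help. It is only an illustrative sketch assuming an existing ServiceInstance connection:

```python
from pyVmomi import vim


def report_evc(cluster: vim.ClusterComputeResource, service_instance) -> None:
    """Show the cluster's current EVC baseline and the EVC modes vCenter knows about."""
    current = cluster.summary.currentEVCModeKey
    print(f"{cluster.name}: EVC mode = {current or 'disabled'}")
    supported = [mode.key for mode in service_instance.capability.supportedEVCMode]
    print("EVC modes known to this vCenter:", ", ".join(supported))
```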

Jumbo Frames:

Using Jumbo frames helps improve vMotion throughput by reducing the number of packets and therefore interrupts required to migrate the same Exchange VM between two hosts.

Michael Webster @vcdxnz001 (VCDX #66) wrote the following great article showing that the benefit of Jumbo Frames for vMotion is up to 19% in Multi-NIC vMotion environments: Jumbo Frames on vSphere 5

So we know there is a significant performance benefit, but what about the downsides of Jumbo Frames?

The following two Example Architectural Decisions cover the pros and cons of Jumbo Frames, along with justification for using and not using Jumbo Frames for IP Storage. The same concepts are true for vMotion, so I recommend you review both decisions and choose the one which best suits your requirements/constraints.

Note: Neither decision is “right” or “wrong” but if your environment is configured correctly for Jumbo Frames, you will get better vMotion performance with Jumbo Frames.

  1. Jumbo Frames for IP Storage (Do not use Jumbo Frames)
  2. Jumbo Frames for IP Storage (Use Jumbo Frames)

vMotion Security:

vMotion traffic is unencrypted; as a result, anyone with access to the network can sniff the traffic. To avoid this, vMotion traffic should be placed on a dedicated non-routable VLAN.

For more information see: Example Architectural Decision : Securing vMotion & Fault Tolerant Traffic in IaaS/Cloud Environments.

Note: This post is relevant to all environments, not just IaaS/Cloud/Multi-tenant.

Performing a vMotion or entering Maintenance Mode:

As per Part 4, I recommended using VM to Host DRS “should” rules to ensure only one Exchange VM runs per host. This also ensures only one Exchange VM is potentially impacted by vMotion when a host enters maintenance mode.

However, simply entering maintenance mode can kick off up to 8 concurrent vMotion activities when using 10Gb networking for vMotion. In this situation, the length of the vMotion for the Exchange VM will increase and potentially impact performance for a longer period.

As such, I recommend manually vMotioning the Exchange VM onto another host not running any other Exchange VMs (and ideally no other large vCPU/vRAM VMs) and waiting for this to complete before putting the host into maintenance mode.

The benefit of this will depend on the size of your Exchange VMs and the performance of your environment but this is an easy way to minimize the chance of performance issues.
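
If you want to script that sequence, the hedged pyVmomi sketch below shows the idea: migrate the Exchange VM on its own, wait for the task to complete, and only then place the source host into maintenance mode. The helper names are illustrative and an existing vCenter connection is assumed:

```python
import time

from pyVmomi import vim


def wait_for_task(task: vim.Task) -> None:
    """Poll a vCenter task until it completes, raising if it did not succeed."""
    while task.info.state in (vim.TaskInfo.State.queued, vim.TaskInfo.State.running):
        time.sleep(2)
    if task.info.state != vim.TaskInfo.State.success:
        raise RuntimeError(f"Task failed: {task.info.error}")


def evacuate_exchange_vm_first(exchange_vm: vim.VirtualMachine,
                               dest_host: vim.HostSystem) -> None:
    """vMotion the Exchange VM on its own, then put its original host in maintenance mode."""
    source_host = exchange_vm.runtime.host
    wait_for_task(exchange_vm.MigrateVM_Task(
        host=dest_host, priority=vim.VirtualMachine.MovePriority.highPriority))
    # Only now evacuate the rest of the host, so the Exchange VM is not one of
    # the (up to 8) concurrent vMotions kicked off by entering maintenance mode.
    wait_for_task(source_host.EnterMaintenanceMode_Task(timeout=0))
```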

DAG Failovers during vMotion?

DAG failovers can occur when even a momentary network drop during vMotion, or the quiesce of the VM during the final stage of the vMotion, exceeds the default Windows cluster heartbeat thresholds.

With vMotion set up correctly, and ideally using Multi-NIC vMotion, this should not occur. However, the issue can also be mitigated by increasing the cluster heartbeat time-outs to help prevent unnecessary DAG failovers.

To increase the cluster heartbeat timeout see: Tuning Failover Cluster Network Thresholds

Recommendations for vMotion:

1. Ensure vMotion is Active on 10Gb (or higher) adapters
2. Enable Multi-NIC vMotion across 2 x 10Gb adapters in environments with Exchange VMs larger than 64GB RAM
3. Enable Enhanced vMotion Compatibility (EVC) to the highest supported level in your cluster
4. Use Jumbo Frames for vMotion Traffic
5. Ensure sufficient cluster capacity to migrate Exchange VMs
6. Use DRS rules to separate Exchange VMs to ensure vMotion is not prevented (as per Part 4)
7. When evacuating ESXi hosts running Exchange VMs, vMotion the Exchange VM first, and once it has succeeded, put the host into maintenance mode.
8. Use Network I/O Control (NIOC) to ensure a minimum level of bandwidth to vMotion (Further details in an upcoming post)
9. Do not Route vMotion Traffic
10. Put vMotion traffic on a dedicated non-routable VLAN (i.e. no gateway)
11. Increase cluster heartbeat time-outs for Windows failover clustering to the maximums outlined in Tuning Failover Cluster Network Thresholds.

Back to the Index of How to successfully Virtualize MS Exchange.