How to successfully Virtualize MS Exchange – Part 9 – Raw Device Mappings (RDMs)

A Raw Device Mapping or “RDM” allows a VM to access a volume (or LUN) on the physical storage via either Fibre Channel or iSCSI.

When discussing Raw Device Mappings, it is important to highlight that there are two types of RDM modes: Virtual Compatibility Mode and Physical Compatibility Mode.

See the following VMware KB article for a detailed breakdown: Difference between Physical compatibility RDMs and Virtual compatibility RDMs (2009226).
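
To make the distinction concrete, below is a minimal pyVmomi (Python) sketch showing how a LUN could be attached to a VM as an RDM in either compatibility mode. The device path, controller key and unit number are placeholder assumptions; treat this as an illustration, not production code.

```python
# Minimal sketch (pyVmomi): build a device spec that attaches a LUN to a
# VM as an RDM. All identifiers below are placeholder assumptions.
from pyVmomi import vim

def build_rdm_spec(lun_device_path, compatibility="physicalMode"):
    """Return a VirtualDeviceSpec for an RDM disk.

    compatibility: 'physicalMode' passes SCSI commands through to the
    array (no VM snapshots of this disk); 'virtualMode' behaves more
    like a VMDK (snapshots/clones) while still mapping the raw LUN.
    """
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.deviceName = lun_device_path   # e.g. "/vmfs/devices/disks/naa.<id>"
    backing.compatibilityMode = compatibility
    backing.diskMode = "independent_persistent"

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.controllerKey = 1000              # assumed: first SCSI controller
    disk.unitNumber = 1                    # assumed: free unit on that controller
    disk.key = -1                          # negative key = new device
    # Note: some environments also require disk.capacityInKB to match the LUN size.

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# Applied to a connected vim.VirtualMachine object 'vm' with:
# vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[build_rdm_spec(path)]))
```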

So how does an RDM compare to a VMDK on a Datastore?

VMware released a white paper called Performance Characterization of VMFS and RDM Using a SAN in 2008, which debunked the myth that RDMs gave significantly higher performance than VMDKs on datastores.

So RDMs have NO performance advantage over VMDKs on a Datastore.

With that in mind, what advantages (if any) do RDMs have today?

VMware released their Microsoft Exchange 2010 on VMware Best Practices Guide, which has the following table on page 14 showing the trade-offs between VMFS and RDMs.

[Table: VMFS vs. RDM trade-offs, from page 14 of the Microsoft Exchange 2010 on VMware Best Practices Guide]

I have highlighted one advantage that RDMs still have over VMDKs on datastore-style deployments, which is the ability to migrate from a physical Exchange server using centralized SAN storage to a VM without data migration.

However, I find the most common way to migrate from a physical deployment to a virtual deployment is by performing Mailbox migrations to virtualized Exchange servers in an ESXi environment. This avoids the complexities of RDMs and ensures no capacity on the shared storage is wasted (i.e.: Siloed).

The table also lists one other advantage for RDMs: support for up to 64TB drives, whereas virtual disks on VMFS were limited to 2TB. However, this limitation has since been lifted, with vSphere 5.5 supporting VMDKs of up to 62TB.

Recommendation: Do not use RDMs for MS Exchange deployments.

As with Local Storage discussed in Part 7, RDM deployments have more downsides (mainly around inefficiency and complexity) than upsides and I would recommend considering other storage options for Virtualized Exchange deployments.

Other options along with my recommended options will be discussed in the next 2 parts of this series and in upcoming posts on Storage performance and resiliency.

Back to the Index of How to successfully Virtualize MS Exchange.

How to successfully Virtualize MS Exchange – Part 8 – Local Storage

As discussed in Part 7, Local Storage is probably the most basic form of storage we can present to ESXi and use for Exchange MBX/MSR VMs.

The screenshot below shows what local storage can look like to an ESXi host.

[Screenshot: an ESXi host's datastores, with a local SSD datastore formatted with VMFS5 highlighted]

As we can see above, the highlighted datastore is simply an SSD formatted with VMFS5. In this case it is a single drive without RAID, so in the event of the drive failing, any data on it would be permanently lost.

Note: The above image is simply an example. In reality, multiple drives (most likely SAS or SATA) would be used, as SSD is unnecessary for Exchange.
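
As an aside, the same information visible in the screenshot can be pulled programmatically. Below is a minimal pyVmomi sketch listing each datastore's VMFS version and whether it is local; the vCenter hostname and credentials are placeholders for your environment.

```python
# Minimal sketch (pyVmomi): list datastores with their VMFS version and
# whether they sit on local (non-shared) storage. Connection details are
# placeholder assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    for dc in si.RetrieveContent().rootFolder.childEntity:
        for ds in getattr(dc, "datastore", []):   # skip folders without datastores
            info = ds.info
            if isinstance(info, vim.host.VmfsDatastoreInfo):
                v = info.vmfs
                print(f"{ds.name}: VMFS{v.majorVersion} local={v.local} "
                      f"ssd={v.ssd} capacity={ds.summary.capacity / 2**40:.2f} TB")
finally:
    Disconnect(si)
```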

In some ways this is very similar to a physical Exchange deployment on JBOD storage. Echoing the recommendations Microsoft gives for JBOD deployments in the Exchange 2013 storage configuration options guide, I strongly recommend at least 3 database copies for JBOD deployments.

As per the recommendation in Part 4 (DRS), MS Exchange MBX/MSR VMs should always run on separate ESXi hosts to ensure a single host failure does not potentially cause an issue for the DAG. This is especially important because if two Exchange servers shared the same ESXi host and local storage, a single ESXi host outage could cause data loss and downtime for part or all of the Exchange environment.
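
One way to enforce this separation is a DRS anti-affinity rule covering the Exchange VMs. Below is a minimal pyVmomi sketch of what creating such a rule could look like; the rule name and inputs are assumptions.

```python
# Minimal sketch (pyVmomi): keep Exchange MBX/MSR VMs on separate ESXi
# hosts via a DRS anti-affinity rule. Names/objects are placeholders.
from pyVmomi import vim

def add_exchange_anti_affinity(cluster, exchange_vms):
    """cluster: vim.ClusterComputeResource; exchange_vms: list of vim.VirtualMachine."""
    rule = vim.cluster.AntiAffinityRuleSpec()
    rule.name = "Separate-Exchange-MBX"   # assumed rule name
    rule.enabled = True
    rule.mandatory = True                 # hard rule: DRS will never co-locate these VMs
    rule.vm = exchange_vms

    rule_spec = vim.cluster.RuleSpec(info=rule, operation="add")
    spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```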

Below is a screenshot from the Exchange 2013 storage configuration options guide showing the recommendations based on RAID or JBOD deployments. In my opinion these recommendations also apply to virtualized Exchange deployments on local storage.

[Table: Microsoft's recommended number of database copies for RAID vs. JBOD deployments, from the Exchange 2013 storage configuration options guide]

Another option is to use Local Storage in a RAID configuration, eliminating a single drive failure as a Single Point of Failure (SPOF).

Again, I agree with Microsoft’s recommendations and suggest at least two database copies when using a RAID configuration and again, each Exchange VM must run on its own ESXi host on dedicated physical disks.

Note: The RAID controller itself is still a SPOF, which is why multiple copies are recommended from both an availability and data protection perspective.

Let’s now discuss the pros and cons of using Local Storage with JBOD for your Virtualized Exchange Deployment.

PROS

1. Generally lower cost per GB than centralized storage (e.g.: SAN)
2. Higher usable capacity per drive compared to local RAID or centralized storage configurations using RAID or other proprietary data protection techniques.
3. Local JBOD Storage formatted with VMFS is a fully supported configuration

CONS

1. No protection from data loss in the event of a JBOD drive failure. Note: For non-DAG deployments, RAID and 3rd party backups should always be used!
2. Performance/Capacity in JBOD deployments is limited to the capabilities of a single drive.
3. Loss of Virtualization functionality such as HA / DRS and vMotion (without performing a Storage vMotion every time)
4. Can be difficult/costly to scale when nearing capacity.
5. Increased management (operational) overhead of managing decentralized storage
6. At least 3 database copies are recommended, requiring more Exchange MBX/MSR servers.
7. Little/no protection against data corruption which may lead to all DAG copies suffering corruption. Note: If the corruption is not discovered in time, LAGGED copies can also be compromised.
8. Capacity cannot be shared between ESXi hosts which may lead to inefficient use of the available capacity.

Next, here are the pros and cons of using Local Storage with RAID for your Virtualized Exchange Deployment.

PROS

1. Generally lower cost per GB than centralized storage (e.g.: SAN)
2. A single drive failure will not cause data loss or a DAG failover
3. Performance is not limited to a single drive's capabilities
4. Local Storage with RAID formatted with VMFS is a fully supported configuration
5. As there is no data loss with a single drive failure, fewer database copies are required (2 instead of >=3 for JBOD)

CONS

1. Increased management (operational) overhead of managing decentralized storage
2. Performance/Capacity is limited to the capabilities of a single host's RAID controller and its local drives
3. Loss of Virtualization functionality such as HA / DRS and vMotion (without performing a Storage vMotion every time)
4. Little/no protection against data corruption which may lead to all DAG copies suffering corruption. Note: If the corruption is not discovered in time, LAGGED copies can also be compromised.
5. Capacity cannot be shared between ESXi hosts which may lead to inefficient use of the available capacity
6. Performance is constrained by a single RAID controller / set of drives and can be difficult/costly to scale when nearing capacity.

For more information about data corruption for JBOD or RAID deployments, see "Data Corruption".
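
To put rough numbers on the capacity side of these trade-offs, here is a back-of-envelope sketch. The drive count, RAID level (RAID 10) and copy counts are illustrative assumptions only:

```python
# Back-of-envelope: usable (unique) mailbox capacity after RAID overhead
# and DAG copies. All figures are illustrative assumptions.
def unique_capacity_tb(raw_tb, raid_efficiency, dag_copies):
    """Raw capacity -> unique mailbox data, after RAID and database copies."""
    return raw_tb * raid_efficiency / dag_copies

raw_tb = 12 * 4  # assume 12 x 4TB drives per host

jbod = unique_capacity_tb(raw_tb, 1.0, 3)    # JBOD: no RAID, >=3 copies
raid10 = unique_capacity_tb(raw_tb, 0.5, 2)  # RAID 10: 50% efficiency, >=2 copies

print(f"JBOD  : {jbod:.1f} TB unique data per host-equivalent")   # 16.0 TB
print(f"RAID10: {raid10:.1f} TB unique data per host-equivalent") # 12.0 TB
# JBOD yields more usable capacity per drive, but the third database copy
# means more Exchange MBX/MSR servers, as noted in the cons above.
```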

Recommendations:

1. When using local storage (JBOD or RAID), as per Part 4, run only one Exchange MBX/MSR VM per ESXi host
2. Use dedicated physical disks for Exchange MBX/MSR VMs (i.e.: Do not share the same disks with other workloads)
3. Store the Windows OS / Exchange application VMDK on local storage configured with RAID, to ensure a single drive failure does not cause the VM an outage.
4. Ensure ESXi itself is installed on local storage configured with RAID (and not a USB key), as the Exchange VM is dependent on that host and is not protected by vSphere HA. Nor is it easily/quickly portable, due to the storage not being shared.

Summary:

Using Local Storage in either a JBOD or RAID configuration is fully supported by Microsoft and is a valid option for MS Exchange deployments.

In my opinion Local Storage deployments have more downsides than upsides and I would recommend considering other storage options for Virtualized Exchange deployments.

Other options along with my recommended options will be discussed in the next 3 parts of this series.

Back to the Index of How to successfully Virtualize MS Exchange.

~ Post Updated January 2nd 2015 Thanks to feedback from @zerszenyi ~

Hardware support contracts & why 24×7 4 hour onsite should no longer be required.

In recent weeks, I have seen numerous RFQs with a requirement for 24×7 2hr or 4hr onsite HW replacement, and while this is not uncommon, I've been asking myself: why is this the case?

Over my I.T. career, now approaching 15 years, in the majority of cases I have strongly recommended in my designs and Bills of Materials (BoMs) that customers buy 24×7 4hr onsite hardware maintenance contracts for equipment such as compute, storage arrays, storage area networking and IP network devices.

I have never found it difficult to justify this recommendation, because traditionally, if a component in the datacenter fails, such as a storage controller, it generally has a high impact on the customer's business and could cost tens or hundreds of thousands of dollars, or even millions, in revenue depending on the size of the customer.

Not only does losing a storage controller generally have a high impact, it is also a high risk, as the environment may no longer have redundancy and a subsequent failure could (and likely would) result in a full outage.

So in this example, where a typical storage solution has a storage controller failure resulting in degraded performance (due to losing 50% of the controllers) and high impact/risk to the customer, purchasing a 24×7 4hr, or even 24×7 2hr, support contract makes perfect sense! The question is, why choose HW (or a solution) which puts you at high risk after a single component failure in the first place?

Technology is fast changing, and over the last year or so I've been involved in many customer meetings where I am asked what I recommend in terms of hardware maintenance contracts (for Nutanix customers).

Normally this question/conversation happens after the discussion about the technology, where I explain various failure scenarios and how resilient a Nutanix cluster is.

My recommendation goes something like this.

If you architect your solution for your desired level of availability (e.g.: N+2), there is no need to buy a 24×7 4hr hardware maintenance contract; the default Next Business Day option is perfectly fine.

Justification:

1. In the event of even an entire node failure, the Nutanix cluster will have automatically self-healed back to the configured resiliency factor (2 or 3) well before even a 2hr support contract could get a technician onsite to diagnose the issue and replace hardware.

2. Assuming the HW is replaced on the 2hr mark (not typical in my experience), AND assuming Nutanix were not automatically self-healing prior to the drive/node replacement, the replacement drive or node would only then START the process of self-healing. So the actual time to recovery would be greater than 2hrs. In the case of Nutanix, self-healing begins almost immediately.

3. If a cluster is sized for the desired level of availability based on business requirements, say N+2, a node can fail, Nutanix will automatically self-heal back to the configured resiliency factor (2 or 3), and the cluster can then tolerate a subsequent failure and fully self-heal again.

4. If a cluster is sized to a customer requirement of only N+1, a node can fail and Nutanix will automatically and fully self-heal. Then, in the unlikely (but possible) event of a subsequent failure (i.e.: a 2nd node failure before the Next Business Day warranty replaces the failed HW), the Nutanix cluster will still continue to operate.

5. The performance impact of a node failure in a Nutanix environment is N-1, so in a worst case scenario (a 3 node cluster) the impact is 33%, compared to a 2 controller SAN/NAS where the impact would be 50%. In a 4 node cluster the impact is only 25%, and for a customer with say 8 nodes, only 12.5%. The bigger the cluster, the lower the impact (see the quick calculation after this list). Nutanix recommends N+1 up to 16 nodes and N+2 up to 32 nodes; beyond 32 nodes, higher levels of availability may be desired based on customer requirements.
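
To illustrate point 5 with quick arithmetic: a node failure removes 1/N of the cluster's resources, so the worst-case impact shrinks as the cluster grows.

```python
# Worst-case impact of losing one node in an N-node cluster is 1/N,
# versus 50% for a dual-controller SAN/NAS losing one controller.
for nodes in (3, 4, 8, 16, 32):
    print(f"{nodes:>2} nodes: impact of one node failure = {100 / nodes:.1f}%")
# 3 nodes: 33.3%, 4: 25.0%, 8: 12.5%, 16: 6.2%, 32: 3.1%
```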

The risk and impact of the failure scenario(s) is key. In the case of Nutanix, because of the self-healing capability, and because all controllers and SSDs/HDDs in the cluster participate in the self-heal, recovery can be completed very quickly and with low impact. So the impact of the failure is low (N-1) and the recovery is fast, meaning the risk to the business is low, dramatically reducing (and in my opinion potentially removing) the requirement for a 24×7 2hr or 4hr support contract for Nutanix customers.

In Summary:

1. The decision on what hardware maintenance contract is appropriate is a business level decision which should be based in part on a comprehensive risk assessment done by an experienced enterprise architect, intimately familiar with all the technology being used.

2. If the recommendation from the trusted, experienced enterprise architect is that the risk of HW failure causing high impact or an outage to the business is so high that purchasing 4hr or 2hr onsite HW replacement is required, my advice would be to reconsider whether the proposed "solution" meets the business requirements. Only if you are constrained to that solution should you purchase a 24×7 2hr or 4hr support contract.

3. Being heavily dependent on hardware replacement to restore a solution's resiliency/performance is in itself a high risk to the business.

AND

4. In my experience, it is not uncommon to have problems getting onsite support or hardware replacement regardless of the support contract/SLA. Sometimes this is outside a vendor's control, but most vendors will experience one or more of the following issues, which I have personally encountered on numerous occasions in previous roles:

a) Vendors failing to meet SLA for onsite support.
b) Vendors failing to have the required parts available within the SLA.
c) Replacement HW being refurbished (common practice) and being faulty.
d) The more proprietary the HW, the more likely replacement parts will not be available in a timely manner.

Note: Support contracts don't promise a resolution within the 2hr/4hr window; they simply promise somebody will be onsite, and in some cases only after you have gone through troubleshooting with the vendor on the phone, sent logs for analysis and so on. So in reality, the 2hr or 4hr part doesn't hold much value.

If you have accepted the solution being sold to you, OR you're an architect recommending a solution which is enterprise grade and highly resilient with self-healing capabilities, then consider why you would need a 24×7 2hr or 4hr hardware maintenance contract if the solution is architected for the required availability level (i.e.: N+1 / N+2 etc.).

So with your next infrastructure purchase (or when making your recommendations, if you're an architect), carefully consider what solution you're investing in (or proposing), and if you feel an aggressive 2hr/4hr HW support contract is required, I would recommend revisiting the requirements, as you may well be buying (or recommending) something that isn't resilient enough to meet them.

Food for thought.