Example Architectural Decision – Hyperthreading with Business Critical Applications (Exchange 2013)

Problem Statement

When virtualizing Exchange 2013 (which the customer considers a Business Critical Application) in a vSphere cluster shared with other production workloads of varying sizes and performance requirements, should Hyper Threading (HT) be used?

Assumptions

1. vSphere 5.0 or greater
2. Exchange Servers are correctly sized on day one or are subsequently “Right Sized”
3. Cluster average CPU overcommitment of 3:1

Motivation

1. Ensure Optimal performance for BCAs (Exchange)
2. Ensure Optimal performance for other Virtual servers in the shared vSphere cluster

Architectural Decision

Enable Hyper Threading (HT)

Alternatives

1. Disable Hyper Threading (HT)
2. Enable Hyper Threading but configure the Exchange virtual machine(s) with an Advanced CPU / HT Sharing Mode of “None” to ensure Exchange is always scheduled onto physical CPU cores
3. Split off a limited number of ESXi hosts to form a dedicated BCA cluster with <=2:1 overcommitment and disable HT
4. Disable HT on a number of nodes within the cluster, leave HT enabled on the other nodes, and use DRS rules to pin Exchange VMs to the non-HT hosts

Justification

1. Enabling Hyper Threading (HT) improves the efficiency of the CPU scheduler, which minimizes the possibility of CPU Ready for the Exchange server(s) and other virtual machines on hosts where a low level of overcommitment (<2:1) exists
2. For optimal performance, DRS “Should” rules will be used to keep Exchange (BCA) workloads on specific ESXi hosts within the cluster where <=2:1 CPU overcommitment is maintained
3. Configuring the Advanced CPU / HT Sharing Mode to “None” (to guarantee only pCores are used) may result in increased CPU Ready, as the CPU scheduler is forced to find (and wait for) available pCores, which may result in degraded or inconsistent performance (see the configuration sketch following this list).
4. Sizing for the Exchange solution was completed taking into account only pCores (not HT threads) to simplify sizing
5. As the cluster where Exchange is virtualized is shared with other server workloads with varying levels of importance and performance, HT benefits the vast majority of workloads and results in a higher consolidation ratio and better performance for the vSphere cluster as a whole.
6. On physical servers, enabling Hyper Threading on Exchange servers resulted in wasted or excessive RAM usage for .NET garbage collection, because memory for .NET is allocated based on the number of logical cores. This does not impact “Right Sized” virtual machines, as only the required number of vCPUs is assigned to the VM and therefore visible to the Guest OS, avoiding memory being wasted on HT cores.
7. The CPU scheduler in vSphere 5.0 or later is very efficient and can intelligently schedule workload on a hyper-thread or a physical core depending on the VM's CPU demand. While the Exchange server will at some point be scheduled onto an HT thread, this is not likely to be for any extended duration or to have any significant impact on performance.
8. Splitting the cluster into separate BCA and general server workload clusters would increase the HA overhead and effectively reduce the usable compute capacity of the infrastructure.
9. Having a cluster with varying configurations (e.g. HT enabled on some hosts and not others) is not advisable, as it may lead to inconsistent performance and adds unnecessary complexity to the environment
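
For reference, the per-VM “HT Sharing: None” setting referred to in point 3 maps to the htSharing flag in the vSphere API. The following is a minimal pyVmomi sketch of how that override would be applied if Alternative 2 were chosen; the vCenter address, credentials and the VM name “EXCH01” are hypothetical placeholders.

# Minimal pyVmomi sketch: set a VM's Advanced CPU / HT Sharing mode to "none".
# vCenter address, credentials and the VM name are hypothetical placeholders.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="VMware1!")  # add SSL certificate handling as required
content = si.RetrieveContent()

# Locate the VM by name using a simple container view walk
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXCH01")

# htSharing accepts "any" (default), "internal" or "none".
# "none" forces the VM's vCPUs to be scheduled onto whole physical cores only.
spec = vim.vm.ConfigSpec(flags=vim.vm.FlagInfo(htSharing="none"))
vm.ReconfigVM_Task(spec)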

Implications

1. In the event the vCPU to pCore ratio exceeds 2:1 for any reason (including an HA event or virtual server sprawl), the number of users supported and/or the performance of the Exchange environment may be impacted
2. DRS “Should” rules will need to be created to keep Exchange workloads on hosts with a <2:1 vCPU to pCore ratio (a configuration sketch follows this list)
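
As a reference for implication 2, below is a minimal pyVmomi sketch of a non-mandatory (“should”) VM-to-Host DRS rule. The group and rule names are hypothetical, and the cluster, Exchange VM and host objects are assumed to have already been located.

# Minimal pyVmomi sketch: DRS "should" (non-mandatory) VM-to-Host affinity rule
# keeping Exchange VMs on a subset of hosts with low CPU overcommitment.
from pyVmomi import vim

def create_exchange_should_rule(cluster, exchange_vms, low_overcommit_hosts):
    # Define a VM group and a host group (names are illustrative only)
    vm_group = vim.cluster.VmGroup(name="Exchange-VMs", vm=exchange_vms)
    host_group = vim.cluster.HostGroup(name="LowOvercommit-Hosts",
                                       host=low_overcommit_hosts)

    # mandatory=False makes this a "should" rule rather than a "must" rule
    rule = vim.cluster.VmHostRuleInfo(name="Exchange-should-run-on-LowOvercommit",
                                      enabled=True,
                                      mandatory=False,
                                      vmGroupName="Exchange-VMs",
                                      affineHostGroupName="LowOvercommit-Hosts")

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[vim.cluster.GroupSpec(operation="add", info=vm_group),
                   vim.cluster.GroupSpec(operation="add", info=host_group)],
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)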

Example Architectural Decision – Host Isolation Response for a Nutanix Environment

Problem Statement

What is the most suitable HA host isolation response when using Nutanix?

Assumptions

1. vSphere 5.0 or greater
2. Two x 10Gb network interfaces are shared between Nutanix storage traffic and virtual machine traffic

Motivation

1. Minimize the chance of a false positive isolation response
2. Ensure that in the event the storage is unavailable, virtual machines are promptly shut down to enable HA to recover the VMs in a timely manner (where other hosts are unaffected by isolation) and to prevent a “split brain” scenario
3. Ensure maximum availability

Architectural Decision

Turn off the default isolation address and configure the isolation address specified below, which tests connectivity to the Nutanix cluster (NDFS) on the IP Storage VLAN.

Configure the following isolation address:

das.isolationaddress1 : NDFS Cluster IP Address

Configure Host Isolation Response to: Power Off

For Nutanix Controller VMs override the cluster setting and configure Host Isolation Response to “Leave Powered On”
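
The cluster-wide part of this decision maps to HA advanced options and the default isolation response in the vSphere API. Below is a minimal pyVmomi sketch; the NDFS cluster IP shown is a hypothetical placeholder, and the per-CVM override is sketched separately under Implications.

# Minimal pyVmomi sketch: disable the default isolation address, point
# das.isolationaddress1 at the NDFS cluster IP and set the cluster-wide
# isolation response to "Power Off". The IP address is a placeholder.
from pyVmomi import vim

def configure_ha_isolation(cluster, ndfs_cluster_ip="10.0.0.50"):
    das = vim.cluster.DasConfigInfo()
    das.option = [
        vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
        vim.option.OptionValue(key="das.isolationaddress1", value=ndfs_cluster_ip),
    ]
    # "powerOff" is the API value for the "Power Off" isolation response
    das.defaultVmSettings = vim.cluster.DasVmSettings(isolationResponse="powerOff")

    spec = vim.cluster.ConfigSpecEx(dasConfig=das)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)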

Justification

1. ESXi management traffic, virtual machine traffic and inter-node Nutanix storage traffic all run over the 2 x 10Gb connections. Using the ESXi management gateway (the default isolation address) to check for isolation is not suitable, as the management network can be offline without impacting the IP storage or data networks. This situation could lead to false positive isolation responses.
2. The isolation address chosen tests IP storage connectivity over the converged 10Gb network; in the event this is unavailable there is no point testing further connectivity, as virtual machines cannot function without their storage
3. In the event the Nutanix cluster IP address cannot be reached by ICMP, the node will not be able to function properly. As such, triggering the isolation response and powering off the VMs based on this criterion is logical, as the VMs will not be able to function under these conditions.
4. In the event the NDFS Cluster IP address does not respond to ICMP on the management interfaces, it is likely there has been an isolation event or a catastrophic failure in the environment, either in the network or in the storage controllers themselves, in which case the safest option is to power off the VMs.
5. In the event the isolation response is triggered and the isolation does not impact all hosts within the cluster, the VMs can be restarted by HA onto a surviving host and resume functioning
6. Using the Nutanix Controller VM (CVM) internal IP address (192.168.5.2) as the isolation address is not suitable, as this address exists on every ESXi host. Because the CVM is local to the host, the host will always be able to reach this address even when the network is offline, making it a poor candidate for isolation detection.
7. The Nutanix Controller VM accesses local storage and can continue to run locally even during an isolation event. When the isolation event is over, the CVM will regain connectivity to the other CVMs in the Nutanix cluster.
8. Shutting down the CVM would only increase the recovery time once the isolation event is over, and has no added benefit.

Implications

1. In the event the host cannot reach any of the isolation addresses, virtual machines will be powered off.
2. Initial cluster setup requires the vSphere administrator to override the cluster setting for each Controller VM. Note: this is a one-time (“set and forget”) task; a configuration sketch follows this list.
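
Below is a minimal pyVmomi sketch of that one-time CVM override; “Leave Powered On” maps to the isolationResponse value “none” in the API, and the cluster and Controller VM objects are assumed to have already been located.

# Minimal pyVmomi sketch: override the isolation response for each Nutanix
# Controller VM so it is left powered on while the cluster default remains
# "Power Off". Cluster and CVM objects are assumed to be located already.
from pyVmomi import vim

def leave_cvms_powered_on(cluster, cvm_vms):
    overrides = [
        vim.cluster.DasVmConfigSpec(
            operation="add",
            info=vim.cluster.DasVmConfigInfo(
                key=cvm,  # the Controller VM managed object reference
                dasSettings=vim.cluster.DasVmSettings(isolationResponse="none")))
        for cvm in cvm_vms
    ]
    spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=overrides)
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)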

Alternatives

1. Set Host isolation response to “Leave Powered On”
2. Do not use Datastore heartbeating
3. Use the default isolation address
4. Leave the CVM on the default cluster setting and “Shutdown” on isolation

Related Articles

1. VMware Host Isolation Response in a Nutanix Environment #NoSAN

2. Storage DRS and Nutanix – To use, or not to use, that is the question?

3. VMware HA and IP Storage

VMware Host Isolation Response in a Nutanix Environment #NoSAN

I was recently discussing the Nutanix solution with a friend of mine and fellow VCDX, Michael Webster (@vcdxnz001) and he asked what the recommended Host Isolation Response is for Nutanix.

At this stage I must advise there is no formal recommendation, but an official vSphere on Nutanix Best Practices guide is in the works and will be released as soon as possible.

Back to my conversation with Michael: as Nutanix is an IP storage solution, my initial feeling was that Host Isolation Response should be set to “Shutdown”, but I didn't go into any more detail with Michael, so I thought it best to post a quick explanation.

This post also assumes basic knowledge of vSphere as well as the Nutanix platform. For those of you who are not familiar with Nutanix, please review the following links before reading the remainder of this post.

About Nutanix | How Nutanix Works | 8 Strategies for a Modern Datacenter

So back on topic: in other posts I have written about IP storage, such as Example Architectural Decision – Host Isolation Response for IP Storage, I concluded that “Shutdown” was the most suitable setting and recommended specifying the isolation addresses of the NAS controllers.

But Nutanix changes the game by having one virtual storage controller per ESXi host, so does this change the recommendation?

In short, No, but for those who are interested, here is why.

If we leave the default isolation address (the default gateway for ESXi management) and the gateway goes down, an isolation response will be triggered even if the rest of the network is operating fine, causing an unnecessary outage.

If we configure das.isolationaddress1 and das.isolationaddress2 with the management IP addresses of any two Nutanix Controller VMs (192.168.1.x and 192.168.1.y in the diagram below), then an isolation response will only be triggered if both Nutanix Controller VMs (CVMs) are not responding. In that case the VMs should be shut down, as the Nutanix cluster may not function properly with two Controllers offline concurrently, since it is configured by default for N+1 (a replication factor of “2” in Nutanix speak).
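
For completeness, this configuration uses the same HA advanced options mechanism sketched in the decision above; the two addresses below are hypothetical placeholders for the CVM management IPs, and “Shutdown” maps to the isolationResponse value “shutdown”.

# Variation on the earlier HA sketch: two isolation addresses pointing at the
# management IPs of two Nutanix CVMs (placeholder values), with the isolation
# response set to "Shut down".
from pyVmomi import vim

das = vim.cluster.DasConfigInfo()
das.option = [
    vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
    vim.option.OptionValue(key="das.isolationaddress1", value="192.168.1.10"),  # CVM 1 mgmt IP (example)
    vim.option.OptionValue(key="das.isolationaddress2", value="192.168.1.11"),  # CVM 2 mgmt IP (example)
]
das.defaultVmSettings = vim.cluster.DasVmSettings(isolationResponse="shutdown")
# Apply with: cluster.ReconfigureComputeResource_Task(vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)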

The diagram below is a high-level example of the above configuration.

[Figure: Nutanix host isolation configuration – high-level diagram]

Related Articles

1. Example Architectural Decision – Host Isolation Response for a Nutanix Environment

2. Storage DRS and Nutanix – To use, or not to use, that is the question?

3. VMware HA and IP Storage