Example Architectural Decision – Hyperthreading with Business Critical Applications (Exchange 2013)

Problem Statement

When virtualizing Exchange 2013 (which the customer considers a Business Critical Application) in a vSphere cluster shared with other production workloads of varying sizes and performance requirements, should Hyperthreading (HT) be used?

Assumptions

1. vSphere 5.0 or greater
2. Exchange Servers are correctly sized from day one or are subsequently "Right Sized"
3. Cluster average CPU overcommitment of 3:1

Motivation

1. Ensure optimal performance for BCAs (Exchange)
2. Ensure optimal performance for other virtual servers in the shared vSphere cluster

Architectural Decision

Enable Hyperthreading (HT)

Alternatives

1. Disable Hyperthreading (HT)
2. Enable Hyperthreading, but configure the Exchange virtual machine/s with an Advanced CPU HT Sharing Mode of "None" to ensure Exchange is always scheduled onto physical CPU cores (a configuration sketch follows this list)
3. Split off a limited number of ESXi hosts to form a dedicated BCA cluster with <=2:1 overcommitment and HT disabled
4. Disable HT on a number of nodes within the cluster, leave HT enabled on the remaining nodes, and use DRS rules to pin Exchange VMs to the non-HT hosts
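For reference, Alternative 2 can be scripted. The sketch below uses the pyVmomi Python bindings for the vSphere API to set the per-VM HT sharing policy; the vCenter address, credentials, VM name "EXCH01" and the find_vm helper are illustrative assumptions, not part of this decision.

```python
# Sketch of Alternative 2: force an Exchange VM onto physical cores by
# setting its HT sharing policy to "none". Connection details, the VM
# name and the find_vm helper are illustrative assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Locate a VM by name via a container view (simplified for brevity)."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    try:
        return next(vm for vm in view.view if vm.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only; validate certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
try:
    vm = find_vm(si.RetrieveContent(), "EXCH01")
    # htSharing accepts "any" (default), "internal" or "none";
    # "none" schedules the VM's vCPUs onto whole physical cores only.
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(flags=vim.vm.FlagInfo(htSharing="none")))
finally:
    Disconnect(si)
```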

Justification

1. Enabling Hyperthreading (HT) improves the efficiency of the CPU scheduler, which minimizes the likelihood of CPU Ready time for the Exchange server/s and other virtual machines on hosts where a low level of overcommitment (<=2:1) exists (a worked CPU Ready calculation follows this list)
2. For optimal performance, DRS "Should" rules will be used to keep Exchange (BCA) workloads on specific ESXi hosts within the cluster where <=2:1 CPU overcommitment is maintained
3. Configuring the Advanced CPU HT Sharing Mode to "None" (to guarantee only pCores are used) may result in increased CPU Ready, as the CPU scheduler is forced to find (and wait for) available pCores, which may result in degraded or inconsistent performance
4. Sizing for the Exchange solution was completed taking into account only pCores (not HT threads) to simplify sizing
5. As the cluster where Exchange is virtualized is shared with other server workloads of varying importance and performance requirements, HT benefits the vast majority of workloads and results in a higher consolidation ratio and better performance for the vSphere cluster as a whole
6. On physical servers, enabling Hyperthreading on Exchange servers resulted in wasted or excessive RAM usage for .NET garbage collection, because .NET allocates memory based on the number of logical cores. This does not impact "Right Sized" virtual machines, as only the required number of vCPUs are assigned to the VM (and therefore visible to the guest OS), avoiding memory being wasted on HT cores
7. The CPU scheduler in vSphere 5.0 or later is very efficient and can intelligently schedule a workload on a hyper-thread or a physical core depending on the VM's CPU demand. While the Exchange server will at some point be scheduled onto an HT thread, this is unlikely to be for any extended duration or to have any significant impact on performance
8. Splitting the cluster into separate BCA and general server clusters would increase the HA overhead and effectively reduce the usable compute capacity of the infrastructure
9. Having a cluster with varying configurations (e.g. HT enabled on some hosts and not others) is not advisable, as it may lead to inconsistent performance and adds unnecessary complexity to the environment
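To make the CPU Ready impact referenced in points 1 and 3 measurable: vCenter reports CPU Ready as a "summation" value in milliseconds per sample interval, which converts to a percentage as sketched below. The 20-second figure is vCenter's realtime sampling interval; the example numbers are made up.

```python
# Convert vCenter's CPU Ready "summation" (milliseconds of ready time
# accumulated over one sample interval) into a percentage.
def cpu_ready_percent(ready_summation_ms: float,
                      sample_interval_s: int = 20,   # realtime charts sample every 20s
                      num_vcpus: int = 1) -> float:
    window_ms = sample_interval_s * 1000 * num_vcpus  # total schedulable time
    return ready_summation_ms / window_ms * 100

# Example: 1,200 ms of ready time in one 20s sample on a 4 vCPU VM
print(f"{cpu_ready_percent(1200, num_vcpus=4):.2f}%")  # -> 1.50%
```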

Implications

1. In the event the vCPU-to-pCore ratio exceeds 2:1 for any reason (including an HA event or virtual server sprawl), the number of users supported and/or the performance of the Exchange environment may be impacted
2. DRS "Should" rules will need to be created to keep Exchange workloads on hosts with a <=2:1 vCPU-to-pCore ratio (a sketch for monitoring this ratio follows below)
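A minimal sketch for monitoring the vCPU-to-pCore ratio referenced above, again using pyVmomi; it assumes an already-connected ServiceInstance `si`, and the 2:1 threshold mirrors this decision rather than any vSphere default.

```python
# Check each host's vCPU-to-pCore ratio against the <=2:1 target above.
from pyVmomi import vim

def vcpu_pcore_ratios(content):
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        for host in view.view:
            pcores = host.hardware.cpuInfo.numCpuCores  # physical cores, not HT threads
            vcpus = sum(vm.config.hardware.numCPU for vm in host.vm
                        if vm.config and vm.runtime.powerState == "poweredOn")
            yield host.name, vcpus / pcores
    finally:
        view.DestroyView()

for name, ratio in vcpu_pcore_ratios(si.RetrieveContent()):
    flag = "" if ratio <= 2.0 else "  <-- exceeds 2:1 target"
    print(f"{name}: {ratio:.2f}:1{flag}")
```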


Storage DRS and Nutanix – To use, or not to use, that is the question?

Storage DRS (SDRS) is an excellent feature which was released with vSphere 5.0 in late 2011. For those of you who are not familiar with SDRS, I recommend reading the following article before the rest of this post, as SDRS knowledge is assumed from here on.

Understanding VMware vSphere 5.1 Storage DRS

This post also assumes basic knowledge of the Nutanix platform; for those of you who are not familiar with Nutanix, please review the following links prior to reading the remainder of this post.

About Nutanix | How Nutanix Works | 8 Strategies for a Modern Datacenter

With Storage DRS (SDRS), both capacity and performance can be managed, but what should SDRS manage in a Nutanix environment?

Let's start with performance. SDRS can help ensure optimal virtual machine performance by enabling the I/O metric for SDRS recommendations, as shown in the screenshot below.

[Screenshot: SDRS settings showing the I/O metric option enabled]

Once this is done, SDRS will evaluate I/O every 8 hours (by default) and, where the configured latency threshold is exceeded, perform a cost/benefit analysis before deciding whether to make a migration recommendation or do nothing.

So the question is, does SDRS add value in a Nutanix environment from a performance perspective?

The Nutanix solution adopts a "scale-out" methodology, having one (1) Nutanix Controller VM (CVM) per Nutanix node (ESXi host), and presents NFS datastore/s to the vSphere cluster which are serviced by all CVMs. The CVMs use intelligent auto-tiering to ensure optimal performance. At a high level, this works as follows.

Data is written to an SSD tier (either PCIe SSD, such as Fusion-io, or SATA SSD) before being migrated off to a SATA tier once the blocks are determined to be "Cold", and, if/when required, promoted back to an SSD tier when they become "Hot" again for improved read performance.
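As a purely illustrative toy model of that write/demote/promote cycle (the threshold, structures and naming below are invented and do not reflect Nutanix's actual implementation):

```python
# Toy model of the hot/cold tiering behaviour described above.
import time

SSD, SATA = "ssd", "sata"
COLD_AFTER_S = 3600  # demote blocks untouched for an hour (arbitrary)
blocks = {}          # block_id -> {"tier": ..., "last_access": ...}

def write(block_id):
    blocks[block_id] = {"tier": SSD, "last_access": time.time()}  # writes land on SSD

def read(block_id):
    b = blocks[block_id]
    b["last_access"] = time.time()
    if b["tier"] == SATA:   # block has become "Hot" again:
        b["tier"] = SSD     # promote it for faster subsequent reads
    return b

def demote_cold_blocks():  # background task, curator-style
    now = time.time()
    for b in blocks.values():
        if b["tier"] == SSD and now - b["last_access"] > COLD_AFTER_S:
            b["tier"] = SATA
```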

As with other vendors' storage solutions with auto-tiering technologies (such as FAST-VP, FlashPools etc.), the same recommendation around SDRS and the I/O metric holds true for Nutanix: leave it disabled.
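For completeness, disabling the I/O metric can also be automated. A minimal pyVmomi sketch, assuming an already-connected ServiceInstance `si`; the datastore cluster name is an illustrative assumption.

```python
# Disable the SDRS I/O metric on a datastore cluster, per the
# recommendation above.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "NTNX-Datastore-Cluster")
view.DestroyView()

spec = vim.storageDrs.ConfigSpec(
    podConfigSpec=vim.storageDrs.PodConfigSpec(ioLoadBalanceEnabled=False))
content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True)  # modify=True edits the existing config
```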

So, at this point we have concluded the I/O metric will be disabled; let's move on to capacity management.

The Nutanix solution presents large NFS datastore/s to the ESXi hosts (Nutanix nodes), shared across all hosts in one or more vSphere clusters.

When using SDRS, it can manage the initial placement of a new virtual machine based on the configured "Utilized Space" threshold (shown below) to ensure there is not a capacity imbalance between the datastores in a datastore cluster, and it can migrate virtual machines as new machines are provisioned to ensure that balance is maintained.

[Screenshot: SDRS "Utilized Space" threshold setting]

This is a really good feature which I have recommended (and do recommend) in several scenarios. However, the Nutanix solution typically presents a small number of large NFS datastores to the vSphere cluster (or clusters), which are serviced by all Controller VMs (CVMs) in the Nutanix cluster. Using SDRS for initial placement therefore does not add much (if any) value, as the initial placement will almost always be on the same large NFS datastore.
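A quick illustration of why: the "utilized space" imbalance SDRS is designed to correct barely exists when there is only one (or very few) large datastores. The figures below are made up.

```python
# Made-up numbers showing the space imbalance SDRS is designed to fix.
def utilisation_spread(datastores):
    """datastores: list of (capacity_gb, used_gb) tuples."""
    utils = [used / cap * 100 for cap, used in datastores]
    return max(utils) - min(utils)

many_small = [(2048, 1536), (2048, 819), (2048, 1843), (2048, 410)]
print(utilisation_spread(many_small))  # ~70 point spread: SDRS adds value

one_large = [(40960, 22528)]
print(utilisation_spread(one_large))   # 0.0: nothing for SDRS to balance
```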

Where actual physical capacity becomes an issue, space-saving technologies such as compression can be enabled, or the environment can be granularly scaled by adding just a single additional Nutanix node, which scales the solution linearly from both a capacity and a performance perspective.

The only real placement choice arises when you present two (or more) datastores, where one datastore leverages the Nutanix compression technology. This is a very easy choice for a vSphere admin to make when placing a VM, and requires the same administrative effort as choosing a datastore cluster (which would be a collection of datastores either using compression or not, depending on the workloads).

As a result, there is no advantage to using SDRS to manage utilized space.

In conclusion, Storage DRS is a great feature when used with storage arrays whose performance does not scale linearly or which lack intelligent tiering to address I/O bottlenecks, and/or where your environment has large numbers of datastores on which you need to actively manage capacity.

As performance and capacity management are handled intelligently and natively by the Nutanix solution, the benefit SDRS would otherwise provide is negated; as a result, there is no requirement or benefit for using SDRS with a Nutanix solution.

Related Articles

1. Example Architectural Decision – VMware DRS automation level for a Nutanix environment


Example Architectural Decision – VMware DRS automation level for a Nutanix environment

Problem Statement

What is the most suitable DRS automation level and migration threshold for a vSphere cluster running on Nutanix?

Requirements

1. Ensure optimal performance for Business Critical Applications
2. Minimize complexity where possible

Assumptions

1. Workload types and sizes are unpredictable and may vary greatly and without notice
2. The solution needs to be as automated as possible without introducing significant risk
3. vSphere 5.0 or later

Constraints

1. 2 x 10Gb NICs per ESXi host (Nutanix node)

Motivation

1. Prevent unnecessary vMotion migrations which impact host & cluster performance
2. Ensure the cluster's load standard deviation (imbalance) is minimal
3. Reduce administrative overhead of reviewing and approving DRS recommendations
4. Ensure optimal storage performance

Architectural Decision

Use DRS in Fully Automated mode with migration threshold "3" – Apply priority 1, 2 and 3 recommendations

Create a DRS "Should run on hosts in group" rule for each Business Critical Application (BCA) and configure each BCA to run on a single specified host (ensuring BCAs are separated or grouped according to workload)

DRS automation will be disabled for all Nutanix Controller VMs (CVMs). A configuration sketch covering these settings follows.
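A minimal pyVmomi sketch of how this decision might be applied; the cluster name, the find_cluster helper and the "NTNX-*-CVM" naming match used to identify Controller VMs are illustrative assumptions, and an already-connected ServiceInstance `si` is assumed.

```python
# Apply the decision above: DRS fully automated at migration threshold
# 3, with DRS disabled per-VM for the Nutanix CVMs.
from pyVmomi import vim

cluster = find_cluster(si.RetrieveContent(), "NTNX-Cluster-A")  # hypothetical helper

drs = vim.cluster.DrsConfigInfo(
    enabled=True,
    enableVmBehaviorOverrides=True,  # allow the per-VM CVM overrides below
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3)  # 1 (conservative) .. 5 (aggressive); 3 = priority 1-3

cvm_overrides = [
    vim.cluster.DrsVmConfigSpec(
        operation=vim.option.ArrayUpdateSpec.Operation.add,
        info=vim.cluster.DrsVmConfigInfo(key=vm, enabled=False))
    for vm in cluster.resourcePool.vm
    if vm.name.startswith("NTNX-") and vm.name.endswith("-CVM")]

cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(drsConfig=drs, drsVmConfigSpec=cvm_overrides),
    modify=True)
```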

Justification

1. Fully Automated DRS prevents excessive vMotion migrations which would provide no significant compute benefit to cluster balance, as the vMotion itself consumes cluster & network resources

2. It ensures the Nutanix Distributed File System, specifically the "Curator" component, does not need to frequently relocate data between the direct-attached storage of Nutanix nodes (ESXi hosts) in order to keep virtual machine data access local. Doing so would put additional load on the Controller VMs (and the Curator service), local/remote storage and the network

3. It ensures the cluster remains in a reasonably load-balanced state without resources being wasted on balancing the compute layer for only a minimal improvement, which may impact the storage/network layer/s

4. Applying priority 1, 2 and 3 recommendations means recommendations that must be followed to satisfy cluster constraints, such as affinity rules and host maintenance, will be applied (priority 1), as well as recommendations with four or more stars (priority 2) that promise a significant improvement in the cluster's load balance. Where a significant improvement to the cluster's load balance will be achieved, movement of data at the storage layer (via the CVM / network) can be justified

5. DRS is a low-risk, proven technology which has been used in large production environments for many years

6. Setting DRS to Manual would be a significant administrative (BAU) overhead and would introduce additional risks, such as human error and situations where contention goes unnoticed, which may impact the performance of one or more VMs

7. Setting a more aggressive DRS migration threshold may put additional load on the cluster which would likely not result in significantly better cluster balance (or VM performance) and could result in significant additional workload for the ESXi hosts (compute layer), the Nutanix Controller VMs (CVMs), network & underlying storage

8. Using DRS "Should run on hosts in group" rules for Business Critical Applications (BCAs) ensures consistent performance for these workloads (by keeping each VM on the ESXi host/Nutanix node where its data is local) without introducing significant complexity or limiting vSphere functionality (a sketch of creating such a rule follows this list)
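As referenced in point 8, a minimal pyVmomi sketch of creating such a "should" rule for a single BCA. The group and rule names are illustrative, and `cluster`, `vm` and `host` objects are assumed to have already been looked up (e.g. via container views).

```python
# Create a "should run on hosts in group" rule for one BCA.
from pyVmomi import vim

op_add = vim.option.ArrayUpdateSpec.Operation.add

groups = [
    vim.cluster.GroupSpec(operation=op_add,
                          info=vim.cluster.VmGroup(name="BCA-EXCH01-VM", vm=[vm])),
    vim.cluster.GroupSpec(operation=op_add,
                          info=vim.cluster.HostGroup(name="BCA-EXCH01-Host", host=[host]))]

rule = vim.cluster.RuleSpec(operation=op_add, info=vim.cluster.VmHostRuleInfo(
    name="BCA-EXCH01-should-run", enabled=True,
    mandatory=False,  # "should", not "must": HA and maintenance can still move the VM
    vmGroupName="BCA-EXCH01-VM", affineHostGroupName="BCA-EXCH01-Host"))

cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(groupSpec=groups, rulesSpec=[rule]), modify=True)
```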

Implications

1. In some circumstances the DRS cluster may run with a low level of imbalance

2. DRS will not move workloads via vMotion where only a moderate improvement to the cluster would be achieved

3. At times, including after performing updates of ESXi hosts (Nutanix nodes) via VUM, the cluster may appear unevenly balanced, as DRS may calculate minimal benefit from migrations. Temporarily setting a more aggressive DRS migration threshold (e.g. "4 – Apply priority 1, 2, 3 and 4 recommendations") following maintenance should result in a more evenly balanced DRS cluster, with only a minimal short-term increase in workload for the Nutanix Controller VMs (CVMs), network & underlying storage

4. DRS rules will need to be created for Business Critical Applications

Alternatives

1. Use Fully Automated and Migration threshold 1 – Apply priority 1 recommendations
2. Use Fully Automated and Migration threshold 2 – Apply priority 1 and 2 recommendations
3. Use Fully Automated and Migration threshold 4 – Apply priority 1, 2, 3 and 4 recommendations
4. Use Fully Automated and Migration threshold 5 – Apply priority 1, 2, 3, 4 & 5 recommendations
5. Set DRS to Manual and have a VMware administrator assess and apply recommendations
6. Set DRS to "Partially Automated"

Related Articles

1. Storage DRS and Nutanix – To use, or not to use, that is the question?