VMware Host Isolation Response in a Nutanix Environment #NoSAN

I was recently discussing the Nutanix solution with a friend of mine and fellow VCDX, Michael Webster (@vcdxnz001), and he asked what the recommended Host Isolation Response is for Nutanix.

At this stage I must advise there is no formal recommendation, but an official vSphere on Nutanix best practices guide is in the works and will be released as soon as possible.

Back to my conversation with Michael: being that Nutanix is an IP storage solution, my initial feeling was that the Host Isolation Response should be set to “Shutdown”, but I didn’t go into any more detail with Michael, so I thought it best to post a quick explanation.

This post also assumes basic knowledge of vSphere as well as the Nutanix platform. For those of you who are not familiar with Nutanix, please review the following links prior to reading the remainder of this post.

About Nutanix | How Nutanix Works | 8 Strategies for a Modern Datacenter

So, back on topic: in other posts I have written about IP storage, such as Example Architectural Decision – Host Isolation Response for IP Storage, I concluded that “Shutdown” was the most suitable setting and recommended specifying the isolation addresses of the NAS controllers.

But Nutanix changes the game by having one virtual storage controller per ESXi host, so does this change the recommendation?

In short, No, but for those who are interested, here is why.

If we leave the default isolation address (the default gateway for ESXi management) and the gateway goes down, an isolation response will be triggered even if the rest of the network is operating fine, causing an unnecessary outage.

If we configure das.isolationaddress1 and das.isolationaddress2 with the management IP addresses of any two Nutanix Controller VMs (192.168.1.x and 192.168.1.y in the diagram below), an isolation response will only be triggered if both Controller VMs (CVMs) are not responding. In that case the VMs should be shut down, as the Nutanix cluster may not function properly with two controllers offline concurrently, since it is configured by default for N+1 (a replication factor of “2” in Nutanix speak).

Below is a high-level example of the above configuration.

[Figure: Host Isolation Response configuration in a Nutanix environment]
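
For those who prefer to script this, below is a minimal pyVmomi sketch of applying these settings. The vCenter address, credentials, cluster name and the two isolation IPs are hypothetical placeholders (standing in for the 192.168.1.x / 192.168.1.y addresses above), not values from any real environment.

```python
# Hedged pyVmomi sketch: set the HA isolation response to "Shutdown" and
# point das.isolationaddress1/2 at two CVM management IPs. All names,
# credentials and IPs below are hypothetical placeholders.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

# Locate the cluster by name (hypothetical cluster name)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "NutanixCluster")

spec = vim.cluster.ConfigSpecEx()
spec.dasConfig = vim.cluster.DasConfigInfo(
    enabled=True,
    # Shut down VMs on host isolation, per the recommendation above
    defaultVmSettings=vim.cluster.DasVmSettings(isolationResponse="shutdown"),
    option=[
        # Two CVM management IPs as isolation addresses (placeholders)
        vim.option.OptionValue(key="das.isolationaddress1",
                               value="192.168.1.10"),
        vim.option.OptionValue(key="das.isolationaddress2",
                               value="192.168.1.11"),
        # Stop HA also using the default gateway as an isolation address
        vim.option.OptionValue(key="das.usedefaultisolationaddress",
                               value="false"),
    ])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```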

Related Articles

1. Example Architectural Decision – Host Isolation Response for a Nutanix Environment

2. Storage DRS and Nutanix – To use, or not to use, that is the question?

3. VMware HA and IP Storage

Storage DRS and Nutanix – To use, or not to use, that is the question?

Storage DRS (SDRS) is an excellent feature which was released with vSphere 5.0 in late 2011. For those of you who are not familiar with SDRS, I recommend reading the following article before the rest of this post, as SDRS knowledge is assumed from now on.

Understanding VMware vSphere 5.1 Storage DRS

This post also assumes basic knowledge of the Nutanix platform. For those of you who are not familiar with Nutanix, please review the following links prior to reading the remainder of this post.

About Nutanix | How Nutanix Works | 8 Strategies for a Modern Datacenter

Storage DRS & Nutanix – To use, or not to use, that is the question?

With Storage DRS (SDRS), both capacity and performance can be managed, but what should SDRS manage in a Nutanix environment?

Let’s start with performance. SDRS can help ensure optimal performance of virtual machines by enabling the I/O metric for SDRS recommendations, as shown in the screenshot below.

[Figure: SDRS settings with the “Enable I/O metric for SDRS recommendations” option highlighted]

Once this is done, SDRS will evaluate I/O every 8 hours (by default) and, where the configured latency threshold is exceeded, perform a cost/benefit analysis before deciding whether to make a migration recommendation or do nothing.
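
To make that decision process concrete, here is a simplified Python sketch of the logic. It is an illustration of the concept only, not VMware’s actual algorithm: the 15 ms figure is the default SDRS I/O latency threshold, but the migration “cost” value and datastore names are purely hypothetical.

```python
# Simplified illustration of SDRS-style I/O load balancing logic.
# NOT VMware's actual algorithm; only the 15 ms default latency
# threshold is a real SDRS default, everything else is hypothetical.
from dataclasses import dataclass

LATENCY_THRESHOLD_MS = 15.0   # default SDRS I/O latency threshold
MIGRATION_COST_MS = 5.0       # hypothetical cost of a Storage vMotion

@dataclass
class Datastore:
    name: str
    observed_latency_ms: float  # averaged over the 8-hour window

def evaluate(datastores: list[Datastore]) -> None:
    for ds in datastores:
        if ds.observed_latency_ms <= LATENCY_THRESHOLD_MS:
            continue  # threshold not exceeded: do nothing
        # Cost/benefit: recommend a migration only if the expected
        # latency improvement outweighs the cost of moving the VMDKs.
        benefit = ds.observed_latency_ms - LATENCY_THRESHOLD_MS
        if benefit > MIGRATION_COST_MS:
            print(f"{ds.name}: recommend migrating VMDKs off this datastore")
        else:
            print(f"{ds.name}: over threshold, but migration not justified")

evaluate([Datastore("DS1", 22.0), Datastore("DS2", 16.0)])
```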

So the question is, does SDRS add value in a Nutanix environment from a performance perspective?

The Nutanix solution adopts the “scale-out” methodology by having one (1) Nutanix Controller VM (CVM) per Nutanix node (ESXi host), and then presents NFS datastore(s) to the vSphere cluster which are serviced by all CVMs. The CVMs use intelligent auto-tiering to ensure optimal performance. At a high level, this works as follows.

Data is written to an SSD tier (either PCIe SSD, such as Fusion-io, or SATA SSD) before being migrated off to a SATA tier once the blocks are determined to be “cold”, and, if/when required, promoted back to an SSD tier when they become “hot” again for improved read performance.
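
The Python sketch below models that hot/cold cycle in miniature. It is a conceptual illustration only, not the actual Nutanix implementation; the one-hour “cold” threshold is an arbitrary placeholder.

```python
# Conceptual model of hot/cold auto-tiering. Illustrative only; this is
# not the actual Nutanix implementation, and the threshold is arbitrary.
import time

SSD, SATA = "ssd", "sata"
COLD_AFTER_SECONDS = 3600  # hypothetical: demote blocks idle for an hour

class Block:
    def __init__(self, block_id: int) -> None:
        self.block_id = block_id
        self.tier = SSD              # new writes land on the SSD tier
        self.last_access = time.time()

    def read(self) -> str:
        self.last_access = time.time()
        if self.tier == SATA:
            self.tier = SSD          # promote "hot" blocks back to SSD
        return self.tier

def demote_cold_blocks(blocks: list[Block]) -> None:
    """Background task: move long-idle ("cold") blocks down to SATA."""
    now = time.time()
    for b in blocks:
        if b.tier == SSD and now - b.last_access > COLD_AFTER_SECONDS:
            b.tier = SATA
```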

As with other vendors’ storage solutions with auto-tiering technologies (such as FAST-VP, Flash Pools, etc.), the same recommendation around SDRS and the I/O metric is true for Nutanix: leave it disabled.

So, at this point we have concluded the I/O metric will be “Disabled”; let’s move on to capacity management.

The Nutanix solution presents large NFS datastore(s) to the ESXi hosts (Nutanix nodes), which are shared across all ESXi hosts in one or more vSphere clusters.

When using SDRS, it can manage the initial placement of a new virtual machine based on the configured “Utilized Space” metric (shown below) to ensure there is no capacity imbalance between the datastores in a datastore cluster, and it can migrate virtual machines when new machines are provisioned to ensure the balance is maintained.

[Figure: SDRS “Utilized Space” threshold setting]
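
As a thought experiment, the sketch below shows what utilized-space-based initial placement boils down to: pick the least-utilized datastore that can fit the new VM. The datastore name and sizes are hypothetical; with a single large Nutanix NFS datastore there is only ever one candidate.

```python
# Sketch of SDRS-style initial placement by utilized space (illustrative
# only): choose the least-utilized datastore that can fit the new VM.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def place_vm(datastores: list[Datastore], vm_size_gb: float) -> Datastore:
    candidates = [ds for ds in datastores
                  if ds.capacity_gb - ds.used_gb >= vm_size_gb]
    return min(candidates, key=lambda ds: ds.utilization)

# With one large Nutanix NFS datastore there is only one possible answer:
print(place_vm([Datastore("NTNX-NFS", 40960, 12000)], 100).name)
```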

So this is a really good feature, which I have recommended and do recommend in several scenarios; however, the Nutanix solution typically presents a small number of large NFS datastores to the vSphere cluster (or clusters), which are serviced by all Controller VMs (CVMs) in the Nutanix cluster. Using SDRS for initial placement does not add much (if any) value, as the initial placement will almost always be on the same large NFS datastore.

Where actual physical capacity becomes an issue, space-saving technologies such as compression can be enabled, or the environment can be granularly scaled by adding just a single additional Nutanix node, which scales the solution linearly from both a capacity and a performance perspective.

The only real choice arises when you present two (or more) datastores, where one datastore leverages the Nutanix compression technology. This is a very easy placement decision for a vSphere admin, and it is the same amount of administrative effort as choosing a datastore cluster, which would be a collection of datastores either using compression or not, depending on the workloads.

As a result there is no advantage to using SDRS to manage utilized space.

In conclusion, Storage DRS is a great feature when used with storage arrays whose performance does not scale linearly or which lack intelligent tiering to address I/O bottlenecks, and/or where your environment has a large number of datastores whose capacity you need to actively manage.

As performance and capacity are intelligently managed natively by the Nutanix solution, the requirement for (and benefit of) SDRS is negated; as a result, there is no requirement or benefit in using SDRS with a Nutanix solution.

Related Articles

1. Example Architectural Decision – VMware DRS automation level for a Nutanix environment


Example Architectural Decision – VMware DRS automation level for a Nutanix environment

Problem Statement

What is the most suitable DRS automation level and migration threshold for a vSphere cluster running on Nutanix?

Requirements

1. Ensure optimal performance for Business Critical Applications
2. Minimize complexity where possible

Assumptions

1. Workload types and sizes are unpredictable and may vary greatly without notice
2. The solution needs to be as automated as possible without introducing significant risk
3. vSphere 5.0 or later

Constraints

1. 2 x 10GbE NICs per ESXi host (Nutanix node)

Motivation

1. Prevent unnecessary vMotion migrations which will impact host & cluster performance
2. Ensure the cluster’s load imbalance (standard deviation) is minimal
3. Reduce administrative overhead of reviewing and approving DRS recommendations
4. Ensure optimal storage performance

Architectural Decision

Use DRS in Fully Automated mode with migration threshold “3” – Apply priority 1, 2 and 3 recommendations

Create a DRS “Should run on hosts in group” rule for each Business Critical Application (BCA) and configure each BCA to run on a single specified host (ensuring BCAs are separated or grouped according to workload)

DRS automation will be disabled for all Controller VMs (CVMs), as shown in the sketch below
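
For reference, here is a hedged pyVmomi sketch of how this decision could be applied programmatically. The vCenter details, cluster name and the “NTNX-” CVM name prefix are assumptions for illustration; note that the API’s vmotionRate of 3 corresponds to the middle (default) migration threshold in the UI.

```python
# Hedged pyVmomi sketch applying the decision above: DRS fully automated,
# middle migration threshold, automation disabled for each CVM.
# Connection details, cluster name and CVM name prefix are hypothetical.
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "NutanixCluster")

spec = vim.cluster.ConfigSpecEx()
spec.drsConfig = vim.cluster.DrsConfigInfo(
    enabled=True,
    defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
    vmotionRate=3)  # middle/default threshold (the UI's level "3")

# Disable DRS automation for every Controller VM. Assumes the common
# "NTNX-" CVM naming convention and CVMs residing in the root pool.
spec.drsVmConfigSpec = [
    vim.cluster.DrsVmConfigSpec(
        operation=vim.option.ArrayUpdateOperation.add,
        info=vim.cluster.DrsVmConfigInfo(key=vm, enabled=False))
    for vm in cluster.resourcePool.vm if vm.name.startswith("NTNX-")
]
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```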

Justification

1. Fully Automated DRS prevents excessive vMotion migrations that do not provide significant compute benefit to cluster balance, as the vMotion itself consumes cluster and network resources

2. Ensures the Nutanix Distributed File System, specifically the “Curator” component, does not need to frequently relocate data between Nutanix nodes’ (ESXi hosts’) direct-attached storage to ensure virtual machines have local access to data. Doing so would put additional load on the Controller VM (and the Curator service), local/remote storage and the network.

3. Ensures the cluster remains in a reasonably load-balanced state without resources being wasted on load balancing the compute layer for only minimal improvement, which may impact the storage/network layer(s).

4. Applying priority 1, 2 and 3 recommendations means recommendations that must be followed to satisfy cluster constraints, such as affinity rules and host maintenance, will be applied (priority 1), as well as recommendations with four or more stars (priority 2) that promise a significant improvement in the cluster’s load balance. Where a significant improvement to the cluster’s load balance will be achieved, movement of data at the storage layer (via the CVM / network) can be justified.

5. DRS is a low-risk, proven technology which has been used in large production environments for many years

6. Setting DRS to manual would create a significant administrative (BAU) overhead and introduce additional risks, such as human error and situations where contention goes unnoticed, which may impact the performance of one or more VMs

7. Setting a more aggressive DRS migration threshold may put additional load on the cluster, which will likely not result in significantly better cluster balance (or VM performance) and could result in significant additional workload for the ESXi hosts (compute layer), the Nutanix Controller VM (CVM), the network and the underlying storage

8. Using DRS “Should run on hosts in group” rules for Business Critical Applications (BCAs) will ensure consistent performance for these workloads (by keeping VMs on the same ESXi host/Nutanix node where their data is local) without introducing significant complexity or limiting vSphere functionality

Implications

1. In some circumstances the DRS cluster may have a low level of imbalance

2. DRS will not move workloads via vMotion where only a moderate improvement to the cluster will be achieved

3. At times, including after performing updates of ESXi hosts (Nutanix nodes) via VUM, the cluster may appear unevenly balanced, as DRS may calculate minimal benefit from migrations. Temporarily setting a more aggressive DRS migration threshold for a short period following maintenance should result in a more evenly balanced DRS cluster with minimal (short-term) increased workload for the Nutanix Controller VM (CVM), network and underlying storage.

4. DRS rules will need to be created for Business Critical Applications

Alternatives

1. Use Fully Automated and migration threshold 1 – Apply priority 1 recommendations
2. Use Fully Automated and migration threshold 2 – Apply priority 1 and 2 recommendations
3. Use Fully Automated and migration threshold 4 – Apply priority 1, 2, 3 and 4 recommendations
4. Use Fully Automated and migration threshold 5 – Apply priority 1, 2, 3, 4 and 5 recommendations
5. Set DRS to manual and have a VMware administrator assess and apply recommendations
6. Set DRS to “Partially Automated”

Related Articles

1. Storage DRS and Nutanix – To use, or not to use, that is the question?