Acropolis Hypervisor (AHV) I/O Failover & Load Balancing

Many customers and partners have expressed interest in Acropolis since it was officially launched at .NEXT in June this year, and since then a lot of questions have been asked around resiliency and availability.

In this post I will cover how I/O failover occurs and how AHV load balances in the event of I/O failover to ensure optimal performance.

Let’s start with an Acropolis node under normal circumstances. The iSCSI initiator for QEMU connects to the iSCSI redirector, which directs all I/O to the local stargate instance running within the Nutanix Controller VM (CVM), as shown below.

[Image: AHV I/O path under normal conditions, with all I/O directed to the local stargate]
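
To make the redirection concept concrete, here is a minimal Python sketch of the idea (my own illustration, not Nutanix code): a redirector that simply resolves a node to its local stargate endpoint. The class name, node IDs and addresses are all made up for the example.

```python
# Illustrative sketch only (not Nutanix source code). Models the idea of an
# iSCSI redirector handing the QEMU initiator the local stargate endpoint.

class IscsiRedirector:
    def __init__(self, stargates_by_node):
        # Hypothetical map of node ID -> stargate endpoint (CVM IP:port)
        self.stargates_by_node = stargates_by_node

    def resolve_target(self, node_id):
        """Under normal conditions all I/O is serviced by the local stargate,
        so the redirector simply returns the endpoint of the node's own CVM."""
        return self.stargates_by_node[node_id]


# Example: a three node cluster where each node's initiator is pointed at
# its own local CVM (addresses are placeholders).
redirector = IscsiRedirector({
    "node-A": "10.0.0.11:3260",
    "node-B": "10.0.0.12:3260",
    "node-C": "10.0.0.13:3260",
})
print(redirector.resolve_target("node-A"))  # -> 10.0.0.11:3260
```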

I/O will always be serviced by the local stargate unless a CVM upgrade, shutdown or failure occurs. In the event one of the above occurs, QEMU will lose its connection to the local stargate, as shown below.

[Image: QEMU loses connectivity to the local stargate]

When this loss of connectivity to the local stargate occurs, QEMU reconnects to the iSCSI redirector and establishes a connection to a remote stargate, as shown below.

[Image: QEMU redirected to a remote stargate]

The process of re-establishing an iSCSI connection is near instant, and you will likely not even notice it has occurred.

Once the local stargate is back online (and stable for 300 seconds), I/O will be redirected back locally to ensure optimal performance.

[Image: I/O fails back to the local stargate]

In the unlikely event that the remote stargate goes down before the local stargate is back online, the iSCSI redirector will redirect traffic to another remote stargate.
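
For those who like to think in code, below is a simplified sketch of the failover and failback decision described above, under my own assumptions (this is not actual AHV logic): prefer the local stargate once it has been healthy for 300 seconds, otherwise redirect to a healthy remote stargate.

```python
# Simplified failover/failback decision: an illustration of the behaviour
# described above, not actual AHV code.
import random
import time

FAILBACK_STABLE_SECS = 300  # local stargate must be stable this long before failback


def choose_stargate(local_node, stargates, healthy_since, now=None):
    """Pick the stargate endpoint an iSCSI session should use.

    stargates:     node ID -> endpoint
    healthy_since: node ID -> time its stargate became healthy, or None if down
    """
    now = now if now is not None else time.time()
    local_up_since = healthy_since.get(local_node)

    # Fail back to the local stargate once it has been stable for 300 seconds.
    if local_up_since is not None and now - local_up_since >= FAILBACK_STABLE_SECS:
        return stargates[local_node]

    # Otherwise redirect to any healthy remote stargate.
    remotes = [n for n, since in healthy_since.items()
               if since is not None and n != local_node]
    if not remotes:
        raise RuntimeError("no healthy stargate available")
    return stargates[random.choice(remotes)]
```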

Next, let’s talk about load balancing.

Unlike traditional 3-tier infrastructure (i.e.: SAN/NAS), Nutanix solutions do not require multi-pathing, as all I/O is serviced by the local controller. As a result, there is no multi-pathing policy to choose, which removes another layer of complexity and a potential point of failure.

However, in the event the local CVM is unavailable for any reason, we still need to service I/O for all the VMs on the node in the most efficient manner. Acropolis does this by redirecting I/O at a per-vDisk level to a random remote stargate instance, as shown below.

[Image: per-vDisk path failover to random remote stargates]

Acropolis can do this because every vDisk is presented via iSCSI and is its own target/LUN, which means it has its own TCP connection. What this means is that a business critical application such as MS SQL, Exchange or Oracle with multiple vDisks will be serviced by multiple controllers concurrently.
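
Below is a small illustrative sketch (my own simplification, not Nutanix code) of what per-vDisk redirection looks like: because every vDisk is its own iSCSI target with its own TCP connection, each one can be handed to a different remote stargate independently. The vDisk names and endpoints are placeholders.

```python
# Per-vDisk redirection sketch: each vDisk is its own iSCSI target, so each
# can be redirected independently. Names and endpoints are illustrative only.
import random


def redirect_vdisks(vdisk_ids, remote_stargates):
    """Assign each vDisk of a VM to a randomly chosen remote stargate."""
    return {vdisk: random.choice(remote_stargates) for vdisk in vdisk_ids}


# A business critical VM (e.g. SQL/Exchange) with multiple vDisks ends up
# serviced by multiple controllers concurrently during a local CVM outage.
print(redirect_vdisks(
    ["os", "db-1", "db-2", "logs"],
    ["cvm-B:3260", "cvm-C:3260", "cvm-D:3260"],
))
```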

As a result, all VM I/O is load balanced across the entire Acropolis cluster, which ensures no single CVM becomes a bottleneck and VMs enjoy excellent performance even in a failure or maintenance scenario.

As I’m sure you can now see, Acropolis provides excellent resiliency and performance even during maintenance or failure scenarios.


The Key to performance is Consistency

In recent weeks I have been doing lots of proofs of concept and performance testing using tools such as Jetstress (with great success, I might add).

What I have always told customers is to focus on choosing a solution which comfortably meets their performance requirements while also delivering consistent performance.

The key word here is consistency.

Many solutions can achieve very high peak performance, especially when only testing cache performance, but this isn’t real world, as I discussed in Peak Performance vs Real World Performance.

So, with two Jetstress VMs on a 3-node Nutanix cluster (N+1 configuration), I configured Jetstress to create multiple databases which used about 85% of the available capacity per node. The nodes used were hybrid, meaning a mix of SSD and SATA drives.

What this means is the nodes have ~20% of the data within the SSD tier and the bulk of the data residing within the SATA tier, as shown on the Storage tab of the Nutanix PRISM UI below.

[Image: PRISM Storage tab showing SSD and SATA tier usage]

As Jetstress performs I/O across all data concurrently, it means that things like caching and tiering become much less effective.

For this testing, no tricks were used such as de-duplicating the Jetstress DBs (which are by design duplicates). Doing so would result in unrealistically high dedupe ratios where all data would be served from SSD/cache, producing artificially high performance and low latency. That’s not how I roll; I only talk about real performance numbers which customers can achieve in the real world.

In this post I am not going to talk about the actual IOPS result, the latency figures or the time it took to create the databases, as I’m not interested in getting into performance bake-offs. What I am going to talk about is the percentage difference in the following metrics between the nodes observed during these tests:

1. Time to create the databases: 1.73%

2. IOPS achieved: 0.44%

3. Avg Read Latency: 4.2%

As you can see the percentage difference between the nodes for these metrics is very low, meaning performance is very consistent across a Nutanix cluster.
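
For clarity, one common way to express this kind of delta is the difference between the two nodes’ results relative to their average. The snippet below shows that calculation with placeholder values (hypothetical numbers for illustration, not the actual test data).

```python
# Percentage difference between two nodes' results, relative to their mean.
# The sample numbers are placeholders for illustration, not the real results.
def pct_difference(a, b):
    return abs(a - b) / ((a + b) / 2) * 100

node_iops = (10450, 10496)            # hypothetical per-node IOPS
node_read_latency_ms = (14.0, 14.6)   # hypothetical average read latency (ms)

print(f"IOPS difference:         {pct_difference(*node_iops):.2f}%")
print(f"Read latency difference: {pct_difference(*node_read_latency_ms):.2f}%")
```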

Note: All testing was performed concurrently, and background tasks performed by the Nutanix “Curator” function, such as ILM (tiering) and disk balancing, were all running during these tests.

What does this mean?

Running business critical workloads on the same Nutanix cluster does not cause any significant noisy neighbour type issues, which can and do occur in traditional centralised shared storage solutions.

VMware have attempted to mitigate this issue with technologies such as Storage I/O Control (SIOC) and Storage DRS (SDRS), but these issues are natively eliminated thanks to the Nutanix scale-out, shared-nothing architecture (the Nutanix Xtreme Computing Platform, or XCP).

Customers can be confident that performance achieved on one node is repeatable as Nutanix clusters are scaled, even with business critical applications with large working sets which easily exceed the SSD tier.

It also means performance doesn’t “fall off the cache cliff” and become inconsistent, which has long been a fear with systems dependent on cache for performance.

Nutanix has chosen not to rely on caching to achieve high read/write performance; instead, we tune our defaults for consistent performance across large working sets and for data integrity, which means we commit writes to persistent media before acknowledging them and perform checksums on all read and write I/O. This is key for business critical applications such as MS SQL, MS Exchange and Oracle.
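
As a conceptual illustration of that write path (a sketch under my own simplifying assumptions, not Nutanix code), the example below only acknowledges a write after the data has been flushed to persistent media, and verifies a checksum on every read.

```python
# Conceptual sketch: acknowledge writes only after they are durable, and
# checksum data so corruption is detected on read. Not Nutanix source code.
import hashlib
import os


class DurableStore:
    def __init__(self, directory):
        self.directory = directory
        self.checksums = {}  # key -> sha1 of the persisted data

    def write(self, key, data: bytes) -> str:
        path = os.path.join(self.directory, key)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # commit to persistent media first...
        self.checksums[key] = hashlib.sha1(data).hexdigest()
        return "ACK"              # ...and only then acknowledge the write

    def read(self, key) -> bytes:
        with open(os.path.join(self.directory, key), "rb") as f:
            data = f.read()
        if hashlib.sha1(data).hexdigest() != self.checksums[key]:
            raise IOError(f"checksum mismatch reading {key}")
        return data
```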

Exchange 2013 & VMware – The latest round.

Recently there has been some back and forth on social media (Twitter & Blogs) around the following article published on the Exchange Team Blog:

Troubleshooting High CPU utilization issues in Exchange 2013

The article discusses several topics, including common configuration issues, over-sizing and several performance metrics which can help Exchange administrators identify and hopefully resolve performance problems.

As a result of the recently updated recommendations, the Exchange 2013 Server Role Requirements calculator has been updated as well.

The calculator can be found here: Exchange 2013 Server Role Requirements Calculator.

One key recommendation from Microsoft is the new maximum Exchange Mailbox (or Multi-Role) server sizing:

Recommended Maximum CPU Core Count: 24

Recommended Maximum Memory: 96 GB

In reply to the above, VMware have released the following article:

A Stronger Case For Virtualizing Exchange Server 2013 – Think “Performance”

Personally, I don’t think the article will have (or has had) the effect VMware and virtualization architects/admins would have liked, due to its negativity. As such, I am going to highlight the important points in this post.

In response to VMware’s article, Tony Redmond, a well-known member of the Exchange community and MVP, wrote the article below:

VMware tells Microsoft that they don’t know anything about Exchange 2013 performance

In this case, even as a strong Virtualization (and VMware) evangelist, I think Tony has some valid points.

Tony mentions the following regarding the sizing tool:

Most experienced people take the output from any general-purpose sizing tool and cast a cold eye over its recommendations to put them into context with the operational and business requirements for a deployment. In other words, the recommendations are adjusted. And yes, sometimes those recommendations are adjusted to make sure that Exchange 2013 works well when deployed on virtualized servers, either Hyper-V or VMware.

I totally agree! Whether or not the sizing tool recommends more CPUs, experienced architects should make decisions based on many factors, the most important being their real world experience, using the calculator as a secondary (or tertiary) input and adjusting for the underlying infrastructure, regardless of whether it is physical or virtualized on one of many viable hypervisors.

I also agree with the following point:

Third, wouldn’t it be better use of VMware’s time to publish well-argued and pertinent observations on how you can take the output of Microsoft sizing tools and adjust them for their platform

VMware can be considered the experts in their own technology, and they should, as Tony suggested, publish and continue to update documentation on how best to deploy Exchange successfully on vSphere. There is no advantage in an “I told you so” style blog post when Microsoft have actually published recommendations which, per the one point of VMware’s blog that I agree with, strengthen the case for virtualization of Exchange.

Companies like Nutanix, who have numerous virtualization experts across multiple hypervisors including vSphere, Hyper-V and Acropolis (which is fully supported for Windows 2012 and Exchange), should also post reference architectures and best practice guides on how to deploy Exchange successfully.

Tony also commented in reply to VMware’s post, which was not favourable towards multi-role deployments:

Combined role means a multi-role server, which I think that every reasonable expert working with Exchange has concluded is the only way to go because it increases server utilization and improves the overall resilience of any deployment. But it’s a bad thing in VMware’s world, which is a pity for them because Exchange 2016 only supports multi-role servers, so I guess they will just have to get used to that fact.

VMware’s point here is that scaling out (rather than up) generally means better overall consolidation and performance. Now, to be fair, this is true, but one cannot apply a blanket rule to all applications.

Deploying Exchange 2013 in a multi-role configuration is a good way to simplify the deployment as well as to ensure consistent performance and resiliency in the event of a server failure, as all servers can service all roles.

As Exchange is generally considered a business critical application, virtualization architects like myself should consider the recommendations for the application and the criticality of the solution to the customer, and make an informed recommendation for its deployment. For an app like Exchange, I think it’s fair to say the MS Exchange team have solid justification to recommend MSR deployments.

The impact of less scale-out (i.e.: MBX + CAS) is simply a design consideration, and not an uncommon one, so in my opinion this is not an issue for virtual deployments.

Key Takeaways:

1. Ensure you size within Microsoft’s revised vCPU / vRAM recommended configuration maximums (24 vCPUs / 96GB RAM)

2. When sizing Exchange on any Hypervisor, start small and scale up vCPU/vRAM requirements as required to avoid oversizing. (This is a major advantage of virtualization, so don’t be afraid to use it!)

3. Multi-Role Deployments are perfectly fine for Virtual environments. Exchange is a Business Critical Application, so treat it as such in your design phase.

Final thoughts:

With the ever-increasing number of cores per socket, the case to virtualize Exchange is strengthened, considering 12-18 core CPUs are not uncommon these days. As such, an 18 vCPU / 96GB RAM Exchange 2013 MSR VM could be virtualized with zero CPU/RAM overcommitment (as is generally recommended) while still running other VMs on the same host.
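
As a quick back-of-the-envelope check (the host specification below is an assumption for illustration, not a recommendation), such a VM fits comfortably within Microsoft’s maximums while leaving plenty of host capacity for other workloads:

```python
# Rough sizing check for the example above. The host spec (dual 18-core
# sockets, 512GB RAM) is a hypothetical example, not a recommendation.
MS_MAX_VCPU, MS_MAX_RAM_GB = 24, 96        # Microsoft's revised maximums

host_cores, host_ram_gb = 2 * 18, 512      # hypothetical dual-socket host
exch_vcpu, exch_ram_gb = 18, 96            # Exchange 2013 MSR VM from the example

assert exch_vcpu <= MS_MAX_VCPU and exch_ram_gb <= MS_MAX_RAM_GB

# With zero CPU/RAM overcommitment the host still has headroom for other VMs.
print(f"Cores left for other VMs: {host_cores - exch_vcpu}")        # 18
print(f"RAM left for other VMs:   {host_ram_gb - exch_ram_gb} GB")  # 416 GB
```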

This helps to remove silos within the datacenter as well as driving up utilization (without creating resource contention), which equates to lower cost, power, cooling and maintenance; the list goes on.

While virtualization does add some complexity and cost, I would argue that with newer technologies such as Nutanix, which replace complex and costly SAN/NAS storage with scale-out storage that is simple to deploy and manage, these (storage) challenges will soon be things of the past.

Nutanix also allows customers to choose a premium, feature-rich hypervisor such as ESXi, or lower cost/feature solutions such as Hyper-V or Acropolis, so customers can work with what they are comfortable with.

Acropolis, for example, is fully supported by Microsoft (see the Microsoft SVVP), can be deployed in <30 minutes with just a few clicks and is free with Nutanix, so the cost and complexity arguments against virtualizing Exchange just went out the window, while Exchange still gets all the benefits of features like Virtual Machine High Availability and live migration (vMotion).

Pleased to hear everyone’s thoughts.