Benchmark(et)ing Nonsense IOPS Comparisons, if you insist – Nutanix AOS 4.6 outperforms VSAN 6.2

As many of you know, I’ve taken a stand with many other storage professionals to try to educate the industry that peak performance is vastly different to real world performance. I covered this in a post titled: Peak Performance vs Real World Performance.

I have also given a specific example of Peak Performance vs Real World Performance with a Business Critical Application (MS Exchange), where I demonstrate that the first and most significant constraining factor for Exchange performance is compute (CPU/RAM), so achieving more IOPS is unnecessary to achieve the business outcome (which is supporting a given number of Exchange mailboxes/messages per day).

However, vendors (all of them) whose products provide storage, whether as a component such as in HCI or as a dedicated storage offering, continue to promote peak performance numbers. They do this because the industry as a whole has promoted, and continues to promote, these numbers as if they were relevant, with vendors trying to one-up each other with nonsense comparisons.

VMware and the EMC federation have made a lot of noise about In-Kernel delivering better performance than Software Defined Storage running within a VM, which is referred to by some as a VSA (Virtual Storage Appliance). At the same time the same companies/people are recommending business critical applications (vBCA) be virtualized. This is a clear contradiction, as I explain in an article I wrote titled In-Kernel versus Virtual Storage Appliance, which in short concludes by saying:

…a high performance (1M+ IOPS) solution can be delivered both In-Kernel or via a VSA, it's as simple as that. We are long past the days where a VM was a significant bottleneck (circa 2004 w/ ESX 2.x).

I stand by this statement and the in-kernel vs VSA debate is another example of nonsense comparisons which have little/no relevance in the real world. I will now (reluctantly) cover off (quickly) some marketing numbers before getting to the point of this post.

VMware VSAN 6.2

Firstly, Congratulations to VMware on this release. I believe you now have a minimally viable product thanks to the introduction of software based checksums which are essential for any storage platform.

VMW Claim One: For the VSAN 6.2 release, “delivering over 6M IOPS with an all-flash architecture”

The basic math for a 64 node cluster works out to ~93,750 IOPS per node, but as I have seen a benchmark from Intel showing 6.7 million IOPS for a 64 node cluster, let's give VMware the benefit of the doubt and assume an even 7M IOPS, which equates to 109,375 IOPS per node.
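As a quick sanity check of those per-node figures, here is a trivial back-of-the-envelope calculation (a Python sketch using only the cluster-level claims quoted above):

```python
# Back-of-the-envelope per-node IOPS derived from the published cluster-level claims.
vmware_claim_iops = 6_000_000     # "over 6M IOPS" for a 64 node all-flash cluster
intel_benchmark_iops = 7_000_000  # rounding the 6.7M Intel result up in VMware's favour
nodes = 64

print(vmware_claim_iops / nodes)     # 93750.0  -> ~93,750 IOPS per node
print(intel_benchmark_iops / nodes)  # 109375.0 -> ~109,375 IOPS per node
```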

Reference: VMware Virtual SAN Datasheet

VMW Claim Two: Highest Performance >100K IOPS per node

The graphic below (pulled directly from VMware’s website) shows their performance claims of >100K IOPS per node and >6 Million IOPS per cluster.

Reference: Introducing you to the 4th Generation Virtual SAN

Now what about Nutanix Distributed Storage Fabric (NDSF) & Acropolis Operating System (AOS) 4.6?

We're now at the point where the hardware is becoming the bottleneck, as we are saturating the performance of physical Intel S3700 enterprise-grade solid state drives (SSDs) on many of our hybrid nodes. As such we have moved on to performance testing of our NX-9460-G4 model, which has 4 nodes running Haswell CPUs and 6 x Intel S3700 SSDs per node, all in 2RU.

With AOS 4.6 running ESXi 6.0 on an NX-9460-G4 (4 x NX-9040-G4 nodes), Nutanix are seeing in excess of 150K IOPS per node, which is 600K IOPS per 2RU (Nutanix block).

The below graph shows performance per node and how the solution scales in terms of performance up to a 4 node / 1 block solution which fits within 2RU.

[Graph: NOS46Perf showing IOPS per node and scaling up to a 4 node / 1 block (2RU) solution]

So Nutanix AOS 4.6 provides approx. 36% higher performance than VSAN 6.2.

(>150K IOPS per NX-9040-G4 node compared to <=110K IOPS per all-flash VSAN 6.2 node)
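That relative difference works out as follows (again a trivial Python sketch based on the per-node numbers just quoted):

```python
# Relative per-node difference between the two figures quoted above.
nutanix_per_node = 150_000  # >150K IOPS per NX-9040-G4 node (AOS 4.6)
vsan_per_node = 110_000     # <=110K IOPS per all-flash VSAN 6.2 node

print(f"{(nutanix_per_node / vsan_per_node - 1) * 100:.1f}%")  # ~36.4% higher
```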

It should be noted the above Nutanix performance numbers have already been improved upon in upcoming releases currently going through performance engineering and QA, so this is far from the best you will see.


Enough with the nonsense marketing numbers! Let’s get to the point of the post:

These 4k 100% random read IOPS (and similar) tests are totally unrealistic.

Even assuming the 4k IOPS tests were realistic, to quote my previous article:

Peak performance is rarely a significant factor for a storage solution.

More importantly, SO WHAT if Vendor A (in this case Nutanix) has higher peak performance than Vendor B (in this case VSAN)!

What matters is customer business outcomes, not benchmark(eting)!


Wait a minute, the vendor with the higher performance is telling you peak performance doesn't matter, instead of bragging about it and trying to make it sound important?

Yes, you are reading that correctly: no one should care who has the highest unrealistic benchmark!

I wrote Things to consider when choosing infrastructure a while back to highlight that choosing the "Best of Breed" for every workload may not be a good overall strategy, as it will require management of multiple silos, which leads to inefficiency and increased costs.

The key point is that if a standard platform can meet all the customer requirements (e.g.: performance) while working within constraints such as budget, power, cooling, rack space and time to value, you're doing yourself (or your customer) a disservice by not considering it for your workloads. So if Vendor X has 10% faster performance (even for your specific workload) than Vendor Y, but Vendor Y still meets your requirements, performance shouldn't be a significant consideration when choosing a product.

Both VSAN and Nutanix are software defined storage, and I expect both will continue to rapidly improve performance through tuning done completely in software. If we were talking about a product which is dependent on offloading to hardware, then sure, performance comparisons would be relevant for longer, but VSAN and Nutanix are both 100% software and can and do improve performance in software with every release.

In 3 months, VSAN might be slightly faster. Then 3 months later Nutanix will overtake them again. In reality, peak performance rarely if ever impacts real world customer deployments, and with scale-out solutions it's even less relevant as you can simply scale.

If a solution can't scale, or does so only in 2-node mirror type configurations, then considering peak performance is much more critical. I'd suggest that if you're looking at this (legacy) style of product you have bigger issues.

Not only does performance in the software defined storage world change rapidly, so does the performance of the underlying commodity hardware, such as CPUs and SSDs. This is why it's important to consider products (like VSAN and Nutanix) that are not dependent on proprietary hardware, as hardware eventually becomes a constraint. This is why the world is moving towards software defined for storage, networking etc.

If more performance is required, the ability to add new nodes, form a heterogeneous cluster and distribute data evenly across the cluster (as NDSF does) is vastly more important than the peak IOPS difference between two products.

While you might think that this blog post is a direct attack on HCI vendors, the principle holds true for any hardware or storage vendor out there. It is only a matter of time before customers stop getting trapped in benchmark(et)ing wars. They will instead identify their real requirements and readily embrace the overall value of dramatically simple on-premises infrastructure.

In my opinion, Nutanix is miles ahead of the competition in terms of value, flexibility, operational benefits, product maturity and market-leading customer service, all of which matter far more than peak performance (where Nutanix leads anyway).

Summary:

  1. Focus on what matters and determine whether or not a solution delivers the required business outcomes. Hint: This is rarely just a matter of MOAR IOPS!
  2. Don’t waste your time in benchmark(et)ing wars or proof of concept bake offs.
  3. Nutanix AOS 4.6 outperforms VSAN 6.2
  4. A VSA can outperform an in-kernel SDS product, so let's put that in-kernel vs VSA nonsense to rest.
  5. Peak performance benchmarks still don't matter even when the vendor I work for has the highest performance. (a.k.a. My opinion doesn't change based on my employer's current product capabilities)
  6. Storage vendors ALL should stop with the peak IOPS nonsense marketing.
  7. Software-defined storage products like Nutanix and VSAN continue to rapidly improve performance, so comparisons are outdated soon after publication.
  8. Products dependent upon proprietary hardware are not the future.
  9. Put a high focus on the quality of a vendor's support.

Related Articles:

  1. Peak Performance vs Real World Performance
  2. Peak performance vs Real World – Exchange on Nutanix Acropolis Hypervisor (AHV)
  3. The Key to performance is Consistency
  4. MS Exchange Performance – Nutanix vs VSAN 6.0
  5. Scaling to 1 Million IOPS and beyond linearly!
  6. Things to consider when choosing infrastructure.

Reminder: Copies of data on the same Primary Storage are not a backup solution.

I find it difficult to understand how any Account Manager, Sales Engineer or Consultant can go to a customer, who is at least in part trusting their statements and opinions when considering new products, and claim that a product is performing a "backup" function when the data remains on the same primary storage system (failure domain).

Most vendors have metadata or snapshot based options which allow space efficient recovery points to be maintained on primary storage for fast recovery, and any vendor worth talking to will also tell you that until a FULL COPY of the data is maintained off the primary storage, it is NOT a backup.

Some vendors will play games, try to differentiate, and say they don't use snapshots and are therefore somehow amazing and unique. In reality, they can say whatever they like, but if the end result is that the data is only maintained on primary storage, then it's not a backup and you should not treat it like one.

In the old days, it was fairly common to have Primary data on one set of LUNs/RAID packs and for customers to keep full copies of data on different LUNs and underlying RAID packs before offloading to tape.

While the copy of data remained on primary storage, it at least meant that in the event the RAID pack/s hosting the primary data failed (e.g.: a double disk failure in RAID 5), data could be recovered from the copy, and if not, the customer could restore from tape.

As storage became more intelligent, keeping a full copy became less popular in favour of snapshot or metadata based copies. This makes a lot of sense, as it reduces the overheads significantly while still allowing fast recovery, provided the primary storage itself is not impacted.

However, the requirement for data to be kept off the primary storage remains, as no matter what vendor you choose, it's possible to have a catastrophic failure which means the snapshot/metadata copies on primary storage may not be available.

Also, promoting snapshots (or any form of metadata copies pointing to the same underlying blocks) as an amazing new data reduction technology which achieves 60:1 or 100:1 data reduction is misleading at best, in my opinion.

So let’s cover off a few things:

Question 1: Are snapshots or metadata copies of data stored on primary storage a backup?

Answer: No

A snapshot or metadata based copy simply makes data at some level (such as a vDisk, Virtual Machine, LUN, Container etc.) read only, and new writes (commonly referred to as delta changes) are written elsewhere.

The data still resides on the same storage, meaning if data loss occurs (say multiple drive failures or a storage system software issue), it's possible if not probable that the data referenced by the snapshot/metadata and the delta changes will all be lost (or at least unavailable) in some failure scenarios, depending on the vendor.
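To illustrate the point, below is a minimal, purely conceptual Python sketch of metadata based snapshots (redirect-on-write style). The class and structure are mine for illustration only, not any vendor's implementation; the point is simply that the snapshot and the live data reference the same underlying blocks on the same primary storage, so losing that storage loses both.

```python
# Conceptual sketch only: metadata based snapshots share physical blocks with the live data.
class PrimaryStorage:
    def __init__(self):
        self.blocks = {}      # physical blocks: block_id -> data
        self.live = {}        # live vDisk metadata: logical offset -> block_id
        self.snapshots = []   # each snapshot is just a copy of the metadata map

    def write(self, offset, data):
        block_id = len(self.blocks)             # redirect-on-write: new writes land in new blocks
        self.blocks[block_id] = data
        self.live[offset] = block_id

    def snapshot(self):
        self.snapshots.append(dict(self.live))  # metadata copy only, no data is copied

    def fail(self):
        self.blocks.clear()                     # catastrophic primary storage failure

storage = PrimaryStorage()
storage.write(0, "original data")
storage.snapshot()                  # "recovery point" kept on primary storage
storage.write(0, "delta change")    # the delta is redirected to a new block
storage.fail()                      # primary storage is lost...
# ...and the snapshot now points at blocks that no longer exist:
print(all(b in storage.blocks for b in storage.snapshots[0].values()))  # False
```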

So having snapshot or metadata based copies on primary storage as a backup, without at least one full copy in a separate failure domain, is simply asking for trouble.

Snapshots/metadata copies are only the first step in a backup solution which must ensure data is stored in at least two locations (different failure domains) so that data can be recovered in the event the primary storage is lost/unavailable for any reason.

Question 2: Are snapshots data reduction?

Answer: No

Snapshots and metadata copies don't reduce data; they simply avoid creating, and requiring the storage to store, more data than is necessary to keep point in time (or recovery point) copies (not backups) of data.

This is data avoidance, not data reduction. I cover this topic in more depth in a previous post: Deduplication ratios – What should be included in the reported ratio?

Now don't get me wrong, data avoidance (e.g.: snapshots, intelligent cloning etc.) has real value and it's something I would recommend customers leverage wherever possible, as it generally reduces the overheads on infrastructure significantly, which can help achieve business outcomes like more frequent recovery points (lower RPOs) or faster deployment/maintenance times for VDI.

However, making a claim that a customer has 60:1 or 100:1 data efficiency because they are taking frequent snapshots/metadata copies (which in many cases are unnecessary to meet business objectives) is, in my opinion, misleading customers, and worse still, claiming it's unique (as in, other vendors can't achieve the same business outcome) is just a flat out lie.
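To show how easy these ratios are to manufacture, here is a small Python sketch with made-up (hypothetical) numbers for the dataset size and snapshot schedule; the mechanics, not the specific figures, are the point:

```python
# Hypothetical example: frequent metadata copies produce a huge "efficiency" ratio
# without reducing a single byte of stored data.
dataset_tb = 10            # data actually stored on disk (hypothetical)
snapshots_per_day = 4      # hypothetical snapshot schedule
retention_days = 25
recovery_points = snapshots_per_day * retention_days  # 100 metadata copies

logical_tb = dataset_tb * (recovery_points + 1)        # what the marketing ratio counts
physical_tb = dataset_tb                               # what is actually stored

print(f"{logical_tb / physical_tb:.0f}:1")             # 101:1, with zero data reduction
```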

Now I work for Nutanix, so let's use another vendor as an example, and one which I have lots of experience with from my years at IBM. Take NetApp (a.k.a. IBM N-Series): for many years they have supported taking snapshots which are application consistent (via SnapManager) and keeping them on primary storage. They, as with many other vendors (new and legacy), do it in a way which avoids storing multiple copies of data: all delta changes are redirected on write and can be snapped at the next scheduled interval.

This results in the ability to keep lots of point in time copies without storing data multiple times. You could argue this is a ratio of “Insert crazy number here” :1 but the reality is, if the storage you have wasn’t storing 1:1 copies previously (which only a select few legacy products still do), a new solution doing similar isn’t a big step forward even if it could be argued it’s a bit more efficient.

Netapp allows these snapshots on primary storage to then be replicated to secondary storage (SnapVault) which is a different failure domain, with dedicated controller/s and disks. This allows for recovery of all data in the event the primary storage fails or is unavailable. Netapp also allow offload of snapshots to tape.

Many other vendors have similar functionality (and have for a long time), including but not limited to Pure Storage, Nutanix, EMC, Dell and IBM; the list goes on.

This functionality is table stakes… Not something unique to any one vendor or something that requires proprietary hardware to achieve.

Any vendor listed above (and others) can achieve similar levels of data efficiency (if you want to use that term) if they all perform snapshots or metadata based copies at the same frequency. Each vendor's implementation varies and each has pros and cons, but from a business outcome perspective (which is the ONLY thing that matters), it's table stakes.

Question 3: What are Snapshots/Metadata copies on Primary storage good for?

Answer: They are good for creating recovery points to help achieve Recovery Point Objectives (RPOs) when combined with replication to secondary storage and/or tape/cloud to cater for site loss scenarios. Keeping snapshots on primary storage helps speed up recovery in the event you need to roll back to a previous point in time, assuming you have not had a storage failure, e.g.: recovering a file or DB which was accidentally deleted or corrupted for whatever reason.

So there is value in snapshots/metadata copies on primary storage, but they should not be considered a backup until the data is replicated to another location, ideally offsite in a different failure domain.

Summary:

Snapshots/Metadata based copies (on primary storage) are just the first step of many in an overall backup strategy. If the data is not replicated to another failure domain, it should not be called or considered a backup.

Marketing claims of 60:1 or 100:1 data efficiency may sound good, but these sorts of numbers have been, and can be, achieved by many vendors for a long time. Be very careful when considering new infrastructure not to be misled by these sorts of marketing claims.

Most vendors don't market numbers like 60:1 or 100:1 because they understand it's table stakes and misleading for customers, and kudos to those vendors!

Snapshots/metadata copies, regardless of data efficiency ratio, are USELESS in the event of a primary storage failure unless a full copy of the data is stored off the primary storage and, depending on the business requirements, offsite.

I encourage everyone, especially the industry analysts, to help clarify this situation for customers, as there is A LOT of misinformation being spread currently which puts customers at risk in the event of primary storage failures.

Jetstress Performance Testing on Nutanix Acropolis Hypervisor (AHV) – Part 1 – The Baseline Test

The following is Part 1 of the Jetstress performance testing on Nutanix Acropolis Hypervisor (AHV) series of videos.

This video shows the following:

  1. Stopping/starting the NDSF cluster to ensure a fair starting point (no artificial pre-warming of cache etc.)
  2. The performance required for 2,500 Exchange users (100 messages/day with 2 DAG copies), being 732 Jetstress IOPS as per the MS Exchange Server Role Requirements Calculator
  3. The performance achieved by Jetstress with 8 threads using 8 vDisks (4 for DB, 4 for Logs)

The reason the demonstration is limited to 2,500 users is that the virtual machine's compute requirements already exceed the maximum recommended RAM for an Exchange 2013/2016 server (96GB). As such, no additional storage performance is required, as compute is more often than not the constraining factor.
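For context, the shape of that storage sizing exercise looks roughly like the Python sketch below. The per-mailbox IOPS multiplier and overhead factor are placeholders chosen for illustration only, not the values used by the Microsoft calculator; the authoritative target for this profile (732 Jetstress IOPS) comes from the Exchange Server Role Requirements Calculator itself.

```python
# Rough structure of an Exchange storage sizing estimate.
# The multipliers below are hypothetical placeholders, NOT the Microsoft
# calculator's actual values - always use the calculator for real sizing.
mailboxes = 2500
iops_per_mailbox = 0.1     # placeholder multiplier for a 100 messages/day profile
overhead_factor = 1.2      # placeholder headroom typically added to testing targets

estimated_iops = mailboxes * iops_per_mailbox * overhead_factor
print(round(estimated_iops))  # 300 with these placeholder values; the calculator says 732

# Either way the IOPS requirement is modest, which is why compute (the 96GB RAM
# ceiling) rather than storage is the constraint for scaling beyond 2,500 users.
```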

For more information see: Peak performance vs Real World – Exchange

Note: This demonstration is not showing the peak performance which can be achieved by Jetstress on Nutanix. In fact it’s running on a ~3 year old NX-3450 with Ivy Bridge processors and Jetstress is tuned (as the video shows) to a low thread count which still achieves >3x the required IOPS for 2500 Exchange users.

Part 1

Return to the Table of Contents