What’s .NEXT? – Acropolis! Part 1

By now many of you will probably have heard about Project “Acropolis”, the code name for the development effort in which Nutanix set out to create an Uncompromisingly Simple management platform for an optimized KVM hypervisor.

Drawing on Nutanix’s extensive skills and experience with products such as vSphere and Hyper-V, we took on the challenge of delivering similar enterprise-grade features on a platform as scalable and performant as NDFS.

Acropolis therefore had to be built into PRISM. The below screenshot shows the Home screen for PRISM in a Nutanix Acropolis environment; it looks pretty much the same as any other Nutanix solution, right? Simple!

[Screenshot: PRISM Home screen in a Nutanix Acropolis environment]

So let’s talk about how you install Acropolis. Since it’s the management platform for your Nutanix infrastructure, it is a critical component, so do you need a management cluster? No!

Acropolis is built into the Nutanix Controller VM (CVM), so it is installed by default when loading the KVM hypervisor (which is actually shipped by default).

Because it’s built into the CVM, Acropolis (and therefore all the management components) automatically scales with the Nutanix cluster, so there is no need to size the management infrastructure. There is also no need to license or maintain operating systems for management tools, further reducing cost and operational expense.

The following diagram shows a four-node Nutanix NDFS cluster running the Nutanix KVM hypervisor with Acropolis. One CVM per cluster is elected the Acropolis Master, and the remaining CVMs are Acropolis Slaves.

[Diagram: four-node Nutanix cluster with one Acropolis Master CVM and three Acropolis Slave CVMs]

The Acropolis Master is responsible for the following tasks:

  1. Scheduler for HA
  2. Network Controller
  3. Task Executors
  4. Collector/Publisher of local stats from Hypervisor
  5. VNC Proxy for VM Console connections
  6. IP address management

Each Acropolis Slave is responsible for the following tasks:

  1. Collector/Publisher of local stats from Hypervisor
  2. VNC Proxy for VM Console connections
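
To make the master/slave split more concrete, here is a minimal conceptual sketch in Python. It is purely illustrative (the names, structure and duty lists below are assumptions for explanation, not Nutanix’s actual implementation): every CVM performs the local duties, while only the elected master adds the cluster-wide ones.

```python
# Purely illustrative sketch of the Acropolis master/slave role split.
# NOT Nutanix's implementation; names and structure are assumptions.

COMMON_DUTIES = ["collect/publish local hypervisor stats",
                 "VNC proxy for local VM consoles"]
MASTER_ONLY_DUTIES = ["HA scheduling", "network control",
                      "task execution", "IP address management"]

class AcropolisService:
    def __init__(self, cvm_id, is_master=False):
        self.cvm_id = cvm_id
        self.is_master = is_master

    def duties(self):
        # Every CVM handles local duties; the elected master adds cluster-wide ones.
        return COMMON_DUTIES + (MASTER_ONLY_DUTIES if self.is_master else [])

# Example: a four-node cluster where CVM 0 has been elected master.
cluster = [AcropolisService(i, is_master=(i == 0)) for i in range(4)]
for cvm in cluster:
    role = "Master" if cvm.is_master else "Slave"
    print(f"CVM {cvm.cvm_id} ({role}): {cvm.duties()}")
```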

Acropolis is a truly distributed management platform with no dependency on external database servers. It is fully resilient, with built-in self-healing capabilities, so in the event of node or CVM failures management continues without interruption.

What does Acropolis do? Well, put simply, the things 95% of customers need, including but not limited to:

  • High Availability (Think vSphere HA)
  • Load Balancing / Virtual Machine Migrations (Think DRS & vMotion)
  • Virtual machine templates
  • Cloning (Instant and space efficient like VAAI-NAS)
  • VM operations / snapshots / console access
  • Centralised configuration of nodes (think vSphere Host Profiles)
  • Centralised management of virtual networking (think vSphere Distributed Switch)
  • Performance monitoring of physical HW, hypervisor & VMs (think vRealize Operations Manager)
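
Most of these operations can also be driven programmatically. As a hedged illustration only (the cluster address, credentials, API path, port and field names below are assumptions and vary by Prism/AOS version), the following Python sketch lists the VMs a cluster manages via the Prism REST API:

```python
# Illustrative only: the endpoint path, port and field names are assumptions
# and may differ between Prism/AOS versions. Replace host and credentials.
import requests

PRISM = "https://prism.example.local:9440"   # hypothetical cluster address
AUTH = ("admin", "password")                 # hypothetical credentials

# List VMs managed by the cluster (v1 Prism gateway API path assumed here).
resp = requests.get(f"{PRISM}/PrismGateway/services/rest/v1/vms",
                    auth=AUTH, verify=False)  # verify=False only for lab use
resp.raise_for_status()

for vm in resp.json().get("entities", []):
    print(vm.get("vmName"), vm.get("powerState"))
```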

Summary: Acropolis combines a best-of-breed hyperconverged platform with an enterprise-grade KVM management solution, dramatically simplifying the design, deployment and ongoing management of datacenter infrastructure.

In the next few parts of this series I will explore the above features and the advantages of the Acropolis solution.

My NPX Journey

I have had an amazing learning experience in the last few months, expanding my skills to a second hypervisor, Kernel-based Virtual Machine (KVM), as well as continuing to enhance my knowledge of the ever-increasing functionality of the Nutanix platform itself.

This past week I have been in Miami with some of the most talented guys in the industry, whom I have the pleasure of working with. We have been bootstrapping the Nutanix Platform Expert (NPX) program and have had numerous people submit comprehensive documentation sets which were reviewed, and those who met the (very) high bar were invited to the in-person, panel-based Nutanix Design Review (NDR).

I was lucky enough to be asked to be part of the NDR panel, as well as being invited to attempt my own NPX at the NDR.

Being on the panel was a great learning experience in itself as I was privileged to observe many candidates who presented expert level architecture, design and troubleshooting abilities across multiple hypervisors.

I presented a KVM-based design for a customer I have been working with over the last few months, who is deploying a large-scale vBCA solution on Nutanix.

I had an all-star panel made up entirely of experienced Nutants who all also happen to be VCDXs; it’s safe to say it was not an easy experience.

The Design Review section was 90 minutes, which went by in a heartbeat, where I presented my vBCA KVM design. This was followed by a 30-minute troubleshooting session and a 60-minute design scenario, both based on vSphere.

It’s a serious challenge having to present at an expert level on one hypervisor, then swap to troubleshooting and designing on a second hypervisor, so it’s safe to say that by the end of the examination I went to the bar.

As this is a bootstrap process, I was asked to leave the room while the panel finalised the scores. I was then invited back into the room and told:

Congratulations NPX #001

I am over the moon to be a part of an amazing company and to be honoured with being #001 of such a challenging certification. I intend to continue to pursue deeper-level knowledge of multiple hypervisors and everything Nutanix related to ensure I do justice to being NPX #001.

I am also pleased to say we have crowned several other NPXs, but I won’t steal their thunder by announcing their names and numbers.

For more information on the NPX program see http://go.nutanix.com/npx-application.html

Looking forward to the .NEXT conference, which is on this week!

MS Exchange Performance – Nutanix vs VSAN 6.0

When I saw a post (20+ Common VSAN Questions) by Chuck Hollis on VMware’s corporate blog claiming (extract below) a “stunning performance advantage (over Nutanix) on identical hardware with most demanding datacenter workloads”, I honestly wondered: where does he get this nonsense?

[Screenshot: extract from the VMware blog post]

Then, when I saw the Microsoft Applications on Virtual SAN 6.0 white paper released, I thought I would check out what VMware is claiming in terms of this stunning performance advantage for an application I have done a lot of work with lately: MS Exchange.

I have summarized the VMware white paper and the Nutanix testing I personally performed in the table below. These tests were not exactly the same; however, the ESXi host CPU and RAM were identical, and both tests used 2 x 10Gb networking as well as 4 x SSD devices.

The main differences: the VSAN testing used ESXi 6.0 while the Nutanix testing used ESXi 5.5 U2, which I’d say is advantage number one for VMware. Advantage number two is that VMware used two LSI controllers while my testing used one, and VMware had a cluster size of 8 whereas my testing (in this case) used only 3. The larger cluster size is a huge advantage for a distributed platform, especially VSAN since it does not have data locality, so the more nodes in the cluster, the less chance of a bottleneck.

Nutanix has one advantage: more spindles. But that advantage largely goes away when you consider they are SATA drives, compared to VSAN using SAS. If you really want to kick up a stink about Nutanix having more HDDs, take 100 IOPS per additional drive (which is much more than you can get from a SATA drive consistently) off the Nutanix Jetstress result.
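
To make that adjustment concrete, here is a quick sketch of the arithmetic. The drive counts and IOPS figure below are hypothetical placeholders for illustration only, not the actual test configurations or Jetstress results:

```python
# Hypothetical numbers purely to illustrate the adjustment described above;
# they are NOT the actual Jetstress results or drive counts from either test.
nutanix_jetstress_iops = 4000      # placeholder measured result
nutanix_hdds = 20                  # placeholder drive count
vsan_hdds = 16                     # placeholder drive count
iops_per_extra_sata_drive = 100    # generous per-drive figure from the text

extra_drives = nutanix_hdds - vsan_hdds
adjusted_iops = nutanix_jetstress_iops - extra_drives * iops_per_extra_sata_drive
print(f"Adjusted Nutanix result: {adjusted_iops} IOPS")  # 4000 - 4*100 = 3600
```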

The areas where I feel one vendor is at a disadvantage I have highlighted in red, and the opposing solution in green. Regardless of these opinions, the results really do speak for themselves.

So here is a summary of the testing performed by each vendor and the results:

 

[Table: summary of the VSAN 6.0 and Nutanix test configurations and results]

The VMware white paper did not show the Jetstress report; however, for transparency I have copied the Nutanix test summary below.

[Screenshot: Nutanix NX-8150 Jetstress test summary]

Summary: Nutanix has a stunning performance advantage over VSAN 6.0, even on near-identical (and in some respects lesser) hardware, with an older version of ESXi and lower-spec HDDs, while (apparently) having a significant disadvantage by not running in the kernel.