What’s .NEXT? – Acropolis! Part 1

By now many of you will probably have heard about Project “Acropolis”, the code name for the development effort where Nutanix set out to create an Uncompromisingly Simple management platform for an optimized KVM hypervisor.

Along the way, drawing on Nutanix’s extensive skills and experience with products such as vSphere and Hyper-V, we took on the challenge of delivering similar enterprise-grade features on a platform as scalable and performant as NDFS.

Acropolis therefore had to be built into PRISM. The screenshot below shows the Home screen for PRISM in a Nutanix Acropolis environment; it looks pretty much the same as any other Nutanix solution, right? Simple!

[Screenshot: PRISM Home screen with Acropolis]

So let’s talk about how you install Acropolis. Since it’s the management platform for your Nutanix infrastructure, it is a critical component, so do I need a management cluster? No!

Acropolis is built into the Nutanix Controller VM (CVM), so it is installed by default when loading the KVM hypervisor (which is actually shipped by default).

Because it’s built into the CVM, Acropolis (and therefore all the management components) automatically scales with the Nutanix cluster, so there is no need to size the management infrastructure. There is also no need to license or maintain operating systems for management tools, further reducing cost and operational expense.

The following diagram shows a four-node Nutanix NDFS cluster running the Nutanix KVM hypervisor with Acropolis. One CVM per cluster is elected the Acropolis Master and the remaining CVMs are Acropolis Slaves.

[Diagram: Four-node NDFS cluster with one Acropolis Master and three Acropolis Slaves]

The Acropolis Master is responsible for the following tasks:

  1. Scheduler for HA
  2. Network Controller
  3. Task Executors
  4. Collector/Publisher of local stats from Hypervisor
  5. VNC Proxy for VM Console connections
  6. IP address management

Each Acropolis Slave is responsible for the following tasks:

  1. Collector/Publisher of local stats from Hypervisor
  2. VNC Proxy for VM Console connections

Acropolis is a truly distributed management platform which has no dependency on external database servers and is fully resilient, with in-built self-healing capabilities, so in the event of a node or CVM failure, management continues without interruption.
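
To make the Master/Slave split above a little more concrete, here is a minimal sketch of how one CVM might be elected master and how the role could move if that CVM fails. This is purely illustrative and is not the actual Acropolis implementation; the class, the CVM names and the “lowest ID wins” election rule are all assumptions made for the example.

```python
# Toy model of master election among CVMs -- illustrative only, not Acropolis code.

MASTER_TASKS = [
    "HA scheduling", "Network control", "Task execution",
    "Stats collection/publishing", "VNC proxy", "IP address management",
]
SLAVE_TASKS = ["Stats collection/publishing", "VNC proxy"]

class Cluster:
    def __init__(self, cvm_ids):
        self.alive = set(cvm_ids)   # CVMs currently healthy
        self.master = None
        self.elect()

    def elect(self):
        # Assumed rule for this example: the healthy CVM with the lowest ID wins.
        self.master = min(self.alive)

    def roles(self):
        # The master's task list already includes the per-CVM tasks (stats, VNC proxy).
        return {cvm: ("Master", MASTER_TASKS) if cvm == self.master
                else ("Slave", SLAVE_TASKS)
                for cvm in sorted(self.alive)}

    def fail(self, cvm_id):
        # Self-healing: if the master's CVM (or node) fails, another CVM takes over.
        self.alive.discard(cvm_id)
        if cvm_id == self.master:
            self.elect()

cluster = Cluster(["CVM-1", "CVM-2", "CVM-3", "CVM-4"])
print(cluster.roles())        # CVM-1 is Master, the rest are Slaves
cluster.fail("CVM-1")         # simulate a node/CVM failure
print(cluster.master)         # management continues: CVM-2 takes over
```

The point of the sketch is simply that there is no separate management VM or external database to protect; the master role just moves to another CVM while the slaves keep publishing stats and serving console sessions.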

What does Acropolis do? Well, put simply, the things 95% of customers need, including but not limited to:

  • High Availability (Think vSphere HA)
  • Load Balancing / Virtual Machine Migrations (Think DRS & vMotion; a simple placement sketch follows this list)
  • Virtual machine templates
  • Cloning (Instant and space efficient like VAAI-NAS)
  • VM operations / snapshots / console access
  • Centralised configuration of nodes (think vSphere Host Profiles & vSphere Distributed Switch)
  • Centralised management of virtual networking (think vSphere Distributed Switch)
  • Performance Monitoring of Physical HW, Hypervisor & VMs (think vRealize Operations Manager)
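
To give a rough idea of what the High Availability and load-balancing items above mean in practice, the sketch below restarts the VMs from a failed node on whichever surviving nodes have the most free memory. This is a toy heuristic invented purely for illustration; it is not the actual placement logic used by Acropolis (or by vSphere HA/DRS), and the node names and numbers are made up.

```python
# Toy HA restart / placement heuristic -- illustrative only.
# Each VM from the failed node is restarted on the surviving node
# with the most free memory at that moment.

def restart_vms(failed_node, nodes):
    """nodes: {name: {"free_gb": int, "vms": {vm_name: mem_gb}}}"""
    orphans = nodes.pop(failed_node)["vms"]
    for vm, mem in sorted(orphans.items(), key=lambda kv: -kv[1]):
        target = max(nodes, key=lambda n: nodes[n]["free_gb"])
        if nodes[target]["free_gb"] < mem:
            raise RuntimeError(f"Not enough spare capacity to restart {vm}")
        nodes[target]["vms"][vm] = mem
        nodes[target]["free_gb"] -= mem
        print(f"Restarting {vm} ({mem} GB) on {target}")

nodes = {
    "node-1": {"free_gb": 10, "vms": {"vm-a": 8, "vm-b": 4}},
    "node-2": {"free_gb": 32, "vms": {}},
    "node-3": {"free_gb": 28, "vms": {}},
}
restart_vms("node-1", nodes)   # vm-a lands on node-2, vm-b on node-3
```

Real schedulers obviously weigh far more than free memory, but the basic idea is the same: on failure, workloads are restarted automatically on the remaining nodes.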

Summary: Acropolis combines a best-of-breed hyperconverged platform with an enterprise-grade KVM management solution, dramatically simplifying the design, deployment and ongoing management of datacenter infrastructure.

In the next few parts of this series I will explore the above features and the advantages of the Acropolis solution.

My Journey to Double-VCDX

It was back in 2011 that I started my journey to VCDX, which was a fantastic learning experience and has helped improve my skills as an Enterprise Architect.

After achieving VCDX-DCV in May 2012, I continued to put the skills and experience I gained during my VCDX journey into practice, and came to the realization of how little I actually know, and how much more there is to learn.

I was looking for another certification challenge; however, there were no additional VCDX certification tracks at the time. Then VCDX-Cloud and VCDX-Desktop were released, and I figured I should attempt VCDX-Cloud since my VCDX-DCV submission was actually based on a vCloud design.

At the time I didn’t have my VCAPs for Cloud, so as my VCAP-CID Exam Experience and VCAP-CIA Exam Experience posts explain, I formed a study group and sat and passed both exams over a period of a few months.

Next came the VCDX application phase. I prepared my design in a similar fashion to my original application, which basically meant reviewing the VCDX-Cloud Blueprint and ensuring all sections had been covered.

The sad part about submitting a second VCDX is that there is no requirement to redefend in person. As a result, I suspect the impression is that achieving a second VCDX is easier. While I think this is somewhat true, as the defence is no walk in the park, the VCDX submission still must be of an expert standard.

I suspect that for first-time VCDX applicants, the candidate may be given the benefit of the doubt if the documentation is not clear, has mistakes or contradicts itself in some areas, as these points can be clarified or tested by the panellists during the design defence.

In the case of subsequent applications, I suspect that Double-X candidates may not get the benefit of the doubt, as these points cannot be clarified. As a result, it could be argued the quality of the documentation needs to be of a higher standard, so that everything in the design is clear and does not require clarification.

My tips for Double-X Candidates:

In addition to the tips in my original VCDX Journey Post:

  1. Ensure your documentation is of a level which could be handed to a competent engineer and implemented with minimal or no assistance.
  2. Ensure you have covered all items in the blueprint to a standard which is higher than your previous successful VCDX submission.
  3. Make your design decisions clear and concise and ensure you have cross referenced relevant sections back to detailed customer requirements.
  4. Treat your Double-VCDX submission as seriously as, if not more seriously than, your first application. Ensure you dot all your “I”s and cross your “T”s.

I was lucky enough to have existing Double-VCDX and Nutanix colleague Magnus Andersson (@magander3) review my submission and give some excellent advice. So a big thanks, Magnus!

What next?

Well just like when I completed my first VCDX, I was already looking for another challenge. Luckily I have already found the next certification and am well on the way to submitting my application for the Nutanix Platform Expert (NPX).

The VCDX-DCV and VCDX-Cloud have both been awesome learning experiences and I think both have proven to be great preparation for my NPX attempt, so stay tuned and, with a bit of luck, you’ll be reading my NPX Journey in the not too distant future.

Support for Active Directory on vSphere

I heard something interesting today from a customer: a storage vendor who sells predominantly block storage products was trying to tell them that Active Directory domain controllers are not supported on vSphere when using NFS datastores.

The context was that the vendor was attempting to sell a traditional block-based SAN, and they were trying to compete against Nutanix. The funny thing is, Nutanix supports block storage too, so it was an uneducated and pointless argument.

Nonetheless, the topic of support for Active Directory on vSphere using NFS datastores is worth clarifying.

There are two Microsoft TechNet articles which cover support for this topic:

  1. Things to consider when you host Active Directory domain controllers in virtual hosting environments
  2. Support policy for Microsoft software that runs on non-Microsoft hardware virtualization software

Note: There is no mention of storage protocols (Block or File) in these articles.

The second article states:

for vendors who have Server Virtualization Validation Program (SVVP) validated solutions, Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software.

VMware has validated vSphere under the SVVP, which can be confirmed here: http://www.windowsservercatalog.com/svvp.aspx

The next interesting point is:

If the virtual hosting environment software correctly supports a SCSI emulation mode that supports forced unit access (FUA), un-buffered writes that Active Directory performs in this environment are passed to the host operating system. If forced unit access is not supported, you must disable the write cache on all volumes of the guest operating system that host the Active Directory database, the logs, and the checkpoint file.

Funnily enough, this is the same point as for Exchange, but where the Exchange team decided not to support it, the wider organisation has a much more intelligent policy: SCSI emulation (i.e. VMDKs on NFS datastores) is supported as long as the storage ensures writes are not acknowledged to the OS prior to being written to persistent media (i.e. not volatile memory such as RAM).
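
To illustrate what “un-buffered” or write-through writes mean from an application’s point of view, here is a minimal sketch assuming a POSIX/Linux system; it is purely illustrative and is obviously not Active Directory’s actual I/O code (the file name is made up). Opening a file with O_SYNC asks the OS to return from each write only once the data is on stable storage, which the storage stack typically honours with FUA-flagged writes or cache flushes; that end-to-end behaviour is exactly what the support statement above requires of the underlying storage.

```python
# Illustrative only: write-through ("un-buffered") writes on a POSIX system.
# Databases such as the AD database and its logs depend on semantics like
# these so that an acknowledged write really is on persistent media.
import os

# O_SYNC: write() does not return until the data (and the metadata needed to
# read it back) has reached stable storage -- assuming the device honours
# FUA / cache-flush commands rather than acknowledging from volatile cache.
fd = os.open("transaction.log", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
try:
    os.write(fd, b"committed transaction record\n")
finally:
    os.close(fd)

# The alternative: buffered writes followed by an explicit flush.
fd = os.open("transaction.log", os.O_WRONLY | os.O_APPEND, 0o600)
try:
    os.write(fd, b"another record\n")
    os.fsync(fd)  # block until the OS and the device report the data persisted
finally:
    os.close(fd)
```

If the storage acknowledged these writes from volatile cache and later lost them, the guarantee would be broken, which is why the policy focuses on write-cache behaviour rather than on whether the datastore is block or file based.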

This is a very reasonable support statement and one which has a solid technical justification.

In summary, running Active Directory is supported on vSphere with both block (iSCSI, FC, FCoE) and file (NFS) based datastores, where the storage vendor complies with the above requirements.

So check with your storage vendor to confirm whether the storage you’re using is compliant.

Nutanix 100% complies with these requirements for both Block and File storage. For more details see: Ensuring Data Integrity with Nutanix – Part 2 – Forced Unit Access (FUA) & Write Through

For more information about how NFS datastores provide true block level storage to Virtual Machines via VMDKs, check out Emulation of the SCSI Protocol which shows how all native SCSI commands are honoured by VMDKs on NFS.

Related Articles:

  1. Running Domain Controllers in Hyper-V

This post covers the same FUA requirement as vSphere and recommends the use of a UPS (to ensure write integrity) as well as enterprise-grade drives, both of which are also applicable to vSphere deployments.