Example Architectural Decision – ESXi Host Hardware Sizing (Example 1)

Problem Statement

What are the most suitable hardware specifications for this environment's ESXi hosts?

Requirements

1. Support Virtual Machines of up to 16 vCPUs and 256GB RAM
2. Achieve up to 400% CPU overcommitment
3. Achieve up to 150% RAM overcommitment
4. Ensure cluster performance is both consistent & maximized
5. Support IP based storage (NFS & iSCSI)
6. The average VM size is 1 vCPU / 4GB RAM
7. Cluster must support approximately 1000 average-sized Virtual machines on day 1
8. The solution should be scalable beyond 1000 VMs (Future-Proofing)
9. N+2 redundancy

Assumptions

1. vSphere 5.0 or later
2. vSphere Enterprise Plus licensing (to support Network I/O Control)
3. VMs range from Business Critical Applications (BCAs) to non-critical servers
4. Software licensing for applications hosted in the environment is either per vCPU or per host, where DRS “Must” rules can be used to isolate VMs to licensed ESXi hosts

Constraints

1. None

Motivation

1. Create a Scalable solution
2. Ensure high performance
3. Minimize HA overhead
4. Maximize flexibility

Architectural Decision

Use Two Socket Servers w/ >= 8 cores per socket with HT support (16 physical cores / 32 logical cores), 256GB RAM, 2 x 10GB NICs

Justification

1. Two socket, 8 core (or greater) CPUs with Hyper-Threading will provide flexibility for CPU scheduling of large numbers of VMs with diverse vCPU sizes, minimizing CPU Ready (contention)

2. Using Two Socket servers of the proposed specification will support the required 1000 average sized VMs across 18 hosts, with only 11% of cluster resources reserved for HA to meet the required N+2 redundancy (see the sizing sketch following this list).

3. A cluster size of 18 hosts will deliver excellent cluster (DRS) efficiency/flexibility with minimal overhead for HA (only 11%), thus ensuring cluster performance is both consistent & maximized.

4. The cluster can be expanded with up to 14 more hosts (to the 32 host cluster limit) in the event the average VM size is greater than anticipated or the customer experiences growth

5. Having 2 x 10GB connections should comfortably support the IP Storage, vMotion, FT and network data traffic with minimal possibility of contention. In the event of contention, Network I/O Control will be configured to minimize any impact (see Example VMware vNetworking Design w/ 2 x 10GB NICs)

6. RAM is one of the most common bottlenecks in a virtual environment. With 16 physical cores and 256GB RAM, this equates to 16GB of RAM per physical core. For the average sized VM (1 vCPU / 4GB RAM), this meets the CPU overcommitment target (up to 400%) with no RAM overcommitment, minimizing the chance of RAM becoming the bottleneck

7. In the event of a host failure, the number of Virtual machines impacted will be up to 64 (based on the assumed average sized VM), which is minimal when compared to a Four Socket ESXi host, where up to 128 VMs would be impacted by a single host outage

8. If using Four Socket ESXi hosts, the cluster size would be approx 10 hosts and approx 20% of cluster resources would have to be reserved for HA to meet the N+2 redundancy requirement. This cluster size is less efficient from a DRS perspective, and the higher HA overhead would equate to higher CapEx and as a result lower ROI

9. The solution supports Virtual machines of up to 16 vCPUs and 256GB RAM, although a VM of this size would be discouraged in favour of a scale out approach (where possible)

10. The cluster aligns with a virtualization friendly “Scale out” methodology

11. Using smaller hosts (either single socket, or fewer cores per socket) would not meet the requirement to support Virtual machines of up to 16 vCPUs and 256GB RAM, would likely require multiple clusters, and would require additional 10GB and 1GB cabling as compared to the Two Socket configuration

12. The Two Socket configuration allows the cluster to be scaled (expanded) at a very granular level (if required), reducing CapEx and minimizing the wasted/unused cluster capacity that would result from adding larger hosts

13. Enabling features such as Distributed Power Management (DPM) is more attractive and lower risk for larger clusters and may result in lower environmental costs (ie: Power / Cooling)
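
The arithmetic behind justifications 2, 6, 7 and 8 can be checked with the minimal sizing sketch below. It assumes the stated averages (1 vCPU / 4GB RAM per VM), 400% CPU overcommitment, no RAM overcommitment and N+2 redundancy; the figures are illustrative only, not a definitive capacity plan.

```python
# Minimal host-count / HA-overhead sizing sketch (illustrative assumptions only).
import math

def cluster_sizing(cores_per_host, ram_gb_per_host, vms=1000,
                   vm_vcpu=1, vm_ram_gb=4,
                   cpu_overcommit=4.0, ram_overcommit=1.0, ha_spare_hosts=2):
    """Return (VMs per host, hosts required, HA overhead %) for a host spec."""
    vms_by_cpu = cores_per_host * cpu_overcommit / vm_vcpu
    vms_by_ram = ram_gb_per_host * ram_overcommit / vm_ram_gb
    vms_per_host = int(min(vms_by_cpu, vms_by_ram))          # limiting resource wins
    hosts = math.ceil(vms / vms_per_host) + ha_spare_hosts   # N+2 redundancy
    ha_overhead_pct = ha_spare_hosts / hosts * 100
    return vms_per_host, hosts, round(ha_overhead_pct, 1)

# Two Socket host: 16 physical cores, 256GB RAM -> (64 VMs/host, 18 hosts, 11.1% HA overhead)
print(cluster_sizing(cores_per_host=16, ram_gb_per_host=256))

# Four Socket host: 32 physical cores, 512GB RAM -> (128 VMs/host, 10 hosts, 20.0% HA overhead)
print(cluster_sizing(cores_per_host=32, ram_gb_per_host=512))
```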

Alternatives

1. Use Four Socket Servers w/ >= 8 cores per socket, 512GB RAM, 4 x 10GB NICs
2. Use Single Socket Servers w/ >= 8 cores, 128GB RAM, 2 x 10GB NICs
3. Use Two Socket Servers w/ >= 8 cores, 512GB RAM, 2 x 10GB NICs
4. Use Two Socket Servers w/ >= 8 cores, 384GB RAM, 2 x 10GB NICs
5. Have two clusters of 9 hosts with the recommended hardware specifications

Implications

1. Additional IP addresses for ESXi Management, vMotion, FT & out-of-band management will be required as compared to a solution using larger hosts

2. Additional out-of-band management cabling will be required as compared to a solution using larger hosts

Related Articles

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)

2. Example VMware vNetworking Design w/ 2 x 10GB NICs

3. Network I/O Control Shares/Limits for ESXi Host using IP Storage

4. VMware Clusters – Scale up for Scale out?

5. Jumbo Frames for IP Storage (Do not use Jumbo Frames)

6. Jumbo Frames for IP Storage (Use Jumbo Frames)


vSphere 5.1 Single Sign On (SSO) Configuration – Architectural Decision flowchart

Below is the second architectural decision flowchart in my new series; it covers a new feature in vSphere 5.1, Single Sign On (SSO).

There has been a lot of discussion around “Best Practices” for SSO, so I have taken the approach of creating this flowchart to cover as many scenarios as possible.

I would recommend that you validate that any configuration the flowchart arrives at is suitable for your environment prior to implementing it in production.

The flowchart is designed to be used as a guide only, not a definitive decision making resource.

This also complements some of my previous example architectural decisions, which are shown in the related topics section below.

A special thanks to Michael Webster (VCDX#66) @vcdxnz001 & James Wirth (VCDX#83) @JimmyWally81 for their review and contributions to this flowchart.

SSO flowchart V1.0

Related Articles

1. Example Architectural Decision – SSO deployment mode over Active/Active Datacenters

2. vCenter Single Sign-On – Part 1: What is vCenter Single Sign-On? – By Justin King @vCenterguy

3. vCenter Single Sign-On – Part 2: Deployment options – By Justin King @vCenterguy

4. vCenter Single Sign-On – Part 3: Availability – By Justin King @vCenterguy

5. vCenter Single Sign-On – Part 4: Pre-Install Requirements – By Justin King @vCenterguy

6. Disabling vSphere 5.1 Single Sign-on – Long White Virtual Clouds – By Michael Webster @vcdxnz001

7. VMware KB: Multisite Single Sign-On deployment best practices

8. Where is the Best Practice Guide for SSO? | VMware Support Insider

Example Architectural Decision – Site Recovery Manager Server – Physical or Virtual?

Problem Statement

To ensure the Production vSphere environment/s can meet/exceed the required RTOs in the event of a declared site failure, what is the most suitable way to deploy VMware Site Recovery Manager: on a physical or a virtual machine?

Requirements

1. Meet/Exceed RTO requirements

2. Ensure solution is fully supported

3. SRM must be highly available, or able to be recovered rapidly, to ensure management/recovery of the virtual infrastructure

4. Where possible, reduce the CAPEX and OPEX for the solution

5. Ensure the environment can be easily maintained in BAU

Assumptions

1. Sufficient compute capacity in the Management cluster for an additional VM

2. SRM database is hosted on an SQL server

3. vSphere Cluster (ideally Management cluster) has N+1 availability

Constraints

1. None

Motivation

1. Reduce CAPEX and OPEX

2. Reduce the complexity of BAU maintenance / upgrades

3. Reduce power / cooling / rackspace usage in datacenter

Architectural Decision

Install Site Recovery Manager on a Virtual machine

Justification

1. Ongoing datacenter costs relating to Power / Cooling and Rackspace are avoided

2. Placing Site Recovery Manager on a Virtual machine ensures the application benefits from the availability, load balancing, and fault resilience capabilities provided by vSphere

3. The CAPEX of a virtual machine is lower than that of a physical system, especially when taking into consideration the network/storage connectivity that additional physical hardware would require

4. The OPEX of a virtual machine is lower than that of a physical system due to no hardware maintenance, minimal/no additional power usage, and no cooling costs

5. Improved scalability and the ability to dynamically add additional resources (where required) in response to increased resource consumption by the VM. Note: The guest operating system must support Hot Add / Hot Plug, and the features must be enabled while the VM is shut down. Where these features are not supported, virtual hardware can be added with a short outage.

6. Improved manageability, as the VMware abstraction layer makes day-to-day tasks such as backup/recovery easier

7. Ability to non-disruptively migrate to new hardware where EVC is configured in a compatible mode and enabled between hosts within the vSphere datacenter

Alternatives

1. Place SRM on a physical server

Implications

1. For some storage arrays, the SRM server needs access to admin LUNs, and using a virtual machine may increase complexity due to the requirement for RDMs

I would like to thank James Wirth (VCDX#83) @jimmywally81 for his contribution to this example architectural decision.

Related Articles

1. Site Recovery Manager Deployment Location

2. Swap file location for SRM protected VMs
