For a long time I have been sizing vSphere clusters for customers, and I am regularly asked how I work out the overcommitment ratio and calculate a suitable percentage of cluster resources to reserve for HA.
This calculator lets you work both out quickly and easily, while also advising on the availability level recommended for the cluster based on its size.
Note that it is focused on virtual server clusters; for VDI solutions I recommend the VDI Calculator by @andreleibovici.
All you need to do is enter the following details into the yellow fields:
1. The number of ESXi hosts
2. The total CPU sockets per host
3. GHz per core
4. Physical cores per CPU socket (not hyper-threads)
5. Total RAM per host (in GB)
6. Total number of VMs (for the whole cluster)
7. The desired availability level (N+x)
Next, enter the total number of vCPUs and the total vRAM assigned (or expected to be assigned) to VMs in the cluster.
The calculator will then output the following:
1. Total Cluster Resources
This is the total physical cores and physical RAM in the cluster.
2. The total cluster GHz
Self-explanatory: total physical cores multiplied by GHz per core.
3. The percentage of Cluster Resources reserved for HA
This is calculated from the availability level specified.
Note: The recommended availability level is calculated and displayed on the same line as the “Desired Availability Level”.
4. The total available Cluster Resources
This is the total cluster resources minus the percentage of cluster resources reserved for HA. Note: This is not how vSphere HA calculates available cluster resources; it is a method I use which is conservative and ensures performance does not degrade in the event of host failures up to the configured availability level.
Finally, it calculates
5. The Overcommitment Ratio for CPU and RAM
This is represented as a ratio, so a result of “1” means no overcommitment.
A result of “4” would be a 4:1 overcommitment or 400%.
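For readers who want the formulas behind these outputs, here is a minimal Python sketch of the logic as I read it. The function and variable names are my own, and the HA reserve formula (x hosts' worth of capacity out of N for an N+x availability level) is inferred from the N+1 examples discussed in the comments, not lifted from the spreadsheet itself.

```python
def cluster_sizing(hosts, sockets_per_host, cores_per_socket, ghz_per_core,
                   ram_gb_per_host, vcpus, vram_gb, failures_to_tolerate=1):
    # 1. Total cluster resources: physical cores and physical RAM
    total_cores = hosts * sockets_per_host * cores_per_socket
    total_ram_gb = hosts * ram_gb_per_host

    # 2. Total cluster GHz
    total_ghz = total_cores * ghz_per_core

    # 3. Fraction of cluster resources reserved for HA (N+x assumed
    #    to reserve x hosts' worth of capacity out of N)
    ha_reserve = failures_to_tolerate / hosts

    # 4. Available cluster resources: total minus the HA reserve
    avail_cores = total_cores * (1 - ha_reserve)
    avail_ram_gb = total_ram_gb * (1 - ha_reserve)

    # 5. Overcommitment ratios (1.0 means no overcommitment)
    return {
        "total_cores": total_cores,
        "total_ram_gb": total_ram_gb,
        "total_ghz": total_ghz,
        "ha_reserve_pct": 100 * ha_reserve,
        "cpu_overcommit": vcpus / avail_cores,
        "ram_overcommit": vram_gb / avail_ram_gb,
    }
```

Feeding in the four-host example from the comments below (4 hosts, 1 socket of 12 cores at 2.5 GHz, 512 GB RAM per host, 120 vCPUs, 1024 GB vRAM, N+1) returns the 25% reserve and the 3.3:1 CPU and 0.67:1 RAM ratios quoted there.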
The tool then shows a “Rule of Thumb” for overcommitment levels of the vSphere cluster.
Simply modify the number of hosts, cores per socket, and RAM per host until you have the desired overcommitment levels; then you can print the sizing chart for your design.
Fantastic! I can't see the date of the post. Is it still valid nowadays?
Thank you!
Still very much valid, cheers.
Josh, for the “total RAM per host” field, is it in GB? So 512 GB would be entered as 512, or is it a different unit? Thanks.
Hi Dave,
Yes, it's in GB, so 512 would equal 512 GB.
Cheers
Josh
Awesome, thanks! I built the environment with 4 ESXi hosts, each with one 12-core CPU (2.5 GHz) and 512 GB of RAM. Spec’d 30 total VMs for now, with a total vCPU count of 120 and total vRAM of about 1 TB. It came back around 2.5 overcommitted. Does that sound right?
2.5:1 for CPU, yes, and 0.5:1 for RAM… but this is with a 0% HA reservation. Add 25% for N+1 as recommended and you’ll have 3.3:1 for CPU and 0.67:1 for RAM. CPU is within what I normally recommend for mixed server workloads; RAM is obviously no issue.
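(For anyone sanity-checking those figures, here is the arithmetic as a quick sketch; units are GB, and the 25% reserve is N+1 spread across 4 hosts.)

```python
cores = 4 * 12                  # 48 physical cores in the cluster
ram_gb = 4 * 512                # 2048 GB of physical RAM

print(120 / cores)              # 2.5  -> 2.5:1 CPU with 0% HA reservation
print(1024 / ram_gb)            # 0.5  -> 0.5:1 RAM with 0% HA reservation

reserve = 1 / 4                 # N+1 across 4 hosts = 25% reserved
print(120 / (cores * (1 - reserve)))    # ~3.33 -> 3.3:1 CPU
print(1024 / (ram_gb * (1 - reserve)))  # ~0.67 -> 0.67:1 RAM
```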
This environment will be for CRM and will be mixed. Of the VMs, around 5 or 6 will be pretty beefy servers with at least 128 GB of RAM each and about 8 vCPUs. The other VMs are smaller and perform other functions, etc.
Josh, is the 0.67:1 for RAM that you mentioned just a straight percentage of the usage? Or does it also include some back-end VM usage? Thanks much!
Hi Josh,
Wouldn't the total cluster resources be the same as the total available resources? As far as CPU scheduling goes, all cores in the cluster are available, right?
Don't worry, I figured out the approach. Think before typing 😉
In the event of an HA failover you don't want the overcommit ratio to increase, even though when everything is running fine the total cluster resources would match the total available from a CPU scheduling perspective.
Hi Josh, amazing job! Thanks for sharing!
Should the “Total number of VMs” field be filled out with the number of VMs per host, or per cluster (all hosts)?
Willian Porfirio
Total VMs for the cluster 🙂
Hi Josh, why didn't you use hyper-thread vCPUs in the calculation instead of physical CPU cores?
Hi Tim, HT is assumed as part of the calculation. Also, hyper-threads are not equal to physical cores, as I'm sure you're aware, so sizing to physical cores and using HT to assist with CPU scheduling efficiency is a safe sizing practice. E.g. a 4:1 overcommitment of physical cores means 2:1 of logical cores, which is generally a safe overcommitment ratio for general server workloads. Hope that makes sense.
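(To put concrete numbers on that hyper-threading point, a quick sketch assuming 2 hyper-threads per physical core:)

```python
physical_cores = 48
logical_cores = physical_cores * 2     # 2 hyper-threads per core

vcpus = 192                            # sized at 4:1 against physical cores
print(vcpus / physical_cores)          # 4.0 -> 4:1 physical
print(vcpus / logical_cores)           # 2.0 -> 2:1 logical
```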
Thanks Josh for the quick reply, it makes sense. What is your suggestion for a memory overcommit ratio for general server workloads? I generally use 1.25:1.
These days 1:1, since TPS is now disabled by default.
Hi,
This is a great idea. I would like to know the formulas behind these calculations. Can you share them with me?
Thanks for sharing, very useful with more than 44 ESXi hosts in a cluster.