How much CPU ready is OK?

I have noticed a lot of search queries hitting my blog asking:

Question: How much CPU ready is OK?

so I thought I would address this question with a quick post.

Of course the answer is "it depends". For example, server workloads have a lower tolerance for CPU ready than desktop workloads, but as a rule of thumb, here are my thoughts.

For production server workloads:

<2.5% CPU Ready
Generally no problem!

2.5%-5% CPU Ready
Minimal contention that should be monitored during peak times.

5%-10% CPU Ready
Significant contention that should be investigated and addressed.

>10% CPU Ready
Serious contention to be investigated and addressed ASAP!

In my experience, the above has served well as a rule of thumb.
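To make the conversion concrete, here is a minimal Python sketch (the function names, band values and sample numbers are my own, with the bands taken from the table above). vCenter's performance charts report CPU ready as a "summation" value in milliseconds per sampling interval, and VMware KB 2002181 describes converting that to a percentage:

```python
import math

def cpu_ready_percent(summation_ms, interval_s=20):
    """Convert a CPU ready summation (ms) for one sampling interval to a %.

    Realtime charts sample every 20 s; past-day charts every 300 s.
    For a whole-VM summation, divide the result by the vCPU count to
    get the average per-vCPU figure.
    """
    return summation_ms / (interval_s * 1000.0) * 100.0

# Production rule-of-thumb bands from the table above.
PROD_BANDS = [(2.5, "generally no problem"),
              (5.0, "minimal contention - monitor during peaks"),
              (10.0, "significant contention - investigate & address"),
              (math.inf, "serious contention - investigate & address ASAP")]

def classify(ready_pct, bands=PROD_BANDS):
    """Map a ready % onto the first band whose upper bound it falls under."""
    for upper, verdict in bands:
        if ready_pct < upper:
            return verdict

# e.g. a VM showing 1,400 ms of ready time in a 20 s realtime sample:
pct = cpu_ready_percent(1400)                     # -> 7.0
print(f"{pct:.1f}% CPU ready: {classify(pct)}")   # significant contention
```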

However, applications which are latency sensitive may be severely impacted even by low levels of CPU ready. These types of VMs should be placed on clusters with lower CPU overcommitment, leverage DRS rules to separate them from contending workloads, or in extreme cases, be given dedicated clusters.

On the flip side, some servers are much more tolerant of CPU ready, and 5%-10% CPU ready or higher may not noticeably impact performance.

Keep in mind that setting CPU reservations does not solve CPU ready: a reservation guarantees CPU cycles once the VM is scheduled, but its vCPUs still have to wait for physical cores to become available. See my post on the topic for more details.

VMware vCenter Operations is a tool which can help you quickly identify contention (including CPU contention) within your vSphere environment.

For virtual desktop workloads, the acceptable level of CPU ready will largely depend on the individual user (e.g., a power user versus a task worker). Keep in mind that virtual desktop deployments generally have high CPU consolidation ratios, from around 6:1 all the way to >12:1.

I would suggest the following, again as a rule of thumb:

<5% CPU Ready
Generally no problem!

5%-10% CPU Ready
Minimal contention that should be monitored during peak times.

>10% CPU Ready
Contention to be investigated and addressed where the end-user experience is being impacted.

Anything higher will likely be impacting your users and should be investigated.
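Plugging the desktop bands into the same sketch (again, the band values come straight from the table above, and cpu_ready_percent and classify are from the earlier sketch):

```python
import math

# Desktop rule-of-thumb bands from the table above.
VDI_BANDS = [(5.0, "generally no problem"),
             (10.0, "minimal contention - monitor during peaks"),
             (math.inf, "investigate & address where end-user experience is impacted")]

# e.g. 2,400 ms of ready time in a 20 s realtime sample -> 12.0%
pct = cpu_ready_percent(2400)
print(f"{pct:.1f}% CPU ready: {classify(pct, VDI_BANDS)}")
```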

VMware have recently released vCenter Operations for View, which you could use to monitor your VMware View environment.

vCloud Suite 5.1 Upgrade Guide

I just came across an unofficial vCloud Suite 5.1 upgrade guide by Jad El-Zem which covers off the steps involved and a few gotchas to watch out for.

VMware Blogs – vCloud Suite 5.1 Solution Upgrade Guide

Example Architectural Decision – Securing vMotion & Fault Tolerance Traffic in IaaS/Cloud Environments

Problem Statement

vMotion and Fault Tolerance logging traffic is unencrypted, and anyone with access to the same VLAN/network could potentially view and/or compromise this traffic. How can the environment be made as secure as possible, ensuring this traffic is isolated in a multi-tenant/multi-department environment?

Assumptions

1. vMotion and FT are required in the vSphere cluster(s) (although FT is currently not supported for VMs hosted with vCloud Director)
2. IP storage is being used, and vNetworking provides 2 x 10Gb NICs for non-virtual-machine traffic such as VMkernel interfaces and 2 x 10Gb NICs for virtual machine traffic (similar to Example vNetworking Design for IP Storage)
3. vSphere 4.0 or later (required for distributed switches)

Motivation

1. Ensure maximum security and performance for vMotion and FT traffic
2. Prevent vMotion and/or FT traffic impacting production virtual machines

Architectural Decision

vMotion and Fault Tolerance logging traffic will each have a dedicated non-routable VLAN, hosted on a dvSwitch that is physically separate from the virtual machine distributed virtual switch.

Justification

1. vMotion / FT traffic does not require external (or public) access
2. A VLAN per function ensures maximum security and performance with minimal design/implementation overhead
3. Prevents vMotion and/or FT traffic from impacting production virtual machines (and vice versa), since the traffic does not share a broadcast domain with them
4. Ensures vMotion/FT traffic cannot leave its respective dedicated VLAN and potentially be sniffed

Implications

1. Two (2) VLANs with private, non-routable IP ranges are required to be presented over 802.1Q trunks to the appropriate pNICs (a sketch of creating the corresponding port groups follows below)
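As an illustration only (the vCenter address, dvSwitch name and VLAN IDs below are assumptions, and in practice this decision would typically be implemented via the vSphere Client or PowerCLI), a pyVmomi sketch creating the two dedicated, VLAN-backed port groups on the physically separate VMkernel dvSwitch might look like this:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Assumed values for illustration: vCenter credentials, the name of the
# physically separate VMkernel dvSwitch, and the two non-routable VLANs.
VCENTER, USER, PWD = "vcenter.example.com", "administrator@vsphere.local", "..."
DVS_NAME = "dvSwitch-VMkernel"
PORTGROUPS = [("vMotion", 90), ("FT-Logging", 91)]

def pg_spec(name, vlan_id):
    """Build a port group spec pinned to a dedicated VLAN ID."""
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.name = name
    spec.type = "earlyBinding"
    spec.numPorts = 8
    port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_cfg.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)
    spec.defaultPortConfig = port_cfg
    return spec

ctx = ssl._create_unverified_context()  # lab only; use valid certs in prod
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == DVS_NAME)
    # One dedicated non-routable VLAN per function, per the decision above.
    dvs.AddDVPortgroup_Task([pg_spec(n, v) for n, v in PORTGROUPS])
finally:
    Disconnect(si)
```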

Alternatives

1. vMotion / FT share the ESXi management VLAN – this would increase the risk of traffic being intercepted and “sniffed”
2. vMotion / FT share a dvSwitch with virtual machine networks while still running within dedicated non-routable VLANs over 802.1Q