Example Architectural Decision – Transparent Page Sharing (TPS) Configuration for QA / Pre-Production Servers

Problem Statement

In a VMware vSphere environment, with future releases of ESXi disabling Transparent Page Sharing by default, what is the most suitable TPS configuration for an environment running Quality Assurance or Pre-Production server workloads?

Assumptions

1. TPS is disabled by default
2. Storage is expensive
3. Two-socket ESXi hosts have been chosen to align with a scale-out methodology.
4. The average server VM has 2-4 vCPUs and 4-8 GB RAM, with some larger.
5. Memory is the first compute level constraint.
6. HA Admission Control policy used is “Percentage of Cluster Resources reserved for HA”
7. vSphere 5.5 or earlier

Requirements

1. The environment must deliver consistent performance
2. Minimize the cost of shared storage

Motivation

1. Reduce complexity where possible.
2. Maximize the efficiency of the infrastructure

Architectural Decision

Leave TPS disabled (default) and leave Large Memory pages enabled (default).

Justification

1. QA/Pre-Production environments should be as close as possible to the configuration of the actual production environment. This is to ensure consistency between QA/Pre-Production validation and production functionality and performance.
2. Setting 100% memory reservations ensures consistent performance by eliminating the possibility of swapping.
3. The 100% memory reservation also eliminates the capacity consumed by the vswap file, which saves space on the shared storage and removes the impact on the storage in the event of swapping (see the sketch after this list).
4. RAM is cheaper than Tier 1 storage (which is recommended for vSwap storage to ensure minimal performance impact during swapping), so the increased cost of memory in the hosts is easily offset by the saving in Tier 1 shared storage.
5. Simplicity. Leaving default settings is advantageous from both an architectural and operational perspective. Example: ESXi patching can cause settings to revert to default, which could negate TPS savings and put a sudden high demand on storage where TPS savings are expected.
6. TPS savings for server workloads are typically much lower than for desktop workloads, and as a result TPS is less attractive.
7. The decision has been made to use two-socket ESXi hosts and scale out, so the TPS savings per host will be lower than on a four-socket server with double the RAM.
8. Because the full RAM of every VM is reserved, HA admission control (using “Percentage of Cluster Resources Reserved for HA”) will calculate fail-over requirements from actual memory demand, so performance will be approximately the same in the event of a fail-over, leading to more consistent performance under a wider range of circumstances.
9. Lower core-count (and lower-cost) CPUs will likely be viable, as RAM will likely be the first constraint on further consolidation.
10. Removes the real or perceived security risk of sensitive information being gathered from other VMs via TPS, as described in VMware KB 2080735.
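
To make the vswap saving in point 3 concrete, here is a minimal sketch (with a purely hypothetical VM inventory) of how much Tier 1 capacity the vswap files would otherwise consume. It assumes the usual behaviour of the .vswp file being sized at roughly the configured vRAM minus the memory reservation.

```python
# Minimal sketch: vswap capacity consumed on shared storage with and without
# 100% memory reservations. vSphere sizes a VM's .vswp file at roughly
# (configured vRAM - memory reservation), so a full reservation shrinks it to ~0 GB.
# The VM inventory below is a hypothetical example, not from a real environment.

vms = {"qa-app01": 8, "qa-db01": 16, "qa-web01": 4}   # configured vRAM in GB


def vswap_gb(vram_gb, reservation_pct):
    """Approximate .vswp size: the unreserved portion of configured vRAM."""
    return vram_gb * (1 - reservation_pct / 100)


no_reservation = sum(vswap_gb(gb, 0) for gb in vms.values())
full_reservation = sum(vswap_gb(gb, 100) for gb in vms.values())

print(f"vswap with no reservation:   {no_reservation:.0f} GB on Tier 1 storage")
print(f"vswap with 100% reservation: {full_reservation:.0f} GB on Tier 1 storage")
```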

Implications

1. Using 100% memory reservations requires the ESXi hosts and the cluster to be sized at a 1:1 ratio of vRAM to pRAM (physical RAM), including N+1 so a host failure can be tolerated (see the sizing sketch after this list).
2. Increased RAM costs
3. No memory overcommitment can be achieved
4. Potential for lower CPU utilization / overcommitment as RAM may become the first constraint.
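
As a rough illustration of Implication 1, the sketch below sizes a cluster at a 1:1 vRAM:pRAM ratio and adds one host for N+1. All figures (total vRAM, host RAM, overhead allowance) are assumptions for illustration, not recommendations.

```python
# Minimal sketch: cluster RAM sizing at a 1:1 vRAM:pRAM ratio with N+1 hosts.
# All figures are assumptions for illustration only.
import math

total_vram_gb = 3000      # sum of configured (and 100% reserved) vRAM across all VMs
host_pram_gb = 512        # physical RAM per two-socket host
overhead_pct = 0.10       # assumed allowance for ESXi and per-VM memory overhead

usable_per_host_gb = host_pram_gb * (1 - overhead_pct)
hosts_for_capacity = math.ceil(total_vram_gb / usable_per_host_gb)
hosts_to_deploy = hosts_for_capacity + 1   # N+1 so one host failure can be tolerated

print(f"Hosts needed for 1:1 vRAM:pRAM: {hosts_for_capacity}")
print(f"Hosts to deploy including N+1:  {hosts_to_deploy}")
```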

Alternatives

1. Use 50% reservation and enable TPS
2. Use no reservation, enable TPS, and disable large pages

Related Articles:

1. Transparent Page Sharing (TPS) Example Architectural Decisions Register

2. The Impact of Transparent Page Sharing (TPS) being disabled by default @josh_odgers (VCDX#90)

3. Future direction of disabling TPS by default and its impact on capacity planning – @FrankDenneman (VCDX #29)

4. Transparent Page Sharing Vulnerable, Yet Largely Irrelevant – @ChrisWahl (VCDX#104)

Example Architectural Decision – Transparent Page Sharing (TPS) Configuration for Mixed Production Servers (1 of 2)

Problem Statement

In a VMware vSphere environment, with future releases of ESXi disabling Transparent Page Sharing by default, what is the most suitable TPS configuration for an environment running mixed production server workloads?

Assumptions

1. TPS is disabled by default
2. Storage is expensive
3. Two-socket ESXi hosts have been chosen to align with a scale-out methodology.
4. The average server VM has 2-4 vCPUs and 4-8 GB RAM, with some larger.
5. Memory is the first compute level constraint.
6. HA Admission Control policy used is “Percentage of Cluster Resources reserved for HA”
7. vSphere 5.5 or earlier

Requirements

1. The environment must deliver consistent performance
2. Minimize the cost of shared storage

Motivation

1. Reduce complexity where possible.
2. Maximize the efficiency of the infrastructure

Architectural Decision

Leave TPS disabled (default) and leave Large Memory pages enabled (default).

Justification

1. Setting 100% memory reservations ensures consistent performance by eliminating the possibility of swapping.
2. The 100% memory reservation also eliminates the capacity consumed by the vswap file, which saves space on the shared storage and removes the impact on the storage in the event of swapping.
3. RAM is cheaper than Tier 1 storage (which is recommended for vSwap storage to ensure minimal performance impact during swapping), so the increased cost of memory in the hosts is easily offset by the saving in Tier 1 shared storage.
4. Simplicity. Leaving default settings is advantageous from both an architectural and operational perspective. Example: ESXi patching can cause settings to revert to default, which could negate TPS savings and put a sudden high demand on storage where TPS savings are expected.
5. TPS savings for server workloads are typically much lower than for desktop workloads, and as a result TPS is less attractive.
6. The decision has been made to use two-socket ESXi hosts and scale out, so the TPS savings per host will be lower than on a four-socket server with double the RAM.
7. Because the full RAM of every VM is reserved, HA admission control (using “Percentage of Cluster Resources Reserved for HA”) will calculate fail-over requirements from actual memory demand, so performance will be approximately the same in the event of a fail-over, leading to more consistent performance under a wider range of circumstances (see the sketch after this list).
8. Lower core-count (and lower-cost) CPUs will likely be viable, as RAM will likely be the first constraint on further consolidation.
9. Removes the real or perceived security risk of sensitive information being gathered from other VMs via TPS, as described in VMware KB 2080735.
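
The sketch below illustrates the memory check behind the “Percentage of Cluster Resources Reserved for HA” policy mentioned in point 7. It is a simplified model with assumed figures; the actual vSphere admission control calculation also accounts for per-VM overheads.

```python
# Minimal sketch: simplified memory check for the "Percentage of Cluster Resources
# Reserved for HA" admission control policy. With 100% reservations, each VM's
# reservation equals its configured vRAM, so the check reflects real memory demand.
# All figures are assumed examples.

hosts = 4
host_pram_gb = 512
reserved_for_ha_pct = 25     # roughly equivalent to N+1 in a four-host cluster

cluster_memory_gb = hosts * host_pram_gb
available_for_vms_gb = cluster_memory_gb * (1 - reserved_for_ha_pct / 100)

powered_on_reservations_gb = 1400   # sum of memory reservations of powered-on VMs

can_power_on_more = powered_on_reservations_gb < available_for_vms_gb
print(f"Cluster memory:              {cluster_memory_gb} GB")
print(f"Available after HA reserve:  {available_for_vms_gb:.0f} GB")
print(f"Reserved by powered-on VMs:  {powered_on_reservations_gb} GB")
print(f"Room to power on more VMs:   {can_power_on_more}")
```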

Implications

1. Using 100% memory reservations requires the ESXi hosts and the cluster to be sized at a 1:1 ratio of vRAM to pRAM (physical RAM), including N+1 so a host failure can be tolerated.
2. Increased RAM costs
3. No memory overcommitment can be achieved
4. Potential for lower CPU utilization / overcommitment as RAM may become the first constraint.

Alternatives

1. Use 50% reservation and enable TPS
2. Use no reservation, enable TPS, and disable large pages

Related Articles:

1. The Impact of Transparent Page Sharing (TPS) being disabled by default @josh_odgers (VCDX#90)

2. Example Architectural Decision – Transparent Page Sharing (TPS) Configuration for Production Servers (2 of 2)

3. Future direction of disabling TPS by default and its impact on capacity planning – @FrankDenneman (VCDX #29)

4. Transparent Page Sharing Vulnerable, Yet Largely Irrelevant – @ChrisWahl (VCDX#104)

Rule of Thumb: Sizing for Storage Performance in the new world.

In the new world, where storage performance is decoupled from capacity thanks to read/write caching and Hyper-Converged solutions, I always get asked:

How do I size the caching or Hyper-Converged solution to ensure I get the storage performance I need?

Obviously I work for Nutanix, so this question comes from prospective or existing Nutanix customers, but it's also relevant to other products in the market, such as PernixData or any hybrid (SSD+SAS/SATA) solution.

So for indicative sizing (i.e., presales), where definitive information is not available and/or where you cannot conduct a detailed assessment, I use the following simple rule of thumb.

Take your last two monthly full backups, calculate the delta between them, and multiply that by 3.

So if my full backup from August was 10TB and my full backup from September is 11TB, my delta is 1TB. I then multiply that by 3 to get 3TB, which is our assumption of the “Active Working Set”, or in basic terms, the data which needs performance. (Cold or inactive data can sit on any tier without causing performance issues.)

Now I size my SSD tier for 3TB of usable capacity.
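
Here is the same worked example as a quick Python sketch, using the figures above; the multiplier of 3 comes from the read/write assumption explained just below.

```python
# Minimal sketch of the rule of thumb, using the example figures from the post.
# The multiplier of 3 reflects the assumed ~70/30 read/write ratio explained below.

august_full_tb = 10.0
september_full_tb = 11.0
multiplier = 3

delta_tb = september_full_tb - august_full_tb
ssd_tier_usable_tb = delta_tb * multiplier

print(f"Monthly backup delta: {delta_tb:.1f} TB")
print(f"SSD tier to size for: {ssd_tier_usable_tb:.1f} TB usable")
```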

The next question is:

Why multiply the backup data delta by 3?

This is based on an assumption (since we don’t have any hard data to go on) that the Read/Write ratio is 70% Read, 30% write.

Now, those of you familiar with this thing called maths would argue that 70/30 is 2.333, which is true. So rounding up to 3 essentially adds a buffer.

I have found this rule of thumb works very well, and customers I have worked with have effectively had All Flash Array performance because the “Active Working Set” all resides within the SSD tier.

Caveats to this rule of thumb.

1. If a customer does a significant number of deletions during the month, the delta may be smaller, resulting in an undersized SSD tier.

Mitigation: Review several months of full backup logs and average the delta.

2. If the environment’s Read/Write ratio is much more read-heavy than 70/30, then the backup delta multiplied by 3 may again result in an undersized SSD tier.

Mitigation: Perform some investigation into your most critical workloads and validate or correct the multiply-by-3 assumption, as shown in the sketch below.
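
A minimal sketch combining both mitigations: average the delta across several months of full backups, and derive the multiplier from a read/write ratio you have validated rather than the default 70/30 assumption. The backup sizes are hypothetical.

```python
# Minimal sketch combining the two mitigations above: average the delta over several
# months of full backups, and derive the multiplier from a validated read/write ratio.
# Backup sizes below are hypothetical.
import math

monthly_full_backups_tb = [9.2, 10.0, 11.0, 11.7]    # oldest to newest

deltas = [b - a for a, b in zip(monthly_full_backups_tb, monthly_full_backups_tb[1:])]
average_delta_tb = sum(deltas) / len(deltas)

read_pct, write_pct = 70, 30                  # validate against your critical workloads
multiplier = math.ceil(read_pct / write_pct)  # 70/30 = 2.33, rounded up to 3 as a buffer

ssd_tier_usable_tb = average_delta_tb * multiplier
print(f"Average monthly delta: {average_delta_tb:.2f} TB")
print(f"SSD tier to size for:  {ssd_tier_usable_tb:.2f} TB usable")
```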

3. This rule of thumb is for Server workloads, not VDI.

VDI read/write ratios are generally almost the opposite of server workloads, at around 30/70 read/write. However, the SSD tier for VDI should be sized taking into account the benefits of VAAI/VCAI cloning and features like deduplication (for memory and SSD tiers), which some products, like Nutanix, offer.

Summary / Disclaimer

This rule of thumb works for me 90% of the time when designing Nutanix solutions, but your results may vary depending on the platform you use.

I welcome any feedback or suggestions for alternative sizing strategies, and I will update the post where appropriate.