Example Architectural Decision – Storage DRS Configuration for VMFS Datastores in a vCloud Environment

Problem Statement

In a production, self-service vCloud Director environment, what is the most suitable Storage DRS configuration to improve storage utilization and performance while reducing the administrative effort for BAU staff?

Requirements

1. Make the most efficient use of the available storage capacity
2. Maintain a consistent level of storage performance
3. Reduce the risk and overhead of capacity management
4. Reduce the risk of an unintentional (or otherwise) DoS event caused by self-service activity

Assumptions

1. vSphere 5.0 or later
2. VMFS 5 Datastores which are Thick Provisioned
3. Deduplication is not in use
4. VAAI is supported by the array and enabled across the vSphere environment
5. All datastores in each respective datastore cluster reside on the same RAID type with similar spindle types and counts
6. All datastores are presented to all hosts within the cluster
7. Array level snapshots are not in use
8. IBM SVC Storage is being used
9. vCloud Director 5.1 or later
10. Storage I/O Control is enabled and set to 30ms

Constraints

1. IBM SVC storage does not currently support VASA (vSphere APIs for Storage Awareness)

Motivation

1. Ensure production storage performance is not negatively impacted
2. Minimize the vSphere administrators' workload where possible

Architectural Decision

Set the Storage DRS automation level to “Fully Automated”

  • Set “Utilized Space” threshold to 80%
  • Set “I/O latency” to 15ms
  • I/O Metric Inclusion – Enabled

Advanced Options

  • No recommendations until utilization difference between source and destination is: 10%
  • Evaluate I/O load every 8 hours
  • I/O Imbalance threshold: 4
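
The decision and advanced options above can also be applied programmatically. Below is a minimal sketch using pyVmomi (the open-source Python bindings for the vSphere API); the vCenter address, credentials and the datastore cluster name “vCD-DSC-01” are placeholders, and the mapping of the UI’s 1–5 imbalance slider to the API value should be verified against your SDK documentation before use.

```python
# Sketch only: apply the Storage DRS settings above with pyVmomi.
# The vCenter details and the datastore cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ssl_ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl_ctx)
content = si.RetrieveContent()

# Find the datastore cluster (StoragePod) by name
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "vCD-DSC-01")
view.Destroy()

pod_spec = vim.storageDrs.PodConfigSpec(
    enabled=True,
    defaultVmBehavior="automated",              # "Fully Automated"
    ioLoadBalanceEnabled=True,                  # I/O Metric Inclusion - Enabled
    loadBalanceInterval=480,                    # evaluate I/O load every 8 hours (value in minutes)
    spaceLoadBalanceConfig=vim.storageDrs.SpaceLoadBalanceConfig(
        spaceUtilizationThreshold=80,           # "Utilized Space" threshold 80%
        minSpaceUtilizationDifference=10),      # no recommendations until 10% difference
    ioLoadBalanceConfig=vim.storageDrs.IoLoadBalanceConfig(
        ioLatencyThreshold=15,                  # I/O latency threshold 15ms
        ioLoadImbalanceThreshold=4))            # imbalance threshold (confirm UI-to-API scale)

spec = vim.storageDrs.ConfigSpec(podConfigSpec=pod_spec)
content.storageResourceManager.ConfigureStorageDrsForPod_Task(pod=pod, spec=spec, modify=True)
Disconnect(si)
```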

Justification

1. Setting Storage DRS to “Fully Automated” ensures that the administrator does not need to be concerned with the initial placement of virtual machines, as this will be dynamically and intelligently determined and executed

2. “XCOPY” is fully supported for block-based storage; as such, any Storage vMotion activity is offloaded to the array, removing the I/O overhead from the compute and storage fabric.

3. Where a significant I/O imbalance is detected by SDRS, the vSphere administrator is not required to take any action; Storage DRS can automatically identify and remediate issues which fall outside the parameters (determined by the VMware architect). This improves the efficiency of the environment and reduces the involvement of BAU staff.

4. SDRS provides valuable “initial placement” for new virtual machines, which helps avoid a situation where datastores are unevenly balanced from a capacity perspective in the first place, therefore reducing the chance of virtual machines requiring migration.

5. Setting “No recommendations until utilization difference between source and destination is” to 10% ensures that SDRS does not move virtual machines around where significant benefit is not realized. This prevents unnecessary Storage vMotion activity on the disk system; although this is offloaded from the host to the array, the I/O may still impact production performance for workloads on the same disk system.

6. Setting the “I/O Imbalance threshold” (1 = Conservative / 5 = Aggressive) to “4” (the second most aggressive setting) ensures that I/O imbalance is addressed before significant imbalance is experienced by the end users. This level of aggressiveness is acceptable as the Storage vMotion can be offloaded (via the VAAI “XCOPY” primitive) and has almost zero impact on the host. Setting this to “5” may result in minor I/O imbalances being corrected at the cost of a Storage vMotion, and the impact of the more frequent Storage vMotion activity may negate the benefit of the I/O balancing.

7. Storage DRS will address I/O imbalance across the datastore cluster if latency meets or exceeds the set value of 15ms (the default), and in the event of latency increasing during peak times to >=30ms, Storage I/O Control will ensure fair access to the storage.

Alternatives

1. Use “No Automation (Manual Mode)”
2. Not use Storage DRS

Implications

1. When selecting datastores for the datastore cluster, having VASA enabled allows the “System Capability” column to be populated in the “New Datastore Cluster” wizard, ensuring suitable datastores of similar performance, RAID type and features are grouped together. VASA is currently NOT supported by SVC; as such, the datastore naming convention needs to accurately reflect the capabilities of the LUN/s to ensure suitable datastores are grouped together (see the sketch below).
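
With VASA unavailable on SVC, grouping decisions fall back to the naming convention. The convention below (e.g. “SVC-R5-SAS15K-DS01” encoding array, RAID type and spindle type) is purely hypothetical; the snippet simply shows how such names could be parsed and grouped so that only like-for-like datastores end up in the same datastore cluster.

```python
# Hypothetical naming convention: <array>-<RAID>-<disk type>-DS<nn>, e.g. "SVC-R5-SAS15K-DS01"
import re
from collections import defaultdict

PATTERN = re.compile(r"^(?P<array>\w+)-(?P<raid>R\d+)-(?P<disk>\w+)-DS\d+$")

def group_by_capability(datastore_names):
    """Group datastore names by (array, RAID, disk type) so only datastores
    with matching capabilities are candidates for the same datastore cluster."""
    groups = defaultdict(list)
    for name in datastore_names:
        match = PATTERN.match(name)
        if not match:
            raise ValueError(f"{name} does not follow the naming convention")
        groups[(match["array"], match["raid"], match["disk"])].append(name)
    return groups

print(group_by_capability(["SVC-R5-SAS15K-DS01", "SVC-R5-SAS15K-DS02", "SVC-R10-SSD-DS01"]))
```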

Example Architectural Decision – Storage DRS Configuration for NFS Datastores

Problem Statement

In a vSphere environment, a NAS array is presenting Thin Provisioned NFS mounts (Datastores) to the vSphere environment. The storage has deduplication enabled across the datastores being used for the SDRS cluster. What is the most suitable configuration for SDRS to ensure the underlying storage efficiencies are not compromised while maintaining an even distribution of utilized capacity and I/O across all datastores?

Assumptions

1. vSphere 5.0 or later
2. NFS Based storage
3. NFS Mounts (Datastores) are Thin Provisioned
4. Deduplication is enabled on the array
5. VAAI is supported by the array and enabled across the vSphere environment
6. All datastores in a datastore cluster are of the same RAID type and offer similar performance due to having a similar spindle count
7. All datastores are presented to all hosts within the cluster

Motivation

1. Ensure storage efficiencies are not negatively impacted
2. Minimize the vSphere administrators' workload where possible

Architectural Decision

Set the Storage DRS automation level to “No Automation (Manual Mode)”

  • Set “Utilized Space” threshold to 80%
  • Set “I/O latency” to 15ms
  • I/O Metric Inclusion – Enabled

Advanced Options

  • No recommendations until utilization difference between source and destination is: 10%
  • Evaluate I/O load every 8 hours
  • I/O Imbalance threshold: 3

Justification

1. Setting Storage DRS to “No Automation (Manual Mode)” ensures that the administrator can confirm a recommendation will not negatively impact the efficiency of deduplication or the thin provisioned NFS mounts
2. When creating a new virtual machine, in the “Ready to complete” window, tick the “Show all storage recommendations” check box to review Storage DRS recommendations and override them where required
3. Where a VM is deduplicated on the source datastore and is moved to the destination datastore, the write activity is considered new data, which will be scanned by the post-process deduplication job and consume valuable CPU cycles on the array
4. “XCOPY” is not supported for NFS; as such, Storage vMotion activity can only be offloaded to the array using the “Full File Clone” primitive, and only when a virtual machine is powered off
5. Array level snapshots cannot be migrated with the VM using Storage DRS. If virtual machines were moved automatically, the array level snapshot relationship with the VM would be broken and could not be leveraged
6. NFS datastores can be set to auto-grow by a predefined size in the event they reach a predefined utilization threshold
7. Where a significant I/O imbalance is detected by SDRS, the vSphere administrator can consider the impact of the Storage vMotion and, where suitable, apply the SDRS recommendation
8. SDRS still provides valuable “initial placement” for new virtual machines, which helps avoid a situation where datastores are unevenly balanced from a capacity perspective
9. Storage DRS will still analyze I/O, and where an imbalance is identified the vSphere administrator can choose to apply the SDRS recommendation to address it (a sketch of reviewing pending recommendations follows this list)
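
Because the automation level is “No Automation (Manual Mode)”, someone has to review what SDRS is proposing. A minimal pyVmomi sketch of that review step is shown below; the datastore cluster name “NFS-DSC-01” and the vCenter details are placeholders, and the recommendation properties should be verified against your vSphere SDK version.

```python
# Sketch only: list pending Storage DRS recommendations for manual review.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.StoragePod], True)
pod = next(p for p in view.view if p.name == "NFS-DSC-01")
view.Destroy()

# Pending recommendations are exposed on the pod's Storage DRS runtime entry
for rec in pod.podStorageDrsEntry.recommendation:
    print(rec.key, rec.reason, rec.reasonText)

# After review, an individual recommendation can be applied by its key, e.g.:
# content.storageResourceManager.ApplyStorageDrsRecommendation_Task(key=[rec.key])
Disconnect(si)
```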

Implications

1. When selecting datastores for the datastore cluster, having VASA enabled allows the “System Capability” column to be populated in the “New Datastore Cluster” wizard to ensure suitable datastores of similar performance, RAID type and features are grouped together
2. A vSphere administrator will need to review SDRS recommendations

Alternatives

1. Use “Fully Automated”

Example Architectural Decision – HA Admission Control Policy with Software Licensing Constraints

High Availability Admission Control Setting & Policy with a Software Licensing Constraint

Problem Statement

The customer has a requirement to virtualize “Application X” which is currently running on physical servers. The customer is licensed for a maximum of 32 cores and the software vendor has strict licensing restrictions which do not recognize the use of DRS rules to restrict virtual machines to a sub-set of hosts within a cluster.

The application is Tier 1 and requires maximum availability. A capacity planner assessment has been conducted and found that 32 cores and 256GB RAM are sufficient to run all servers.

The servers' requirements vary greatly, from 1 vCPU/2GB RAM to 8 vCPU/64GB RAM, with the bulk of the VMs having 2 vCPUs or fewer with varying RAM sizes.

What is the most suitable hardware configuration and HA admission control policy/setting that complies with the licensing restrictions while ensuring N+1 redundancy and minimizing the chance of poor application performance?

Assumptions

1. None

Constraints

1. Software vendor has strict licensing requirements
2. Only 32 cores are licensed and the customer has no budget for further licenses
3. DRS rules cannot be used to isolate VMs onto one or more hosts due to software licensing agreement

Motivation

1. Ensure maximum availability for the Tier 1 application/s
2. Ensure optimal performance for Tier 1 application/s

Architectural Decision

Purchase a total of three (3) x two (2) way servers, each with 8 core CPUs and 128GB RAM, and form a cluster of three nodes.

For the HA admission control setting, use “Enable – Do not power on virtual machines that violate availability constraints”

For the HA admission control policy, use “Specify a Failover Host” and select the third host in the cluster (leaving two active hosts in the cluster).
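
For reference, the decision above can be applied through the vSphere API as well as the client. The following is a minimal pyVmomi sketch; the cluster name “AppX-Cluster”, the failover host name “esxi03.example.com” and the vCenter details are placeholders, not values from this design.

```python
# Sketch only: enable HA admission control with a designated failover host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "AppX-Cluster")
view.Destroy()

failover_host = next(h for h in cluster.host if h.name == "esxi03.example.com")

das_config = vim.cluster.DasConfigInfo(
    enabled=True,
    admissionControlEnabled=True,  # "Do not power on VMs that violate availability constraints"
    admissionControlPolicy=vim.cluster.FailoverHostAdmissionControlPolicy(
        failoverHosts=[failover_host]))  # "Specify a Failover Host" - the third node

cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(dasConfig=das_config), modify=True)
Disconnect(si)
```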

Justification

1. Enabling strict admission control is critical to ensure the required level of availability for the Tier 1 application
2. Ensure maximum CPU scheduling efficiency by having two hosts active within the cluster running virtual machines, as opposed to a single large host
3. Having two active hosts in the cluster allows DRS some flexibility to load balance and resolve contention, compared to using a single large 32 core host
4. N+1 redundancy is achieved as one host can fail and the “fail-over” host will become active and be able to take the failed host's workloads without performance degradation
5. As only 32 cores (2 servers with 16 cores each) are active at any one time, the solution complies with the licensing constraint
6. Using CPUs with smaller numbers of cores (such as 5 x 2 way servers with 4 cores per socket) would result in larger VMs not fitting within NUMA nodes and potentially impacting memory performance, although with vNUMA in vSphere 5.0 this would be less of an issue
7. All VMs will fit within a NUMA node, thus giving the VMs maximum performance without requiring vNUMA, which is only available in vSphere 5.0 or later
8. The compute resources supplied by the proposed cluster are sufficient to run the workloads as per the capacity planner assessment (see the sizing check below).
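
As a sanity check on justification points 5-8, the arithmetic below (figures taken from the decision; the script itself is illustrative only) confirms the licensing, N+1 capacity and NUMA-node fit claims.

```python
# Sizing check using the figures from the decision above.
hosts = 3
cores_per_host = 2 * 8          # two sockets x 8 cores each
ram_per_host_gb = 128
active_hosts = hosts - 1        # one host is the designated failover host

licensed_cores = 32
required_cores, required_ram_gb = 32, 256   # capacity planner assessment
largest_vm_vcpu, largest_vm_ram_gb = 8, 64  # largest VM in the requirements

assert active_hosts * cores_per_host <= licensed_cores     # licensing constraint respected
assert active_hosts * cores_per_host >= required_cores     # CPU capacity with N+1
assert active_hosts * ram_per_host_gb >= required_ram_gb   # RAM capacity with N+1
# Largest VM fits within a single NUMA node (one socket: 8 cores, ~64GB RAM)
assert largest_vm_vcpu <= 8 and largest_vm_ram_gb <= ram_per_host_gb // 2
print("Sizing satisfies the licensing constraint, N+1 capacity and NUMA-node fit")
```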

Implications

1. Additional networking and storage ports are required for three hosts as opposed to a two host cluster
2. If additional compute is required in the cluster, additional software licenses would need to be purchased. Alternatively, if the application servers were redesigned to use a scale-out methodology (especially for VMs with 4-8 vCPUs), it would likely result in higher overcommitment ratios without significant contention and better utilization of the existing licensed cores
3. One host sits as a hot standby not servicing customer workloads and may be considered “waste”

Alternatives

1. Use 2 x 4 way, 8 core ESXi hosts (32 cores per host) and set HA admission control to specify a failover host
2. Use 5 x 2 way, 4 core ESXi hosts (8 cores per host) and set HA admission control to specify a failover host

Below is a basic diagram of the proposed solution.

[Diagram: three-node cluster with the third host designated as the HA failover host]

*Post updated February 11th to correct an error.