Example Architectural Decision – vSphere Path Selection Plugin (PSP) for IBM SVC Storage

Problem Statement

What is the most suitable multipathing policy when using IBM SVC storage?

Requirements

1. Ensure maximum performance and availability for vSphere storage
2. Ensure storage performance is as consistent as possible

Assumptions

1. IBM SVC Storage which is Active/Active
2. VAAI is supported and enabled

Constraints

1. Solution must be supported

Motivation

1. Ensure optimal performance and redundancy
2. Minimize Latency

Architectural Decision

Use vSphere Native Multipathing Plugin (NMP) and configure “VMW_PSP_RR” (Round Robin) as the path selection policy.

Set the default PSP for the SATP “VMW_SATP_SVC” to “VMW_PSP_RR” (Round Robin) so that all new LUNs automatically use Round Robin.
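One way this decision could be applied is from the ESXi shell. The following is a minimal sketch only, assuming the host's bundled Python interpreter and esxcli are available; “naa.xxxx” is a placeholder for the identifier of an actual SVC LUN and is not part of the decision itself.

import subprocess

def esxcli(*args):
    # Run an esxcli command on the local ESXi host and return its output.
    return subprocess.check_output(["esxcli"] + list(args))

# Make Round Robin the default PSP for the SVC SATP so that newly presented
# LUNs claimed by VMW_SATP_SVC automatically use VMW_PSP_RR.
esxcli("storage", "nmp", "satp", "set",
       "--satp", "VMW_SATP_SVC", "--default-psp", "VMW_PSP_RR")

# Existing LUNs keep their current PSP; switch them explicitly if required.
# "naa.xxxx" is a placeholder for the identifier of an existing SVC device.
esxcli("storage", "nmp", "device", "set",
       "--device", "naa.xxxx", "--psp", "VMW_PSP_RR")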

Justification

1. Round Robin helps ensure minimum average latency to the storage by using all available paths
2. Ensures performance is not degraded for some or all virtual machines due to a single HBA or connection being heavily utilized
3. Using “VMW_PSP_FIXED” requires the paths to be manually load balanced to avoid thrashing a single path
4. Using “VMW_PSP_MRU” or “VMW_PSP_FIXED” may lead to inconsistent performance across the LUNs due to some paths being more heavily used than others
5. There is no MPP currently supplied by IBM for SVC storage
6. Round Robin is a supported configuration (Note: although not specifically listed in the Compatibility Matrix); the PSP in use per device can be verified as sketched below
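To verify the policy in use, the PSP reported for each device can be checked under the same assumptions as the sketch above:

import subprocess

# Print each device identifier together with the Path Selection Policy it is
# currently using; SVC LUNs should report VMW_PSP_RR once the change applies.
output = subprocess.check_output(
    ["esxcli", "storage", "nmp", "device", "list"]).decode()
for line in output.splitlines():
    if line.startswith("naa.") or "Path Selection Policy:" in line:
        print(line.strip())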

Alternatives

1. Use “VMW_PSP_FIXED” (Default) – Fixed Pathing
2. Use “VMW_PSP_MRU”  – Most Recently Used
3. Use vendor supplied Multipathing Plugin

Implications

1. None


Example Architectural Decision – Guest OS Page File Storage in vSphere

Problem Statement

In a vSphere environment using deduplication and an array snapshot based backup solution, Guest OS page files are currently stored on the OS drive (VMDK), which reduces the effectiveness of deduplication and places an overhead on the storage controllers, which must scan data that cannot be deduplicated.

As the Guest OS paging files are included in the snapshot process (along with the guest OS), this also demands additional capacity on both primary and secondary disk storage for disk to disk backups.

How can this overhead be minimized or eliminated?

Requirements

1. Make the most efficient use of the available storage capacity
2. Maintain a consistent level of virtual machine / storage performance
3. Minimize the storage required for primary and secondary snapshot based backups
4. Maintain the array level snapshot based backup solution as it is required to meet RPO/RTOs
5. Maintain the use of deduplication, as this has proven to decrease storage requirements and improve performance

Assumptions

1. vSphere 5.0 or later
2. VMFS 5 Datastores which are Thin Provisioned
3. Deduplication is in use for Volumes where Guest OS virtual disks are stored
4. VAAI is supported by the array and enabled across the vSphere environment
5. All datastores are presented to all hosts within the cluster
6. Snapshot based backup solution is being used
7. Virtual Machines are right sized
8. Disk to disk backup data is replicated offsite

Constraints

1. None

Motivation

1. Optimize the storage performance
2. Ensure Tier 1 storage is not wasted with transient files
3. Minimize storage required for snapshot based backups

Architectural Decision

Separate OS page files onto a dedicated VMDK (one way of attaching such a VMDK is sketched after this list), which will be located on a datastore (or datastore cluster) which is:
1. Not protected by the array level snapshot backup solution
2. Not running deduplication
3. Not running data compression
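The sketch below illustrates how the dedicated page file VMDK could be attached using pyVmomi. It is illustrative only: the vm and pagefile_ds objects (an existing vim.VirtualMachine and the vim.Datastore backing the page file datastore), the add_pagefile_vmdk function name and the assumption that SCSI unit number 1 is free are examples for this sketch, not part of the decision.

from pyVmomi import vim

def add_pagefile_vmdk(vm, pagefile_ds, size_gb):
    # Locate the VM's first SCSI controller; the new disk is attached to it.
    controller = next(dev for dev in vm.config.hardware.device
                      if isinstance(dev, vim.vm.device.VirtualSCSIController))

    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey = controller.key
    disk.unitNumber = 1                    # assumes unit 1 is free on the controller
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.diskMode = "persistent"
    disk.backing.thinProvisioned = True
    disk.backing.datastore = pagefile_ds   # place the VMDK on the dedicated page file datastore

    disk_spec = vim.vm.device.VirtualDeviceSpec()
    disk_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    disk_spec.fileOperation = "create"     # create a new VMDK rather than attach an existing one
    disk_spec.device = disk

    spec = vim.vm.ConfigSpec(deviceChange=[disk_spec])
    return vm.ReconfigVM_Task(spec=spec)   # returns a vCenter task; wait on it as needed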

Justification

1. Allows page files to be stored on different underlying storage including (optionally) high capacity, lower cost, SATA disk
2. Relocating Guest OS page files to another datastore (or datastore cluster) not protected by snapshots dramatically reduces the amount of data being protected by the snapshot based backup solution
3. Reduces the amount of data being replicated to secondary disk backup location/s thus minimizing the bandwidth requirements between datacenters
4. (Optionally) Ensures Tier 1 storage is only used for high performance guests
5. As a result of the Virtual Machines being right sized, the performance impact and frequency of paging should be minimal
6. Reduces the CPU cycles required for deduplication as data which cannot be deduplicated will not be scanned
7. Reduces the CPU cycles on the storage controllers by not attempting to compress page file data

Alternatives

1. Leave page files within the virtual machine’s primary VMDK and accept the overhead on the backup solution
2. Turn off paging within the Guest OS (no page file)

Implications

1. Additional steps are required to create a dedicated VMDK for each VM and to configure the Guest OS to use the alternate page file location (one approach for a Windows guest is sketched below this list)
2. Templates need to be updated to reflect the above configuration
3. For environments using Site Recovery Manager, some manual steps are required when setting up protected virtual machines for the first time. This increases the work required during setup; however, as this is a one time overhead, the benefit of reduced backup storage and replication traffic (for SRM) is believed to outweigh it
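For Windows guests, one possible way to point the Guest OS at the new location is via the PagingFiles registry value. The sketch below assumes Python (with the standard winreg module) running inside the guest with administrative rights, that the dedicated VMDK is formatted and mounted as drive P:, and that a fixed 4096 MB page file is desired; the drive letter and sizes are placeholders, not part of the decision.

import winreg

MM_KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MM_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # Each REG_MULTI_SZ entry has the form "<path> <initial MB> <maximum MB>".
    # The drive letter and sizes below are example values only.
    winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ,
                      [r"P:\pagefile.sys 4096 4096"])

# A reboot is required before the guest starts using the new page file location.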


Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage

Problem Statement

With 10Gb connections, the proposed ESXi hosts will have fewer physical connections, but more bandwidth per connection, than a host with 1Gb NICs. In this case, 4 x 10Gb NICs need to cater for all traffic (including IP storage) for the ESXi hosts.

The design needs to ensure all types of traffic have sufficient burst and sustained bandwidth without negatively impacting other types of traffic.

How can this be achieved?

Assumptions

1. No additional network cards (1Gb or 10Gb) can be supported
2. vSphere 5.0 or later
3. 2 x 48 port 10Gb and 2 x 48 port 1Gb switches exist in the environment
4. ESXi hosts are 4-way servers with 512GB RAM which are expected to run large numbers of VMs with varying workloads
5. Multi-NIC vMotion is not required due to using 10Gb NICs

Motivation

1. When using bandwidth allocation, use “shares” instead of “limits”, as shares provide greater flexibility for redistributing unused capacity.
2. Ensure IP Storage (NFS) performance is optimal
3. Ensure vMotion activities (including a host entering maintenance mode) can be performed in a timely manner without impacting IP Storage or Fault Tolerance traffic
4. Fault Tolerance is a latency-sensitive traffic flow, so it is recommended to always set the corresponding resource pool shares to a reasonably high relative value when custom shares are used.

Architectural Decision

Separate VMware infrastructure functions (VMkernel) from virtual machine network traffic by creating two (2) dvSwitches (each with 2 x 10Gb connections): dvSwitch-Admin and dvSwitch-Data.

Enable Network I/O Control, and configure NFS and/or iSCSI traffic with a share value of 100, and vMotion and FT traffic with a share value of 25.

Configure the two (2) VMkernel ports for IP Storage on dvSwitch-Admin, with each set to be Active on one 10Gb interface and Standby on the second.

Configure the VMkernel port for vMotion on dvSwitch-Admin as Active on one interface and Standby on the second, and vice versa for FT.

Configure all dvPortGroups for Virtual Machine data on dvSwitch-Data.

Justification

1. The share values were chosen to ensure storage traffic is not impacted, as this can have flow-on effects for the performance of the whole environment. vMotion and FT are considered important, but during periods of contention they should not monopolize or impact IP storage traffic.
2. IP Storage is more critical to ongoing cluster and VM performance than vMotion or FT
3. IP storage requires a higher priority than vMotion, which is more of a burst activity and is not as critical to VM performance
4. With a share value of 25, Fault Tolerance still has ample bandwidth to support the maximum of 4 FT protected machines per host, even during periods of contention (a worst case calculation is sketched after this list)
5. With a share value of 25, vMotion still has ample bandwidth to support multiple concurrent vMotions during contention, and day to day performance should not be impacted. Up to 8 concurrent vMotions are supported on a 10Gb interface (compared with a limit of 4 on a 1Gb interface)
6. The environment required 1Gb switches to accommodate various devices, such as Out of Band management and IP KVM devices; as such, having ESXi management on 2 x 1Gb ports did not add significant cost to the solution
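The worst case allocation implied by these share values can be illustrated with a simple calculation. This is a simplified sketch: it assumes only the three traffic types named above are actively contending on a saturated 10Gb uplink and ignores the default share values of other system traffic types, which would reduce the figures slightly.

UPLINK_GBPS = 10
shares = {"NFS": 100, "vMotion": 25, "FT": 25}

total_shares = sum(shares.values())
for traffic, share in shares.items():
    # Under full contention each traffic type is entitled to bandwidth in
    # proportion to its share of the total shares active on the uplink.
    minimum_gbps = UPLINK_GBPS * share / float(total_shares)
    print("%-8s ~%.1f Gb/s minimum under full contention" % (traffic, minimum_gbps))

# Result: NFS ~6.7 Gb/s, vMotion ~1.7 Gb/s, FT ~1.7 Gb/s, consistent with the
# justification that FT and vMotion retain ample bandwidth while IP storage
# traffic remains protected.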

Implications

1. In the unlikely event of significant and ongoing contention, reduced vMotion and FT performance may affect the ability to evacuate a host in a timely manner. This may impact the ability to perform scheduled maintenance.

Alternatives

1. Use all 4 x 10Gb NICs on a single dvSwitch, and use “Active” and “Standby” settings to ensure traffic remains on a specified NIC unless there is a failure. Leverage Network I/O Control, similar to the above example, to minimize the impact of contention

See Example VMware vNetworking Design for IP Storage for an overview of the vNetworking design described in this example.