Example Architectural Decision – vSphere Path Selection Plugin (PSP) for IBM SVC Storage

Problem Statement

What is the most suitable multipathing policy when using IBM SVC storage?

Requirements

1. Ensure maximum performance and availability for vSphere storage
2. Ensure storage performance is as consistent as possible

Assumptions

1. IBM SVC storage, which is Active/Active
2. VAAI is supported and enabled

Constraints

1. Solution must be supported

Motivation

1. Ensure optimal performance and redundancy
2. Minimize Latency

Architectural Decision

Use vSphere Native Multipathing Plugin (NMP) and configure “VMW_PSP_RR” (Round Robin) as the path selection policy.

Set the default PSP to “VMW_PSP_RR” (Round Robin) for the SATP VMW_SATP_SVC so that all new LUNs automatically use Round Robin.
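As a minimal sketch (assuming ESXi 5.x esxcli syntax; on ESXi 4.1 the older “esxcli nmp satp setdefaultpsp” form applies, and the device identifier below is a placeholder), this can be applied from the ESXi shell:

    # Make Round Robin the default PSP for the IBM SVC SATP,
    # so newly presented SVC LUNs claim VMW_PSP_RR automatically
    esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_SVC

    # The default only applies as LUNs are claimed; LUNs already
    # presented can be set individually (placeholder device shown)
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

    # Verify which PSP each device is using
    esxcli storage nmp device list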

Justification

1. Round Robin helps ensure minimum average latency to the storage by using all available paths
2. Ensure performance is not degraded for some/all virtual machines due to a single HBA or connection being heavily utilized
3. Using “VMW_PSP_FIXED” requires paths to be manually load balanced to avoid thrashing a single path
4. Using “VMW_PSP_MRU” or “VMW_PSP_FIXED” may lead to inconsistent performance across LUNs due to some paths being more heavily used than others
5. There is no MPP currently supplied by IBM for SVC storage
6. Round Robin is a supported configuration (note: although it is not specifically listed in the Compatibility Matrix)

Alternatives

1. Use “VMW_PSP_FIXED” (Default) – Fixed Pathing
2. Use “VMW_PSP_MRU” – Most Recently Used
3. Use a vendor-supplied Multipathing Plugin (MPP)

Implications

1. None


Example Architectural Decision – Datastore (LUN) and Virtual Disk Provisioning

Problem Statement

In a vSphere environment, what is the most suitable disk provisioning type to use for LUNs and virtual machines to ensure minimum storage overhead and optimal performance?

Requirements

1. Ensure optimal storage capacity utilization
2. Ensure storage performance is both consistent & maximized

Assumptions

1. vSphere 4.1 or later
2. VAAI is supported and enabled
3. Array-level data replication is being used throughout the environment
4. Monitoring of the environment (including vSphere and storage) is a manual process
5. The lead time to order new hardware (e.g. new disk shelves) is a minimum of 3 months

Constraints

1. Block based storage

Motivation

1. Increase flexibility
2. Ensure physical disk space is not unnecessarily wasted

Architectural Decision

“Thick Provision” the LUN at the Storage layer and “Thin Provision” the virtual machines at the VMware layer
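As an illustrative sketch (the datastore, VM and size below are hypothetical values), a thin provisioned virtual disk can be created from the ESXi shell with vmkfstools; the same option is exposed in the vSphere Client as the “Thin Provision” disk format:

    # Create a 40 GB thin provisioned virtual disk (example path and size)
    vmkfstools -c 40G -d thin /vmfs/volumes/Datastore01/VM01/VM01.vmdk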

Justification

1. Simplified capacity management as only one layer (vSphere layer) needs to be monitored for capacity
2. The free space shown by vSphere is actual usable storage
3. Reduces the chance of an “Out of Space” condition
4. Increases flexibility as all unused capacity of all datastores remains available
5. Creating VMs with “Thick Provisioned – Eager Zeroed” disks would increase the provisioning time
6. Creating VMs as “Thick Provisioned” (Eager or Lazy Zeroed) does not provide any significant benefit but adds a serious capacity penalty
7. Using Thin Provisioned virtual machines minimizes storage replication traffic on creation of virtual machines
8. Using Thick Provisioned LUNs reduces the requirement for fast turn around times for purchasing additional capacity
9. Monitoring is essential to successfully and safely run “Thin on Thin”, and monitoring in this environment is a manual process (see Assumption 4)

Alternatives

1. Thin provision the LUN and thick provision virtual machine disks (VMDKs)
2. Thick provision the LUN and thick provision virtual machine disks (VMDKs)
3. Thin provision the LUN and thin provision virtual machine disks (VMDKs)

Implications

1. No storage over commitment can occur on the physical array
2. Storage consumption will be reported differently to the vSphere Administrator and the Storage Administrator: the vSphere Administrator will see the true utilization, whereas the SAN Administrator will see the “Consumed” & “Provisioned” values as the same
3. It is possible for a datastore to become overcommitted; if it is not monitored, the datastore may run out of free space, resulting in an outage
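Given the assumption that monitoring is a manual process, a minimal check of datastore capacity and free space can be run from the ESXi shell (assuming ESXi 5.x syntax):

    # List datastores with total size and free space to catch over-commitment early
    esxcli storage filesystem list

Configuring vCenter’s built-in “Datastore usage on disk” alarm is another way to reduce the risk described in implication 3.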

Related Articles

1. Datastore (LUN) and Virtual Disk Provisioning (Thin on Thin)


VMware View 5.1 Reference Architecture on NetApp Storage (NFS)

I just saw this reference architecture from VMware and NetApp, and wanted to share it with you.

It gives excellent examples of the benefits of using VCAI (View Composer Array Integration) and View Storage Accelerator (VSA).

Anyone deploying a significant number of VMware View virtual desktops should review this reference architecture.

VMware View 5.1 Reference Architecture on NetApp Storage