Example Architectural Decision – Datastore (LUN) and Virtual Disk Provisioning (Thin on Thin)

Problem Statement

In a vSphere environment, what is the most suitable disk provisioning type to use for the LUN and the virtual machines to ensure minimal storage overhead and optimal performance?

Requirements

1. Ensure optimal storage capacity utilization
2. Ensure storage performance is both consistent & maximized

Assumptions

1. vSphere 5.0 or later
2. VAAI is supported and enabled
3. The lead time to order new hardware (e.g. new disk shelves) is <= 4 weeks
4. The storage solution has tools for fast/easy capacity management

Constraints

1. Block Based Storage

Motivation

1. Increase flexibility
2. Ensure physical disk space is not unnecessarily wasted

Architectural Decision

“Thin Provision” the LUN at the Storage layer and “Thin Provision” the virtual machines at the VMware layer

(Optional) Do not present more LUN capacity than the underlying physical storage provides, so that over-commitment occurs only at the vSphere layer
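To make the VMware-layer half of this decision concrete, below is a minimal sketch (assuming Python with the pyVmomi library and an existing connection to vCenter) of how a new VMDK can be created as thin provisioned. The controller key and unit number shown in the usage comment are placeholders you would look up from the VM's existing configuration.

# Minimal pyVmomi sketch: build a device spec that adds a thin provisioned
# VMDK to an existing VM. The 'vm' object, controller key and unit number
# are assumed to come from your own inventory lookup.
from pyVmomi import vim

def add_thin_disk_spec(size_gb, controller_key, unit_number):
    """Return a VirtualDeviceSpec for a new thin provisioned disk."""
    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = "persistent"
    backing.thinProvisioned = True  # thin at the VMware layer

    disk = vim.vm.device.VirtualDisk()
    disk.backing = backing
    disk.capacityInKB = size_gb * 1024 * 1024
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = "create"
    spec.device = disk
    return spec

# Example usage (vm is a vim.VirtualMachine already retrieved; 1000 is a
# typical key for the first SCSI controller, shown here only as a placeholder):
# config = vim.vm.ConfigSpec(deviceChange=[add_thin_disk_spec(40, 1000, 1)])
# vm.ReconfigVM_Task(spec=config)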

Justification

1. Capacity can be easily managed using storage vendor tools, e.g. NetApp VSC, EMC VSI or Nutanix Command Center
2. Thin provisioning minimizes the impact of situations where customers demand a large amount of disk space up front but only end up using a small portion of it
3. Increases flexibility as all unused capacity of all datastores and the underlying physical storage remains available
4. Creating VMs with “Thick Provisioned – Eager Zeroed” disks would unnecessarily increase the provisioning time for new VMs
5. Creating VMs as “Thick Provisioned” (Eager or Lazy Zeroed) does not provide any significant benefit (i.e. performance) but adds a serious capacity penalty
6. Using Thin Provisioned LUNs increases the flexibility at the storage layer
7. VAAI automatically raises an alarm in vSphere when a thin provisioned datastore reaches >= 75% of its capacity (a simple monitoring sketch along the same lines follows this list)
8. SCSI reservations no longer cause performance issues (increased latency) when thin provisioned virtual machine disks (VMDKs) grow, as the VAAI Atomic Test & Set (ATS) primitive alleviates SCSI reservation contention
9. Thin provisioned VMs reduce the overhead of Storage vMotion, cloning and snapshot activities. For example, Storage vMotion no longer needs to relocate white space (nor does the array, when the copy is offloaded via the VAAI XCOPY primitive)
10. Thin provisioning leaves the maximum available free space on the physical spindles, which should improve the performance of the storage as a whole
11. Where there is a real or perceived performance issue, any VM can be non-disruptively converted to thick provisioned disks using Storage vMotion
12. Using Thin Provisioned LUNs with no actual over-commitment at the storage layer reduces the risk of out-of-space conditions while maintaining flexibility and efficiency, and significantly reduces the dependency on (and risk associated with) monitoring
13. The VAAI UNMAP primitive provides automated space reclamation to reduce wasted space from files or VMs being deleted
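As a rough illustration of justification 7, the following sketch (assuming Python with pyVmomi; the hostname and credentials are placeholders) walks all datastores visible to vCenter and flags any at or above the same 75% used threshold the VAAI thin provisioning alarm uses. It is a starting point only, not a replacement for the vendor tools mentioned above.

# Simple datastore usage report against a 75% used threshold.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def report_datastore_usage(host, user, pwd, threshold=0.75):
    context = ssl._create_unverified_context()  # lab use only; validate certs in production
    si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            used = ds.summary.capacity - ds.summary.freeSpace
            pct = used / float(ds.summary.capacity)
            flag = "WARNING" if pct >= threshold else "ok"
            print("%-40s %5.1f%% used [%s]" % (ds.summary.name, pct * 100, flag))
        view.Destroy()
    finally:
        Disconnect(si)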

Alternatives

1.  Thin Provision the LUN and thick provision virtual machine disks (VMDKs)
2.  Thick provision the LUN and thick provision virtual machine disks (VMDKs)
3.  Thick provision the LUN and thin provision virtual machine disks (VMDKs)

Implications

1. If storage at both the vSphere and array levels is not properly monitored, out-of-space conditions may occur, leading to downtime for VMs that require additional disk space. VMs that do not require additional disk space can continue to operate even when there is no free space available on the datastore
2. The storage may need to be monitored in multiple locations, increasing business-as-usual (BAU) effort
3. It is possible for the vSphere layer to report sufficient free space while the underlying physical capacity is close to or entirely consumed (see the over-commitment sketch after this list)
4. When migrating VMs from one thin provisioned datastore to another (i.e. via Storage vMotion), additional space is consumed on the destination datastore (and underlying storage), while the source thin provisioned datastore remains inflated even after the Storage vMotion completes successfully
5. While the VAAI UNMAP primitive provides automated space reclamation, this is a post-process; sufficient available capacity must still be maintained for VMs to grow before UNMAP reclaims the dead space
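To illustrate implication 3, here is a short sketch of how the vSphere-layer over-commitment of a datastore could be calculated from figures vCenter already tracks (assuming pyVmomi and a vim.Datastore object retrieved as in the earlier monitoring sketch).

def overcommit_ratio(ds):
    """Return the provisioned-to-capacity ratio for a vim.Datastore.

    'uncommitted' is the space thin provisioned disks could still grow into.
    A ratio above 1.0 means more space has been promised to VMs than the
    datastore can physically hold if every thin disk inflates fully.
    """
    summary = ds.summary
    used = summary.capacity - summary.freeSpace
    provisioned = used + (summary.uncommitted or 0)
    return provisioned / float(summary.capacity)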

Related Articles

1. Datastore (LUN) and Virtual Disk Provisioning (Thin on Thick)

 

Example Architectural Decision – Single Sign On Configuration for Single Site w/ Multiple vCenter Servers

Problem Statement

What is the most suitable deployment mode for vCenter Single Sign-On (SSO) in an environment where there is a single physical datacenter with multiple vCenter servers?

Requirements

1. The solution must be a fully supported configuration
2. Meet/Exceed RTO of 4 hours
3. Support Single Pane of glass management
4. Ability to scale for future vCenters and/or datacenters

Assumptions

1. All vCenter instances can access the same Authentication source (Active Directory or OpenLDAP)

2. The average number of authentications per second for each SSO instance is < 30 (the configuration maximum)

Constraints

1. vCenter servers reside in different network security zones within the datacenter

Motivation

1. Future proof the environment

Architectural Decision

1. Use “Multi-site” SSO deployment mode

2. Use one SSO instance per vCenter

3. Each SSO instance will reside with the vCenter on a Windows 2008 x64 R2 virtual machine in a vSphere cluster with HA enabled

4. Each SSO instance will use the bundled SQL database

5. (Optional) For greater availability, vCenter Heartbeat can be used to protect each SSO instance along with vCenter and the bundled SSO database

6. The virtual machine hosting vCenter/SSO will be configured with 2 vCPUs and 10GB RAM to support vCenter, SSO and the Inventory Service, plus an additional 2GB RAM to support the bundled SSO database (a simple sizing-check sketch follows this list)

7. Using the bundled SSO database ensures only a single vCenter Heartbeat deployment is required to protect each vCenter/SSO instance and reduces Windows licensing costs
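As a simple sanity check of decision 6, a hypothetical pyVmomi helper (the function name and expected values below are illustrative assumptions, apart from the 2 vCPU / 10GB + 2GB figures taken from the decision) could confirm the vCenter/SSO virtual machine matches the intended sizing.

def check_vcenter_vm_sizing(vm, expected_vcpus=2, expected_memory_mb=12 * 1024):
    """Check a vim.VirtualMachine against the 2 vCPU / 12GB RAM decision."""
    hardware = vm.config.hardware
    ok = (hardware.numCPU == expected_vcpus and
          hardware.memoryMB == expected_memory_mb)
    print("%s: %d vCPU / %d MB RAM -> %s" % (
        vm.name, hardware.numCPU, hardware.memoryMB,
        "matches decision" if ok else "does not match decision"))
    return ok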

Justification

1. This simplifies the maintenance/upgrade process for vCenter/SSO, as different versions of vCenter cannot co-exist with the same SSO instance

2. If “High Availability” mode is used it would prevent single pane of glass management

3. “High Availability” mode currently requires an SSL load balancer to be configured as well as manual intervention which can be complicated and problematic to implement and support

4. “Basic” mode prevents the use of Linked Mode which will prevent the management of the environment being single pane of glass

5. Where vCenter servers reside in different network security zones, using Multi-site mode allows each SSO instance to use authentication sources that are as logically close as possible while still supporting single pane of glass management. This should provide faster access to authentication services, as each SSO instance is configured with Active Directory servers located in the same or logically closest network security zone/s.

6. If one SSO instance goes offline for any reason, it will only impact a single vCenter server; it will not prevent authentication to the other vCenter servers.

7. Reduces Microsoft Windows 2008 licensing costs by combining the SSO and vCenter roles onto a single OS instance

Alternatives

1. Use “Basic” Mode, resulting in a standalone version of SSO for each vCenter server with no single pane of glass management

2. Use “High Availability” mode per vCenter

3. Use a shared “High Availability” mode for all vCenters in the datacenter

4. In any SSO configuration, host the SSO database (per vCenter) on an Oracle or SQL Server database server

5. Run SSO on a dedicated Windows 2008 instance with or without the SSO database locally

6. Run a single SSO instance in “Multi-Site” mode, use vCenter Heartbeat to protect SSO (including the database) and share the SSO instance with all vCenters

Implications

1. Where SSO is not protected by vCenter Heartbeat (optional), SSO for each vCenter is a single point of failure; if it fails, authentication to the affected vCenter will fail

2. “Multi-Site” mode requires the installable (Windows-only) version of SSO, which prevents the use of the vCenter Server Appliance (VCSA) as the appliance only supports Basic mode.

Related Articles

1. vSphere 5.1 Single Sign On (SSO) deployment mode across Active/Active Datacenters

2. vSphere 5.1 Single Sign On (SSO) Architectural Decision Flowchart

3. Disabling Single Sign On – Don’t Do It! – Michael Webster (VCDX#66) @vcdxnz001


 

 

Storage DRS Configuration – Architectural Decision making flowchart

I was speaking to a number of people recently who were trying to come up with a one-size-fits-all Storage DRS configuration for a reference architecture document.

As Storage DRS is a reasonably complicated feature, it was my opinion that a one-size-fits-all configuration would not be suitable, and that multiple examples should be provided when writing a reference architecture.

A colleague suggested a flowchart would assist in making the right decision around Storage DRS, so I took up the challenge to put one together.

Below is version 0.1 of the flowchart, which I am posting in the hope of getting some good feedback from the community, and to provide a guide for those who may not have the in-depth knowledge or experience to choose what should, in most cases, be an appropriate configuration for SDRS.

This also complements some of my previous example architectural decisions, which are shown in the Related Articles section below.

As always, feedback is welcome.

I hope you find this helpful.

* Updated to include the previously missing “NO” option for Data replication.

SDRS flowchart V0.2

Related Articles

1. Example Architectural Decision – Storage DRS configuration for NFS datastores

2. Example Architectural Decision – Storage DRS configuration for VMFS datastores