Example Architectural Decision – Datastore (LUN) and Virtual Disk Provisioning

Problem Statement

In a vSphere environment, what is the most suitable disk provisioning type to use for the LUN and the virtual machines to ensure minimum storage overhead and optimal performance?

Requirements

1. Ensure optimal storage capacity utilization
2. Ensure storage performance is both consistent & maximized

Assumptions

1. vSphere 4.1 or later
2. VAAI is supported and enabled
3. Array level data replication is being used throughout the environment
4. Monitoring of the environment (including vSphere and Storage) is a manual process
5. The lead time to order new hardware (e.g. new disk shelves) is a minimum of 3 months

Constraints

1. Block based storage

Motivation

1. Increase flexibility
2. Ensure physical disk space is not unnecessarily wasted

Architectural Decision

“Thick Provision” the LUN at the Storage layer and “Thin Provision” the virtual machines at the VMware layer

Justification

1. Simplified capacity management as only one layer (vSphere layer) needs to be monitored for capacity
2. The free space shown by vSphere is actual usable storage
3. Reduces the chance of an “Out of Space” condition
4. Increases flexibility as all unused capacity of all datastores remains available
5. Creating VMs with “Thick Provisioned – Eager Zeroed” disks would increase the provisioning time
6. Creating VMs as “Thick Provisioned” (Eager or Lazy Zeroed) provides no significant benefit but adds a serious capacity penalty (see the disk provisioning sketch after this list)
7. Using Thin Provisioned virtual machines minimizes storage replication traffic on creation of virtual machines
8. Using Thick Provisioned LUNs reduces the requirement for fast turnaround times when purchasing additional capacity
9. Monitoring is essential to successfully and safely use “Thin on Thin”
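
The difference between the thin provisioned choice and the thick options in points 5 and 6 comes down to how the virtual disk backing is created. Below is a minimal sketch using pyVmomi (the vSphere Python SDK) of how a new VMDK could be added as thin provisioned or eager zeroed thick; the controller key, unit number and capacity are illustrative placeholders only, not values from this decision.

from pyVmomi import vim

def add_disk_spec(controller_key, unit_number, size_gb, thin=True, eager_zero=False):
    # Build a device spec for a new VMDK; thin provisioned by default.
    disk = vim.vm.device.VirtualDisk()
    disk.controllerKey = controller_key
    disk.unitNumber = unit_number
    disk.capacityInKB = size_gb * 1024 * 1024

    backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    backing.diskMode = 'persistent'
    backing.thinProvisioned = thin                     # thin: blocks allocated on first write
    backing.eagerlyScrub = (not thin) and eager_zero   # eager zeroed thick: zeroed up front
    disk.backing = backing

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    spec.device = disk
    return spec

# Usage (vm is an existing vim.VirtualMachine; 1000 is the typical key of the first SCSI controller):
# config = vim.vm.ConfigSpec(deviceChange=[add_disk_spec(1000, 1, 100, thin=True)])
# task = vm.ReconfigVM_Task(spec=config)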

Alternatives

1. Thin provision the LUN and thick provision virtual machine disks (VMDKs)
2. Thick provision the LUN and thick provision virtual machine disks (VMDKs)
3. Thin provision the LUN and thin provision virtual machine disks (VMDKs)

Implications

1. No storage over commitment can occur on the physical array
2. The storage “consumed” will be reported differently between the vSphere Administrator and the Storage Administrator. The vSphere Administrator will see the true utilization, whereas the SAN administrator will see the “Consumed” & “Provisioned” values as the same
3. It is possible for a datastore to become overcommitted; if it is not monitored, the datastore may run out of free space, resulting in an outage (a monitoring sketch follows below)
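
Because the LUNs themselves are thick provisioned, the datastore is the only layer that can become overcommitted, so datastore capacity is what needs to be watched. A minimal pyVmomi sketch of this kind of check, assuming the datastore objects have already been retrieved (for example via a container view) and using an arbitrary 85% warning threshold:

def report_datastore_usage(datastores, warn_pct=85):
    # Print used vs provisioned capacity for each vim.Datastore and flag risky ones.
    for ds in datastores:
        s = ds.summary
        capacity_gb = s.capacity / 1024**3
        free_gb = s.freeSpace / 1024**3
        used_pct = 100 * (s.capacity - s.freeSpace) / s.capacity
        # uncommitted = space promised to thin VMDKs but not yet written
        provisioned_gb = (s.capacity - s.freeSpace + (s.uncommitted or 0)) / 1024**3
        overcommitted = ' (overcommitted)' if provisioned_gb > capacity_gb else ''
        warning = ' <-- WARNING' if used_pct >= warn_pct else ''
        print(f'{s.name}: {capacity_gb:.0f} GB capacity, {free_gb:.0f} GB free, '
              f'{provisioned_gb:.0f} GB provisioned, {used_pct:.0f}% used'
              f'{overcommitted}{warning}')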

Related Articles

1. Datastore (LUN) and Virtual Disk Provisioning (Thin on Thin)


Example VMware vNetworking Design w/ 2 x 10GB NICs (IP based or FC/FCoE Storage)

I have had a large response to my earlier example vNetworking design with 4 x 10GB NICs, and I have been asked, “What if I only have 2 x 10GB NICs?” So below is an example of an environment which was limited to just two (2) x 10GB NICs and used IP Storage.

If your environment uses FC/FCoE storage, the below still applies and the IP storage components can simply be ignored.

Requirements

1. Provide high performance and redundant access to the IP Storage (if required)
2. Ensure ESXi hosts can be evacuated in a timely manner for maintenance
3. Prevent significant impact to storage performance by vMotion, Fault Tolerance and virtual machine traffic
4. Ensure high availability for all network traffic

Constraints

1. Two (2) x 10GB NICs

Solution

Use one dvSwitch to support all VMkernel and virtual machine network traffic, and use “Route based on physical NIC load” (commonly referred to as “Load Based Teaming”).

Use Network I/O Control to ensure that, in the event of contention, all traffic types get appropriate network resources.

Configure the following Network I/O Control share values (a worked example of how these shares translate into bandwidth follows the list):

IP Storage traffic: 100
ESXi Management: 25
vMotion: 25
Fault Tolerance: 25
Virtual Machine traffic: 50
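
Shares only come into play when an uplink is contended, and they are applied per physical NIC. The worked example below (a sketch only, assuming a single fully contended 10GbE uplink carrying all five traffic types) shows the minimum bandwidth each traffic type would receive.

# How the NIOC shares above translate into bandwidth under full contention
# on one 10GbE uplink carrying all five traffic types.
shares = {
    'IP Storage': 100,
    'ESXi Management': 25,
    'vMotion': 25,
    'Fault Tolerance': 25,
    'Virtual Machine': 50,
}

uplink_gbps = 10.0            # shares are applied per physical uplink
total = sum(shares.values())  # 225

for traffic, share in shares.items():
    gbps = uplink_gbps * share / total
    print(f'{traffic:>16}: {share:>3} shares -> ~{gbps:.2f} Gbps minimum under contention')

# e.g. IP Storage receives 100/225 of 10 Gbps (~4.44 Gbps) per uplink when every other
# traffic type is also contending; unused bandwidth remains available to all traffic types.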

Configure two (2) VMkernel ports for IP Storage and set each on a different VLAN and subnet.
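
As a rough illustration, each of these VMkernel ports could be created against its own dvPortGroup as in the pyVmomi sketch below; the portgroup keys, IP addresses and subnet masks are hypothetical values, not part of this design.

from pyVmomi import vim

def add_ip_storage_vmk(host, dvs_uuid, portgroup_key, ip_address, netmask):
    # Create a VMkernel port on a dvPortGroup for IP storage traffic.
    ip = vim.host.IpConfig(dhcp=False, ipAddress=ip_address, subnetMask=netmask)
    dvport = vim.dvs.PortConnection(switchUuid=dvs_uuid, portgroupKey=portgroup_key)
    spec = vim.host.VirtualNic.Specification(ip=ip, distributedVirtualPort=dvport)
    # The portgroup name is left empty because the vmk connects to a dvPortGroup.
    return host.configManager.networkSystem.AddVirtualNic(portgroup='', nic=spec)

# Two VMkernel ports, each on its own dvPortGroup, VLAN and subnet (illustrative values):
# add_ip_storage_vmk(host, dvs.uuid, 'dvportgroup-101', '192.168.10.11', '255.255.255.0')
# add_ip_storage_vmk(host, dvs.uuid, 'dvportgroup-102', '192.168.20.11', '255.255.255.0')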

Configure VMkernel ports for vMotion (or Multi-NIC vMotion), ESXi Management and Fault Tolerance and set them to active on both 10GB interfaces (the default configuration).

All dvPortGroups for virtual machine traffic (in this example VLANs 6 through 8) will be active on both interfaces.

The above utilizes LBT to load balance network traffic, dynamically moving workloads between the two 10GB NICs once a network adapter reaches >=75% utilization (evaluated over a 30-second interval).
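
For reference, the LBT policy can be applied programmatically as well as through the vSphere Client. Below is a minimal pyVmomi sketch of setting “Route based on physical NIC load” as the default teaming policy on a dvPortGroup; the portgroup object and error handling are omitted for brevity.

from pyVmomi import vim

def lbt_spec(portgroup):
    # Build a reconfigure spec that sets 'Route based on physical NIC load' (LBT).
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(inherited=False, value='loadbalance_loadbased')

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port_config.uplinkTeamingPolicy = teaming

    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    spec.configVersion = portgroup.config.configVersion  # required by the reconfigure call
    spec.defaultPortConfig = port_config
    return spec

# Usage (pg is an existing vim.dvs.DistributedVirtualPortgroup):
# task = pg.ReconfigureDVPortgroup_Task(spec=lbt_spec(pg))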

[Figure: example vNetworking design with 2 x 10GB NICs]

Conclusion

Even when your ESXi hosts have only two (2) x 10GB interfaces, VMware provides enterprise-grade features to ensure all traffic (including IP Storage) has access to sufficient bandwidth to continue serving production workloads until contention subsides.

This design ensures that in the event a host needs to be evacuated, even during production hours, it will complete in the fastest possible time with minimal or no impact to production. The faster your vMotion activity completes, the sooner DRS can get your cluster running as smoothly as possible, and if you are patching, the sooner your maintenance can be completed and the patched hosts returned to the cluster to serve your VMs.

Related Posts

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)
2. Network I/O Control Shares/Limits for ESXi Host using IP Storage