Example Architectural Decision – Storage Protocol Choice for a VMware View Environment using Linked Clones

Problem Statement

What is the most suitable storage protocol for a Virtual Desktop (VMware View) environment using Linked Clones?

Assumptions

1.  The Storage Array supports NFS native snapshot offload
2. VMware View 5.1 or later

Motivation

1. Minimize recompose (maintenance) window
2. Minimize impact on the storage array and HA/DRS cluster during recompose activities
3. Reduce storage costs where possible
4. Simplify the storage design, e.g. the number/size of datastores and storage connectivity
5. Reduce the total solution cost, e.g. the number of ESXi hosts required

Architectural Decision

Use Network File System (NFS)

Justification

1. Using native NFS snapshots (VAAI) offloads the creation of VMs to the array, thereby reducing the compute overhead on the ESXi hosts
2. Native NFS snapshots require much less disk space than traditional linked clones
3. Recomposition times are reduced due to the offloading of the cloning to the array
4. More virtual machines can be supported per NFS datastore than per VMFS datastore (200+ for NFS compared to a recommended maximum of 140 for VMFS, and it is generally recommended to design for much lower numbers, e.g. 64 per VMFS datastore); see the sizing sketch after this list
5. Recompose/Refresh activities can be performed during business hours, or at logoff (for Refresh), with minimal impact to the HA/DRS cluster, giving more flexibility to maintain the environment
6. Avoids potential VMFS locking issues, although this is less of a concern for environments using vSphere 4.1 onward with VAAI compatible arrays
7. When sizing your storage array, less capacity is required. Note: Performance sizing is also critical
8. The cost of a FC Storage Area Network can be avoided
9. Fewer ESXi hosts may be required as the compute overhead of driving cloning has been removed
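
To put point 4 into perspective, below is a minimal sizing sketch in Python. The pool size of 2,000 desktops is an assumption chosen purely for illustration, and the per-datastore figures are simply the design numbers discussed in point 4.

    # Hypothetical sizing sketch: datastores required for an assumed pool of
    # 2,000 desktops, using the per-datastore design figures discussed above.
    import math

    desktops = 2000            # assumed pool size (illustration only)
    vmfs_design_limit = 64     # conservative design figure per VMFS datastore
    nfs_design_limit = 200     # figure per NFS datastore discussed above

    vmfs_datastores = math.ceil(desktops / vmfs_design_limit)   # 32
    nfs_datastores = math.ceil(desktops / nfs_design_limit)     # 10

    print(f"VMFS datastores required: {vmfs_datastores}")
    print(f"NFS datastores required:  {nfs_datastores}")

Fewer, larger datastores directly supports motivation point 4 (a simpler storage design) without changing the underlying capacity requirement.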

Implications

1. In the current release (5.1), View Storage Accelerator (formerly known as Content Based Read Cache, or CBRC) is not supported when using Native NFS snapshots (VAAI)
2. Also in the current release (5.1), "Use native NFS snapshots (VAAI)" is in "Tech Preview". This is rumored to change in View 5.2

Alternatives

1. Use VMFS (block) based datastores and a larger number of VMFS datastores. Note: Recompose activity will be driven by the host, which adds an overhead to the cluster.

Example VMware vNetworking Design w/ 2 x 10GB NICs (IP based or FC/FCoE Storage)

I have had a large response to my earlier example vNetworking design with 4 x 10GB NICs, and I have been asked, "What if I only have 2 x 10GB NICs?" So the below is an example of an environment which was limited to just two (2) x 10GB NICs and used IP Storage.

If your environment uses FC/FCoE storage, the below still applies and the IP storage components can simply be ignored.

Requirements

1. Provide high performance and redundant access to the IP Storage (if required)
2. Ensure ESXi hosts could be evacuated in a timely manner for maintenance
3. Prevent significant impact to storage performance from vMotion / Fault Tolerance and virtual machine traffic
4. Ensure high availability for all network traffic

Constraints

1. Two (2) x 10GB NICs

Solution

Use one dvSwitch to support all VMkernel and virtual machine network traffic, and use "Route based on Physical NIC Load" (commonly referred to as "Load Based Teaming").

Use Network I/O Control to ensure that, in the event of contention, all traffic types get appropriate network resources.

Configure the following Network I/O Control share values (a worked example of how these translate into bandwidth under contention follows the list):

IP Storage traffic: 100
ESXi Management: 25
vMotion: 25
Fault Tolerance: 25
Virtual Machine traffic: 50
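
As a rough sanity check, the following is a minimal sketch in Python showing how these share values translate into an approximate minimum bandwidth per traffic type. It assumes the worst case of every traffic type actively contending on a single 10Gb uplink, which is an assumption for illustration only; in practice NIOC shares only come into play under contention, and idle traffic types release their bandwidth to the others.

    # Approximate minimum bandwidth per traffic type when all types contend
    # on one 10Gb uplink (worst case assumed for illustration).
    NIC_GBPS = 10.0

    shares = {
        "IP Storage": 100,
        "ESXi Management": 25,
        "vMotion": 25,
        "Fault Tolerance": 25,
        "Virtual Machine": 50,
    }

    total_shares = sum(shares.values())   # 225

    for traffic, share in shares.items():
        gbps = NIC_GBPS * share / total_shares
        print(f"{traffic:<16} {share:>3} shares -> ~{gbps:.2f} Gb/s minimum")

Under this worst case, IP Storage is still guaranteed roughly 4.4 Gb/s per uplink, which is why vMotion and Fault Tolerance activity cannot starve storage traffic.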

Configure two (2) VMkernel ports for IP Storage and set each on a different VLAN and subnet.

Configure VMkernel ports for vMotion (or Multi-NIC vMotion), ESXi Management and Fault Tolerance, and set them to active on both 10GB interfaces (default configuration).

All dvPortGroups for Virtual machine traffic (in this example VLANs 6 through 8) will be active on both interfaces.

The above utilizes LBT to load balance network traffic, dynamically moving workloads between the two 10GB NICs once one or both network adapters reach 75% utilization (measured over a 30 second period).
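
To make the rebalancing behaviour concrete, here is a minimal sketch in Python of the idea described above: when one uplink exceeds the 75% threshold, traffic is moved to the less loaded uplink. This illustrates the concept only and is not VMware's actual algorithm; the uplink names and traffic figures are assumptions.

    # Illustrative sketch of the Load Based Teaming idea: if an uplink exceeds
    # 75% utilization, move its busiest port to the least loaded other uplink.
    NIC_CAPACITY_GBPS = 10.0
    THRESHOLD = 0.75

    def rebalance(uplinks):
        """uplinks: dict mapping uplink name -> list of per-port rates in Gb/s."""
        for name, ports in uplinks.items():
            utilization = sum(ports) / NIC_CAPACITY_GBPS
            if utilization <= THRESHOLD:
                continue
            # Move the busiest port to the least loaded of the other uplinks.
            target = min((u for u in uplinks if u != name),
                         key=lambda u: sum(uplinks[u]))
            busiest = max(ports)
            ports.remove(busiest)
            uplinks[target].append(busiest)
            print(f"Moved a {busiest} Gb/s port from {name} to {target}")

    uplinks = {"dvUplink1": [4.0, 3.0, 1.5], "dvUplink2": [2.0, 1.0]}
    rebalance(uplinks)   # dvUplink1 is at 85%, so its busiest port moves across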

[Diagram: Example vNetworking design with 2 x 10GB NICs]

Conclusion

Even when your ESXi hosts only have two x 10Gb interfaces, VMware provides enterprise grade features to ensure all traffic (including IP Storage) can get access to sufficient bandwidth to continue serving production workloads until the contention subsides.

This design ensures that in the event a host needs to be evacuated, even during production hours, the evacuation will complete in the fastest possible time with minimal or no impact to production. The faster your vMotion activity completes, the sooner DRS can get your cluster running as smoothly as possible, and if you are patching, the sooner your maintenance can be completed and the hosts being patched returned to the cluster to serve your VMs.

Related Posts

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)
2. Network I/O Control Shares/Limits for ESXi Host using IP Storage

Storage Capabilities not appearing after Installing & configuring the Netapp VASA Provider 1.0


When installing the Netapp VASA provider 1.0 today, I was surprised to find the Storage Capabilities were not being populated (as shown in the screenshot below). As I have installed and configured the Netapp VASA provider numerous times and never had an issue, this was very annoying.

[Screenshot: Storage Capabilities blank]


I had completed the configuration of the VASA provider (see below) and successfully registered the provider.

[Screenshot: Netapp VASA provider configuration]

I confirmed the provider was showing up by going to "Solution providers" in the vSphere Client.

[Screenshot: Netapp VASA provider under Solution providers]

I then confirmed the vendor provider had been registered, as shown below.

[Screenshot: Vendor Providers]

After unregistering the provider, then uninstalling and re-installing the Netapp VASA Provider software, the problem was still not solved.
After a quick Google, much to my surprise I couldn’t find anything on this issue.

It turns out it was quite a simple "fix", if I can call it that.

For whatever reason I configured the storage system first, then proceeded to complete the other steps.

If you're having this issue, you can avoid the problem by configuring the VASA provider (shown below) in this order:

1. Configure the username and password for communication with vCenter
2. Configure the vCenter server details and register the provider
3. Register the storage system


[Screenshot: Netapp VASA provider configuration]


Shortly thereafter, the storage capabilities all appeared successfully.

[Screenshot: Storage Capabilities populated]

So the above contradicts the official Netapp installation guide for the VASA provider, which states on page 11 that you can add storage systems at any time, but it solved my problem.

Hope you found this useful.