Example VMware vNetworking Design w/ 2 x 10GB NICs (IP based or FC/FCoE Storage)

I have had a large response to my earlier example vNetworking design with 4 x 10GB NICs, and I have been asked, "What if I only have 2 x 10GB NICs?" So below is an example of an environment that was limited to just two (2) x 10GB NICs and used IP storage.

If your environment uses FC/FCoE storage, the below still applies and the IP storage components can simply be ignored.

Requirements

1. Provide high performance and redundant access to the IP Storage (if required)
2. Ensure ESXi hosts could be evacuated in a timely manner for maintenance
3. Prevent vMotion / Fault Tolerance and virtual machine traffic from significantly impacting storage performance
4. Ensure high availability for all network traffic

Constraints

1. Two (2) x 10GB NICs

Solution

Use one dvSwitch to support all VMkernel and virtual machine network traffic, and use "Route based on Physical NIC Load" (commonly referred to as "Load Based Teaming").

Use Network I/O Control (NIOC) to ensure that, in the event of contention, all traffic types get appropriate network resources.

Configure the following Network Share Values (a quick sketch of how these shares translate into bandwidth under contention follows the list):

IP Storage traffic: 100
ESXi Management: 25
vMotion: 25
Fault Tolerance: 25
Virtual Machine traffic: 50
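
To make the effect of these shares concrete, here is a minimal Python sketch. It is not an official VMware formula; the function name and the assumption that every traffic type is contending on the same uplink at the same time are mine. The share values are the ones listed above.

```python
# Minimal sketch: approximate worst-case bandwidth entitlement per traffic type
# on a single saturated 10Gb uplink, based on the NIOC share values above.
# This is an illustration of how proportional shares behave, not ESXi code.

UPLINK_GBPS = 10  # one physical 10Gb NIC

shares = {
    "IP Storage": 100,
    "ESXi Management": 25,
    "vMotion": 25,
    "Fault Tolerance": 25,
    "Virtual Machine": 50,
}

def entitlement_gbps(traffic_type):
    """Approximate minimum bandwidth for a traffic type when all types contend at once."""
    return UPLINK_GBPS * shares[traffic_type] / sum(shares.values())

for t in shares:
    print(f"{t}: ~{entitlement_gbps(t):.1f} Gb/s under full contention")

# IP Storage: ~4.4 Gb/s, Virtual Machine: ~2.2 Gb/s, the others: ~1.1 Gb/s each.
# When there is no contention, any traffic type can still burst towards the full 10Gb.
```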

Configure two (2) VMkernel ports for IP Storage and set each on a different VLAN and subnet.

Configure VMkernel ports for vMotion (or Multi-NIC vMotion), ESXi Management and Fault Tolerance, and set them as active on both 10GB interfaces (the default configuration).

All dvPortGroups for virtual machine traffic (in this example, VLANs 6 through 8) will be active on both interfaces.
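
For reference, the whole single-dvSwitch layout can be summarised as below. This is only an illustrative sketch: the port group names and the VLAN IDs for the VMkernel networks are hypothetical, and only the VM networks (VLANs 6 through 8) are taken from the design above.

```python
# Illustrative summary of the single-dvSwitch design described above.
# Names and VMkernel VLAN IDs are made up; both dvUplinks are active everywhere,
# with "Route based on physical NIC load" (LBT) as the teaming policy.

TEAMING_POLICY = "Route based on physical NIC load"  # a.k.a. Load Based Teaming
ACTIVE_UPLINKS = ["dvUplink1", "dvUplink2"]          # the two 10GB NICs

dv_port_groups = [
    # (name,             VLAN, purpose)
    ("dvPG-Mgmt",        10, "ESXi Management VMkernel"),
    ("dvPG-vMotion",     20, "vMotion VMkernel(s)"),
    ("dvPG-FT",          30, "Fault Tolerance VMkernel"),
    ("dvPG-IPStorage-A", 40, "IP Storage VMkernel #1 (subnet A)"),
    ("dvPG-IPStorage-B", 41, "IP Storage VMkernel #2 (subnet B)"),
    ("dvPG-VM-VLAN6",     6, "Virtual machine traffic"),
    ("dvPG-VM-VLAN7",     7, "Virtual machine traffic"),
    ("dvPG-VM-VLAN8",     8, "Virtual machine traffic"),
]

for name, vlan, purpose in dv_port_groups:
    print(f"{name:18} VLAN {vlan:<3} active on {ACTIVE_UPLINKS} ({purpose})")
```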

The above utilizes LBT to load-balance network traffic, dynamically moving workloads between the two 10GB NICs once one or both network adapters reach >=75% utilization.
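
The sketch below models that LBT behaviour in a simplified way: if a physical NIC sits at or above the 75% utilization threshold, some virtual ports are moved to a less-utilized NIC. It is a toy model for illustration only (ESXi evaluates this periodically itself); the function name and utilization figures are made up.

```python
# Simplified model of Load Based Teaming's rebalancing decision.
# Not the actual ESXi implementation - just the idea described above.

LBT_THRESHOLD = 0.75  # rebalance trigger for "Route based on physical NIC load"

def pick_uplink_to_offload(utilization):
    """Given {vmnic: utilization 0.0-1.0}, return (busy_nic, target_nic)
    if a rebalance is warranted, otherwise None."""
    busy = max(utilization, key=utilization.get)
    idle = min(utilization, key=utilization.get)
    if utilization[busy] >= LBT_THRESHOLD and busy != idle:
        return busy, idle
    return None

# Example: vmnic0 is saturated (e.g. IP storage + vMotion), vmnic1 is quiet.
print(pick_uplink_to_offload({"vmnic0": 0.82, "vmnic1": 0.30}))
# ('vmnic0', 'vmnic1') -> LBT would migrate some ports onto vmnic1

print(pick_uplink_to_offload({"vmnic0": 0.50, "vmnic1": 0.40}))
# None -> below the threshold, no ports are moved
```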

[Diagram: example vNetworking design with 2 x 10GB NICs]

Conclusion

Even when your ESXi hosts have only two (2) x 10Gb interfaces, VMware provides enterprise-grade features to ensure all traffic (including IP Storage) gets access to sufficient bandwidth to continue serving production workloads until the contention subsides.

This design ensures that if a host needs to be evacuated, even during production hours, it will complete in the fastest possible time with minimal or no impact to production. The faster your vMotion activity completes, the sooner DRS can get your cluster running as smoothly as possible; and if you are patching, the sooner your maintenance can be completed and the patched hosts returned to the cluster to serve your VMs.

Related Posts

1. Example Architectural Decision – Network I/O Control for ESXi Host using IP Storage (4 x 10 GB NICs)
2. Network I/O Control Shares/Limits for ESXi Host using IP Storage

19 thoughts on "Example VMware vNetworking Design w/ 2 x 10GB NICs (IP based or FC/FCoE Storage)"

    • In my opinion, LACP would at best provide a minimal benefit under some circumstances, although it can be supported in vSphere 5.1 on a dvSwitch. The Route based on physical NIC load setting ensures that, in the event of >75% utilization of a vmnic, traffic will be dynamically spread over the other available adapters. As such, LACP is not really required, and with multiple 10Gb connections I have not seen an environment where the network was a bottleneck that needed to be addressed. If there was a bottleneck, Route based on physical NIC load combined with NIOC would be an excellent solution without using LACP. Thanks for the comment.

  1. Thank you for these excellent posts. In my opinion, with 2 x 10GB and both NICs active, the configuration will be simple. Please correct me if there is any impact, as NIOC is going to take care of any contention anyway.

  2. “Configure the two (2) VMKernel’s for IP Storage…”
    I was just reading http://wahlnetwork.com/2012/04/19/nfs-on-vsphere-a-few-misconceptions/ and this line makes me wonder whether we need the second VMkernel port: "The host attempts to find a VMkernel port that is on the same subnet as the NFS server. If multiple exist, it uses the first one found."
    We run 100% NFS but I have only ever created a single VMkernel port for NFS. If I have misunderstood Chris’ post and we can get NFS connections across multiple physical NICs simultaneously, this would be a massive win for us.
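
As an illustration of the behaviour quoted in the comment above (and of why the design uses two VMkernel ports on two different subnets), here is a minimal Python sketch. The IP addresses, subnets and port names are made up: with NFS targets presented on both subnets, different datastores resolve to different VMkernel ports, which LBT can then place on different physical NICs.

```python
# Hedged sketch of "use the first VMkernel port on the same subnet as the NFS server".
# Two VMkernel ports on two subnets -> different NFS targets can use different vmks,
# and therefore potentially different physical 10Gb NICs. All addresses are examples.

import ipaddress

vmkernel_ports = [
    ("vmk1", ipaddress.ip_interface("192.168.10.11/24")),  # IP Storage subnet A
    ("vmk2", ipaddress.ip_interface("192.168.20.11/24")),  # IP Storage subnet B
]

def vmk_for_nfs_target(target_ip):
    """Return the first VMkernel port whose subnet contains the NFS target address."""
    target = ipaddress.ip_address(target_ip)
    for name, iface in vmkernel_ports:
        if target in iface.network:
            return name
    return vmkernel_ports[0][0]  # no subnet match: traffic goes via routing instead

print(vmk_for_nfs_target("192.168.10.50"))  # vmk1 -> datastores mounted on subnet A
print(vmk_for_nfs_target("192.168.20.50"))  # vmk2 -> datastores mounted on subnet B
```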

  3. What if we don't have Enterprise Plus (and therefore no Distributed vSwitch capability)? I've been looking for recommendations for 10GbE NICs, but it seems everything always assumes a DVS. Any suggestions?

    • Without the DVS, I would use a single vSwitch with Route based on originating port ID. The downside is obviously no network load balancing or NIOC, so in periods of contention the performance could degrade for traffic types such as IP storage. Still, with 2 x 10GB connections it should perform very well.

  4. Josh,

    Rewinding to last year: I arrived at my company and inherited a cluster of ESX issues, from packet loss to poor host performance, and started researching a solution. Our setup was the older standard design of many (12) 1GB NICs and, ironically, two (2) dual-port Intel 10GbE cards utilizing NFS over 10GbE. My research led me to your site and this advanced design, and with approval I implemented it.

    Results: Enabling NIOC & SIOC using your setup instantly increased performance and stability. Your advanced VDS design has been nothing short of impeccable; it really displays the true capabilities of Enterprise Plus licensing and enabled me to eliminate the use of most of the 1GB NICs.

    I was so fascinated with your design that it inspired me to go after the VCP5, which I passed. Disappointed it was not challenging enough, I moved on to the VCAP-DCA and passed it as well. The DCD exam is scheduled for this Friday.

    I kept your design image printed out as a reminder of how my VCDX journey began. Thank you for taking the time to create this site and for inspiring me to find my true potential with VMware.

    Now going down the rabbit hole with some questions:
    1. The Intel 10GbE NetQueue count for NFS is set to 8 and can be set to 16 (Dell 910 quad-processor hardware over an HNAS 4100). I can't find anything worthwhile on increasing it for 10GbE NICs, but I can for HBAs. Any thoughts on achieving higher performance by increasing the NetQueue count?

    2. A fraction of our ESX DRS-enabled hosts are still using 1GB iSCSI NICs, and half of the HA cluster is missing its iSCSI connection (an issue that was never addressed).

    I was going to entertain adding another VDS port group (VLAN tagged for iSCSI) that could add support to the rest of the ESX hosts, and migrate the remaining 1GB iSCSI to the same layout. If the EqualLogic back end is only 1GB, would I need to set a peak of 1,000,000 KBps, or set the iSCSI NIOC shares to perhaps 10 even though the back end is 1GB? Does it matter?

  5. Looking for a similar setup but using iSCSI instead of NFS: will this also work, and is it also advised? (Our SANs are 10Gbit iSCSI only: Dell MD3820i.)

    So, one 10Gbit cable for all traffic (mgmt, vMotion, iSCSI, VLAN VM networking, …) plus another 10Gbit cable to a different redundant switch for redundancy, to be configured using a vCenter Distributed Switch.

    • Yes, this is perfectly fine for iSCSI as well; Load Based Teaming on the VDS is a very simple and effective solution for IP storage.
