Bug Life: vSphere 6.0 Network I/O Control & Custom Network Resource Pools

In a previous post, How to configure Network I/O Control (NIOC) for Nutanix (or any IP Storage), I showed just how easy configuring NIOC was back in the vSphere 5.x days.

It was based around the concepts of Shares and Limits, of which I have always recommended Shares, as they enable fairness while allowing traffic to burst if/when required. NIOC v2 was a simple and effective solution for sure.
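To illustrate why I prefer Shares, below is a minimal, simplified sketch of share-based allocation: shares only matter under contention, and otherwise every traffic type can burst to whatever it needs. The traffic types, share values and 10GbE link speed are example numbers I have picked for illustration, and real NIOC also redistributes unused allocation, which this sketch does not.

```python
def nioc_share_allocation(link_gbps, demands_gbps, shares):
    """Illustrative only: proportional share-based bandwidth allocation.

    demands_gbps: what each traffic type is currently trying to send (Gbit/s)
    shares:       relative share value per traffic type
    Returns the bandwidth each traffic type receives on one uplink.
    """
    total_demand = sum(demands_gbps.values())
    if total_demand <= link_gbps:
        # No contention: every traffic type bursts to its full demand.
        return dict(demands_gbps)

    # Contention: divide the link proportionally to shares,
    # capped at each traffic type's actual demand.
    total_shares = sum(shares[t] for t in demands_gbps)
    return {
        t: min(demands_gbps[t], link_gbps * shares[t] / total_shares)
        for t in demands_gbps
    }

# Example: 10GbE uplink with storage, VM and vMotion traffic all busy at once.
demands = {"storage": 8.0, "vm": 4.0, "vmotion": 6.0}
shares = {"storage": 100, "vm": 50, "vmotion": 50}
print(nioc_share_allocation(10.0, demands, shares))
# -> storage ~5.0, vm ~2.5, vmotion ~2.5 Gbit/s under contention
```

With no limits or reservations configured, the same storage traffic could use the full 10Gbit/s whenever the other traffic types are idle, which is exactly the behaviour I want.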

Enter NIOC V3 in vSphere 6.0.

Once you upgrade to NIOC v3 you can no longer use the vSphere C# client, and NIOC now also has the concept of bandwidth reservations, as shown below:

[Image: NIOC v3 overview]

I am not really a fan of reservations in NIOC or for CPU (memory reservations are fine though), and in fact I'll go as far as to say NIOC was great in vSphere 5.x and I don't think it needed any changes.

However, with vSphere 6.0 Release 2494585, when attempting to create a custom network resource pool under the “Resource Allocation” menu using the “+” icon (as shown below), you may experience issues.

As shown below, before even pressing the “+” icon to create a network resource pool, the yellow warning box tells us we need to configure a bandwidth reservation for virtual machine system traffic first.

[Screenshot: warning requiring a bandwidth reservation for virtual machine system traffic]

So my first thought was: OK, I can do this, but why? I prefer using Shares as opposed to Limits or reservations because I want traffic to be able to burst when required, and I don't want bandwidth to be wasted if certain traffic types are not using it.

In any case, I followed the link in the warning and went to set a minimal reservation of 10Mbit/s for virtual machine traffic, as shown below.

[Screenshot: setting a 10Mbit/s reservation for virtual machine traffic]

When pressing “Ok” I was greeted with the below error saying “Resource settings are invalid”. As shown below, I also tried higher reservations without success.

[Screenshot: “Resource settings are invalid” error]

I spoke to a colleague and had them try the same in a different environment and they also experienced the same issue.

I currently have a call open with VMware Support. They have acknowledged this is an issue and that it is being investigated. I will post updates as I hear from them, so stay tuned.

NOS 4.5 Delivers Increased Read Performance from SATA

In a recent post I discussed how NOS 4.5 increases the effective SSD tier capacity by performing up-migrations on only the local extent, as opposed to both RF copies within the Nutanix cluster. In addition to this significant improvement in usable SSD tier capacity, the read performance of the SATA tier has also received a lot of attention from Nutanix engineers in NOS 4.5.

The Solutions and Performance Engineering team has been investigating and testing ways to improve SATA performance. Ideally the active working set for VMs will fit within the SSD tier, and the changes discussed in my previous post dramatically improve the chances of that happening.

But there are situations where reads of cold data still need to be serviced by the slower SATA drives. Nutanix uses Data Locality to ensure hot data remains close to the application, delivering the lowest latency and overheads. In the case of SATA, however, where data is infrequently accessed and the number of local SATA drives is limited (in some cases to only 2 or 4 drives), reading from remote SATA drives can actually improve performance.

Most Nutanix nodes have 2 x SSD and 4 x SATA, so in the best case you will only see a few hundred IOPS from SATA, as that is all those drives are physically capable of.

To get around this issue, NOS 4.5 introduces some changes to the way in which we select a replica to read an egroup from the HDD tier. Periodically, NOS (re)calculates the average I/O latencies of all the replicas of a vdisk (the replicas which hold the vdisk's egroups). We use this information to choose a replica as follows:

  1. If the latency of the local replica is less than a configurable threshold, read from the local replica.
  2. If the latency of the local replica is more than the configurable threshold, and the latency of the remote replica is more than that of the local replica, prefer the local replica.
  3. If the latency of the local replica is more than the configurable threshold, and the latency of the remote replica is lower than the threshold OR lower than that of the local replica, prefer the remote replica.
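To make those rules concrete, here is a minimal sketch of the decision in Python. The function name, threshold value and latency inputs are purely illustrative assumptions, not the actual NOS implementation.

```python
def choose_hdd_replica(local_latency_ms, remote_latency_ms, threshold_ms):
    """Illustrative sketch of the replica-selection rules described above.

    The latencies stand in for the periodically (re)calculated average I/O
    latencies of the replicas holding the vdisk's egroups; threshold_ms
    stands in for the configurable threshold mentioned in the rules.
    """
    # Rule 1: local replica is responding quickly enough -> read locally.
    if local_latency_ms < threshold_ms:
        return "local"

    # Rule 2: local replica is slow, but the remote replica is even slower
    # -> stay local.
    if remote_latency_ms > local_latency_ms:
        return "local"

    # Rule 3: local replica is slow and the remote replica is below the
    # threshold or faster than the local copy -> read remotely.
    return "remote"

# Example values only: local SATA replica at 18ms, remote at 6ms, threshold 10ms.
print(choose_hdd_replica(18, 6, 10))   # -> "remote" (rule 3)
print(choose_hdd_replica(4, 6, 10))    # -> "local"  (rule 1)
```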

The diagram below shows an example where the VM on Node A is performing random reads to data A and shortly thereafter to data C. When requesting reads from data A, the latency is below the threshold, but when it requests data C, NOS detects that the latency of the local copy is higher than that of the remote copy and selects the remote replica to read from. As the diagram shows, one possible outcome when reading multiple pieces of data is that one read is served locally and the other is serviced remotely.

[Diagram: remote SATA reads example]

Now the obvious next question is: “What about Data Locality?”

Data Locality is still maintained for the hot data residing in the SSD tier, because reads from SSD are fast enough that serving them locally reduces CPU and network overheads. For SATA reads, which are typically >5ms, the SATA drive itself is the bottleneck rather than the network, so distributing reads across more SATA drives, even if they are not local, results in better overall performance and lower latency.

If the SSD tier has not reached 75% utilization, all data will reside within the SSD tier and will be served locally. The above feature is for situations where the SSD tier is 75% full and data is being tiered down to the SATA tier, AND random reads are occurring to cold data OR to data which will not fit in the SSD tier, such as very large databases.

In addition, NOS 4.5 detects whether read I/O is random or sequential, and if it is sequential (which SATA drives handle much better), the data has a higher threshold to meet before being up-migrated to SSD.
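As a rough illustration of that random-vs-sequential distinction, the sketch below shows the kind of decision being described. The function name, access counts and threshold values are hypothetical examples for illustration only, not actual NOS parameters.

```python
def should_up_migrate(read_is_sequential, access_count,
                      random_threshold=3, sequential_threshold=10):
    """Hypothetical sketch: sequential reads (which SATA handles well) must be
    accessed more times before the data is up-migrated to SSD, preserving SSD
    capacity for the random, latency-sensitive working set."""
    threshold = sequential_threshold if read_is_sequential else random_threshold
    return access_count >= threshold

# A cold extent read sequentially 5 times stays on SATA...
print(should_up_migrate(read_is_sequential=True, access_count=5))   # False
# ...while the same number of random reads would trigger up-migration.
print(should_up_migrate(read_is_sequential=False, access_count=5))  # True
```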

The result of these algorithm improvements, combined with the increased effective SSD tier capacity discussed earlier and Nutanix in-line compression, is higher performance over larger working sets, including those which exceed the capacity of the SSD tier.

Effectively, NOS 4.5 delivers a truly scale-out solution for read I/O from the SATA tier, which means a single VM can potentially read from all nodes in the cluster, ensuring SATA performance for things like Business Critical Applications is both high and consistent. Combined with NX-6035C storage-only nodes, this means SATA read I/O can be scaled out, as shown in the diagram below, without scaling compute.

[Diagram: scale-out remote SATA reads with storage-only nodes]

 

As we can see above, the storage-only nodes (NX-6035C) deliver additional performance for read I/O from the SATA tier (as well as from the SSD tier).

Competition Example Architectural Decision Entry 6 – Improve Performance for BCAs on Cisco UCS

Name: Anuj Modi
Title: Unified Computing & Virtualization Consultant @ Cisco
Twitter: @vConsultant
Blog: http://anujmodi.wordpress.com

Problem Statement

Many companies are migrating application workloads to virtual infrastructure to take advantage of virtual computing. Despite the benefits of virtualizing the environment, applications still face I/O performance issues, and end-users are not happy with the response times compared to when the applications ran on physical servers. What are the ways to improve performance for business critical applications in such environments?

Assumptions

1. Cisco Unified Computing System
2. VMware vSphere 5.x
3. Cisco Virtual Interface Card M81/1240/1280
4. Critical applications/databases

Constraints

1. No impact on the applications' production data
2. Benefits of virtual infrastructure features
3. High availability of applications

Motivation

1. Better performance and response times for business critical applications
2. Reduced CPU cycles on ESXi servers by offloading I/O to the hardware level
3. Improved I/O throughput for applications

Architectural Decision

Use Cisco VN-Link in hardware with VMDirectPath to get better I/O performance for network traffic. All traffic will be directed through the physical interface card, bypassing the vmkernel. This provides better I/O performance as it removes the OS kernel layer from the path between the virtual machine and the physical interface card.

VN-Link in Hardware with VMDirectPath

Alternatives

Cisco provides three different options for virtual machine traffic on the hypervisor. These options are listed below:

1. VN-Link in Software
2. VN-Link in Hardware
3. VN-Link in Hardware with VMDirectPath

The other two options can also be used to improve performance for virtual machine traffic.
In option 1, the Nexus 1000V switch can be used for network traffic forwarding. Each virtual machine NIC connects directly to the Nexus 1000V switch, and the Nexus 1000V uplinks connect to the Cisco virtual interface card. With this option, you get the benefits of Nexus 1000V advanced network features such as ERSPAN and NetFlow, as well as standardized network switch management.

In option 2, UCSM is used as the distributed switch and is integrated with vCenter Server to control virtual machine traffic. Each virtual machine NIC maps to a different virtual interface (VIF) on the UCS Fabric Interconnect, which passes the traffic directly through. This gives better I/O performance for network traffic and directs the I/O load to the physical interface card.

Justification

Option 3 is selected for this solution to provide higher I/O performance for network traffic. Hypervisor bypass is the ability for a virtual machine to access PCIe adaptor hardware directly in order to reduce the overhead on the host CPU. Cisco UCS provides this feature with the VN-Link in Hardware with VMDirectPath option, which helps reduce host CPU/memory overhead for I/O virtualization. The virtual machine talks directly to the Cisco virtual interface card and bypasses the vmkernel, providing higher performance for network traffic. The current virtual interface card can scale up to 256 virtual interfaces, which means most of the virtual machines on a single host can be given their own PCIe adaptor.

Implications

1. The disadvantage is the currently limited vMotion support on the VMware hypervisor.
