Name: Anuj Modi
Title: Unified Computing & Virtualization Consultant @ Cisco
Twitter: @vConsultant
Blog: http://anujmodi.wordpress.com
Problem Statement
Most companies are migrating application workloads to virtual infrastructure to take advantage of virtual computing. Even with the benefits of virtualizing the environment, some applications still face I/O performance issues, and end-users unhappy with response times are pushing to move these applications back to physical servers. What are the ways to improve performance for business critical applications in such environments?
Assumptions
1. Cisco Unified Computing System
2. VMware vSphere 5.x
3. Cisco Virtual Interface Card M81KR/1240/1280
4. Critical applications/databases
Constraints
1. No impact on the applications' production data
2. Retain the benefits of virtual infrastructure features
3. High availability of applications
Motivation
1. Better performance and response time for business critical applications
2. Reduce CPU cycles on ESXi servers by offloading the I/O load to the hardware level
3. Improved I/O throughput for applications
Architectural Decision
Use Cisco VN-Link in hardware with VMDirectPath to get better I/O performance for network traffic. All traffic is passed directly through the physical interface card, bypassing the vmkernel. This provides better I/O performance because the hypervisor kernel layer is removed from the path between the virtual machine and the physical interface card.
VN-Link in Hardware with VMDirectPath
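As a rough illustration of what this decision means operationally, the following is a minimal pyVmomi sketch, not the exact procedure for this design, of attaching a passthrough PCI function (such as a VIC dynamic vNIC exposed for VMDirectPath) to a virtual machine. The vCenter address, credentials and the VM name critical-app-vm01 are hypothetical placeholders, and the sketch simply takes the first passthrough-capable function reported for the host; DirectPath I/O also requires the VM memory reservation to be locked, which the spec sets.

# Minimal sketch: attach a passthrough PCI function to a VM via the vSphere API.
# vCenter address, credentials and VM name are placeholders, not part of this design.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the VM by name (placeholder name) and its current host.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-app-vm01")
host = vm.runtime.host

# Ask the environment browser which PCI functions can be passed through to this VM
# and take the first one (in this design it would be a VIC dynamic vNIC).
config_target = vm.environmentBrowser.QueryConfigTarget(host=host)
pci_info = config_target.pciPassthrough[0]

backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
    id=pci_info.pciDevice.id,
    deviceId=format(pci_info.pciDevice.deviceId % 2**16, 'x'),
    systemId=pci_info.systemId,
    vendorId=pci_info.pciDevice.vendorId,
    deviceName=pci_info.pciDevice.deviceName)
dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

# DirectPath I/O requires the full memory reservation to be locked.
spec = vim.vm.ConfigSpec(deviceChange=[dev_spec], memoryReservationLockedToMax=True)
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)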
Alternatives
Cisco provides three different options for handling virtual machine traffic on the hypervisor. These options are listed below:
1. VN-Link in Software
2. VN-Link in Hardware
3. VN-Link in Hardware with VMDirectPath
The other two options can also be used to improve performance for virtual machine traffic.
In option 1, the Nexus 1000V switch is used for network traffic forwarding. Each virtual machine NIC connects directly to the Nexus 1000V switch, and the Nexus 1000V uplinks connect to the Cisco virtual interface card. With this option, you get the benefits of the Nexus 1000V's advanced network features, such as ERSPAN and NetFlow, as well as standardized network switch management.
In option 2, UCS Manager (UCSM) acts as the distributed switch and is integrated with vCenter Server to control the virtual machine traffic. Each virtual machine NIC maps to a different virtual interface (VIF) on the UCS Fabric Interconnect and passes traffic directly through it. This gives better I/O performance for network traffic and offloads the I/O processing to the physical interface card.
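Both of these alternatives appear to vSphere as distributed switches (the Nexus 1000V in option 1, the UCSM VM-FEX switch in option 2). A quick way to confirm which path a given VM's traffic takes is to inspect the backing of its vNICs; the minimal pyVmomi sketch below does that, with the vCenter address, credentials and VM name as hypothetical placeholders.

# Minimal sketch: report which switch each vNIC of a VM is attached to.
# Distributed-switch ports cover the Nexus 1000V (option 1) and UCSM VM-FEX (option 2).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-app-vm01")

for dev in vm.config.hardware.device:
    if not isinstance(dev, vim.vm.device.VirtualEthernetCard):
        continue
    backing = dev.backing
    if isinstance(backing, vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo):
        # Distributed switch port (Nexus 1000V or UCSM VM-FEX)
        print(dev.deviceInfo.label, "-> DVS port group key", backing.port.portgroupKey)
    elif isinstance(backing, vim.vm.device.VirtualEthernetCard.NetworkBackingInfo):
        # Standard vSwitch port group
        print(dev.deviceInfo.label, "-> port group", backing.deviceName)

Disconnect(si)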
Justification
Option 3 is selected in this solution to provide higher I/O performance for network traffic. Hypervisor bypass is the ability of a virtual machine to access PCIe adapter hardware directly in order to reduce the overhead on the host CPU. Cisco UCS provides this feature with the VN-Link in Hardware with VMDirectPath option, which helps reduce the host CPU/memory overhead of I/O virtualization. The virtual machine talks directly to the Cisco virtual interface card and bypasses the vmkernel, providing higher performance for network traffic. The current virtual interface card can scale up to 256 virtual interfaces, which means that most of the virtual machines on a single host can be given their own PCIe device.
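To sanity-check this on a given host, the passthrough-capable PCI functions that the VIC presents can be listed through the vSphere API. The following is a minimal pyVmomi sketch under the same assumptions as the earlier ones; the vCenter address, credentials and the host name esxi01.example.local are placeholders.

# Minimal sketch: list PCI functions on an ESXi host that are capable of passthrough,
# e.g. the dynamic vNICs presented by the Cisco virtual interface card.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.local")

# Join the host's PCI inventory with its passthrough capability/enabled state.
devices = {d.id: d for d in host.hardware.pciDevice}
for p in host.config.pciPassthruInfo:
    if p.passthruCapable:
        dev = devices.get(p.id)
        name = dev.deviceName if dev else "unknown"
        state = "enabled" if p.passthruEnabled else "capable, not enabled"
        print(p.id, name, state)

Disconnect(si)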
Implications
1. The disadvantage is the currently limited vMotion support on the VMware hypervisor when VMDirectPath is in use.