Red Hat Enterprise Linux OpenStack Platform 6 SR-IOV Networking – Part II: Walking Through the Implementation

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, share some thoughts on the current limitations, and look at the enhancements being worked on in the OpenStack community.

Note: this post is not intended to provide a full end-to-end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV Networking in Red Hat Enterprise Linux OpenStack Platform 6 for the complete procedure.

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.

Compute Node

This is a standard Red Hat Enterprise Linux OpenStack Platform Compute node, running KVM with the libvirt Nova driver. As the ultimate goal is to provide OpenStack VMs running on this node with access to SR-IOV virtual functions (VFs), SR-IOV support is required at several layers of the Compute node, namely the BIOS, the base operating system, and the physical network adapter. Since SR-IOV traffic completely bypasses the hypervisor's software switching layer, there is no need to deploy Open vSwitch or the Neutron Open vSwitch agent (ovs-agent) on this node.
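Before moving on, it can be helpful to confirm that SR-IOV is actually available at each of these layers. A minimal sketch of such a check on Red Hat Enterprise Linux 7, assuming the adapter sits at the example PCI address 0000:05:00.0 (substitute your own), might look like this:

    # Confirm that the kernel detects the IOMMU (Intel VT-d)
    dmesg | grep -i -e DMAR -e IOMMU

    # Confirm that the network adapter advertises the SR-IOV capability
    lspci -vvv -s 0000:05:00.0 | grep -i "Single Root I/O Virtualization"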

Controller/Network Node

The other node, which serves as the OpenStack Controller/Network node, includes the various OpenStack API and control services (e.g., Keystone, Neutron, Glance) as well as the Neutron agents required to provide network services for VM instances. Unlike the Compute node, this node still uses Open vSwitch for connectivity into the tenant data networks. This is required in order to serve SR-IOV enabled VM instances with network services such as DHCP, L3 routing, and network address translation (NAT). This is also the node on which the Neutron server and the Neutron plugin are deployed.
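For illustration, in Red Hat Enterprise Linux OpenStack Platform 6 the SR-IOV support plugs into the ML2 plugin as an additional mechanism driver (sriovnicswitch) alongside Open vSwitch. A minimal sketch of the relevant ml2_conf.ini fragment, where the physical network label physnet1 and the VLAN range are assumptions from our lab rather than required values, could look like:

    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch,sriovnicswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200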

Topology Layout

For this test we are using a VLAN tagged network connected to both nodes as the tenant data network. Currently there is no support for SR-IOV networking on the Network node, so this node still uses a normal network adapter without SR-IOV capabilities. The Compute node, on the other hand, uses an SR-IOV enabled network adapter (from the Intel 82576 family in our case).
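To make the mapping between this topology and Neutron concrete, the VLAN-backed tenant data network can be created with the Juno-era neutron CLI roughly as sketched below; the network name, physical network label (physnet1), VLAN ID, and subnet are illustrative values only:

    neutron net-create sriov-net --provider:network_type vlan \
        --provider:physical_network physnet1 --provider:segmentation_id 100
    neutron subnet-create sriov-net 192.168.100.0/24 --name sriov-subnet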

Configuration Overview

Preparing the Compute node

  1. The first thing we need to do is make sure that Intel VT-d is enabled in the BIOS and activated in the kernel. The Intel VT-d specification provides hardware support for directly assigning a physical device to a virtual machine.
  2. Recall that the Compute node is equipped with an Intel 82576 based SR-IOV network adapter. For proper SR-IOV operation, we need to load the network adapter driver (igb) with the right parameters to set the maximum number of virtual functions (VFs) we want to expose. Different network cards support different values here, so you should consult the documentation from the card vendor. In our lab we chose to set this number to seven. This configuration effectively enables SR-IOV on the card itself, which otherwise defaults to regular (non-SR-IOV) mode.
  3. After a reboot, the node should come up ready for SR-IOV. You can verify this with the lspci utility, which lists detailed information about all PCI devices in the system. A combined sketch of these three steps is shown below.
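For reference, the three preparation steps above can be sketched as follows. The file paths, the max_vfs=7 value, and the 82576/igb combination reflect our lab setup and Red Hat Enterprise Linux 7 conventions; adjust them to match your own adapter and environment:

    # 1. Activate Intel VT-d in the kernel: append intel_iommu=on to
    #    GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the GRUB config
    vi /etc/default/grub                      # GRUB_CMDLINE_LINUX="... intel_iommu=on"
    grub2-mkconfig -o /boot/grub2/grub.cfg

    # 2. Tell the igb driver to expose up to seven VFs per port
    echo "options igb max_vfs=7" > /etc/modprobe.d/igb.conf
    dracut -f                                 # rebuild the initramfs in case igb is loaded early at boot

    # 3. Reboot, then verify that the VFs show up as PCI devices
    reboot
    # ...once the node is back up:
    lspci -nn | grep -i 82576
    # expect the physical ports plus one "82576 Virtual Function" entry per VF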