Value for cloud providers: Give customers control over their networks

OpenStack Quantum enables providers to let customers provision and manage networks in a public cloud environment according to their own requirements, Bryce said.

Now customers will have much deeper control over cloud networks than ever before. In the past, they might have had some load-balancing services, IP management services or services based on a virtual private network (VPN) to use, but now they can provision entire networks. This depth of control will allow customers to create real networks with true separation and segregation — two things they don’t have access to as cloud consumers today.

“The OpenStack networking project has a process of multitasking and management layers, so providers can delegate different access rights and responsibilities to their users and, within limits, even let them set up their own networks,” Bryce said.

Multi-tenancy is another big value of OpenStack Quantum. “With a hypervisor, you can spin up five virtual machines (VMs) and add five different customers,” Salisbury said. “To keep traffic separate, Quantum lets you either provision a VLAN between the virtual switch and the host — all in the same physical box — or build a ‘tunnel.’ It’s extremely complicated, but it’s how we’ll get to scale at multi-tenancy. In the future, orchestration in the data center will be API-driven — Quantum is the first generation.”

Today’s network traffic is separated by VLANs, which are limited to 4094 IDs. Cloud providers quickly burn through those 4094 segments, so companies are pursuing overlays, Salisbury explained. “With Quantum, there’s no finite number of tunnels that can be used. It ignores the underlying limitations and enables a lot more flexibility.”
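The scale gap Salisbury describes follows directly from the header formats: an 802.1Q VLAN ID is a 12-bit field with two reserved values, while a VXLAN-style overlay carries a 24-bit segment ID, so the tunnel namespace is not literally infinite but large enough to be effectively unlimited. A minimal sketch of the arithmetic:

```python
# 802.1Q reserves VLAN IDs 0 and 4095, leaving 4094 usable segments.
VLAN_ID_BITS = 12
usable_vlans = 2 ** VLAN_ID_BITS - 2  # 4094

# A VXLAN-style overlay carries a 24-bit VNI (VXLAN Network Identifier).
VNI_BITS = 24
usable_vnis = 2 ** VNI_BITS  # 16,777,216 segments

print(f"VLAN segments:  {usable_vlans}")
print(f"VXLAN segments: {usable_vnis}")
print(f"Overlay headroom: roughly {usable_vnis // usable_vlans}x more tenants")
```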

Using OpenStack Quantum

Like many new technologies driven by open source communities, both OpenStack Quantum and SDN are still highly complex.

“This complexity needs to be simplified by making it as friendly to install as possible,” Salisbury said, although he described the advanced services — such as load balancing — that can be provisioned with Quantum as “quite impressive.”

Quantum also enables flexibility and elasticity for cloud computing, according to Laurent Lachal, senior analyst at Ovum. “Quantum aims to turn network assets into on-demand resources that can be dynamically provisioned the same way compute and storage resources are,” he said.

Users can also orchestrate network security resources such as intrusion detection systems via Quantum — something e-commerce, finance and health care applications require to proactively monitor and detect security breaches. “It’s extremely important to have all of these rich networking capabilities that let you network hardware yourself, manage it without intervention, put it all into software and create a standard framework around it,” Bryce said.

Red Hat Enterprise Linux OpenStack Platform 6 SR-IOV Networking – Part II: Walking Through the Implementation

In the previous blog post in this series we looked at what single root I/O virtualization (SR-IOV) networking is all about and we discussed why it is an important addition to Red Hat Enterprise Linux OpenStack Platform. In this second post we would like to provide a more detailed overview of the implementation, some thoughts on the current limitations, as well as what enhancements are being worked on in the OpenStack community.

Note: this post is not intended to provide a full end-to-end configuration guide. Customers with an active subscription are welcome to visit the official article covering SR-IOV Networking in Red Hat Enterprise Linux OpenStack Platform 6 for a complete procedure.

Setting up the Environment

In our small test environment we used two physical nodes: one serves as a Compute node for hosting virtual machine (VM) instances, and the other serves as both the OpenStack Controller and Network node. Both nodes are running Red Hat Enterprise Linux 7.

Compute Node

This is a standard Red Hat Enterprise Linux OpenStack Platform Compute node, running KVM with the Libvirt Nova driver. As the ultimate goal is to provide OpenStack VMs running on this node with access to SR-IOV virtual functions (VFs), SR-IOV support is required on several layers on the Compute node, namely the BIOS, the base operating system, and the physical network adapter. Since SR-IOV completely bypasses the hypervisor layer, there is no need to deploy Open vSwitch or the ovs-agent on this node.

Controller/Network Node

The other node which serves as the OpenStack Controller/Network node includes the various OpenStack API and control services (e.g., Keystone, Neutron, Glance) as well as the Neutron agents required to provide network services for VM instances. Unlike the Compute node, this node still uses Open vSwitch for connectivity into the tenant data networks. This is required in order to serve SR-IOV enabled VM instances with network services such as DHCP, L3 routing and network address translation (NAT). This is also the node in which the Neutron server and the Neutron plugin are deployed.

Topology Layout

For this test we are using a VLAN tagged network connected to both nodes as the tenant data network. Currently there is no support for SR-IOV networking on the Network node, so this node still uses a normal network adapter without SR-IOV capabilities. The Compute node on the other hand uses an SR-IOV enabled network adapter (from the Intel 82576 family in our case).

Configuration Overview

Preparing the Compute node

  1. The first thing we need to do is make sure that Intel VT-d is enabled in the BIOS and activated in the kernel. Intel VT-d provides hardware support for directly assigning a physical device to a virtual machine.
  2. Recall that the Compute node is equipped with an Intel 82576-based SR-IOV network adapter. For proper SR-IOV operation, we need to load the network adapter driver (igb) with the right parameters to set the maximum number of Virtual Functions (VFs) we want to expose. Different network cards support different values here, so you should consult the proper documentation from the card vendor. In our lab we chose to set this number to seven. This configuration effectively enables SR-IOV on the card itself, which otherwise defaults to regular (non-SR-IOV) mode.
  3. After a reboot, the node should come up ready for SR-IOV. You can verify this by utilizing the lspci utility that lists detailed information about all PCI devices in the system.
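Step 3 can be scripted: lspci lists each virtual function as its own PCI device, so counting the lines that mention “Virtual Function” gives the number of exposed VFs. A small sketch (the match string fits the Intel 82576 family used here; other adapters report different descriptions):

```python
import subprocess

def count_virtual_functions(lspci_output: str, match: str = "Virtual Function") -> int:
    """Count PCI devices whose description marks them as SR-IOV VFs."""
    return sum(1 for line in lspci_output.splitlines() if match in line)

if __name__ == "__main__":
    # Equivalent to running `lspci | grep -i "virtual function"` on the node.
    try:
        out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    except FileNotFoundError:  # lspci is not installed on this machine
        out = ""
    print(f"SR-IOV virtual functions visible: {count_virtual_functions(out)}")
```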

Quantum adoption timeframe

Cloud providers appear to be drifting slowly toward Quantum, but not everyone is waiting.

“The most aggressive deployers of OpenStack — service providers like Rackspace and enterprises like eBay — have been running Quantum since the OpenStack Essex release in early 2012,” said Dan Wendlandt, team lead for the OpenStack Quantum project and VMware’s senior product line manager in the Networking and Security Business Unit.

New updates are issued about every six months, and with its latest release, “Folsom,” in late 2012, Quantum became a core supported OpenStack project. At that time, Wendlandt saw a huge jump in the number of organizations evaluating it.

“The increased interest is due to the fact that more people have now stood up basic OpenStack, only to realize that limitations in traditional networking technologies are preventing them from building the rich enterprise cloud offering they need,” Wendlandt said. “I expect many of these Quantum trial deployments to turn into production deployments during mid to late 2013.”

Public vs. Private: Amazon Compared to OpenStack

How to choose a cloud platform and when to use both

The public vs private cloud debate is a path well trodden. While technologies and offerings abound, there is still confusion among organizations as to which platform is suited for their agile needs. One of the key benefits to a cloud platform is the ability to spin up compute, networking and storage quickly when users request these resources and similarly decommission when no longer required. Among public cloud providers, Amazon has a market share ahead of Google, Microsoft and others. Among private cloud providers, OpenStack® presents a viable alternative to Microsoft or VMware.

This article compares Amazon Web Services EC2 and OpenStack® as follows:

  • What technical features do the two platforms provide?
  • How do the business characteristics of the two platforms compare?
  • How do the costs compare?
  • How do you decide which platform to use, and when should you use both?

OpenStack® and Amazon Web Services (AWS)

From the OpenStack project: “OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”

From AWS: “Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.”

OpenStack Object Storage (Swift)

OpenStack Object Storage (Swift) is suited to storing multimedia content such as videos, images, virtual machine images, backups, email and archives. This type of data needs to grow without limit and needs to be replicated, which is exactly what OpenStack Swift is designed to do.

Last but not least comes networking. Networking in the cloud has matured to the point that you can create your own private networks and access control lists, create routes between networks, interconnect different networks, and connect to remote networks using a VPN. Almost all of these enterprise cloud needs are handled by OpenStack Networking.

OpenStack Is Open for Business

OpenStack has emerged as the leading open Infrastructure as a Service (IaaS) platform for private and public clouds. With an OpenStack platform, developers can provision cloud environments on demand, without assistance from IT, thus removing any infrastructure barriers to innovation. Early adopters such as the largest online travel firm in Latin America have already deployed OpenStack to speed time to market for new features and services.

As NetApp CTO Jay Kidd predicted in his list of top storage trends for 2014, this is the year that “OpenStack distributions become more product than project.” So, if you’re planning for a private cloud and you haven’t looked at OpenStack yet, now is the perfect time to start thinking about how it might fit within your own organization.

Open and Extensible Cloud Infrastructure
One of the best ways to understand OpenStack is through a comparison to Linux®. It’s often said that OpenStack is to the cloud what Linux is to servers. Where Linux provides an open—and extensible—operating environment for individual servers, OpenStack provides an open and extensible operating environment for cloud infrastructure that offers:

  • Modular design
  • Public roadmap
  • Packaged distributions
  • Compatibility with Amazon Web Services (AWS)

Modular design: OpenStack is a collection of separate modules or services all under the same umbrella. These services can be used to create pools of processing, storage, and networking resources, all managed through a dashboard that gives administrators control while empowering users to provision resources through a Web interface. Although the component modules are designed to work together, you are free to choose only the components you need.

Public roadmap: OpenStack software releases are named alphabetically: Austin was first, followed by Bexar, Cactus and so on, with the Juno release planned for October 2014. The roadmap is aggressive, with a new release every six months. Each release typically includes new features and new modules, and may include projects in “incubation” for future releases.

Packaged distributions: The analogy between OpenStack and Linux extends to include a very similar distribution model. Just as companies such as Red Hat and SUSE created packaged Linux distributions, a dozen or more OpenStack providers, including Red Hat, SUSE, Mirantis, Rackspace, and Metacloud, are creating packaged versions of OpenStack.

AWS compatibility: Many of the various services that OpenStack provides—compute, storage, networking, and so on—are API-compatible with their equivalent AWS capabilities. If you have an application that runs on AWS, you can run the application in any OpenStack environment, including your on-premises data center.


How does OpenStack Quantum work?

Quantum consists of three layers of APIs:

  1. The top layer is a RESTful API that sends Quantum API and routing API requests to the correct endpoint within the pluggable infrastructure.

“With this API, you can create virtual ports and networks and attach VMs to the networks — all the basic networking concepts,” Bryce said.

  2. The middle layer contains software that provides authentication and authorization control.
  3. The bottom layer is a set of driver-based plug-ins that let Quantum connect to and orchestrate network infrastructure. Each plug-in is designed to work with a specific vendor’s product or open source project.
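As a sketch of the top layer, the request bodies for the Neutron/Quantum v2.0 network and port resources can be built as plain dictionaries; the endpoint URL below is a placeholder (a real deployment discovers it from the Keystone service catalog):

```python
# Hypothetical endpoint; the real URL and auth token come from Keystone.
NEUTRON_URL = "http://controller:9696/v2.0"

def network_request(name: str, admin_state_up: bool = True) -> dict:
    """Body for POST /v2.0/networks — creating a virtual network."""
    return {"network": {"name": name, "admin_state_up": admin_state_up}}

def port_request(network_id: str, name: str) -> dict:
    """Body for POST /v2.0/ports — a virtual port a VM attaches to."""
    return {"port": {"network_id": network_id, "name": name}}

# The plug-in layer translates these generic calls into vendor-specific
# operations (Open vSwitch flows, Cisco device config, Nicira/NSX, ...).
payload = network_request("tenant-net-1")
```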

“Vendors have created plug-ins to allow you to manage their networking gear using the OpenStack networking framework. This includes traditional networking vendors, like Cisco, and startups like Big Switch and Nicira (acquired by VMware), doing SDN,” Bryce said. “It’s a flexible framework that gives you a standard API to manage that work. You can do a lot of different networking gear underneath and abstract that way.”

Vendors who offer network infrastructure will need to publish their code and provide hooks that conform to a standard API. “It’s one of the most unique things about Quantum, because there’s never been a standard networking API before,” Salisbury said.

How do you go about installing Cloudify?


Cloudify can be installed using several methods.

Since it is based on a set of premade packages, it can be deployed using shell scripts, configuration management tools, cloud-specific orchestration tools (CloudFormation, Heat, etc.) or Cloudify’s CLI tool, which provides a smooth experience for getting a fully working, CLI-manageable Cloudify Management server.


Manager Environment

Host Machine

Minimal Requirements

A Cloudify manager must run on a 64-bit machine and requires at the very least 2GB of RAM, 1 CPU and 5GB of free space.


These are the minimal requirements for Cloudify to run. You will have to provision larger machines to actually utilize Cloudify.

Recommended Requirements

The recommended requirements can vary based on the following:

  • Number of deployments you’re going to run.
  • Amount of concurrent logs and events you’re going to send from your hosts.
  • Amount of concurrent metrics you’re going to send from your hosts.

As a general recommendation for the average system, Cloudify requires at least 4GB of RAM and 4 CPU cores. Disk space requirements vary according to the amount of logs, events and metrics sent, as Cloudify doesn’t currently clean them up.


The Manager must be reachable on the following ports:

  • Inbound – port 80 – For CLI and REST access.
  • Inbound – port 22 – If Bootstrapping is done via the CLI.
  • Inbound – port 5672 – Agent to Manager communication.
  • Outbound – port 22 – If running Linux based host machines and remote agent installation is required.
  • Outbound – port 5985 – If running Windows based host machines and remote agent installation is required.
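A best-effort way to sanity-check the inbound ports above from another machine is a plain TCP connect; the port list below simply mirrors the requirements stated here, and the manager address in the commented example is hypothetical:

```python
import socket

# Inbound port matrix from the requirements above.
MANAGER_PORTS = {
    80:   "inbound: CLI and REST access",
    22:   "inbound: bootstrap via the CLI",
    5672: "inbound: agent-to-manager communication (AMQP)",
}

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Best-effort TCP connect check against a manager host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical manager IP): report which required ports answer.
# for port, purpose in MANAGER_PORTS.items():
#     print(port, purpose, is_listening("10.0.0.5", port))
```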

OS Distributions

Management Server

If bootstrapping using packages

Cloudify’s management server currently runs on Ubuntu 12.04 (Precise). To install on CentOS or other distributions, you must use the Docker implementation.

If bootstrapping using Docker Image

Cloudify’s Docker implementation was tested on Ubuntu 14.04 and CentOS 6.5 and is based on the phusion/baseimage Docker image (Ubuntu 14.04).


If the host machine Docker is running on is based on Ubuntu 14.04, we will attempt to install Docker for you if it isn’t already installed (this requires an internet connection). For any other distribution (and release), you’ll have to verify that Docker is installed prior to bootstrapping.


Please see here for the supported distributions.

Agents are provided for these OS distributions, but using the Cloudify Agent Packager you can create your own agents for your distribution.


If you bootstrap using the Docker images, you must have Docker 1.3+ installed.

Cloudify will attempt to install Docker on Ubuntu 14.04 (Trusty) only, as other images may require kernel upgrades and additional package installations.

What’s Next

Next, you should install the Cloudify CLI. Once you’ve installed it, you will be able to bootstrap a Cloudify manager on the environment of your choice.
