Building a Highly Available OpenStack Cloud

In my previous post, I shared some details about RPC-R's reference architecture. Here, I want to drill down further into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect their OpenStack services and APIs to be always on and always available, even when servers fail. This is particularly important in a public cloud, or in a private cloud where resources are shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly around Red Hat OpenStack Platform, the OpenStack distribution at the core of RPC-R.

Red Hat uses a key set of open source technologies to create clustered active-active controller nodes for all Red Hat OpenStack Platform clouds. Rackspace augments that reference architecture with hardware, software and operational expertise to create RPC-R with our 99.99% API uptime guarantee. The key technologies for creating our clustered controller nodes are:

  • Pacemaker – A cluster resource manager used to manage and monitor the availability of OpenStack components across all nodes in the controller nodes cluster
  • HAProxy – Provides load balancing and proxy services to the cluster (Note that while HAProxy is the default for Red Hat OpenStack Platform, RPC-R uses F5 hardware load balancers instead)
  • Galera – Replicates the Red Hat OpenStack Platform database across the cluster


Putting it all together, you can see in the diagram above that redundant instances of each OpenStack component run on each controller node in a collapsed cluster configuration managed by Pacemaker. As the cluster manager, Pacemaker monitors the state of the cluster and has responsibility for failover and failback of services in the event of hardware and software failures. This includes coordinating restart of services to ensure that startup is sequenced in the right order and takes into account service dependencies.
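To make the ordering and cloning behavior concrete, Pacemaker clusters are typically configured with `pcs` commands along these lines. This is a sketch only; the resource names below are hypothetical and not taken from the actual RPC-R deployment:

```
# Run a stateless API service as a clone on every controller node
pcs resource create openstack-keystone systemd:openstack-keystone clone

# Start the API service only after the database and messaging layers are up,
# so restarts are sequenced according to service dependencies
pcs constraint order start galera-master then openstack-keystone-clone
pcs constraint order start rabbitmq-clone then openstack-keystone-clone
```

The clone resource gives you a redundant instance on each node, and the ordering constraints are what let Pacemaker restart services in the right sequence after a failure.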

A three-node cluster is the minimum size in our RPC-R reference architecture to ensure a quorum in the event of a node failure. A quorum is the minimum number of nodes that must be functioning for the cluster itself to remain functional. Since a quorum is defined as more than half the nodes (floor(n/2) + 1), three nodes is the smallest feasible cluster you can have. Having at least three nodes also allows Pacemaker to compare the state of each node in the cluster and, where inconsistencies are found, apply a majority-rule algorithm to determine the correct state of each node.
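The quorum arithmetic above can be sketched in a few lines (a minimal illustration of the math, not part of any OpenStack or Pacemaker tooling):

```python
def quorum(cluster_size: int) -> int:
    """Minimum number of live nodes required for the cluster to act."""
    return cluster_size // 2 + 1

def tolerable_failures(cluster_size: int) -> int:
    """How many nodes can fail while the survivors still hold quorum."""
    return cluster_size - quorum(cluster_size)

# A three-node cluster needs two live nodes, so it survives one failure.
print(quorum(3), tolerable_failures(3))  # 2 1

# A two-node cluster also needs two live nodes, so it survives none --
# which is why three is the smallest feasible HA cluster.
print(quorum(2), tolerable_failures(2))  # 2 0
```

The same arithmetic explains why clusters are usually grown in odd increments: going from three nodes to four raises the quorum without increasing the number of failures you can tolerate.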

While Pacemaker is used to manage most of the OpenStack services, RPC-R uses the Galera multi-master database cluster manager to replicate and synchronize the MariaDB based OpenStack database running on each controller node. MariaDB is a community fork of MySQL that is used in a number of OpenStack distributions, including Red Hat OpenStack platform.


Using Galera, we are able to create an active-active OpenStack database cluster without the need for shared storage. Reads and writes can be directed to any of the controller nodes, and Galera will synchronize all database instances. In the event of a node failure, Galera will handle failover and failback of database nodes.
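As a sketch, the multi-master replication described above is driven by a handful of wsrep settings in the MariaDB configuration. The cluster name and hostnames below are hypothetical placeholders, not values from the RPC-R deployment:

```ini
# Hypothetical Galera fragment, e.g. /etc/my.cnf.d/galera.cnf
[mysqld]
wsrep_on = ON
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name = "openstack_db"
# Every controller node lists all cluster members
wsrep_cluster_address = "gcomm://controller1,controller2,controller3"
# Galera requires row-based replication and InnoDB
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
```

Because every node carries a full copy of the database, any surviving node can serve reads and writes after a failure, which is what removes the need for shared storage.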

By default, Red Hat OpenStack Platform uses HAProxy to load balance API requests to OpenStack services running in the control plane. In this configuration, each controller node runs an instance of HAProxy and each set of services has its own virtual IP. HAProxy is also clustered together using Pacemaker to provide fault tolerance for the API endpoints.
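To make the default HAProxy arrangement concrete, a single API endpoint might be fronted like this. This is a hedged sketch; the virtual IP, ports and server addresses are illustrative, not taken from an actual deployment:

```
# One frontend/backend pair per OpenStack API, each with its own virtual IP
frontend keystone_public
    bind 192.0.2.10:5000              # virtual IP for the Keystone API
    default_backend keystone_api

backend keystone_api
    balance roundrobin
    option httpchk                    # mark unhealthy controllers as down
    server controller1 10.0.0.11:5000 check
    server controller2 10.0.0.12:5000 check
    server controller3 10.0.0.13:5000 check
```

Each controller runs an identical HAProxy configuration; Pacemaker's job is simply to keep the virtual IPs and HAProxy instances alive somewhere in the cluster.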


As mentioned previously, Rackspace has chosen to use redundant hardware load balancers in place of HAProxy. Per the previous diagram, the RPC-R architecture is identical to that of Red Hat OpenStack Platform, except that we use F5 appliances in place of clustered HAProxy instances. We believe this option provides better performance and manageability for RPC-R customers.


An enterprise-grade private cloud is achievable, but it requires a combination of the right software, a well-thought-out architecture and operational expertise. We believe that Rackspace and Red Hat working together is the best choice for customers looking for partners to help them build out such a solution.

Early OpenStack contributor says cloud project has ‘lost its heart’

It started with a Tweet last week from Joshua McKenty: “OpenStack has lost its heart. Last summit I will attend.”

That’s somewhat shocking to read if you consider that McKenty helped found the open source cloud computing project, built a startup company that sold OpenStack cloud software and formerly sat on the board of directors of the Foundation that governs OpenStack.


How exactly has OpenStack “lost its heart?” “When we started this project, it was about trying to create a new open source community,” McKenty explains.

As OpenStack has grown, he says, it’s turned into a corporate open source project, not a community-driven one. He spent a day walking the show floor at the recent OpenStack Summit in Vancouver and said he didn’t find anyone talking about the original mission of the project. “Everyone’s talking about who’s making money, whose career is advancing, how much people get paid, how many workloads are in production,” McKenty says. “The mission was to do things differently.”

McKenty admits that it’s hard to keep a small-community feel in a project that has grown as large as OpenStack. It started with just Rackspace and NASA committing code; it has now grown to more than 500 contributing companies, including IBM, Red Hat, Cisco, HP and even VMware.

McKenty says the commercial success of OpenStack is good for customers and those companies. But he believes OpenStack has lost its mission of changing the world through open source; now, he says, it’s mostly about big companies looking to make money off of it. McKenty has left the startup he founded, Piston Cloud Computing Co., to join another small but fast-growing open source project, Cloud Foundry; he works as the Field CTO for Pivotal, one of the main backers of that PaaS (platform as a service) project.

Others in the OpenStack community say McKenty has a jaded perspective. “OpenStack exists because of the companies that make it up, and companies need to make money,” says Randy Bias, an OpenStack Foundation board member and another of the earliest contributors to OpenStack. Bias says that without the support of companies like Rackspace, Dell, HP and many others, the project never would have existed and grown into what it is today. “OpenStack was never a movement to change the world,” says Bias, whose startup company Cloudscaling was bought by EMC last year. The project is not made up of purely philanthropic companies with only altruistic motives; the reality is, companies joined OpenStack to make money.

As for the fact that OpenStack is no longer a small organization with a grassroots feel, Bias says it’s almost impossible to keep that while scaling successfully to a community with so many members.

Rackspace to resell PLUMgrid OpenStack SDN

SDN start-up PLUMgrid received a major endorsement from an OpenStack pioneer this week with a worldwide resale agreement with cloud provider Rackspace.

Rackspace will resell PLUMgrid’s full SDN product line, including its Open Networking Suite for OpenStack, CloudApex, and support and training services. The agreement is non-exclusive, meaning Rackspace also has the option to resell SDN products from PLUMgrid competitors, and PLUMgrid can sell through other cloud providers.


But Rackspace does not have a “NASCAR line-up of 10 SDN partners” that it resells product from, says Bryan Thompson, senior director of product at Rackspace.

“Our strategy is not to partner broadly,” Thompson says.

Rackspace evaluated three or four SDN vendors before choosing PLUMgrid, he says.

“The others were just offensive in how they integrated with OpenStack,” Thompson says.

PLUMgrid CEO Larry Lang says his company is partnering with “the Godfather of OpenStack.” Rackspace is a founder of the OpenStack initiative.

“A good wedding is great but you have to make sure it’s an excellent marriage,” Lang said of the new resale arrangement.

PLUMgrid’s products have been validated to run with Rackspace’s Private Cloud powered by OpenStack service. PLUMgrid’s products enable microsegmentation and firewall service insertion through their Virtual Domains feature; distributed and programmable data plane forwarding via IO Visor software; and real-time SDN visualization and monitoring through CloudApex.

PLUMgrid has 70 OpenStack cloud customers. The company’s ONS product also supports Mirantis OpenStack, RDO, Red Hat OpenStack, and Ubuntu OpenStack by Canonical distributions and installers, in addition to the Rackspace private OpenStack cloud service.

PLUMgrid will be demonstrating ONS at next week’s Rackspace::Solve event in New York. The company was founded in 2011 by ex-Cisco engineers.

This story, “Rackspace to resell PLUMgrid OpenStack SDN” was originally published by Network World.

Ubuntu Wily Werewolf trots out easy-install OpenStack

The latest version of Canonical’s Linux distribution features a streamlined OpenStack installer, along with the LXD hypervisor and telco-strength networking

If there’s one Linux distribution that has worked hard — overtime, even — to associate itself with OpenStack and all it represents, it’s Canonical’s Ubuntu.

And if there’s one message Ubuntu has been working hard to promote, it’s the company’s dedication to the painless deployment and maintenance of OpenStack — as is the case with Ubuntu’s 15.10 release, aka Wily Werewolf.

With 15.10, Canonical delivers the latest edition of OpenStack, Liberty, and the first generally available edition of a new deployment and management tool, OpenStack Autopilot. Using Canonical’s own Juju orchestration system and Landscape systems management tool, Autopilot distills the installation process to a few crucial admin choices. Technologies that are redundant or not meant to be deployed together are flagged during setup.

Canonical has also been using Ubuntu to draw attention to LXD, the company’s hypervisor and container runtime hybrid, which supports functions like live migration. Canonical appears to want to make LXD the default compute resource unit for OpenStack on Ubuntu over the long term; the company recently unveiled an OpenStack Nova compute driver for LXD as a tech preview.

Telcos remain one of OpenStack’s bigger user bases, so the mix of features within both OpenStack and its implementations, Ubuntu included, drifts in that direction. Among them is a new set of high-performance networking tools, the Data Plane Development Kit (DPDK), which allows “virtual network functions to deliver the high-performance network throughput required in core network services” (according to Canonical).

Hypernetes unites Kubernetes, OpenStack for multitenant container management

Hypernetes project aims to deliver container orchestration with VM-level isolation

Hyper, creator of a VM isolated container engine that’s compatible with Docker, has debuted a project for running multitenant containers at scale.

The Hypernetes project fuses the Hyper container engine with Kubernetes and uses several pieces from OpenStack to create what it describes as “a secure, multitenant Kubernetes distro.”

At the bottom of the Hypernetes stack is bare metal, outfitted with Hyper’s HyperD custom container engine to provision and run containers with VM-level isolation. Kubernetes manages the containers through HyperD’s API set. Other functions are controlled by components taken from OpenStack, including Keystone, for identity management and authentication; Neutron, for network management; and Cinder/Ceph, for storage volume management.

HyperD is one of many recent products that deliberately blurs the line between a small, lightweight, fast container engine and a hypervisor with strong isolation and a full roster of emulated system resources. Canonical’s LXD, the experimental Novm project, and Joyent’s Triton Elastic Container Host all experiment with implementing different kinds of isolation in a runtime.

The line will likely remain blurred, as companies experiment with combining the best of both worlds to provide products (and projects) that appeal to a range of needs.

Hypernetes’ blend of pieces from profoundly different parts of the container/VM ecosystem is intriguing — not only in terms of what it means for Hypernetes, but for OpenStack as well. Most discussion about OpenStack’s future involves the evolution of the stack as a whole or different distributions simplifying tasks such as deployment or maintenance. But with Hypernetes, OpenStack is treated not as a framework but as a source for technologies to be used in a new context, unburdened by OpenStack’s notorious complexity issues.

The Apache-licensed Hypernetes is still new, so there’s little documentation for the project. Most of what’s available is a fork of the docs for Kubernetes. This implies that anyone familiar with Kubernetes should be able to pick up Hypernetes and run it or fashion a new product or service around it — in much the same way that Google used Kubernetes as the core for its Google Container Engine.