Boot integration of the Openvswitch in Ubuntu

The installation of the Openvswitch on Ubuntu brings an automatic integration into the boot sequence. When Ubuntu boots, the Openvswitch is started as well, and any bridge or port defined in previous sessions is restored.
BUT: no L3 interfaces are set up, and junk interfaces defined in previous sessions are restored, too.

The ideal start sequence of the Openvswitch delivers:

  • an empty Openvswitch database during the startup of Ubuntu –> clean the Openvswitch database (patch necessary)
  • all bridges, interfaces, ports, … defined using /etc/network/interfaces –> this is supported
  • all defined L3 interfaces brought up on Openvswitch internal interfaces –> patch necessary

With Ubuntu 16.04, systemd is used to start the system. Older systems using upstart now have a fixed upstart script (see below) – but the systemd scripts do not allow an automatic setup of OVS interfaces out of the box. How to fix it? See below… 🙂 Let's hope that Canonical provides a patch for 16.04 before 18.04 shows up…

With Ubuntu 16.10 boot integration works out of the box.

Define bridges, ports, interfaces using /etc/network/interfaces

All configuration items necessary to set up the Openvswitch can be defined in /etc/network/interfaces .

Define a bridge

All ports and interfaces on the Openvswitch must be attached to a bridge. Defining a bridge (which is a virtual switch) requires the following configuration items:
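As a sketch (following the openvswitch-switch ifupdown integration; the bridge name vmbr-int is taken from the text), the stanza could look like this:

```
auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
    ovs_type OVSBridge
```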

The configuration lines are

  • auto is necessary to allow an automatic start (Ubuntu 16.04 using systemd)
  • allow-ovs is the marker for an Openvswitch bridge. The bridge name follows, here vmbr-int
  • iface is the well-known /etc/network/interfaces keyword to start an interface configuration
  • ovs_type defines the interface type for ovs. OVSBridge identifies an Openvswitch bridge

The lines above are the equivalent of the command ovs-vsctl add-br vmbr-int

Define a L2 port (untagged)

The next step is to attach a port (not seen by Linux) to the bridge above.
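A sketch of such a port definition (port name l2port and Vlan 444 as used in the text below; exact option syntax per the openvswitch-switch helper script):

```
allow-vmbr-int l2port
iface l2port inet manual
    ovs_bridge vmbr-int
    ovs_type OVSPort
    ovs_options tag=444
```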

The configuration lines are

  • allow-[name of the bridge] attaches the port to the named bridge. In our example this is vmbr-int. The name of the port follows (here l2port)
  • iface is the well-known /etc/network/interfaces keyword to start an interface configuration — set the config to manual
  • ovs_bridge holds the name of the bridge to which the port should be attached. In our example this is vmbr-int.
  • ovs_type defines the interface type for ovs. OVSPort identifies an Openvswitch port not seen by the Linux operating system
  • ovs_options defines additional options for the port – here an untagged port, which is a member of Vlan 444. If this option is not set, a port may use all vlan tags on the interface, because all ports transport vlan tags by default.

The drawback is that the bridge name (vmbr-int) must appear twice. And that is not the whole truth: it is actually required a third time. The complete definition for the bridge and port is now:
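A sketch of the combined definition (names and Vlan tag taken from the earlier examples):

```
auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
    ovs_type OVSBridge
    ovs_ports l2port

allow-vmbr-int l2port
iface l2port inet manual
    ovs_bridge vmbr-int
    ovs_type OVSPort
    ovs_options tag=444
```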

This is the summary definition. It contains one additional line in the bridge's iface definition:

  • ovs_ports holds the list of all ports which should be attached to the bridge

The port which is added to the bridge in the example above can also be an existing physical nic. The example below shows the config to add a physical interface to a bridge:
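A sketch, assuming the physical NIC is called eth1:

```
allow-vmbr-int eth1
iface eth1 inet manual
    ovs_bridge vmbr-int
    ovs_type OVSPort
```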

Define a L3 interface

The definition of a L3 interface requires the following configuration lines
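A sketch of such an L3 interface definition (port name l3port, the address 192.168.10.1/24 and the Vlan tag are placeholders):

```
allow-vmbr-int l3port
iface l3port inet static
    ovs_bridge vmbr-int
    ovs_type OVSIntPort
    ovs_options tag=444
    address 192.168.10.1
    netmask 255.255.255.0
```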

The differences compared to the L2 port are

  • iface — set the mode to static — because an IP address is configured on this interface
  • ovs_type — an L3 port must be defined as an internal port — set it to OVSIntPort
  • address and netmask are the well-known parameters for an IPv4 address/netmask combination

Do not forget to add the port to the bridge definition of vmbr-int.

Define a L2 port (tagged/trunking)

The definition of a L2 port, which is tagged looks like:
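A sketch (port name trunkport and the vlan list 444,445 are placeholders):

```
allow-vmbr-int trunkport
iface trunkport inet manual
    ovs_bridge vmbr-int
    ovs_type OVSPort
    ovs_options trunks=444,445
```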

A trunking port does not require an additional definition. In the example above, we want to limit the vlans on the port. This requires a trunk definition using the ovs_options line.

Define a fake bridge

It is also possible to add the definition of an ovs fake bridge
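A sketch (fake bridge name fakebr is a placeholder; the helper script is assumed to append ovs_options to the add-br call, matching the command line shown below):

```
allow-ovs fakebr
iface fakebr inet manual
    ovs_type OVSBridge
    ovs_options vmbr-int 2102
```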

A fake bridge definition requires a special option

  • ovs_options must be set to vmbr-int 2102

On the command line, the equivalent would be: ovs-vsctl add-br fakebr vmbr-int 2102 . The ovs_options are the last two arguments of this command.

Define a tagged LACP bond port

It is also possible to define a tagged bond port using LACP as the bonding protocol
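A sketch (bond name bond0 is a placeholder; eth4/eth5 come from the text, and the LACP/trunk options follow the ovs-vsctl command line syntax):

```
allow-vmbr-int bond0
iface bond0 inet manual
    ovs_bridge vmbr-int
    ovs_type OVSBond
    ovs_bonds eth4 eth5
    ovs_options bond_mode=balance-tcp lacp=active trunks=444,445
```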

The options are

  • ovs_type must be set to OVSBond
  • ovs_bonds has the list of the interfaces to be added to the bond — here eth4 and eth5
  • ovs_options holds all the extra options for the bond and the trunk. These are the same options, which have to be used on the command line

Interface helper script

ifup and ifdown use a helper script, which is part of the openvswitch-switch package. This script, named openvswitch, is located in /etc/network/if-pre-up.d/ and executes all the necessary commands.

The only thing the script is missing is support for changing the MTU of Openvswitch bridges, ports and interfaces.

Boot support (Ubuntu 16.04 – when using systemd)

Canonical did it again – delivered an OS which does not support the autostart of OVS interfaces. The upstart scripts had been fixed to allow the autostart of OVS interfaces, but now systemd is used – and the automatic start of OVS interfaces is broken again.

A fix is available; see the Launchpad bug. The systemd start file for OVS needs to be changed to add dependencies: OVS must be started before the network interfaces are set up.
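As a sketch, the unit file could take the following shape (the ordering directives in the [Unit] section are the point of the fix; the ExecStart/ExecStop lines are assumed from the packaged ovs-ctl scripts and may differ in your package version):

```
[Unit]
Description=Open vSwitch Internal Unit
PartOf=openvswitch-switch.service
DefaultDependencies=no
Before=networking.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/share/openvswitch/scripts/ovs-ctl start --system-id=random
ExecStop=/usr/share/openvswitch/scripts/ovs-ctl stop
```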

Replace /lib/systemd/system/openvswitch-nonetwork.service with the file content above, and you’re done.

One option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch to start with an empty openvswitch config.
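ovs-ctl supports a --delete-bridges flag; a sketch of the corresponding line in /etc/default/openvswitch-switch (the variable name is assumed from the package's default file):

```
# start with an empty Openvswitch config: drop bridges from previous sessions
OVS_CTL_OPTS='--delete-bridges'
```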

Boot support (Ubuntu 14.04 and later – when using upstart)

Ubuntu 14.04 has better boot support for the openvswitch, but it is still not perfect: the interface up/down handling is still missing.

One option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch

The upstart script must be patched twice to start and stop ovs components and the associated interfaces.

You must insert all lines between "exit $?" and "end script" to bring up the Openvswitch interfaces.
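A sketch of what the inserted lines do (bring up the bridges defined with allow-ovs, then the ports assigned to each bridge; the ifquery/ifup class logic is an assumption, not the literal patch):

```
# after "exit $?", before "end script":
for bridge in $(ifquery --list --allow=ovs); do
    ifup --allow=ovs ${bridge}
    ifup --allow=${bridge} -a
done
```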

The shutdown sequence requires the following patch:
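Under the same assumptions, the shutdown counterpart would, as a sketch, take the interfaces down in reverse order:

```
for bridge in $(ifquery --list --allow=ovs); do
    ifdown --allow=${bridge} -a
    ifdown --allow=ovs ${bridge}
done
```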

There is still a dependency: the openvswitch start script assumes that the loopback interface is brought up as the last interface.

Boot support (pre Ubuntu 14.04)

The boot support for the Openvswitch is implemented very differently here. It depends on the Openvswitch version, the Ubuntu version and the package repository. Up to now there is NO support to bring up the interfaces automatically. In any case, a patch is required.

All Ubuntu distributions are using Openvswitch packages (as of November 2013) which do not have an openvswitch upstart script. One way to bring up interfaces here is using a few lines in /etc/rc.local or patching /etc/init.d/openvswitch-switch .

The necessary lines for /etc/rc.local would be:
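A sketch of such lines (the same assumed ifquery/ifup class logic as in the upstart patches):

```
# /etc/rc.local: bring up Openvswitch bridges and their attached ports
for bridge in $(ifquery --list --allow=ovs); do
    ifup --allow=ovs ${bridge}
    ifup --allow=${bridge} -a
done
exit 0
```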

You may add the necessary lines to /etc/init.d/openvswitch-switch , but the interfaces may come up too late, because this non-upstart script starts too late.

If you are using the openvswitch package from the Havana Ubuntu Cloud Archive [deb precise-updates/havana main] you have an openvswitch-switch upstart script. This must be patched to bring up all ovs network components.

You must insert all lines between "exit $?" and "end script" to bring up the Openvswitch interfaces.

The shutdown sequence requires the following patch:

There is still a dependency: "ifup -a" in networking.conf brings up the loopback interface as the last interface. This triggers the event "loopback interface is up", which in turn triggers the openvswitch-switch upstart script. This might be too late. Hopefully there will be a fully integrated solution for Ubuntu in a future release.

The OpenStack Foundation acknowledges the changing landscape

You may have noticed, based on a couple of recent articles from our co-founder, Boris Renski, that Mirantis has changed from being a company focused on building a cloud for you to being a company that helps you get things done in the cloud.  His first blog on the subject, “Infrastructure Software is Dead,” talked about how companies don’t want software, they want the outcomes the software is designed to produce, and the software itself is almost irrelevant.  Now he’s published a second blog, “Can Google Android AWS,” taking the concept a step further and proposing that part of Google’s plan in open sourcing and championing Kubernetes is to destroy cloud switching costs, making it easier for people to choose a cloud other than AWS.

And he’s got a point. Kubernetes isn’t the only tool Google’s been championing; the Istio service mesh project and the Spinnaker deployment tool come to mind.  The Cloud Native Computing Foundation gets proposals for new member projects constantly.

So what’s going on here?

In a word, the cloud itself is becoming commoditized; what’s important is how you use it.  We’ve been beating that drum here at Mirantis for a while now, first with our foundation of Infrastructure as Code, and now with our DriveTrain deployment tool and StackLight monitoring. People thought it was a little weird at first, especially when Mirantis, traditionally an “OpenStack company” added Kubernetes to our roster.

Don’t get the wrong idea; we still do OpenStack, and we’re darn good at it. But this notion of open infrastructure is definitely where things are going.

Witness some of this week’s announcements from the OpenStack Foundation. Speaking from Sydney, Australia at the Fall 2017 OpenStack Summit, executive director of the OpenStack Foundation Jonathan Bryce talked about the Foundation’s plans to focus more on integrations and use cases.

The foundation will be “investing significant financial and technical resources in a four-part strategy to address integration of OpenStack and relevant open source technologies: documenting cross-project use cases, collaborating across communities, including upstream contributions to other open source projects, fostering new projects at the OpenStack Foundation, and coordinating end-to-end testing across projects.”

Great, but what does that actually mean?

Documenting cross-project use cases

Now that OpenStack has finally got a firm footing in the enterprise, the foundation plans to focus on use cases, rather than the individual technologies necessary for enabling those use cases.  In general the plan is to move from the traditional to the emerging use cases, including:

  • Datacenter cloud infrastructure
  • CI/CD
  • Container infrastructure
  • Edge infrastructure

To enable all of these use cases, the foundation will be focusing on more than just OpenStack.

Collaborating across communities

While the OpenStack community has been producing actual software, one of the projects that uses it has a different model.  The OPNFV project doesn’t actually produce software; instead, it works with other communities to define an overall system of projects to accomplish goals, doing gap analysis and producing code within those other communities to close those gaps.

The OpenStack Foundation will be doing something similar going forward, working to make sure that open source projects work properly together in order to enable its target use cases.

“As open source leaders, we’ll fail our user base if we deliver innovation without integration, meaning the operational tools and knowledge to put it all into production,” said Bryce. “By working together across projects, we can invest in closing this gap and ensuring that infrastructure is not only open but also consumable by anyone in the world.”

Fostering new projects at the OpenStack Foundation

Perhaps most interesting, however, is the fact that the foundation isn't planning to limit itself to contributing to other projects; it is also considering taking additional projects under its wing, similar to the way in which the Cloud Native Computing Foundation (CNCF) provides support for the creation of specific technologies.

There’s one very interesting aspect of this goal, however: these projects will not necessarily follow the same governance model as OpenStack. And that makes sense; every community is different, so the foundation wants to make sure that each project owns its own progress.

Coordinating end-to-end testing across projects

Of course, it doesn’t matter what you put together if it can’t be used.  The OpenStack Foundation is also talking about OpenLab, “a community-led program to test and improve support for the most popular Software Development Kits (SDKs)—as well as platforms like Kubernetes, Terraform, Cloud Foundry and more—on OpenStack. The goal is to improve the usability, reliability and resiliency of tools and applications for hybrid and multi-cloud environments.”

Coming next

All of this comes as not just OpenStack, but the entire technology industry is trying to decide what it wants to be when it grows up. Technology is changing fast, leaving a huge gap between those who want to stay on the bleeding edge and those who want to use older, more tried-and-true (read: stable) software. In the coming months, we’ll see how the OpenStack Foundation bridges this gap in order to stay relevant as application developers increasingly move up the stack.

How to install OpenStack on your local machine using Devstack

On November 7, we did a mini-course on Top Sysadmin Tasks and How to Do Them with OpenStack, and we promised to give you instructions on how to install OpenStack on your own laptop. Here it is.

The purpose of this guide is to allow you to install and deploy OpenStack on your own laptop or cloud VMs and follow the webinar exercises at home. The guide is both hardware and OS agnostic and can be used with AMD or Intel and Windows, Mac, or Linux. Furthermore, the installation below is completely isolated and self-contained using virtual machines. The guide is provided as-is, and Mirantis, Inc. is not responsible for any issues that may occur as a result of following this guide.

Step 1.

Create a Linux VM locally on your computer (for example, using VirtualBox) or remotely in the cloud (for example on AWS).

The VM needs to satisfy the following conditions:

  • VM needs at least 4 GB of memory and access to the Internet, and it needs to be accessible (at least) on tcp ports 22 (or console) and 80 from your laptop.
  • Devstack attempts to support Ubuntu 16.04/17.04, Fedora 24/25, CentOS/RHEL 7, as well as Debian and OpenSUSE.
  • If you do not have a preference, Ubuntu 16.04 is the most tested, and will probably go the smoothest.


DevStack will make substantial changes to your system during installation. Only run DevStack on servers or virtual machines that are dedicated to this purpose.

Step 2.

Log into the VM using ssh or the console.

Step 3.

You’ll need to run Devstack as a non-root user with sudo enabled.  You can quickly create a separate user called stack to run DevStack with:

sudo useradd -s /bin/bash -d /opt/stack -m stack

Step 4.

Because this user, stack, will be making many changes to your system, it should have sudo privileges:

echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

You should see output of:

stack ALL=(ALL) NOPASSWD: ALL

Step 5.

Now you need to switch over to use the user stack:

sudo su - stack

Step 6.

Download DevStack:

git clone -b stable/pike https://git.openstack.org/openstack-dev/devstack

You should see:

Cloning into 'devstack'...


The above command downloads the OpenStack Pike release.  You can also check the list of available releases.  To download releases in EOL (End Of Life) status, replace -b stable/pike with, for example, -b liberty-eol (for Liberty).

Step 7.

Change to the devstack directory:

cd devstack/

Step 8.

Determine your IP address:

sudo ifconfig

You will see a response something like this:

enp0s3 Link encap:Ethernet  HWaddr 08:00:27:ea:97:9f  
       inet addr:  Bcast:  Mask:
       inet6 addr: fe80::d715:6025:f469:6f7c/64 Scope:Link
       RX packets:1364291 errors:0 dropped:0 overruns:0 frame:0
       TX packets:406550 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:1415725472 (1.4 GB)  TX bytes:27942474 (27.9 MB)

lo     Link encap:Local Loopback  
       inet addr:  Mask:
       inet6 addr: ::1/128 Scope:Host
       UP LOOPBACK RUNNING  MTU:65536  Metric:1
       RX packets:229208 errors:0 dropped:0 overruns:0 frame:0
       TX packets:229208 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:89805124 (89.8 MB)  TX bytes:89805124 (89.8 MB)

You’re looking for the addr: value in an interface other than lo — here, the enp0s3 interface. Make note of that address for the next step. If you run your VM in the cloud, make sure to use the public IP of your VM instead.

Step 9.

Create the local.conf file with 4 passwords and your host IP, which you determined in the last step:

cat >  local.conf <<EOF

This is the minimum required config to get started with DevStack.
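A minimal local.conf of the kind described would, as a sketch, look like this (secret matches the password shown in the final output; the host IP is a placeholder for the address from step 8):

```
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
HOST_IP=192.168.56.101
```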

Step 10.

Now install and run OpenStack by executing DevStack's stack.sh script:

./stack.sh

At the end of the process you should see output something like this:

This is your host IP address:
This is your host IPv6 address: ::1
Horizon is now available at
Keystone is serving at
The default users are: admin and demo
The password: secret
DevStack Version: pike
OS Version: Ubuntu 16.04 xenial completed in 1221 seconds.

This will take 15 to 20 minutes, largely depending on the speed of your internet connection. Many git trees and packages will be installed during this process.

Step 11.

Access your OpenStack by copy-pasting the link from the above output into your web browser:


Use the default users demo or admin and the password configured in step 9 (secret).

You should see the following log-in window in your browser:


From here, you can do whatever you need to, either from the Horizon interface, or by downloading credentials to use the command line.

You can find out how to do some of the more common tasks by viewing the original webinar, or check out our OpenStack Training resources.


Enterprise Hybrid Cloud: Strategy and Cost Considerations

A clearly defined cloud strategy is imperative during any enterprise's transition to a cloud-based IT environment. To minimize risks and make the right choice between private, public, and hybrid cloud options, it is important to devise an effective enterprise cloud strategy with the following considerations in mind:

  1. Security and Control
    • Is it important to have physical access to the infrastructure and platforms that are used to provision workloads?
    • Are there any security and compliance requirements that need to be met for customers?
  2. Agility and Flexibility
    • Which cloud choice will adequately meet resource demands for developer and business needs?
    • Which cloud environment will adequately support customers during high demand/peak usage times?
  3. Application Complexity Management
    • Does the IT environment need to support legacy, virtualized, microservices-based, and serverless applications?
  4. Operating Cost Reduction
    • What will the costs of deploying, managing, and maintaining IT infrastructure be with each cloud choice?

Cloud Strategy Definition: Mapping Strategic Considerations to Market Segments

The following chart depicts the importance of these four criteria across large, mid-market, and SMB segments as observed by Platform9:

The importance of each of these strategic considerations can be understood as follows:

  • In large organizations, security and control, application complexity management, and operating cost reduction are typically non-negotiable. Agility and flexibility might have lesser importance in comparison with the other considerations.
  • For the mid-market segment, application complexity management and operating cost reduction are more important than the other two factors.
  • For the SMB segment, agility and flexibility, as well as operating cost reduction, are usually much higher priorities than the other two considerations. These organizations are willing to make tradeoffs for security and control and application complexity management.

Cloud Strategy Definition: Mapping Strategic Considerations to Cloud Options

Let us now discuss how these four factors can influence cloud strategy and choice.

Security and Control: Private clouds are considered more secure than public clouds for these two reasons:

  • Data stored in a public cloud is placed on the cloud provider's storage and within its data centers. It is not possible to ascertain who among the provider's numerous employees and contractors has access to your data, or whether the data has been accessed by any of those individuals.
  • Private clouds offer more controls and customizations over security configurations. With a public cloud provider, you are restricted to a limited set of security tools offered.

Agility and Flexibility: A public cloud provides greater agility and flexibility in comparison to a private cloud. Public clouds, such as Amazon Web Services and Google Cloud Platform, offer massive amounts of capacity and deployments can be architected to better handle dramatic demand spikes. As a result, it can be easier to meet requirements of developers and customers.

Application Complexity Management: Public clouds offer a wide selection of operating systems, virtual machines, microservices, and serverless technologies, and the tools to manage and monitor these types of applications. With a private cloud, you will invest more time and resources for management of newer application paradigms, such as microservices and serverless applications, since existing tools will not offer these management capabilities.

Operating Cost Reduction: Each of the three major public cloud vendors tries to closely match the others on price, and charges only for resources consumed (utility-style pricing). In addition, it is no longer necessary to have an IT team for deploying, maintaining, and upgrading on-premises hardware, hypervisors and operating systems. However, as experienced by several Platform9 customers, public cloud costs can skyrocket due to large-volume data transfers and unmonitored resources.

It is clear that neither a private-only nor a public-only cloud approach can address all of these strategic considerations by itself. Either approach alone requires enterprises to make tradeoffs.

The importance of a hybrid cloud arises in such situations. A hybrid cloud allows you to harness select benefits from both the public and private cloud pathways while balancing the trade-offs.

Designing the Enterprise Hybrid Cloud: How Should Workloads be Distributed Between Private and Public Clouds

A hybrid cloud can be architected to include a significant proportion of workloads on either the public cloud or the private cloud. In order to intelligently implement a hybrid cloud, it is important to decide what proportion of an enterprise’s workloads will reside on-premises and in the public cloud. The ideal mix of public and on-premises workloads can be determined by considering both cost factors and the type of workload.

Determining Hybrid Cloud Workload Locations Based on Costs

In general, adopting an enterprise hybrid cloud strategy with a combination of proprietary-software-based private clouds and public cloud(s) will be more expensive than adopting a public cloud only strategy. However, operating a Platform9 SaaS-managed, open source based hybrid cloud solution can provide significant cost savings over a public cloud only strategy. The cost savings achievable in such cases will vary based on the enterprise size (which largely aligns to IT complexity) and the workload distribution between private/public cloud locations.

The following chart shows the cost advantage enterprises can achieve through a hybrid cloud strategy enabled by Platform9 over a public cloud only (i.e. 100% of workloads on a public cloud e.g. AWS).

As can be seen from the chart, for all enterprise segments, as more workloads are located on-prem (and correspondingly, fewer workloads reside in the public cloud), the cost savings of a hybrid cloud approach are superior relative to a public cloud only option.

Large and mid-market enterprises can achieve higher cost savings over a public cloud only approach by locating a very large proportion (> 80%) of their workloads on-premises. For example, with 90% of workloads on-premises and 10% on AWS, cost savings are 55% for large enterprises and 51% for mid-market enterprises, but only 6% for SMBs. The study shows that for large enterprises, if < 45% of workloads are on-premises, there is no economic gain over running all workloads on AWS. With such a workload distribution, it might make more sense to consider a 100% public cloud approach; more workloads have to be located on-premises to realize cost savings if a hybrid cloud approach is desired. A similar situation is observed for mid-market enterprises with < 35% of workloads on-premises.

The study also indicates that for smaller enterprises with lower IT complexity and a smaller number of workloads, a public cloud only approach might be more suitable than a hybrid cloud.

Determining Hybrid Cloud Workload Locations Based on Criticality to Enterprise

Workloads can also be classified as mission-critical or business-critical to arrive at the right on-premises and public cloud mix. Mission critical workloads are those that are essential to run your business: product/service delivery to your customer, financial transactions for products/services offered, manufacturing, and so on. Mission-critical applications should typically be deployed on-premises to allow the IT team to proactively monitor the entire application and infrastructure stack, and respond to application and system events on a 24×7 schedule. On-premises deployment for mission-critical workloads will also provide more control over security configurations.

Business-critical workloads are those that are amenable to some downtime. Examples include email services provided by Office 365, business intelligence workloads, etc. Using the public cloud for these workloads can reduce operational expenditure without creating substantial risk for the company. Such a strategy also allows the in-house IT team to focus on “keeping the lights on” with mission-critical workloads.

53 new things to look for in OpenStack Newton (plus a few more)

OpenStack Newton, the technology’s 14th release, shows just how far we’ve come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that’s a given, and we’re talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms — virtual machines, containers, and bare metal.

There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What’s New in OpenStack Newton webinar.  Meanwhile, as we do with each release, let’s take a look at 53 things that are new in OpenStack Newton.


Compute (Nova)

  1. Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
  2. A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
  3. Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
  4. Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
  5. Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.

Network (Neutron)

  1. 802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
  2. The ability to create VMs without an IP address means you can create a VM with no IP address and specify complex networking later as a separate process.
  3. Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
  4. OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
  5. No downtime API service upgrades

Storage (Cinder, Glance, Swift)


Cinder

  1. Microversions let developers add new features that you can access without breaking the main version.
  2. Rolling upgrades let you update to Newton without having to take down the entire cloud.
  3. enabled_backends config option defines which backend types are available for volume creation.
  4. Retype volumes from encrypted to not encrypted, and back again after creation.
  5. Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first.
  6. The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.


Glance

  1. Glare, the Glance Artifact Repository, provides the ability to store more than just images.
  2. A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
  3. The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.


Swift

  1. Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
  2. Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
  3. Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)


Keystone

  1. Simplified configuration setup
  2. PCI support of password configuration options
  3. Credentials encrypted at rest


Horizon

  1. You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, and LAUNCH_INSTANCE_DEFAULTS
  2. Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
  3. Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
  4. Be more specific when setting up networking by restricting the CIDR range for a user private network, or by specifying a fixed IP or subnet when creating a port.
  5. Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)


Magnum

  1. Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
  2. The API service is now protected by SSL.
  3. You can now use Kubernetes on bare metal.
  4. Asynchronous cluster creation improves performance for complex operations.


Kolla

  1. You can now use Kolla to deploy containerized OpenStack to bare metal.


Kuryr

  1. Use Neutron networking capabilities in containers.
  2. Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)


Heat

  1. Use DNS resolution and integration with an external DNS.
  2. Access external resources using the external_id attribute.


Ceilometer

  1. New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
  2. Magnum support.


Fuel

  1. Deploy Fuel without having to use an ISO.
  2. Improved life cycle management user experience, including Infrastructure as Code.
  3. Container-based deployment possibilities.


Murano

  1. Use the new Application Development Framework to build more complex applications.
  2. Enable users to deploy your application across multiple regions for better reliability and scalability.
  3. Specify that when resources are no longer needed, they should be deallocated.


Ironic

  1. You can now have multiple nova-compute services using Ironic without causing duplicate entries.
  2. Multi-tenant networking makes it possible for more than one tenant to use Ironic without sharing network traffic.
  3. Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

  1. The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
  2. Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
  3. Add and manage applications via the Community App Catalog website.

CoreOS Stackanetes puts OpenStack in containers for easy management

Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized apps, simplifying management of OpenStack components

The ongoing effort to make OpenStack easier to deploy and maintain has received an assist from an unexpected source: CoreOS and its new Stackanetes project, announced today at the OpenStack Summit in Austin.

Containers are generally seen as a lighter-weight solution to many of the problems addressed by OpenStack. But CoreOS sees Stackanetes as a vehicle to deliver OpenStack’s benefits — an open source IaaS with access to raw VMs — via Kubernetes and its management methodology.

OpenStack in Kubernetes

Kubernetes, originally created by Google, manages containerized applications across a cluster. Its emphasis is on keeping apps healthy and responsive with a minimal amount of management. Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized applications, one container for each service.
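To make the "one container per service" idea concrete, each OpenStack service can be described by its own Kubernetes Deployment, which Kubernetes then keeps at the desired replica count. The sketch below models such a manifest as a plain Python dict; the image name and labels are hypothetical and do not reflect Stackanetes' actual manifests:

```python
# Illustrative model of a Kubernetes Deployment manifest for a single
# OpenStack service. Image names and labels are hypothetical.
def service_deployment(name, image, replicas=2):
    """Build a Deployment-style manifest dict for one OpenStack service."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            # Kubernetes restarts or reschedules pods to hold this count,
            # which is the "self-healing" behavior described above.
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# One manifest per service, e.g. the Horizon dashboard:
horizon = service_deployment("horizon", "example.org/openstack/horizon:newton")
```

A real deployment would add configuration volumes, health probes, and Services for inter-component traffic, but the per-service Deployment is the basic unit Stackanetes manages.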

The single biggest benefit, according to CoreOS, is “radically simplified management of OpenStack components,” a common goal of most recent OpenStack initiatives.

But Stackanetes is also a “single platform for consistently managing both IaaS and container workloads.” OpenStack has its own container management service, Magnum, used mainly as an interface to run Docker and, yes, Kubernetes instances within OpenStack. Stackanetes stands this on its head, and OpenStack becomes another containerized application running alongside all the rest in a cluster.

Easy management = admin appeal

Other projects that deploy OpenStack as a containerized service have popped up but have taken a different approach. Kolla, an OpenStack “big tent” project, uses Docker containers and Ansible playbooks to deploy an OpenStack configuration. The basic deployment is “highly opinionated,” meaning it comes heavily preconfigured but can be customized after deployment.

Stackanetes is mostly concerned with making sure individual services within OpenStack remain running — what CoreOS describes as the self-healing capacity. It’s less concerned with under-the-hood configurations of individual OpenStack components — which OpenStack has been trying to make less painful.

One long-standing OpenStack issue that Stackanetes does try to address is upgrades to individual OpenStack components. In a demo video, CoreOS CEO Alex Polvi showed how Stackanetes could shift workloads from nodes running an older version of the Horizon service to nodes running a newer version. The whole process involved only a couple of clicks.

With Stackanetes, CoreOS is betting more people would rather use Kubernetes as a deployment and management mechanism for containers than OpenStack. At the least, Kubernetes gives admins a straightforward way to stand up and manage the pieces of an OpenStack cluster — and that, by itself, has admin appeal.

Why Red Hat’s OpenShift, not OpenStack, is making waves with developers

Red Hat has historically whiffed with developers. But, its PaaS offering, OpenShift, may mark a new era for the open source giant.

Developers may be the new kingmakers in the enterprise, to borrow Redmonk’s phrase, but they’ve long ignored the enterprise open source leader, Red Hat. Two years ago, I called out Red Hat’s apparent inability to engage the very audience that would ensure its long-term relevance. Now, there are signs that Red Hat got the message.

And, no, I’m not talking about OpenStack. Though Red Hat keeps selling OpenStack (seven of its top-30 deals last quarter included OpenStack, according to Red Hat CEO Jim Whitehurst), it’s really OpenShift, the company’s Platform-as-a-Service (PaaS) offering, that promises a bright, developer-friendly future for Red Hat.

Looking beyond OpenStack

Red Hat continues to push OpenStack, and rightly so—it’s a way for Red Hat to certify a cloud platform just as it once banked on certifying the Linux platform. There’s money in assuring risk-averse CIOs that it’s safe to go into the OpenStack cloud environment.

Even so, as Whitehurst told investors in June, OpenStack is not yet “material” to the company’s overall revenue, and generally generates deals under $100,000. It will continue to grow, but OpenStack adoption is primarily about telcos today, and that’s unlikely to change as enterprises grow increasingly comfortable with public IaaS and PaaS options. OpenStack feels like a way to help enterprises avoid the public cloud and try to dress up their data centers in the fancy “private cloud” lingo.

OpenShift, by contrast, is far more interesting.

OpenShift, after all, opens Red Hat up to containers and all they mean for enterprise developer productivity. It’s also a way to pull through other Red Hat software like JBoss and Red Hat Enterprise Linux, because Red Hat’s PaaS is built on these products. OpenShift has found particular traction among sophisticated financial services companies that want to get in early on containers, but the list of customers includes a wide range of companies like Swiss Rail, BBVA, and many others.

More and faster

To be clear, Red Hat still has work to do. According to Gartner’s most recent Magic Quadrant for PaaS, Salesforce and Microsoft are still a ways ahead, particularly in their ability to execute on their vision.


Still, there are reasons to think Red Hat will separate itself from the PaaS pack. For one thing, the company is putting its code where it hopes its revenue will be. Red Hat learned long ago that, to monetize Linux effectively, it needed to contribute heavily. In similar fashion, only Google surpasses Red Hat in Kubernetes code contributions, and Docker Inc. is the only company to contribute more code to the Docker container project.

Why does this matter? If you’re an enterprise that wants a container platform then you’re going to trust those vendors that best understand the underlying code and have the ability to influence its direction. That’s Red Hat.

Indeed, one of the things that counted against Red Hat in Gartner’s Magic Quadrant ranking was its focus on Kubernetes and Docker (“Docker and Kubernetes have tremendous potential, but these technologies are still young and evolving,” the report said). These may be young and relatively immature technologies, but all signs point to them dominating a container-crazy enterprise world for many years to come. Kubernetes, as I’ve written, is winning the container management war, putting Red Hat in pole position to benefit from that adoption, especially as it blends familiar tools like JBoss with exciting-but-unfamiliar technologies like Docker.

Red Hat has also been lowering the bar for getting started and productive with OpenShift, as Serdar Yegulalp described. By focusing on developer darlings like Docker and Kubernetes, and making them easily consumable by developers and more easily run by operations, Red Hat is positioning itself to finally be relevant to developers…and in a big way.

OpenStack’s latest release focuses on scalability and resilience

OpenStack, the massive open source project that helps enterprises run the equivalent of AWS in their own data centers, is launching the 14th major version of its software today. Newton, as this new version is called, shows how OpenStack has matured over the last few years. The focus this time is on making some of the core OpenStack services more scalable and resilient. In addition, though, the update also includes a couple of major new features. The project now better supports containers and bare metal servers, for example.

In total, more than 2,500 developers and users contributed to Newton. That gives you a pretty good sense of the scale of this project, which includes support for core data center services like compute, storage and networking, but also a wide range of smaller projects.

As OpenStack Foundation COO Mark Collier told me, the focus with Newton wasn’t so much on new features but on adding tools for supporting new kinds of workloads.

Both Collier and OpenStack Foundation executive director Jonathan Bryce stressed that OpenStack is mostly about providing the infrastructure that people need to run their workloads. The project itself is somewhat agnostic as to what workloads they want to run and which tools they want to use, though. “People aren’t looking at the cloud as synonymous with [virtual machines] anymore,” Collier said. Instead, they are mixing in bare metal and containers as well. OpenStack wants to give these users a single control plane to manage all of this.

Enterprises do tend to move slowly, though, and even the early adopters that use OpenStack are only now starting to adopt containers. “We see people who are early adopters who are running container in production,” Bryce told me. “But I think OpenStack or not OpenStack, it’s still early for containers in production usage.” He did note, however, that he regularly talks to enterprise users who are looking at how they can use the different components in OpenStack to get to containers faster.

Core features of OpenStack, including the Nova compute service, as well as the Horizon dashboard and Swift object/blob store, have now become more scalable. The Magnum project for managing containers on OpenStack, which already supported Docker Swarm, Kubernetes and Mesos, now also allows operators to run Kubernetes clusters on bare metal servers, while the Ironic framework for provisioning those bare metal servers is now more tightly integrated with Magnum and also now supports multi-tenant networking.

The release also includes plenty of other updates and tweaks, of course. You can find a full (and fully overwhelming) rundown of what’s new in all of the different projects here.

With this release out of the door, the OpenStack community is now looking ahead to the next release six months from now. This next release will go through its planning stages at the upcoming OpenStack Summit in Barcelona later this month and will then become generally available next February.
