53 new things to look for in OpenStack Newton (plus a few more)

OpenStack Newton, the technology’s 14th release, shows just how far we’ve come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that’s a given, and we’re talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms — virtual machines, containers, and bare metal.

There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What’s New in OpenStack Newton webinar.  Meanwhile, as we do with each release, let’s take a look at 53 things that are new in OpenStack Newton.


Compute (Nova)

  1. Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
  2. A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
  3. Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
  3. Placement API gives you more visibility into and control over resources such as resource providers, inventories, allocations, and usage records.
  4. Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.
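
The default-policy change above means an operator's policy file can shrink to just the rules that differ from the in-code defaults. A hypothetical example (the rule name and value here are purely illustrative, not a recommended configuration):

```json
{
    "os_compute_api:servers:create:forced_host": "rule:admin_api"
}
```

Every rule not listed simply falls back to its built-in default.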

Network (Neutron)

  1. 802.1Q tagged VM connections (VLAN-aware VMs) enable virtual network functions (VNFs) to target specific VMs.
  2. The ability to create VMs without an IP address means you can boot a VM with no IP address and attach complex networking later as a separate step.
  3. Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
  4. OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
  5. API service upgrades can now be performed with no downtime.

Storage (Cinder, Glance, Swift)

Cinder
  1. Microversions let developers add new features that you can opt into without breaking the main API version.
  2. Rolling upgrades let you update to Newton without having to take down the entire cloud.
  3. enabled_backends config option defines which backend types are available for volume creation.
  4. Retype volumes from encrypted to not encrypted, and back again after creation.
  5. Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first.
  6. The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.
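
To make the cascade-delete idea concrete, here is a toy sketch (not Cinder's real code) of the difference between refusing to delete a volume that has snapshots and cascading the delete through them:

```python
# Toy model of cascade deletion: a volume refuses to be deleted while
# snapshots depend on it, unless cascade=True removes the snapshots first.

class Volume:
    def __init__(self, name):
        self.name = name
        self.snapshots = []

def delete_volume(volume, cascade=False):
    if volume.snapshots and not cascade:
        raise RuntimeError("volume has snapshots; delete them first or pass cascade=True")
    volume.snapshots.clear()  # cascade: drop dependent snapshots first
    return f"deleted {volume.name}"

vol = Volume("data01")
vol.snapshots.append("snap-1")
print(delete_volume(vol, cascade=True))  # deleted data01
```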

Glance
  1. Glare, the Glance Artifact Repository, provides the ability to store more than just images.
  2. A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
  3. The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift
  1. Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
  2. Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
  3. Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)

Keystone
  1. Simplified configuration setup
  2. PCI-DSS support for password configuration options
  3. Credentials encrypted at rest

Horizon
  1. You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, and LAUNCH_INSTANCE_DEFAULTS.
  2. Horizon now works when only Keystone is deployed, making it possible to use Horizon to manage, for example, a Swift-only deployment.
  3. Horizon now checks for network IP availability rather than letting users create invalid configurations.
  4. Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
  5. Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)

Magnum
  1. Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
  2. The API service is now protected by SSL.
  3. You can now use Kubernetes on bare metal.
  4. Asynchronous cluster creation improves performance for complex operations.

Kolla
  1. You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr
  1. Use Neutron networking capabilities in containers.
  2. Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)

Heat
  1. Use DNS resolution and integration with an external DNS.
  2. Access external resources using the external_id attribute.

Ceilometer
  1. New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
  2. Magnum support.

Fuel
  1. Deploy Fuel without having to use an ISO.
  2. Improved life cycle management user experience, including Infrastructure as Code.
  3. Container-based deployment possibilities.

Murano
  1. Use the new Application Development Framework to build more complex applications.
  2. Enable users to deploy your application across multiple regions for better reliability and scalability.
  3. Specify that when resources are no longer needed, they should be deallocated.

Ironic
  1. You can now have multiple nova-compute services using Ironic without causing duplicate entries.
  2. Multi-tenant networking makes it possible for more than one tenant to use Ironic without sharing network traffic.
  3. Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

  1. The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
  2. Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
  3. Add and manage applications via the Community App Catalog website.

OpenStack’s latest release focuses on scalability and resilience

OpenStack, the massive open source project that helps enterprises run the equivalent of AWS in their own data centers, is launching the 14th major version of its software today. Newton, as this new version is called, shows how OpenStack has matured over the last few years. The focus this time is on making some of the core OpenStack services more scalable and resilient. In addition, though, the update also includes a couple of major new features. The project now better supports containers and bare metal servers, for example.

In total, more than 2,500 developers and users contributed to Newton. That gives you a pretty good sense of the scale of this project, which includes support for core data center services like compute, storage and networking, but also a wide range of smaller projects.

As OpenStack Foundation COO Mark Collier told me, the focus with Newton wasn’t so much on new features but on adding tools for supporting new kinds of workloads.

Both Collier and OpenStack Foundation executive director Jonathan Bryce stressed that OpenStack is mostly about providing the infrastructure that people need to run their workloads. The project itself is somewhat agnostic as to what workloads they want to run and which tools they want to use, though. “People aren’t looking at the cloud as synonymous with [virtual machines] anymore,” Collier said. Instead, they are mixing in bare metal and containers as well. OpenStack wants to give these users a single control plane to manage all of this.

Enterprises do tend to move slowly, though, and even the early adopters that use OpenStack are only now starting to adopt containers. “We see people who are early adopters who are running containers in production,” Bryce told me. “But I think OpenStack or not OpenStack, it’s still early for containers in production usage.” He did note, however, that he regularly talks to enterprise users who are looking at how they can use the different components in OpenStack to get to containers faster.

Core features of OpenStack, including the Nova compute service, as well as the Horizon dashboard and Swift object/blob store, have now become more scalable. The Magnum project for managing containers on OpenStack, which already supported Docker Swarm, Kubernetes and Mesos, now also allows operators to run Kubernetes clusters on bare metal servers, while the Ironic framework for provisioning those bare metal servers is now more tightly integrated with Magnum and also now supports multi-tenant networking.

The release also includes plenty of other updates and tweaks, of course. You can find a full (and fully overwhelming) rundown of what’s new in all of the different projects here.

With this release out of the door, the OpenStack community is now looking ahead to the next release six months from now. This next release will go through its planning stages at the upcoming OpenStack Summit in Barcelona later this month and will then become generally available next February.

AppFormix now helps enterprises monitor and optimize their virtualized networks

AppFormix helps enterprises, including the likes of Rackspace and its customers, monitor and optimize their OpenStack- and container-based clouds. The company today announced that it has also now added support for virtualized network functions (VNF) to its service.

Traditionally, networking was the domain of highly specialized hardware, but increasingly, it’s commodity hardware and software performing these functions (often for a fraction of the cost). Almost by default, however, networking functions are latency sensitive, especially in the telco industry, which is one of the core users of VNF and also makes up a large number of OpenStack’s users. Using commodity hardware, however, introduces new problems, including increased lag and jitter.

AppFormix co-founder and CEO Sumeet Singh tells me that his company’s service can now reduce jitter by up to 70 percent. “People are just starting to roll out VNFs and as telcos move from hardware to software, that’s where they run into this problem,” he noted. “Our software is designed as this real-time system where we are able to analyze how everything is performing and do optimization based on this analysis.”

For VNF, this often means modifying how workloads are placed and how resources are allocated. Interestingly, AppFormix’s research showed that CPU allocations have very little influence on jitter. Instead, it’s all about how you use the available cache and memory. It’s controlling cache allocations correctly that allows AppFormix to reduce jitter.

Singh stressed that it’s not just telcos that can benefit from this but also e-commerce sites and others who want to be able to offer their users a highly predictable experience.

The new feature is now available as part of AppFormix’s overall cloud optimization platform, which currently focuses on OpenStack and Kubernetes deployments.

What is OpenStack?

Multinational companies describe OpenStack as the future of cloud computing. OpenStack is a platform for creating and managing massive groups of virtual machines through a graphical user interface.
Like Linux, OpenStack is free, open-source software.
Key components of OpenStack
• Horizon: The GUI of OpenStack.
• Nova: The primary computing engine, managing virtual machines and computing tasks.
• Swift: A very robust service used for object storage management.
• Cinder: A block storage system, like a traditional computer's storage system.
• Neutron: Provides the networking services in OpenStack.
• Keystone: The identity management service, which uses tokens.
• Glance: The image service. Images are virtual copies of hard disks.
• Ceilometer: Provides telemetry services to cloud users.
• Heat (orchestration engine): Helps developers describe automated infrastructure deployments.


Getting started with basics of building your own cloud

Openstack Cloud tutorial

My daily routine involves a lot of AWS cloud infrastructure. And let me tell you, AWS has grown to the extent that it has become a synonym for cloud. They have grown by leaps and bounds in the past few years, and believe me, many other major players are not even close to them in the cloud arena. (Of course, Google and Microsoft have their own cloud solutions, which are quite capable for many use cases, but nobody matches the user and customer base AWS has built for its public cloud.)

Nothing can match the flexibility, elasticity, and ease of use that the cloud provides. I remember when I used to work with physical hardware machines: I had to wait literally hours to get one up and running for an emergency requirement, and if I then needed additional storage for that machine, I had to wait some more. With the cloud, you can spin up a few cloud servers in seconds (believe me, seconds) and test whatever you want.

What is OpenStack Cloud?

A year ago I happened to read an article from Netcraft about their findings on AWS. According to them, by 2013 AWS had already crossed the mark of 158K in the total number of public-facing computers.

Now imagine getting the same features that the AWS cloud provides from something open source that you can build in your own data centre. Isn't that amazing? Well, that's the reason tech giants like IBM, HP, Intel, Red Hat, Cisco, Juniper, Yahoo, Dell, NetApp, VMware, GoDaddy, PayPal, and Canonical (Ubuntu) support and fund such a project.

This open source project is called OpenStack, and it is currently supported by more than 150 tech companies worldwide. It all started as a combined project by NASA and Rackspace in 2010 (both were independently developing their own individual projects, which later came together under the OpenStack name). NASA was behind a project called Nova (very analogous to Amazon EC2, providing the compute feature), and Rackspace built another tool called Swift (a highly scalable object storage solution, very similar to AWS S3).

Apart from these, there are other components that help make OpenStack very much like the AWS cloud (we will discuss each of them shortly, and in upcoming tutorials we will configure each of them to build our own cloud).

OpenStack can be used by anybody who wants their own cloud infrastructure, similar to AWS. Although its origins trace back to NASA, it is not actively developed or supported by NASA anymore.

In fact, NASA currently leverages the AWS public cloud infrastructure.

If you simply want to use an OpenStack public cloud, you can use Rackspace Cloud, eNovance, HP Cloud, and so on (these are very similar to the AWS cloud), each with its associated cost. Apart from these public OpenStack cloud offerings, there are plug-and-play cloud services, where you get a dedicated hardware appliance for OpenStack: just purchase it and plug it in, and it becomes an OpenStack cloud service without any further configuration.

Let’s now discuss some of the crucial components of OpenStack which, when combined, make a robust cloud like any commercial cloud (such as AWS), and all in your own data center, completely managed and controlled by your team.

When you talk about the cloud, the first thing that comes to mind is virtualization, because virtualization is the technology that made this cloud revolution possible. Virtualization is basically the method of slicing the resources of a physical machine into smaller, as-needed parts; each slice acts as an independent host, sharing resources with the other slices on the machine. This enables optimal use of computing resources.

  • OpenStack Compute: One of the main components of a cloud is virtual machines that can scale without bounds. In OpenStack, this need is fulfilled by a component called Nova. Nova is the software component in the OpenStack cloud that offers and manages virtual machines.

Apart from the compute requirements, the second major requirement is storage. There are two different types of storage in the cloud. The first is block storage, very similar to a RAID partition on one of your servers that you format and use for all kinds of local storage needs, such as the normal disk storage where your operating system files are installed.

  • OpenStack block storage (Cinder): works like attaching and detaching an external hard drive to your operating system for its local use. Block storage is useful for database storage or raw storage for the server (format it, mount it, and use it), or you can combine several volumes for distributed file system needs (for example, you can build a large Gluster volume out of several block storage devices attached to a virtual machine launched by Nova).

The second type of storage fulfills the need to scale without bounds, for static objects. It can be used for storing large static data like backups and archives; it is accessed through its own API and is replicated across data centers to withstand large disasters.

  • OpenStack object storage (Swift): is suitable for storing multimedia content like videos, images, virtual machine images, backups, email, and archives. This type of data needs to grow without limit and needs to be replicated, which is exactly what OpenStack Swift is designed to do.

Last but not least comes networking. Networking in the cloud has matured to the point where you can create your own private networks and access control lists, create routes between networks, interconnect different networks, connect to a remote network using a VPN, and so on. Almost all of these enterprise cloud needs are taken care of by OpenStack networking.

  • OpenStack networking (nova-networking, or Neutron): Think of OpenStack networking as the thing that manages networking for all our virtual hosts (instances) and provides both private and public IP addresses. You might think networking in virtualization is easy: just set up a bridge adapter and route all traffic through it, as with many virtual adapters. But here we are talking about an entire cloud, where public IPs must be attachable to and detachable from the instances we launch, each instance needs a fixed IP, and there must never be a single point of failure.

In my opinion, OpenStack networking is the most complex piece and needs to be designed with extreme care. Because of its complexity and importance, we will discuss OpenStack networking in detail in a dedicated post. It can also be done with two different tools: one is called nova-networking and the other is called Neutron. Note that each component of the OpenStack cloud needs special attention of its own, as each is quite distinct, and they work together to form a cloud. Hence I will write a dedicated post for each major component.

OpenStack is very highly configurable; for this very reason, it is quite difficult to cover all of its possible configurations in one tutorial. You will see this for yourself when we start configuring things in the upcoming series of posts.

Higher-Level Overview of OpenStack Architecture

Component | Used for | Similar to
Horizon | A dashboard for end users or administrators to access other backend services | AWS Management Console
Nova | Compute: manages virtualization and takes requests from end users, through the dashboard or API, to create virtual instances | AWS Elastic Compute Cloud (EC2)
Cinder | Block storage, directly attachable to any virtual instance, similar to an external hard drive | EBS (Elastic Block Store)
Glance | Maintains a catalog for images; a kind of repository for images | AMI (Amazon Machine Images)
Swift | Object storage that applications or instances can use to store static objects like multimedia files, backups, images, and archives | AWS S3
Keystone | Manages authentication services for all components: credentials, authorization, and authentication for users | AWS Identity and Access Management (IAM)

By now you should have an idea of what the OpenStack cloud actually is. Let's now answer some questions that can help build a little more understanding of what OpenStack really is, and how these individual components fit together to form a cloud.

What is Horizon Dashboard?

It's nothing but a web interface for users and administrators to interact with your OpenStack cloud. It's basically a Django web application running under mod_wsgi and Apache. Its primary job is to interact with the backend APIs of the other components and execute the requests initiated by users. It talks to the Keystone authentication service to authorize requests before doing anything.

Does nova-compute perform virtualization?

Well, nova-compute is basically a daemon that does the job of creating and terminating virtual machines. It does this job through virtual machine API calls, using a library called libvirt. Libvirt is an API for interacting with Linux virtualization technologies (it is free and open source software that is installed with nova as a dependency).

Basically, libvirt gives nova-compute the ability to send API requests to KVM, Xen, LXC, OpenVZ, VirtualBox, VMware, and Parallels hypervisors.

So when a user in OpenStack requests to launch a cloud instance, what actually happens is nova-compute sending requests to a hypervisor using libvirt. Besides libvirt, nova-compute can also send requests directly to the Xen API, vSphere API, etc. This wide support for different virtualization technologies is the main strength of Nova.
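
The dispatch idea can be sketched roughly as follows; the class and option names here are invented for illustration and are not Nova's real driver interfaces:

```python
# Hedged sketch of the dispatch pattern behind nova-compute: the daemon
# picks a virtualization driver (libvirt, vSphere, ...) based on
# configuration and sends the same "spawn" request through it.

class LibvirtDriver:
    def spawn(self, name):
        return f"libvirt: defined and started domain {name}"

class VSphereDriver:
    def spawn(self, name):
        return f"vSphere API: powered on VM {name}"

DRIVERS = {"libvirt": LibvirtDriver, "vsphere": VSphereDriver}

def launch_instance(name, virt_type="libvirt"):
    # in real life the driver choice comes from nova.conf
    driver = DRIVERS[virt_type]()
    return driver.spawn(name)

print(launch_instance("web-1"))
print(launch_instance("web-2", virt_type="vsphere"))
```

Adding support for a new hypervisor is then a matter of registering one more driver, which is why Nova's backend support is so broad.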

How does Swift Work?

Well, Swift is a highly scalable object store. Object storage is itself a big topic, so I recommend reading the post below.

Unlike block storage, files are not organized in a hierarchical namespace; they are organized in a flat namespace. Although Swift can give you the illusion of folders with contents inside, all files in all "folders" share a single namespace, which makes scaling much easier than with block storage.

Swift combines multiple commodity servers and backend storage devices to form a large pool of storage sized to the end user's requirement. It can be scaled without bounds by simply adding more nodes in the future.
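
A toy sketch of the flat-namespace idea (purely illustrative, not Swift's implementation): "folders" are just name prefixes over one flat map, so listing a folder is a prefix filter rather than a directory walk:

```python
# One flat namespace: full object name -> data. No real directories exist.
store = {}

def put_object(name, data):
    store[name] = data

def list_objects(prefix=""):
    # listing a "folder" is just a prefix filter over the flat namespace
    return sorted(n for n in store if n.startswith(prefix))

put_object("photos/2016/cat.jpg", b"...")
put_object("photos/2016/dog.jpg", b"...")
put_object("backups/db.tar.gz", b"...")

print(list_objects("photos/2016/"))
# ['photos/2016/cat.jpg', 'photos/2016/dog.jpg']
```

Because objects are independent entries in one namespace, they can be hashed across many storage nodes, which is what lets the pool grow by simply adding hardware.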

[Figure: Swift object storage]

What is Keystone?

It's the single point of contact for policy, authentication, and identity management in an OpenStack cloud. It can work with different authentication backends like LDAP, SQL, or a simple key-value store.

Keystone has two primary functions

  • User management: tracking all users and their permissions.
  • Service catalog: providing information about the available services and their respective API endpoints.
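
These two functions can be caricatured in a few lines of Python; the user data and endpoint URLs below are invented for illustration and have nothing to do with Keystone's real implementation:

```python
# Minimal sketch of Keystone's two jobs: issue tokens for known users
# and serve a catalog of service endpoints. Purely illustrative.
import uuid

USERS = {"alice": "s3cret"}                         # hypothetical user store
CATALOG = {"nova": "http://controller:8774/v2.1",   # hypothetical endpoints
           "swift": "http://controller:8080/v1"}
TOKENS = {}

def authenticate(user, password):
    if USERS.get(user) != password:
        raise PermissionError("invalid credentials")
    token = uuid.uuid4().hex
    TOKENS[token] = user
    return token

def service_catalog(token):
    if token not in TOKENS:
        raise PermissionError("invalid token")
    return CATALOG

tok = authenticate("alice", "s3cret")
print(service_catalog(tok)["nova"])
```

Other services follow the same two-step flow: present a token, then look up the endpoint they need from the catalog.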

What is OpenStack Cinder?

As discussed before and shown in the diagram, Cinder is a block storage service. It provides software-defined block storage, on top of basic traditional block storage devices, to the instances that nova-compute launches.

In simple terms, Cinder virtualizes pools of block storage (any traditional storage devices) and makes them available to end users via an API. Users consume those virtual block storage volumes inside their virtual machines without knowing where the volume is actually deployed in the architecture, or any details about the underlying storage device.
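
The pooling idea can be sketched like this (an illustrative toy, not Cinder's scheduler or driver model):

```python
# Carve virtual volumes out of one backend pool and hand users only a
# name/ID, hiding where the bytes actually live.

class StoragePool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> size in GB

    def create_volume(self, name, size_gb):
        used = sum(self.volumes.values())
        if used + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = size_gb
        return name  # the user sees only the volume ID, not the backend

pool = StoragePool(capacity_gb=100)
pool.create_volume("vol-db", 40)
pool.create_volume("vol-logs", 20)
print(sum(pool.volumes.values()))  # 60
```

The real service adds scheduling across multiple backends, attach/detach plumbing, and driver-specific logic, but the abstraction presented to users is essentially this.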

Building Your Application for Cloud Portability – An Alternative Approach to Hybrid Cloud


In my previous post, I discussed the differences between hybrid cloud and cloud portability, as well as how to achieve true hybrid cloud deployments without compromising on infrastructure API abstraction, by providing several use cases for cloud portability.

Cloud Portability Defined (again)

For the sake of clarity, I thought it would be a good idea to include my definition of cloud portability again here: “Cloud portability is the ability to run the same application on multiple cloud infrastructures, private or public. This is basically what makes hybrid cloud possible.”

Clearly, the common infrastructure API abstraction approach forces too many restrictions on the user, which makes it fairly useless for many of the cloud portability use cases.

In this post, I would like to propose another method for making cloud portability, and therefore true hybrid cloud, a reality.

An Alternative Approach

One of the use cases I previously mentioned, allowing application deployment portability to environments that don't share the same set of features and APIs, is iOS and Android. With operating systems, we see that software providers are able to successfully solve the portability problem without forcing a common abstraction.

What can we learn about cloud portability from the iOS/Android use case?

Treat portability differently between the application consumer and the application owner – One of the main observations from the iOS/Android case is that, while the consumer is often completely abstracted from the differences between the two platforms, the application developer is not, and often needs to treat each platform differently, sometimes even duplicating certain aspects of the application's components and logic to suit the underlying environment. The application owner, therefore, has an incentive to support and even invest in portability, as this increases the application's overall market reach.

Minimizing the differences, not eliminating them – While the application owner has more incentive to support each platform natively, it is important to use cloud portability as a framework that minimizes, though does not eliminate, the differences, allowing simpler development and maintenance.

The main lesson from this use case is that, to achieve a similar degree of cloud portability, we need to make a distinction between the application consumer and the application owner. For cloud portability, in order to ensure a native experience for the application consumer, we need to assume that the application owner will be required to duplicate their integration effort per target cloud.


This is the same approach we should take with cloud application portability!

So, how does one go about doing that?

Achieving Cloud Portability with ARIA – A Simple Multi-Cloud Orchestration Framework

In this section, I will refer to this specific project as a means by which to illustrate the principles that I mentioned above in more concrete terms.

Project ARIA is a new Apache-licensed project that provides simple, zero-footprint multi-cloud orchestration based on TOSCA. It was originally built as the core orchestration engine for Cloudify and is now an independent project.

The diagram below provides an inside look at the ARIA architecture.


There are three pillars, upon which ARIA is built, that are needed to manage the entire stack and lifecycle of an application:

1) An infrastructure-neutral, easily extensible templating language

2) Cloud plugins

3) Workflows

TOSCA Templating Language vs. API Abstraction

ARIA utilizes the TOSCA templating language in its application blueprints which provides a means for deploying and orchestrating a single application on multiple infrastructures through individual plugins, thereby circumventing the need for a single abstraction layer.

Templating languages such as TOSCA provide far greater flexibility for abstraction than API abstraction, as they allow easy extensibility and customization without the need to develop or change the underlying implementation code. This is done by mapping the underlying cloud API into types and letting the user define how to access and use those types through scripts.

With Cloudify, we chose TOSCA as the templating language because of its inherently infrastructure-neutral design, and because it is designed as a DSL with many characteristics of a real language: inheritance, interfaces, and a strong typing system.

Cloud Plugins

Built-in plugins for a wide range of cloud services provide out of the box integration points with the most common of these services, but unlike the least common denominator approach (i.e. a single API abstraction layer), they can be easily extended to support any cloud service.

Workflows
Workflows enable interaction with the deployment graph and provide another way to abstract common cloud operational tasks such as upgrades, snapshots, scaling, etc.

Putting It All Together

By combining the three aforementioned elements, the user is given a set of building blocks for managing the entire application stack and its lifecycle. It also provides a richer degree of flexibility that allows users to define their own degree of abstraction per use case or application.

In this manner, cloud portability is achievable without the need to change your underlying code, and, in doing so, you enable true hybrid cloud.

OpenStack digs for deeper value in telecoms, network virtualization

NFV can be a major OpenStack application, which is great for telecom, but will enterprises go for it, too?

OpenStack has long been portrayed as a low-cost avenue for creating private clouds without lock-in. But like many projects of its sprawl and size, it’s valuable for its subfunctions as well. In particular, OpenStack comes in handy for network function virtualization (NFV).

A report issued by the OpenStack Foundation, “Accelerating NFV Delivery with OpenStack,” makes a case for using OpenStack to replace the costly, proprietary hardware often employed with NFV, both inside and outside of telecoms. But it talks less about general enterprise settings than it does about telecoms, a vertical industry where OpenStack has been finding uptake.


The paper also shows how many telecom-specific NFV features — such as support for multiple IPv6 prefixes — are being requested or submitted by telecoms that are OpenStack users.

That’s not to say the telecom tail has been wagging the OpenStack dog, but it does mean that some of OpenStack’s most urgently requested — and most immediately adopted — features come from that crowd. It also raises the question of how readily those features will be deployed outside of telecoms, especially given OpenStack’s reputation as fragmented, friable, and difficult to learn.

OpenStack’s presence in the telecom world has been established for years now. A user survey conducted back in 2014 showed that while the number of companies using OpenStack that described themselves as telecommunications companies was proportionately small, the companies in question were high-profile in their field (NTT, Deutsche Telekom).

In the most recent user survey, self-identified telecom comprised only 12 percent of the total OpenStack user base. But the 64 percent labeled “information technology” also included “cable TV and ISP,” “telco and networking,” “data center/co-location,” and other labels that could easily be considered part of a telco’s duty roster.

Network function virtualization is the second-largest OpenStack technology its users are interested in, according to the survey. The technology that took the top spot, however, was containers. That’s where OpenStack vendors (such as Mirantis and Red Hat) are making their most direct appeals to the enterprise, but it’s still an open question whether a product of OpenStack’s size and cognitive load is the best way to do so.

To that end, even if OpenStack is a major force in NFV, the bigger question is whether enterprises interested in NFV/SDN (the two terms overlap) will adopt OpenStack as their solution. Some of the mystery may be due to how the changes involved in deploying NFV are a bitter pill for an enterprise of almost any size to swallow — but it may again come down to OpenStack still being too top-heavy for its own good.

OpenStack simplifies management with Mitaka release

The latest OpenStack release provides a unified CLI, standardized APIs across projects, and one-step setups for many components


The latest revision of OpenStack, dubbed Mitaka, was officially released yesterday and boasts simplified management and improved user experience as two prominent features.

Rather than leave such features to a particular distribution, OpenStack has been attempting to integrate them into the project’s core mission. But another big OpenStack effort — its reorganization of the project’s management — is still drawing criticism.

Pulling it all together

A unified OpenStack command-line client is a key new feature intended to improve both management and user experiences. Each service, current or future, can register a command set with the client through a plug-in architecture. Previously, each OpenStack project had an individual CLI, and managing multiple aspects of OpenStack required a great deal of switching between clients, each with its own command sets.
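The plug-in model can be sketched in miniature: each service contributes its command set to one shared client, which then dispatches by command name. A minimal illustrative sketch follows; the names (`CommandRegistry`, `register`, `dispatch`) are made up for this example and are not the actual python-openstackclient internals, which use setuptools entry points.

```python
# Toy sketch of a plug-in style CLI: each service registers its own
# command set with a single shared client, rather than shipping its
# own standalone CLI. Names here are illustrative only.

class CommandRegistry:
    def __init__(self):
        self._commands = {}

    def register(self, noun, verb, func):
        # Commands are namespaced, e.g. "server list" or "network list",
        # so different services can coexist in one client.
        self._commands[f"{noun} {verb}"] = func

    def dispatch(self, command, *args):
        return self._commands[command](*args)

registry = CommandRegistry()

# A "compute" plug-in registers its commands...
registry.register("server", "list", lambda: ["vm-1", "vm-2"])
# ...and a "network" plug-in adds its own to the same client.
registry.register("network", "list", lambda: ["net-a"])

print(registry.dispatch("server list"))    # one client, many services
print(registry.dispatch("network list"))
```

The point of the design is that adding a new OpenStack service means registering more commands, not teaching operators yet another standalone tool.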

At the same time, API calls for the various subprojects in OpenStack are now more uniform, along with the SDKs that go with them, so it’s easier for developers to write apps that plug directly into OpenStack components.
OpenStack instances are also easier to get up and running — an aim with each passing revision of OpenStack. This time around, more of the platform’s core settings come with defaults chosen, and many previously complex setup operations have been whittled to a single step. OpenStack’s identity and networking services, Keystone and Neutron, both feature these improvements.

Big tent or big problems?

Mitaka marks the first major OpenStack release since the project adopted its Big Tent governance model. In an attempt to tame project sprawl, OpenStack resolved to reform the way projects are included and to describe which projects are best suited to what scenarios.

Julien Danjou, software engineer at Red Hat and author of “The Hacker’s Guide to Python,” believes OpenStack’s core problems haven’t been solved by the Big Tent model. “OpenStack is still stuck between its old and new models,” he said in a blog post. The old model of OpenStack, a tiny ecosystem with a few integrated projects, has given way to a great many projects where “many are considered as second-class citizens. Efforts are made to continue to build an OpenStack project that does not exist anymore,” Danjou said.

Chris Dent, a core contributor to OpenStack, feels Big Tent has diluted the project’s unity of purpose. “We cannot effectively reach our goal of interoperable but disparate clouds if everyone can build their own custom cloud by picking and choosing their own pieces from a collection,” he said.

Dent thinks OpenStack should be kept small and focused, “with contractually strong APIs … allowing it to continue to be an exceptionally active member of and user of the larger open source community.”

Mitaka’s unified API set and common CLI are steps in that direction. But countering that is OpenStack’s tendency to become more all-encompassing, which appeals only to a narrow, vertical set of customers — service providers, for instance, or operations like eBay — with the cash and manpower to make it work.

DreamHost replaces VMware SDN with open source for big savings

OpenStack code developed by spin-out company nets 70% capex, 40% opex cuts

SANTA CLARA – In a convincing example of the viability of open source networking, cloud provider DreamHost saved 70% in capital and 40% in operational costs by replacing VMware’s NSX SDN with open source alternatives.

In a presentation at the Open Networking Summit here, suppliers Cumulus Networks and Akanda (a DreamHost spin-out NFV business) said the cloud provider replaced NSX due to scaling and Layer 3 support issues. DreamHost did not speak and was not present during the presentation, but posted a blog entry on the project last Friday.

The project involved DreamHost’s DreamCompute public cloud compute service, which is based on OpenStack and Ceph object store and file system. The core networking requirements for DreamCompute are Layer 2 tenant isolation, IPv6 and 10G+ “everywhere.”

The first generation of the DreamCompute networking infrastructure included Nicira’s NVP network virtualization software for Layer 2 isolation, and Cumulus Linux as the network operating system running on white box switches. Layer 3 requirements were met neither by Nicira NVP nor by software routing vendors, who did not understand cloud, said Mark McClain, Akanda CTO.

The second generation of the DreamCompute network added Layer 3 capabilities from VMware NSX; VMware had acquired Nicira, renamed the NVP product, and enhanced it. But in a bake-off against NSX, the Astara open source network orchestration service for OpenStack (which was developed by DreamHost) came out on top and, with some enhancements, allowed DreamCompute to scale to over 1,000 customers and thousands of VMs.

“Honestly, we expected Astara to lose this challenge,” states Jonathan LaCour, DreamHost vice president of cloud and development, in his blog. “However, Astara absolutely came out victorious, offering a significantly better experience and more reliability.”

The third generation of the DreamCompute infrastructure came after NSX was found to hit a scale limit of 1,250 tenants. Open vSwitch was slow and unstable, and the software was difficult to debug and operate, the presenters said. As a result, NSX was replaced by hardware-accelerated VXLAN in the switch and hypervisor for Layer 2 isolation, and by Astara for Layer 3-7 service orchestration.

Cumulus Linux remained as the physical underlay for the DreamCompute network.

Astara virtual network appliances allowed for easy scale, while VXLAN tunnels scaled “massively,” presenters said. Astara also simplified OpenStack Neutron networking deployments by requiring fewer Layer 2, DHCP and advanced services agents, and is generally easier to operate because it, VXLAN and the Linux networking stack on DreamCompute switches are “open” and familiar, presenters said.
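For a sense of what VXLAN-based Layer 2 tenant isolation looks like at the Linux level, here is a hypothetical iproute2 fragment. The interface names, VXLAN network identifier (VNI), and port are made up for illustration; DreamCompute offloaded this work to hardware in the switch and hypervisor rather than running it in software.

```shell
# Hypothetical software VXLAN sketch (requires root); names and IDs
# are illustrative only. Each VNI carries one tenant's isolated
# Layer 2 segment over the shared physical underlay.
ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789
ip link set vxlan100 up

# Bridge the tenant's VM interfaces onto the overlay segment.
ip link add br-tenant type bridge
ip link set vxlan100 master br-tenant
ip link set br-tenant up
```

Because the overlay is just standard Linux networking, it stays debuggable with familiar tools — one of the operational advantages the presenters cited.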

“As far as performance and scale, DreamCompute is breaking through those limits we met with VMWare NSX,” LaCour states in his blog. “This is largely due to reductions in complexity, thanks to management and automation through OpenStack and Astara.”

VMware wouldn’t comment specifically on the DreamHost project but through a spokesperson said it is “very happy with the success” NSX has had in some of the largest OpenStack environments in the world, “as well as our track record in open networking through things like the Open vSwitch project.”

DreamHost’s project mirrors that of other cloud and Webscale providers, like Google and Facebook, that have opted to develop their own networking solutions to overcome the limitations of commercial offerings, and reduce capex and opex. That open source provides such a significant capex improvement over commercial products should perhaps come as no surprise.

But the opex reduction might be the proof point that familiar open source code, customized for specific operator requirements, is just as capable – if not more so – than commercially available, vendor-integrated products.


OpenStack is an open source infrastructure as a service (IaaS) initiative for creating and managing large groups of virtual private servers in a data center.

The goals of the OpenStack initiative are to support interoperability between cloud services and allow businesses to build Amazon-like cloud services in their own data centers. OpenStack, which is freely available under the Apache 2.0 license, is often referred to in the media as “the Linux of the Cloud” and is compared to Eucalyptus and the Apache CloudStack project, two other open source cloud initiatives.

OpenStack has a modular architecture that currently has eleven components:

Nova – provides virtual machines (VMs) upon demand.

Swift – provides a scalable storage system that supports object storage.

Cinder – provides persistent block storage to guest VMs.

Glance – provides a catalog and repository for virtual disk images.

Keystone – provides authentication and authorization for all the OpenStack services.

Horizon – provides a modular web-based user interface (UI) for OpenStack services.

Neutron – provides network connectivity-as-a-service between interface devices managed by OpenStack services.

Ceilometer – provides a single point of contact for billing systems.

Heat – provides orchestration services for multiple composite cloud applications.

Trove – provides database-as-a-service provisioning for relational and non-relational database engines.

Sahara – provides data processing services for OpenStack-managed resources.
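Keystone ties these components together through a service catalog: after a client authenticates, it asks Keystone where each service lives and then talks to that endpoint directly. A minimal sketch of that lookup follows; the endpoint URLs are invented for illustration, and a real catalog also tracks regions and interface types.

```python
# Toy model of Keystone's service catalog: a mapping from service
# type to API endpoint that clients consult after authenticating.
# The URLs below are made up for illustration.

catalog = {
    "compute": "https://cloud.example.com:8774/v2.1",    # Nova
    "object-store": "https://cloud.example.com:8080/v1", # Swift
    "network": "https://cloud.example.com:9696",         # Neutron
    "image": "https://cloud.example.com:9292",           # Glance
}

def endpoint_for(service_type):
    # A real client would also filter by region and interface
    # (public/internal/admin); this sketch keys on type alone.
    return catalog[service_type]

print(endpoint_for("compute"))
```

This indirection is what lets the components above be deployed, scaled, and replaced independently: clients depend on the catalog entry, not on a hard-coded address.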

The National Aeronautics and Space Administration (NASA) worked with Rackspace, a managed hosting and cloud computing service provider, to develop OpenStack. Rackspace donated the code that powers its storage and content delivery service (Cloud Files) and production servers (Cloud Servers). NASA contributed the technology that powers Nebula, its high performance computing, networking and data storage cloud service that allows researchers to work with large scientific data sets.

OpenStack officially became an independent non-profit organization in September 2012. The OpenStack community, which is overseen by a board of directors, comprises many direct and indirect competitors, including IBM, Intel and VMware.