CoreOS Stackanetes puts OpenStack in containers for easy management

Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized apps, simplifying management of OpenStack components

The ongoing effort to make OpenStack easier to deploy and maintain has received an assist from an unexpected source: CoreOS and its new Stackanetes project, announced today at the OpenStack Summit in Austin.

Containers are generally seen as a lighter-weight solution to many of the problems addressed by OpenStack. But CoreOS sees Stackanetes as a vehicle to deliver OpenStack’s benefits — an open source IaaS with access to raw VMs — via Kubernetes and its management methodology.

OpenStack in Kubernetes

Kubernetes, originally created by Google, manages containerized applications across a cluster. Its emphasis is on keeping apps healthy and responsive with a minimal amount of management. Stackanetes uses Kubernetes to deploy OpenStack as a set of containerized applications, one container for each service.
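
To make the pattern concrete, here is a minimal sketch using the official Kubernetes Python client. The image, namespace, and replica count are illustrative placeholders, not Stackanetes’ actual manifests: each OpenStack service becomes a Deployment that Kubernetes keeps running.

```python
# Minimal sketch: one OpenStack service (here, Keystone) declared as a
# Kubernetes Deployment. Image and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # credentials from the local kubeconfig

keystone = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="keystone"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # Kubernetes restarts/reschedules pods to hold this count
        selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="keystone",
                    image="example/keystone:mitaka",  # placeholder image
                    ports=[client.V1ContainerPort(container_port=5000)],
                )
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="openstack", body=keystone)
```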

The single biggest benefit, according to CoreOS, is “radically simplified management of OpenStack components,” a common goal of most recent OpenStack initiatives.

But Stackanetes is also a “single platform for consistently managing both IaaS and container workloads.” OpenStack has its own container management service, Magnum, used mainly as an interface to run Docker and, yes, Kubernetes instances within OpenStack. Stackanetes stands this on its head, and OpenStack becomes another containerized application running alongside all the rest in a cluster.

Easy management = admin appeal

Other projects that deploy OpenStack as a containerized service have popped up but have taken a different approach. Kolla, an OpenStack “big tent” project, uses Docker containers and Ansible playbooks to deploy an OpenStack configuration. The basic deployment is “highly opinionated,” meaning it comes heavily preconfigured but can be customized after deployment.

Stackanetes is mostly concerned with making sure individual services within OpenStack remain running — what CoreOS describes as the self-healing capacity. It’s less concerned with under-the-hood configurations of individual OpenStack components — which OpenStack has been trying to make less painful.

One long-standing OpenStack issue that Stackanetes does try to address is upgrades to individual OpenStack components. In a demo video, CoreOS CEO Alex Polvi showed how Stackanetes could shift workloads from nodes running an older version of the Horizon service to nodes running a newer version. The whole process involved only a couple of clicks.
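
A Kubernetes rolling update is the natural mechanism for this. As a hedged sketch (again with the Python client, and with illustrative names rather than CoreOS’s actual tooling), bumping the image on the Horizon Deployment replaces old pods with new ones while the service stays reachable:

```python
# Sketch: trigger a rolling update of a "horizon" Deployment by patching
# its container image. Names and versions are hypothetical.
from kubernetes import client, config

config.load_kube_config()

patch = {"spec": {"template": {"spec": {"containers": [
    # The container name must match the existing spec for a merge patch.
    {"name": "horizon", "image": "example/horizon:newton"},
]}}}}

client.AppsV1Api().patch_namespaced_deployment(
    name="horizon", namespace="openstack", body=patch,
)
```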

With Stackanetes, CoreOS is betting more people would rather use Kubernetes as a deployment and management mechanism for containers than OpenStack. At the least, Kubernetes gives admins a straightforward way to stand up and manage the pieces of an OpenStack cluster — and that, by itself, has admin appeal.

OpenStack’s latest release focuses on scalability and resilience

OpenStack, the massive open source project that helps enterprises run the equivalent of AWS in their own data centers, is launching the 14th major version of its software today. Newton, as this new version is called, shows how OpenStack has matured over the last few years. The focus this time is on making some of the core OpenStack services more scalable and resilient. In addition, though, the update also includes a couple of major new features. The project now better supports containers and bare metal servers, for example.

In total, more than 2,500 developers and users contributed to Newton. That gives you a pretty good sense of the scale of this project, which includes support for core data center services like compute, storage and networking, but also a wide range of smaller projects.

As OpenStack Foundation COO Mark Collier told me, the focus with Newton wasn’t so much on new features but on adding tools for supporting new kinds of workloads.

Both Collier and OpenStack Foundation executive director Jonathan Bryce stressed that OpenStack is mostly about providing the infrastructure that people need to run their workloads. The project itself is somewhat agnostic as to what workloads they want to run and which tools they want to use, though. “People aren’t looking at the cloud as synonymous with [virtual machines] anymore,” Collier said. Instead, they are mixing in bare metal and containers as well. OpenStack wants to give these users a single control plane to manage all of this.

Enterprises do tend to move slowly, though, and even the early adopters that use OpenStack are only now starting to adopt containers. “We see people who are early adopters who are running containers in production,” Bryce told me. “But I think OpenStack or not OpenStack, it’s still early for containers in production usage.” He did note, however, that he regularly talks to enterprise users who are looking at how they can use the different components in OpenStack to get to containers faster.

Core features of OpenStack, including the Nova compute service, as well as the Horizon dashboard and Swift object/blob store, have now become more scalable. The Magnum project for managing containers on OpenStack, which already supported Docker Swarm, Kubernetes and Mesos, now also allows operators to run Kubernetes clusters on bare metal servers, while the Ironic framework for provisioning those bare metal servers is now more tightly integrated with Magnum and supports multi-tenant networking.

The release also includes plenty of other updates and tweaks, of course. You can find a full (and fully overwhelming) rundown of what’s new in all of the different projects here.

With this release out of the door, the OpenStack community is now looking ahead to the next release six months from now. This next release will go through its planning stages at the upcoming OpenStack Summit in Barcelona later this month and will then become generally available next February.

AppFormix now helps enterprises monitor and optimize their virtualized networks

AppFormix helps enterprises, including the likes of Rackspace and its customers, monitor and optimize their OpenStack- and container-based clouds. The company today announced that it has also now added support for virtualized network functions (VNF) to its service.

Traditionally, networking was the domain of highly specialized hardware, but increasingly it’s commodity hardware and software performing these functions (often for a fraction of the cost). Almost by definition, however, networking functions are latency-sensitive, especially in the telco industry, which is one of the core users of VNF and accounts for a large share of OpenStack’s users. Using commodity hardware, however, introduces new problems, including increased lag and jitter.

AppFormix co-founder and CEO Sumeet Singh tells me that his company’s service can now reduce jitter by up to 70 percent. “People are just starting to roll out VNFs and as telcos move from hardware to software, that’s where they run into this problem,” he noted. “Our software is designed as this real-time system where we are able to analyze how everything is performing and do optimization based on this analysis.”

For VNF, this often means modifying how workloads are placed and how resources are allocated. Interestingly, AppFormix’s research showed that CPU allocations have very little influence on jitter. Instead, it’s all about how you use the available cache and memory. It’s controlling cache allocations correctly that allows AppFormix to reduce jitter.
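
For readers who want to pin down that claim: jitter is the variation in latency, not the latency itself, so a “70 percent reduction” means the spread of response times shrinks. A generic, illustrative way to measure it (no AppFormix code involved; the URL is a placeholder):

```python
# Measure per-request latency and report its spread (jitter).
import statistics
import time
import urllib.request

def sample_latencies(url: str, n: int = 20) -> list[float]:
    out = []
    for _ in range(n):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        out.append(time.perf_counter() - start)
    return out

samples = sample_latencies("http://example.com/")  # placeholder endpoint
print("mean latency  :", statistics.mean(samples))
print("jitter (stdev):", statistics.stdev(samples))
```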

Singh stressed that it’s not just telcos that can benefit from this but also e-commerce sites and others who want to be able to offer their users a highly predictable experience.

The new feature is now available as part of AppFormix’s overall cloud optimization platform, which currently focuses on OpenStack and Kubernetes deployments.

Building a Highly Available OpenStack Cloud

In my previous post, I shared some details about RPC-R’s reference architecture. Here, I want to drill down even more into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect their OpenStack services and APIs will be always on and always available, even when servers fail. This is particularly important in a public cloud or in a private cloud where resources are being shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly around the Red Hat OpenStack Platform, the OpenStack distribution at the core of RPC-R.

Red Hat uses a key set of open source technologies to create clustered active-active controller nodes for all Red Hat OpenStack Platform clouds. Rackspace augments that reference architecture with hardware, software and operational expertise to create RPC-R with our 99.99% API uptime guarantee. The key technologies for creating our clustered controller nodes are:

  • Pacemaker – A cluster resource manager used to manage and monitor the availability of OpenStack components across all nodes in the controller cluster
  • HAProxy – Provides load balancing and proxy services to the cluster (Note that while HAProxy is the default for Red Hat OpenStack Platform, RPC-R uses F5 hardware load balancers instead)
  • Galera – Replicates the Red Hat OpenStack Platform database across the cluster

[Diagram: control plane]

Putting it all together, you can see in the diagram above that redundant instances of each OpenStack component run on each controller node in a collapsed cluster configuration managed by Pacemaker. As the cluster manager, Pacemaker monitors the state of the cluster and has responsibility for failover and failback of services in the event of hardware and software failures. This includes coordinating restart of services to ensure that startup is sequenced in the right order and takes into account service dependencies.

A three-node cluster is the minimal size in our RPC-R reference architecture to ensure a quorum in the event of a node failure. A quorum defines the minimal number of nodes that must function for the cluster itself to remain functional. Since a quorum is defined as half the nodes + 1, three nodes is the smallest feasible cluster you can have. Having at least three nodes also allows Pacemaker to compare the content of each node in the cluster, and in the event that inconsistencies are found, a majority-rule algorithm can be applied to determine what should be the correct state of each node.
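
The arithmetic is worth spelling out (an illustrative snippet, not RPC-R code):

```python
# Quorum: strictly more than half the nodes must be alive for the
# cluster to keep making decisions.
def quorum(nodes: int) -> int:
    return nodes // 2 + 1

for n in (2, 3, 5):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {n - quorum(n)} failure(s)")

# 2 nodes: quorum 2, tolerates 0 failure(s)  -> two nodes buy no safety
# 3 nodes: quorum 2, tolerates 1 failure(s)  -> smallest useful cluster
# 5 nodes: quorum 3, tolerates 2 failure(s)
```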

While Pacemaker is used to manage most of the OpenStack services, RPC-R uses the Galera multi-master database cluster manager to replicate and synchronize the MariaDB based OpenStack database running on each controller node. MariaDB is a community fork of MySQL that is used in a number of OpenStack distributions, including Red Hat OpenStack platform.

[Diagram: database reads and writes]

Using Galera, we are able to create an active-active OpenStack database cluster and do so without the need for shared storage. Reads and writes can be directed to any of the controller nodes and Galera will synchronize all database instances. In the event of a node failure, Galera will handle failover and failback of database nodes.
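
A hedged sketch of how an operator might verify that from any controller node, using Galera’s standard wsrep status counters (host and credentials are placeholders):

```python
# Check Galera cluster health via its wsrep_* status variables.
import pymysql  # PyMySQL driver

conn = pymysql.connect(host="controller-1", user="monitor", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'")
    _, size = cur.fetchone()
    cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'")
    _, status = cur.fetchone()

# A healthy three-node control plane reports size 3 and status Primary.
print(f"cluster size={size}, status={status}")
assert status == "Primary", "cluster has lost quorum"
```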

By default, Red Hat OpenStack Platform uses HAProxy to load balance API requests to OpenStack services running in the control plane. In this configuration, each controller node runs an instance of HAProxy and each set of services has its own virtual IP. HAProxy is also clustered together using Pacemaker to provide fault tolerance for the API endpoints.

[Diagram: Pacemaker]

As mentioned previously, Rackspace has chosen to use redundant hardware load balancers in place of HAProxy. Per the previous diagram, the RPC-R architecture is identical to Red Hat OpenStack Platform’s, with the exception that we use F5 appliances in place of clustered HAProxy instances. We believe this option provides better performance and manageability for RPC-R customers.

[Diagram: external network]

An enterprise-grade private cloud is achievable, but it requires a combination of the right software, a well-thought-out architecture and operational expertise. We believe that the collaboration between Rackspace and Red Hat is the best choice for customers looking for partners to help them build out such a solution.

Some Important OpenStack Features

Types of Storage Provided by OpenStack

OpenStack supports two types of storage:

1. Persistent Storage or volume storage

2. Ephemeral Storage

Persistent Storage / Volume Storage: This storage is persistent, meaning it remains available after an instance is shut down and is independent of any particular instance. It is created by users.

Types of Persistent Storage

    • Object storage: Used to store and access binary objects through a REST API.
    • Block storage: The traditional disk-style storage, attached to instances as volumes.
    • Shared File System storage: Provides services for managing file-based storage that multiple instances can share.

Ephemeral Storage: It is temporary storage that disappears once the VM is terminated.
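
A short sketch makes the distinction concrete. Assuming the official openstacksdk and a placeholder clouds.yaml entry, persistent storage is created explicitly by the user and outlives any instance, while an instance’s ephemeral disk vanishes with it:

```python
# Persistent storage survives independently of any instance.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

# Block storage: a 10 GB Cinder volume, kept until the user deletes it.
volume = conn.block_storage.create_volume(name="data-vol", size=10)
print("volume", volume.id, "status", volume.status)

# Object storage is also persistent, addressed through the REST API.
conn.object_store.create_container(name="backups")
conn.object_store.upload_object(container="backups", name="note.txt", data=b"hello")
```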

What is a hypervisor? What types of hypervisors does OpenStack support?

A hypervisor is a piece of computer software or hardware that is used to create and run virtual machines.

OpenStack supports a number of hypervisors, including the following (a quick way to list the ones registered in a live cloud is sketched after the list):

  • KVM
  • VMware
  • LXC (Linux containers)
  • Xen and Hyper-V
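
For illustration, a running cloud can be asked which hypervisors actually back its compute nodes. A hedged sketch with the openstacksdk (the cloud name is a placeholder):

```python
# List the hypervisors registered with the compute service.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry
for hv in conn.compute.hypervisors():
    print(hv.name, "->", hv.hypervisor_type)  # e.g. "QEMU", "VMware vCenter Server"
```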

Capitulation? Mirantis refactors OpenStack on top of Kubernetes

First, the guts of the announcement: Mirantis, the bad boys of the OpenStack world, are today announcing a collaboration with Google (a company that has pretty much zero history with OpenStack) and Intel. As part of the collaboration, the life cycle management tool for OpenStack, Fuel, will be rewritten to use Kubernetes as its underlying orchestration.

Lots of inside baseball there, so what are all these different products?

  • OpenStack is the open source cloud computing operating system that was jointly created by Rackspace and NASA and has since built a massive following of companies (including IBM, HPE, Intel and many, many others).
  • Kubernetes is the open source orchestration platform loosely descended from the tools that Google uses internally to operate its own data centers.
  • Fuel, as stated previously, was (is) the OpenStack-native life cycle management tool for OpenStack.

So what does it all mean? Well, it’s actually far more important than first appearances would suggest. It marks, at least to some extent, an admission by all concerned that OpenStack isn’t the be-all and end-all of the infrastructure world.

That positioning, which might seem blindingly obvious to anyone who is aware of the heterogeneity of modern enterprise IT, somewhat goes against what we heard from the OpenStack camp for its first few years, when pundits would be excused for thinking that OpenStack was the solution for every possible situation. It seems now, however, that OpenStack is simply a part of the solution — and virtual machines, containers and bare-metal systems all have a part to play in enterprise IT going forward.

Under the terms of the collaboration, Mirantis will initiate a new continuous integration/continuous delivery (CI/CD) pipeline under the OpenStack Fuel project for building capabilities around containerized OpenStack deployment and operations. The resulting software will give users fine-grained control over the placement of services used for the OpenStack control plane, as well as the ability to do rolling updates of OpenStack, make the OpenStack control plane self-healing and more resilient, and smooth the path for the creation of microservices-based applications on OpenStack.

If that sounds familiar, that would be because it is much the same proposition that we heard from Alex Polvi of CoreOS fame a few months ago — the difference here is that it comes from an OpenStack player that is front-and-center of the movement, an arguably far more substantive statement.

And some big names have poured the love into this collaboration — in particular Mirantis and Google, the originator of Kubernetes.

“With the emergence of Docker as the standard container image format and Kubernetes as the standard for container orchestration, we are finally seeing continuity in how people approach operations of distributed applications,” said Mirantis CMO Boris Renski. “Combining Kubernetes and Fuel will open OpenStack up to a new delivery model that allows faster consumption of updates, helping customers get to outcomes faster.”

Google Senior Product Manager Craig McLuckie also chimed in. “Leveraging Kubernetes in Fuel will turn OpenStack into a true microservice application, bridging the gap between legacy infrastructure software and the next generation of application development,” he said. “Many enterprises will benefit from using containers and sophisticated cluster management as the foundation for resilient, highly scalable infrastructure.”

Along with the initial work on the Fuel aspects, Mirantis will also become an active contributor to the Kubernetes project, and has stated the ambition to become a top contributor to the project over the next year.

Alongside that, Mirantis has joined the Cloud Native Computing Foundation, a Linux Foundation project dedicated to advancing the development of cloud-native applications and services, as a Silver member.

MyPOV

This is a big deal; there’s no denying that. OpenStack is slowly but inexorably becoming less of a “solution for everything” and more of one part of a larger whole. Skeptics would suggest that this marks a turning point where OpenStack ceases to be a compelling long-term proposition in and of itself and becomes simply a stop-gap measure between traditional architectures and more cloud-native approaches.

The reality is probably somewhere in the middle — and OpenStack will still have a part to play in infrastructure going forward — but clearly Mirantis’ move to embrace Kubernetes is an indication that it realizes that it needs to extend beyond a pure-play OpenStack offering.

As always, this space provides huge interest and much entertainment — a situation that looks unlikely to change anytime soon.

Building Your Application for Cloud Portability – An Alternative Approach to Hybrid Cloud

In my previous post, I discussed the differences between hybrid cloud and cloud portability, as well as how to achieve true hybrid cloud deployments without compromising on infrastructure API abstraction, by providing several use cases for cloud portability.

Cloud Portability Defined (again)

For the sake of clarity, I thought it would be a good idea to include my definition of cloud portability again here: “Cloud portability is the ability to run the same application on multiple cloud infrastructures, private or public. This is basically what makes hybrid cloud possible.”

Clearly, the common infrastructure API abstraction approach forces too many restrictions on the user, which makes it fairly useless for many of the cloud portability use cases.

In this post, I would like to propose another method for making cloud portability, and therefore true hybrid cloud, a reality.

An Alternative Approach

One of the use cases I previously mentioned for allowing application deployment portability to an environment that doesn’t conform to the same set of features and APIs is iOS and Android. With operating systems, we see that software providers are able to successfully solve the portability problem without forcing a common abstraction.

What can we learn about cloud portability from the iOS/Android use case?

Treat portability differently between the application consumer and the application owner – One of the main observations from the iOS/Android case is that, while the consumer is often completely abstracted from the differences between the two platforms, the application developer is not abstracted and often needs to treat each platform differently and sometimes even duplicate certain aspects of the application’s components and logic to suit the underlying environment. The application owner, therefore, has the incentive to support and even invest in portability as this increases the application’s overall market reach.

Minimizing the differences, not eliminating them – While the application owner has more incentive to support each platform natively, it is important to use cloud portability as a framework that will allow for minimizing but not eliminating the differences to allow simpler development and maintenance.

The main lesson from this use case is that, to achieve a similar degree of cloud portability, we need to make a distinction between the application consumer and the application owner. For cloud portability, in order to ensure a native experience for the application consumer, we need to assume that the application owner will be required to duplicate their integration effort per target cloud.

This is the same approach we should take with cloud application portability!

So, how does one go about doing that?

Achieving Cloud Portability with ARIA – A Simple Multi-Cloud Orchestration Framework

In this section, I will refer to this specific project as a means by which to illustrate the principles that I mentioned above in more concrete terms.

Project ARIA is a new Apache-licensed project that provides simple, zero-footprint multi-cloud orchestration based on TOSCA. It was originally built as the core orchestration for Cloudify and is now an independent project.

The diagram below provides an inside look at the ARIA architecture.

[Diagram: ARIA architecture]

ARIA is built upon three pillars that together manage the entire stack and lifecycle of an application:

1) An infrastructure-neutral, easily extensible templating language

2) Cloud plugins

3) Workflows

TOSCA Templating Language vs. API Abstraction

ARIA utilizes the TOSCA templating language in its application blueprints which provides a means for deploying and orchestrating a single application on multiple infrastructures through individual plugins, thereby circumventing the need for a single abstraction layer.

Templating languages, such as TOSCA, provide far greater flexibility for abstraction than API abstraction, as they allow easy extensibility and customization without the need to develop or change the underlying implementation code. This is done by mapping the underlying cloud API into types and letting the user define, through scripts, how those types are accessed and used.

With Cloudify, we chose TOSCA as the templating language because of its inherently infrastructure-neutral design, and because it is a DSL with the characteristics of a real language: support for inheritance, interfaces and a strong typing system.
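
To make that concrete, here is an illustrative sketch of a TOSCA-style node template (the node type shown comes from Cloudify’s OpenStack plugin; the properties and script paths are simplified stand-ins). Retargeting the blueprint to another cloud means swapping the type and its lifecycle scripts, not rewriting orchestration code:

```python
# Build an illustrative TOSCA-style node template and print it as YAML.
import yaml  # PyYAML

node_template = {
    "node_templates": {
        "app_server": {
            # Swap this type (and its plugin) per target cloud; the rest
            # of the blueprint stays the same.
            "type": "cloudify.openstack.nodes.Server",
            "properties": {"flavor": "m1.small", "image": "ubuntu-16.04"},
            "interfaces": {
                "cloudify.interfaces.lifecycle": {
                    # A user-supplied script mapped onto a lifecycle operation.
                    "create": "scripts/create_server.sh",
                }
            },
        }
    }
}

print(yaml.safe_dump(node_template, sort_keys=False))
```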

Cloud Plugins

Built-in plugins for a wide range of cloud services provide out-of-the-box integration points with the most common of these services. Unlike the lowest-common-denominator approach (i.e., a single API abstraction layer), they can be easily extended to support any cloud service.

Workflows

Workflows enable interaction with the deployment graph and provide another way to abstract common cloud operational tasks such as upgrades, snapshots, scaling, etc.

Putting It All Together

By combining the three aforementioned elements, the user is given a set of building blocks for managing the entire application stack and its lifecycle. It also provides richer flexibility, allowing users to define their own degree of abstraction per use case or application.

In this manner, cloud portability is achievable without the need to change your underlying code, and, in doing so, you enable true hybrid cloud.

VMware: We love OpenStack!

A few years ago VMware and OpenStack were foes. Oh, how times have changed.

This week VMware is out with the 2.5 release of its VMware Integrated OpenStack (VIO). The virtualization giant continues to make it easier to run the open source cloud management tools on top of VMware virtualized infrastructure.

VIO’s 2.5 release shows the continued commitment by VMware to embrace OpenStack, something that would have seemed out of the question a few short years ago.

The 2.5 release comes with some nifty new features: Users can now automatically import vSphere virtual machine images into their VIO OpenStack cloud. The resource manager control plane has been slimmed down by 30%, so it takes up less memory. There are better integrations with VMware’s NSX, too.

The news shows the continued maturation of both the open source project and the virtualization giant. Once VMware and OpenStack were seen as rivals. In many ways, they still are. Both allow organizations to build private clouds. But VMware (smartly, in my opinion) realized that giving customers choice is a good thing. Instead of treating it as an all-or-nothing VMware vs. OpenStack dichotomy, VMware has embraced OpenStack, allowing VMware’s virtualization management tools to serve up the virtualized infrastructure OpenStack needs to operate.

VMware’s doing the same thing with application containers. Once seen as a threat to virtual machines, VMware is making the argument that the best place to run containers is in its virtual machines, which have been slimmed down and customized to run containers. Stay tuned to see if all these gambles pay off.

OpenStack digs for deeper value in telecoms, network virtualization

NFV can be a major OpenStack application, which is great for telecom, but will enterprises go for it, too?

OpenStack has long been portrayed as a low-cost avenue for creating private clouds without lock-in. But like many projects of its size and sprawl, it’s valuable for its subfunctions as well. In particular, OpenStack comes in handy for network function virtualization (NFV).

A report issued by the OpenStack Foundation, “Accelerating NFV Delivery with OpenStack,” makes a case for using OpenStack to replace the costly, proprietary hardware often employed with NFV, both inside and outside of telecoms. But it talks less about general enterprise settings than it does about telecoms, a vertical industry where OpenStack has been finding uptake.

The paper also shows how many telecom-specific NFV features — such as support for multiple IPv6 prefixes — are being requested or submitted by telecoms that are OpenStack users.

That’s not to say the telecom tail has been wagging the OpenStack dog, but it does mean that some of OpenStack’s most urgently requested — and most immediately adopted — features come from that crowd. It also raises the question of how readily those features will be deployed outside of telecoms, especially given OpenStack’s reputation as fragmented, friable, and difficult to learn.

OpenStack’s presence in the telecom world has been established for years now. A user survey conducted back in 2014 showed that while the number of companies using OpenStack that described themselves as telecommunications companies was proportionately small, the companies in question were high-profile in their field (NTT, Deutsche Telekom).

In the most recent user survey, self-identified telecom comprised only 12 percent of the total OpenStack user base. But the 64 percent labeled “information technology” also included “cable TV and ISP,” “telco and networking,” “data center/co-location,” and other labels that could easily be considered part of a telco’s duty roster.

Network function virtualization is the technology OpenStack users are second-most interested in, according to the survey. The technology that took the top spot, however, was containers. That’s where OpenStack vendors (such as Mirantis and Red Hat) are making their most direct appeals to the enterprise, but it’s still an open question whether a product of OpenStack’s size and cognitive load is the best way to do so.

To that end, even if OpenStack is a major force in NFV, the bigger question is whether enterprises interested in NFV/SDN (the two terms overlap) will adopt OpenStack as their solution. Some of the mystery may be due to how the changes involved in deploying NFV are a bitter pill for an enterprise of most any size to swallow — but it may again come down to OpenStack still being too top-heavy for its own good.

OpenStack simplifies management with Mitaka release

The latest OpenStack release provides a unified CLI, standardized APIs across projects, and one-step setups for many components

The latest revision of OpenStack, dubbed Mitaka, was officially released yesterday and boasts simplified management and improved user experience as two prominent features.

Rather than leave such features to a particular distribution, OpenStack has been attempting to integrate them into the project’s core mission. But another big OpenStack effort — its reorganization of the project’s management — is still drawing criticism.

Pulling it all together

A unified OpenStack command-line client is a key new feature intended to improve both management and user experiences. Each service, current or future, can register a command set with the client through a plug-in architecture. Previously, each OpenStack project had an individual CLI, and managing multiple aspects of OpenStack required a great deal of switching between clients, each with its own command sets.

At the same time, API calls for the various subprojects in OpenStack are now more uniform, along with the SDKs that go with them, so it’s easier for developers to write apps that plug directly into OpenStack components.
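
The unified openstacksdk shows the same consolidation on the SDK side. A minimal sketch (the cloud name is a placeholder for a clouds.yaml entry): one connection object reaches every service, instead of one client library per project.

```python
# One connection, many services: Nova, Neutron and Keystone through
# a single SDK entry point.
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder clouds.yaml entry

for server in conn.compute.servers():        # Nova
    print("server :", server.name)
for network in conn.network.networks():      # Neutron
    print("network:", network.name)
for project in conn.identity.projects():     # Keystone
    print("project:", project.name)
```
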
OpenStack instances are also easier to get up and running — an aim with each passing revision of OpenStack. This time around, more of the platform’s core settings come with defaults chosen, and many previously complex setup operations have been whittled to a single step. OpenStack’s identity and networking services, Keystone and Neutron, both feature these improvements.

Big tent or big problems?

Mitaka marks the first major OpenStack release since the project adopted its Big Tent governance model. In an attempt to tame project sprawl, OpenStack resolved to reform the way projects are included and to describe which projects are best suited to what scenarios.

Julien Danjou, software engineer at Red Hat and author of “The Hacker’s Guide to Python,” believes OpenStack’s core problems haven’t been solved by the Big Tent model. “OpenStack is still stuck between its old and new models,” he said in a blog post. The old model of OpenStack, a tiny ecosystem with a few integrated projects, has given way to a great many projects where “many are considered as second-class citizens. Efforts are made to continue to build an OpenStack project that does not exist anymore,” Danjou said.

Chris Dent, a core contributor to OpenStack, feels Big Tent has diluted the project’s unity of purpose. “We cannot effectively reach our goal of interoperable but disparate clouds if everyone can build their own custom cloud by picking and choosing their own pieces from a collection,” he said.

Dent thinks OpenStack should be kept small and focused, “with contractually strong APIs … allowing it to continue to be an exceptionally active member of and user of the larger open source community.”

Mitaka’s work in unifying the API set and providing a common CLI are steps in that direction. But countering that is OpenStack’s tendency to become more all-encompassing, which appeals only to a narrow, vertical set of customers — service providers, for instance, or operations like eBay — with the cash and manpower to make it work.