OpenStack’s latest release focuses on scalability and resilience

OpenStack, the massive open source project that helps enterprises run the equivalent of AWS in their own data centers, is launching the 14th major version of its software today. Newton, as this new version is called, shows how OpenStack has matured over the last few years. The focus this time is on making some of the core OpenStack services more scalable and resilient. In addition, though, the update also includes a couple of major new features. The project now better supports containers and bare metal servers, for example.

In total, more than 2,500 developers and users contributed to Newton. That gives you a pretty good sense of the scale of this project, which includes support for core data center services like compute, storage and networking, but also a wide range of smaller projects.

As OpenStack Foundation COO Mark Collier told me, the focus with Newton wasn’t so much on new features but on adding tools for supporting new kinds of workloads.

Both Collier and OpenStack Foundation executive director Jonathan Bryce stressed that OpenStack is mostly about providing the infrastructure that people need to run their workloads. The project itself is somewhat agnostic as to what workloads they want to run and which tools they want to use, though. “People aren’t looking at the cloud as synonymous with [virtual machines] anymore,” Collier said. Instead, they are mixing in bare metal and containers as well. OpenStack wants to give these users a single control plane to manage all of this.

Enterprises do tend to move slowly, though, and even the early adopters that use OpenStack are only now starting to adopt containers. “We see people who are early adopters who are running containers in production,” Bryce told me. “But I think OpenStack or not OpenStack, it’s still early for containers in production usage.” He did note, however, that he regularly talks to enterprise users who are looking at how they can use the different components in OpenStack to get to containers faster.

Core features of OpenStack, including the Nova compute service, as well as the Horizon dashboard and Swift object/blob store, have now become more scalable. The Magnum project for managing containers on OpenStack, which already supported Docker Swarm, Kubernetes and Mesos, now also allows operators to run Kubernetes clusters on bare metal servers, while the Ironic framework for provisioning those bare metal servers is now more tightly integrated with Magnum and also now supports multi-tenant networking.

The release also includes plenty of other updates and tweaks, of course. You can find a full (and fully overwhelming) rundown of what’s new in all of the different projects here.

With this release out of the door, the OpenStack community is now looking ahead to the next release six months from now. This next release will go through its planning stages at the upcoming OpenStack Summit in Barcelona later this month and will then become generally available next February.

Building a Highly Available OpenStack Cloud

I shared some details about RPC-R’s reference architecture in my previous post. Here, I want to drill down even further into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect their OpenStack services and APIs to be always on and always available, even when servers fail. This is particularly important in a public cloud, or in a private cloud where resources are shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly in the Red Hat OpenStack Platform, the OpenStack distribution at the core of RPC-R.

Red Hat uses a key set of open source technologies to create clustered active-active controller nodes for all Red Hat OpenStack Platform clouds. Rackspace augments that reference architecture with hardware, software and operational expertise to create RPC-R with our 99.99% API uptime guarantee. The key technologies for creating our clustered controller nodes are:

  • Pacemaker – A cluster resource manager used to manage and monitor the availability of OpenStack components across all nodes in the controller cluster
  • HAProxy – Provides load balancing and proxy services to the cluster (Note that while HAProxy is the default for Red Hat OpenStack Platform, RPC-R uses F5 hardware load balancers instead)
  • Galera – Replicates the Red Hat OpenStack Platform database across the cluster


Putting it all together, you can see in the diagram above that redundant instances of each OpenStack component run on each controller node in a collapsed cluster configuration managed by Pacemaker. As the cluster manager, Pacemaker monitors the state of the cluster and has responsibility for failover and failback of services in the event of hardware and software failures. This includes coordinating restart of services to ensure that startup is sequenced in the right order and takes into account service dependencies.

A three-node cluster is the minimum size in our RPC-R reference architecture to ensure a quorum in the event of a node failure. A quorum is the minimum number of nodes that must be functioning for the cluster itself to remain functional. Since a quorum is defined as half the nodes + 1, three nodes is the smallest feasible cluster you can have. Having at least three nodes also allows Pacemaker to compare the contents of each node in the cluster; in the event that inconsistencies are found, a majority-rule algorithm can be applied to determine the correct state of each node.
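To make the quorum arithmetic concrete, here is a small Python sketch (an illustration of the math only, not RPC-R or Pacemaker code):

```python
def quorum(cluster_size: int) -> int:
    """Minimum number of nodes that must be up for the cluster to function:
    half the nodes (rounded down) plus one."""
    return cluster_size // 2 + 1

def has_quorum(nodes_up: int, cluster_size: int) -> bool:
    return nodes_up >= quorum(cluster_size)

# A three-node cluster survives one failure: 2 nodes up >= quorum of 2.
# A two-node cluster cannot survive any failure: quorum of a 2-node
# cluster is also 2, which is why three is the smallest feasible size.
```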

While Pacemaker is used to manage most of the OpenStack services, RPC-R uses the Galera multi-master database cluster manager to replicate and synchronize the MariaDB based OpenStack database running on each controller node. MariaDB is a community fork of MySQL that is used in a number of OpenStack distributions, including Red Hat OpenStack platform.


Using Galera, we are able to create an active-active OpenStack database cluster, and do so without the need for shared storage. Reads and writes can be directed to any of the controller nodes and Galera will synchronize all database instances. In the event of a node failure, Galera will handle failover and failback of database nodes.
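The active-active behavior can be sketched conceptually in a few lines of Python (a toy model only, not how Galera is actually implemented; the node names are invented): a write accepted through any node is synchronously applied to every replica, so a subsequent read through any other node sees it.

```python
class ToyGaleraCluster:
    """Toy model of synchronous multi-master replication (illustration only)."""

    def __init__(self, node_names):
        # Each node holds its own full copy of the data set.
        self.nodes = {name: {} for name in node_names}

    def write(self, via_node, key, value):
        # A write accepted on any node is synchronously applied everywhere
        # before the write is acknowledged.
        assert via_node in self.nodes
        for data in self.nodes.values():
            data[key] = value

    def read(self, via_node, key):
        # Reads can be served by any node, since all copies are in sync.
        return self.nodes[via_node][key]
```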

By default, Red Hat OpenStack Platform uses HAProxy to load balance API requests to OpenStack services running in the control plane. In this configuration, each controller node runs an instance of HAProxy and each set of services has its own virtual IP. HAProxy is also clustered together using Pacemaker to provide fault tolerance for the API endpoints.
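To illustrate what the load balancer contributes here, the following toy Python balancer round-robins requests across backends and skips any that have failed health checks (a conceptual sketch only; real HAProxy is configured, not coded, and the node names are made up):

```python
from itertools import cycle

class ToyLoadBalancer:
    """Round-robin over healthy backends, HAProxy-style (illustration only)."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(backends)        # health checks mark nodes up/down
        self._rr = cycle(self.backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def pick(self):
        # Skip unhealthy backends; give up after one full pass.
        for _ in range(len(self.backends)):
            candidate = next(self._rr)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")
```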


As mentioned previously, Rackspace has chosen to use redundant hardware load balancers in place of HAProxy. Per the previous diagram, the Red Hat OpenStack Platform architecture is identical to RPC-R with the exception that we use F5 appliances in place of clustered HAProxy instances. We believe this option provides better performance and manageability for RPC-R customers.


An enterprise-grade private cloud is achievable, but it requires a combination of the right software, a well-thought-out architecture and operational expertise. We believe that Rackspace and Red Hat working in collaboration are the best choice for customers looking for partners to help them build out such a solution.

Getting started with the basics of building your own cloud

Openstack Cloud tutorial

My daily routine involves a great deal of AWS cloud infrastructure. And let me tell you, AWS has now grown to the point where it has become almost synonymous with the cloud. It has grown by leaps and bounds in the past few years, and many other major players are not even close in the cloud arena. (Google and Microsoft do, of course, have their own cloud platforms, which are excellent for many use cases, but neither has the user and customer base that AWS has built for its public cloud.)

Nothing can match the flexibility, elasticity, and ease of use that the cloud provides. I remember when I used to work with physical hardware: I had to literally wait for hours to get a machine up and running for an emergency requirement, and if I then needed additional storage for that machine, I had to wait some more. With the cloud, you can spin up a few servers in seconds (believe me, seconds) and test whatever you want.

What is OpenStack Cloud?

A year ago I happened to read an article from Netcraft about its findings on AWS. According to Netcraft, by 2013 AWS had already crossed the mark of 158K public-facing computers.

Now imagine getting the same features that the AWS cloud provides from something open source that you can build in your own data centre. Isn’t that amazing? That’s the reason tech giants like IBM, HP, Intel, Red Hat, Cisco, Juniper, Yahoo, Dell, NetApp, VMware, GoDaddy, PayPal and Canonical (Ubuntu) support and fund such a project.

This open source project is called OpenStack, and it is currently supported by more than 150 tech companies worldwide. It all started as a combined project by NASA and Rackspace in 2009 (both were independently developing their own projects, which at a later point came together and were thereafter called OpenStack). NASA was behind a project called Nova (very analogous to Amazon EC2, providing the compute feature), and Rackspace built another tool called Swift (a highly scalable object storage solution, very similar to AWS S3).

Apart from these, there are other components that help make OpenStack very much like the AWS cloud (we will discuss each of them shortly, and in upcoming tutorials we will configure each of them to build our own cloud).

OpenStack can be used by anybody who wants their own cloud infrastructure, similar to AWS. Although its origins trace back to NASA, it is not actively developed or supported by NASA any more; in fact, NASA now leverages the AWS public cloud for its own infrastructure.

If you simply want to use an OpenStack public cloud, you can turn to Rackspace Cloud, eNovance, HP Cloud, etc. (these are very much like the AWS cloud), each with its associated cost. Apart from these public OpenStack cloud offerings, there are plug-and-play cloud services, where you buy a dedicated hardware appliance for OpenStack: just plugging it in turns it into an OpenStack cloud service without any further configuration.

Let’s now discuss some of the crucial components of OpenStack, which, when combined, make a robust cloud like any commercial cloud (such as AWS), but in your own data center, completely managed and controlled by your team.

When you talk about the cloud, the first thing that comes to mind is virtualization, because virtualization is the technology that made this cloud revolution possible. Virtualization is essentially the method of slicing the resources of a physical machine into smaller parts, where each slice acts as an independent host sharing resources with the other slices on the machine. This enables optimal use of computing resources.

  • OpenStack Compute: One of the main components of a cloud is virtual machines that can scale without bounds. In OpenStack, this need is fulfilled by Nova, the software component that offers and manages virtual machines.

Apart from compute, the second major requirement is storage. There are two different types of storage in the cloud. The first is block storage: normal disk storage, where your operating system files are installed, used very much the way you would use a RAID partition on a server (format it and use it for all kinds of local storage needs).

  • OpenStack Block Storage (Cinder): works much like attaching and detaching an external hard drive to your operating system for its local use. Block storage is useful for database storage, or raw storage for the server (format it, mount it and use it), or you can combine several volumes for distributed file system needs (for example, building a large Gluster volume out of several block storage devices attached to a virtual machine launched by Nova).

The second type of storage fulfills the need to scale without bounds and without worry, for storing static objects. It can be used for large static data like backups and archives. It is accessed through its own API, and it is replicated across data centers to withstand large disasters.

  • OpenStack Object Storage (Swift): is suitable for storing multimedia content like videos, images, virtual machine images, backups, email and archives. This type of data needs to grow without limitation and needs to be replicated, which is exactly what Swift is designed to do.

Last but not least comes networking. Networking in the cloud has matured to the point that you can create your own private networks and access control lists, create routes between networks, interconnect different networks, connect to a remote network using a VPN, and so on. Almost all of these enterprise cloud needs are taken care of by OpenStack networking.

  • OpenStack Networking (nova-network or Neutron): Think of OpenStack networking as the component that manages networking for all our virtual hosts (instances) and provides IP addresses, both private and public. You might think that networking in virtualization is as easy as setting up a bridge adapter and routing all traffic through it. But here we are talking about an entire cloud: there must be public IPs that can be attached to and detached from the instances we launch, a fixed IP for each instance, and no single point of failure.

In my view, OpenStack networking is the most complex part and needs to be designed with extreme care. Because of its complexity and importance, we will discuss OpenStack networking in detail in a dedicated post. It can be implemented with two different tools: one is called nova-network and the other is Neutron. Note that each component of the OpenStack cloud deserves special attention in its own right, as the components are quite distinct and work together to form a cloud; hence I will write a dedicated post for each major component.
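The attach/detach behavior of public (floating) IPs described above can be sketched as a toy model in Python (conceptual only; the addresses and instance names are invented, and real OpenStack networking involves far more machinery):

```python
class ToyNetworkService:
    """Toy model of floating (public) IPs attaching to and detaching
    from instances, while each instance keeps its fixed private IP."""

    def __init__(self, floating_pool):
        self.available = set(floating_pool)   # unassigned public IPs
        self.assigned = {}                    # floating IP -> instance id

    def associate(self, floating_ip, instance_id):
        # A public IP can only be attached if it is free.
        self.available.remove(floating_ip)
        self.assigned[floating_ip] = instance_id

    def disassociate(self, floating_ip):
        # Detaching returns the public IP to the pool for reuse.
        del self.assigned[floating_ip]
        self.available.add(floating_ip)
```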

OpenStack is highly configurable, and for that very reason it is quite difficult to cover all of its possible configurations in one tutorial. You will see this for yourself when we start configuring things in the upcoming series of posts.

Higher Level Overview of Openstack Architecture

Component | Used for | Similar to
Horizon | A dashboard for end users or administrators to access other backend services | AWS Management Console
Nova | Manages virtualization and takes requests from end users through the dashboard or API to create virtual instances | AWS EC2 (Elastic Compute)
Cinder | Block storage, directly attachable to any virtual instance, similar to an external hard drive | EBS (Elastic Block Store)
Glance | Maintains a catalog of images; a kind of image repository | AMI (Amazon Machine Images)
Swift | Object storage that applications or instances can use for static objects like multimedia files, backups, images and archives | AWS S3
Keystone | Manages authentication services for all components: credentials, authentication and authorization for users | AWS Identity and Access Management (IAM)

You should now have an idea of what the OpenStack cloud actually is. Let’s answer some questions that will help clarify what OpenStack really is and how these individual components fit together to form a cloud.

What is Horizon Dashboard?

It’s a web interface for users and administrators to interact with your OpenStack cloud. It’s basically a Django web application running under mod_wsgi and Apache. Its primary job is to talk to the backend APIs of the other components and execute requests initiated by users. It interacts with the Keystone authentication service to authorize requests before doing anything.

Does nova-compute perform virtualization?

Well, nova-compute is basically a daemon that creates and terminates virtual machines. It does this job through virtual machine API calls, using a library called libvirt. Libvirt is an API for interacting with Linux virtualization technologies (it's free and open source software that is installed as a dependency of Nova).

Basically, libvirt gives nova-compute the ability to send API requests to KVM, Xen, LXC, OpenVZ, VirtualBox, VMware and Parallels hypervisors.

So when an OpenStack user requests a cloud instance, what actually happens is that nova-compute sends a request to the hypervisor through libvirt. Besides libvirt, nova-compute can also talk directly to the Xen API, the vSphere API, and so on. This wide support for different virtualization technologies is the main strength of Nova.
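The dispatch idea can be sketched as a toy lookup table in Python (an illustration only: the driver strings are invented, and real nova-compute drivers speak to the hypervisor APIs through libvirt or directly, rather than returning strings):

```python
# Toy dispatch table: nova-compute picks a driver based on the
# configured hypervisor, then hands the launch request to it.
def make_drivers():
    return {
        "kvm": lambda name: f"libvirt: defined domain {name} on KVM",
        "xen": lambda name: f"libvirt: defined domain {name} on Xen",
        "lxc": lambda name: f"libvirt: created container {name}",
    }

def launch(hypervisor, name, drivers=None):
    drivers = drivers if drivers is not None else make_drivers()
    if hypervisor not in drivers:
        raise ValueError(f"unsupported hypervisor: {hypervisor}")
    return drivers[hypervisor](name)
```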

How does Swift Work?

Well, Swift is a highly scalable object store. Object storage is a big topic in itself, so I recommend reading the post below.

Unlike block storage, files are not organized in a hierarchical namespace; they are organized in a flat namespace. Although Swift can give you the illusion of folders with contents inside, all files in all folders live in a single namespace, which makes scaling much easier than with block storage.
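The flat namespace idea can be shown in a few lines of Python (illustrative only; the object names are invented): "folders" are nothing more than shared prefixes of object names in one big namespace.

```python
# A flat namespace: "folders" are just shared prefixes of object names.
store = {
    "backups/2016/db.tar.gz": b"...",
    "backups/2016/etc.tar.gz": b"...",
    "images/logo.png": b"...",
}

def list_prefix(store, prefix):
    """Listing a 'folder' is just filtering object names on a prefix;
    no directory tree has to be maintained or traversed."""
    return sorted(name for name in store if name.startswith(prefix))
```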

Swift combines multiple commodity servers and backend storage devices into one large pool of storage sized to the end user’s requirements. It can be scaled without bounds by simply adding more nodes in the future.
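To illustrate how objects can be spread across a growing pool of nodes, here is a toy placement function in Python. It is only a sketch of the idea: real Swift uses a partitioned consistent-hashing ring with weights and zones, and the node names here are invented.

```python
import hashlib

def pick_nodes(object_name, nodes, replicas=3):
    """Toy placement: hash the object name onto a sorted ring of nodes
    and take the next `replicas` nodes around the ring. Adding nodes
    simply grows the pool the hash maps onto."""
    ring = sorted(nodes, key=lambda n: hashlib.md5(n.encode()).hexdigest())
    start = int(hashlib.md5(object_name.encode()).hexdigest(), 16) % len(ring)
    count = min(replicas, len(ring))
    return [ring[(start + i) % len(ring)] for i in range(count)]
```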


What is keystone?

It’s the single point of contact for policy, authentication, and identity management in an OpenStack cloud. It can work with different authentication backends like LDAP, SQL, or a simple key-value store.

Keystone has two primary functions:

  • Managing users: tracking all users and their permissions.
  • The service list/catalog: providing information about which services are available and their respective API endpoint details.
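The service catalog can be pictured as a simple mapping from service to endpoint details, sketched below in Python. This is a toy model, not Keystone's actual data structures; the hostname is invented, and the port numbers are merely illustrative defaults.

```python
# Toy service catalog: each registered service advertises its type
# and the API endpoint where clients can reach it.
catalog = {
    "nova":   {"type": "compute",  "endpoint": "http://controller:8774/v2.1"},
    "glance": {"type": "image",    "endpoint": "http://controller:9292"},
    "cinder": {"type": "volumev3", "endpoint": "http://controller:8776/v3"},
}

def endpoint_for(catalog, service_type):
    """Clients ask the catalog 'where is the image service?' rather than
    hard-coding endpoints for every component."""
    for entry in catalog.values():
        if entry["type"] == service_type:
            return entry["endpoint"]
    raise LookupError(f"no service of type {service_type}")
```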

What is Openstack Cinder?

As discussed before and shown in the diagram, Cinder is a block storage service. It provides software-defined block storage, built on top of basic traditional block storage devices, to the instances that nova-compute launches.

In simple terms, Cinder virtualizes pools of block storage (any traditional storage device) and makes them available to end users via an API. Users consume those virtual block storage volumes inside their virtual machines without knowing where a volume is actually deployed in the architecture, or any details about the underlying storage device.
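The pooling idea can be sketched as a toy allocator in Python (an illustration of the concept only; real Cinder drivers talk to actual storage backends, and the volume names and sizes here are invented):

```python
class ToyVolumePool:
    """Toy model of Cinder carving virtual volumes out of one backend pool.
    The user sees named volumes; the pool's internals stay hidden."""

    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.volumes = {}          # volume name -> size in GB

    def create(self, name, size_gb):
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.free_gb -= size_gb
        self.volumes[name] = size_gb
        return name

    def delete(self, name):
        # Deleting a volume returns its capacity to the shared pool.
        self.free_gb += self.volumes.pop(name)
```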

OpenStack digs for deeper value in telecoms, network virtualization

NFV can be a major OpenStack application, which is great for telecom, but will enterprises go for it, too?

OpenStack has long been portrayed as a low-cost avenue for creating private clouds without lock-in. But like many projects of its sprawl and size, it’s valuable for its subfunctions as well. In particular, OpenStack comes in handy for network function virtualization (NFV).

A report issued by the OpenStack Foundation, “Accelerating NFV Delivery with OpenStack,” makes a case for using OpenStack to replace the costly, proprietary hardware often employed with NFV, both inside and outside of telecoms. But it talks less about general enterprise settings than it does about telecoms, a vertical industry where OpenStack has been finding uptake.


The paper also shows how many telecom-specific NFV features — such as support for multiple IPv6 prefixes — are being requested or submitted by telecoms that are OpenStack users.

That’s not to say the telecom tail has been wagging the OpenStack dog, but it does mean that some of OpenStack’s most urgently requested — and most immediately adopted — features come from that crowd. It also raises the question of how readily those features will be deployed outside of telecoms, especially given OpenStack’s reputation as fragmented, friable, and difficult to learn.

OpenStack’s presence in the telecom world has been established for years now. A user survey conducted back in 2014 showed that while the number of companies using OpenStack that described themselves as telecommunications companies was proportionately small, the companies in question were high-profile in their field (NTT, Deutsche Telekom).

In the most recent user survey, self-identified telecom comprised only 12 percent of the total OpenStack user base. But the 64 percent labeled “information technology” also included “cable TV and ISP,” “telco and networking,” “data center/co-location,” and other labels that could easily be considered part of a telco’s duty roster.

Network function virtualization is the second-largest OpenStack technology its users are interested in, according to the survey. The technology that took the top spot, however, was containers. That’s where OpenStack vendors (such as Mirantis and Red Hat) are making their most direct appeals to the enterprise, but it’s still an open question whether a product of OpenStack’s size and cognitive load is the best way to do so.

To that end, even if OpenStack is a major force in NFV, the bigger question is whether enterprises interested in NFV/SDN (the two terms overlap) will adopt OpenStack as their solution. Some of the mystery may be due to how the changes involved in deploying NFV are a bitter pill for an enterprise of most any size to swallow — but it may again come down to OpenStack still being too top-heavy for its own good.

OpenStack simplifies management with Mitaka release

The latest OpenStack release provides a unified CLI, standardized APIs across projects, and one-step setups for many components


The latest revision of OpenStack, dubbed Mitaka, was officially released yesterday and boasts simplified management and improved user experience as two prominent features.

Rather than leave such features to a particular distribution, OpenStack has been attempting to integrate them into the project’s core mission. But another big OpenStack effort — its reorganization of the project’s management — is still drawing criticism.

Pulling it all together

A unified OpenStack command-line client is a key new feature intended to improve both management and user experiences. Each service, current or future, can register a command set with the client through a plug-in architecture. Previously, each OpenStack project had an individual CLI, and managing multiple aspects of OpenStack required a great deal of switching between clients, each with its own command sets.
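The plug-in idea can be sketched as a small command registry in Python. This is a conceptual illustration only: the real unified client discovers plug-ins through Python entry points, and the services and commands below are simplified stand-ins.

```python
# Toy plug-in registry: each service registers its command set with
# one shared client instead of shipping its own CLI.
registry = {}

def register(service, commands):
    """A service plug-in contributes its commands under its own name."""
    registry[service] = commands

def dispatch(argv, registry):
    """One client front-end routes 'service command args...' to a plug-in."""
    service, command = argv[0], argv[1]
    return registry[service][command](*argv[2:])

# Two hypothetical plug-ins registering with the unified client:
register("server", {"list": lambda: ["vm-1", "vm-2"]})
register("volume", {"create": lambda name: f"created {name}"})
```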

At the same time, API calls for the various subprojects in OpenStack are now more uniform, along with the SDKs that go with them, so it’s easier for developers to write apps that plug directly into OpenStack components.

OpenStack instances are also easier to get up and running — an aim with each passing revision of OpenStack. This time around, more of the platform’s core settings come with defaults chosen, and many previously complex setup operations have been whittled to a single step. OpenStack’s identity and networking services, Keystone and Neutron, both feature these improvements.

Big tent or big problems?

Mitaka marks the first major OpenStack release since the project adopted its Big Tent governance model. In an attempt to tame project sprawl, OpenStack resolved to reform the way projects are included and to describe which projects are best suited to what scenarios.

Julien Danjou, software engineer at Red Hat and author of “The Hacker’s Guide to Python,” believes OpenStack’s core problems haven’t been solved by the Big Tent model. “OpenStack is still stuck between its old and new models,” he said in a blog post. The old model of OpenStack, a tiny ecosystem with a few integrated projects, has given way to a great many projects where “many are considered as second-class citizens. Efforts are made to continue to build an OpenStack project that does not exist anymore,” Danjou said.

Chris Dent, a core contributor to OpenStack, feels Big Tent has diluted the project’s unity of purpose. “We cannot effectively reach our goal of interoperable but disparate clouds if everyone can build their own custom cloud by picking and choosing their own pieces from a collection,” he said.

Dent thinks OpenStack should be kept small and focused, “with contractually strong APIs … allowing it to continue to be an exceptionally active member of and user of the larger open source community.”

Mitaka’s work in unifying the API set and providing a common CLI is a step in that direction. But countering that is OpenStack’s tendency to become more all-encompassing, which appeals only to a narrow, vertical set of customers — service providers, for instance, or operations like eBay — with the cash and manpower to make it work.

Red Hat covers cloud apps with OpenStack and Cloud Suite

With its two latest releases, Red Hat makes good on its previously stated plans to extend open source out of the data center and across the entire dev stack.

Red Hat OpenStack Platform 8 and Red Hat Cloud Suite provide contrasting methodologies for building and delivering hybrid cloud apps on open source infrastructure. Cloud Suite is an all-in-one package of Red Hat’s cloud technologies. OpenStack Platform, meanwhile, adds value and ease of use with both Red Hat and third-party hardware.

Making the hard part easy

OpenStack is complicated to deploy and maintain, so Red Hat and other third-party vendors tout ease of use and management as selling points. As Matt Asay pointed out, Red Hat’s mainstay is to simplify complex technology (like open source infrastructure apps) for enterprise settings.

Red Hat’s previous incarnations of OpenStack were built with this philosophy in mind, and the current version ramps it up. Upgrading OpenStack components, long regarded as thorny and difficult, is handled automatically by Red Hat’s add-ons. CloudForms, Red Hat’s management tool for clouds, comes as part of the bundle, providing yet another option to offset OpenStack’s management complexities.

OpenStack has been trying to solve these problems as well, as shown with its most recent version, code-named Mitaka. It features tools like a unified command line and a more streamlined setup process with sane defaults. But Red Hat’s OpenStack uses the previous Liberty release, so it will be at least another release cycle before the changes find their way into Red Hat’s work.

Red Hat also has been trying to sweeten OpenStack’s pot via a strategy explored by several other OpenStack vendors: hardware solutions. Red Hat and Dell have previously partnered to sell the former’s OpenStack solutions on the latter’s hardware. The latest generation of that partnership provides yet another means of putting OpenStack into more hands: the On-Ramp to OpenStack program.

All of this is meant to broaden OpenStack’s appeal and to make it more than the do-it-yourself cloud favored by a few large companies and telcos. (OpenStack Platform 8 has “telco-focused preview” features.) As Asay noted, while individual OpenStack customers are large, the overall field remains small because for many enterprise customers, OpenStack looks like too much of a solution for not enough of a problem. That didn’t start with Red Hat, and so far it’s unlikely Red Hat alone can change that.

A three-piece Suite

For that reason, Red Hat isn’t depending on OpenStack alone, as its second big release today, Red Hat Cloud Suite, shows. It’s aimed at a broader, and likely more rewarding, market: those building cloud applications with containers who want to concentrate on the app lifecycle rather than the deployment infrastructure.

Cloud Suite also uses OpenStack, but as a substrate managed through Red Hat’s CloudForms software. On top of that is the part users will deal with most directly: Red Hat’s OpenShift PaaS for managing containerized applications in Docker. (OpenShift got high marks from InfoWorld’s Martin Heller for being “robust, easy-to-use, and highly scalable.”)

CloudForms treats OpenStack as one of many possible cloud layers that can be abstracted away. To that end, the apps deployed on OpenShift can run in multiple places — local and remote OpenStack clouds, Azure clouds, and so on. This part of Red Hat’s strategy for hybrid cloud echoes Google’s ambitions, in that it allows the user to work with open source software and open standards to deploy apps to both local and remote clouds.

OpenStack was regarded as the original method to pull that off. While Red Hat hasn’t abandoned OpenStack, its focus remains narrow. Cloud Suite, due to its flexibility and emphasis on applications rather than infrastructure, seems likely to draw a broader crowd.

7 new OpenStack guides and tips

Learning how to deploy and maintain OpenStack can be difficult, even for seasoned IT professionals. The number of things you must keep up with seems to grow every day.

Fortunately, there are tons of resources out there to help you along the way, whether you are a beginner or a cloud guru. Between the official documentation, IRC channels, books, and a number of training options available to you, as well as the number of community-created OpenStack tutorials, help is never too far away.

On Opensource.com, every month we take a look back at the best tips, tricks, and tutorials published to the web to bring you some of the most useful. Here are some of the best guides and hints we found last month.

  • First up, let’s take a look at TripleO, an OpenStack deployment tool. Adam Young takes us through the basic steps of his experimentation with getting started with TripleO, by deploying RDO on CentOS 7.1.
  • If you’re looking to deploy applications on top of OpenStack, it can help to have some simple examples at hand. Why not take a look at some simple Heat templates? Cloudwatt gives us new examples this month including MediaWiki, Minecraft, and Zabbix.
  • If you work in OpenStack development, you know that it can be difficult to reproduce bugs. OpenStack has so many moving parts that replicating the exact circumstances that produced your error can be a non-trivial task. In this blog post, learn some best practices in debugging hard-to-find test failures through an example with Glance.
  • Next, explore a new feature from the Liberty release designed to make it easier to share Neutron networking resources between projects and tenants in an OpenStack deployment, Role Based Access Controls (RBAC). Learn the basic commands necessary to manage RBAC policies and how to set up basic controls in your cloud deployment.
  • Another new feature of the Liberty release is on the storage side: the ability to back up in-use volumes in Cinder. In this short article, learn more about how the procedure works and how Cinder manages the process.
  • Also relatively new, from the Kilo release, is the introduction of ML2 port security into Neutron, a useful feature for Network Functions Virtualization. To learn more about how ML2 port security works and how to enable it, see this short walkthrough from Kimi Zhang.
  • Finally, for anyone trying to work out bugs in Neutron network creation using pdb (the Python debugger), this quick step-by-step post from Arie Bregman will get you past some common issues.
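If you haven’t seen a Heat template before, the format is a short declarative YAML document. Here is a minimal sketch that boots a single server; the image and flavor names are placeholders for whatever exists in your own cloud:

```yaml
heat_template_version: 2015-10-15

description: Minimal example that boots one server (image/flavor names are placeholders)

parameters:
  key_name:
    type: string
    description: Name of an existing Nova keypair

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.4        # placeholder: use an image from your cloud
      flavor: m1.small           # placeholder: use a flavor from your cloud
      key_name: { get_param: key_name }

outputs:
  server_ip:
    description: First IP address assigned to the server
    value: { get_attr: [server, first_address] }
```

Saved as, say, server.yaml, a template like this can be launched with something along the lines of `heat stack-create -f server.yaml -P key_name=mykey demo`.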

Looking for more? Be sure to check out our OpenStack tutorials collection for over a hundred additional resources. And if you’ve got another suggestion which ought to be on our next list, be sure to let us know in the comments.

An Austin summit preview, new survey results, and more OpenStack news

Interested in keeping track of what is happening in the open source cloud? Opensource.com is your source for news in OpenStack, the open source cloud infrastructure project.


A compilation of 7 new OpenStack tutorials

Getting started, learning more, or even just finding the solution to your particular problem within the OpenStack universe can be quite an undertaking. Whether you’re a developer or an operator, it can be hard to keep up with the rapid pace of development of various OpenStack projects and how to use them. The good news is that there are a number of resources to help you out, including the official documentation, a number of third-party OpenStack certification and training programs, and community-authored tutorials.

Here at Opensource.com, every month we put together a list of the best tutorials, how-tos, guides, and tips to help you keep up with what’s going on in OpenStack. Check out our favorites from this month.

  • If you’ve ever used ownCloud as a file sharing solution, either personally or for your company, you know just how versatile it is in terms of setting up storage backends. Did you know that among those options is the OpenStack Swift object storage platform? Learn how to set up ownCloud to work with OpenStack Swift in this simple tutorial.
  • Just getting started with exploring OpenStack, and want to make a go at installing it locally? Here’s a quick guide to setting up Devstack in a virtual machine, along with getting the Horizon dashboard working so that you can have a visual interface with your test cloud.
  • Ready to take the next step and install OpenStack in a server environment? Here’s how to deploy the RDO distribution of OpenStack onto a single server using Ansible.
  • Once you’re running applications in your OpenStack cloud, you need some way to keep track of performance and any issues that pop up on each server. David Wahlstrom takes a look at 6 easy-to-use tools for monitoring applications on your virtual servers.
  • If you’re an upstream OpenStack developer, you spend a good amount of time on IRC. It’s where both weekly meetings and a lot of casual conversations take place. But we can’t all be online 24/7. Here’s a handy guide from Steve Martinelli about how to set up a ZNC bouncer to keep an eye on IRC conversations when you’re away from your computer.
  • The OpenStack Health dashboard is a quick and easy way to see what’s going on in the OpenStack continuous integration environment. The dashboard makes it easy to see how many jobs are run in any given time period, and what the failure rate for tests within those jobs is. Learn more about how it works in this explainer article.
  • Even for networking experts, occasional speed bumps happen when managing virtual networks in OpenStack. Arie Bregman takes us through some of the most common problems with OpenStack’s Neutron networking project configuration and how to go about troubleshooting and solving the issues.
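To give a flavor of the Devstack approach mentioned above: after cloning the devstack repository, a minimal local.conf is all that `./stack.sh` needs to bring up a test cloud. The passwords below are placeholders you would choose yourself:

```
[[local|localrc]]
ADMIN_PASSWORD=secret            # placeholder: pick your own passwords
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
```

With that file in place in the devstack checkout, running `./stack.sh` installs and starts the core services, and the Horizon dashboard should then be reachable at http://&lt;host-ip&gt;/dashboard.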

CoreOS launches rkt, the container that’s not Docker


CoreOS – a 2013 San Francisco startup backed by Google Ventures and $20 million in funding – is offering an alternative to the wildly popular Docker application container runtime that is sweeping the market.

Alex Polvi, CEO of CoreOS, says the company has developed rkt, a more security-conscious way to run application containers than Docker. CoreOS put out the 1.0 general availability open source release of rkt on Thursday.


“The way we approach open source software is that we build modular components,” says Polvi, who before starting CoreOS ran Rackspace’s Bay Area product team. Rkt is one of those components. To understand CoreOS, it’s helpful to understand where rkt fits in CoreOS’s broader offerings.

The company started by developing CoreOS – a Linux-based operating system meant for the new world of distributed computing. As application containers took off, Polvi and his team were less than impressed with some of the design decisions made by Docker, which has been the dominant container company.
Alex Polvi, founder and CEO of CoreOS

So, CoreOS began developing rkt. It differs from Docker in a couple of ways. For example, Docker uses a daemon architecture that runs with root access on Linux. Polvi says that’s not such a good idea: if Docker is downloading container images from the Internet, there should be some buffer between the downloaded images and the container runtime in case one of those images is nefarious. Rkt, on the other hand, downloads the container image in one process and executes it in a separate one. Polvi says CoreOS is “borrowing decades of Unix best practices” to make rkt.
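A rough sketch of what that separation looks like in practice, using rkt’s documented subcommands (the etcd image name is illustrative, and exact flags have varied between rkt releases):

```
# One-time step: trust the publisher's image-signing key
sudo rkt trust --prefix coreos.com/etcd

# Fetch downloads and verifies the image into rkt's local store
# without executing anything from it
sudo rkt fetch coreos.com/etcd:v2.0.9

# Execution is a separate invocation against the local store
sudo rkt run coreos.com/etcd:v2.0.9
```

The point of the design is that the download-and-verify step and the execute step are distinct processes, rather than both living inside one long-running privileged daemon.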

The broader point here is that CoreOS is trying to provide a market alternative to Docker’s application container runtime. Is it more secure? Well, many customers have found secure ways to run Docker, so it’s not as if Docker is unsafe. But a market of options is good.

CoreOS has other projects too. In addition to the aforementioned CoreOS Linux operating system, the company sells Tectonic, a packaged distribution of CoreOS, rkt, and the open source container orchestrator Kubernetes. It also sells a container image library.

Containers continue to be a hot topic in application development and infrastructure management, so expect to hear more about CoreOS vs. Docker.