Why Red Hat’s OpenShift, not OpenStack, is making waves with developers

Red Hat has historically whiffed with developers. But its PaaS offering, OpenShift, may mark a new era for the open source giant.

Developers may be the new kingmakers in the enterprise, to borrow Redmonk’s phrase, but they’ve long ignored the enterprise open source leader, Red Hat. Two years ago, I called out Red Hat’s apparent inability to engage the very audience that would ensure its long-term relevance. Now, there are signs that Red Hat got the message.

And, no, I’m not talking about OpenStack. Though Red Hat keeps selling OpenStack (seven of its top-30 deals last quarter included OpenStack, according to Red Hat CEO Jim Whitehurst), it’s really OpenShift, the company’s Platform-as-a-Service (PaaS) offering, that promises a bright, developer-friendly future for Red Hat.

Looking beyond OpenStack

Red Hat continues to push OpenStack, and rightly so—it’s a way for Red Hat to certify a cloud platform just as it once banked on certifying the Linux platform. There’s money in assuring risk-averse CIOs that it’s safe to go into the OpenStack cloud environment.

Even so, as Whitehurst told investors in June, OpenStack is not yet “material” to the company’s overall revenue, and generally generates deals under $100,000. It will continue to grow, but OpenStack adoption is primarily about telcos today, and that’s unlikely to change as enterprises grow increasingly comfortable with public IaaS and PaaS options. OpenStack feels like a way to help enterprises avoid the public cloud and try to dress up their data centers in the fancy “private cloud” lingo.

OpenShift, by contrast, is far more interesting.

OpenShift, after all, opens Red Hat up to containers and all they mean for enterprise developer productivity. It’s also a way to pull through other Red Hat software like JBoss and Red Hat Enterprise Linux, because Red Hat’s PaaS is built on these products. OpenShift has found particular traction among sophisticated financial services companies that want to get in early on containers, but the list of customers includes a wide range of companies like Swiss Rail, BBVA, and many others.

More and faster

To be clear, Red Hat still has work to do. According to Gartner’s most recent Magic Quadrant for PaaS, Salesforce and Microsoft are still a ways ahead, particularly in their ability to execute on their vision:

Figure: Gartner’s Magic Quadrant for enterprise PaaS.

Still, there are reasons to think Red Hat will separate itself from the PaaS pack. For one thing, the company is putting its code where it hopes its revenue will be. Red Hat learned long ago that, to monetize Linux effectively, it needed to contribute heavily. In similar fashion, only Google surpasses Red Hat in Kubernetes code contributions, and Docker Inc. is the only company to contribute more code to the Docker container project.

Why does this matter? If you’re an enterprise that wants a container platform then you’re going to trust those vendors that best understand the underlying code and have the ability to influence its direction. That’s Red Hat.

Indeed, one of the things that counted against Red Hat in Gartner’s Magic Quadrant ranking was its focus on Kubernetes and Docker (“Docker and Kubernetes have tremendous potential, but these technologies are still young and evolving,” the report said). These may be young and relatively immature technologies, but all signs point to them dominating a container-crazy enterprise world for many years to come. Kubernetes, as I’ve written, is winning the container management war, putting Red Hat in pole position to benefit from that adoption, especially as it blends familiar tools like JBoss with exciting-but-unfamiliar technologies like Docker.

Red Hat has also been lowering the bar for getting started and productive with OpenShift, as Serdar Yegulalp described. By focusing on developer darlings like Docker and Kubernetes, and making them easily consumable by developers and more easily run by operations, Red Hat is positioning itself to finally be relevant to developers…and in a big way.

VMware: We love OpenStack!

A few years ago VMware and OpenStack were foes. Oh, how times have changed.

This week VMware is out with the 2.5 release of its VMware Integrated OpenStack (VIO). The virtualization giant continues to make it easier to run the open source cloud management tools on top of VMware virtualized infrastructure.

VIO’s 2.5 release shows VMware’s continued commitment to embracing OpenStack, something that would have seemed out of the question a few short years ago.

The 2.5 release comes with some nifty new features: users can now automatically import vSphere virtual machine images into their VIO OpenStack cloud, the resource manager control plane has been slimmed down by 30 percent so it takes up less memory, and there are tighter integrations with VMware’s NSX.
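
The image-import feature is easiest to picture from the OpenStack side: registering a VMDK with Glance. The names below are hypothetical and the command is echoed rather than run, since VIO’s own tooling automates this step and may differ in detail:

```shell
# Hypothetical image file; VIO's automated import may differ from this
IMAGE_FILE="web-server.vmdk"

# Standard Glance registration of a vSphere VMDK (echoed for illustration)
GLANCE_CMD="glance image-create --name web-server --disk-format vmdk --container-format bare --file $IMAGE_FILE"
echo "$GLANCE_CMD"
```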

The news shows the continued maturation of both the open source project and the virtualization giant. Once VMware and OpenStack were seen as rivals. In many ways, they still are. Both allow organizations to build private clouds. But VMware (smartly in my opinion) realized that giving customers choice is a good thing. Instead of being an all or nothing VMware vs. OpenStack dichotomy, VMware has embraced OpenStack, allowing VMware’s virtualization management tools to serve up the virtualized infrastructure OpenStack needs to operate.

VMware’s doing the same thing with application containers. Though containers were once seen as a threat to virtual machines, VMware now argues that the best place to run them is in its own virtual machines, slimmed down and customized to run containers. Stay tuned to see if all these gambles pay off.

7 new OpenStack guides and tips

Learning how to deploy and maintain OpenStack can be difficult, even for seasoned IT professionals. The number of things you must keep up with seems to grow every day.

Fortunately, there are tons of resources out there to help you along the way, whether you are a beginner or a cloud guru. Between the official documentation, IRC channels, books, and a number of training options available to you, as well as the number of community-created OpenStack tutorials, help is never too far away.

On Opensource.com, every month we take a look back at the best tips, tricks, and tutorials published to the web to bring you some of the most useful. Here are some of the best guides and hints we found last month.

  • First up, let’s take a look at TripleO, an OpenStack deployment tool. Adam Young walks us through his first steps in getting started with TripleO by deploying RDO on CentOS 7.1.
  • If you’re looking to deploy applications on top of OpenStack, it can help to have some simple examples at hand. Why not take a look at some simple Heat templates? Cloudwatt gives us new examples this month including MediaWiki, Minecraft, and Zabbix.
  • If you work in OpenStack development, you know that it can be difficult to reproduce bugs. OpenStack has so many moving parts that replicating the exact circumstances that produced your error can be a non-trivial task. In this blog post, learn some best practices in debugging hard-to-find test failures through an example with Glance.
  • Next, explore a new feature from the Liberty release designed to make it easier to share Neutron networking resources between projects and tenants in an OpenStack deployment, Role Based Access Controls (RBAC). Learn the basic commands necessary to manage RBAC policies and how to set up basic controls in your cloud deployment.
  • Another new feature of the Liberty release is on the storage side: the ability to back up in-use volumes in Cinder. In this short article, learn more about how the procedure works and how Cinder manages the process.
  • Also relatively new, from the Kilo release, is the introduction of ML2 port security into Neutron, a useful feature for Network Functions Virtualization. To learn more about how ML2 port security works and how to enable it, see this short walkthrough from Kimi Zhang.
  • Finally, for anyone trying to work out bugs in Neutron network creation using pdb (the Python debugger), this quick step-by-step post from Arie Bregman will get you past some common issues.
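
As a taste of the Neutron RBAC feature mentioned above, sharing a network with one other tenant on a Liberty cloud is a single CLI call. The UUIDs below are placeholders, and the command is echoed rather than run, since there is no live cloud behind this sketch:

```shell
# Placeholder UUIDs -- substitute real values from your own cloud
NETWORK_ID="net-0000"
TARGET_TENANT="tenant-0000"

# Grant TARGET_TENANT shared access to NETWORK_ID (echoed for illustration;
# on a real Liberty deployment, drop the echo and run the command directly)
RBAC_CMD="neutron rbac-create --type network --action access_as_shared --target-tenant $TARGET_TENANT $NETWORK_ID"
echo "$RBAC_CMD"
```

A matching `neutron rbac-delete` revokes the grant, and `neutron rbac-list` shows the policies already in place.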

Looking for more? Be sure to check out our OpenStack tutorials collection for over a hundred additional resources. And if you’ve got another suggestion which ought to be on our next list, be sure to let us know in the comments.

Google is said to endorse ARM server chips, but don’t get excited yet

Google is said to be working with Qualcomm to design servers based on ARM processors, which would be a significant endorsement for ARM as it tries to challenge Intel’s dominance in data centers.

Google will give its public backing for Qualcomm’s chips at an investor meeting next week, according to a Bloomberg report Wednesday that cites unnamed sources. If the chips meet certain performance goals, the report says, Google will commit to using them.

It would be a big vote of confidence for both ARM and Qualcomm, but if history is a guide, it’s too early to say how significant the news really is. ARM isn’t the first x86 alternative that Google has rallied behind, and it’s unclear how far the last such effort has come.

Google's IBM Power server board

Two years ago, Google made a big show of support for IBM’s Power processor. It was a founding member of IBM’s OpenPower initiative, which allows companies to design and build Power servers for use in, among other things, cloud data centers like those run by Google.

Google even showed a Power server board it had designed itself. “We’re always looking to deliver the highest quality of service for our users, and so we built this server to port our software stack to Power,” a Google engineer said at the time.

But there’s been little news about the partnership since. Google hasn’t revealed whether it’s using Power servers in production, and last year it made only vague statements that it’s keeping its options open.

Google is secretive about the technologies it uses, and it might well have plans to use both ARM and Power, but public endorsements don’t tell us much, and in the case of ARM it’s likely even Google doesn’t know for sure.

The search giant could have several reasons for showing support for non-x86 architectures. Google probably does want to test Qualcomm’s server chips, just as it tested IBM’s, to see if a different architecture can shave costs off of running its vast infrastructure. A show of support from Google encourages development of the ecosystem as a whole, including tools and software, which will be important if Google decides to put a new architecture in production.

Such statements also serve to pressure Intel, giving Google some price leverage and pushing Intel to develop new, more power-efficient parts — something Intel has done since the ARM threat emerged a few years ago.

There’s been a lot of debate about whether “brawny” cores, like Power, or “wimpy” cores, like ARM, are more efficient for cloud workloads. It depends partly what workloads you’re talking about, and there are also costs to consider like porting software.

Urs Holzle, who’s in charge of Google’s data centers, once published a paper on the topic titled “Brawny cores still beat wimpy cores, most of the time.” But that was in 2010, and the ARM architecture has evolved a lot since then.

Qualcomm's ARM server chip

Qualcomm disclosed its plan to sell ARM server chips in October, joining rivals like AppliedMicro. It showed a test chip with 24 cores running a Linux software stack, but it still hasn’t said when a finished product will go on sale.

Derek Aberle, Qualcomm’s president, told investors last week that shipments would begin “probably within the next year or so.” But he suggested significant sales are still “out a few years.”

A vote from Google could do a lot to boost its chances. But it’s also hard to know where all of this will end up. The only sure thing is that the processor business is a lot more interesting than it was a few years ago.

Hypernetes unites Kubernetes, OpenStack for multitenant container management

Hypernetes project aims to deliver container orchestration with VM-level isolation

Hyper, creator of a VM-isolated container engine that’s compatible with Docker, has debuted a project for running multitenant containers at scale.

The Hypernetes project fuses the Hyper container engine with Kubernetes and uses several pieces from OpenStack to create what it describes as “a secure, multitenant Kubernetes distro.”

At the bottom of the Hypernetes stack is bare metal, outfitted with Hyper’s HyperD custom container engine to provision and run containers with VM-level isolation. Kubernetes manages the containers through HyperD’s API set. Other functions are controlled by components taken from OpenStack, including Keystone, for identity management and authentication; Neutron, for network management; and Cinder/Ceph, for storage volume management.

HyperD is one of many recent products that deliberately blurs the line between a small, lightweight, fast container engine and a hypervisor with strong isolation and a full roster of emulated system resources. Canonical’s LXD, the experimental Novm project, and Joyent’s Triton Elastic Container Host all experiment with implementing different kinds of isolation in a runtime.

The line will likely remain blurred, as companies experiment with combining the best of both worlds to provide products (and projects) that appeal to a range of needs.

Hypernetes’ blend of pieces from profoundly different parts of the container/VM ecosystem is intriguing — not only in terms of what it means for Hypernetes, but for OpenStack as well. Most discussion about OpenStack’s future involves the evolution of the stack as a whole or different distributions simplifying tasks such as deployment or maintenance. But with Hypernetes, OpenStack is treated not as a framework but as a source for technologies to be used in a new context, unburdened by OpenStack’s notorious complexity issues.

The Apache-licensed Hypernetes is still new, so there’s little documentation for the project. Most of what’s available is a fork of the docs for Kubernetes. This implies that anyone familiar with Kubernetes should be able to pick up Hypernetes and run it or fashion a new product or service around it — in much the same way that Google used Kubernetes as the core for its Google Container Engine.

Joyent: The Docker-friendly cloud you’ve never heard of

Joyent started the container party, later validated by Docker. Despite superior technology, does an independent public cloud like Joyent have a chance?

One of the privileges of writing for InfoWorld is that I get to meet some of the brightest minds of the industry — and they actually talk to me. In October I met Bryan Cantrill, the CTO of Joyent, while attending Couchbase Live NY.

Cantrill is one of those people who could give a truly entertaining technical or technomarketing talk no matter what the content happens to be. He weaves in computer history and pops off the stage at you whenever he speaks. The last person I met who could present on anything and make it as interesting was Marc Fleury back in my JBoss days.

The PaaS that wasn’t

Before we get to my conversation with Cantrill, a little background: In 2012 I wrote “Which freaking PaaS should I use?” under the theory that, before long, everything that wasn’t SaaS would quickly move to PaaS.

After that, crickets — I mean, it took Cloud Foundry a rewrite, a spin-off, and a long time to gain any traction whatsoever. Red Hat’s OpenShift has improved a lot, but it isn’t really a public cloud play and has yet to take over the world. I barely remember that Heroku exists, and for a little while, it looked like there was Amazon, that upstart Microsoft, and everyone else.

But something changed: the emergence of Docker. Docker let us have most of the benefits of PaaS while still letting us lovingly tune the software layout and use specific … everything (because who among us doesn’t like to coil the toaster in the process of making toast — or deploying apps).

While Cantrill’s company Joyent is best known for being “the Node.js people,” that isn’t how Joyent keeps the lights on. Joyent is a cloud provider with both compute and storage options, as well as its own data centers. It also sells a supported open source environment similar to Cloud Foundry or OpenShift, but based entirely on container technology.

In Cantrill’s words, Joyent’s technology is better than, or at least ahead of, the market:

Joyent has been in the right place at the wrong time for an extended period of time. We’re a company really based around containers and the belief in container-based virtualization, which is great. But we were way ahead of the market. And it’s only been the last year or so that the market has caught up and realized, hey, wait a minute, that container idea is actually a pretty great idea.

In Cantrill’s presentation he did what I’d been dying to see: connect Docker to Solaris Zones. Cantrill, an ex-Sun guy, totally understood the history and made that connection. But Joyent can be defined as much by what it is not as well as by what it is. Cantrill acknowledges that, as CTO of a relatively little-known public cloud provider, he faces an uphill battle in mind share: “We are not Amazon and we are not OpenStack.”

On the other hand, who wants to be OpenStack? Cantrill likens OpenStack to the old Solaris CDE; having several companies attempt to develop something together rarely (possibly never) works out well. There are a lot more players in OpenStack than there were in CDE. Cantrill thinks OpenStack’s time has passed:

I think people are now beginning to realize that, actually. OpenStack is an also-ran for yesterday’s revolution. We’re really much more focused on what we think is tomorrow’s revolution. Today’s, hopefully, but tomorrow’s in terms of the container revolution and being an all-container-based stack. So we’ve got it all — and the software we develop is open source.

Like other vendors, Joyent has both a public and a private cloud offering. Joyent’s container technology, Triton, offers kernel-native containers but supports Docker packaging. Joyent also claims bare-metal performance. After all, Triton is based on decades-mature Solaris Zones.

Consider the people trying to run Docker images on Amazon. That is container-style virtualization running on top of actual virtualization. Get out your wallet, because Bezos needs extra jet fuel money! Not only is Joyent’s CPU and utilization pricing cheaper than Amazon’s, but you’re also likely to use less, because you’re not running your containers on top of virtual machines.

This un-Hadoop/un-S3 storage is not EMC approved

SANs are kind of dumb. The idea of putting your storage way across the network from compute and shoving all the disks in a box was based on a rather specific model of client-server computing. Newer, more resilient software designs don’t depend on emulating that resiliency with appliances full of disks (sorry, Dell).

This is where Joyent runs ahead of the curve — and a bit ahead of HDFS, EMR, and S3 — with Manta, an object storage solution with integrated compute:

I’ll give you an example of something we’ve already built [Manta], we use every day, that the market is still not ready for, so you don’t hear me mention it, because people are just not ready for it. And that is the ability to break container’s storage. So we’ve got an object store that’s like S3, but if you want to actually compute on your objects, instead of having to drag them out of the object store, you can spin up a container of where those objects actually live.

I observed, snarkily, that Cantrill didn’t seem to believe that stuffing a bunch of disks in a box, connecting it with a network cable, and sticking it “way over there” is a fundamentally good thing:

No, I don’t think that centralized storage is a great idea… Apps can compute; the divide between storage and compute doesn’t make sense. It feels like it makes sense, and it has this kind of nice property that your compute becomes totally transient. This machine dies, you can just spin up another one, because your storage is over here, which is great, but it’s kinda like the end of the good news. Because, by the way, your data’s over here, and your compute is over here. And by the way, the component that you’re trying to optimize for failure — compute — is, like, the most reliable component we’ve got. It’s actually the spindle that is the most unreliable component we have. The spindle is still the component whose death is assured before depreciation.

With Manta you can spin up storage via an API and run your massively parallel processes written in R, Python, Node.js, Perl, Ruby, Java, C/C++, and more. Manta also supports streaming. However, Manta is its own thing. It isn’t Spark, and it isn’t Hadoop.
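
With the node-manta CLI, running compute against stored objects looks roughly like the sketch below. The path is illustrative, and the pipeline is echoed rather than run, since there is no real Manta endpoint behind it:

```shell
# Hypothetical object path under a Manta storage area
LOG_DIR="/user/stor/logs"

# Count ERROR lines in each stored log, in place -- the grep runs in a
# container spun up where the objects live (echoed here for illustration)
MJOB_CMD="mfind $LOG_DIR | mjob create -o -m 'grep -c ERROR'"
echo "$MJOB_CMD"
```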

It’s hard not to think of history repeating itself. Without a compatible API and an ecosystem, Manta may be yet another technology that’s “ahead of the market” — it’s better, but can’t reach mass adoption because either people don’t understand it or they care more about the ecosystem than the superiority of the underlying technology.

Is better good enough?

When Joyent first developed its container technology, it had no industry standard or Linux-based API. Docker ended up creating the de facto standard that made people want to jump on board the container bandwagon — which has in turn created a market for Joyent.

Maybe when Spark becomes a first-class API, the same will happen with Manta. It’s clearly a good design and open source, so Manta could capture the imagination of people looking for something better than HDFS or SANs or the current storage mess around virtualization.

But ecosystems are as much social as they are technical or economic. Is having better technology enough to enable Joyent to break through? Joyent is picking a lot of fights at once: Amazon, Hadoop/Spark, and friends, everyone dreaming of the “hybrid cloud.” Is better good enough to win customers while you make enemies? Time will tell.

How to Install and Configure OpenStack on a Server

There are a few fairly easy ways to get your hands on OpenStack and try it out. One is to use a commercial public cloud like Rackspace or Cloudwatt, or the free Trystack. If you’re in a hurry, go for one of the paid services, because it can take days to weeks to get approved for a Trystack account. Using a public cloud is a good way to dive right into developing and testing applications.

Figure 1: A successful OpenStack installation.

If you’re more interested in spelunking into the guts of OpenStack and learning how to administer it then you can build your own server to play with, and that is what we’re going to do with the DevStack installer. DevStack is an amazing shell script that installs the OpenStack components, a LAMP stack and CirrOS, which is a tiny Linux distro built for running cloud guests. (Cirrus? Get it? Finally a good geek pun.) I am going to cover installation in detail, because even though it’s easier than it’s ever been it’s still a bit tricksy.

Getting Started

With most Linux applications it’s safe to install and remove and play with whatever you want to test on your main Linux PC, because Linux is a grown-up operating system that does not keel over when you ask it to do work. Unlike certain overpriced proprietary operating systems that are delicate and full of excuses. But I digress.

Don’t put OpenStack on your main PC, because it needs a dedicated system, so for this article I’m running it in VirtualBox on Lubuntu 12.04 on my Linux Mint 13 system. Sure, I know, real server nerds don’t run a graphical environment on their servers, but for testing it’s a nice convenience, and Lubuntu is lightweight. If you elect to run your OpenStack server in a virtual machine, give it a minimum of 1.5GB RAM and 6GB storage. If you have a multicore system and can spare more than one core, do so, because OpenStack, even in a simple testing setup, gets hungry.

First create a user named stack to use for installing DevStack:

$ sudo useradd stack
$ sudo passwd stack
Enter new UNIX password: 
Retype new UNIX password:

Then give stack full sudo privileges:

$ sudo visudo
stack ALL=(ALL:ALL) NOPASSWD: ALL

Now logout, and then log back in as stack. If you don’t have git then install it:

$ sudo apt-get install git -y

Then pull DevStack from GitHub. This clones it into the current directory, so I cd to /var and then run git:

$ git clone git://github.com/openstack-dev/devstack.git

This puts everything in /var/devstack. cd to devstack and take a few minutes to look through the various scripts and files. For whatever reason, which I have not figured out, I ran into permissions problems on my first run, so I changed ownership of /var/devstack and /opt/stack to stack:

$ sudo chown -R stack:stack /opt/stack
$ sudo chown -R stack:stack /var/devstack

I also changed /var/www to www-data:www-data; Ubuntu’s default is root, which is not a good practice.

It is good to have logging, so create /var/stacklog and make it owned by stack:

$ sudo mkdir -p /var/stacklog
$ sudo chown stack:stack /var/stacklog

Configuration

There is one more prerequisite, and that is to create /var/devstack/localrc. localrc always goes in your DevStack root, and it configures networking, passwords, logging, and several other items we’re going to ignore for the time being. This is what mine looks like, just a minimal configuration:

HOST_IP=10.0.1.15
FLAT_INTERFACE=eth0
FLOATING_RANGE=10.0.1.224/28
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=supersecret
RABBIT_PASSWORD=supersecret
SERVICE_PASSWORD=supersecret
SERVICE_TOKEN=supersecret

OpenStack uses a lot of passwords, so for testing I make it easy on myself by recycling the same one. The HOST_IP is the ethX inet addr of your OpenStack server, whether it’s virtualized or not, like this example:

$ ifconfig
eth0  Link encap:Ethernet  HWaddr 90:ee:aa:a2:50:aa  
      inet addr:10.0.1.15  Bcast:10.0.1.255  Mask:255.255.255.0

Do create a static IP address for your DevStack server, or you will suffer. Networking is rather involved for OpenStack, and we’ll get into that more in the future; for now we’ll keep it as simple as possible.

FLAT_INTERFACE is the server’s Ethernet interface; if you have just one it’s not necessary to include this line. You could have an internal and a public-facing interface, just like on non-cloud servers, and the FLAT_INTERFACE corresponds to the internal interface.

FLOATING_RANGE is a pool of addresses for any OpenStack servers that need to be available to the network. This must not overlap with the server’s IP address, which is why my example is way out at the end of the address range.
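
If you want a quick sanity check that your HOST_IP falls outside FLOATING_RANGE, a one-liner using Python’s ipaddress module (shipped with Python 3.3 and later) does the arithmetic; the values below are the ones from the localrc above:

```shell
HOST_IP=10.0.1.15
FLOATING_RANGE=10.0.1.224/28

# "clear" means the floating pool does not contain the host address
OVERLAP=$(python3 -c "import ipaddress; print('overlap' if ipaddress.ip_address('$HOST_IP') in ipaddress.ip_network('$FLOATING_RANGE') else 'clear')")
echo "$OVERLAP"
```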

The Horizon dashboard, after OpenStack installation.

Alrighty then, it’s time to finish the installation. Change to /var/devstack and run:

$ ./stack.sh

This will run for a while and fill your screen with all kinds of output. Go take a nice break and think about pleasant things. When it completes a successful run you’ll see something like figure 1, above.

Now fire up a Web browser on your OpenStack server and point it to the IP address it told you, which in my example is http://10.0.1.15. If you see the login page, you may congratulate yourself for a successful installation, and for accessing the Horizon dashboard (Figure 2). Go ahead and log in as admin with whatever password you set in localrc. You can poke around and explore the different screens without hurting anything. There isn’t much to see yet, but you’ll find a few images and report pages.

If you make a mess, the good DevStack people included a do-over script, clean.sh. This reverses stack.sh and leaves your git clone files in place, so run clean.sh and then stack.sh to re-do your installation.

ZeroStack hatches plans for no-hassle, cloud-managed OpenStack

The cloud-managed appliance for delivering OpenStack to small enterprises faces the same market challenges as any other current offering

If there’s one part of the OpenStack market that never stops yielding enterprising newcomers, it’s the market for solutions to simplify OpenStack implementations. Not only could OpenStack still use help there, but such an approach nearly guarantees a revenue stream.

Newest to the table is ZeroStack, a freshly decloaked startup from VMware and AMD alumni, with a novel approach to OpenStack management for smaller and midtier outfits.

Your OpenStack is their business

ZeroStack’s idea is a mixture of an on-premises 2U appliance and a cloud-based SaaS portal. The appliance, a mixture of infrastructure and controller, is installed in the customer’s data center, and administration is done through ZeroStack’s cloud portal. Changes to the software are pushed out automatically to appliances from the cloud, and ZeroStack claims it can bring an existing OpenStack installation up to the latest revision of the product within two months of release.

Another touted advantage to ZeroStack’s approach: The OpenStack API set is still exposed through the appliance. After an organization gets its legs with OpenStack via ZeroStack’s administration system, it can make more direct use of those APIs if it chooses.

ZeroStack is also adding features like the capacity for self-healing so that any health problems with an OpenStack setup can be dealt with by way of the cloud service’s collective intelligence.

Pitfalls still ahead

One obvious implication of ZeroStack’s offering is it could serve as a starting point for the truly open hybrid cloud OpenStack’s advocates have been stumping for. ZeroStack doesn’t have explicit plans for a hybrid play yet, but co-founder and CEO Ajay Gulati (ex-VMware) mentioned in an interview that the company is going to add gateways to other public clouds such as Amazon.

For ZeroStack to stay in the game, it’ll need to avoid the many pitfalls that have claimed other vendors in this space. Nebula comes most immediately to mind, especially after it crashed and burned by misjudging both the size of the market and its real nature. Mirantis, one of the big three names in OpenStack (next to Canonical and Red Hat), has its own appliance strategy, one that doesn’t depend on any one particular hardware configuration and isn’t intended to become the bedrock of the company’s revenue stream.

Gulati was quick to cite differences between ZeroStack’s approach and Nebula’s. Aside from the technical differences between the two appliances (Nebula used a controller-node architecture, while ZeroStack “provides the actual server node on which workloads will run”), Nebula “[shipped] a packed OpenStack and did not have any SaaS platform for operations and management.”

Many OpenStack innovators have ended up being absorbed into the folds of one of OpenStack’s major corporate backers — such as IBM’s acquisition of Blue Box Group and Cisco’s of Piston Cloud, with the products subsumed into the OpenStack-powered cloud platforms of their new parents. For ZeroStack to avoid such a fate, it’ll need to cut the same path as Mirantis. That’s tough to do in a market whose existing size and growth only seem to allow for so many players and so much competition — and where demand for OpenStack tilts more toward larger companies than smaller ones.

Red Hat readies OpenStack for the enterprise

Red Hat has packaged OpenStack to make it easier to deploy for hosting mission critical jobs

Red Hat has polished the open source OpenStack cloud hosting software so that it can be more easily deployed within enterprises.

The latest edition of the Red Hat Enterprise Linux OpenStack Platform, released Wednesday, contains a number of new tools to minimize the amount of low-level configuration that must be done to get the complex stack running.

An open source project with numerous contributors, OpenStack is an integrated collection of management tools for running a cloud service, for internal or external use.

The software, first created by NASA and Rackspace in 2010, has found a home in a wide variety of businesses, including eBay, Comcast, American Express, and Disney, among others.

Like most open source software, the base OpenStack package still requires a lot of expertise to configure and maintain.

As a result, Red Hat is one of a number of companies that offer commercial-grade distributions of OpenStack to ease the process, competing with the likes of Hewlett-Packard, Canonical, IBM, Mirantis, VMware, and Oracle.

The fresh Red Hat Enterprise Linux OpenStack Platform version 7 is based on the Kilo release of OpenStack, issued in April.

The Red Hat distribution comes with a number of additional tools that should make the software stack more palatable for enterprise use.

A new feature provides a way to install OpenStack in a cloud environment, as well as to check that the installation completed correctly.

For daily use, the software stack is designed to provide a single way to easily provision cloud computing resources. Based on the open source TripleO software, this tool also provides the basis for orchestrating complex operations so they run automatically.

Mission-critical applications that need to be running on OpenStack at all times can take advantage of another new feature, which monitors a workload and, should it fail, moves operations to another host.

Data backups have also been expedited, thanks to new support for snapshots that can be copied to NFS and POSIX file systems.

Numerous improvements have also been made to streamline networking functions of the software.

Red Hat has provided more options for opening and closing network traffic ports at the virtual machine level, which gives enterprises more flexibility in securing their virtual machines. OpenStack’s support for the next-generation IPv6 Internet protocol has been improved as well.
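
Port-level controls of this kind are expressed as Neutron security group rules. The group name below is hypothetical and the command is echoed rather than run, as a sketch of what opening a single port looks like on a Kilo-era cloud:

```shell
# Hypothetical security group name
SECGROUP="web-servers"

# Open inbound TCP port 80 for VMs in the group (echoed for illustration)
RULE_CMD="neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 80 --port-range-max 80 $SECGROUP"
echo "$RULE_CMD"
```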

Red Hat collaborated with Cisco to assure that its OpenStack would work well with the Cisco Unified Computing System, an architecture for managing fleets of servers as a single entity.