Why Red Hat’s OpenShift, not OpenStack, is making waves with developers

Red Hat has historically whiffed with developers. But its PaaS offering, OpenShift, may mark a new era for the open source giant.

Developers may be the new kingmakers in the enterprise, to borrow Redmonk’s phrase, but they’ve long ignored the enterprise open source leader, Red Hat. Two years ago, I called out Red Hat’s apparent inability to engage the very audience that would ensure its long-term relevance. Now, there are signs that Red Hat got the message.

And, no, I’m not talking about OpenStack. Though Red Hat keeps selling OpenStack (seven of its top-30 deals last quarter included OpenStack, according to Red Hat CEO Jim Whitehurst), it’s really OpenShift, the company’s Platform-as-a-Service (PaaS) offering, that promises a bright, developer-friendly future for Red Hat.

Looking beyond OpenStack

Red Hat continues to push OpenStack, and rightly so—it’s a way for Red Hat to certify a cloud platform just as it once banked on certifying the Linux platform. There’s money in assuring risk-averse CIOs that it’s safe to go into the OpenStack cloud environment.

Even so, as Whitehurst told investors in June, OpenStack is not yet “material” to the company’s overall revenue, and generally generates deals under $100,000. It will continue to grow, but OpenStack adoption is primarily about telcos today, and that’s unlikely to change as enterprises grow increasingly comfortable with public IaaS and PaaS options. OpenStack feels like a way to help enterprises avoid the public cloud and try to dress up their data centers in the fancy “private cloud” lingo.

OpenShift, by contrast, is far more interesting.

OpenShift, after all, opens Red Hat up to containers and all they mean for enterprise developer productivity. It’s also a way to pull through other Red Hat software like JBoss and Red Hat Enterprise Linux, because Red Hat’s PaaS is built on these products. OpenShift has found particular traction among sophisticated financial services companies that want to get in early on containers, but the list of customers includes a wide range of companies like Swiss Rail, BBVA, and many others.

More and faster

To be clear, Red Hat still has work to do. According to Gartner’s most recent Magic Quadrant for PaaS, Salesforce and Microsoft are still a ways ahead, particularly in their ability to execute on their vision.


Still, there are reasons to think Red Hat will separate itself from the PaaS pack. For one thing, the company is putting its code where it hopes its revenue will be. Red Hat learned long ago that, to monetize Linux effectively, it needed to contribute heavily. In similar fashion, only Google surpasses Red Hat in Kubernetes code contributions, and Docker Inc. is the only company to contribute more code to the Docker container project.

Why does this matter? If you’re an enterprise that wants a container platform then you’re going to trust those vendors that best understand the underlying code and have the ability to influence its direction. That’s Red Hat.

Indeed, one of the things that counted against Red Hat in Gartner’s Magic Quadrant ranking was its focus on Kubernetes and Docker (“Docker and Kubernetes have tremendous potential, but these technologies are still young and evolving,” the report said). These may be young and relatively immature technologies, but all signs point to them dominating a container-crazy enterprise world for many years to come. Kubernetes, as I’ve written, is winning the container management war, putting Red Hat in pole position to benefit from that adoption, especially as it blends familiar tools like JBoss with exciting-but-unfamiliar technologies like Docker.

Red Hat has also been lowering the bar for getting started and productive with OpenShift, as Serdar Yegulalp described. By focusing on developer darlings like Docker and Kubernetes, and making them easily consumable by developers and more easily run by operations, Red Hat is positioning itself to finally be relevant to developers…and in a big way.

Deploy OpenStack Juno Single Node on RHEL/CentOS 7.0

In this article I am going to explain and show you how to deploy an OpenStack IaaS cloud on your home or production server.

About OpenStack Cloud: It is an IaaS cloud platform similar to AWS EC2, but one that you can deploy at home.


Requirements:

1.  RHEL/CentOS 7.0
2.  4GB free RAM
3.  20GB free disk space
4.  Hardware virtualization support

Now follow the given steps:

Step 1: Configure a yum client, either local or live.

A) Local

[root@station123 ~]# cd /etc/yum.repos.d/
[root@station123 yum.repos.d]# cat juno.repo
[redhat]
baseurl=http://192.168.1.254/rhel7
gpgcheck=0

[ashu]
baseurl=http://192.168.1.254/juno-rpm/
gpgcheck=0
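The same repo file can be written in one step. A minimal sketch, assuming the article's 192.168.1.254 mirror (substitute your own LAN mirror); it writes the file in the current directory so you can inspect it before copying it into /etc/yum.repos.d/:

```shell
# Write juno.repo in the current directory; the baseurl values are the
# article's example mirror (192.168.1.254) -- adjust them for your own LAN.
cat > juno.repo <<'EOF'
[redhat]
baseurl=http://192.168.1.254/rhel7
gpgcheck=0

[ashu]
baseurl=http://192.168.1.254/juno-rpm/
gpgcheck=0
EOF

# Then move it into place on the real host:
# cp juno.repo /etc/yum.repos.d/
```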

OR

B) Live (requires an internet connection)

[root@station123 ~]# cd /etc/yum.repos.d/
[root@station123 yum.repos.d]# yum install epel-release -y
[root@station123 yum.repos.d]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm
Step 2: Apply some security-related settings for Juno.

i)   [root@station123 ~]# systemctl   stop  NetworkManager
ii)  [root@station123 ~]# systemctl   disable  NetworkManager   
iii)  [root@station123 ~]# setenforce  0
iv)  [root@station123 ~]# cat  /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

Note: here I have permanently set SELinux to permissive mode. The setenforce 0 command above applies it immediately, and the config file change makes it persist across reboots.

Step 3: The required kernel is already installed by default; just install openstack-packstack.

[root@station123 ~]# yum  install openstack-packstack

Step 4: Now generate the answer file.

[root@station123 ~]# packstack  --gen-answer-file   juno.txt

Note: this is a plain-text file of CONFIG_* key=value options; some take 'y' or 'n', others take values such as IP addresses and passwords.


Some important changes that are required:

i) NTP server IP

Find an NTP server on your local network or on the internet, then set its IP as shown below:

[root@station123 ~]# grep  -i   ntp   juno.txt 
# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=192.168.1.254

ii) Demo provisioning (set to n)

[root@station123 ~]# grep -i demo  juno.txt 
# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=e2f0614ae15f44ff
# Whether to provision for demo usage and testing. Note that
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# A URL or local file location for the Cirros demo image used for


iii) Set EPEL to no

[root@station123 ~]# grep -i  epel juno.txt 
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=n
[root@station123 ~]# 

iv) Set Horizon SSL to yes

 [root@station123 ~]# grep -i  horizon  juno.txt 
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y
# specific to controller role such as API servers, Horizon, etc.
# To set up Horizon communication over https set this to 'y'
CONFIG_HORIZON_SSL=y
[root@station123 ~]# 


v) Configure Heat and Ceilometer as per your requirements.
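If you prefer to script the answer-file changes in (i) through (iv) rather than edit juno.txt by hand, a small sketch with a hypothetical set_opt helper (not a packstack tool) looks like this:

```shell
# set_opt KEY VALUE FILE -- replace the "KEY=..." line in a packstack
# answer file with "KEY=VALUE". A hypothetical helper, not part of packstack.
set_opt() {
    sed -i "s|^$1=.*|$1=$2|" "$3"
}

# Apply the changes described above to juno.txt (uncomment on the real host):
# set_opt CONFIG_NTP_SERVERS    192.168.1.254 juno.txt
# set_opt CONFIG_PROVISION_DEMO n             juno.txt
# set_opt CONFIG_USE_EPEL       n             juno.txt
# set_opt CONFIG_HORIZON_SSL    y             juno.txt
```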


Step 5: Now you can run the answer file.

[root@station123 ~]# packstack  --answer-file  juno.txt 

It may ask for your root password, and then it will take roughly 20 to 35 minutes to complete.

Important: if you get any error at this point, you can send me a snapshot of it.


Step 6: Now find your admin user password inside your /root directory.

[root@station123 ~]# cat  keystonerc_admin 
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=8bb1c812ec16418e
export OS_AUTH_URL=http://192.168.1.123:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '


Step 7: Now you can log in to the OpenStack panel and start building your cloud.

After login you will be presented with the OpenStack internal services.

Here are a few examples.

Important: after login, almost everything can be done by clicking through the Project tab.

A) Now you can create networks, routers, and images as described below.

i) Image creation

In the Images section you can choose an image from your hard disk or from any HTTP URL.
Note: the URL must be in http:// format.

That completes image creation.

ii) Network creation

In the Project tab, click on the Network menu, then follow the steps given below.

Private network

Public network –

Note: the public network should be the network address range in which your base system has its IP address.

Important:

You can go ahead with the graphical panel; it is really very simple. If you face any issue, you can revert to me.

What is OpenStack?

MNCs describe OpenStack as the future of cloud computing. OpenStack is a platform to create and manage massive groups of virtual machines through a graphical user interface.
OpenStack is free, open-source software, developed in the open much like Linux.
Key components of OpenStack

• Horizon: the web-based GUI (dashboard) of OpenStack.
• Nova: the primary compute engine, managing virtual machines and computing tasks.
• Swift: a robust service for object storage management.
• Cinder: the block storage system in OpenStack, analogous to a traditional computer's disk storage.
• Neutron: provides the networking services in OpenStack.
• Keystone: the identity management service, which uses tokens.
• Glance: the image service; images are virtual copies of hard disks.
• Ceilometer: provides telemetry services to cloud users.
• Heat: the orchestration engine, which lets developers describe automated infrastructure deployments.


Capitulation? Mirantis refactors OpenStack on top of Kubernetes

First, the guts of the announcement: Mirantis, the bad boys of the OpenStack world, are today announcing a collaboration with Google (a company that has pretty much zero history with OpenStack) and Intel. Under the terms of the collaboration, the life cycle management tool for OpenStack, Fuel, will be rewritten so that it uses Kubernetes as its underlying orchestration.

Lots of inside baseball there, so what are all these different products?

  • OpenStack is the open source cloud computing operating system that was jointly created by Rackspace and NASA and has since built a massive following of companies (including IBM, HPE, Intel and many, many others).
  • Kubernetes is the open source orchestration platform loosely descended from the tools that Google uses internally to operate its own data centers.
  • Fuel, as stated previously, was (is) the OpenStack-native life cycle management tool for OpenStack.

So what does it all mean? Well, it’s actually far more important than first appearances would suggest. It marks, at least to some extent, an admission by all concerned that OpenStack isn’t the be-all and end-all of the infrastructure world.

That positioning, which might seem blindingly obvious to anyone who is aware of the heterogeneity of modern enterprise IT, somewhat goes against what we heard from the OpenStack camp for its first few years, when pundits would be excused for thinking that OpenStack was the solution for every possible situation. It seems now, however, that OpenStack is simply a part of the solution — and virtual machines, containers and bare-metal systems all have a part to play in enterprise IT going forward.

Under the terms of the collaboration, Mirantis will initiate a new continuous integration/continuous delivery (CI/CD) pipeline under the OpenStack Fuel project for building capabilities around containerized OpenStack deployment and operations. The resulting software will give users fine-grain control over the placement of services used for the OpenStack control plane, as well as the ability to do rolling updates of OpenStack, make the OpenStack control plane self-healing and more resilient, and smooth the path for the creation of microservices-based applications on OpenStack.

If that sounds familiar, that would be because it is much the same proposition that we heard from Alex Polvi of CoreOS fame a few months ago — the difference here is that it comes from an OpenStack player that is front-and-center of the movement, an arguably far more substantive statement.

And some big names have poured the love into this collaboration — in particular Mirantis and Google, the originator of Kubernetes.

“With the emergence of Docker as the standard container image format and Kubernetes as the standard for container orchestration, we are finally seeing continuity in how people approach operations of distributed applications,” said Mirantis CMO Boris Renski. “Combining Kubernetes and Fuel will open OpenStack up to a new delivery model that allows faster consumption of updates, helping customers get to outcomes faster.”

Google Senior Product Manager Craig McLuckie also chimed in. “Leveraging Kubernetes in Fuel will turn OpenStack into a true microservice application, bridging the gap between legacy infrastructure software and the next generation of application development,” he said. “Many enterprises will benefit from using containers and sophisticated cluster management as the foundation for resilient, highly scalable infrastructure.”

Along with the initial work on the Fuel aspects, Mirantis will also become an active contributor to the Kubernetes project, and has stated the ambition to become a top contributor to the project over the next year.

Alongside that, Mirantis has joined the Cloud Native Computing Foundation, a Linux Foundation project dedicated to advancing the development of cloud-native applications and services, as a Silver member.

MyPOV

This is a big deal, there’s no denying that. OpenStack is slowly but inexorably becoming less of a “solution for everything” and more of an integral part. Skeptics would suggest that this marks a turning point where OpenStack ceases to be a compelling long-term proposition in and of itself and becomes simply a stop-gap measure between traditional architectures and more cloud-native approaches.

The reality is probably somewhere in the middle — and OpenStack will still have a part to play in infrastructure going forward — but clearly Mirantis’ move to embrace Kubernetes is an indication that it realizes that it needs to extend beyond a pure-play OpenStack offering.

As always, this space provides huge interest and much entertainment — a situation that looks unlikely to change anytime soon.

VMware: We love OpenStack!

A few years ago VMware and OpenStack were foes. Oh, how times have changed.

This week VMware is out with the 2.5 release of its VMware Integrated OpenStack (VIO). The virtualization giant continues to make it easier to run the open source cloud management tools on top of VMware virtualized infrastructure.


VIO’s 2.5 release shows the continued commitment by VMware to embrace OpenStack, something that would have seemed out of the question a few short years ago.

The 2.5 release comes with some nifty new features: Users can now automatically import vSphere virtual machine images into their VIO OpenStack cloud. The resource manager control plane is slimmed down by 30%, so it takes up less memory. There are better integrations with VMware’s NSX too.

The news shows the continued maturation of both the open source project and the virtualization giant. Once VMware and OpenStack were seen as rivals. In many ways, they still are. Both allow organizations to build private clouds. But VMware (smartly in my opinion) realized that giving customers choice is a good thing. Instead of being an all or nothing VMware vs. OpenStack dichotomy, VMware has embraced OpenStack, allowing VMware’s virtualization management tools to serve up the virtualized infrastructure OpenStack needs to operate.

VMware’s doing the same thing with application containers. Once seen as a threat to virtual machines, VMware is making the argument that the best place to run containers is in its virtual machines, which have been slimmed down and customized to run containers. Stay tuned to see if all these gambles pay off.

Early OpenStack contributor says cloud project has ‘lost its heart’

It started with a Tweet last week from Joshua McKenty: “OpenStack has lost its heart. Last summit I will attend.”

That’s somewhat shocking to read if you consider that McKenty helped found the open source cloud computing project, built a startup company that sold OpenStack cloud software and formerly sat on the board of directors of the Foundation that governs OpenStack.


How exactly has OpenStack “lost its heart?” McKenty explained: “When we started this project it was about trying to create a new open source community,” he says.

As OpenStack has grown, he says, it’s turned into a corporate open source project, not a community-driven one. He spent a day walking around the show floor at the recent OpenStack Summit in Vancouver and said he didn’t find anyone talking about the original mission of the project. “Everyone’s talking about who’s making money, whose career is advancing, how much people get paid, how many workloads are in production,” McKenty says. “The mission was to do things differently.”

McKenty admits that it’s hard to keep a small-community feel to a project that has grown to be as large as OpenStack. It started with just Rackspace and NASA committing code; it has now grown to more than 500 contributing companies, including IBM, Red Hat, Cisco, HP, and even VMware.

McKenty says the commercial success of OpenStack is good for customers and those companies. But he believes OpenStack has lost its mission of changing the world through open source. Now, he says it’s mostly about big companies looking to make money off of it. McKenty has left the startup he founded, Piston Cloud Computing Co. to join another small but fast-growing open source project: Cloud Foundry; he works as the Field CTO for Pivotal, one of the main backers of that PaaS (platform as a service) project.

Others in the OpenStack community say McKenty has a jaded perspective. “OpenStack exists because of the companies that make it up, and companies need to make money,” says Randy Bias, an OpenStack Foundation board member and another of the earliest contributors to OpenStack. Bias says that without the support of companies like Rackspace, Dell, HP and many others, the project never would have existed and grown into what it is today. “OpenStack was never a movement to change the world,” says Bias, whose startup company Cloudscaling was bought by EMC last year. The project is not made up of purely philanthropic companies with only altruistic motives. The reality is, companies joined OpenStack to make money.

As for the fact that OpenStack is no longer a small organization with a grassroots feel to it, Bias says it’s almost impossible to keep that and still be successful as a large community with so many members.

CoreOS launches Rkt- the container that’s not Docker

CoreOS – a 2013 San Francisco startup backed by Google Ventures and $20 million in funding – is offering an alternative to the wildly popular Docker application container runtime that is sweeping the market.

Alex Polvi, CEO of CoreOS, says the company has developed a more security-conscious way to run application containers compared to Docker, which they call rkt. CoreOS released the 1.0 general availability open source release of rkt on Thursday.


“The way we approach open source software is that we build modular components,” says Polvi, who before starting CoreOS ran Rackspace’s Bay Area product team. Rkt is one of those components. To understand CoreOS, it’s helpful to understand where rkt fits in CoreOS’s broader offerings.

The company started by developing CoreOS – a Linux-based operating system meant for the new world of distributed computing. As application containers took off, Polvi and his team were less than impressed with some of the design decisions made by Docker, which has been the dominant container company.
Alex Polvi, founder and CEO of CoreOS

So, CoreOS began developing rkt. It’s different from Docker in a couple of ways. For example, Docker uses a daemon architecture that provides root access to Linux. Polvi says that’s not such a good idea: If Docker is downloading container images from the Internet, there should be some buffer between images downloaded and the container runtime in case one of those images is nefarious. Rkt, on the other hand, downloads the container image, but there’s a separate process for executing it. Polvi says CoreOS is “borrowing decades of Unix best practices” to make rkt.

The broader point here is that CoreOS is trying to provide a market alternative for Docker’s application container runtime. Is it more secure? Well, many customers have found secure ways to run Docker, so it’s not like Docker is not safe. But a market of options is good.

CoreOS has other projects too. In addition to the aforementioned CoreOS Linux operating system, the company also sells a packaged distribution of CoreOS, rkt and the open source container orchestrator Kubernetes. That package is named Tectonic. CoreOS has a container image library it sells too.

Containers continue to be a hot topic in application development and infrastructure management; expect to hear more about CoreOS vs. Docker.

Google is said to endorse ARM server chips, but don’t get excited yet

Google is said to be working with Qualcomm to design servers based on ARM processors, which would be a significant endorsement for ARM as it tries to challenge Intel’s dominance in data centers.

Google will give its public backing for Qualcomm’s chips at an investor meeting next week, according to a Bloomberg report Wednesday that cites unnamed sources. If the chips meet certain performance goals, the report says, Google will commit to using them.

It would be a big vote of confidence for both ARM and Qualcomm, but if history is a guide then it’s too early to say how significant the news really is. ARM won’t be the first x86 alternative that Google has rallied behind, and it’s unclear if the last effort has come very far.

Google's IBM Power server board

Two years ago, Google made a big show of support for IBM’s Power processor. It was a founding member of IBM’s OpenPower initiative, which allows companies to design and build Power servers for use in, among other things, cloud data centers like those run by Google.

Google even showed a Power server board it had designed itself. “We’re always looking to deliver the highest quality of service for our users, and so we built this server to port our software stack to Power,” a Google engineer said at the time.

But there’s been little news about the partnership since. Google hasn’t revealed whether it’s using Power servers in production, and last year it made only vague statements that it’s keeping its options open.

Google is secretive about the technologies it uses, and it might well have plans to use both ARM and Power, but public endorsements don’t tell us much, and in the case of ARM it’s likely even Google doesn’t know for sure.

The search giant could have several reasons for showing support for non-x86 architectures. Google probably does want to test Qualcomm’s server chips, just as it tested IBM’s, to see if a different architecture can shave costs off of running its vast infrastructure. A show of support from Google encourages development of the ecosystem as a whole, including tools and software, which will be important if Google decides to put a new architecture in production.

Such statements also serve to pressure Intel, giving Google some price leverage and pushing Intel to develop new, more power-efficient parts — something Intel has done since the ARM threat emerged a few years ago.

There’s been a lot of debate about whether “brawny” cores, like Power, or “wimpy” cores, like ARM, are more efficient for cloud workloads. It depends partly on what workloads you’re talking about, and there are also costs to consider, like porting software.

Urs Holzle, who’s in charge of Google’s data centers, once published a paper on the topic titled “Brawny cores still beat wimpy cores, most of the time.” But that was in 2010, and the ARM architecture has evolved a lot since then.

Qualcomm's ARM server chip

Qualcomm disclosed its plan to sell ARM server chips in October, joining rivals like AppliedMicro. It showed a test chip with 24 cores running a Linux software stack, but it still hasn’t said when a finished product will go on sale.

Derek Aberle, Qualcomm’s president, told investors last week that shipments would begin “probably within the next year or so.” But he suggested significant sales are still “out a few years.”

A vote from Google could do a lot to boost its chances. But it’s also hard to know where all of this will end up. The only sure thing is that the processor business is a lot more interesting than it was a few years ago.

Oracle’s cloud storage service won’t frighten Amazon

With an OpenStack-compatible API set and low prices, Oracle’s new cloud storage service might seem aimed at Amazon, but it’s meant mainly for existing Oracle customers.

Oracle’s new cloud service isn’t meant for the same crowd as Amazon’s AWS, where a developer can whip out a credit card and get started. Rather, it’s for Oracle’s existing enterprise customers who want to lift and shift existing workloads to the cloud.

That said, OSCS (Oracle Storage Cloud Service) is one of the first Oracle cloud services patterned after Amazon’s public cloud, elastic consumption model. It’s also one of the few Oracle services someone can sign up for right now. That compelled us to take a closer look.

Spinning up, loading up

Getting access to OSCS requires time for provisioning, as you might expect with any major cloud service. After creating an Oracle account and purchasing the service — it took several minutes for the processing to go through — I had to set up the cloud account and a credential to access it. It took several more minutes for the service to be initialized and to get a service URL endpoint. I also had to select a data center (only Chicago or Ashburn were available for me).

The service wasn’t very reliable at first. Several times, the dashboard took more than a minute to load. It sometimes returned only a blank page, and at one point, the services were entirely unavailable for several hours. Mercifully, this cleared up after a day or so.

OSCS abstracts storage into containers — Amazon S3 buckets, essentially. Files uploaded are replicated across all three nodes in OSCS and are eventually consistent, but sometimes requests to retrieve a newly created container aren’t immediately honored. You can also apply simple key-value metadata to objects. No single file can be larger than 5GB, but the user can get around this by chaining together multiple files by way of a manifest.

Container objects can also be made publicly accessible by changing an ACL on the container, but Oracle doesn’t provide a CDN to handle large volumes of traffic; customers have to bring their own.

Containers can be either regular or archive containers. The latter, reminiscent of S3’s Infrequent Access Storage or Glacier tiers, cost much less than the former but have limitations: no bulk actions, no chained objects, and so on.

OSCS appears to be built on top of OpenStack Swift; at the very least, it uses the same RESTful API as the Swift API. Creating, writing, and accessing objects can be done with simple REST commands, so I was able to use a simple Python script to talk to OSCS. Also available is — what else? — a Java library API, essentially a wrapper for the REST API.
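Since the service speaks the Swift-style REST API, those calls reduce to plain curl requests. A hedged sketch with placeholder endpoint and token values (real ones come from your service dashboard); the leading echo prints each command instead of executing it, so remove the echo to run them for real:

```shell
# Placeholder values -- substitute the REST endpoint URL and auth token
# issued for your own storage service account.
ENDPOINT="https://storage.example.com/v1/Storage-myidentity"
TOKEN="AUTH_tk_placeholder"

# Swift-style calls: create a container, upload an object, fetch it back.
# The leading "echo" prints each command rather than running it.
echo curl -X PUT -H "X-Auth-Token: $TOKEN" "$ENDPOINT/demo"
echo curl -X PUT -H "X-Auth-Token: $TOKEN" -T report.csv "$ENDPOINT/demo/report.csv"
echo curl -X GET -H "X-Auth-Token: $TOKEN" "$ENDPOINT/demo/report.csv"
```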

The Java library offers one advantage over the REST APIs: It provides encryption for data at rest and in transit, and it allows provisions for encrypting data client-side. Uploading by secure FTP is also supported, and Oracle autocreates an SFTP account with its own credentials.

Cheaper by the dozen

In terms of raw cost, OSCS is somewhat cheaper than Amazon S3. Amazon’s prices start at 3 cents per gigabyte; Oracle’s start at 2.4 cents. Archive storage is also cheaper: 0.1 cent per gigabyte/month, versus S3’s 1.25 cents per gigabyte for Infrequent Access or 0.7 cent per gigabyte for Glacier.
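To make the comparison concrete, here is a tiny storage-only cost sketch at the per-gigabyte rates quoted above (request and egress fees excluded):

```shell
# Monthly storage-only cost in dollars: GB stored times per-GB rate.
cost() { awk -v gb="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", gb * rate }'; }

cost 1000 0.030   # 1 TB on Amazon S3 standard tier -> 30.00 per month
cost 1000 0.024   # 1 TB on Oracle OSCS             -> 24.00 per month
```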

Like Amazon, all inbound data with OSCS is free, as is the first gigabyte outbound for the month. Everything after that starts at 12 cents per gigabyte/month and goes down, versus Amazon’s 9 cents per gigabyte, and likewise there are fees on the number of requests: 0.5 cents per 1,000 requests for standard storage, versus S3’s 1 cent. Delete requests cost nothing on both services.

Oracle also provides an unmetered version of the service. For $30 per terabyte per month, the user can store unlimited numbers of objects, a plan for which Amazon doesn’t have a real equivalent.

Amazon isn’t banking on price alone to remain competitive, though. Its position as an incumbent and the rich culture of solutions built around it are hard to displace. Plus, cloud services routinely undercut one another with rounds of price cuts on nearly every service they offer, so Oracle’s edge in pricing has a time limit. And while Oracle using OpenStack’s APIs is a good idea, Amazon counters with well-documented, widely used, and thoroughly understood APIs.

OSCS mainly provides an OpenStack-compatible API for cloud storage used by Oracle customers as they move their apps to Oracle’s infrastructure. It’s unlikely to steal any of Amazon’s thunder, unless Oracle has more in mind than rock-bottom pricing as a draw.