Building a Highly Available OpenStack Cloud

I shared some details about RPC-R’s reference architecture in my previous post. Here, I want to drill down even more into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect their OpenStack services and APIs will be always on and always available, even when servers fail. This is particularly important in a public cloud or in a private cloud where resources are being shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly for Red Hat OpenStack Platform, the OpenStack distribution at the core of RPC-R.

Red Hat uses a key set of open source technologies to create clustered active-active controller nodes for all Red Hat OpenStack Platform clouds. Rackspace augments that reference architecture with hardware, software and operational expertise to create RPC-R with our 99.99% API uptime guarantee. The key technologies for creating our clustered controller nodes are:

  • Pacemaker – A cluster resource manager used to manage and monitor the availability of OpenStack components across all nodes in the controller cluster
  • HAProxy – Provides load balancing and proxy services to the cluster (Note that while HAProxy is the default for Red Hat OpenStack Platform, RPC-R uses F5 hardware load balancers instead)
  • Galera – Replicates the Red Hat OpenStack Platform database across the cluster

[Diagram: control plane]

Putting it all together, you can see in the diagram above that redundant instances of each OpenStack component run on each controller node in a collapsed cluster configuration managed by Pacemaker. As the cluster manager, Pacemaker monitors the state of the cluster and has responsibility for failover and failback of services in the event of hardware and software failures. This includes coordinating restart of services to ensure that startup is sequenced in the right order and takes into account service dependencies.
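To make the ordering idea concrete, here is a minimal sketch of how such dependencies can be expressed with Pacemaker's pcs tool. The resource names (rabbitmq-clone, openstack-keystone-clone, openstack-nova-api-clone) are illustrative placeholders, not necessarily the names used in an RPC-R deployment; the actual resources can be listed with pcs status.

# Sketch only: require RabbitMQ to start before Keystone, and Keystone before the Nova API
pcs constraint order start rabbitmq-clone then openstack-keystone-clone
pcs constraint order start openstack-keystone-clone then openstack-nova-api-clone

# Show cluster membership and where each resource is currently running
pcs status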

A three-node cluster is the minimum size in our RPC-R reference architecture to ensure a quorum in the event of a node failure. A quorum defines the minimum number of nodes that must be functioning for the cluster itself to remain functional. Since a quorum is defined as half the nodes + 1, three nodes is the smallest feasible cluster you can have. Having at least three nodes also allows Pacemaker to compare the content of each node in the cluster and, in the event that inconsistencies are found, apply a majority-rule algorithm to determine the correct state of each node.
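For a three-node cluster, quorum works out to two nodes (half of three, rounded down, plus one), so the cluster keeps operating with one node down but stops if a second node fails. On a running cluster, quorum can be inspected with the standard corosync tooling, for example:

# Reports expected votes, current votes, and whether the cluster is quorate
corosync-quorumtool -s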

While Pacemaker is used to manage most of the OpenStack services, RPC-R uses the Galera multi-master database cluster manager to replicate and synchronize the MariaDB-based OpenStack database running on each controller node. MariaDB is a community fork of MySQL that is used in a number of OpenStack distributions, including Red Hat OpenStack Platform.

[Diagram: database read/write]

Using Galera, we are able to create an active-active OpenStack database cluster and do so without the need for shared storage. Reads and writes can be directed to any of the controller nodes and Galera will synchronize all database instances. In the event of a node failure, Galera will handle failover and failback of database nodes.
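As a quick sanity check (a sketch using standard Galera/wsrep status variables, not an RPC-R-specific tool), the replication state can be inspected from any controller node:

# Number of nodes currently joined to the Galera cluster (3 when healthy)
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"

# Local node state; 'Synced' means this node is fully caught up
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"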

By default, Red Hat OpenStack Platform uses HAProxy to load balance API requests to OpenStack services running in the control plane. In this configuration, each controller node runs an instance of HAProxy and each set of services has its own virtual IP. HAProxy is also clustered together using Pacemaker to provide fault tolerance for the API endpoints.
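As a rough illustration of that layout, the fragment below sketches what one such service entry might look like in haproxy.cfg. The virtual IP and controller addresses are made-up examples, not values from the Red Hat or RPC-R reference architecture:

# Hypothetical Keystone public API entry: one virtual IP, three controller backends
listen keystone_public
    bind 192.168.100.10:5000
    balance roundrobin
    server controller1 192.168.100.11:5000 check
    server controller2 192.168.100.12:5000 check
    server controller3 192.168.100.13:5000 check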

[Diagram: Pacemaker]

As mentioned previously, Rackspace has chosen to use redundant hardware load balancers in place of HAProxy. As the previous diagram shows, RPC-R is identical to the Red Hat OpenStack Platform architecture except that we use F5 appliances in place of clustered HAProxy instances. We believe this option provides better performance and manageability for RPC-R customers.

[Diagram: external network]

An enterprise-grade private cloud is achievable, but it requires a combination of the right software, a well-thought-out architecture, and operational expertise. We believe that the collaboration between Rackspace and Red Hat is the best choice for customers looking for partners to help them build out such a solution.

Deploy OpenStack Juno Single Node on RHEL/CentOS 7.0

In this article I am going to explain and show you how to deploy an OpenStack IaaS cloud on your home or production server.

About OpenStack Cloud: it is an IaaS cloud platform similar to AWS EC2, but here you can deploy it on your own hardware for home use.

A picture of the OpenStack cloud is given below.

Requirements:

1. RHEL/CentOS 7.0
2. 4 GB free RAM
3. 20 GB free hard disk
4. Virtualization support

Now follow the given steps:
Step 1: Configure the yum client (local repo or live repo)
A) Local
[root@rhel7 html]# cd  /etc/yum.repos.d/
[root@station123 yum.repos.d]# cat  juno.repo
[redhat]
name=Local RHEL 7 mirror
baseurl=http://192.168.1.254/rhel7
gpgcheck=0

[ashu]
name=Local Juno RPM mirror
baseurl=http://192.168.1.254/juno-rpm/
gpgcheck=0

 OR 

B)  Live (this requires an internet connection)

[root@rhel7 html]# cd  /etc/yum.repos.d/

 [root@rhel7 html]# yum install epel-release -y
[root@station123 yum.repos.d]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm 
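Whichever option you use, it is worth confirming that yum can actually resolve the Juno packages before going further:

[root@station123 ~]# yum clean all
[root@station123 ~]# yum repolist                      # the Juno/RDO repo should be listed
[root@station123 ~]# yum info openstack-packstack      # the installer package should be resolvable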

Step 2: Apply some security-related settings for Juno

i)   [root@station123 ~]# systemctl   stop  NetworkManager
ii)  [root@station123 ~]# systemctl   disable  NetworkManager   
iii)  [root@station123 ~]# setenforce  0
iv)  [root@station123 ~]# cat  /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

Note: here I have set SELinux to permissive mode permanently (setenforce 0 applies it immediately, and the change in /etc/selinux/config makes it persist across reboots).
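If you prefer not to edit the file by hand, the same persistent change can be made with sed (a small sketch; keep a backup of the original file):

[root@station123 ~]# cp /etc/selinux/config /etc/selinux/config.bak
[root@station123 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@station123 ~]# getenforce              # should now report Permissive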

Step 3: The required kernel is already installed by default, so just install openstack-packstack

[root@station123 ~]# yum  install openstack-packstack

Step 4: Now it is time to generate the answer file

[root@station123 ~]# packstack  --gen-answer-file   juno.txt

Note: this is a text-based file where you need to answer some options with 'y' or 'n'.


Some important changes that are required:

i)   NTP  server  IP 

Find an NTP server on your local network or on the internet, then set its IP as shown below:

[root@station123 ~]# grep  -i   ntp   juno.txt 
# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=192.168.1.254

ii) DEMO version  

[root@station123 ~]# grep -i demo  juno.txt 
# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=e2f0614ae15f44ff
# Whether to provision for demo usage and testing. Note that
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# A URL or local file location for the Cirros demo image used for


iii)  EPEL  to no

[root@station123 ~]# grep -i  epel juno.txt 
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=n
[root@station123 ~]# 

iv)  Horizon SSL to yes

 [root@station123 ~]# grep -i  horizon  juno.txt 
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y
# specific to controller role such as API servers, Horizon, etc.
# To set up Horizon communication over https set this to 'y'
CONFIG_HORIZON_SSL=y
[root@station123 ~]# 


v)  Configure Heat and Ceilometer as per your requirements; these answer-file edits can also be made non-interactively with sed, as shown below.
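Here is a sketch of the non-interactive approach; the NTP IP is the example value from this setup, so adjust it and any other values to your environment:

[root@station123 ~]# sed -i 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=192.168.1.254/' juno.txt
[root@station123 ~]# sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' juno.txt
[root@station123 ~]# sed -i 's/^CONFIG_USE_EPEL=.*/CONFIG_USE_EPEL=n/' juno.txt
[root@station123 ~]# sed -i 's/^CONFIG_HORIZON_SSL=.*/CONFIG_HORIZON_SSL=y/' juno.txt
[root@station123 ~]# grep -E '^CONFIG_(NTP_SERVERS|PROVISION_DEMO|USE_EPEL|HORIZON_SSL)=' juno.txt   # verify the edits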


Step 5: Now you can run the answer file

[root@station123 ~]# packstack  --answer-file  juno.txt 

It may ask for your root password, and the run will then take roughly 20 to 35 minutes.

Important: if you hit any error at this point, you can send me a screenshot.
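If the run does fail, the on-screen error usually points at a specific log; packstack typically keeps its logs under /var/tmp/packstack (the exact run directory name varies), so that is the first place to check:

[root@station123 ~]# ls -dt /var/tmp/packstack/*/ | head -1        # most recent run directory
[root@station123 ~]# less /var/tmp/packstack/<run-directory>/openstack-setup.log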


Step 6: Now find your admin user password in the keystonerc_admin file inside your /root directory

[root@station123 ~]# cat  keystonerc_admin 
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=8bb1c812ec16418e
export OS_AUTH_URL=http://192.168.1.123:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
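Before moving to the dashboard, you can source this file and confirm the services from the command line (a quick check with the Juno-era CLI clients):

[root@station123 ~]# source /root/keystonerc_admin
[root@station123 ~(keystone_admin)]# keystone user-list          # the admin user should be listed
[root@station123 ~(keystone_admin)]# nova service-list           # compute services should report state 'up'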


Step 7: Now you can log in to the OpenStack panel and start building your cloud

Now, after logging in, you will be presented with the OpenStack internal services, as shown below.

Here are a few examples.

Important: after logging in, almost everything can be done by clicking within the Project tab.

A) Now you can create networks, routers, and images as shown below.

i) Image creation

Under the Images link you can choose an image from your hard disk or from any HTTP URL.
Note: the URL must be in http:// format.

Once the upload completes, you are done with image creation.
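The same thing can be done from the command line instead of the dashboard. This is only a sketch using the Juno-era glance client; the CirrOS URL is a common, publicly available test image, so substitute your own if you prefer:

[root@station123 ~(keystone_admin)]# glance image-create --name cirros \
      --disk-format qcow2 --container-format bare --is-public True \
      --copy-from http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
[root@station123 ~(keystone_admin)]# glance image-list           # the new image should appear here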

ii) Network creation

In the Project tab, click on the Network menu and then follow the steps given below.

Private network

Public network

Note: the public network should be the network address range where your base system has its IP address.
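For reference, the equivalent network setup with the neutron CLI would look roughly like this. All addresses are examples; the public subnet must match the network your base system actually sits on:

# External (public) network on the subnet where the host has its IP
[root@station123 ~(keystone_admin)]# neutron net-create public --router:external True
[root@station123 ~(keystone_admin)]# neutron subnet-create public 192.168.1.0/24 --name public-subnet \
      --allocation-pool start=192.168.1.200,end=192.168.1.220 --gateway 192.168.1.1 --disable-dhcp

# Private (tenant) network plus a router connecting it to the public network
[root@station123 ~(keystone_admin)]# neutron net-create private
[root@station123 ~(keystone_admin)]# neutron subnet-create private 10.0.0.0/24 --name private-subnet
[root@station123 ~(keystone_admin)]# neutron router-create router1
[root@station123 ~(keystone_admin)]# neutron router-gateway-set router1 public
[root@station123 ~(keystone_admin)]# neutron router-interface-add router1 private-subnet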

Important:

You can go ahead with the graphical panel; it is really very simple. If you face any issue, you can get back to me.

Red Hat covers cloud apps with OpenStack and Cloud Suite

Red Hat’s latest OpenStack Platform release wraps up the cloud for easier deployment, but Cloud Suite will likely claim a broader audience

With its two latest releases, Red Hat makes good on its previously stated plans to extend open source out of the data center and across the entire dev stack.

Red Hat OpenStack Platform 8 and Red Hat Cloud Suite provide contrasting methodologies for building and delivering hybrid cloud apps on open source infrastructure. Cloud Suite is an all-in-one package of Red Hat’s cloud technologies. OpenStack Platform, meanwhile, adds value and ease of use with both Red Hat and third-party hardware.

Making the hard part easy

OpenStack is complicated to deploy and maintain, so Red Hat and other third-party vendors tout ease of use and management as selling points. As Matt Asay pointed out, Red Hat’s mainstay is to simplify complex technology (like open source infrastructure apps) for enterprise settings.

Red Hat’s previous incarnations of OpenStack were built with this philosophy in mind, and the current version ramps it up. Upgrading OpenStack components, long regarded as thorny and difficult, is handled automatically by Red Hat’s add-ons. CloudForms, Red Hat’s management tool for clouds, comes as part of the bundle, providing yet another option to offset OpenStack’s management complexities.

OpenStack has been trying to solve these problems as well, as shown with its most recent version, code-named Mitaka. It features tools like a unified command line and a more streamlined setup process with sane defaults. But Red Hat’s OpenStack uses the previous Liberty release, so it will be at least another release cycle before the changes find their way into Red Hat’s work.

Red Hat also has been trying to sweeten OpenStack’s pot via a strategy explored by several other OpenStack vendors: hardware solutions. Red Hat and Dell have previously partnered to sell the former’s OpenStack solutions on the latter’s hardware. The latest generation of that partnership provides yet another means of putting OpenStack into more hands: the On-Ramp to OpenStack program.

All of this is meant to broaden OpenStack’s appeal and to make it more than the do-it-yourself cloud favored by a few large companies and telcos. (OpenStack Platform 8 has “telco-focused preview” features.) As Asay noted, while individual OpenStack customers are large, the overall field remains small because for many enterprise customers, OpenStack looks like too much of a solution for not enough of a problem. That didn’t start with Red Hat, and so far it’s unlikely Red Hat alone can change that.

A three-piece Suite

For that reason, Red Hat isn’t depending on OpenStack alone, as its second big release today, Red Hat Cloud Suite, shows. It’s aimed at a broader, and likely more rewarding, market: those building cloud applications with containers who want to concentrate on the app lifecycle rather than on the deployment infrastructure.

Cloud Suite also uses OpenStack, but as a substrate managed through Red Hat’s CloudForms software. On top of that is the part users will deal with most directly: Red Hat’s OpenShift PaaS for managing containerized applications in Docker. (OpenShift got high marks from InfoWorld’s Martin Heller for being “robust, easy-to-use, and highly scalable.”)

CloudForms treats OpenStack as one of many possible cloud layers that can be abstracted away. To that end, the apps deployed on OpenShift can run in multiple places — local and remote OpenStack clouds, Azure clouds, and so on. This part of Red Hat’s strategy for hybrid cloud echoes Google’s ambitions, in that it allows the user to work with open source software and open standards to deploy apps to both local and remote clouds.

OpenStack was regarded as the original method to pull that off. While Red Hat hasn’t abandoned OpenStack, its focus remains narrow. Cloud Suite, due to its flexibility and emphasis on applications rather than infrastructure, seems likely to draw a broader crowd.