Boot integration of the Openvswitch in Ubuntu

The installation of the Openvswitch on Ubuntu brings an automatic integration into the boot sequence. When Ubuntu is booted, the Openvswitch is also started, and any bridge or port that has been defined in previous sessions is restored.
BUT: No L3 interfaces are set up, and junk interfaces that have been defined in previous sessions are restored as well.

The ideal start sequence of the Openvswitch delivers:

  • an empty Openvswitch database during the startup of Ubuntu –> clean the Openvswitch database (patch necessary)
  • define all bridges, interfaces, ports, … using /etc/network/interfaces –> this is supported
  • bring up all defined L3 interfaces on Openvswitch internal interfaces –> patch necessary

With Ubuntu 16.04, systemd is used to start the system. Older systems using upstart now have a fixed upstart script (see below) – but the systemd scripts do not allow an automatic setup of OVS interfaces out of the box. How to fix it? See below… 🙂 Let's hope that Canonical provides a patch for 16.04 before 18.04 shows up…

With Ubuntu 16.10 boot integration works out of the box.

Define bridges, ports, interfaces using /etc/network/interfaces

All configuration items necessary to set up the Openvswitch can be defined in /etc/network/interfaces .

Define a bridge

All ports and interfaces on the Openvswitch must be attached to a bridge. Defining a bridge (which is a virtual switch) requires the following configuration items:
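A minimal stanza for such a bridge, reconstructed from the configuration items explained below (the bridge name vmbr-int is taken from the text), could look like this:

auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
  ovs_type OVSBridge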

The configuration lines are:

  • auto is necessary to allow an automatic start (Ubuntu 16.04 using systemd)
  • allow-ovs is the marker for an Openvswitch bridge. The bridge name follows, here vmbr-int
  • iface is the well known /etc/network/interfaces identifier to start an interface configuration
  • ovs_type defines the interface type for ovs. OVSBridge identifies an Openvswitch bridge

The lines above are the equivalent of the command ovs-vsctl add-br vmbr-int

Define a L2 port (untagged)

The next step is to attach a port (not seen by Linux) to the bridge above.
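A sketch of such a port stanza, based on the configuration items explained below (the port name l2port and Vlan 444 are the values used in the text):

allow-vmbr-int l2port
iface l2port inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options tag=444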

The configuration lines are

  • allow-[name of the bridge to attach the port to] defines the name of the bridge to which the port should be attached. In our example this is vmbr-int. The name of the port follows (here l2port)
  • iface is the well known /etc/network/interfaces identifier to start an interface configuration — set the config to manual
  • ovs_bridge holds the name of the bridge to which the port should be attached. In our example this is vmbr-int.
  • ovs_type defines the interface type for ovs. OVSPort identifies an Openvswitch port not seen by the Linux Operating system
  • ovs_options defines additional options for the port – here define an untagged port, which is member of Vlan 444. If this option is not set, a port may use all vlan tags on the interfaces, because all ports transport vlan tags by default.

The bad thing is that the bridge (vmbr-int) must be defined twice. But this is not the whole truth: the bridge name is actually required a third time. The complete definition for the bridge and port is now:
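Putting the two stanzas together, a sketch of the complete definition (same example names as above) is:

auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
  ovs_type OVSBridge
  ovs_ports l2port

allow-vmbr-int l2port
iface l2port inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options tag=444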

This is the summary definition. It contains one additional line in the bridge iface definition:

  • ovs_ports holds the list of all ports which should be attached to the bridge

The port which is added to the bridge in the example above can also be an existing physical nic. The example below shows the config to add a physical interface to a bridge:
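A sketch, assuming a physical nic named eth1 (the interface name is only an example):

allow-vmbr-int eth1
iface eth1 inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort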

Define a L3 interface

The definition of an L3 interface requires the following configuration lines:
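A sketch of such a stanza (the port name l3port, the Vlan tag and the address 192.168.44.10/24 are only examples):

allow-vmbr-int l3port
iface l3port inet static
  ovs_bridge vmbr-int
  ovs_type OVSIntPort
  ovs_options tag=444
  address 192.168.44.10
  netmask 255.255.255.0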

The differences compared to the L2 port are

  • iface — set the mode to static — because an IP address is configured on this interface
  • ovs_type — an L3 port must be defined as an internal port — must be set to OVSIntPort
  • address and netmask are the well known parameters for an IPv4 address/netmask combination

Do not forget to add the port to the bridge definition of vmbr-int.

Define a L2 port (tagged/trunking)

The definition of an L2 port which is tagged (a vlan trunk) looks like:
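A sketch, assuming the port should only carry the Vlans 444 and 445 (port name and Vlan list are only examples):

allow-vmbr-int l2trunkport
iface l2trunkport inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options trunks=444,445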

A trunking port does not require any additional definition by default. In the example above, however, we want to limit the vlans allowed on the port. This requires a trunk definition using the ovs_options line.

Define a fake bridge

It is also possible to add the definition of an ovs fake bridge:
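A sketch of such a stanza, using the fake bridge name fakebr, the parent bridge vmbr-int and Vlan 2102 from the text:

auto fakebr
allow-ovs fakebr
iface fakebr inet manual
  ovs_type OVSBridge
  ovs_options vmbr-int 2102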

A fake bridge definition requires a special option

  • ovs_options must be set to vmbr-int 2102

On the command line the command would be: ovs-vsctl add-br fakebr vmbr-int 2102 . The ovs_options are the last two options required on the command line.

Define a tagged LACP bond port

It is also possible to define a tagged bond port using LACP as the bonding protocol:
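A sketch, assuming a bond named bond0 built from eth4 and eth5 (as in the text), using active LACP and trunking the Vlans 444 and 445 (bond name and Vlan list are only examples):

allow-vmbr-int bond0
iface bond0 inet manual
  ovs_bridge vmbr-int
  ovs_type OVSBond
  ovs_bonds eth4 eth5
  ovs_options bond_mode=balance-tcp lacp=active trunks=444,445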

The options are

  • ovs_type must be set to OVSBond
  • ovs_bonds has the list of the interfaces to be added to the bond — here eth4 and eth5
  • ovs_options holds all the extra options for the bond and the trunk. These are the same options that have to be used on the command line

Interface helper script

ifup and ifdown use a helper script which is part of the openvswitch-switch package. The script, named openvswitch, is located in /etc/network/if-pre-up.d/ and executes all the required commands.

The only thing this script is missing is support for changing the MTU of Openvswitch bridges, ports and interfaces.

Boot support (Ubuntu 16.04 – when using systemd)

Canonical did it again – delivered an OS which does not support the autostart of OVS interfaces. The upstart scripts had been fixed to allow the autostart of OVS interfaces, but now systemd is used – and the automatic start of OVS interfaces is broken again.

A fix is available, see Launchpad Bug. The systemd start file for OVS needs to be changed to add dependencies. OVS must be started before setting up the network interfaces.
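The full unit file is available in the Launchpad bug; the essential part of the fix is the unit ordering, which looks roughly like this sketch of the [Unit] section (not the verbatim patch):

[Unit]
Description=Open vSwitch Internal Unit
DefaultDependencies=no
After=local-fs.target
Wants=network-pre.target
Before=network-pre.target networking.service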

Replace /lib/systemd/system/openvswitch-nonetwork.service with the fixed file content from the Launchpad bug, and you're done.

To start with an empty Openvswitch config, an option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch .

Boot support (Ubuntu 14.04 and later – when using upstart)

Ubuntu 14.04 has better boot support for the Openvswitch, but it is still not perfect. The interface up/down handling is still missing.

One option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch

The upstart script must be patched twice to start and stop ovs components and the associated interfaces.

You must insert all lines between "exit $?" and "end script" to bring up the Openvswitch interfaces.
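The inserted lines essentially run ifup for everything marked allow-ovs in /etc/network/interfaces; a sketch of such lines (assuming ifquery is available):

# bring up all Openvswitch bridges defined with allow-ovs
# (their ovs_ports are handled by the ifupdown helper script)
ifup --allow=ovs $(ifquery --allow=ovs --list) || true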

The shutdown sequence requires the following patch:
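A sketch of the corresponding shutdown lines (same assumption):

# take down all Openvswitch bridges before the switch itself is stopped
ifdown --allow=ovs $(ifquery --allow=ovs --list) || true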

There is still a dependency: the openvswitch start script assumes that the loopback interface is brought up as the last interface.

Boot support (pre Ubuntu 14.04)

The boot support for the Openvswitch is implemented very differently depending on the Openvswitch version, the Ubuntu version and the package repository. Up to now there is NO support to bring up the interfaces automatically. In any case, a patch is required.

As of November 2013, all Ubuntu distributions are using Openvswitch packages which do not have an openvswitch upstart script. One way to bring up interfaces here is to use a few lines in /etc/rc.local or to patch /etc/init.d/openvswitch-switch .

The necessary lines for /etc/rc.local would be:
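A sketch of such lines (assuming ifquery is available; otherwise the bridge names can be listed explicitly):

# bring up all Openvswitch bridges and their ports defined in /etc/network/interfaces
ifup --allow=ovs $(ifquery --allow=ovs --list)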

You may add the necessary lines to /etc/init.d/openvswitch-switch , but the interfaces may come up too late, because the non-upstart script starts too late.

If you are using the openvswitch package from the Havana Ubuntu Cloud Archive [deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main] you have an openvswitch-switch upstart script. This must be patched to bring up all ovs network components.

You must insert all lines between "exit $?" and "end script" to bring up the Openvswitch interfaces.

The shutdown sequence requires the following patch:

There is still a dependency. "ifup -a" in networking.conf brings up the loopback interface as the last interface. This is the trigger to emit the event "loopback interface is up", which then triggers the openvswitch-switch upstart script. This might be too late. Hopefully there will be a fully integrated solution for Ubuntu in a future release.

53 new things to look for in OpenStack Newton (plus a few more)

OpenStack Newton, the technology’s 14th release, shows just how far we’ve come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that’s a given, and we’re talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms — virtual machines, containers, and bare metal.

There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What’s New in OpenStack Newton webinar.  Meanwhile, as we do with each release, let’s take a look at 53 things that are new in OpenStack Newton.

[Diagram: OpenStack main services]

Compute (Nova)

  1. Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup.
  2. A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
  3. Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
  4. Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
  5. Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.

Network (Neutron)

  1. 802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
  2. The ability to create VMs without an IP address means you can create a VM with no IP address and specify complex networking later as a separate process.
  3. Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
  4. OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
  5. No downtime API service upgrades

Storage (Cinder, Glance, Swift)

Cinder

  1. Microversions let developers add new features that you can access without breaking the main version.
  2. Rolling upgrades let you update to Newton without having to take down the entire cloud.
  3. enabled_backends config option defines which backend types are available for volume creation.
  4. Retype volumes from encrypted to not encrypted, and back again after creation.
  5. Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first.
  6. The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.

Glance

  1. Glare, the Glance Artifact Repository, provides the ability to store more than just images.
  2. A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
  3. The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift

  1. Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
  2. Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
  3. Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)

Keystone

  1. Simplified configuration setup
  2. PCI support of password configuration options
  3. Credentials encrypted at rest

Horizon

  1. You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, LAUNCH_INSTANCE_DEFAULTS
  2. Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
  3. Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
  4. Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
  5. Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)

Magnum

  1. Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
  2. The API service is now protected by SSL.
  3. You can now use Kubernetes on bare metal.
  4. Asynchronous cluster creation improves performance for complex operations.

Kolla

  1. You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr

  1. Use Neutron networking capabilities in containers.
  2. Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)

Heat

  1. Use DNS resolution and integration with an external DNS.
  2. Access external resources using the external_id attribute.

Ceilometer

  1. New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
  2. Magnum support.

Fuel

  1. Deploy Fuel without having to use an ISO.
  2. Improved life cycle management user experience, including Infrastructure as Code.
  3. Container-based deployment possibilities.

Murano

  1. Use the new Application Development Framework to build more complex applications.
  2. Enable users to deploy your application across multiple regions for better reliability and scalability.
  3. Specify that when resources are no longer needed, they should be deallocated.

Ironic

  1. You can now have multiple nova-compute services using Ironic without causing duplicate entries.
  2. Multi-tenant networking makes it possible for more than one tenant to use ironic without sharing network traffic.
  3. Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

  1. The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
  2. Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
  3. Add and manage applications via the Community App Catalog website.

Why Red Hat’s OpenShift, not OpenStack, is making waves with developers

Red Hat has historically whiffed with developers. But, its PaaS offering, OpenShift, may mark a new era for the open source giant.

Developers may be the new kingmakers in the enterprise, to borrow Redmonk’s phrase, but they’ve long ignored the enterprise open source leader, Red Hat. Two years ago, I called out Red Hat’s apparent inability to engage the very audience that would ensure its long-term relevance. Now, there are signs that Red Hat got the message.

And, no, I’m not talking about OpenStack. Though Red Hat keeps selling OpenStack (seven of its top-30 deals last quarter included OpenStack, according to Red Hat CEO Jim Whitehurst), it’s really OpenShift, the company’s Platform-as-a-Service (PaaS) offering, that promises a bright, developer-friendly future for Red Hat.

Looking beyond OpenStack

Red Hat continues to push OpenStack, and rightly so—it’s a way for Red Hat to certify a cloud platform just as it once banked on certifying the Linux platform. There’s money in assuring risk-averse CIOs that it’s safe to go into the OpenStack cloud environment.

Even so, as Whitehurst told investors in June, OpenStack is not yet “material” to the company’s overall revenue, and generally generates deals under $100,000. It will continue to grow, but OpenStack adoption is primarily about telcos today, and that’s unlikely to change as enterprises grow increasingly comfortable with public IaaS and PaaS options. OpenStack feels like a way to help enterprises avoid the public cloud and try to dress up their data centers in the fancy “private cloud” lingo.

OpenShift, by contrast, is far more interesting.

OpenShift, after all, opens Red Hat up to containers and all they mean for enterprise developer productivity. It’s also a way to pull through other Red Hat software like JBoss and Red Hat Enterprise Linux, because Red Hat’s PaaS is built on these products. OpenShift has found particular traction among sophisticated financial services companies that want to get in early on containers, but the list of customers includes a wide range of companies like Swiss Rail, BBVA, and many others.

More and faster

To be clear, Red Hat still has work to do. According to Gartner’s most recent Magic Quadrant for PaaS, Salesforce and Microsoft are still a ways ahead, particularly in their ability to execute their vision:

[Figure: Gartner Magic Quadrant for PaaS]

Still, there are reasons to think Red Hat will separate itself from the PaaS pack. For one thing, the company is putting its code where it hopes its revenue will be. Red Hat learned long ago that, to monetize Linux effectively, it needed to contribute heavily. In similar fashion, only Google surpasses Red Hat in Kubernetes code contributions, and Docker Inc. is the only company to contribute more code to the Docker container project.

Why does this matter? If you’re an enterprise that wants a container platform then you’re going to trust those vendors that best understand the underlying code and have the ability to influence its direction. That’s Red Hat.

Indeed, one of the things that counted against Red Hat in Gartner’s Magic Quadrant ranking was its focus on Kubernetes and Docker (“Docker and Kubernetes have tremendous potential, but these technologies are still young and evolving,” the report said). These may be young and relatively immature technologies, but all signs point to them dominating a container-crazy enterprise world for many years to come. Kubernetes, as I’ve written, is winning the container management war, putting Red Hat in pole position to benefit from that adoption, especially as it blends familiar tools like JBoss with exciting-but-unfamiliar technologies like Docker.

Red Hat has also been lowering the bar for getting started and productive with OpenShift, as Serdar Yegulalp described. By focusing on developer darlings like Docker and Kubernetes, and making them easily consumable by developers and more easily run by operations, Red Hat is positioning itself to finally be relevant to developers…and in a big way.

OpenStack’s latest release focuses on scalability and resilience

OpenStack, the massive open source project that helps enterprises run the equivalent of AWS in their own data centers, is launching the 14th major version of its software today. Newton, as this new version is called, shows how OpenStack has matured over the last few years. The focus this time is on making some of the core OpenStack services more scalable and resilient. In addition, though, the update also includes a couple of major new features. The project now better supports containers and bare metal servers, for example.

In total, more than 2,500 developers and users contributed to Newton. That gives you a pretty good sense of the scale of this project, which includes support for core data center services like compute, storage and networking, but also a wide range of smaller projects.

As OpenStack Foundation COO Mark Collier told me, the focus with Newton wasn’t so much on new features but on adding tools for supporting new kinds of workloads.

Both Collier and OpenStack Foundation executive director Jonathan Bryce stressed that OpenStack is mostly about providing the infrastructure that people need to run their workloads. The project itself is somewhat agnostic as to what workloads they want to run and which tools they want to use, though. “People aren’t looking at the cloud as synonymous with [virtual machines] anymore,” Collier said. Instead, they are mixing in bare metal and containers as well. OpenStack wants to give these users a single control plane to manage all of this.

Enterprises do tend to move slowly, though, and even the early adopters that use OpenStack are only now starting to adopt containers. “We see people who are early adopters who are running container in production,” Bryce told me. “But I think OpenStack or not OpenStack, it’s still early for containers in production usage.” He did note, however, that he regularly talks to enterprise users who are looking at how they can use the different components in OpenStack to get to containers faster.

Core features of OpenStack, including the Nova compute service, as well as the Horizon dashboard and Swift object/blob store, have now become more scalable. The Magnum project for managing containers on OpenStack, which already supported Docker Swarm, Kubernetes and Mesos, now also allows operators to run Kubernetes clusters on bare metal servers, while the Ironic framework for provisioning those bare metal servers is now more tightly integrated with Magnum and also now supports multi-tenant networking.

The release also includes plenty of other updates and tweaks, of course. You can find a full (and fully overwhelming) rundown of what’s new in all of the different projects here.

With this release out of the door, the OpenStack community is now looking ahead to the next release six months from now. This next release will go through its planning stages at the upcoming OpenStack Summit in Barcelona later this month and will then become generally available next February.

AppFormix now helps enterprises monitor and optimize their virtualized networks

AppFormix helps enterprises, including the likes of Rackspace and its customers, monitor and optimize their OpenStack- and container-based clouds. The company today announced that it has also now added support for virtualized network functions (VNF) to its service.

Traditionally, networking was the domain of highly specialized hardware, but increasingly, it’s commodity hardware and software performing these functions (often for a fraction of the cost). Almost by default, however, networking functions are latency sensitive, especially in the telco industry, which is one of the core users of VNF and also makes up a large number of OpenStack’s users. Using commodity hardware, however, introduces new problems, including increased lag and jitter.

AppFormix co-founder and CEO Sumeet Singh tells me that his company’s service can now reduce jitter by up to 70 percent. “People are just starting to roll out VNFs and as telcos move from hardware to software, that’s where they run into this problem,” he noted. “Our software is designed as this real-time system where we are able to analyze how everything is performing and do optimization based on this analysis.”

For VNF, this often means modifying how workloads are placed and how resources are allocated. Interestingly, AppFormix’s research showed that CPU allocations have very little influence on jitter. Instead, it’s all about how you use the available cache and memory. It’s controlling cache allocations correctly that allows Appformix to reduce jitter.

Singh stressed that it’s not just telcos that can benefit from this but also e-commerce sites and others who want to be able to offer their users a highly predictable experience.

The new feature is now available as part of AppFormix’s overall cloud optimization platform, which currently focuses on OpenStack and Kubernetes deployments.

Building a Highly Available OpenStack Cloud

I covered some details in my previous post about RPC-R's reference architecture. Here, I want to drill down even more into how we worked with Red Hat to build high availability into the control plane of the RPC-R solution.

As we know, component failures will happen in a cloud environment, particularly as that cloud grows in scale. However, users still expect their OpenStack services and APIs will be always on and always available, even when servers fail. This is particularly important in a public cloud or in a private cloud where resources are being shared by multiple teams.

The starting point for architecting a highly available OpenStack cloud is the control plane. While a control plane outage would not disrupt running workloads, it would prevent users from being able to scale out or scale in in response to business demands. Accordingly, a great deal of effort went into maintaining service uptime as we architected RPC-R, particularly the Red Hat OpenStack Platform, which is the OpenStack distribution at the core of RPC-R.

Red Hat uses a key set of open source technologies to create clustered active-active controller nodes for all Red Hat OpenStack Platform clouds. Rackspace augments that reference architecture with hardware, software and operational expertise to create RPC-R with our 99.99% API uptime guarantee. The key technologies for creating our clustered controller nodes are:

  • Pacemaker – A cluster resource manager used to manage and monitor the availability of OpenStack components across all nodes in the controller nodes cluster
  • HAProxy – Provides load balancing and proxy services to the cluster (Note that while HAProxy is the default for Red Hat OpenStack Platform, RPC-R uses F5 hardware load balancers instead)
  • Galera – Replicates the Red Hat OpenStack Platform database across the cluster

[Diagram: control plane]

Putting it all together, you can see in the diagram above that redundant instances of each OpenStack component run on each controller node in a collapsed cluster configuration managed by Pacemaker. As the cluster manager, Pacemaker monitors the state of the cluster and has responsibility for failover and failback of services in the event of hardware and software failures. This includes coordinating restart of services to ensure that startup is sequenced in the right order and takes into account service dependencies.

A three node cluster is the minimal size in our RPC-R reference architecture to ensure a quorum in the event of a node failure. A quorum defines the minimal number of nodes that must function for the cluster itself to remain functional. Since a quorum is defined as half the nodes + 1, three nodes is the smallest feasible cluster you can have. Having at least three nodes also allows Pacemaker to compare the content of each node in the cluster and in the event that inconsistencies are found, a majority rule algorithm can be applied to determine what should be the correct state of each node.

While Pacemaker is used to manage most of the OpenStack services, RPC-R uses the Galera multi-master database cluster manager to replicate and synchronize the MariaDB based OpenStack database running on each controller node. MariaDB is a community fork of MySQL that is used in a number of OpenStack distributions, including Red Hat OpenStack platform.

[Diagram: database reads and writes]

Using Galera, we are able to create an active-active OpenStack database cluster and do so without the need for shared storage. Reads and writes can be directed to any of the controller nodes and Galera will synchronize all database instances. In the event of a node failure, Galera will handle failover and failback of database nodes.
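As an illustration of how this replication is wired up, a Galera cluster is configured through a handful of wsrep settings on each node; a sketch (node names, addresses and paths are examples, not the RPC-R configuration):

# /etc/my.cnf.d/galera.cnf (sketch)
[mysqld]
binlog_format=ROW
default_storage_engine=InnoDB
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="openstack-db"
wsrep_cluster_address="gcomm://controller1,controller2,controller3"
wsrep_node_name="controller1"
wsrep_node_address="192.168.0.11"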

By default, Red Hat OpenStack Platform uses HAProxy to load balance API requests to OpenStack services running in the control plane. In this configuration, each controller node runs an instance of HAProxy and each set of services has its own virtual IP. HAProxy is also clustered together using Pacemaker to provide fault tolerance for the API endpoints.
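For illustration, an HAProxy stanza for a single OpenStack API (here Keystone) might look like the following sketch; the virtual IP and controller addresses are examples, not RPC-R values:

# fragment of /etc/haproxy/haproxy.cfg (sketch)
listen keystone_public
  bind 192.168.0.100:5000
  balance roundrobin
  option httpchk
  server controller1 192.168.0.11:5000 check
  server controller2 192.168.0.12:5000 check
  server controller3 192.168.0.13:5000 check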

[Diagram: Pacemaker]

As mentioned previously, Rackspace has chosen to use redundant hardware load balancers in place of HAProxy. Per the previous diagram, the Red Hat OpenStack Platform architecture is identical to RPC-R with the exception that we use F5 appliances in place of clustered HAProxy instances. We believe this option provides better performance and manageability for RPC-R customers.

[Diagram: external network]

An enterprise grade Private Cloud is achievable but requires a combination of the right software, a well thought-out architecture and operational expertise. We believe that Rackspace and Red Hat collaborating together is the best choice for customers looking for partners to help them build out such a solution.

Deploy Openstack Juno Single Node on Redhat/Centos 7.0

In this article I am going to explain and show you how to deploy an OpenStack IaaS cloud on your home or production server.

About OpenStack Cloud: It is an IaaS cloud platform, similar to AWS EC2,
but here you can deploy it at home for your own purposes.

A picture of the OpenStack cloud is given below.

Requirement:

1. RedHat/CentOS 7.0
2. 4 GB free RAM
3. 20 GB free hard disk
4. Virtualization support

Now follow the given steps:
Step 1: configure a yum client (live or local)
A) Local
[root@rhel7 html]# cd  /etc/yum.repos.d/
[root@station123 yum.repos.d]# cat  juno.repo
[redhat]
baseurl=http://192.168.1.254/rhel7
gpgcheck=0

[ashu]
baseurl=http://192.168.1.254/juno-rpm/
gpgcheck=0

 OR 

B) Live, which needs an internet connection

[root@rhel7 html]# cd  /etc/yum.repos.d/

 [root@rhel7 html]# yum install epel-release -y
[root@station123 yum.repos.d]# yum install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm 
Step 2:  Follow some security  rules for juno

i)   [root@station123 ~]# systemctl   stop  NetworkManager
ii)  [root@station123 ~]# systemctl   disable  NetworkManager   
iii)  [root@station123 ~]# setenforce  0
iv)  [root@station123 ~]# cat  /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

Note: here I have permanently set SELinux to permissive mode

Step 3: By default the required kernel is already installed, so just install openstack-packstack

[root@station123 ~]# yum  install openstack-packstack

Step 4: Now it is time to generate the answer file

[root@station123 ~]# packstack  --gen-answer-file   juno.txt

Note: this is a text-based file where you need to answer some points with 'y' or 'n'


Some  Important changes that are required

i) NTP server IP

find an NTP server in your local network or on the internet, then set its IP as given below

[root@station123 ~]# grep  -i   ntp   juno.txt 
# Comma separated list of NTP servers. Leave plain if Packstack
# should not install ntpd on instances.
CONFIG_NTP_SERVERS=192.168.1.254

ii) DEMO version  

[root@station123 ~]# grep -i demo  juno.txt 
# The password to use for the Keystone demo user
CONFIG_KEYSTONE_DEMO_PW=e2f0614ae15f44ff
# Whether to provision for demo usage and testing. Note that
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
# A URL or local file location for the Cirros demo image used for


iii)  EPEL  to no

[root@station123 ~]# grep -i  epel juno.txt 
# To subscribe each server to EPEL enter "y"
CONFIG_USE_EPEL=n
[root@station123 ~]# 

iv) Horizon SSL to yes

 [root@station123 ~]# grep -i  horizon  juno.txt 
# Dashboard (Horizon)
CONFIG_HORIZON_INSTALL=y
# specific to controller role such as API servers, Horizon, etc.
# To set up Horizon communication over https set this to 'y'
CONFIG_HORIZON_SSL=y
[root@station123 ~]# 


v) configure Heat and Ceilometer as per your requirements


Step 5: Now you can run the answer file
===========================

[root@station123 ~]# packstack  --answer-file  juno.txt 

it may ask for your root password and then will take almost 20 to 35 minutes

Important: if you get any error at this time, you can send me a snapshot


Step 6: now find your admin user password inside your /root directory

[root@station123 ~]# cat  keystonerc_admin 
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=8bb1c812ec16418e
export OS_AUTH_URL=http://192.168.1.123:5000/v2.0/
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
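
To use the command line clients, this file can simply be sourced first; a usage sketch:

[root@station123 ~]# source /root/keystonerc_admin
[root@station123 ~(keystone_admin)]# nova service-list
[root@station123 ~(keystone_admin)]# neutron agent-list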


Step 7: Now you can log in to the OpenStack panel and go for cloud creation

Now after login you will be presented with the OpenStack internal services,

as shown below.

Here are a few examples given by me.

Important: After login, almost everything can be done by clicking in the Project tab

A) Now you can create networks, routers and images as given below

i) Image creation  

In the image link you can choose an image from your hard disk or any HTTP URL.
Note: the URL must be in http:// format

now you are done with image creation
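
If you prefer the command line over the dashboard, the same can be done with the glance client; a sketch, using the public CirrOS test image URL as an example:

[root@station123 ~(keystone_admin)]# glance image-create --name cirros \
  --disk-format qcow2 --container-format bare --is-public True \
  --copy-from http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img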

ii)  Network creation 

In the Project tab click on the Network menu, then follow the steps as given below.

Private network 

Public network  –

Note: the public network will be the network address range where your base system has its IP address
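
The equivalent steps on the command line would look roughly like the sketch below (network names and the CIDRs are examples only):

[root@station123 ~(keystone_admin)]# neutron net-create private
[root@station123 ~(keystone_admin)]# neutron subnet-create private 10.0.0.0/24 --name private-subnet
[root@station123 ~(keystone_admin)]# neutron net-create public --router:external True
[root@station123 ~(keystone_admin)]# neutron subnet-create public 192.168.1.0/24 --name public-subnet --disable-dhcp
[root@station123 ~(keystone_admin)]# neutron router-create router1
[root@station123 ~(keystone_admin)]# neutron router-interface-add router1 private-subnet
[root@station123 ~(keystone_admin)]# neutron router-gateway-set router1 public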

Important : 

You can go ahead with the graphical panel, it is really very simple. If you face any issue you can revert to me.

Some Important Openstack Features

TYPES OF STORAGE PROVIDED BY OPENSTACK

OpenStack supports two types of storage:

1. Persistent Storage or volume storage

2. Ephemeral Storage

Persistent Storage / Volume Storage: It is persistent, which means it remains available later even when the instance is shut down, and it is independent of any particular instance. This storage is created by users.

Types of Persistent Storage

    • Object storage: It is used to access binary objects through the REST API.
    • Block storage: This is the traditional type of storage which we also see in our general computer systems.
    • Shared File System storage: It provides a set of services to manage multiple files together for storage.

Ephemeral Storage: It is temporary storage that disappears once the VM is terminated.
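
For example, a persistent volume can be created with Cinder and attached to a running instance, and it will survive that instance being terminated (the instance name and the volume ID are placeholders):

cinder create --display-name data-vol 5
nova volume-attach my-instance <volume-id> /dev/vdb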

What is a hypervisor? What types of hypervisors does OpenStack support?

A hypervisor is a piece of computer software or hardware that is used to create and run virtual machines.

A list of hypervisors that OpenStack supports:

  • KVM
  • VMware
  • Containers
  • Xen and HyperV

What is OpenStack?

MNC’s define OpenStack as the future of Cloud Computing. OpenStack is a platform to create and handle massive groups of virtual machines through a Graphical User Interface.
Openstack is free, open-source software and works similar to Linux.
Key components of OpenStack?
• Horizon: This is the GUI of openstack.

• Nova: Primary computing engine to manage multiple virtual machines and computing tasks

• Swift: This is a very robust service used for the object storage management.
• Cinder: Like our traditional computer storage system, it is a block storage system in OpenStack.
• Neutron: Used for the networking services in openstack
• Keystone: Identity Management service which uses tokens.
• Glance: image service provider. Images are the virtual copies of hard disks.
• Ceilometer: Provides telemetry services to cloud users.
• Heat (Orchestration Engine): It helps developers orchestrate automated infrastructure deployment

 

Capitulation? Mirantis refactors OpenStack on top of Kubernetes

First, the guts of the announcement: Mirantis, the bad boys of the OpenStack world, are today announcing a collaboration with Google (a company that has pretty much zero history with OpenStack) and Intel. Under the intent of the collaboration, the life cycle management tool for OpenStack, Fuel, will be rewritten so that it uses Kubernetes as its underlying orchestration.

Lots of inside baseball there, so what are all these different products?

  • OpenStack is the open source cloud computing operating system that was jointly created by Rackspace and NASA and has since built a massive following of companies (including IBM, HPE, Intel and many, many others).
  • Kubernetes is the open source orchestration platform loosely descended from the tools that Google uses internally to operate its own data centers.
  • Fuel, as stated previously, was (is) the OpenStack-native life cycle management tool for OpenStack.

So what does it all mean? Well, it’s actually far more important than first appearances would suggest. It marks, at least to some extent, an admission by all concerned that OpenStack isn’t the be-all and end-all of the infrastructure world.

That positioning, which might seem blindingly obvious to anyone who is aware of the heterogeneity of modern enterprise IT, somewhat goes against what we heard from the OpenStack camp for its first few years, when pundits would be excused for thinking that OpenStack was the solution for every possible situation. It seems now, however, that OpenStack is simply a part of the solution — and virtual machines, containers and bare-metal systems all have a part to play in enterprise IT going forward.

Under the terms of the collaboration, Mirantis will initiate a new continuous integration/continuous delivery (CI/CD) pipeline under the OpenStack Fuel project for building capabilities around containerized OpenStack deployment and operations. The resulting software will give users fine-grain control over the placement of services used for the OpenStack control plane, as well as the ability to do rolling updates of OpenStack, make the OpenStack control plane self-healing and more resilient, and smooth the path for the creation of microservices-based applications on OpenStack.

If that sounds familiar, that would be because it is much the same proposition that we heard from Alex Polvi of CoreOS fame a few months ago — the difference here is that it comes from an OpenStack player that is front-and-center of the movement, an arguably far more substantive statement.

And some big names have poured the love into this collaboration — in particular Mirantis and Google, originators of Kubernetes.

“With the emergence of Docker as the standard container image format and Kubernetes as the standard for container orchestration, we are finally seeing continuity in how people approach operations of distributed applications,” said Mirantis CMO Boris Renski. “Combining Kubernetes and Fuel will open OpenStack up to a new delivery model that allows faster consumption of updates, helping customers get to outcomes faster.”

Google Senior Product Manager Craig McLuckie also chimed in. “Leveraging Kubernetes in Fuel will turn OpenStack into a true microservice application, bridging the gap between legacy infrastructure software and the next generation of application development,” he said. “Many enterprises will benefit from using containers and sophisticated cluster management as the foundation for resilient, highly scalable infrastructure.”

Along with the initial work on the Fuel aspects, Mirantis will also become an active contributor to the Kubernetes project, and has stated the ambition to become a top contributor to the project over the next year.

Alongside that, Mirantis has joined the Cloud Native Computing Foundation, a Linux Foundation project dedicated to advancing the development of cloud-native applications and services, as a Silver member.

MyPOV

This is a big deal, there’s no denying that. OpenStack is slowly but inexorably becoming less of a “solution for everything” and more of an integral part. Skeptics would suggest that this marks a turning point where OpenStack ceases to be a compelling long-term proposition in and of itself and becomes simply a stop-gap measure between traditional architectures and more cloud-native approaches.

The reality is probably somewhere in the middle — and OpenStack will still have a part to play in infrastructure going forward — but clearly Mirantis’ move to embrace Kubernetes is an indication that it realizes that it needs to extend beyond a pure-play OpenStack offering.

As always, this space provides huge interest and much entertainment — a situation that looks unlikely to change anytime soon.
