Boot integration of Open vSwitch in Ubuntu

Installing Open vSwitch on Ubuntu automatically integrates it into the boot sequence: when Ubuntu boots, Open vSwitch is started as well, and any bridge or port defined in previous sessions is restored.
BUT: no L3 interfaces are set up, and stale interfaces defined in previous sessions are restored as well.

The ideal Open vSwitch start sequence delivers:

  • an empty Open vSwitch database during Ubuntu startup –> clean the Open vSwitch database (patch necessary)
  • define all bridges, ports, interfaces, … using /etc/network/interfaces –> this is supported
  • bring up all defined L3 interfaces on Open vSwitch internal interfaces –> patch necessary

Ubuntu 16.04 uses systemd to start the system. Older systems using upstart now have a fixed upstart script (see below), but the systemd scripts do not, out of the box, allow an automatic setup of OVS interfaces. How to fix it? See below… 🙂 Let’s hope that Canonical provides a patch for 16.04 before 18.04 shows up…

With Ubuntu 16.10, boot integration works out of the box.

Define bridges, ports, interfaces using /etc/network/interfaces

All configuration items necessary to set up Open vSwitch can be defined in /etc/network/interfaces.

Define a bridge

All ports and interfaces on Open vSwitch must be attached to a bridge. Defining a bridge (which is a virtual switch) requires the following configuration items:
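
A minimal sketch of the bridge stanza (bridge name vmbr-int as used in this post):

# /etc/network/interfaces – Open vSwitch bridge (sketch)
auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
  ovs_type OVSBridge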

The configuration lines are:

  • auto is necessary to allow an automatic start (Ubuntu 16.04 using systemd)
  • allow-ovs is the marker for an Open vSwitch bridge. The bridge name follows, here vmbr-int
  • iface is the well-known /etc/network/interfaces identifier to start an interface configuration
  • ovs_type defines the interface type for ovs. OVSBridge identifies an Open vSwitch bridge

The lines above are the equivalent of the command ovs-vsctl add-br vmbr-int.

Define an L2 port (untagged)

The next step is to attach a port (not seen by Linux) to the bridge above.
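
A sketch of the port stanza (port name l2port and VLAN 444 as in the text):

# untagged L2 port attached to bridge vmbr-int (sketch)
allow-vmbr-int l2port
iface l2port inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options tag=444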

The configuration lines are:

  • allow-[name of the bridge] defines the bridge to which the port should be attached. In our example this is vmbr-int, so the line reads allow-vmbr-int. The name of the port follows (here l2port)
  • iface is the well-known /etc/network/interfaces identifier to start an interface configuration; set the configuration method to manual
  • ovs_bridge holds the name of the bridge to which the port should be attached. In our example this is vmbr-int.
  • ovs_type defines the interface type for ovs. OVSPort identifies an Open vSwitch port not seen by the Linux operating system
  • ovs_options defines additional options for the port – here an untagged port that is a member of VLAN 444. If this option is not set, the port may carry all VLAN tags, because by default all ports transport all VLAN tags.

The bad thing is that the bridge (vmbr-int) must be referenced twice. But this is not the whole truth: the definition is required a third time. The complete definition for the bridge and port is now:
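
A sketch combining both stanzas, including the extra ovs_ports line discussed below:

# complete definition: bridge plus attached port (sketch)
auto vmbr-int
allow-ovs vmbr-int
iface vmbr-int inet manual
  ovs_type OVSBridge
  ovs_ports l2port

allow-vmbr-int l2port
iface l2port inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options tag=444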

This is the summary definition. It contains one additional line in the bridge’s iface definition:

  • ovs_ports holds the list of all ports which should be attached to the bridge

The port added to the bridge in the example above can also be an existing physical NIC. The example below shows the configuration to add a physical interface to a bridge:
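
A sketch, assuming the physical NIC is called eth1 (an example name):

# attach the physical NIC eth1 to the bridge (sketch; eth1 is an example)
allow-vmbr-int eth1
iface eth1 inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort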

Define an L3 interface

The definition of an L3 interface requires the following configuration lines:
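
A sketch of an internal port carrying an IP address (the address/netmask values are examples):

# L3 interface on an OVS internal port (sketch; address is an example)
allow-vmbr-int l3port
iface l3port inet static
  ovs_bridge vmbr-int
  ovs_type OVSIntPort
  address 192.168.10.1
  netmask 255.255.255.0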

The differences compared to the L2 port are:

  • iface – set the method to static, because an IP address is configured on this interface
  • ovs_type – an L3 port must be defined as an internal port, so it must be set to OVSIntPort
  • address and netmask are the well-known parameters for an IPv4 address/netmask combination

Do not forget to add the port to the bridge definition of vmbr-int.

Define an L2 port (tagged/trunking)

The definition of a tagged L2 port looks like this:
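
A sketch of a trunking port limited to two VLANs (port name and VLAN IDs are examples):

# tagged/trunking L2 port (sketch; the VLAN list is an example)
allow-vmbr-int l2trunkport
iface l2trunkport inet manual
  ovs_bridge vmbr-int
  ovs_type OVSPort
  ovs_options trunks=2001,2002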

A trunking port does not require any additional definition. In the example above, however, we want to limit the VLANs on the port, and this requires a trunk definition using the ovs_options line.

Define a fake bridge

It is also possible to add the definition of an OVS fake bridge:
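
A sketch, using the parent bridge vmbr-int and VLAN 2102 from the text:

# fake bridge on parent bridge vmbr-int, VLAN 2102 (sketch)
allow-ovs fakebr
iface fakebr inet manual
  ovs_type OVSBridge
  ovs_options vmbr-int 2102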

A fake bridge definition requires a special option:

  • ovs_options must be set to vmbr-int 2102

On the command line the command would be: ovs-vsctl add-br fakebr vmbr-int 2102. The ovs_options value consists of the last two arguments needed on the command line.

Define a tagged LACP bond port

It is also possible to define a tagged bond port using LACP as the bonding protocol:
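
A sketch using the member interfaces eth4 and eth5 mentioned below (the bond name, VLAN list, and option values are examples):

# tagged LACP bond attached to vmbr-int (sketch; option values are examples)
allow-vmbr-int bond0
iface bond0 inet manual
  ovs_bridge vmbr-int
  ovs_type OVSBond
  ovs_bonds eth4 eth5
  ovs_options bond_mode=balance-tcp lacp=active trunks=10,20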

The options are:

  • ovs_type must be set to OVSBond
  • ovs_bonds holds the list of interfaces to be added to the bond – here eth4 and eth5
  • ovs_options holds all the extra options for the bond and the trunk. These are the same options that would be used on the command line

Interface helper script

ifup and ifdown use a helper script that is part of the openvswitch-switch package. This script, named openvswitch, is located in /etc/network/if-pre-up.d/ and executes all the necessary commands.

The only feature the script is missing is support for changing the MTU of Open vSwitch bridges, ports, and interfaces.

Boot support (Ubuntu 16.04 – when using systemd)

Canonical did it again – delivered an OS which does not support the autostart of OVS interfaces. The upstart scripts had been fixed to allow the autostart of OVS interfaces, but now systemd is used – and the automatic start of OVS interfaces is broken again.

A fix is available; see the Launchpad bug. The systemd unit file for OVS needs additional dependencies: OVS must be started before the network interfaces are set up.
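
A sketch of the kind of ordering directives the [Unit] section needs (the rest of the stock unit file is assumed to stay unchanged; check the Launchpad bug for the exact file):

# /lib/systemd/system/openvswitch-nonetwork.service – [Unit] ordering (sketch)
[Unit]
Description=Open vSwitch Internal Unit
DefaultDependencies=no
Wants=network-pre.target
Before=network-pre.target networking.service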

Amend /lib/systemd/system/openvswitch-nonetwork.service along these lines, and you’re done.

To start with an empty Open vSwitch configuration, an option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch.
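
One way to do this is via OVS_CTL_OPTS, assuming the packaged init scripts pass it through to ovs-ctl, which supports a --delete-bridges flag:

# /etc/default/openvswitch-switch (sketch)
OVS_CTL_OPTS='--delete-bridges'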

Boot support (Ubuntu 14.04 and later – when using upstart)

Ubuntu 14.04 has better boot support for Open vSwitch, but it is still not perfect: bringing the interfaces up and down is still missing.

The same option to delete all existing bridges should be set in the config file /etc/default/openvswitch-switch (see the snippet above).

The upstart script must be patched in two places to start and stop the OVS components and the associated interfaces.

You must insert the lines shown below between “exit $?” and “end script” to bring up the Open vSwitch interfaces.
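
A sketch of such lines, assuming ifupdown’s ifquery is available to list all interfaces marked allow-ovs:

    # bring up all OVS interfaces defined in /etc/network/interfaces (sketch)
    ifup --allow=ovs $(ifquery --allow=ovs --list) || true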

The shutdown sequence requires the following patch:
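
Again a sketch, mirroring the startup lines in a pre-stop stanza:

pre-stop script
  # take the OVS interfaces down before the switch itself stops (sketch)
  ifdown --allow=ovs $(ifquery --allow=ovs --list) || true
end script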

There is still a dependency: the openvswitch start script assumes that the loopback interface is brought up as the last interface.

Boot support (pre Ubuntu 14.04)

Boot support for Open vSwitch is implemented very differently depending on the Open vSwitch version, the Ubuntu version, and the package repository. Up to now there is NO support for bringing up the interfaces automatically – in any case, a patch is required.

All Ubuntu distributions (as of November 2013) use Open vSwitch packages which do not have an openvswitch upstart script. One way to bring up interfaces here is to add a few lines to /etc/rc.local or to patch /etc/init.d/openvswitch-switch.

The necessary lines for /etc/rc.local would be:
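
A sketch of such lines, again assuming ifquery (place them before the final exit 0):

# /etc/rc.local – bring up all OVS interfaces (sketch)
ifup --allow=ovs $(ifquery --allow=ovs --list)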

You may instead add the necessary lines to /etc/init.d/openvswitch-switch, but the interfaces may come up too late, because this non-upstart script starts late in the boot sequence.

If you are using the openvswitch package from the Havana Ubuntu Cloud Archive [deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main], you have an openvswitch-switch upstart script. This must be patched to bring up all OVS network components.

As in the 14.04 case above, you must insert the lines between “exit $?” and “end script” to bring up the Open vSwitch interfaces.

The shutdown sequence requires the same patch as shown in the 14.04 section above.

There is still a dependency: “ifup -a” in networking.conf brings up the loopback interface as the last interface. This is the trigger that emits the event “loopback interface is up”, which then triggers the openvswitch-switch upstart script. This might be too late. Hopefully there will be a fully integrated solution for Ubuntu in a future release.

Hooroo! Australia bids farewell to incredible OpenStack Summit

We have reached the end of another successful and exciting OpenStack Summit. Sydney did not disappoint, giving attendees a wonderful show of weather ranging from rain and wind to bright, brilliant sunshine. The running joke was that Sydney was, again, just trying to be like Melbourne. Most locals will get that joke, and hopefully now some of our international visitors do, too!

Monty Taylor (Red Hat), Mark Collier (OpenStack Foundation), and Lauren Sell (OpenStack Foundation) open the Sydney Summit. (Photo: Author)

And much like the varied weather, the Summit really reflected the incredible diversity of both technology and community that we in the OpenStack world are so incredibly proud of. With over 2300 attendees from 54 countries, this Summit was noticeably more intimate but no less dynamic. Often having a smaller group of people allows for a more personal experience and increases the opportunities for deep, important interactions.

To my enjoyment I found that, unlike previous Summits, there wasn’t as much of a singularly dominant technological theme. In Boston it was impossible to turn a corner and not bump into a container talk. While containers were still a strong theme here in Sydney, I felt the general impetus moved away from specific technologies and into use cases and solutions. It feels like the OpenStack community has now matured to the point that it’s able to focus less on each specific technology piece and more on the business value those pieces create when working together.

Jonathan Bryce (OpenStack Foundation) (Photo: Author)

It is exciting to see both Red Hat associates and customers following this solution-based thinking with sessions demonstrating the business value that our amazing technology creates. Consider such sessions as SD-WAN – The open source way, where the complex components required for a solution are reviewed, and then live demoed as a complete solution. Truly exceptional. Or perhaps check out an overview of how the many components to an NFV solution come together to form a successful business story in A Telco Story of OpenStack Success.

At this Summit I felt that while the sessions still contained the expected technical content they rarely lost sight of the end goal: that OpenStack is becoming a key, and necessary, component to enabling true enterprise business value from IT systems.

To this end I was also excited to see over 40 sessions from Red Hat associates and our customers covering a wide range of industry solutions and use cases. From telcos to insurance companies, it is really exciting to see both our associates and our customers, especially those in Australia and New Zealand, sharing their experiences with our solutions with the world.

Mark McLoughlin, Senior Director of Engineering at Red Hat with Paddy Power Betfair’s Steven Armstrong and Thomas Andrew getting ready for a Facebook Live session (Photo: Anna Nathan)

Of course, there were too many sessions to attend in person, and with the wonderfully dynamic and festive air of the Marketplace offering great demos, swag, food, and, most importantly, conversations, I’m grateful for the OpenStack Foundation’s rapid publishing of all session videos. It’s a veritable pirate’s bounty of goodies and I recommend checking it out sooner rather than later on their website.

I was able to attend a few talks from Red Hat customers and associates that really got me thinking and excited. The themes were varied, from the growing world of Edge computing, to virtualizing network operations, to changing company culture; Red Hat and our customers are doing very exciting things.

Digital Transformation

Take for instance Telstra, who are using Red Hat OpenStack Platform as part of a virtual router solution. Two years ago the journey started with a virtualized network component delivered as an internal trial. This took a year to complete and was a big success from both a technological and cultural standpoint. As Senior Technology Specialist Andrew Harris from Telstra pointed out during the Q and A of his session, projects like this are not only about implementing new technology but also about “educating … staff in Linux, OpenStack and IT systems.” It was a great session co-presented with Juniper and Red Hat and really gets into how Telstra are able to deliver key business requirements such as reliability, redundancy, and scale while still meeting strict cost requirements.

Of course this type of digital transformation story is not limited to telcos. The use of OpenStack as a catalyst for company change as well as advanced solutions was seen strongly in two sessions from Australia’s Insurance Australia Group (IAG).

Eddie Satterly, IAG (Photo: Author)

Product Engineering and DataOps Lead Eddie Satterly recounted the journey IAG took to consolidate data for a better customer experience using open source technologies. IAG uses Red Hat OpenStack Platform as the basis for an internal open source revolution that has not only led to significant cost savings but has even resulted in the IAG team open sourcing some of the tools that made it happen. Check out the full story of how they did it and join TechCrunch reporter Frederic Lardinois, who chats with Eddie about the entire experience. There’s also a Facebook live chat Eddie did with Mark McLoughlin, Senior Director of Engineering at Red Hat, that further tells their story.

Ops!

An area of excitement for those of us with roots in the operational space is the way that OpenStack continues to become easier to install and maintain. The evolution of TripleO, the upstream project for Red Hat OpenStack Platform’s deployment and lifecycle management tool known as director, has really reached a high point in the Pike cycle. With Pike, TripleO has begun utilizing Ansible as the core “engine” for upgrades, container orchestration, and lifecycle management. Check out Senior Principal Software Engineer Steve Hardy’s deep dive into all the cool things TripleO is doing, and learn just how excited the new “openstack overcloud config download” command is going to make you and your Ops team.

Steve Hardy (Red Hat) and Jaromir Coufal (Red Hat) (Photo: Author)

And as a quick companion to Steve’s talk, don’t miss his joint lightning talk with Red Hat Senior Product Manager Jaromir Coufal, lovingly titled OpenStack in Containers: A Deployment Hero’s Story of Love and Hate, for an excellent 10-minute intro to the journey of OpenStack, containers, and deployment.

Want more? Don’t miss these sessions …

Storage and OpenStack:

Containers and OpenStack:

Telcos and OpenStack:

A great event

Although only 3 days long, this Summit really did pack a sizeable amount of content into that time. Being able to have the OpenStack world come to Sydney and enjoy a bit of Australian culture was really wonderful. Whether we were watching the world-famous Melbourne Cup horse race with a room full of OpenStack developers and operators, or cruising Sydney’s famous harbour and talking the merits of cloud storage with the community, it really was a unique and exceptional week.

The Melbourne Cup is about to start! (Photo: Author)

The chance to see colleagues from across the globe, immersed in the technical content and environment they love, supporting and learning alongside customers, vendors, and engineers is incredibly exhilarating. In fact, despite the tiredness at the end of each day, I went to bed each night feeling more and more excited about the next day, week, and year in this wonderful community we call OpenStack!

Winter Industrial Training 2017 on the most demanded technology – BigData Hadoop with Spark & Cassandra – By Industry Expert Mr. Vimal Daga

Learn all the latest technologies under One Training Program namely BigData Hadoop, Spark, Cassandra, Splunk, Dockers, Ansible, Linux, Python and ….

BigData is the latest buzzword in the IT industry. Apache’s Hadoop is a leading Big Data platform used by IT giants Yahoo, Facebook & Google. This training is geared to make you a Hadoop expert.

Ready to be placed as Data Engineer Or Data Analyst OR Hadoop Administrator?
——————————————————————————–
Spark – Real Time Processing integrated with BigData Hadoop (also including Cassandra) – learn everything from the basics up to the production environment level, including real-time use cases

Know More about Your Trainer – Mr. Vimal Daga – https://www.linkedin.com/in/vimaldaga

Technologies Included in the Training :
1. BigData Hadoop
2. Spark – Real Time Streaming
3. Cassandra
4. RedHat Linux
5. Python Programming
6. DevOps – Ansible
7. Docker
8. Splunk

Schedule for Winter Training 2017 – 2018 : 9th Dec / 23rd Dec, 2017

Applications Open for BE / BTech Winter Training : http://www.lwindia.com/winter-training-application-form.php

Course Content – http://www.lwindia.com/Winter-Training-on-BigData-Hadoop.php

To get in touch with us :
Call us at +91 9829105960,0141-2501609
Email Id : hr@lwindia.com
FB page: https://www.facebook.com/LinuxWorld.India
Website : www.lwindia.com

The OpenStack Foundation acknowledges the changing landscape

You may have noticed, based on a couple of recent articles from our co-founder, Boris Renski, that Mirantis has changed from being a company focused on building a cloud for you to being a company that helps you get things done in the cloud.  His first blog on the subject, “Infrastructure Software is Dead,” talked about how companies don’t want software, they want the outcomes the software is designed to produce, and the software itself is almost irrelevant.  Now he’s published a second blog, “Can Google Android AWS,” taking the concept a step further and proposing that part of Google’s plan in open sourcing and championing Kubernetes is to destroy cloud switching costs, making it easier for people to choose a cloud other than AWS.

And he’s got a point. Kubernetes isn’t the only tool Google’s been championing; the Istio service mesh project and the Spinnaker deployment tool come to mind.  The Cloud Native Computing Foundation gets proposals for new member projects constantly.

So what’s going on here?

In a word, the cloud itself is becoming commoditized; what’s important is how you use it.  We’ve been beating that drum here at Mirantis for a while now, first with our foundation of Infrastructure as Code, and now with our DriveTrain deployment tool and StackLight monitoring. People thought it was a little weird at first, especially when Mirantis, traditionally an “OpenStack company” added Kubernetes to our roster.

Don’t get the wrong idea; we still do OpenStack, and we’re darn good at it. But this notion of open infrastructure is definitely where things are going.

Witness some of this week’s announcements from the OpenStack Foundation. Speaking from Sydney, Australia at the Fall 2017 OpenStack Summit, executive director of the OpenStack Foundation Jonathan Bryce talked about the Foundation’s plans to focus more on integrations and use cases.

The foundation will be “investing significant financial and technical resources in a four-part strategy to address integration of OpenStack and relevant open source technologies: documenting cross-project use cases, collaborating across communities, including upstream contributions to other open source projects, fostering new projects at the OpenStack Foundation, and coordinating end-to-end testing across projects.”

Great, but what does that actually mean?

Documenting cross-project use cases

Now that OpenStack has finally got a firm footing in the enterprise, the foundation plans to focus on use cases, rather than the individual technologies necessary for enabling those use cases.  In general the plan is to move from the traditional to the emerging use cases, including:

  • Datacenter cloud infrastructure
  • CI/CD
  • Container infrastructure
  • Edge infrastructure

To enable all of these use cases, the foundation will be focusing on more than just OpenStack.

Collaborating across communities

While the OpenStack community has been producing actual software, one of the projects that uses it has a different model.  The OPNFV project doesn’t actually produce software; instead, it works with other communities to define an overall system of projects to accomplish goals, doing gap analysis and producing code within those other communities to close those gaps.

The OpenStack Foundation will be doing something similar going forward, working to make sure that open source projects work properly together in order to enable its target use cases.

“As open source leaders, we’ll fail our user base if we deliver innovation without integration, meaning the operational tools and knowledge to put it all into production,” said Bryce. “By working together across projects, we can invest in closing this gap and ensuring that infrastructure is not only open but also consumable by anyone in the world.”

Fostering new projects at the OpenStack Foundation

Perhaps most interesting, however, is the fact that the foundation isn’t planning to limit itself to contributing to other projects; it is also considering taking additional projects under its wing, similar to the way in which the Cloud Native Computing Foundation (CNCF) provides support for the creation of specific technologies.

There’s one very interesting aspect of this goal, however: these projects will not necessarily follow the same governance model as OpenStack. And that makes sense; every community is different, so the foundation wants to make sure that each project owns its own progress.

Coordinating end-to-end testing across projects

Of course, it doesn’t matter what you put together if it can’t be used.  The OpenStack Foundation is also talking about OpenLab, “a community-led program to test and improve support for the most popular Software Development Kits (SDKs)—as well as platforms like Kubernetes, Terraform, Cloud Foundry and more—on OpenStack. The goal is to improve the usability, reliability and resiliency of tools and applications for hybrid and multi-cloud environments.”

Coming next

All of this comes as not just OpenStack, but the entire technology industry is trying to decide what it wants to be when it grows up. Technology is changing fast, leaving a huge gap between those who want to stay on the bleeding edge and those who want to use older, more tried-and-true (read: stable) software. In the coming months, we’ll see how the OpenStack Foundation bridges this gap in order to stay relevant as application developers increasingly move up the stack.

How to install OpenStack on your local machine using Devstack

On November 7, we did a mini-course on Top Sysadmin Tasks and How to Do Them with OpenStack, and we promised to give you instructions on how to install OpenStack on your own laptop. Here they are.

The purpose of this guide is to allow you to install and deploy OpenStack on your own laptop or cloud VM and follow the webinar exercises at home. The guide is both hardware and OS agnostic and can be used with AMD or Intel and Windows, Mac, or Linux. Furthermore, the installation below is completely isolated and self-contained using virtual machines. The guide is provided as-is, and Mirantis, Inc. is not responsible for any issues that may occur as a result of following it.

Step 1.

Create a Linux VM locally on your computer (for example, using VirtualBox) or remotely in the cloud (for example on AWS).

The VM needs to satisfy the following conditions:

  • The VM needs at least 4 GB of memory and access to the Internet, and it needs to be accessible (at least) on TCP ports 22 (or via console) and 80 from your laptop.
  • DevStack attempts to support Ubuntu 16.04/17.04, Fedora 24/25, CentOS/RHEL 7, as well as Debian and openSUSE.
  • If you do not have a preference, Ubuntu 16.04 is the most tested and will probably go the smoothest.

Warning:

DevStack will make substantial changes to your system during installation. Only run DevStack on servers or virtual machines that are dedicated to this purpose.

Step 2.

Log into the VM using ssh or the console.

Step 3.

You’ll need to run DevStack as a non-root user with sudo enabled. You can quickly create a separate user called stack to run DevStack with:

sudo useradd -s /bin/bash -d /opt/stack -m stack

Step 4.

Because this user, stack, will be making many changes to your system, it should have sudo privileges:

echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack

You should see output of:

stack ALL=(ALL) NOPASSWD: ALL

Step 5.

Now you need to switch over to use the user stack:

sudo su - stack

Step 6.

Download DevStack:

git clone https://github.com/openstack-dev/devstack.git -b stable/pike devstack/

You should see:

Cloning into 'devstack'...

Notes

The above command downloads the OpenStack Pike release. You can also check the list of available releases. To download a release in EOL (End Of Life) status, replace -b stable/pike with, for example, -b liberty-eol (for Liberty).

Step 7.

Change to the devstack directory:

cd devstack/

Step 8.

Determine your IP address:

sudo ifconfig

You will see a response something like this:

enp0s3 Link encap:Ethernet  HWaddr 08:00:27:ea:97:9f  
       inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
       inet6 addr: fe80::d715:6025:f469:6f7c/64 Scope:Link
       UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
       RX packets:1364291 errors:0 dropped:0 overruns:0 frame:0
       TX packets:406550 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:1415725472 (1.4 GB)  TX bytes:27942474 (27.9 MB)

lo     Link encap:Local Loopback  
       inet addr:127.0.0.1  Mask:255.0.0.0
       inet6 addr: ::1/128 Scope:Host
       UP LOOPBACK RUNNING  MTU:65536  Metric:1
       RX packets:229208 errors:0 dropped:0 overruns:0 frame:0
       TX packets:229208 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:89805124 (89.8 MB)  TX bytes:89805124 (89.8 MB)

You’re looking for the addr: value in an interface other than lo. For example, in this case, it’s 10.0.2.15. Make note of that for the next step. If you run your VM in the cloud, make sure to use the public IP of your VM in the dashboard link instead.

Step 9.

Create the local.conf file with 4 passwords and your host IP, which you determined in the last step:

cat >  local.conf <<EOF
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=\$ADMIN_PASSWORD
RABBIT_PASSWORD=\$ADMIN_PASSWORD
SERVICE_PASSWORD=\$ADMIN_PASSWORD
HOST_IP=10.0.2.15
RECLONE=yes
EOF

This is the minimum required config to get started with DevStack.

Step 10.

Now install and run OpenStack:

./stack.sh

At the end of the process you should see output something like this:

This is your host IP address: 10.0.2.15
This is your host IPv6 address: ::1
Horizon is now available at http://10.0.2.15/dashboard
Keystone is serving at http://10.0.2.15/identity/
The default users are: admin and demo
The password: secret
DevStack Version: pike
OS Version: Ubuntu 16.04 xenial
stack.sh completed in 1221 seconds.

This will take 15 to 20 minutes, largely depending on the speed of your Internet connection. Many git trees and packages are installed during this process.

Step 11.

Access your OpenStack by copy-pasting the link from the above output into your web browser:

http://<HOST_IP>/dashboard

Use the default users demo or admin and the password you configured in step 9 (secret).

You should see the following log-in window in your browser:

[Screenshot: the OpenStack log-in page]

From here, you can do whatever you need to, either from the Horizon interface, or by downloading credentials to use the command line.
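
For the command line, a sketch of the usual DevStack flow with its bundled openrc file (user and project admin/admin here; demo also works):

# from the devstack directory: load credentials, then try a command (sketch)
source openrc admin admin
openstack server list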

You can find out how to do some of the more common tasks by viewing the original webinar, or check out our OpenStack Training resources.

Enterprise Hybrid Cloud: Strategy and Cost Considerations

A clearly defined cloud strategy is essential during any enterprise’s transition to a cloud-based IT environment. To minimize risks and make the right choice between private, public, and hybrid cloud options, it is important to devise an effective enterprise cloud strategy with the following considerations in mind:

  1. Security and Control
    • Is it important to have physical access to the infrastructure and platforms that are used to provision workloads?
    • Are there any security and compliance requirements that need to be met for customers?
  2. Agility and Flexibility
    • Which cloud choice will adequately meet resource demands for developer and business needs?
    • Which cloud environment will adequately support customers during high demand/peak usage times?
  3. Application Complexity Management
    • Does the IT environment need to support legacy, virtualized, microservices-based, and serverless applications?
  4. Operating Cost Reduction
    • What will the costs of deploying, managing, and maintaining IT infrastructure be with each cloud choice?

Cloud Strategy Definition: Mapping Strategic Considerations to Market Segments

The following chart depicts the importance of these four criteria across large, mid-market, and SMB segments as observed by Platform9:

The importance of each of these strategic considerations can be understood as follows:

  • In large organizations, security and control, application complexity management, and operating cost reduction are typically non-negotiable. Agility and flexibility might have lesser importance in comparison with the other considerations.
  • For the mid-market segment, application complexity management and operating cost reduction are more important than the other two factors.
  • For the SMB segment, agility and flexibility, as well as operating cost reduction, are usually much higher priorities than the other two considerations. These organizations are willing to make tradeoffs for security and control and application complexity management.

Cloud Strategy Definition: Mapping Strategic Considerations to Cloud Options

Let us now discuss how these four factors can influence cloud strategy and choice.

Security and Control: Private clouds are considered more secure than public clouds for these two reasons:

  • Data stored in a public cloud is placed on the cloud provider’s storage and within its data centers. It is not possible to ascertain who among the provider’s numerous employees and contractors has access to your data, or whether the data has been accessed by any of those individuals.
  • Private clouds offer more control and customization over security configurations. With a public cloud provider, you are restricted to the limited set of security tools on offer.

Agility and Flexibility: A public cloud provides greater agility and flexibility in comparison to a private cloud. Public clouds, such as Amazon Web Services and Google Cloud Platform, offer massive amounts of capacity and deployments can be architected to better handle dramatic demand spikes. As a result, it can be easier to meet requirements of developers and customers.

Application Complexity Management: Public clouds offer a wide selection of operating systems, virtual machines, microservices, and serverless technologies, and the tools to manage and monitor these types of applications. With a private cloud, you will invest more time and resources for management of newer application paradigms, such as microservices and serverless applications, since existing tools will not offer these management capabilities.

Operating Cost Reduction: Each of the three major public cloud vendors tries to closely match the others on price, and charges only for resources consumed (utility-style pricing). In addition, it is no longer necessary to have an IT team for deploying, maintaining, and upgrading on-premises hardware, hypervisors, and operating systems. However, as experienced by several Platform9 customers, public cloud costs can skyrocket due to large-volume data transfers and unmonitored resources.

It is clear that neither a private-cloud-only nor a public-cloud-only approach can address all of these strategic considerations by itself. Either approach requires enterprises to make tradeoffs.

The importance of a hybrid cloud arises in such situations. A hybrid cloud allows you to harness select benefits from both the public and private cloud pathways while balancing the trade-offs.

Designing the Enterprise Hybrid Cloud: How Should Workloads be Distributed Between Private and Public Clouds

A hybrid cloud can be architected to include a significant proportion of workloads on either the public cloud or the private cloud. In order to intelligently implement a hybrid cloud, it is important to decide what proportion of an enterprise’s workloads will reside on-premises and in the public cloud. The ideal mix of public and on-premises workloads can be determined by considering both cost factors and the type of workload.

Determining Hybrid Cloud Workload Locations Based on Costs

In general, adopting an enterprise hybrid cloud strategy with a combination of proprietary-software-based private clouds and public cloud(s) will be more expensive than adopting a public cloud only strategy. However, operating a Platform9 SaaS-managed, open source based hybrid cloud solution can provide significant cost savings over a public cloud only strategy. The cost savings achievable in such cases will vary based on the enterprise size (which largely aligns with IT complexity) and the workload distribution between private/public cloud locations.

The following chart shows the cost advantage enterprises can achieve through a hybrid cloud strategy enabled by Platform9 over a public cloud only approach (i.e., 100% of workloads on a public cloud such as AWS).

As can be seen from the chart, for all enterprise segments, as more workloads are located on-prem (and correspondingly, fewer workloads reside in the public cloud), the cost savings of a hybrid cloud approach are superior relative to a public cloud only option.

Large and mid-market enterprises can achieve higher cost savings over a public cloud only approach by locating a very large proportion (more than 80%) of their workloads on-premises. For example, with 90% of workloads on-premises and 10% on AWS, the cost savings are 55% for large enterprises and 51% for mid-market enterprises, but only 6% for SMBs. The study shows that, for large enterprises, if less than 45% of workloads are on-premises, there is no economic gain over running all workloads on AWS. With such a workload distribution, it might make more sense to consider a 100% public cloud approach; more workloads have to be located on-premises to realize cost savings if a hybrid cloud approach is desired. A similar situation is observed for mid-market enterprises with less than 35% of workloads on-premises.

The study also indicates that for smaller enterprises, with lower IT complexity and a smaller number of workloads, a public cloud only approach might be more suitable than a hybrid cloud.

Determining Hybrid Cloud Workload Locations Based on Criticality to Enterprise

Workloads can also be classified as mission-critical or business-critical to arrive at the right on-premises and public cloud mix. Mission critical workloads are those that are essential to run your business: product/service delivery to your customer, financial transactions for products/services offered, manufacturing, and so on. Mission-critical applications should typically be deployed on-premises to allow the IT team to proactively monitor the entire application and infrastructure stack, and respond to application and system events on a 24×7 schedule. On-premises deployment for mission-critical workloads will also provide more control over security configurations.

Business-critical workloads are those that can tolerate some downtime. Examples include email services provided by Office 365, business intelligence workloads, etc. Using the public cloud for these workloads can reduce operational expenditure without creating substantial risk for the company. Such a strategy also allows the in-house IT team to focus on “keeping the lights on” for mission-critical workloads.

WINTER INDUSTRY FOCUSED TRAINING 2016

MAKING INDIA, VIRTUAL READY – CLOUD !!
WINTER INDUSTRY FOCUSED TRAINING 2016 – 2017
(Exclusively for our LinuxWorld Students only)
———————————————————————————
Completely designed & delivered by Mr. Vimal Daga – incorporating the actual needs of the corporate world (use cases)

Utilise your Winter Vacations at right place under the right guidance of Mr. Vimal Daga

Training Content: RedHat Linux + Python + Cloud Computing + Docker + DevOps + Splunk

Fee – INR 3,500 + Service Tax

Old Cloud Computing Students – Free of Cost

To know more : http://bit.ly/2fkec2k

Let’s join hands together for our very own mission – Stand for What’s Best – Be the Hero of your Life!!

Admin Contact: +91 9351009002
Email: training@linuxworldindia.org
LinuxWorld Informatics Pvt. Ltd. has initiated a Winter Training Program for a span of 4 / 6 / 8 weeks, wherein you will learn the most in-demand technologies in the market.

53 new things to look for in OpenStack Newton (plus a few more)

OpenStack Newton, the technology’s 14th release, shows just how far we’ve come: where we used to focus on basic things, such as supporting specific hypervisors or enabling basic SDN capabilities, now that’s a given, and we’re talking about how OpenStack has reached its goal of supporting cloud-native applications in all of their forms — virtual machines, containers, and bare metal.

There are hundreds of changes and new features in OpenStack Newton, and you can see some of the most important in our What’s New in OpenStack Newton webinar.  Meanwhile, as we do with each release, let’s take a look at 53 things that are new in OpenStack Newton.

Compute (Nova)

  1. Get me a network enables users to let OpenStack do the heavy lifting rather than having to understand the underlying networking setup (see the sketch after this list).
  2. A default policy means that users no longer have to provide a full policy file; instead they can provide just those rules that are different from the default.
  3. Mutable config lets you change configuration options for a running Nova service without having to restart it.  (This option is available for a limited number of options, such as debugging, but the framework is in place for this to expand.)
  4. Placement API gives you more visibility into and control over resources such as Resource providers, Inventories, Allocations and Usage records.
  5. Cells v2, which enables you to segregate your data center into sections for easier manageability and scalability, has been revamped and is now feature-complete.
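
A sketch of how “get me a network” looks from the CLI, assuming a client new enough to support the corresponding Nova API microversion (flavor and image names are examples):

# boot a VM and let Nova pick or create the network (sketch)
nova boot --flavor m1.tiny --image cirros --nic auto myvm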

Network (Neutron)

  1. 802.1Q tagged VM connections (VLAN aware VMs) enables VNFs to target specific VMs.
  2. The ability to create VMs without IP Address means you  can create a VM with no IP address and specify complex networking later as a separate process.
  3. Specific pools of external IP addresses let you optimize resource placement by controlling IP decisions.
  4. OSProfiler support lets you find bottlenecks and troubleshoot interoperability issues.
  5. No downtime API service upgrades

Storage (Cinder, Glance, Swift)

Cinder

  1. Microversions let developers add new features that you can access without breaking the main version.
  2. Rolling upgrades let you update to Newton without having to take down the entire cloud.
  3. enabled_backends config option defines which backend types are available for volume creation.
  4. Retype volumes from encrypted to not encrypted, and back again after creation.
  5. Delete volumes with snapshots using the cascade feature rather than having to delete the snapshots first (see the example after this list).
  6. The Cinder backup service can now be scaled to multiple instances for better reliability and scalability.
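
A sketch of the cascade delete from the CLI (the volume ID is a placeholder):

# delete a volume and its snapshots in one step (sketch)
cinder delete --cascade <volume-id>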

Glance

  1. Glare, the Glance Artifact Repository, provides the ability to store more than just images.
  2. A trust concept for long-lived snapshots makes it possible to avoid errors on long-running operations.
  3. The new restrictive default policy means that all operations are locked down unless you provide access, rather than the other way around.

Swift

  1. Object versioning lets you keep multiple copies of an individual object, and choose whether to keep all versions, or just the most recent.
  2. Object encryption provides some measure of confidentiality should your disk be separated from the cluster.
  3. Concurrent bulk-deletes speed up operations.

Other core projects (Keystone, Horizon)

Keystone

  1. Simplified configuration setup
  2. PCI-DSS support for password configuration options
  3. Credentials encrypted at rest

Horizon

  1. You can now exercise more control over user operations with parameters such as IMAGES_ALLOW_LOCATION, TOKEN_DELETE_DISABLED, LAUNCH_INSTANCE_DEFAULTS
  2. Horizon now works if only Keystone is deployed, making it possible to use Horizon to manage a Swift-only deployment.
  3. Horizon now checks for Network IP availability rather than enabling users to set bad configurations.
  4. Be more specific when setting up networking by restricting the CIDR range for a user private network, or specify a fixed IP or subnet when creating a port.
  5. Manage Consistency Groups.

Containers (Magnum, Kolla, Kuryr)

Magnum

  1. Magnum is now more about container orchestration engines (COEs) than containers, and can now deploy Swarm, Kubernetes, and Mesos.
  2. The API service is now protected by SSL.
  3. You can now use Kubernetes on bare metal.
  4. Asynchronous cluster creation improves performance for complex operations.

Kolla

  1. You can now use Kolla to deploy containerized OpenStack to bare metal.

Kuryr

  1. Use Neutron networking capabilities in containers.
  2. Nest VMs through integration with Magnum and Neutron.

Additional projects (Heat, Ceilometer, Fuel, Murano, Ironic, Community App Catalog, Mistral)

Heat

  1. Use DNS resolution and integration with an external DNS.
  2. Access external resources using the external_id attribute.

Ceilometer

  1. New REST API that makes it possible to use services such as Gnocchi rather than just interacting with the database.
  2. Magnum support.

Fuel

  1. Deploy Fuel without having to use an ISO.
  2. Improved life cycle management user experience, including Infrastructure as Code.
  3. Container-based deployment possibilities.

Murano

  1. Use the new Application Development Framework to build more complex applications.
  2. Enable users to deploy your application across multiple regions for better reliability and scalability.
  3. Specify that when resources are no longer needed, they should be deallocated.

Ironic

  1. You can now have multiple nova-compute services using Ironic without causing duplicate entries.
  2. Multi-tenant networking makes it possible for more than one tenant to use ironic without sharing network traffic.
  3. Specify granular access restrictions to the REST API rather than just turning it off or on.

Community App Catalog

  1. The Community App Catalog now uses Glare as its backend, making it possible to more easily store multiple application types.
  2. Use the new v2 API to add and manage assets directly, rather than having to go through gerrit.
  3. Add and manage applications via the Community App Catalog website.