
Will S


Learn OpenStack by Example: Introduction

My whole career up to this point has been built around designing and developing applications for offline or isolated networks, working with severely outdated operating systems where no external dependencies were allowed. So when I moved into a new role working on cloud deployments, I took it as an opportunity to learn cloud development and deployments. It didn't take me long to run into the headaches that come with working on these platforms.

The biggest of these were:

  • Vendor lock-in: If you write an application and deploy it to a platform, the code has to be tailored to that platform. If you're told you need to migrate your code to a different platform (which happens more often than you'd realize), you have to refactor all of that code.
  • XaaS (Everything as a Service): I might ruffle a few feathers with this, but hear me out. As an application developer, I want to deploy various nodes that interact with each other. On a platform offering DBaaS or STaaS (for storage/data), PaaS (for computational nodes/servers), and FaaS (for handling IoT/event-driven processes), each of these is deployed from its own cluster, which means each is accessible through the public network... which means it's your responsibility to secure every one of them properly (which, honestly, a lot of people don't).

While searching for solutions and documentation on the various problems I came across, I kept seeing references to OpenStack, and it got my curiosity going. What is OpenStack? What services does it offer, and who owns it? How do I learn to use it? What are its costs and limitations?

So, what is OpenStack?

[Image: OpenStack components]

Without going into too much history or detail, OpenStack is an open-source suite of tools and components that, when deployed together, can create your own dedicated cloud environment.

And what is a Cloud again?

Just to clear this one up, think of a cloud as a virtual data warehouse of micro components.

[Image: Supermicro server blade]

When I think of servers hosting an application, I think of 3 levels of hosting resources:

Baremetal: (server blade pictured above) These can be monsters of machines costing hundreds of thousands of dollars. We're all pretty familiar with installing software or deploying applications on a computer, and it's no different on these servers. The problem comes when you want to fully utilize the resources on them. You start deploying multiple software packages and applications, but then you realize that one application can access the resources of another, or that the dependencies of one application conflict with the dependencies of another. This becomes a huge security and design headache.

[Image: Virtual machines on baremetal]

Virtual Machine: These are "virtual" in the sense that they are completely software-based, but they still have the same resources and complexity a full desktop or server can hold. It's possible to install multiple virtual machines on a server, and the applications installed on each are fully contained, neither conflicting nor interacting with applications on the others. You can also implement virtual networks that connect them and provide a barrier controlling how the host system or the internet can access them.

Virtual machines are a great improvement for deploying applications, since you can create each one to suit the needs of its application. But because they replicate a full-sized machine, they also carry some of the operating system and dependency overhead of a baremetal machine (with some optimizations for these smaller environments). Imagine an old laptop you want to use as a home server: it can barely run Windows 10, and you want to run a second computer inside it at the same time?! Regardless of how big your server is, you'll eventually reach a limit on how many virtual machines can run on one machine.

Containers

Container systems like Docker and LXC were created to solve this size issue. They allow for bare-minimum, "pre-baked" operating system (OS) images (or installable OS images with all the application dependencies and requirements pre-installed) without the extra resources that virtual machines require: just enough to do what the application needs to do.

Where they can often fall short is in security and stability, since containers are essentially programs that run within a virtual space on your computer. They are so minimal that a malicious image creator can easily include a process that scans your system for other containers and gains access to their environments. The other problem is that something in a container node can fail so badly that it brings the whole host system down with it. These are risks you don't want when creating production-quality, secure applications.

NOTE: Don't get me wrong, I love containers. They make deploying applications so easy, but I also need to understand where they may struggle.
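
To make the "containers are just processes" point concrete, here's a minimal sketch using the Docker SDK for Python (pip install docker). It assumes a local Docker daemon is running; "alpine" is just a common, tiny public image used as an example:

```python
# A small illustration that a container is just an isolated process on the host.
# Assumes a local Docker daemon and the Docker SDK for Python (pip install docker);
# "alpine" is simply a common, tiny public image chosen for the example.
import docker

client = docker.from_env()

# Run a throwaway Alpine container and capture its output. From the host's
# point of view, this is simply another process being started.
output = client.containers.run("alpine", "uname -a", remove=True)

# The container reports the *host's* kernel, because containers share the
# host kernel instead of booting their own operating system.
print(output.decode().strip())
```

That shared kernel is exactly why the isolation and failure concerns above exist: there is no full machine boundary between the container and its host.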

[Image: Containers vs virtual machines]

Many operating system developers are finding a middle ground by creating secured yet minimal images that can be hosted on minuscule, "tiny", or even "micro"/"nano" sized virtual machine instances. These are essentially micro virtual machines a little bigger than a container, but with the segmentation and complexity of a virtual machine. This opens the door to having hundreds of containerized machines running independently of each other. They can each work on a single task, or work together as a large mesh of servers providing the resources for a single application.

[Image: Netflix microservice data flow]

Looking back at the scale of baremetal, these systems (either containers or virtual machines) only sip at the available resources, allowing you to create new instances quickly and at any time. Because they're all software-based, you can start and stop any number of them with the click of a button, they can be quickly duplicated to relieve a processing bottleneck, and they can all be contained in one server or spread across a whole warehouse of servers... thus the birth of the cloud.

Why can't I just use Kubernetes (a container orchestration solution) for my applications?

That's a big question, and I could write a whole post just on that. To get into the specific details, remember that containers are essentially programs on a system, and Kubernetes creates an infrastructure to automate a lot of the tasks around running those containers across multiple systems. Because of these requirements, Kubernetes typically needs at least three base systems (or nodes) just to stand up a single production-grade cluster.

[Image borrowed from learnk8s.io (https://learnk8s.io/how-many-clusters)]

There is also the sizing/scoping problem of a large cluster. You may have a great cluster, but introduce one problem container and it "could" (very rarely, but still possibly) bring down a system; remember, containers are essentially processes running on the host. That could take down a whole cluster if you're not careful. A good design alternative is to create multiple clusters for different applications, but that means setting up another 3+ nodes for each cluster/application. I came across this article by Daniel Weibel at learnk8s.io describing the design considerations and problems faced when deploying large applications in Kubernetes.

This is where cloud environments (like OpenStack) work as an ideal solution to this "problem". Since Kubernetes requires a minimum number of nodes to even start a cluster, and each cluster might have different resource requirements, the cloud environment can create virtual machine nodes to contain the Kubernetes cluster and its infrastructure. I said you must have a minimum number of nodes, but I didn't say how big those nodes had to be. This means we can create 3+ mini virtual machines to set up Kubernetes. We can scale out (horizontally, adding more virtual machines) or scale up (vertically, increasing the size of our virtual machines). On top of that, you can completely isolate the cluster in a single virtual network, meaning you only need to worry about the interface accessing that network. And as this series of posts will show you, OpenStack can be configured on a single machine or across thousands, so you're only limited by your imagination.
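
To give a feel for what "3+ mini virtual machines" means in practice, here's a hedged sketch using the official openstacksdk Python library. The cloud name ("devstack") and the image, flavor, and network names are assumptions; substitute whatever your clouds.yaml and environment actually define:

```python
# A minimal sketch using the official openstacksdk library (pip install openstacksdk).
# The cloud name "devstack" and the image/flavor/network names below are
# assumptions; substitute whatever your clouds.yaml and environment define.
import openstack

conn = openstack.connect(cloud="devstack")

# Look up a small image, flavor, and network so each node stays "mini".
image = conn.image.find_image("ubuntu-minimal")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Three small VMs: enough nodes to stand up a basic Kubernetes cluster.
for i in range(3):
    server = conn.compute.create_server(
        name=f"k8s-node-{i}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)  # block until the node is ACTIVE
    print(f"{server.name} is up")
```

Scaling out is then just a bigger loop count; scaling up is just choosing a bigger flavor.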

How does OpenStack fit into this?

We've all been introduced to the clouds by Google, Amazon, Microsoft, IBM, SAP, etc. If you look at their catalogues, you'll see hundreds of different types of technologies to help host your application. They are helpful, but since everything works "as a service" or has provisioning details unique to the platform, they can be cumbersome, poorly documented (!), or come with a steep learning curve. These providers try to simplify many of the details of working with these components and create new services (as they promote the "no code" approach) so you can create applications without writing a single line of code. If you look closely, though, the underlying technology is often a workaround built on already-available technology, letting these vendors template functionality many users would be building anyway, at the cost of platform lock-in. Sure, they provide free/developer accounts, but there's always a limit. And on top of that, an application built on one platform won't necessarily work on another without some MAJOR refactoring.

OpenStack offers a solution to all of these problems:

  1. Open-source, which means anyone with the knowledge and resources can install their own cloud platform or audit the source code for code quality and security vulnerabilities.
  2. Designed from a standard, open framework. If you have an OpenStack application running with one provider and you need to move to another provider, no sweat! Just re-deploy using the new credentials and you're done (as long as versioning allows); a short code sketch below shows just how little changes. Much of the underlying technology in OpenStack's components is widely used, accepted technology.
  3. Because the cloud itself is fully scalable, the OpenStack team has created a number of different tools to help you develop and test your components (more on this later). These tools can quickly set up a mini cloud on whatever machine you install them on. No free trials, no restrictions, no time limits (except for the limits of your own hardware).
  4. Scale to any number of providers. You can host part of your application on Vexxhost, deploy a set of storage microservices to RackSpace, and scale out compute and HPC/load-balancing systems on Auro. They can all be part of the same application and interact as if they were in the same environment.

This last one is a big one. As the demands of your application grow, you can scale out globally without needing to worry about whether your provider has a data center in the region of highest demand. Just find a provider that hosts OpenStack and scale (or deploy your own OpenStack instance on top of one of the big vendors... cloud in a cloud... inception).

Taking all of these points into consideration, with all the private clouds, public providers, and standard interoperability across environments and regions, we can deploy something bigger than any single cloud provider can offer.
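
To make points 2 and 4 concrete: with openstacksdk, credentials and endpoints live in a clouds.yaml file outside your code, so the same application code can target any OpenStack provider. Here's a hedged sketch; "provider-a" and "provider-b" are placeholder clouds.yaml entries, not real account names:

```python
# A hedged sketch: the same code pointed at two different OpenStack providers.
# "provider-a" and "provider-b" are placeholder clouds.yaml entries (for example,
# a local DevStack and a public OpenStack host), not real account names.
import openstack

def list_servers(cloud_name: str) -> None:
    """Print every server visible to the given clouds.yaml entry."""
    conn = openstack.connect(cloud=cloud_name)
    for server in conn.compute.servers():
        print(f"[{cloud_name}] {server.name}: {server.status}")

# Both clouds speak the same OpenStack APIs, so "migrating" is a matter of
# changing credentials in clouds.yaml, not refactoring application code.
for cloud in ("provider-a", "provider-b"):
    list_servers(cloud)
```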

How do I get it set up?

That's the topic of the next post, but here is a breakdown of the ways you can play with OpenStack at home:

  1. DevStack is a set of convenience scripts that quickly sets up a TEMPORARY OpenStack environment. It is designed for OpenStack developers to test new components and is the primary driver within OpenStack's test suites for their CI/CD process. We will use this as it's faster to set up an environment than the other methods (even if it means we need to re-install after a system reboot).
  2. Kolla Ansible is one of the platform-native, production-grade deployment tools to consider if you want to set up your own cloud (maybe for a home lab or local cloud). Most teams tend to go this route as it's actively maintained, but it's not fully featured yet (I discovered limitations setting up Swift).
  3. Ansible All-In-One (AIO) is another production-grade suite of tools using Ansible for deploying OpenStack. It lets you pre-define various configurations, and Ansible runs the commands on the target systems. These "playbooks" can even be run at regular intervals to ensure nothing drifts from the expected configuration. This was a potential solution for this series of posts, but I had so much difficulty creating a stable environment that I had to ditch it (for now). If someone more familiar with deploying OpenStack could support me, I would prefer this option in the future.
  4. OpenStack Helm is an interesting option as it deploys a full OpenStack cloud in a Kubernetes cluster. My perspective is that it's a bit backwards to set up a physical orchestration solution on top of a software orchestration solution (same with TripleO (OpenStack On OpenStack)), but it could provide some value for creating smaller environments to develop and test in.
  5. PackStack is an installable package group provided by RedHat, available on CentOS and RedHat operating systems. It's very convenient and quick to set up, but it relies on operating systems that are either enterprise-level (RedHat) or being deprecated (CentOS).
  6. OpenStack-Charms is another platform-specific system built on top of Juju/Charms (which I understand to be a platform built on top of Ubuntu Snap technology), and is thus a solution if you want to host on Ubuntu. There are still limitations with this solution, as some OpenStack components are not fully supported yet.

We will be moving forward with DevStack as it's the most widely used and stable option (it's used as part of OpenStack's own test suites) and doesn't lock us into a specific OS/platform.
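
The full install walkthrough is the next post, but as a preview of the payoff: once DevStack is up, a couple of lines of Python confirm the core services are answering. (This assumes the "devstack" entry that DevStack writes to clouds.yaml during setup; adjust if yours differs.)

```python
# A quick sanity check against a freshly built DevStack environment (a sketch;
# "devstack" is the clouds.yaml entry DevStack writes during setup; adjust if
# yours differs).
import openstack

conn = openstack.connect(cloud="devstack")

# If these calls return without an authentication error, Keystone (identity),
# Glance (images), and Nova (compute) are all answering.
print("Images: ", [image.name for image in conn.image.images()])
print("Flavors:", [flavor.name for flavor in conn.compute.flavors()])
```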

How do I learn to use it?

THAT is the tricky bit. I've searched through OpenStack's own repository of exercises, online training platforms, and published books, and found that everything was either too technical for my newbie level, targeted at system administrators pursuing the RedHat Certified Systems Administrator (RHCSA) for OpenStack designation, or outdated and no longer relevant.

That's why I decided to start this series: a way for me to learn, practice, and fail while documenting what I find works for anyone else who's struggling for the same answers.

Goals

If this was enough to intrigue you, please follow me on my journey to learn app development on OpenStack. You should get notifications when updates and new posts are available.

As part of this exercise, I intend to create a web application that can be deployed to my local OpenStack while learning about compute modules, block storage, event messaging, and networking.

I intend to build up my training through this series of posts as follows:

  1. Install and Configure DevStack
  2. Setup the Environment
  3. Design an Application and Create Support Components
  4. Develop the Code
  5. Deploy and Run

Hope you're as excited as I am to try this out and learn something new! And if you have experience and would like to contribute to this series, please reach out with suggestions!!
