The world of application packaging and deployment can be viewed as a spectrum: at one end, hand-crafted installation and run scripts on physical infrastructure; at the other, fully containerized applications making the most of cloud-native services. Since most organisations don't start out fully cloud-native or containerized, a commonly heard phrase for increasing cloud adoption is "lift and shift". This is where applications are not re-architected, but taken straight to the cloud from an in-house deployment scenario. The main advantages of "lift and shift" are speed of cloud adoption and a reduction in physical infrastructure costs. However, it is worth mentioning that this pattern often results in application deployments with a higher operational cost than those purpose-built for the cloud. In the longer term, this can be offset by making more use of cloud-native services.
In this post, the advantages of Habitat for application packaging will be explored by deploying the same application across several different contexts, including:
- Bare metal or standalone VM
- Running with docker-compose
- Deploying to a Habitat-enabled Kubernetes cluster
- Deploying to a Habitat-enabled Kubernetes cluster using a cloud-native database
There is extra effort required in packaging applications with Habitat. However, what you get in return is the ability to deploy almost anywhere with relative ease at a later stage. From installing your application on a new physical box to choosing an entirely new cloud provider, the process becomes simpler with Habitat-packaged applications.
Setting the Scene (or Table)
For this demonstration, we will be using Table Setting, a single-file Python web application that exposes a REST API for managing forks. A Swagger user interface is served at the root URL to provide some interactivity. For those interested, the application makes use of Flask, Flask-RESTPlus, SQLAlchemy, Flask-Marshmallow, and more. These frameworks are used regularly by many real-world Python applications and websites.
Table Setting can run with different backend databases. Here, we will use SQLite for an in-memory or file-based database, and MySQL for a more realistic production example. Whilst Table Setting is clearly not a production-ready application, it provides a realistic application-plus-database deployment scenario that could apply in many real-world cases. Since we are looking at a "lift and shift" example, we don't want to re-architect our application. Therefore, following best practice, Table Setting uses environment variables to convey configuration changes.
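In practice, this kind of configuration might be conveyed as follows (the variable names here are illustrative assumptions, not taken from the Table Setting repository):

```shell
# Hypothetical configuration variables -- names are assumptions for illustration
export TABLE_SETTING_DB=mysql          # "sqlite" or "mysql"
export TABLE_SETTING_DB_HOST=localhost # only used for the MySQL backend
export TABLE_SETTING_DB_PORT=3306
```

Because the configuration lives entirely in the environment, the same application artifact can be pointed at a different database without any code changes.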
Running Table Setting Locally
Let's explore how to run Table Setting locally before getting started with Habitat. As a prerequisite, Python 3.7 should be installed. This quickly brings us to the downside of shipping anything other than a single executable artifact: disparity can easily creep in! For example, there are two major versions of Python:

- 2.x, now seen as legacy, and
- 3.x, which we will use here.

Choosing Python 3.7 affects how we run Table Setting because we require the correct `python` executable on our `PATH`. Assuming that is the case, Table Setting can be run via:
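A minimal sketch of those steps, assuming the repository URL, a standard `requirements.txt`, and the entry-point file name (all assumptions; check the repository README for the exact commands):

```shell
# Clone the repository (URL assumed for illustration)
git clone https://github.com/example/table-setting.git
cd table-setting

# Create and activate a Python 3.7 virtual environment
python3.7 -m venv venv
source venv/bin/activate

# Install dependencies and start the application (file name assumed)
pip install -r requirements.txt
python table_setting.py
```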
Navigate to http://localhost:5000 to see the Table Setting Swagger UI:
Feel free to try creating some forks! Standard CRUD operations (create, read, update, delete) are all available. By default, the application runs with a clean SQLite in-memory database that is wiped on restart.
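The same CRUD operations can also be exercised from the command line; a quick sketch with `curl` (the `/forks` endpoint path and payload fields are assumptions based on the UI):

```shell
# Create a fork (endpoint and payload fields are assumptions)
curl -X POST http://localhost:5000/forks \
  -H "Content-Type: application/json" \
  -d '{"name": "salad fork"}'

# List all forks
curl http://localhost:5000/forks
```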
A user running even this simple application locally is perhaps unnecessarily exposed to several things:
- Source control with `git`
- Python language installation
- Virtual environment setup
- Python packaging with `pip`
- Internal application structure, e.g. where to find the thing to run
A similar list could be formulated for applications written in other languages. In any case, this complicates making any changes at a later stage, e.g.:
- Switching to a MySQL database backend
- Running on a new machine, or
- Moving to a (new) cloud provider or service
Packaging Table Setting with Habitat
To see what we need to add to this application to package it with Habitat, let's look in the `habitat` directory:

- The `plan.sh` file defines how to build the package and declares all buildtime and runtime dependencies, implementing steps similar to those taken when running locally.
- `hooks` are scripts that control how the application is initialized on startup and run.
- `default.toml` defines the base configuration. As previously mentioned, Table Setting explicitly depends on a known-good Python version, and environment variables are used to convey runtime configuration. For example, an environment variable can be set to specify which backend database Table Setting runs against.
Let's concentrate for now on creating deployment artifacts with Habitat. The Habitat Studio is a clean-room for building applications. Below we use the Studio to create and export different package formats:
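A sketch of that Studio workflow (the origin name `your_origin` is a placeholder):

```shell
# Enter a clean-room build environment from the repository root
hab studio enter

# Inside the Studio: build the package defined in habitat/plan.sh
build

# Export the most recent build as a tarball and as a Docker image
hab pkg export tar your_origin/table-setting
hab pkg export docker your_origin/table-setting
```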
We also use the Habitat-maintained `core/mysql` package for some deployments, so let's install and export that locally as well:
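Installing the package locally also prints its fully qualified version, which we reuse in the export step:

```shell
# Install the latest stable core/mysql package locally
hab pkg install core/mysql
```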
We now use the package version reported by the installation command to export a Docker image locally:
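For example, if the install reported an identifier like the one below (a placeholder; use the identifier printed by your own `hab pkg install` run), the export would be:

```shell
# Export the installed package version as a Docker image
# (the fully qualified identifier is a placeholder)
hab pkg export docker core/mysql/5.7.21/20190115123456
```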
Note, however, that you can simply run `hab pkg export docker core/mysql` without installing first, and you'll get the latest stable package automatically.
For reference, all of the plans built and maintained by the Habitat Core Team are available here.
The resulting artifacts are all we need for the deployment scenarios below.
Bare Metal or Standalone VM Deployment
Let's assume we want to deploy Table Setting to an Ubuntu machine and use a file-based SQLite database backend. In a real-world setup, the versioned `tar` deployment file might be stored in a dedicated artifact repository, but here we will assume it is available locally on the target machine.
Relying on the Habitat package naming convention, we can run the package as follows:
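A sketch of running the tar export on the target machine, assuming a package named `your_origin/table-setting` (the origin, version, and file name below are placeholders):

```shell
# The tar export contains a /hab directory with the package and the hab binary,
# so extract it at the filesystem root
sudo tar xzf your_origin-table-setting-0.1.0-20190301120000.tar.gz -C /

# Start the Supervisor, which loads and runs the Table Setting service
sudo /hab/bin/hab sup run your_origin/table-setting
```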
Running this on a locally running Ubuntu VM looks as follows:
This is arguably simpler than the steps in the previous section: we have already decoupled much of the package's internals from the deployment process. Although some knowledge is required to run the `tar` Habitat package locally, the same set of steps would apply to any Habitat package. Now let's see what else we can do.
Running with Docker Compose
For this case, let's take a look at the Docker Compose manifest in the root of the Table Setting repository:
The `core/mysql` image previously exported is used as-is, with some additional environment setup. For this we use `HAB_MYSQL` and add the corresponding settings to the environment variables expected by the Table Setting application.
Run this locally from the root of the repository via the following command:
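With both images available locally, this is a single command:

```shell
# Start the Table Setting and MySQL containers defined in docker-compose.yml
docker-compose up
```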
We can now see the application running on port `8000` against a MySQL database backend:
At first this might seem like just another way of running locally, but having a working `docker-compose` setup makes it easy to deploy to, for example, AWS ECS; see this post for more details.
Prerequisites for Kubernetes Cluster Deployment
Transitioning to a Kubernetes cluster at this point might seem like a big leap, but to our advantage we have already exported the required Habitat package artifacts.
In the next two sections we will deploy to Google Kubernetes Engine (GKE) with the Habitat Operator already installed. Similarly to the steps described here, we will upload our artifacts to Google Container Registry (GCR) so they can be conveniently referenced in our manifests. As an example, the following commands would upload our exported Table Setting image to GCR:
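A sketch of those commands (the project ID and origin name are placeholders):

```shell
# Authenticate Docker with GCR (one-time setup)
gcloud auth configure-docker

# Tag and push the exported image (project ID and origin are placeholders)
docker tag your_origin/table-setting gcr.io/your-project-id/table-setting
docker push gcr.io/your-project-id/table-setting
```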
It is also worth noting that although GCP was chosen for this post, the same artifacts could be deployed in the same way to other cloud providers' container registries and Kubernetes clusters, simply by supplying the appropriate deployment manifest files.
Deploying to a Habitat-enabled Kubernetes Cluster
The manifest below deploys the Habitat-managed MySQL and Table Setting images uploaded to Google Container Registry. The same environment variables as before are set to configure the database:
As a future improvement to the above, configuration could instead be conveyed via Habitat's `user.toml` mechanism; more details on that here.
Run the following to deploy this manifest on our Habitat-enabled Kubernetes cluster:
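Assuming the manifest is saved as `table-setting.yml` (the file name is a placeholder):

```shell
kubectl apply -f table-setting.yml
```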
Next, wait for an external IP address to be assigned:
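For example:

```shell
# Watch the service until EXTERNAL-IP changes from <pending> to an address
kubectl get services --watch
```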
Navigating to this external IP shows the expected UI:
From here we could explore setting up a cluster of MySQL databases as described here, or alternatively look at becoming more cloud-native.
Deploying to a Habitat-enabled Kubernetes Cluster Using a Cloud-native Database
Here we will demonstrate using the Kubernetes Service Catalog and the Google Cloud Platform Service Broker, an implementation of the Open Service Broker API, to provision a Cloud SQL (MySQL) instance instead of relying on our own image from the previous section.
In addition to the previous Kubernetes prerequisites, a few more steps are required to use Cloud SQL. Thankfully these are described in detail here. For those interested, stopping at the end of Step 3.2 and replacing the service account and web container image details in the manifest below should allow you to reproduce this setup. The manifest now looks like this and honours the `cloud-mysql` namespace used in the Kubernetes Engine sample:
Deploy the application via:
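Assuming the manifest is saved as `table-setting-cloudsql.yml` (the file name is a placeholder):

```shell
# The manifest targets the cloud-mysql namespace used in the GKE sample
kubectl apply -f table-setting-cloudsql.yml
```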
As in the previous example, we wait until an external IP address is assigned:
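This time the service lives in the `cloud-mysql` namespace:

```shell
kubectl get services --namespace cloud-mysql --watch
```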
Navigating to our very familiar UI, let's create a fork:
Finally, let's use the cloud shell in GCP UI to explore the database and confirm things are working as expected:
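A sketch of that check, assuming the Cloud SQL instance name and table name (both placeholders):

```shell
# Connect to the instance from Cloud Shell (instance name is a placeholder)
gcloud sql connect table-setting-db --user=root

# Then, at the mysql> prompt, inspect the data, e.g.:
#   USE table_setting;
#   SELECT * FROM fork;
```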
So we're now successfully making use of a Cloud SQL database with our Habitat-managed application. Congratulations if you made it this far!
Other Habitat Features
At this point it would be remiss not to mention the many complementary Habitat features that were skipped over above: we didn't touch on the Supervisor, Builder, the Depot, package updates, binds, or exports... Thankfully there are plenty of blog posts and other documentation to remedy that!
In this post we explored several different deployment scenarios for a Habitat-packaged application, from bare metal to running on a Kubernetes cluster with a cloud-based database. Whether your organisation is staying on-premise, in the middle of lifting and shifting, or even expanding to new cloud providers, hopefully this has shown how Habitat can make packaging and deployment easier. Thanks for reading!
- Ask and answer questions on the Habitat forums
- Chat with the Habitat Community on Slack
- Learn more about Habitat