Habitat helps you build, deploy, and manage your applications. To show where Habitat fits, this page explores how it integrates with, complements, or overlaps with other popular software.
Table of Contents
- Habitat and Provisioning Tools
- Habitat and Containers
- Habitat and Container Orchestration
- Habitat and Continuous Integration
- Habitat and Configuration Management
Habitat and Provisioning Tools
Examples: Terraform and CloudFormation
Provisioning tools like Terraform or CloudFormation enable you to write a configuration file to manage infrastructure resources. The configuration file is used along with a CLI tool to create, read, update, and delete infrastructure resources in a declarative way. Habitat is not a provisioning tool and works well with the provisioning tool of your choice.
Provisioning tools allow you to automate the installation and configuration of the Habitat Supervisor, along with loading any applications and services you need to run. The Terraform Habitat Provisioner provides a Terraform native method of installing the Habitat Supervisor and managing Habitat services. The Habitat Operator provides a native method of auto-managing Habitat services on Kubernetes. For any other provisioners, you can write your own script and include it in your automated provisioning. Visit the Using Habitat section of the docs to find more details about configuring the Habitat Supervisor and Habitat services.
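As an illustration, the Terraform Habitat Provisioner can install the Supervisor and load a service as part of creating a compute resource. A minimal sketch, in which the AMI, instance type, and the core/nginx service are placeholder assumptions:

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.small"

  # Installs the Habitat Supervisor on the new instance,
  # then loads the named service under it.
  provisioner "habitat" {
    use_sudo     = true
    service_type = "systemd"

    service {
      name     = "core/nginx" # example service; substitute your own package
      topology = "standalone"
    }
  }
}
```

Other provisioning tools, such as CloudFormation, can achieve the same result by running your own Supervisor installation script as part of instance bootstrap.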
Habitat and Containers
Containers enable you to build an immutable snapshot of your runtime environment, including your operating system, system libraries, application libraries, and application. The container is built with a CLI tool, and then pushed to a container-specific artifact repository, known as a container registry. Habitat is not a container format and exports your application to the container format of your choice.
Habitat builds more secure containers by exporting your application and its runtime dependencies directly into the container. When you build your application with Habitat, your application takes ownership of the entire toolchain of its runtime dependencies. As a result, you no longer have to rely on a large operating system and unnecessary system libraries. This lets you include only the binaries your application actually uses inside your container, which can decrease your container size. By eliminating the need for a large operating system, you also avoid shipping binaries that an attacker could exploit, which further increases the security of your container. Visit the Running Habitat Containers section of the docs for more details about how containers are built with Habitat.

Finally, Habitat's HTTP API allows all of your application's runtime dependencies to be audited at any time. When a new vulnerability is disclosed, the HTTP API makes it easy to programmatically inspect and audit the entire toolchain of your runtime environment without needing to know how your containers were built in the first place.
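For example, the Supervisor's HTTP gateway (port 9631 by default) exposes the loaded services and their package metadata as JSON, so an audit can be scripted against running nodes. A hedged sketch; the host and the exact jq filter are illustrative and depend on your Supervisor version's response schema:

```shell
# Query a Supervisor's HTTP gateway for every loaded service,
# then list each service's fully qualified runtime dependencies.
curl -s http://localhost:9631/services \
  | jq -r '.[] | .pkg.ident, (.pkg.deps[] | "  \(.origin)/\(.name)/\(.version)/\(.release)")'
```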
If your situation requires it, Habitat makes it simple to switch between containerized and non-containerized workloads, because Habitat packages depend only on the kernel version of your operating system (Linux: kernel 2.6.32 or later; Windows: Windows Server 2008 R2 or later, or Windows 7 or later). You can take the same .hart file you export to a Docker container and run it on a virtual machine or bare metal. By requiring only the kernel, Habitat lets you switch container formats, or move to non-containerized workloads, without significant rework.
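A command sketch of that workflow; the origin, package name, and .hart filename are placeholders:

```shell
# Build once in the Studio, producing a .hart artifact in ./results
hab pkg build .

# Option 1: export the same artifact to a Docker image
hab pkg export docker ./results/myorigin-myapp-1.0.0-20200101120000-x86_64-linux.hart

# Option 2: install and run the same artifact directly on a VM or bare metal
sudo hab pkg install ./results/myorigin-myapp-1.0.0-20200101120000-x86_64-linux.hart
sudo hab svc load myorigin/myapp
```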
Habitat and Container Orchestration
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, with declarative configuration and automation. Applications run within Docker containers, grouped into pods; Kubernetes provides the control plane and API layer to schedule those pods.
Habitat is not a scheduling tool, and it does not schedule container resources such as replicas, deployments, firewalls, networking, or geo-location (affinity). Instead, Habitat's responsibility is the service running inside the container. The Habitat Supervisor provides topologies, application binding, ring encryption, dynamic configuration, and other features not present in Kubernetes.
Since Habitat and Kubernetes can both manage services, using the Habitat Operator with Kubernetes lets you use Habitat's runtime features in a Kubernetes-native way. Where Habitat and Kubernetes overlap, the Habitat Operator defers control to Kubernetes.
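With the operator installed, a Habitat service is declared as a custom resource. A minimal sketch, with field names following the operator's published v1beta1 examples; the resource name and image are placeholder assumptions:

```yaml
# A Habitat custom resource: the operator runs this image under
# the Supervisor and manages it in a Kubernetes-native way.
apiVersion: habitat.sh/v1beta1
kind: Habitat
metadata:
  name: example-habitat    # placeholder name
spec:
  image: myorigin/myapp    # placeholder Habitat-exported Docker image
  count: 1                 # replica count is deferred to Kubernetes
  service:
    topology: standalone
```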
Habitat and Continuous Integration
Continuous integration allows you to build, test, and deploy your code using CLI tools and plugins. Habitat includes the Habitat Studio, which lets you run interactive builds on your developer workstation, or non-interactive builds on your continuous integration server. Your continuous integration server can also call the Habitat CLI to promote your Habitat packages to different channels, enabling your applications to update themselves. Habitat is not a continuous integration server; instead, it simplifies the build and promotion processes your continuous integration server performs.
The Habitat Studio provides a clean-room build environment for your application. Builds that occur on a developer's workstation and builds on a continuous integration server therefore proceed in the same manner. Developers no longer need to worry about entire classes of "it works on my box" problems, and build engineers no longer need to create unique, difficult-to-maintain worker nodes for continuous integration servers. Instead, the Habitat plan.sh file contains all the information needed to build the entire application: dependency management, runtime environment binaries, packaging, and application lifecycle hooks. With the Habitat Studio, your continuous integration server can focus on what it does well, instead of managing custom plugins and their potential conflicts.
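A minimal plan.sh sketch showing how one file carries dependency management, build, and install logic; the origin, package name, and dependency list are placeholders (lifecycle hooks live alongside the plan in a hooks/ directory):

```bash
pkg_name=myapp                        # placeholder package name
pkg_origin=myorigin                   # placeholder origin
pkg_version="1.0.0"
pkg_deps=(core/glibc)                 # runtime dependencies
pkg_build_deps=(core/gcc core/make)   # build-time-only dependencies

do_build() {
  make
}

do_install() {
  install -D myapp "${pkg_prefix}/bin/myapp"
}
```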
Your continuous integration server can promote a Habitat package (a .hart file) to a channel by calling the Habitat CLI. This promotion method allows you to deploy a new version of your application in a pull-based manner by using the Habitat Supervisor. Because this promotion process can be invoked non-interactively through the Habitat CLI, you can manage your deployments using your existing tooling. If you choose, you can also do this promotion process manually. More complex application environments can also invoke the promotion process using a scheduling tool or provisioning tool to help manage infrastructure resources in addition to promoting Habitat packages.
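A command sketch of the promotion flow; the fully qualified package identifier is a placeholder:

```shell
# Promote a fully qualified package to the "stable" channel,
# either manually or from a CI job.
hab pkg promote myorigin/myapp/1.2.3/20200101120000 stable

# Supervisors that loaded the service against that channel pick up
# the newly promoted package on their next update check.
sudo hab svc load myorigin/myapp --channel stable --strategy at-once
```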
Habitat and Configuration Management
Configuration management tools allow you to write configuration files in a declarative language to manage a server. These tools focus on building working servers by installing and configuring system settings, system libraries, and application libraries before an application is installed on the server. Habitat focuses on the application first, instead of the server. Habitat builds and packages your application's entire binary toolchain, including the system libraries, application libraries, and runtime dependencies necessary for your application to function. As a result, Habitat can replace many of the tasks configuration management tools perform, such as installing system binaries, installing application-dependent libraries, or templating configuration files.
Configuration management tools perform tasks at run time by converging resources. Their value comes from this convergence process: checking the existing state of a server and fixing it if it does not match the intended state. Because convergence modifies resources at runtime, it can produce surprising and complex runtime errors. In addition, since environments are often mutable and unique, server automation is maintained out-of-band from application development, creating conflict between application developers and software reliability engineers. Habitat avoids these classes of errors entirely by shifting this work to build time, and by creating an atomic package of an application's binaries, application lifecycle hooks, and configuration files. Packaging the automation with the application allows application developers and software reliability engineers to work more closely together.
Habitat is not a full replacement for configuration management tools on mutable infrastructure. Instead, it frees configuration management tools to focus on system-level tasks for virtual machines and bare metal, such as kernel tuning, system hardening, and compliance remediation. Habitat can then take over application automation, which significantly reduces complexity for both infrastructure-focused and application-focused automation.
Habitat can make it easier to run your existing configuration management tool. You can create a Habitat package of your configuration management tool's agent and/or dependencies, and run it on your existing mutable infrastructure. The Habitat Supervisor's responsibility is to update your configuration management tool's agent, while your configuration management tool can still perform its normal tasks.
Habitat can provide an easier transition from virtual machine or bare metal workloads to containers, without needing to rewrite a monolithic application into microservices all at once. In this scenario, you can run the Habitat Supervisor on your existing virtual machine or bare metal infrastructure as you migrate away from your configuration management tool. Then, when you're ready, you export your application to the container format of your choice using the Habitat Studio. While you migrate your applications and services, the Habitat Supervisor runs on your existing mutable infrastructure, and runs your existing configuration management tool. New packages that do not require configuration management can also run under the Habitat Supervisor on your existing mutable infrastructure. As a result, you can continue to verify the working state of your application as you incrementally migrate your services. This approach provides an alternative to the "all-or-nothing" migration many teams are faced with when moving workloads to containers.