Habitat Best Practice Guides

This chapter covers recommended best practices for runtime and buildtime, with a focus on packages that use Habitat Builder for continuous builds.

Running Habitat Linux Containers

Container Setup and Assumptions

When you run hab pkg export docker, you get a Docker image that provides a few things. First, a minimal Linux OS filesystem with just enough configuration (e.g., /etc/passwd, /etc/resolv.conf) to run. Second, the contents of the exported Habitat package and its complete dependency tree, along with a complete Habitat Supervisor installation, unpacked in the /hab/pkgs directory. Finally, an entrypoint script that starts the Supervisor running the exported Habitat package, allowing the container itself to behave as though it were the Supervisor.
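
For example, exporting a package you have already built or installed might look like this (the resulting image name and tag can vary with your export options):

$ hab pkg export docker core/redis
$ docker images core/redis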

On Linux, the Habitat Supervisor normally runs as the root user and starts Habitat services as the pkg_svc_user specified by each service. However, in some deployment scenarios, it is not desirable (or perhaps even possible) to run the Supervisor as root. The OpenShift container platform, for instance, does not run containers as root by default, but as randomly-chosen anonymous user IDs. From version 0.53.0 onward, the Habitat Supervisor can run as an arbitrary user, giving users more flexibility in how and where they use Habitat.

In order to support this in containers and provide maximal flexibility, the contents of the /hab directory are both readable and writable by the root group. When you specify a user to run a container process as, and no matching group can be found in /etc/group, the user's primary group is reported as root. This allows the user to create and populate the /hab/sup directory for managing Supervisor state, as well as the /hab/svc directory, which will contain all the service's state. This is in line with recommendations from OpenShift on how to create containers that can run as a non-root user, but nothing in Habitat's implementation is specific to OpenShift; indeed, all the examples provided below use pure Docker.

Caveats To Running as a Non-Root User

"There's no such thing as a free lunch", as the saying goes, and that holds true here. If the Supervisor is running as a non-root user, any processes that it supervises will be run as the same user; any values that the process might specify via pkg_svc_user and pkg_svc_group are essentially ignored. Furthermore, any files written out by the service during its operation are also owned by that same user.

Details

Strictly speaking, the Supervisor does not care what user it is running as; rather, it uses Linux capabilities to guide its behavior. If the process has the CAP_SETUID, CAP_SETGID, and CAP_CHOWN capabilities, it will be able to run processes as the specified pkg_svc_user and pkg_svc_group (CAP_CHOWN is needed to ensure that the service processes can read and write files within the service's state directories). The Supervisor checks for the presence of these capabilities, and does not rely on having a user ID of 0 or the username root.

Container Deployment Scenarios

Running a Habitat Container as root

For completeness, we'll quickly cover the base case. If you are fine with running your container as root, you can do that directly:

docker run --rm -it core/redis:latest

Here, core/redis:latest would be the image exported from the core/redis Habitat package. The Supervisor will run as normal, with supervised processes running as the desired user.

Running a Habitat Container as a Non-Root User

If you cannot run as the root user, but you are fine with root being the container user's primary group, you can simply specify a user ID to run as. This user need not exist in the container itself, and it's better if it doesn't. Using pure Docker, it might look like this:

docker run --rm -it --user=888888 core/redis:latest

Again, we use our core/redis Habitat package container; the user ID 888888 is simply a number chosen at random (this is how platforms like OpenShift operate). No user inside the container has this ID, meaning that the user will be an anonymous user with root as its primary group. Because of how we generate Habitat containers, this fact ensures that the user has write permissions within the /hab directory.

Due to the current logic around package installation, there is an extra step needed if you would like to have your containerized Supervisors update either themselves or the services they supervise. When installing packages as a non-root user, Habitat will download keys and compressed hart files into the user's ${HOME}/.hab directory, rather than the global /hab/cache directory. You will need to ensure that a user-writable directory is mounted into the container, and specify it as the user's home directory using the HOME environment variable. Using pure Docker with a volume that is accessible by the user, that might look like this:

docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=100m,uid=888888 \
  test_home

docker run --rm -it \
  --user=888888 \
  --mount type=volume,src=test_home,dst=/myhome \
  --env HOME=/myhome \
  core/redis:latest --auto-update --strategy=at-once

This is merely an illustration; use whatever volume management approaches and service update strategies that are appropriate for your container scheduling system and your local deployment.

As illustrated, updates of this kind are completely optional; you may prefer to move update responsibility to your container scheduler and treat your containers as immutable in this regard.

Running a Habitat Container as a Non-Root User in a Non-Root Group

If for whatever reason you do not want your user to be in the root group inside the container, you will need to add some additional volumes in order to create the needed supervisor and service state directories. However, since you will (by definition) not have write permissions on the /hab directory as a whole, your Supervisor will not be able to update either itself or the services it supervises.

To implement this using pure Docker, you could do something like this (the group ID of 999999 was again chosen arbitrarily, as with the user ID):

docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=100m,uid=888888 \
  sup_state

docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=100m,uid=888888 \
  svc_state

docker run --rm -it \
  --user=888888:999999 \
  --mount type=volume,src=sup_state,dst=/hab/sup \
  --mount type=volume,src=svc_state,dst=/hab/svc \
  core/redis:latest

Again, this is just an illustrative example; use the appropriate strategies for your specific circumstances. The key information here is to ensure that both the /hab/sup and /hab/svc directories are writable by the user inside the container.

Running Habitat Windows Containers

Container Base Image

Exported Windows images use microsoft/windowsservercore as their base. This is the equivalent of a minimal Windows Server 2016 Core install, so you should not expect non-default features and roles, such as IIS or Active Directory, to be enabled. Consider using an init hook to install any features needed by your Habitat service.

Container Pull and Startup time

The microsoft/windowsservercore image is approximately 5GB. Due to this large size, you can expect that the first time you run an exported Habitat service, pulling down the image may take several minutes. This wait should only occur on the very first docker run of any Habitat Windows service. Additionally, depending on the Windows host operating system, running the container may also take considerably longer than what one is accustomed to with Linux based containers. This startup time will be highly influenced by the container isolation mode described below.

Windows Containers and Host Kernel Isolation

There are two types of Windows containers and each runs under a different level of kernel isolation.

Windows Server Containers

These containers, like their Linux counterparts, share the host's kernel. You can expect these containers to start fairly quickly, and this is the default container type on Windows Server 2016 hosts.

Hyper-V Containers

Windows Hyper-V containers run inside of a very minimal Hyper-V VM. As a result, they do not share the host's kernel and offer a higher level of security and isolation. The cost of this isolation is that it will take longer for the container to start - perhaps a noticeable delay. Also be aware that the VM is provisioned with a default memory limit of 1 GB. If your service requires more than a gigabyte of memory, you can use the --memory argument with docker run and pass a larger limit.

docker run --memory 2GB -it core/mysql

On a Windows 10 host, Windows containers will always run inside of Hyper-V isolation. Kernel sharing Windows Server containers are only available on Windows Server 2016 hosts. On Windows Server 2016, Windows Server containers are the default container type but you can force docker run to use Hyper-V containers by setting the --isolation argument to hyperv.

docker run --isolation hyperv -it core/mysql

Host Loopback Network

A common container pattern is to forward the container port to a local port and then access the container application by accessing localhost on the forwarded port. With Windows containers, published ports cannot be accessed using localhost. You will instead need to use the IP address of the host or the IP of the individual container to access the application's endpoint.
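
To find an address you can use, query Docker for the container's IP; a minimal sketch, assuming the default nat network on the Windows host and a hypothetical container name of mycontainer:

docker inspect --format "{{ .NetworkSettings.Networks.nat.IPAddress }}" mycontainer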

Container orchestration with Habitat

Habitat packages may be exported with the Supervisor directly into a variety of container formats, but frequently the container is running in a container orchestrator such as Kubernetes or Mesos. Container orchestrators provide scheduling and resource allocation, ensuring workloads are running and available. Containerized Habitat packages can run within these runtimes, with Habitat managing the applications while the runtimes handle the environment surrounding the application (i.e., compute, networking, security).

Kubernetes

Kubernetes is an open source container cluster manager that is available as a stand-alone platform or embedded in several distributed platforms including Google's Container Engine, Tectonic by CoreOS, and OpenShift by RedHat. Habitat and Kubernetes are complementary: Kubernetes focuses on providing a platform for deployment, scaling, and operations of application containers across clusters of hosts while Habitat manages the build pipeline and lifecycle of those application containers.

Habitat Operator

The Habitat Kubernetes Operator is ongoing work to create an operator that leverages Kubernetes API services to create a native and robust integration between the two technologies.

By using the Habitat Operator, you can abstract away many of the low-level details of running a Habitat package in Kubernetes and jump straight to deploying your application, with support for Habitat features like service configuration, binding, topologies, and more.

For more details on the Habitat Operator, please refer to the introductory blog post, follow along on GitHub, and join us in the #kubernetes channel in the Habitat Slack.

Kubernetes exporter

When using the Habitat Operator, you can easily convert packages and run them on your Kubernetes cluster using the Kubernetes exporter:

$ hab pkg export kubernetes ORIGIN/NAME
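
Assuming the exporter writes its manifest to standard output (and using habitat.yml as a hypothetical file name), you could capture and apply it to your cluster with kubectl:

$ hab pkg export kubernetes ORIGIN/NAME > habitat.yml
$ kubectl apply -f habitat.yml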

Bare Kubernetes

Users are not required to use the Habitat Operator. Habitat packages exported as containers may be deployed to Kubernetes through the kubectl command. After using the Docker exporter to create a containerized application, the container may be launched like this example:

$ kubectl run mytutorial --image=myorigin/mytutorial --port=8080

Assuming the Docker image is pulled from myorigin/mytutorial, we are exposing port 8080 on the container for access. Networking ports exposed by Habitat need to be passed to kubectl run as --port options. We can see our deployment with the kubectl get command:

$ kubectl get pods -l run=mytutorial
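
To reach the application from outside the cluster, you can additionally expose the deployment as a service; a minimal sketch (NodePort is just one option and may not suit your cluster):

$ kubectl expose deployment mytutorial --type=NodePort --port=8080
$ kubectl get service mytutorial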

Docker and ACI

Habitat packages can be exported in both Docker and ACI formats (as well as others). Kubernetes currently supports the Docker runtime and integration of the rkt container runtime (an implementation of the App Container spec) is under active development.

Environment variables and Networking

Kubernetes supports passing environment variables into containers, which can be done via the Habitat Operator.

Multi-container Pods

Multi-container pod support through Habitat is still under active development as part of the Habitat Operator.

Azure Container Services (AKS)

Azure Container Services (AKS) is a fully managed Kubernetes service running on the Azure platform. It supports running Habitat packages using the Habitat Kubernetes Operator.

Azure Container Registry (ACR)

Azure Container Registry is a managed Docker container registry service used for storing private Docker container images. It’s a fully managed Azure resource and gives you local, network-close storage of your container images when deploying to AKS. Habitat Builder has a native integration with this service so you can publish your packages directly to Azure Container Registry.

In order to do this you need to create an Azure Service Principal that has Owner rights on your ACR instance. You can do this with the following script, changing the environment variable values to match your environment.

#!/bin/bash
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry
BLDR_PRINCIPAL_NAME=myPrincipalName
BLDR_PRINCIPAL_PASSWORD="ThisIsVeryStrongPassword"
# Create Service Principal for Habitat Builder
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
az ad sp create-for-rbac --scopes $ACR_ID --role Owner --password "$BLDR_PRINCIPAL_PASSWORD" --name $BLDR_PRINCIPAL_NAME
BLDR_ID=$(az ad sp list --display-name $BLDR_PRINCIPAL_NAME --query "[].appId" --output tsv)
echo "Configuration detals for Habitat Builder Principal:"
echo " ID : $BLDR_ID"
echo " Password : $BLDR_PRINCIPAL_PASSWORD"

Note: The unique Service Principal Name (the UUID) should be provided in the Habitat Builder configuration.

Connecting ACR and AKS for Habitat Operator

Since ACR is a private Docker registry, AKS must be authorized to pull images from it. The best way is to create a role assignment on the Service Principal that is automatically created for AKS, granting it Reader access on your ACR instance.

To do this you can use the following script, changing the environment variable values to match your configuration.

#!/bin/bash
AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry
# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)
# Create role assignment
az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID

Habitat Updater

As for any other Kubernetes cluster, you should use the Habitat Kubernetes Updater if you want automatic updates of Habitat packages on AKS.

Amazon ECS and Habitat

Amazon Web Services provides a container management service called EC2 Container Service (ECS). ECS provides a Docker registry, container hosting and tooling to make deploying Docker-based containers fairly straightforward. ECS will schedule and deploy your Docker containers within a Task while Habitat manages the applications.

EC2 Container Registry

EC2 Container Registry (ECR) is a fully-managed Docker registry provided by Amazon Web Services. Applications exported to Docker with hab pkg export docker put the containers into namespaced repositories, so you will need to create these within ECR. For example, if you were building core/mongodb containers you would use the following command:

$ aws ecr create-repository --repository-name core/mongodb

To tag and push the images to ECR you will use your Repository URI (substituting your aws_account_id and region).

$ docker tag core/mongodb:latest aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/core/mongodb:latest
$ docker push aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/core/mongodb:latest
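
Note that your Docker client must be authenticated to ECR for the push to succeed; with a recent version of the AWS CLI, that might look like this (again substituting your aws_account_id and region):

$ aws ecr get-login-password --region ap-southeast-2 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com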

EC2 Compute Service

Once Docker images are pushed to ECR, they may be run on Amazon's ECS within a Task Definition which may be expressed as a Docker Compose file. Here is an example of a Tomcat application using a Mongo database demonstrating using Habitat-managed containers:

docker-compose.yml

version: '2'
services:
  mongo:
    image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/billmeyer/mongodb:latest
    hostname: "mongodb"
  national-parks:
    image: aws_account_id.dkr.ecr.ap-southeast-2.amazonaws.com/mattray/national-parks:latest
    ports:
      - "8080:8080"
    links:
      - mongo
    command: --peer mongodb --bind database:mongodb.default

From the example, the mongo and national-parks services use the Docker images from the ECR. The links entry manages the deployment order of the containers and, according to the Docker Compose documentation, links should create /etc/hosts entries. This does not appear to currently work with ECS, so we assign hostname: "mongodb" instead.

The command entry for the National Parks Tomcat application allows the Habitat Supervisor to --peer to the mongo gossip ring and --bind applies database entries to its Mongo configuration.
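
If you use the ECS CLI, a Compose file like the one above can be brought up on your cluster directly; a minimal sketch, assuming ecs-cli has already been configured with a cluster and credentials:

$ ecs-cli compose --file docker-compose.yml up
$ ecs-cli ps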

Apache Mesos and DC/OS

Apache Mesos is an open source distributed systems kernel, and it also serves as the kernel for Mesosphere's DC/OS platform.

Mesos Containerizers

Mesos has support for containerizers for running commands and applications within isolated containers. Mesos supports Docker and its own Mesos containerizer format. The Mesos containerizer provides lightweight containerization, using cgroups/namespaces for isolation, without requiring a full container image. The hab pkg export mesos command creates a mostly empty base filesystem with the application and the Habitat Supervisor and packages it into a compressed tarball.

Marathon Applications

Marathon is a container orchestration platform for Mesos and DC/OS, handling the scheduling and deployment of applications. Marathon applications support Docker and the Mesos container formats, wrapping them in JSON metadata describing the resources needed to deploy the application. Once the application has been deployed to Marathon, it schedules it across the Mesos cluster and ensures the application is running optimally.

Export to a Mesos container and Marathon application

You can create native Mesos containers from Habitat packages by following these steps:

  1. Create an interactive studio in any directory with the hab studio enter command.

  2. Install or build the Habitat package from which you want to create a Marathon application, for example:

    $ hab pkg install yourorigin/yourpackage
  3. Run the Mesos exporter on the package.

    $ hab pkg export mesos yourorigin/yourpackage
  4. This will create a Mesos container-format tarball in the results directory, and also print the JSON needed to load the application into Marathon. Note that the tarball needs to be uploaded to a download location and the "uris" in the JSON need to be updated manually. This is an example of the output:

    {
      "id": "yourorigin/yourpackage",
      "cmd": "/bin/id -u hab &>/dev/null || /sbin/useradd hab; /bin/chown -R hab:hab *; mount -t proc proc proc/; mount -t sysfs sys sys/;mount -o bind /dev dev/; /usr/sbin/chroot . ./init.sh start yourorigin/yourpackage",
      "cpus": 0.5,
      "disk": 0,
      "mem": 256,
      "instances": 1,
      "uris": [ "https://storage.googleapis.com/mesos-habitat/yourorigin/yourpackage-0.0.1-20160611121519.tgz" ]
    }
  5. Note that the default resource allocation for the application is very small: 0.5 units of CPU, no disk, one instance, and 256MB of memory. To change these resource allocations, pass different values to the Mesos exporter as command line options (defaults are documented with --help).

  6. From the DC/OS web interface, launch the Marathon Service.

    Screen shot of DC/OS Services

  7. Select "Create Application".

    Screen shot of Marathon Applications List

  8. Click on the "JSON Mode" selector and enter the JSON output of the Mesos exporter and click "Create Application".

    Screen shot of Marathon New Application JSON Mode

  9. Marathon will then deploy the application and enter the "Running" status.

    Screen shot of Marathon Application Running
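
As an alternative to the web interface used in steps 6 through 9, you can submit the exporter's JSON output directly to the Marathon REST API; a minimal sketch, where the Marathon URL and the JSON file name are placeholders:

$ curl -X POST http://marathon.example.com:8080/v2/apps \
    -H "Content-Type: application/json" \
    -d @yourpackage-marathon.json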

Debugging

You can get to the output from the running application by clicking on the "Marathon" service from the DC/OS "Services" tab. Select the application and the "Log Viewer" and choose either the "Error" or "Output" to see stderr and stdout respectively. If you have SSH access into the nodes, the Mesos container directories are beneath /var/lib/mesos/slave/slaves.

Screen shot of Debugging a Running Application

Future Enhancements

This is a basic integration; there are many improvements yet to be made. Here are a few examples:

  • Marathon environment variables are not passed into the Habitat package "cmd" yet.
  • Networking ports exposed by Habitat need to be added to the JSON.
  • The Habitat gossip protocol needs to be included as a default exposed port.
  • If Marathon is running the artifact store, support uploading the tarball directly into it.
  • Upload applications directly to the Marathon application API.
  • Marathon supports unpacking several archive formats. Native .hart support could be added directly to Marathon.

Advanced Plan Writing Guide

The following is a best-practice guide to writing a production-quality plan. These best practices are reflected in the requirements for contributing a plan to the Habitat Core Plans.

If you haven't already, a good first step is to read the Developing Packages articles.

A well-written plan consists of well-formed package metadata, plan syntax, callbacks, hooks, and a README, each of which is covered below.

Package Metadata

Each package plan should contain a value adhering to the guidelines for each of the following elements:

  • pkg_description
  • pkg_license (in SPDX format)
  • pkg_maintainer in the format of "The Habitat Maintainers <humans@habitat.sh>"
  • pkg_name (see the "Package Name Conventions" section of this document)
  • pkg_origin must be set to core
  • pkg_source
  • pkg_upstream_url
  • pkg_version must be the complete version number of the software
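
For example, the metadata section of a plan.sh following these guidelines might look like the following (the project, version, URLs, and checksum shown are purely illustrative):

pkg_name=bison
pkg_origin=core
pkg_version=3.0.4
pkg_description="A general-purpose parser generator."
pkg_upstream_url=https://www.gnu.org/software/bison/
pkg_license=('GPL-3.0')
pkg_maintainer="The Habitat Maintainers <humans@habitat.sh>"
pkg_source=https://ftp.gnu.org/gnu/bison/bison-${pkg_version}.tar.xz
pkg_shasum=<sha256 checksum of the source archive>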

Package Name Conventions

Each package is identified by a unique string containing four sub-strings separated by a forward slash (/) called a PackageIdent.

`origin`/`name`/`version`/`release`

The origin, name, and version values of this identifier are user defined by setting their corresponding variable in your plan.sh or plan.ps1 file while the value of release is generated at build-time.
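
For example, a fully-qualified identifier for the bison package might look like this (the version and release shown are illustrative):

core/bison/3.0.4/20180608141313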

The value of name should exactly match the name of the project it represents and the plan file should be located within a directory of the same name in this repository.

Example: The plan for the bison project sets pkg_name=bison and resides in $root/bison/plan.sh.

There is one exception to this rule: additional plans may be defined for a project's past major versions by appending the major version number to the name. The plan file for this new package should be located within a directory of the same name.

Example: the bison project maintains the 2.x line along with their current major version (at time of writing: 3.x). A second plan is created as bison2 and placed within a directory of the same name in this repository.

Packages meeting this exception will always have their latest major version found in the package sharing the exact name of their project. A new package will be created for the previous major version following these conventions.

Example: the bison project releases the 4.x line and is continuing to support Bison 3.x. The bison package is copied to bison3 and the bison package is updated to build Bison 4.x.

Plan syntax

You can review the entire plan syntax guide here.

Please note that the following conditions must be observed for any plan to be merged into core plans (and are important best practices for any plan):

Plan basic settings

You can read more about basic plan settings here. The minimum requirements for a core plan are:

  • pkg_name is set
  • pkg_origin is set
  • pkg_shasum is set
  • pkg_description is set

Callbacks

You can read more about callbacks here. The minimum requirements for a core plan are:

Do's

  • do_prepare() (Invoke-Prepare in a plan.ps1) is a good place to set environment variables and otherwise lay the groundwork for building the software. It is also a good place to apply patches.

Don'ts

  • You should never call exit within a build phase. In a plan.sh, you should instead return an exit code, such as return 1 for failure and return 0 for success. In a plan.ps1 you should call Write-Exception or throw an exception upon failure.
  • If you clone a repo from git, you must override do_verify() to return 0 in a plan.sh, or, if you are authoring a plan.ps1, override Invoke-Verify with an empty implementation.
  • Never use pkg_source unless you are downloading something as a third party.
  • You should never shell out to hab from within a callback. If you think you want to, you should use a utility function instead.
  • You should not call any function or helper that begins with an underscore, for example _dont_call_this_function(). These are internal-only functions that are not supported for external use and will break your plan if you call them.
  • Don't run any code or run anything outside of a build phase or a function.
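
For example, a plan.sh that clones its source from git would skip checksum verification as described in the list above:

do_verify() {
  return 0
}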

Hooks

The Supervisor dynamically invokes hooks at run-time, triggered by an application lifecycle event. You can read more about hooks here.

  • Do not block the thread in any hook other than the run hook. Never call hab or sleep in a hook that is not the run hook.
  • You should never shell out to hab from within a hook. If you think you want to, you should use a runtime configuration setting instead. If none of those will solve your problem, open an issue and tell the core team why.
  • Run hooks should (see the sketch after this list):
    • Redirect stderr to stdout (e.g. with exec 2>&1 at the start of the hook)
    • In a Linux-targeted hook, call the command to execute with exec <command> <options> rather than running the command directly. This ensures the command is executed in the same process and that the service will restart correctly on configuration changes.
    • If you are running something with a pipe, exec won't work.
  • Attempting to execute commands as the root user or trying to do sudo hab pkg install is not good practice.
  • Don't edit any of the Supervisor-rendered templates.
    • You can only write to the /var/, /static/, and /data/ directories. You should only access these via your runtime configuration setting variables.
    • No one should ever edit anything in /hab/ directly.
    • No one should write to anything in /hab/ directly.
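
Following the guidance above, a minimal Linux run hook might look like this sketch (the service binary and its flags are hypothetical; the configuration path comes from Habitat's template data):

#!/bin/sh
exec 2>&1
exec my-service --config "{{pkg.svc_config_path}}/config.toml"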

README

All plans should have a README. Items to strongly consider including:

  • Your name as maintainer and supporter of this plan.
  • What Habitat topology it uses (and the plan should have the correct topology for the technology).
  • Clear, step by step instructions as to how to use the package successfully.
  • What is the best update strategy for different deployments?
  • What are some configuration updates a user can make, or do they always need to do a full rebuild?
  • Documentation on how to scale the service.
  • Instructions on how to monitor the health of the service at the application layer.
  • Can a user simply call the package as a dependency of their application?
  • How does the package integrate into their application?

Iterative Development

To assist in creating new packages, or modifying existing ones, the Supervisor has an option to allow you to use the configuration directly from a specific directory, rather than the one it includes in the compiled artifact. This can significantly shorten the cycle time when working on configuration and application lifecycle hooks.

Build the plan as you normally would. When you start the Supervisor, pass the name of the directory with your plan inside it:

$ hab start core/redis --config-from /src

The Supervisor will now take its configuration and hooks from /src, rather than from the package you previously built. When the configuration is as you want it, do a final rebuild of the package.
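
For example, if /src contains your plan, the Supervisor will read the templated configuration and hooks from that directory; a sketch of the layout it expects (assuming the plan keeps its files at the top level of the plan directory):

/src
  plan.sh
  default.toml
  config/
    config.toml
  hooks/
    run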


Binary Wrapper Packages

While Habitat provides the best behavior for applications that can be compiled from source into the Habitat ecosystem, it can also bring the same management benefits to applications distributed in binary-only form.

You can write plans to package up these binary artifacts with minimal special handling. This article covers some tips and tricks for getting this software into Habitat.

Override The Build Phases You Don't Need

A Habitat package build proceeds in phases: download, verification, unpacking (where you would also patch source code, if you had it), build, and finally installation. Each of these phases has default behavior within the build system.

When building binary packages, you override the behavior of phases that do not apply to you. At the very minimum, you must override the do_build and do_install phases, for example:

plan.sh
(...)
do_build() {
  # relocate library dependencies here, if needed -- see next topic
  return 0
}

do_install() {
  mkdir -p "$pkg_prefix/bin"
  cp "$PLAN_CONTEXT/bin/hello_world" "$pkg_prefix/bin/hello_world"
  chmod +x "$pkg_prefix/bin/hello_world"
}

Relocate Hard-Coded Library Dependencies If Possible

On Linux, many binaries hardcode library dependencies to /lib or /lib64 inside their ELF symbol table. Unfortunately, this means that Habitat is unable to provide dependency isolation guarantees if packages are dependent on any operating system's libraries in those directories. These Habitat packages will also fail to run in minimal environments like containers built using hab-pkg-export-docker, because there will not be a glibc inside /lib or /lib64.

Note: On Windows, library dependency locations are not maintained in a binary file's headers. See this MSDN article for a complete explanation of how Windows binaries are located. However, it's typically sufficient to ensure that the dependent binaries are on the PATH. You should make sure to include all dependencies in the pkg_deps of a plan.ps1 to ensure all of their respective DLLs are accessible by your application.

Most binaries compiled in a full Linux environment have a hard dependency on /lib/ld-linux.so or /lib/ld-linux-x86_64.so. In order to relocate this dependency to the Habitat-provided variant, which is provided by core/glibc, use the patchelf(1) utility within your plan:

  1. Declare a build-time dependency on core/patchelf as part of your pkg_build_deps line.
  2. Invoke patchelf on any binaries with this problem during the do_install() phase. For example:

    patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" \
      ${pkg_prefix}/bin/somebinary

  3. The binary may have other hardcoded dependencies on its own libraries that you may need to relocate using other flags to patchelf like --rpath. For example, Oracle Java provides additional libraries in lib/amd64/jli that you will need to relocate to the Habitat location:

    export LD_RUN_PATH=$LD_RUN_PATH:$pkg_prefix/lib/amd64/jli
    patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" \
      --set-rpath ${LD_RUN_PATH} \
      ${pkg_prefix}/bin/java

  4. For more information, please see the patchelf documentation.

If You Cannot Relocate Library Dependencies

In some situations it will be impossible for you to relocate library dependencies using patchelf as above. For example, if the version of glibc the software requires is different than that provided by an available version of glibc in a Habitat package, attempting to patchelf the program will cause execution to fail due to ABI incompatibility.

Your software vendor's support policy might also prohibit you from modifying software that they ship you.

In these situations, you will have to give up Habitat's guarantees of complete dependency isolation and continue to rely on the library dependencies provided by the host operating system. However, you can continue to use the features of the Habitat Supervisor that provide uniform manageability across your entire fleet of applications.

Fix Hardcoded Interpreters

Binary packages often come with other utility scripts that have their interpreter, or "shebang", line (first line of a script) hardcoded to a path that will not exist under Habitat. Examples are: #!/bin/sh, #!/bin/bash, #!/bin/env or #!/usr/bin/perl. It is necessary to modify these to point to the Habitat-provided versions, and also declare a runtime dependency in your plan on the corresponding Habitat package (for example, core/perl).

Use the fix_interpreter function within your plan to correct these interpreter lines during any phase, but most likely your do_build phase. For example:

fix_interpreter ${target} core/coreutils bin/env

The arguments to fix_interpreter are the file (represented here by ${target}) you are trying to fix, the origin/name pair of the Habitat package that provides that interpreter, and the interpreter pattern to search and replace in the target.
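
If several scripts need the same fix, you can loop over them in the relevant build phase; a sketch, where the bin/* glob is illustrative:

for target in "$pkg_prefix"/bin/*; do
  fix_interpreter "$target" core/coreutils bin/env
done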

If you have many files you need to fix, or the binary package automatically generates scripts with hardcoded shebang lines, you may need to simply symlink Habitat's version into where the binary package expects it to go:

ln -sv $(pkg_path_for coreutils)/bin/env /usr/bin/env

This is a last resort as it breaks the dependency isolation guarantees of Habitat.