Containerized development environment on OpenBSD with Podman

Introduction

Recently, I decided to ditch Linux in favor of OpenBSD. The reasons for this move are beyond the scope of this article, and I will leave them for some other time. Thanks to the refinements I have made to my toolbox over the years, switching to OpenBSD was seamless. Almost all tools I use in my day-to-day work are available for both Linux and OpenBSD. However, one Linux-specific feature is missing (and will probably never appear in OpenBSD): a container runtime.

Polish-speaking readers might know that I advocate using containers for development. Before I get roasted for praising container technology, let me describe the problems containers solved for me:

  • Unification of the development environment across the team: since all tools are versioned inside shared containerized environments, there are no conflicts between machines owned by different team members. As a result, everyone can use the operating system of their choice, without worrying about which Python version is available. Personally, this allowed me to perform several distro-hops on my development machine over my career without interrupting my ongoing work - it is also what allowed me to switch to OpenBSD in the first place.
  • Auditable development environments: since the definition of the development environment is stored in Dockerfile and docker-compose.yml files (a minimal example follows this list), each change (e.g. bumping the version of the Python runtime) can be versioned and reviewed using git. Every approved change is propagated automatically to all team members. Then, sometimes with manual cleanup of old containers, upgrades are introduced seamlessly on all machines.
  • Easy onboarding of new team members: when a new team member joins, they must set up the development environment before they can work on the code itself. In old projects, when containers were not used, this could take up to a week. With containerized environments, a new team member only needs to clone the repository and run docker-compose up to get everything working. For me, as a team leader, this is great added value, as the project budget is not spent on mundane tasks like downloading the proper version of Python or configuring an appropriate PostgreSQL server.
  • Deployment documentation: since the development environment is defined in code, the operations team can read those files to find out which components, in which versions, are required to run the solution in production (even if the deployment is “traditional”, not containerized). There is no need to write separate docs that would become yet another artifact to maintain and update when changes occur. “Living” docker-compose.yml and Dockerfiles cannot lie and cannot become outdated.
  • Ability to fiddle with new technologies: I can easily try new technologies and languages without cluttering my machine or performing a manual installation for every new tool. I want to try Clojure? No problem - within a few seconds an image with the Clojure runtime is downloaded. I want to try the newest version of Python? Sure, one download and I’m free to play. When I’m done, I can clear the unneeded images with a single command so that they do not use up disk space on my laptop.
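
To make the first two points concrete, below is a minimal, hypothetical docker-compose.yml for a Python project; the service name, image tag, and port are made up for illustration:

version: "3"
services:
  app:
    # Pinned image tag: bumping the runtime version for the whole
    # team is a one-line, reviewable change
    image: python:3.9-alpine
    # Mount the source code so host-side edits are visible immediately
    volumes:
      - .:/app
    working_dir: /app
    command: python -m http.server 8000
    ports:
      - "8000:8000"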

This article describes the process of setting up a container-based development environment on OpenBSD, using vmm(4) and vmd(8) with Alpine Linux as the guest system. There are several other nice tutorials that cover such a setup (in fact, I followed them to assemble mine). However, they concentrate on providing an environment capable of running existing images under OpenBSD, while my case additionally requires sharing files (for instance, the source code I’m currently working on) with the virtual machine. This is achieved with the Network File System (NFS).

Also, this tutorial uses Podman instead of Docker, for two reasons. First, Podman is more efficient and requires fewer resources while maintaining the same interface, which makes it a better option to run inside a virtual machine. Second, the recent attempts to monetize Docker are, for me, a signal that the docker tool might eventually stop being free of charge for organizations and would impose additional costs on development. The recent subscription changes do not affect Linux users who only use the Docker CLI on their systems, but who knows what the future brings - we’d better be prepared.

The setup

The following steps were tested on OpenBSD 7.0-current.

Set up a local network for the VM

The VM hosting the container runtime needs to communicate:

  • with the outside world (e.g. to obtain container images or install required packages),
  • with the host system to share resources (e.g. the source code to be run inside containers).

This can be easily achieved by introducing a bridge network connecting the host with the VM, combined with the NAT mechanism provided by pf(4).

By default, the IP forwarding mechanism that will allow NAT to work properly is disabled. To enable it, run the following commands:

# sysctl net.inet.ip.forwarding=1
# sysctl net.inet6.ip6.forwarding=1

To persist these settings across reboots, add the following lines to sysctl.conf(5):

net.inet.ip.forwarding=1
net.inet6.ip6.forwarding=1

To allow access to the host machine from the guest VM, a new interface must be created. The vether(4) pseudo-device is suitable for this purpose. To create it, place the following content in the /etc/hostname.vether0 file:

inet 10.36.25.1 255.255.255.0

Start the vether0 interface with the command:

# sh /etc/netstart vether0

Next, create a bridge(4) interface to connect the host with the guest. Attach the previously created vether0 interface to the bridge by placing the following line in the /etc/hostname.bridge0 file:

add vether0

Start the bridge0 interface with:

# sh /etc/netstart bridge0

The NAT configuration should be added to the pf.conf(5) file. This can be easily achieved with the following line:

match out on egress inet from vether0:network to any nat-to (egress)

Reload pf(4) rules by invoking:

# pfctl -f /etc/pf.conf
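
To confirm that the rule is active, list the loaded ruleset and look for the nat-to entry:

# pfctl -s rules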

Finally, we need to add the bridge0 interface as the switch to be used by the virtual machines. This is achieved by defining a new switch stanza in the vm.conf(5) file:

switch "local" {
  interface bridge0
}

Install Alpine Linux on the virtual machine

Before installing, fetch the latest ISO image of Alpine Linux. At the time of writing, the latest version is 3.14.2. Download the virt-optimized image with:

$ curl -o alpine.iso https://dl-cdn.alpinelinux.org/alpine/v3.14/releases/x86_64/alpine-virt-3.14.2-x86_64.iso

Then, create a disk image file for the base system using the vmctl(8) command. 6 GB of space is more than enough to hold the files.

$ vmctl create -s 6G alpine-sys.img

Boot the downloaded ISO to start the Alpine Linux installation on the created disk image. Again, use the vmctl(8) command:

# vmctl start -c -r alpine.iso -d alpine-sys.img -m 1G -n local docker-vm

The Alpine Linux installation process is straightforward: simply follow the instructions from the official documentation. Use the provided disk image as the location for the system files. Use a static configuration for the network interface, as follows:

  • IP address: 10.36.25.37 (10.DO.CK.ER ;)),
  • Network mask: 255.255.255.0,
  • Default gateway: 10.36.25.1,
  • DNS server of choice.

This asserts that the IP address of the spawned VM is well known every time and frees us from setting up DHCP on the bridged interface (which would not bring any added value to the solution, IMHO).
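
For reference, the resulting /etc/network/interfaces on the guest should look roughly like this (a sketch, assuming the installer named the interface eth0):

auto eth0
iface eth0 inet static
        address 10.36.25.37
        netmask 255.255.255.0
        gateway 10.36.25.1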

When the Alpine installation is complete, reboot the machine with the following command:

# vmctl start -c -d alpine-sys.img -m 1G -n local docker-vm

Log in as root and create a new, non-root user (possibly with sudo or doas rights, to perform administrative tasks without switching to root). In my case, the new user is named struchu:

alpine # adduser -h /home/struchu -u 1000 struchu wheel
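
If you choose doas, a minimal configuration could look like this (a sketch; I assume the Alpine doas package reads /etc/doas.conf, and the rule relies on struchu being a member of the wheel group):

alpine # apk add doas
alpine # echo "permit persist :wheel" >> /etc/doas.conf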

Terminate the instance:

alpine # poweroff

Then start the VM without attaching to the serial console:

# vmctl start -d alpine-sys.img -m 1G -n local docker-vm

You should be able to connect to the started instance via SSH:

$ ssh struchu@10.36.25.37

Create a separate disk for container data

Creating a separate disk for container data (e.g. cached images, managed volumes, etc.) makes it possible to purge the system disk and perform a fresh install of Alpine Linux if needed. To do so, first create the new disk image, with a size of your preference:

$ vmctl create -s 40G alpine-data.img

Boot the virtual machine with:

# vmctl start -d alpine-sys.img -d alpine-data.img -m 1G -n local docker-vm

SSH into it and find the newly created disk:

alpine # fdisk -l

It will probably be placed under /dev/vdb. Install and run the parted utility to partition the empty disk:

alpine # apk add parted
alpine # parted /dev/vdb

Create an ext4 partition:

(parted) mklabel gpt
(parted) mkpart 1 ext4 1 100%
(parted) align-check opt
(parted) name 1 data
(parted) quit

Initialize the filesystem on the /dev/vdb1 partition and note the filesystem UUID printed in the output:

alpine # mkfs.ext4 /dev/vdb1
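
If the UUID has scrolled away with the mkfs.ext4 output, it can be read back with blkid:

alpine # blkid /dev/vdb1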

Create the mount point where the container data will be stored:

alpine $ mkdir -p ~/.local/share/containers

Mount the new partition to check that everything went OK (the command runs as root, so ~ would expand to /root; use the full path instead):

alpine # mount /dev/vdb1 /home/struchu/.local/share/containers
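
The fresh filesystem is owned by root, so hand it over to the regular user; otherwise rootless Podman will not be able to write to it:

alpine # chown struchu:struchu /home/struchu/.local/share/containers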

If everything works smoothly, add the following line to /etc/fstab to make sure the disk is mounted automatically on boot:

UUID=<UUID of the filesystem> /home/struchu/.local/share/containers   ext4    rw,relatime,user 0 0
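
The entry can be verified without rebooting by unmounting the partition and letting mount -a process fstab:

alpine # umount /home/struchu/.local/share/containers
alpine # mount -a
alpine # df -h /home/struchu/.local/share/containers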

Podman installation

To install Podman on Alpine, simply run:

alpine # apk add podman

For podman to run properly, the cgroups service must be enabled:

alpine # rc-update add cgroups
alpine # service cgroups start

For rootless support, the user must have a proper range of subordinate UIDs and GIDs assigned. This can be achieved with the following commands:

alpine # apk add fuse-overlayfs shadow slirp4netns
alpine # modprobe tun
alpine # usermod --add-subuids 100000-165535 struchu
alpine # usermod --add-subgids 100000-165535 struchu
alpine $ podman system migrate

To verify that everything works properly, run the hello-world container:

alpine $ podman run --rm -it hello-world

The next step is to install the podman-compose utility as a replacement for docker-compose. It is available via pip:

alpine # apk add python3 py3-pip
alpine $ pip install --user podman-compose

To allow for the seamless invocation of this utility, add this line to the ~/.profile file:

export PATH="$PATH:$HOME/.local/bin"
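
Re-source the profile and check that the shell now finds the tool:

alpine $ . ~/.profile
alpine $ which podman-compose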

Share workspace with the VM

The last part of the VM setup is sharing the directory that holds the code to be run inside containers. In my case, I usually put the code in the /home/struchu/workspace directory. To share this directory with the VM we will use NFS. To enable NFS shares on OpenBSD, the portmap(8), nfsd(8) and mountd(8) services must be enabled on the host:

# rcctl enable portmap nfsd mountd

mountd(8) must be told which directories to serve to clients. To do this, add the following line to the exports(5) file:

/home/struchu/workspace -network=10.36.25 -mask=255.255.255.0

This line allows all hosts within the bridged network to access the /home/struchu/workspace directory with read-write permissions, so containers can modify the code, e.g. generate code, fix linting errors, or format it with automatic formatters.

Finally, start the services:

# rcctl start portmap nfsd mountd

Alpine Linux does not support mounting NFS shares out of the box. To enable it, the nfs-utils package must be installed. SSH into the VM and run the following commands:

alpine # apk add nfs-utils
alpine # rc-update add nfsmount
alpine # service nfsmount start

The next step is to create a mount point for the host workspace. For convenience, and to reduce the mental load of navigating the filesystems, I use the same directory structure as on the host:

alpine $ mkdir /home/struchu/workspace

Once this is completed, try to mount the exported directory from the host machine:

alpine # mount -t nfs 10.36.25.1:/home/struchu/workspace /home/struchu/workspace

If everything is OK, the workspace directory will contain the same entries as the host /home/struchu/workspace directory.
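
A quick way to confirm read-write access is to create a file from the guest:

alpine $ touch /home/struchu/workspace/nfs-test

The file should appear in the workspace directory on the host (and can be removed afterwards).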

To allow for an automatic mount of this share on VM boot, add the following line to /etc/fstab:

10.36.25.1:/home/struchu/workspace      /home/struchu/workspace nfs _netdev 0 0

Convenience tweaks

To avoid typing the full IP address of the VM (e.g. for SSH or for testing the developed solution in a browser), it might be a good idea to alias it in the hosts(5) file:

10.36.25.37     docker.local

If you are a long-time Docker user and your muscle memory tells you to type docker instead of podman, you might want to create aliases in the .profile file in the Alpine VM. Since the two tools have almost the same interface, simple aliases do the trick:

alias docker=podman
alias docker-compose=podman-compose

In all examples, I allocated 1 GB of RAM for the spawned VMs. However, this value can (or even should) be tweaked to match the requirements of the containers to be run and the available hardware.
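
Once the setup is stable, the whole VM definition (including the memory size) can be persisted as a vm stanza in vm.conf(5), next to the switch defined earlier. A sketch, assuming the disk images live in /home/struchu/vm:

vm "docker-vm" {
  disk "/home/struchu/vm/alpine-sys.img"
  disk "/home/struchu/vm/alpine-data.img"
  memory 1G
  interface { switch "local" }
}

With this in place, the VM can be started with a plain vmctl start docker-vm.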

Final remarks

I’m aware that OpenBSD does not need containers (or, to put it differently: OpenBSD developers do not need containers, as they create the system for themselves) and that most of the same results can be achieved with other tools (e.g. chroot(8)). However, Docker (and containers in general) simplified the development of the projects I was involved in. And, what is more important, most of the developers I have worked with (me included) use Linux, where container solutions are close at hand. I’m aware that containers are not a perfect solution to all problems and that the BSD community might find them disturbing (especially when it comes to OpenBSD and security - and I fully respect that approach), but for me they bring real value to the development process.

Using a virtual machine adds some performance penalty to the running containers. Furthermore, vmd(8) is still somewhat limited compared to other virtualization solutions (for instance KVM or VirtualBox). However, it is native to OpenBSD and included in the base system, which makes installing third-party packages unnecessary. It is also actively developed by the OpenBSD team, and I expect many improvements in the coming years.