GitOps-driven OpenHAB in K3s

After building out my cloud@home environment (detailed in my previous article), I decided to take the next logical step: replacing my OpenHABian setup with a fully GitOps-driven “as code” version of OpenHAB deployed into my K3s cluster.

My goals were simple:

  • be able to deploy or redeploy everything from scratch in seconds,
  • support upgrades cleanly,
  • optionally preserve user data when I want to,
  • have transparency and reproducibility via Git,
  • be able to reflect any OpenHAB textual definition change in production with a simple commit.

This post walks through how I designed, configured, and now run OpenHAB on Kubernetes using GitOps—covering the trade-offs I faced, the architecture I settled on, and lessons learned along the way.

OpenHAB deployment definition

Before diving in, you may want to review my k3s architecture — it provides the cluster with redundant persistence, automated backup and restore, and a fully GitOps-driven workflow powered by SaltStack and GitHub.

What follows is a simplified and commented excerpt of the full YAML manifest. The goal is to highlight the core building blocks of the deployment, which serve as the foundation for everything else described in this article.

Please note that I’m not defining a persistent volume for the OpenHAB userdata folder below, because it is not needed in the context of this article and would only add clutter.
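A minimal sketch of the core building blocks could look like the following (the namespace, labels, and image tag are illustrative, not my actual manifest — the point is the shape):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      containers:
        - name: openhab
          # Always pin the exact tested version, never "latest"
          image: openhab/openhab:4.1.1
          ports:
            - containerPort: 8080   # HTTP UI
          volumeMounts:
            - name: conf
              mountPath: /openhab/conf
      volumes:
        # In-memory volume that will hold the GitOps-managed configuration
        - name: conf
          emptyDir:
            medium: Memory
---
apiVersion: v1
kind: Service
metadata:
  name: openhab
spec:
  selector:
    app: openhab
  ports:
    - port: 8080
      targetPort: 8080
```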


Some additional notes regarding this definition:

  • I always pin the exact version of the OpenHAB container image, to ensure that only the version I tested my configuration against gets deployed.
  • I do not expose the Karaf console (port 8101) in my production environment, because any TCP ingress in K3s requires specific configuration and I honestly don’t need it.

GitOps for OpenHAB configuration

To manage OpenHAB configuration via GitOps, I use a git-sync sidecar container that pulls configuration from a private repository. The configuration is stored on an in-memory emptyDir volume mounted at /openhab/conf. Initially, I tried using a subPath mount, but this approach introduced two problems:

  1. Startup ordering – git-sync must complete its initial sync before the OpenHAB container starts; otherwise OpenHAB creates its own /conf directory structure, which may conflict.
  2. Continuous updates – subsequent changes in the Git repository are not reflected in the running container, because symlinks in subPath mounts are not followed after startup.

Using an initContainer for git-sync isn’t a solution either, because an initContainer only runs to completion once at startup, while the purpose of git-sync is to continuously monitor and update the local repository.

The solution I implemented involves three key steps:

  • Set the OPENHAB_CONF environment variable to point to a different path, avoiding the need for subPath mounts.
  • Use a sidecar container with git-sync to fetch the configuration from the OpenHAB repository.
  • Use git-sync’s command hook feature to copy updated configuration files into the OpenHAB configuration directory.

Two remaining challenges had to be addressed:

  1. Race condition at startup – if the OpenHAB container initializes its configuration directory before or during the git-sync copy, the file structure may be incomplete (OpenHAB runs some initialization commands during its first startup).
  2. Command hook limitations – git-sync only allows executing a single executable with no arguments.

To solve this, I added a small BusyBox initContainer and a dedicated volume to create a shell script. This script acts as a command hook for git-sync, waiting for the OpenHAB configuration directory to exist before copying files over.

Finally, there are a few practical notes:

  • Only older git-sync images (3.x) are readily available online. To ensure future compatibility, I built a custom git-sync image using a build script stored in my Git repository.
  • I modified the GROUP_ID and USER_ID in the OpenHAB container definition so that the OpenHAB user and group match those used by git-sync, which cannot be changed otherwise.

Here is the updated deployment YAML:
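The full manifest is longer; a sketch of the GitOps-relevant parts might look like this (image tags, repository URL, and paths are illustrative, and the git-sync flags shown are v4-style):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openhab
spec:
  selector:
    matchLabels:
      app: openhab
  template:
    metadata:
      labels:
        app: openhab
    spec:
      # Creates the updated.sh exechook script described above
      initContainers:
        - name: create-hook
          image: busybox:1.36
          command: ["sh", "-c"]
          args:
            - printf '#!/bin/sh\nwhile [ ! -d "$OPENHAB_CONF/items" ]; do sleep 2; done\nexec /git/repo/build.sh\n' > /scripts/updated.sh && chmod +x /scripts/updated.sh
          volumeMounts:
            - name: scripts
              mountPath: /scripts
      containers:
        - name: openhab
          image: openhab/openhab:4.1.1        # pin the tested version
          env:
            # Move the conf directory so no subPath mount is needed
            - name: OPENHAB_CONF
              value: /openhab/conf-gitops
            # Align the OpenHAB user/group with git-sync's fixed IDs
            - name: USER_ID
              value: "65533"
            - name: GROUP_ID
              value: "65533"
          volumeMounts:
            - name: conf
              mountPath: /openhab/conf-gitops
        - name: git-sync
          image: my-registry/git-sync:4.2.1   # custom-built image (see notes above)
          args:
            - --repo=https://github.com/<user>/openhab-config
            - --root=/git
            - --link=repo
            - --period=30s
            - --exechook-command=/scripts/updated.sh
          env:
            - name: OPENHAB_CONF
              value: /openhab/conf-gitops
          volumeMounts:
            - name: git
              mountPath: /git
            - name: conf
              mountPath: /openhab/conf-gitops
            - name: scripts
              mountPath: /scripts
      volumes:
        - name: conf
          emptyDir: { medium: Memory }
        - name: git
          emptyDir: {}
        - name: scripts
          emptyDir: {}
```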

Secrets management

So far, I’ve described how I set up a GitOps-driven OpenHAB: all textual definitions—items, rules, sitemaps, etc.—can be updated in my K3s production environment with a simple commit to the main branch.

For small changes, such as adjusting a cron value in a rule, I can edit the required file and commit it directly. For more complex changes, I create a dedicated branch in my development environment and merge it into the main branch once the work is complete.

But the goal isn’t fully achieved yet: OpenHAB textual definitions often need to store secrets or contextual values, for example:

  • MQTT client ID
  • Passwords used by bindings like Kodi
  • Credentials required by external services such as InfluxDB

To handle this, I implemented a simple but effective templating mechanism: any value enclosed in double brackets is replaced by the corresponding environment variable when files are fetched from the Git repository. For instance, {{MY_ENV_VAR}} will be replaced with the value of the environment variable MY_ENV_VAR.

This approach allows me to define all secrets directly in K3s and “restore” them at deployment time using environment variable definitions in the deployment YAML.

In practice, my openhab-config Git repository contains a build.sh script. This script is executed by the dynamically created updated.sh script described in the previous chapter. It relies solely on simple shell commands available in BusyBox, keeping the process lightweight and fully reproducible.

Here is the build.sh script:
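My actual script is not reproduced here, but a minimal BusyBox-compatible sketch of the same idea could look like this (the substitution breaks if a value contains the sed delimiter, which is acceptable for my use case):

```shell
#!/bin/sh
# build.sh — render the synced configuration into the OpenHAB conf directory,
# replacing each {{NAME}} placeholder with the matching environment variable.
render_tree() {
  src="$1"; dst="$2"
  find "$src" -type f ! -path '*/.git/*' ! -name 'build.sh' | while read -r f; do
    rel="${f#"$src"/}"
    out="$dst/$rel"
    mkdir -p "$(dirname "$out")"
    cp "$f" "$out"
    # Collect every {{NAME}} placeholder and substitute it in place
    for name in $(grep -o '{{[A-Za-z_][A-Za-z0-9_]*}}' "$f" | tr -d '{}' | sort -u); do
      eval "val=\$$name"
      sed -i "s|{{$name}}|$val|g" "$out"
    done
  done
}

# When invoked by updated.sh: render the git-sync checkout into $OPENHAB_CONF
[ -n "$OPENHAB_CONF" ] && render_tree "$(dirname "$0")" "$OPENHAB_CONF" || true
```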

The BusyBox initContainer builds the updated.sh script that calls this build script:
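A sketch of that initContainer (paths and image tag are illustrative; the heredoc is quoted so that $OPENHAB_CONF is expanded at hook runtime, not at build time):

```yaml
      initContainers:
        - name: create-hook
          image: busybox:1.36
          command: ["sh", "-c"]
          args:
            - |
              cat > /scripts/updated.sh <<'EOF'
              #!/bin/sh
              # Wait until OpenHAB has initialized its configuration directory
              while [ ! -d "$OPENHAB_CONF/items" ]; do
                sleep 2
              done
              # Delegate the copy and templating to build.sh from the synced repo
              exec /git/repo/build.sh
              EOF
              chmod +x /scripts/updated.sh
          volumeMounts:
            - name: scripts
              mountPath: /scripts
```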

Any secret or simple variable value can now be defined as an environment variable in the git-sync sidecar container.

For example, below is my influxdb.cfg stored in the /services folder of my openhab-config repository:
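A sketch of such a file (keys shown are those of the InfluxDB 1.x persistence service; the URL is illustrative):

```
# services/influxdb.cfg — {{...}} placeholders are replaced at deployment time
url=http://influxdb.monitoring.svc:8086
user={{INFLUXDB_USER}}
password={{INFLUXDB_PASSWORD}}
db=openhab
```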

To better illustrate this, here is the same git-sync YAML definition as before, with additional environment values for InfluxDB:
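A sketch of the relevant fragment (the Secret name and keys are illustrative; plain values would work too, but Secrets keep credentials out of the deployment YAML):

```yaml
        - name: git-sync
          image: my-registry/git-sync:4.2.1
          env:
            - name: INFLUXDB_USER
              valueFrom:
                secretKeyRef:
                  name: openhab-secrets
                  key: influxdb-user
            - name: INFLUXDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: openhab-secrets
                  key: influxdb-password
```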

OpenHAB Tips

This section isn’t meant to replace the official OpenHAB design patterns, but rather to share a handful of practical tips that helped me improve and streamline my textual configuration. These are lessons learned while running OpenHAB, whether in a GitOps-driven setup or not, and they might be useful if you’re looking to optimize your own configuration.

General Practices

One practice I strongly recommend is keeping the semantic model in a dedicated items file. This way, the logical organization of the home (rooms, zones, equipment) remains clearly separated from the technical items that implement automations.

Here is an example of how I structure the semantic model:
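A minimal sketch of such a model file (names and semantic tags are illustrative):

```
// home.items — the semantic model only, no technical items here
Group   Home        "Home"                         ["Indoor"]
Group   GroundFloor "Ground floor"   (Home)        ["GroundFloor"]
Group   LivingRoom  "Living room"    (GroundFloor) ["LivingRoom"]
Group   Bedroom     "Bedroom"        (GroundFloor) ["Bedroom"]
```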

Another useful approach is to split items into files by “family”—for example, cameras.items, heaters.items, or sensors.items. Since items of the same family usually share a common structure, maintaining them in separate files makes updates and refactoring much easier.

I also define a dedicated group for each item family. This allows me to manage or reference all items of a given type through their group, which comes in handy for both rules and UI navigation.

In addition, I create functional groups for rule logic. For example, I maintain groups for all battery-powered devices, for disabling all cameras, or for starting heaters. This simplifies rule writing because I can iterate over these groups instead of managing each item individually.

Here is an example of how I assign items to their family group:
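A sketch of what a family file could look like (item names are illustrative and the channels are left as placeholders):

```
// sensors.items — one structural group per family, plus functional groups
Group gTemperatureSensors "Temperature sensors"
Group gBatteryPowered     "Battery-powered devices"

Number:Temperature Temp_Living  "Living room [%.1f °C]" (gTemperatureSensors, gBatteryPowered) { channel="..." }
Number:Temperature Temp_Bedroom "Bedroom [%.1f °C]"     (gTemperatureSensors, gBatteryPowered) { channel="..." }
```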

For rules, I keep a dedicated file for notifications. This makes all notification logic centralized and easy to extend without digging into unrelated automation files (more on that later).

Finally, I maintain a dedicated file for startup actions. At startup, this file sets default values for items not linked to any binding—for instance, my comfort and eco temperature targets, or the heater hysteresis values.

Here’s a snippet showing how I initialize default values at startup:
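A sketch in Rules DSL (the item names and values are illustrative; the NULL check avoids overwriting restored state):

```
// startup.rules — set defaults for unbound items at system start
rule "Initialize defaults"
when
    System started
then
    if (Target_Comfort_Temp.state == NULL) Target_Comfort_Temp.postUpdate(20.5)
    if (Target_Eco_Temp.state == NULL)     Target_Eco_Temp.postUpdate(17.0)
    if (Heater_Hysteresis.state == NULL)   Heater_Hysteresis.postUpdate(0.5)
end
```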

HTTP Binding Usage

Although the documentation only gives hints about a few item types like Switch, the HTTP binding seems to support other types too. At least, I successfully used Number items.

Here’s a snippet showing how to get CPU usage from some API:
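A hedged sketch, assuming a hypothetical API at https://node.example.lan/api/metrics that returns JSON like {"cpu": 12.5} (the JSONPATH transformation requires the JSONPath add-on):

```
// http.things
Thing http:url:nodeapi "Node metrics" [
    baseURL="https://node.example.lan",
    refresh=30
] {
    Channels:
        Type number : cpu "CPU usage" [
            stateExtension="/api/metrics",
            stateTransformation="JSONPATH:$.cpu"
        ]
}

// http.items
Number Node_CPU "CPU usage [%.1f %%]" { channel="http:url:nodeapi:cpu" }
```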

Also, commandTransformation and stateTransformation are not only about transformation: you can return any value you need, provided it is always a string! So a JSON structure used as a POST payload must be “stringified” first (see the Reolink camera control example below).

As an example, here is a JavaScript transformation that enables or disables email alerts on a Reolink camera:
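A sketch of the idea (the payload shape is a plausible Reolink API body and may differ between firmware versions; OpenHAB passes the item command to the transformation as the variable input and uses the returned string as the POST payload):

```javascript
// setemail.js — hypothetical JS transformation for the HTTP binding.
function buildReolinkEmailPayload(input) {
  var enable = (input == "ON") ? 1 : 0;
  // The binding requires a string, hence JSON.stringify on the structure.
  return JSON.stringify([{
    cmd: "SetEmail",
    action: 0,
    param: { Email: { schedule: { enable: enable } } }
  }]);
}
// In the real transformation file, the last line is simply:
//   buildReolinkEmailPayload(input);
```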

Also note that this kind of transformation requires the jsscripting automation add-on in addons.cfg.

Integrating moOde audio

Nothing groundbreaking here, but it’s worth mentioning: moOde audio doesn’t provide a dedicated binding like Kodi does. However, it can still be controlled effectively using the MPD binding, which supports standard commands such as play, pause, previous, and next.

Here’s a simple example of how I use the MPD binding with moOde audio:
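A sketch of the Thing and items (the host name and item names are illustrative; channel names follow the MPD binding documentation):

```
// mpd.things
Thing mpd:mpd:moode "moOde" [ ipAddress="moode.local", port=6600 ]

// mpd.items
Player Moode_Control "moOde control"    { channel="mpd:mpd:moode:control" }
Dimmer Moode_Volume  "Volume [%d %%]"   { channel="mpd:mpd:moode:volume" }
String Moode_Title   "Now playing [%s]" { channel="mpd:mpd:moode:currenttitle" }
```

The Player item accepts the standard PLAY, PAUSE, NEXT, and PREVIOUS commands, so it plugs directly into any player widget.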

In addition, I implemented a “one button radio play” feature by calling the moOde audio API directly. The trick is to use the /command/cmd? endpoint with the playitem command, followed by the RADIO/<name of the radio> argument.

Here’s how the command looks in practice:
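A hedged sketch in Rules DSL (the item name, host, and station are illustrative, and the exact URL shape may differ between moOde versions; %20 encodes the space before the argument):

```
rule "One button radio play"
when
    Item Radio_Play received command ON
then
    sendHttpGetRequest("http://moode.local/command/?cmd=playitem%20RADIO/FIP")
end
```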

If you don’t have a proper TLS certificate, you will need to add the ignoreSSLErrors="true" option to the Thing definition.

Push Notifications

Just like with my Zabbix monitoring setup (described in my cloud@home article), I rely on Ntfy for push notifications to provide a simple yet effective way of sending alerts directly from rules. This keeps me informed about important events without having to constantly check dashboards.

To keep things tidy, I group all my notification logic into a single rules file. This centralization makes it easier to maintain, extend, or troubleshoot the notification system as my setup evolves.

Here’s how my notification rules are defined (sample):
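A sketch of such a file (the ntfy URL, item names, and threshold are illustrative; sendHttpPostRequest is a standard DSL action):

```
// notifications.rules — all push notifications in one place
val String NTFY_TOPIC = "https://ntfy.example.lan/openhab"

rule "Notify on front door open"
when
    Item Door_Front changed to OPEN
then
    sendHttpPostRequest(NTFY_TOPIC, "text/plain", "Front door opened")
end

rule "Notify on low battery"
when
    Member of gBatteryPowered changed
then
    if (triggeringItem.state < 15) {
        sendHttpPostRequest(NTFY_TOPIC, "text/plain",
            triggeringItem.label + " battery is low")
    }
end
```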

Monitoring Oregon Sensor Reception

Depending on battery level, signal quality, or sometimes for no obvious reason, my receiver may occasionally fail to capture readings from Oregon temperature sensors. To detect and act on these situations, I implemented a simple notification mechanism.

For each sensor, I define a dedicated DateTime item that uses the system:timestamp-update profile. All these items are then grouped together, so I can easily process them in bulk.

Here’s an example of how I declare the items and group:
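A sketch (item names are illustrative and the channel is left as a placeholder; the profile refreshes the timestamp on every state update from the linked channel):

```
// oregon.items — “last seen” timestamps, refreshed on every sensor update
Group gOregonLastSeen "Oregon sensors last seen"

DateTime Oregon_Living_LastSeen "Living room sensor [%1$tH:%1$tM]"
    (gOregonLastSeen)
    { channel="..." [profile="system:timestamp-update"] }
```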

On top of this, I created a notification rule that runs every 10 minutes. The rule checks whether any group member has an outdated timestamp and triggers a notification if a sensor hasn’t reported for too long.

Here’s how the monitoring rule is defined:
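A sketch in OH3-style Rules DSL (the one-hour threshold and the ntfy URL are illustrative; the instanceof guard skips items that have never been updated):

```
// sensor-watchdog.rules — alert when a sensor stays silent too long
rule "Check Oregon sensor reception"
when
    Time cron "0 0/10 * * * ?"
then
    gOregonLastSeen.members.forEach [ sensor |
        if (sensor.state instanceof DateTimeType) {
            val last = (sensor.state as DateTimeType).getZonedDateTime()
            if (last.isBefore(now.minusHours(1))) {
                sendHttpPostRequest("https://ntfy.example.lan/openhab", "text/plain",
                    sensor.label + " has not reported for more than 1 hour")
            }
        }
    ]
end
```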

 

These tips don’t aim to replace the official OpenHAB design patterns, but they highlight a few practical tricks that helped me make my textual configuration cleaner, more maintainable, and easier to extend. Hopefully, they can serve as inspiration if you’re building or refining your own setup.

Last words

With this GitOps-driven approach, my OpenHAB deployment is now fully reproducible, portable, and easy to maintain. From the initial deployment definitions, to configuration management through Git, to handling secrets and notifications, everything is described “as code” and can be rolled out—or rolled back—in just a few seconds.

What started as a simple replacement for my OpenHABian setup has turned into a robust, Kubernetes-native installation where upgrades, recovery, and experimentation come with almost no operational overhead.

If you’re already running a K3s cluster, this workflow shows how home automation can benefit from the same best practices as modern cloud-native applications: version control, GitOps pipelines, and declarative infrastructure.
