Deploy Linux Systems with Bootc
Bootc and OSTree deliver atomic updates, instant rollbacks, and git-like system versioning for Linux infrastructure. Modernize deployment workflows.
The traditional package manager approach to Linux system management is showing its age. After years of managing infrastructure at scale, I’ve watched teams struggle with configuration drift, inconsistent environments, and risky updates that can’t be easily rolled back. The solution isn’t better scripts or more careful change management—it’s rethinking how we deploy operating systems entirely.
Image-based deployment with bootc and OSTree represents a fundamental shift in how we manage Linux systems. Instead of installing packages and modifying configuration files, you build complete system images, deploy them atomically, and roll back instantly when problems occur. This is the same pattern that revolutionized container deployments, now applied to the operating system itself.
Understanding Image-Based System Management
Traditional Linux distributions manage systems through package managers like apt, yum, or dnf. You start with a base installation, add packages, modify configuration files, and hope everything stays consistent across your fleet. This mutable approach leads to snowflake servers where no two systems are truly identical.
I first encountered OSTree while researching immutable infrastructure patterns. OSTree brings git-like version control to your entire filesystem. Every system state is a commit in a git-style repository. Bootc builds on OSTree by adding container image support, allowing you to build system images using standard container tooling.
The architecture is elegant:
# System A (current deployment)
/ostree/deploy/fedora/deploy/abc123.0/
    /usr  (read-only, content-addressed)
    /etc  (writable overlay)
    /var  (persistent data)

# System B (previous deployment, kept for rollback)
/ostree/deploy/fedora/deploy/def456.0/
    /usr  (read-only, shared content with A)
    /etc  (previous configuration)
    /var  (same persistent data)
Both deployments share identical files through content-addressing, consuming minimal extra disk space. Switching between them is atomic—just a bootloader configuration change and a reboot.
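To make the space savings concrete, here is a toy Python sketch of content-addressed storage, the mechanism behind OSTree's deduplication. The paths and file contents are invented for illustration; real OSTree stores checksummed objects under /ostree/repo/objects.

```python
# Sketch: content-addressed storage. Each unique blob is stored once,
# no matter how many deployments reference it.
import hashlib

class ObjectStore:
    def __init__(self):
        self.objects = {}  # checksum -> file content

    def add(self, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        self.objects.setdefault(digest, content)  # stored once, referenced many times
        return digest

store = ObjectStore()
deployment_a = {path: store.add(data) for path, data in {
    "/usr/bin/nginx": b"nginx-binary-v1",
    "/usr/lib/libssl.so": b"libssl-bits",
}.items()}
deployment_b = {path: store.add(data) for path, data in {
    "/usr/bin/nginx": b"nginx-binary-v2",   # changed file: one new object
    "/usr/lib/libssl.so": b"libssl-bits",   # unchanged file: shared object
}.items()}

# Four file references across two deployments, but only three unique objects
print(len(store.objects))  # → 3
```

The second deployment costs only the objects that actually changed, which is why keeping several deployments around is cheap.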
Build Container-Based System Images
Bootc images use standard Containerfile/Dockerfile syntax. This was the key insight that made image-based deployments practical—we already know how to build containers.
Here’s a minimal bootc image for a web server:
FROM quay.io/fedora/fedora-bootc:40

# Install necessary packages
RUN dnf install -y \
        nginx \
        podman \
        firewalld \
    && dnf clean all

# Configure nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY default.conf /etc/nginx/conf.d/default.conf

# Enable services
RUN systemctl enable nginx && \
    systemctl enable firewalld

# Configure firewall
RUN firewall-offline-cmd --add-service=http && \
    firewall-offline-cmd --add-service=https

# Application user setup
RUN useradd -r -s /sbin/nologin webapp && \
    mkdir -p /var/www/html && \
    chown -R webapp:webapp /var/www/html
This Containerfile defines your entire system state. Build it with standard container tools:
podman build -t localhost/webapp-system:latest .
podman push localhost/webapp-system:latest registry.example.com/webapp-system:latest
The image is now ready for deployment. No installation scripts, no configuration management tools, just a container image containing your complete operating system configuration.
Deploy Atomic Updates to Production
Deploying a bootc image to a running system uses the bootc command-line tool. I typically automate this through systemd timers for regular updates:
# One-time deployment to new hardware
bootc install to-disk \
    --source-imgref registry.example.com/webapp-system:latest \
    /dev/sda

# Update running system to new image
bootc upgrade --check

# If update available, apply it
bootc upgrade --apply

# Rollback to previous deployment if needed
bootc rollback
The upgrade process is atomic. The new system image is downloaded, verified, and staged. On the next reboot, the bootloader switches to the new deployment. The previous deployment remains available for instant rollback.
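The stage-then-flip pattern that makes this safe can be sketched in a few lines of Python: write everything for the new system first, then change a single pointer with an atomic rename. The directory names below are illustrative, not the real bootc on-disk layout.

```python
# Toy sketch of atomic deployment switching: the new deployment is fully
# staged, then one symlink is repointed via rename(2), which is atomic on
# POSIX filesystems. There is never an intermediate broken state.
import os
import tempfile

def atomic_switch(link_path, new_target):
    """Repoint link_path at new_target atomically."""
    tmp = link_path + ".staged"
    if os.path.lexists(tmp):
        os.remove(tmp)
    os.symlink(new_target, tmp)   # stage the new pointer
    os.replace(tmp, link_path)    # atomic rename over the old pointer

workdir = tempfile.mkdtemp()
os.makedirs(os.path.join(workdir, "deploy/abc123.0"))
os.makedirs(os.path.join(workdir, "deploy/def456.0"))
current = os.path.join(workdir, "current")

atomic_switch(current, "deploy/def456.0")  # initial deployment
atomic_switch(current, "deploy/abc123.0")  # upgrade: one atomic flip
print(os.readlink(current))  # → deploy/abc123.0
```

A crash before the flip leaves the old deployment booted; a crash after it leaves the new one. There is no partially-upgraded state to recover from.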
I use this systemd timer to check for updates every 6 hours:
# /etc/systemd/system/bootc-upgrade.timer
[Unit]
Description=Check for bootc system updates

[Timer]
OnCalendar=*-*-* 00,06,12,18:00:00
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/bootc-upgrade.service
[Unit]
Description=Apply bootc system updates
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# --apply reboots into the new image only when an update was actually staged
ExecStart=/usr/bin/bootc upgrade --apply
This gives you hands-off updates: systems pull new images, stage them, and reboot into them during scheduled windows. Failed updates never activate because OSTree verifies image integrity before switching deployments.
Manage Configuration in Immutable Systems
The trickiest part of image-based deployments is handling configuration that varies between environments. Hand-editing /etc/nginx/nginx.conf on a production box technically works, since /etc stays writable and OSTree merges it across updates, but it reintroduces exactly the untracked drift that image-based deployment is meant to eliminate.
I use three patterns for configuration management:
1. Environment Variables and Templating
#!/bin/bash
# /usr/local/bin/configure-nginx
envsubst < /usr/share/templates/nginx.conf.template > /etc/nginx/nginx.conf
# Reload only if nginx is already running; at boot this unit runs before nginx
systemctl try-reload-or-restart nginx
# /etc/systemd/system/configure-nginx.service
[Unit]
Description=Configure nginx from environment
Before=nginx.service
ConditionPathExists=/etc/sysconfig/webapp
[Service]
Type=oneshot
EnvironmentFile=/etc/sysconfig/webapp
ExecStart=/usr/local/bin/configure-nginx
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
The /etc/sysconfig/webapp file contains environment-specific values and persists across updates because /etc is writable.
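For readers who have not used envsubst, the same substitution idea expressed in Python terms looks like this. The template text and variable names are invented for illustration:

```python
# Sketch of the envsubst pattern: render a config template from environment
# variables, so host-specific values stay out of the immutable image.
import os
from string import Template

# Stand-ins for values that would come from /etc/sysconfig/webapp
os.environ.setdefault("SERVER_NAME", "web01.example.com")
os.environ.setdefault("WORKER_COUNT", "4")

template = Template("server_name ${SERVER_NAME};\nworker_processes ${WORKER_COUNT};")
rendered = template.substitute(os.environ)
print(rendered)
```

The image ships the template; each host supplies only the handful of values that differ.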
2. Butane/Ignition for Initial Configuration
Butane generates Ignition configs for initial system provisioning. This is perfect for cloud deployments:
# config.bu
variant: fcos
version: 1.5.0
storage:
  files:
    - path: /etc/sysconfig/webapp
      mode: 0644
      contents:
        inline: |
          ENVIRONMENT=production
          DATABASE_HOST=db.example.com
          API_KEY_SECRET_ARN=arn:aws:secretsmanager:...
systemd:
  units:
    - name: configure-nginx.service
      enabled: true
Convert to Ignition JSON and use it during deployment:
butane config.bu > config.ign

bootc install to-disk \
    --source-imgref registry.example.com/webapp-system:latest \
    --ignition-file config.ign \
    /dev/sda
3. External Configuration Mounts
For complex configurations, I mount external volumes:
# Mount configuration from S3/Git/etc
podman run -d \
    --name=config-sync \
    -v /etc/webapp-config:/config \
    registry.example.com/config-sync:latest

# nginx.conf references mounted configs
include /etc/webapp-config/*.conf;
The configuration lives outside the OS image, allowing updates without rebuilding the entire system.
Validate System Images Before Deployment
One of bootc’s killer features is testability. Because your system is a container image, you can test it before deployment:
# Boot the system image with systemd as PID 1 so services behave normally
podman run -d --name webapp-test \
    --privileged \
    registry.example.com/webapp-system:latest \
    /sbin/init

# Open a shell in the running system and verify services start correctly
podman exec -it webapp-test /bin/bash
systemctl start nginx
systemctl status nginx
curl http://localhost/health
# Test firewall rules
firewall-cmd --list-all
# Validate configuration
nginx -t
I build this into CI/CD pipelines:
# .github/workflows/build-system-image.yml
name: Build and Test System Image

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build bootc image
        run: |
          podman build -t webapp-system:${{ github.sha }} .
      - name: Test system image
        run: |
          podman run --rm --privileged webapp-system:${{ github.sha }} \
            /usr/local/bin/test-suite.sh
      - name: Push to registry
        if: success()
        run: |
          podman push webapp-system:${{ github.sha }} \
            registry.example.com/webapp-system:${{ github.sha }}
          podman tag webapp-system:${{ github.sha }} \
            registry.example.com/webapp-system:latest
          podman push registry.example.com/webapp-system:latest
Failed tests block deployment. Only validated images reach production.
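The workflow assumes a test-suite.sh baked into the image; its contents are up to you. As a hypothetical sketch in Python, such a suite is essentially a list of commands that must all exit zero. The demo at the bottom uses portable stand-in commands rather than real system checks:

```python
# Hypothetical validation-suite skeleton: run each check command and
# collect the ones that failed. Inside the image, the list would hold
# real checks such as ["nginx", "-t"] or ["systemctl", "is-enabled", "nginx"].
import subprocess

def run_checks(checks):
    """Return the subset of commands that did not exit 0."""
    failures = []
    for cmd in checks:
        try:
            ok = subprocess.run(cmd, capture_output=True).returncode == 0
        except FileNotFoundError:  # missing binary counts as a failure
            ok = False
        if not ok:
            failures.append(cmd)
    return failures

# Portable demo commands standing in for real system checks
print(run_checks([["true"], ["false"]]))  # → [['false']]
```

A CI step then fails the build whenever the returned list is non-empty.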
Monitor Deployment State Across Infrastructure
Tracking deployment state across a fleet requires new observability patterns. I expose bootc status through metrics:
#!/usr/bin/env python3
# /usr/local/bin/bootc-exporter
import json
import subprocess
import time

from prometheus_client import Gauge, start_http_server

BOOTC_VERSION = Gauge('bootc_current_version',
                      'Current bootc deployment version',
                      ['deployment', 'image'])

def get_bootc_status():
    result = subprocess.run(['bootc', 'status', '--json'],
                            capture_output=True, text=True)
    return json.loads(result.stdout)

def update_metrics():
    status = get_bootc_status()
    current = status['status']['booted']
    BOOTC_VERSION.labels(
        deployment=current['deployment'],
        image=current['image']['imageReference']
    ).set(1)

if __name__ == '__main__':
    start_http_server(9100)
    while True:
        update_metrics()
        time.sleep(60)
This exports current deployment state to Prometheus. I can now alert on version drift, track rollout progress, and correlate deployments with performance metrics.
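Once every host exposes its booted image reference, drift detection reduces to a comparison. A sketch, assuming you have already scraped each host's image reference from the exporter above; the hostnames and digests are made up:

```python
# Sketch: flag hosts whose booted image differs from the fleet majority.
from collections import Counter

fleet = {  # hostname -> booted image reference (values are invented)
    "web01": "registry.example.com/webapp-system@sha256:aaa",
    "web02": "registry.example.com/webapp-system@sha256:aaa",
    "web03": "registry.example.com/webapp-system@sha256:bbb",
}

majority_image, _ = Counter(fleet.values()).most_common(1)[0]
drifted = sorted(host for host, image in fleet.items() if image != majority_image)
print(drifted)  # → ['web03']
```

In practice the same comparison is usually expressed as a Prometheus alert rule over the exported metric rather than a standalone script.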
Implement Production-Ready Deployment Patterns
After running bootc in production for several months, I’ve developed these patterns:
Staged Rollouts: Deploy new images to canary servers first, monitor for issues, then gradually roll out to the fleet:
# Deploy to canary
ansible-playbook -i inventory/canary bootc-upgrade.yml
# Wait 24 hours, check metrics
# If good, deploy to production
ansible-playbook -i inventory/production bootc-upgrade.yml
Automated Rollback: Monitor for errors post-deployment and automatically roll back if thresholds are exceeded:
#!/bin/bash
# /usr/local/bin/health-check-and-rollback
# Run via systemd timer 5 minutes after boot

error_rate=$(curl -s http://localhost/metrics |
    grep error_rate |
    awk '{print $2}')

# Bail out if the metric could not be read at all
[ -z "$error_rate" ] && exit 0

if (( $(echo "$error_rate > 0.05" | bc -l) )); then
    logger "Error rate $error_rate exceeds threshold, rolling back"
    bootc rollback
    systemctl reboot
fi
Blue-Green Deployments: Maintain two parallel fleets and switch traffic atomically:
# Update the idle blue fleet
ansible-playbook -i inventory/blue bootc-upgrade.yml --extra-vars "image_tag=v2.0"

# Switch load balancer traffic to blue
aws elbv2 register-targets --target-group-arn $TG_ARN \
    --targets $(cat blue-instances.txt)

# Once blue is verified, bring green to the same version for the next cycle
ansible-playbook -i inventory/green bootc-upgrade.yml --extra-vars "image_tag=v2.0"
Migrate from Traditional Linux Deployments
Moving existing systems to bootc requires planning. I use this phased approach:
Phase 1: Inventory Current State
# Capture installed package names (names only, so dnf can resolve current versions)
rpm -qa --queryformat '%{NAME}\n' | sort -u > packages.txt

# Capture configuration files
rpm -qa --configfiles > configs.txt

# Capture enabled systemd units (unit names only, no table legend)
systemctl list-unit-files --state=enabled --no-legend | awk '{print $1}' > enabled-services.txt
Phase 2: Build Equivalent Bootc Image
Start with a base image and add packages/configurations:
FROM quay.io/fedora/fedora-bootc:40

COPY packages.txt enabled-services.txt /tmp/

RUN dnf install -y $(cat /tmp/packages.txt) && dnf clean all

COPY etc-overlay/ /etc/

RUN systemctl enable $(cat /tmp/enabled-services.txt)
Phase 3: Test in Parallel
Deploy bootc systems alongside existing infrastructure. Run identical workloads and compare behavior.
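One simple way to compare the parallel systems is to diff the inventories each one reports. A sketch with invented package lists:

```python
# Sketch: diff package inventories captured from the traditional system
# and the candidate bootc system (package names are invented).
traditional = set("nginx podman firewalld legacy-agent".split())
bootc_image = set("nginx podman firewalld".split())

print(sorted(traditional - bootc_image))  # missing from the bootc image → ['legacy-agent']
print(sorted(bootc_image - traditional))  # unexpected additions → []
```

The same diff applies to the enabled-services and config-file lists captured in Phase 1.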
Phase 4: Gradual Cutover
Migrate workloads server-by-server, keeping traditional systems as fallback.
Lessons from Production Bootc Deployments
Configuration Management is Simpler: I eliminated Ansible playbooks with 2000+ lines of complex logic. The entire system configuration is now a 150-line Containerfile.
Rollbacks Actually Work: Unlike traditional package rollbacks, which often fail due to dependency conflicts, OSTree rollbacks are instant and reliable. I've rolled back production systems dozens of times with zero issues.
Disk Space is Minimal: I worried about storing multiple deployment versions. In practice, OSTree’s content-addressing means shared files consume space only once. Three deployments use roughly 20% more space than one traditional installation.
Updates are Less Scary: Atomic updates with guaranteed rollback remove the anxiety from system updates. Our update velocity increased 3x because the risk disappeared.
Start Using Bootc for Linux Deployments
The shift to image-based deployments with bootc and OSTree isn’t just a new tool—it’s a fundamental transformation in infrastructure management. Treating operating systems like immutable artifacts rather than mutable state machines aligns perfectly with modern DevOps practices and dramatically improves reliability.
Start small: build a test system image, deploy it in a lab environment, and experience instant rollbacks firsthand. The architectural simplicity and operational benefits will quickly convince you that this is the future of Linux system deployment.