Keycloak 26 on Podman with Quadlets: Identity Management the systemd Way



Running your own identity provider is one of those things that sounds straightforward until you’re three hours into debugging OIDC token flows at 2 AM. Keycloak has become the de facto open-source solution for identity and access management, but deploying it properly - with a real database backend, network isolation, and no credentials in plaintext - still takes some deliberate architecture.

This guide walks through deploying the Red Hat Build of Keycloak (RHBK) 26 on Podman using Quadlets: systemd-native unit files that give you declarative container management without a daemon or a compose file. If you’ve read my earlier Podman Quadlets guide, this follows the same architectural pattern - isolated backend network for the database, dual-attached application container, secrets managed through Podman’s secret store.

A note on support: Red Hat’s official documentation for the RHBK container image targets OpenShift as the supported deployment platform. Running RHBK on plain Podman is technically sound and works well, but it is not a Red Hat-supported configuration. If you need a fully supported deployment path, OpenShift is where Red Hat stands behind it. This guide is for those of us who want the RHBK image’s quality, security errata, and build lifecycle on a single-host Podman setup - and are comfortable with that trade-off.

Architecture Overview

The deployment consists of three components:

  1. PostgreSQL 16 - Keycloak’s database backend, isolated on a private network
  2. Keycloak 26 - The identity provider, connected to both backend and frontend networks
  3. A dedicated backend network - Ensuring the database is never reachable from the outside
                    Internet
                       |
               Reverse Proxy (443)
                       |
          +------------+------------+
          |     frontend network    |
          +------------+------------+
                       |
                Keycloak Container
                   (Port 8080)
                       |
          +------------+------------+
          | keycloak-backend.network|
          |   (172.16.0.0/24)       |
          +------------+------------+
                       |
              PostgreSQL Container
                  (Port 5432)

Keycloak sits on both networks: it talks to PostgreSQL over the isolated backend, and your reverse proxy reaches it on the frontend. The database has no route to the outside world.

This guide assumes you already have a reverse proxy (Traefik, Caddy, nginx) handling TLS termination on your frontend network. If you need that piece, my earlier Quadlet article covers Traefik in detail.

Prerequisites

You’ll need:

  • RHEL 10, Fedora 43+, or any system with Podman 5.x+ and Quadlet support
  • An active Red Hat subscription for pulling from registry.redhat.io (this guide uses the Red Hat Build of Keycloak)
  • A reverse proxy already handling TLS on your frontend network
  • Root access for system-wide Quadlet deployment (or adapt for rootless with systemctl --user)

All Quadlet files go into /etc/containers/systemd/ for system-wide deployments.

Step 1: Create the Secrets

Never put passwords in unit files. Podman’s secret store keeps credentials encrypted and out of process listings:

# Generate and store the database password
pwgen -s 32 1 | tr -d '\n' | podman secret create keycloak_db_password -

# Generate and store the Keycloak admin password
pwgen -s 32 1 | tr -d '\n' | podman secret create keycloak_admin_password -

The tr -d '\n' is important. pwgen appends a trailing newline, and podman secret create stores the raw bytes verbatim. PostgreSQL’s init script strips the newline when setting the password, but Keycloak’s JDBC driver sends it as-is - resulting in a password mismatch that produces a confusing “authentication failed” error despite both containers reading the exact same secret.
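You can see the difference in byte counts with plain shell - no Podman required (using a fixed string here in place of pwgen's output):

```shell
# echo appends a trailing newline: 7 bytes for a 6-character string
echo "s3cret" | wc -c

# tr -d '\n' strips it, so only the password's own bytes remain: 6
echo "s3cret" | tr -d '\n' | wc -c
```

Piping through `tr -d '\n'` before `podman secret create` guarantees the stored secret is exactly the password, nothing more.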

Write down the admin password somewhere safe - you’ll need it for first login. After initial setup, you can create additional admin accounts through the Keycloak UI and remove the bootstrap admin.

Step 2: Backend Network

Create /etc/containers/systemd/keycloak-backend.network:

[Network]
NetworkName=keycloak-backend
Subnet=172.16.0.0/24
Gateway=172.16.0.1
IPRange=172.16.0.0/28

A dedicated subnet with an explicit IP range. The /28 in IPRange limits Podman's automatic address assignment to 14 host addresses - more than enough for a database and application container, and it prevents the network from being used as a dumping ground for unrelated containers.

Why not just use the default bridge? Network segmentation is the point. The database should only be reachable by containers that explicitly join this network. Defense-in-depth starts at the network layer.

Step 3: Database Container

Create /etc/containers/systemd/keycloak-db.container:

[Unit]
Description=Keycloak PostgreSQL database

[Container]
ContainerName=keycloak-db
Image=registry.redhat.io/rhel10/postgresql-16:latest
AutoUpdate=registry

# Isolated on backend network only - no frontend access
Network=keycloak-backend.network

# PostgreSQL configuration
Environment=POSTGRESQL_USER=keycloak
Environment=POSTGRESQL_DATABASE=keycloak

# Password injected at runtime from Podman secret store
Secret=keycloak_db_password,type=env,target=POSTGRESQL_PASSWORD

# Persistent storage with SELinux relabeling
Volume=/opt/keycloak/postgres:/var/lib/pgsql/data:z

# Health check - verify PostgreSQL is accepting connections
HealthCmd=/usr/libexec/check-container
HealthInterval=30s
HealthTimeout=5s
HealthRetries=3
HealthStartPeriod=30s

[Service]
Restart=always

[Install]
WantedBy=multi-user.target

Key points:

  • registry.redhat.io/rhel10/postgresql-16: The RHEL-based PostgreSQL image follows Red Hat’s support lifecycle and receives security errata. If you don’t have a Red Hat subscription, docker.io/postgres:16-alpine is a solid alternative - just adjust the environment variables (POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD), the internal data path (/var/lib/postgresql/data instead of /var/lib/pgsql/data), and the health check (pg_isready -U keycloak instead of /usr/libexec/check-container).
  • Single network attachment: The database lives exclusively on the backend network. It has no route to the frontend and cannot be reached from outside the backend subnet.
  • :z volume flag: Tells Podman to relabel the SELinux context for shared container access. Don’t skip this on SELinux-enforcing systems.
  • Health check: The RHEL PostgreSQL image ships with /usr/libexec/check-container. For the upstream image, use pg_isready -U keycloak instead.

Create the data directory and set permissions for the container’s PostgreSQL user (UID 26):

mkdir -p /opt/keycloak/postgres
setfacl -m u:26:rwx /opt/keycloak/postgres

The RHEL PostgreSQL container runs as UID 26 (postgres). Without this ACL, the container can’t initialize the data directory and will fail with permission errors. The :z SELinux flag handles labeling, but filesystem-level ownership is a separate concern - setfacl grants access without changing the directory’s ownership.

Password rotation: Red Hat’s PostgreSQL container resets the database password to match the POSTGRESQL_PASSWORD environment variable on each startup, so rotating the password is straightforward: update the Podman secret and recreate the container. Since Podman injects type=env secrets at container creation time, a simple systemctl restart isn’t enough - you need a fresh container creation to pick up the new secret value:

podman secret rm keycloak_db_password
pwgen -s 32 1 | tr -d '\n' | podman secret create keycloak_db_password -
systemctl stop keycloak-db.service keycloak-server.service
systemctl start keycloak-db.service keycloak-server.service

Step 4: Keycloak Application Container

Create /etc/containers/systemd/keycloak-server.container:

[Unit]
Description=Red Hat Build of Keycloak
After=keycloak-db.service
After=traefik.service
Requires=keycloak-db.service

[Container]
ContainerName=keycloak-server
Image=registry.redhat.io/rhbk/keycloak-rhel9:26.4-12
AutoUpdate=registry

# Dual network: backend for database, frontend for reverse proxy
Network=keycloak-backend.network
# Must match the network your reverse proxy (e.g. Traefik) is on
Network=frontend.network

# Database configuration
Environment=KC_DB=postgres
Environment=KC_DB_URL=jdbc:postgresql://keycloak-db:5432/keycloak
Environment=KC_DB_USERNAME=keycloak

# Proxy configuration - Keycloak runs behind a reverse proxy
Environment=KC_PROXY_HEADERS=xforwarded
Environment=KC_HTTP_ENABLED=true
Environment=KC_HOSTNAME_STRICT=false

# Hostname - set to your actual domain
Environment=KC_HOSTNAME=keycloak.example.com

# Enable health and metrics endpoints
Environment=KC_HEALTH_ENABLED=true
Environment=KC_METRICS_ENABLED=true

# Bootstrap admin user (used for first login only)
Environment=KC_BOOTSTRAP_ADMIN_USERNAME=admin

# Secrets - database and admin passwords
Secret=keycloak_db_password,type=env,target=KC_DB_PASSWORD
Secret=keycloak_admin_password,type=env,target=KC_BOOTSTRAP_ADMIN_PASSWORD

# Persistent storage for themes and providers
Volume=/opt/keycloak/providers:/opt/keycloak/providers:z
Volume=/opt/keycloak/themes:/opt/keycloak/themes:z
Volume=/etc/localtime:/etc/localtime:ro

# Run Keycloak in production mode
Exec=start

# Traefik labels for routing
Label="traefik.enable=true"
Label="traefik.docker.network=frontend"
Label="traefik.http.routers.rhbk.rule=Host(`keycloak.example.com`)"
Label="traefik.http.routers.rhbk.entrypoints=https"
Label="traefik.http.routers.rhbk.service=rhbk-svc"
Label="traefik.http.routers.rhbk.tls.certresolver=traefiktls"
Label="traefik.http.routers.rhbk.middlewares=secure-headers@file"
Label="traefik.http.services.rhbk-svc.loadbalancer.server.port=8080"

[Service]
Restart=always

[Install]
WantedBy=multi-user.target

There’s a lot happening here, so let’s break it down:

Network configuration: The container joins both keycloak-backend.network (to reach PostgreSQL by hostname keycloak-db) and the frontend network where your reverse proxy lives. Replace frontend.network with whatever network your Traefik (or other reverse proxy) container is on - the name must match, or Traefik won’t discover the container. The traefik.docker.network label must also match.

Database connection: Keycloak resolves keycloak-db via Podman’s built-in DNS on the backend network. The JDBC URL points directly to the container name - no IP addresses to manage.

Proxy settings: KC_PROXY_HEADERS=xforwarded tells Keycloak to trust X-Forwarded-* headers from your reverse proxy for correct URL generation, redirect URIs, and HTTPS detection. KC_HTTP_ENABLED=true allows plain HTTP on port 8080 since TLS terminates at the proxy.

KC_HOSTNAME_STRICT=false: This is set to false here to simplify initial setup, but the Keycloak documentation recommends true for production. With hostname-strict disabled, Keycloak will respond on any hostname forwarded by the proxy, which can create security issues if your reverse proxy doesn’t overwrite the Host header. Once you’ve verified that your reverse proxy correctly sets Host and X-Forwarded-* headers to your intended domain, switch this to true. If you’re using Traefik with the Host() rule shown above, Traefik already filters by hostname, so setting KC_HOSTNAME_STRICT=true from the start is safe.

Why start without --optimized? Keycloak 26 has a build phase and a runtime phase. The upstream community image ships pre-built, but the RHBK image does not - it needs to run the build step on first startup. Using start (without --optimized) lets Keycloak handle both phases automatically. This adds a few seconds to each startup, which is negligible for a service that starts once and runs continuously. If startup time matters to you (e.g., in a scaling scenario), you can build a custom image with kc.sh build baked in (see Step 5) and then switch to start --optimized.

Bootstrap admin: KC_BOOTSTRAP_ADMIN_USERNAME and KC_BOOTSTRAP_ADMIN_PASSWORD create an admin account on first startup. This account is meant for initial setup only. After logging in, create a proper admin user in the Keycloak UI and consider removing the bootstrap credentials.

No health check in this Quadlet: The RHBK image is based on UBI 9 Micro, which doesn’t include curl, wget, or any other HTTP client. A HealthCmd that relies on curl will always fail, marking the container as unhealthy - and Traefik ignores unhealthy containers by default. You can define a basic readiness check using Bash’s built-in /dev/tcp to probe the management port:

HealthCmd=bash -c 'echo > /dev/tcp/localhost/9000'
HealthInterval=30s
HealthTimeout=5s
HealthRetries=3
HealthStartPeriod=60s

This confirms the management port is accepting connections, but it’s a TCP check, not a full HTTP readiness probe. It won’t catch cases where Keycloak’s JVM is up but the application hasn’t finished initializing. Use it as a basic liveness signal, not as a gate for traffic routing. We omit it from the main Quadlet above to keep things simple and avoid Traefik interaction issues, but add it if you want systemd-level health visibility via systemctl status.
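The exit-code semantics behind that HealthCmd are easy to check on any machine with Bash - the redirect succeeds only when something is listening on the target port (port 1 on loopback below is just a stand-in for a closed port; against a live Keycloak you'd probe 9000):

```shell
# Bash's /dev/tcp redirect exits 0 only if the TCP connection succeeds.
# Port 1 on 127.0.0.1 is almost certainly closed, so this prints "unhealthy".
bash -c 'echo > /dev/tcp/127.0.0.1/1' 2>/dev/null && echo healthy || echo unhealthy
```

Podman translates that exit code into the container's health state, which is what `systemctl status` and `podman healthcheck run` report.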

Traefik labels: The labels configure Traefik’s dynamic routing. traefik.docker.network must point to the shared frontend network so Traefik routes traffic over the right interface. The service label explicitly names the load balancer backend and maps it to port 8080. If you’re using a different reverse proxy, replace these labels with your proxy’s configuration (see the reverse proxy section below).

Systemd dependencies: Requires=keycloak-db.service ensures PostgreSQL is running before Keycloak attempts to connect - if the database service fails, Keycloak stops too. After=traefik.service ensures the reverse proxy is up before Keycloak starts, so Traefik can discover it immediately.

Create the data directories and set permissions. The RHBK container runs as UID 1000, so the bind-mounted directories need to be accessible to that user - just as we set up UID 26 for PostgreSQL:

mkdir -p /opt/keycloak/{providers,themes}
setfacl -m u:1000:rwx /opt/keycloak/providers /opt/keycloak/themes

Step 5: Build the Optimized Image (Optional)

If you need custom providers, custom themes baked into the image, or non-default database drivers, build a custom image:

FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4-12 AS builder

# Set build-time configuration
ENV KC_DB=postgres
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true

# Add custom providers if needed
# COPY my-provider.jar /opt/keycloak/providers/

RUN /opt/keycloak/bin/kc.sh build

FROM registry.redhat.io/rhbk/keycloak-rhel9:26.4-12

COPY --from=builder /opt/keycloak/ /opt/keycloak/

Build with Podman:

podman build -t localhost/keycloak-custom:26.4 .

Then update the Quadlet to use Image=localhost/keycloak-custom:26.4, switch AutoUpdate=registry to AutoUpdate=local (Podman uses registry for remote refs and local for locally built images), and change Exec=start to Exec=start --optimized since the build step is already baked into the image.
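Concretely, the lines that change in keycloak-server.container would look like this (a sketch, assuming the tag built above):

```
[Container]
# Locally built image instead of the registry image
Image=localhost/keycloak-custom:26.4
# "local" tells Podman to watch the local image store, not the registry
AutoUpdate=local
...
# Skip the build phase - it's already baked into the image
Exec=start --optimized
```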

For most deployments, the default RHBK image with start is sufficient. Use a custom image plus start --optimized when you need faster restarts or want providers and themes baked in rather than bind-mounted.

Step 6: Deployment

Reload systemd to pick up the new Quadlet files:

systemctl daemon-reload

Start the stack:

systemctl enable --now keycloak-db.service
systemctl enable --now keycloak-server.service

Verify everything is running:

# Check container status
systemctl status keycloak-db.service
systemctl status keycloak-server.service

# Watch Keycloak logs for successful startup
journalctl -u keycloak-server.service -f

You should see Keycloak log its startup sequence, database migration (on first run), and eventually:

Keycloak 26.4.10.redhat-00001 on JVM (powered by Quarkus 3.27.2.redhat-00001) started in 32.000s.
Listening on: http://0.0.0.0:8080. Management interface listening on http://0.0.0.0:9000.

Step 7: Reverse Proxy Notes

The Quadlet above includes Traefik labels that handle routing configuration declaratively - no separate Traefik config files needed. If you’re using Traefik, the Quadlet is self-contained.

If you’re using a different reverse proxy, remove the Traefik labels and configure routing manually. Keycloak listens on port 8080 for HTTP. Here are two common alternatives:

Caddy (Caddyfile snippet):

keycloak.example.com {
    reverse_proxy keycloak-server:8080
}

nginx (server block):

server {
    listen 443 ssl;
    server_name keycloak.example.com;

    location / {
        proxy_pass http://keycloak-server:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;
    }
}

The larger proxy buffers for nginx may be necessary - Keycloak’s OIDC responses can include headers that exceed nginx’s conservative defaults. Without sufficient buffer space, you’ll see 502 Bad Gateway errors during authentication flows. The sizes above are generous; adjust downward if your setup works without them.

Both examples above assume the reverse proxy is containerized on the same frontend network, so Podman’s DNS resolves keycloak-server. If your reverse proxy runs directly on the host, you’ll need to publish Keycloak’s port (PublishPort=8080:8080 in the Quadlet) and use proxy_pass http://127.0.0.1:8080 instead.

Step 8: Automatic Updates

Enable Podman’s auto-update timer:

systemctl enable --now podman-auto-update.timer

With AutoUpdate=registry in both Quadlet files, Podman checks daily for new images and recreates containers when updates are available. Volumes persist across recreations, so your data is safe.

For Keycloak specifically, be deliberate about version updates. Pin to a specific tag like 26.4-12 (not latest) in the Quadlet and bump the version manually after reviewing the changelog. Keycloak upgrades can include database migrations that you want to run intentionally, not at 3 AM via an auto-update.

Post-Deployment: First Login

Navigate to https://keycloak.example.com and log in with the bootstrap admin credentials. From here:

  1. Create a new realm for your applications (don’t use the master realm for end users)
  2. Create a permanent admin user within the master realm
  3. Set up your first client - this is the OIDC/SAML application registration
  4. Configure identity providers if you want social login or federation with existing LDAP/AD

Once you have a proper admin account, clean up the bootstrap user:

  1. Delete the bootstrap admin in the Keycloak UI: navigate to the master realm, go to Users, find the admin account, and delete it
  2. Remove the environment variables from the Quadlet: delete the KC_BOOTSTRAP_ADMIN_USERNAME line, and the Secret=keycloak_admin_password line
  3. Restart to apply:
systemctl daemon-reload
systemctl restart keycloak-server.service

Removing the environment variables alone doesn’t delete the user from the database - it only stops Keycloak from trying to create it on boot. Both steps are necessary.

Monitoring

Keycloak 26 exposes Prometheus-compatible metrics on the management port:

# From any container on the same network (the hostname resolves via
# Podman DNS; from the host itself, use the container's IP instead)
curl http://keycloak-server:9000/metrics

Point your Prometheus instance at this endpoint for dashboards covering authentication rates, token issuance, active sessions, and JVM health.
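If Prometheus runs as a container on a shared network, a minimal scrape job might look like this (job name and target are illustrative - adjust to your setup):

```
# prometheus.yml fragment - scrape Keycloak's management port
scrape_configs:
  - job_name: keycloak
    metrics_path: /metrics
    static_configs:
      - targets: ['keycloak-server:9000']
```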

Health endpoints are equally useful for alerting:

# Liveness - is the process alive?
curl http://keycloak-server:9000/health/live

# Readiness - is it ready to handle requests?
curl http://keycloak-server:9000/health/ready

Operational Notes

Backups: The PostgreSQL data lives at /opt/keycloak/postgres. Use pg_dump inside the container for logical backups:

podman exec keycloak-db pg_dump -U keycloak keycloak > keycloak-backup.sql
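
To run that dump on a schedule, a pair of plain systemd units works well - a sketch, assuming the service names used in this guide (keycloak-db-backup is a hypothetical unit name; adjust the target path to taste):

```
# /etc/systemd/system/keycloak-db-backup.service
[Unit]
Description=Keycloak database dump
Requires=keycloak-db.service
After=keycloak-db.service

[Service]
Type=oneshot
# %% is systemd's escape for a literal % in ExecStart
ExecStart=/bin/sh -c 'podman exec keycloak-db pg_dump -U keycloak keycloak > /opt/keycloak/backup-$(date +%%F).sql'

# /etc/systemd/system/keycloak-db-backup.timer
[Unit]
Description=Nightly Keycloak database dump

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now keycloak-db-backup.timer, and remember that logical dumps are only half a backup strategy - test the restore path too.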

Log management: All container output flows to journald. Standard journalctl filtering applies:

# Keycloak errors only
journalctl -u keycloak-server.service -p err

# Database logs since last boot
journalctl -u keycloak-db.service -b

Scaling considerations: This single-host setup handles a surprising number of users. Keycloak’s session cache is local by default. If you eventually need clustering, Keycloak supports Infinispan-based distributed caches - but that’s a different article.

A Note on the Upstream Keycloak Image

The upstream community project publishes images at quay.io/keycloak/keycloak. If you want to use the upstream image instead:

Image=quay.io/keycloak/keycloak:26.0

The configuration is identical - same environment variables, same startup flags. One difference: the upstream image ships pre-built, so you can use start --optimized directly without a custom build step.

The upstream images are community-maintained and don’t carry Red Hat’s security errata or support lifecycle. This guide uses RHBK for its build quality and predictable errata cadence, though as noted above, running RHBK on Podman (rather than OpenShift) is itself not a Red Hat-supported configuration. Choose whichever image fits your operational model.

