I’ve been running Podman in production for years now. Every container on my infrastructure - Forgejo, Traefik, PostgreSQL, Keycloak, CI runners - is managed by Podman on RHEL. Not a single Docker daemon in sight. Regular readers know that my first instinct for isolation is FreeBSD Jails - but when I’m on Linux and dealing with OCI containers, Podman is the tool I reach for. It follows the Unix model more closely than Docker, using smaller composable pieces instead of a single central daemon.
This isn’t a “getting started” tutorial. This is an opinionated production-ops perspective on Linux hosts - not a universal answer for every developer workstation or platform. It’s the article I wish someone had written when I was migrating away from Docker: the architectural reasons I prefer Podman, the practical patterns that make it work in production, and the real gotchas you’ll hit along the way. If you’ve read my earlier Quadlets guide or the Keycloak deployment article, this ties those threads together into a bigger picture.
Why Podman Is Better Than Docker
I’m not going to pretend this is a balanced comparison. On Linux servers, I think Podman is the cleaner architecture, and the reasons are structural rather than cosmetic.
No Daemon, No Single Point of Failure
Docker’s architecture has a trade-off I no longer think is worth accepting on Linux hosts: a privileged central daemon. Every docker run, every docker build, every container lifecycle event goes through dockerd - a process running as root with effectively unrestricted access to your system.
Docker does offer --live-restore to keep containers running during daemon restarts, but it comes with limitations around networking and interactive sessions, and it’s not the default. The fundamental coupling remains: every container’s control path runs through a single privileged process. That’s operational risk and attack surface that adds up.
Podman uses a fork/exec model. Each container is a direct child process of whatever started it - your shell, systemd, a CI runner. There is no central daemon. Containers are independent processes managed by the kernel, the way Unix was designed to work.
Docker:
User → dockerd (root daemon) → containerd → runc → container
       (centralized control, shared fate by default)
Podman:
User → conmon → runc → container
systemd → conmon → runc → container
(no daemon, direct process tree)
I can update Podman on my server without touching running containers. Each container’s lifecycle is independent.
Rootless by Design, Not by Afterthought
Docker bolted on rootless support years after the fact, and it shows. Running Docker rootless still requires a separate dockerd-rootless process, comes with storage-driver compatibility caveats, and needs extra setup for things like binding privileged ports.
Podman was designed rootless from the start. User namespaces, subordinate UID/GID mapping, and unprivileged container execution are core architecture, not a compatibility layer. A regular user can run containers without any elevated privileges:
# As a regular user, no sudo needed
podman run --rm -it docker.io/library/alpine:latest sh
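You can see the remapping directly from inside a rootless container. This is a sketch assuming your user has subordinate UID ranges configured in /etc/subuid (the default on Fedora/RHEL); the exact numbers will differ on your host:

```shell
# Print the user-namespace UID map of a rootless container.
# Each line reads: <container-UID> <host-UID> <range-length>
podman run --rm docker.io/library/alpine:latest cat /proc/self/uid_map
# Typically UID 0 in the container maps to your own host UID, and
# container UIDs 1+ map into your subordinate range from /etc/subuid.
```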
The security implications are significant. A container escape in a rootless Podman setup lands you in an unprivileged user namespace. A compromise involving Docker’s privileged daemon has a much higher potential blast radius on the host than a compromise contained inside a rootless user namespace. That’s not a theoretical distinction - it’s a meaningful difference in your threat model.
SELinux Is a Feature, Not an Obstacle
Docker’s relationship with SELinux has historically been adversarial. Countless Docker tutorials start with setenforce 0 because the daemon and SELinux don’t always cooperate. Podman integrates with SELinux as a first-class security layer.
Every Podman container automatically gets an SELinux label (container_t). Volume mounts use :z and :Z flags to handle context relabeling. The container_runtime_t type exists specifically for containers that need elevated access patterns (like mounting the Podman socket). None of this requires disabling SELinux or writing custom policy modules.
If you’re running RHEL or Fedora with SELinux enforcing (as you should be - see my SELinux guide), Podman just works with it. Docker sometimes fights it.
The OCI Guarantee
Podman is fully OCI-compliant. For most real-world use cases, Docker images, registries, and Dockerfiles work with Podman without modification. You can podman pull from Docker Hub, podman build with a Dockerfile, and podman push to any registry. The images are identical because they follow the same OCI specification.
# These do the same thing
docker pull nginx:latest
podman pull docker.io/library/nginx:latest
# Buildah (Podman's build companion) reads Dockerfiles natively
podman build -t myapp -f Dockerfile .
The migration path from Docker to Podman is a find-and-replace of docker with podman in most cases. Your images, your registries, your CI pipelines - they all work.
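For shell scripts, the mechanical part of that rename can be sketched like this (GNU sed's \b word boundary; always review the resulting diff, since this won't fix Docker-specific socket paths or flags):

```shell
# Create a sample script, then swap the docker command for podman.
# The -i.bak flag keeps the original alongside for review.
printf 'docker pull nginx:latest\ndocker run --rm nginx\n' > demo.sh
sed -i.bak 's/\bdocker\b/podman/g' demo.sh
cat demo.sh
# podman pull nginx:latest
# podman run --rm nginx
```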
Quadlets: Containers as systemd Services
If you’re still writing podman run commands in shell scripts or using Compose files, you’re missing the most compelling feature in Podman’s ecosystem: Quadlets.
Quadlets are systemd unit files that describe containers, networks, volumes, and images. Drop them in /etc/containers/systemd/ (system-wide) or ~/.config/containers/systemd/ (rootless), and systemd manages them like any other service. No daemon, no orchestrator, no YAML parser sitting between you and your containers.
The Basics
A Quadlet file looks like a systemd unit file with a [Container] section:
[Container]
ContainerName=my-app
Image=docker.io/library/nginx:latest
PublishPort=8080:80
Volume=/srv/www:/usr/share/nginx/html:z
[Service]
Restart=always
[Install]
WantedBy=multi-user.target
Save this as /etc/containers/systemd/my-app.container, run systemctl daemon-reload, and you have a container managed by systemd:
systemctl start my-app.service
systemctl status my-app.service
journalctl -u my-app.service -f
No compose file. No daemon. Just systemd doing what it already does well: managing service lifecycles.
Why Quadlets Beat Compose
Docker Compose requires a separate binary, parses YAML, and maintains its own state about which containers belong to which project. It’s a parallel service manager running alongside (or on top of) your actual service manager.
Quadlets eliminate that entire layer:
| Concern | Docker Compose | Podman Quadlets |
|---|---|---|
| Service lifecycle | docker compose up/down | systemctl start/stop |
| Boot startup | Requires daemon + compose plugin | Native systemd WantedBy= |
| Dependencies | depends_on: (limited) | systemd After=, Requires= (battle-tested) |
| Logs | docker compose logs | journalctl -u (integrated with system logging) |
| Resource limits | Compose YAML deploy: section | systemd cgroup directives (kernel-enforced) |
| Update mechanism | Manual docker compose pull && up | podman auto-update with systemd timer |
| Restart policy | Compose-level | systemd's proven restart logic |
The dependency management alone is worth the switch. systemd’s dependency graph has been solving service ordering for over a decade. Compose’s depends_on is limited to basic ordering, and it can’t express dependency chains involving non-container services. systemd’s After= and Requires= handle ordering and hard dependencies natively, and when you need actual readiness (not just “the process started”), Quadlets support health check primitives:
[Container]
HealthCmd=pg_isready -U myuser
HealthInterval=10s
HealthTimeout=5s
HealthRetries=5
HealthStartPeriod=30s
HealthOnFailure=stop
HealthStartPeriod gives the application time to initialize before health checks count. HealthOnFailure=stop tells Podman to stop the container if it fails health checks - and systemd’s Restart=always brings it back. This gets you much closer to real readiness and recovery behavior than simple startup ordering alone.
Real-World Deployment: A Complete Stack
Let me show you what a real Podman deployment looks like. These are based on my production Quadlets, sanitized for opsec but architecturally identical.
Network Topology
First, the network isolation. Every multi-container deployment gets a dedicated backend network:
/etc/containers/systemd/forgejo-backend.network:
[Network]
Subnet=172.16.0.0/24
Gateway=172.16.0.1
IPRange=172.16.0.0/28
The /28 in IPRange limits automatically assigned addresses to 14 hosts. The database and application containers need only two. This prevents the network from becoming a dumping ground for unrelated containers and makes the architecture self-documenting.
The frontend network (where Traefik lives) is separate. Application containers join both; databases join only the backend. The database has no route to the internet, ever.
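One caveat on that last claim: a bridge network defined with a gateway still gives its containers an outbound NAT route. To enforce "no route to the internet" at the network layer rather than by convention, the Quadlet [Network] section supports Internal= - a sketch (note that older Podman releases disabled the built-in DNS on internal networks, so verify behavior on your version):

```ini
# forgejo-backend.network, hardened: Internal=true configures no
# gateway, so containers on this network have no outbound route
# and can only reach each other.
[Network]
Subnet=172.16.0.0/24
Internal=true
```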
Database Container
/etc/containers/systemd/forgejo-db.container:
[Container]
ContainerName=forgejo-db
AutoUpdate=registry
Image=registry.redhat.io/rhel10/postgresql-16:latest
Network=forgejo-backend.network
Environment=POSTGRESQL_USER=forgejo
Environment=POSTGRESQL_DATABASE=forgejo
Secret=forgejo_db_password,type=env,target=POSTGRESQL_PASSWORD
Volume=/opt/forgejo/postgres:/var/lib/pgsql/data:z
[Service]
Restart=always
[Install]
WantedBy=default.target
Note what’s not here: no frontend network, no published ports, no Traefik labels. This container talks to exactly one network and serves exactly one purpose. The password comes from Podman’s secret store (more on that below), not from the unit file.
Application Container
/etc/containers/systemd/forgejo-server.container:
[Container]
ContainerName=forgejo-server
Image=codeberg.org/forgejo/forgejo:14
AutoUpdate=registry
# Internal network for database connection
Network=forgejo-backend.network
# External network with Traefik
Network=frontend.network
Environment=USER_UID=1000
Environment=USER_GID=1000
Environment=FORGEJO__database__DB_TYPE=postgres
Environment=FORGEJO__database__HOST=forgejo-db:5432
Environment=FORGEJO__database__NAME=forgejo
Environment=FORGEJO__database__USER=forgejo
Secret=forgejo_db_password,type=env,target=FORGEJO__database__PASSWD
Volume=/opt/forgejo/forgejo:/data:z
Volume=/etc/timezone:/etc/timezone:ro
Volume=/etc/localtime:/etc/localtime:ro
# Traefik routing labels
Label="traefik.enable=true"
Label="traefik.docker.network=frontend"
Label="traefik.http.routers.forgejo.rule=Host(`git.example.com`)"
Label="traefik.http.routers.forgejo.entrypoints=https"
Label="traefik.http.routers.forgejo.service=forgejo-http"
Label="traefik.http.routers.forgejo.tls.certresolver=traefiktls"
Label="traefik.http.routers.forgejo.middlewares=secure-headers@file"
Label="traefik.http.services.forgejo-http.loadbalancer.server.port=3000"
Label="traefik.tcp.routers.forgejo-ssh.rule=HostSNI(`*`)"
Label="traefik.tcp.routers.forgejo-ssh.entrypoints=ssh"
Label="traefik.tcp.routers.forgejo-ssh.service=forgejo-ssh"
Label="traefik.tcp.services.forgejo-ssh.loadbalancer.server.port=22"
[Service]
Restart=always
[Install]
WantedBy=default.target
[Unit]
After=forgejo-db.service
After=traefik.service
The dual network attachment is the key pattern: backend for the database, frontend for Traefik. Podman’s built-in DNS resolves forgejo-db on the backend network, so the app finds its database by hostname without managing IP addresses.
The After= directives ensure proper startup ordering - systemd won’t start Forgejo until the database and Traefik services have started. Note that After= is ordering, not readiness: it guarantees the services launch in sequence, but not that the database is actually accepting connections yet. For that, combine with the health check primitives shown earlier.
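One way to wire that combination up: recent Podman (4.7 and later) lets a Quadlet tie systemd's startup notification to the container's health check via Notify=healthy. A sketch extending the database unit above (the pg_isready invocation is an assumption about the image's tooling):

```ini
# forgejo-db.container fragment: systemd only considers this unit
# "started" once the health check first reports healthy, so
# After=forgejo-db.service in dependents now implies readiness.
[Container]
Notify=healthy
HealthCmd=pg_isready -U forgejo
HealthInterval=10s
HealthStartPeriod=30s
```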
Reverse Proxy
/etc/containers/systemd/traefik.container:
[Container]
ContainerName=traefik
Image=docker.io/traefik:latest
AutoUpdate=registry
AddCapability=CAP_NET_BIND_SERVICE
Network=frontend.network
PublishPort=80:80
PublishPort=443:443
PublishPort=2222:2222
NoNewPrivileges=true
SecurityLabelType=container_runtime_t
Volume=/etc/localtime:/etc/localtime:ro
Volume=/run/podman/podman.sock:/var/run/docker.sock:ro
Volume=/opt/traefik/traefik.yml:/etc/traefik/traefik.yml:z,ro
Volume=/opt/traefik/config.yml:/etc/traefik/config.yml:z,ro
Volume=/opt/traefik/letsencrypt:/letsencrypt:z
Volume=/var/log/traefik:/var/log/traefik:z
Label=traefik.enable=true
Label=traefik.http.routers.dashboard.rule=Host(`traefik.example.com`)
Label=traefik.http.routers.dashboard.entrypoints=https
Label=traefik.http.routers.dashboard.service=api@internal
Label=traefik.http.routers.dashboard.tls=true
Label=traefik.http.routers.dashboard.tls.certresolver=traefiktls
Label=traefik.http.routers.dashboard.middlewares=dashboard-auth,secure-headers@file
Label=traefik.http.middlewares.dashboard-auth.basicauth.users=admin:$$2y$$05$$...
[Service]
Restart=always
[Install]
WantedBy=default.target
Notice SecurityLabelType=container_runtime_t. Traefik needs to read the Podman socket to discover containers, and the default container_t SELinux type doesn’t permit that. The container_runtime_t type grants the necessary access without disabling SELinux. This is precisely the kind of fine-grained security control that Docker makes difficult and Podman makes natural.
CI Runner
/etc/containers/systemd/forgejo-runner.container:
[Unit]
Description=Forgejo Runner
After=forgejo-server.service network-online.target
[Container]
ContainerName=forgejo-runner
Image=code.forgejo.org/forgejo/runner:12
User=root
NoNewPrivileges=true
Exec=forgejo-runner daemon
Network=forgejo-backend.network
SecurityLabelType=container_runtime_t
Volume=/opt/forgejo/runner:/data:z
Volume=/opt/forgejo/runner/config.yml:/data/config.yml:ro
Volume=/run/podman/podman.sock:/var/run/docker.sock:z
Environment=CONFIG_FILE=/data/config.yml
Environment=DOCKER_HOST=unix:///var/run/docker.sock
[Service]
Restart=always
[Install]
WantedBy=default.target
The CI runner is a container that spawns other containers - it needs the Podman socket. The socket is mounted as /var/run/docker.sock because the runner expects Docker’s socket path. Podman’s Docker-compatible API handles the rest transparently. The runner doesn’t know (or care) that it’s talking to Podman instead of Docker.
Secrets Management
Hardcoding passwords in unit files is a non-starter. Environment variables in Quadlet files are visible in process listings, in systemd’s journal, and in the unit files themselves on disk. Podman’s secret store solves this properly.
Creating Secrets
# Generate a strong password and store it
pwgen -s 32 1 | tr -d '\n' | podman secret create forgejo_db_password -
# Or from an existing file
podman secret create my_api_key /path/to/keyfile
# List stored secrets
podman secret ls
The tr -d '\n' matters. pwgen appends a trailing newline, and podman secret create stores bytes verbatim. Some applications strip trailing whitespace from passwords; others don’t. A PostgreSQL init script might strip it while a JDBC driver sends it as-is, resulting in a password mismatch between two containers reading the exact same secret. Always strip the newline.
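Nothing Podman-specific is needed to see the difference - just count the bytes:

```shell
# pwgen-style output ends with a newline; the stripped version doesn't.
printf 'secret\n' | wc -c               # 7 bytes, newline included
printf 'secret\n' | tr -d '\n' | wc -c  # 6 bytes, stored verbatim
```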
Using Secrets in Quadlets
[Container]
# Inject as environment variable
Secret=forgejo_db_password,type=env,target=FORGEJO__database__PASSWD
# Or mount as file
Secret=my_tls_cert,type=mount,target=/etc/ssl/certs/app.crt
The type=env variant injects the secret as an environment variable at container creation time. The type=mount variant mounts it as a file inside the container, which is preferable for TLS certificates or configuration files that applications expect on disk.
Rotating Secrets
Because type=env secrets are injected at container creation time, a normal restart of the existing container won’t refresh a secret that was injected when the container was created. You need to destroy the old container and create a fresh one:
# Remove the old secret
podman secret rm forgejo_db_password
# Create new secret
pwgen -s 32 1 | tr -d '\n' | podman secret create forgejo_db_password -
# Stop and start (not restart) to force container recreation
systemctl stop forgejo-db.service forgejo-server.service
systemctl start forgejo-db.service forgejo-server.service
The stop/start cycle destroys the old container and creates a new one from the Quadlet definition, picking up the new secret value. Volumes persist across this cycle, so no data is lost.
Automatic Updates
One of Podman’s most underrated features. Enable the systemd timer:
systemctl enable --now podman-auto-update.timer
With AutoUpdate=registry in your Quadlet files, Podman checks daily for new image versions. When a newer image is available at the same tag, Podman pulls it and recreates the container, preserving volumes and configuration.
# See what would be updated without doing it
podman auto-update --dry-run
# Force an update check now
podman auto-update
Two Update Modes
- AutoUpdate=registry: Checks the remote registry for a newer image at the same tag. Use this for images you pull from Docker Hub, Quay, or Red Hat's registry.
- AutoUpdate=local: Checks whether the local image has been rebuilt. Use this for images you build locally with podman build.
A Word of Caution
Auto-updates are excellent for reverse proxies, utility containers, and applications where rolling forward is safe. For databases and stateful services, be more deliberate: pin to a specific version tag and bump manually after reviewing the changelog. A PostgreSQL major version bump via auto-update at 3 AM is not how you want to discover that a migration is required.
# Safe to auto-update (stateless, easily rolled back)
Image=docker.io/traefik:latest
AutoUpdate=registry
# Pin and update manually (stateful, needs migration planning)
Image=registry.redhat.io/rhel10/postgresql-16:1-30
AutoUpdate=registry
Even with a pinned version like 16:1-30, AutoUpdate=registry still checks for image rebuilds at that exact tag (security patches, base image updates). The tag acts as a ceiling for how far the update can go.
Docker Compatibility
Podman goes out of its way to be a drop-in replacement for Docker. This isn't just marketing - the compatibility is real, practical, and covers the areas people worry about most: the API socket, Compose files, and the CLI itself.
The Docker Socket
Podman provides a Docker-compatible API socket via a systemd-managed service:
# Enable the socket (root)
systemctl enable --now podman.socket
# Or rootless
systemctl --user enable --now podman.socket
This creates /run/podman/podman.sock (root) or /run/user/$UID/podman/podman.sock (rootless), exposing a REST API that speaks Docker’s protocol. Any tool expecting a Docker socket - Traefik, Portainer, CI runners, monitoring agents - can connect to this socket and work without modification.
In Quadlet files, mount it where the application expects Docker’s socket:
Volume=/run/podman/podman.sock:/var/run/docker.sock:ro
The application sees /var/run/docker.sock, sends Docker API calls, and Podman handles them. The application never knows the difference. This is how the Traefik and CI runner examples above work - Traefik’s Docker provider discovers containers through this socket, reading their labels for routing configuration, without any Podman-specific configuration.
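A quick way to verify the socket speaks Docker's protocol (assuming podman.socket is enabled as root; the hostname after http:// is an arbitrary placeholder, since the connection goes over the Unix socket):

```shell
# Query the Docker-compatible /version endpoint over the Podman socket.
curl -s --unix-socket /run/podman/podman.sock http://localhost/version
# The JSON response should use Docker API field names, with the
# engine components reporting Podman's version underneath.
```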
Docker Compose Compatibility
Podman provides podman compose as a thin wrapper around an external Compose provider, wiring that provider up to the local Podman socket. The provider can be docker-compose (the standalone Python binary), the Go-based docker compose plugin, or podman-compose. Install a supported provider:
# podman-compose is the most common choice on Fedora/RHEL
dnf install podman-compose
Your existing docker-compose.yml files work as-is:
podman compose up -d
podman compose logs -f
podman compose down
The result is standard Podman containers - you can inspect them with podman ps, check their logs with podman logs, and manage them with all the usual Podman tooling.
That said, I don’t recommend using Compose on Podman in production. It works, and it’s useful for quick local development or migrating an existing Docker Compose setup. But you’re layering a Docker abstraction on top of Podman, bypassing the systemd integration that makes Podman compelling in the first place. Compose manages its own container lifecycle, its own dependency ordering, its own restart logic - duplicating what systemd already does better.
The migration path looks like this:
1. Start with podman compose to verify your existing setup works on Podman
2. Convert each service to a Quadlet file
3. Delete the Compose file
For step 2, podman generate systemd --new --files --name my-container can produce systemd unit files from running containers, but this command is deprecated in Podman 5.x in favor of writing Quadlet files directly. In practice, manually converting Compose services to Quadlets is straightforward - the mapping from YAML keys to Quadlet directives is mostly one-to-one, and you end up with cleaner, more maintainable configuration.
The podman CLI Alias
Many tools and scripts hardcode docker as the command. A simple alias handles this:
# In /etc/profile.d/docker-compat.sh or your shell rc
alias docker=podman
On Fedora and RHEL, the podman-docker package does this system-wide and also creates a Docker-compatible socket symlink:
dnf install podman-docker
This installs a docker wrapper that calls podman, provides /var/run/docker.sock as a symlink to the Podman socket, and makes Docker-expecting tools work without configuration changes. It even suppresses the “Emulate Docker CLI using podman” message that podman normally prints.
Practical Tips
Networking Patterns
Dedicated backend networks for every stack. Don’t share a single network across unrelated services. Each application stack (app + database) gets its own isolated backend. This is defense-in-depth: if one application is compromised, the attacker can’t reach another stack’s database.
# Each stack gets its own backend
/etc/containers/systemd/forgejo-backend.network
/etc/containers/systemd/keycloak-backend.network
/etc/containers/systemd/nextcloud-backend.network
One shared frontend network for the reverse proxy. Traefik (or whatever you use) joins this network, and each application container joins it as a second network. This is the only network with a route to the outside.
DNS resolution is automatic. Containers on the same Podman network resolve each other by container name. No need to manage IP addresses or /etc/hosts entries. forgejo-db:5432 just works from any container on forgejo-backend.network.
Volume and Permission Patterns
Always use :z on SELinux systems. The :z flag relabels the host directory with container_file_t, allowing container access. Without it, SELinux blocks the mount and you get cryptic permission errors.
# Shared volume (multiple containers can access)
Volume=/opt/app/data:/data:z
# Private volume (only this container)
Volume=/opt/app/secrets:/secrets:Z
# Read-only system files don't need relabeling
Volume=/etc/localtime:/etc/localtime:ro
Set filesystem ACLs for container UIDs. Containers often run as non-root UIDs internally: the RHEL PostgreSQL image uses UID 26, and Forgejo uses UID 1000. Without filesystem-level access, the container can't write to its mounted volumes:
mkdir -p /opt/forgejo/postgres
setfacl -m u:26:rwx /opt/forgejo/postgres
This is separate from SELinux labeling - :z handles the MAC context, setfacl handles the DAC permissions. You need both.
Security Hardening
NoNewPrivileges=true on every container that doesn’t explicitly need privilege escalation. This prevents setuid binaries inside the container from gaining elevated privileges:
[Container]
NoNewPrivileges=true
Drop capabilities by default. Podman already drops most capabilities, but you can explicitly add only what’s needed:
# Only grant what the container actually needs
AddCapability=CAP_NET_BIND_SERVICE
Use SecurityLabelType sparingly. Most containers run fine with the default container_t type. Only use container_runtime_t for containers that need to manage other containers (CI runners, monitoring tools that access the Podman socket).
Debugging
When things go wrong, systemd and Podman give you more integrated diagnostic tooling on Linux hosts:
# Service status with recent log lines
systemctl status forgejo-server.service
# Full journal with filtering
journalctl -u forgejo-server.service --since "10 minutes ago"
# Container events
podman events --filter container=forgejo-server
# Inspect networking
podman inspect forgejo-server --format '{{.NetworkSettings.Networks}}'
# Check systemd dependency tree
systemctl list-dependencies forgejo-server.service
# See what Quadlet generated
/usr/libexec/podman/quadlet --dryrun
The last command is especially useful. Quadlet is a generator that produces systemd unit files from your .container files. If a Quadlet isn’t behaving as expected, --dryrun shows you the actual systemd unit that was generated, so you can see exactly what systemd is working with.
Registry Authentication
For private registries (Red Hat’s registry.redhat.io, corporate registries), authenticate once and Podman stores the credentials:
# Interactive login
podman login registry.redhat.io
# Credentials are stored in
# /run/containers/0/auth.json (root)
# $XDG_RUNTIME_DIR/containers/auth.json (rootless)
For automated environments, you can provide credentials via a JSON file:
podman login --authfile /etc/containers/auth.json registry.redhat.io
Automation with Ansible
Once you’re managing more than a handful of hosts, writing Quadlet files by hand stops scaling. The containers.podman Ansible Collection can manage every aspect of Podman - containers, pods, networks, volumes, secrets, registry logins - and recent versions can generate Quadlet files directly. Instead of templating unit files yourself, you declare the desired state in a playbook and the collection handles the rest:
- name: Deploy Forgejo database
containers.podman.podman_container:
name: forgejo-db
image: registry.redhat.io/rhel10/postgresql-16:latest
state: quadlet
quadlet_dir: /etc/containers/systemd
network: forgejo-backend.network
secrets:
- forgejo_db_password,type=env,target=POSTGRESQL_PASSWORD
volumes:
- /opt/forgejo/postgres:/var/lib/pgsql/data:z
env:
POSTGRESQL_USER: forgejo
POSTGRESQL_DATABASE: forgejo
The state: quadlet parameter is the key - it tells the module to generate a Quadlet file in quadlet_dir rather than starting a container directly. This gives you Ansible’s idempotency and inventory management on top of Podman’s systemd integration. If you’re running Podman across a fleet of RHEL hosts, this collection is the missing piece between “it works on one server” and “it works everywhere.”
When Podman Isn’t the Right Choice
I said this article was opinionated, not delusional. There are cases where Docker or Kubernetes makes more sense:
Docker Desktop for local development on macOS/Windows. Podman has podman machine for non-Linux platforms, and it works, but Docker Desktop’s integration with macOS and Windows is more polished. If your developers are on Macs and just need to run containers locally, Docker Desktop is a fine choice. The production host should still be Podman.
Multi-node orchestration. If you need containers spanning multiple hosts with service discovery, rolling updates, and horizontal scaling, that’s Kubernetes territory. Podman is a single-host container runtime. It does that job exceptionally well, but it doesn’t pretend to be an orchestrator.
Ecosystem lock-in. Some CI/CD platforms, development tools, and monitoring solutions have deep Docker-specific integrations that go beyond the API compatibility layer. If your entire toolchain assumes Docker and the compatibility layer doesn’t fully cover your use case, forcing Podman may create more friction than it solves.
For everything else - single-host deployments, homelab infrastructure, edge computing, production services that don’t need orchestration - Podman with Quadlets is the better tool.
The Bigger Picture
Containers are processes. systemd manages processes. Quadlets connect the two. Your containers start at boot, restart on failure, log to journald, respect cgroup limits, and integrate with SELinux - all through mechanisms that have been battle-tested in production Linux systems for over a decade.
Docker popularized containers and deserves credit for that. But the Linux ecosystem has matured - user namespaces, cgroups v2, and systemd’s service management have made the central daemon architecture unnecessary for most Linux server workloads. Podman builds on that maturity rather than working around it.
If you’re starting new container infrastructure on Linux, start with Podman. If you’re running Docker in production, the compatibility layer makes migration surprisingly low-friction, and the architectural improvements make it worthwhile.
References
- Podman Documentation
- Podman Quadlet Documentation
- Podman Auto-Update Documentation
- Podman Secrets Documentation
- Red Hat: From Docker Compose to Podman Quadlets
- containers.podman Ansible Collection - manage Podman and generate Quadlets via Ansible
- Production-Grade Container Deployment with Podman Quadlets - my earlier Quadlet guide
- Keycloak 26 on Podman with Quadlets - practical Quadlet deployment example
- SELinux: A Practical Guide - understanding SELinux in container contexts