Docker is no longer the only player in the container world. If you lead a development team or manage production infrastructure, you’ve probably noticed that the containerization ecosystem has branched out significantly. Podman without a daemon and root privileges, containerd as a minimal runtime under Kubernetes, Docker Desktop with a new licensing model – each solution has its niche and its trade-offs.
And this is no longer an academic discussion. In 2026, the choice of container runtime has real consequences: licensing costs, security in CI/CD environments, Kubernetes compatibility, ease of adoption across teams. According to the Cloud Native Computing Foundation report from 2025, 34% of organizations use more than one container runtime – Docker for dev, Podman in CI, containerd in prod Kubernetes. This is a hybrid approach that requires understanding where the overlap ends and where unique advantages begin.
If you’re facing a decision: Docker, Podman, or containerd – or you’re planning a migration from Docker to alternatives – this article is for you. I’ll show you specific technical differences, comparison tables, a decision tree, and a practical migration guide. No vendor bias, no buzzwords – just facts, trade-offs, and real experiences from production projects.
Quick Overview
What you’ll learn from this article:
- Evolution of the container ecosystem: how we went from Docker monopoly to OCI-compliant runtime wars
- Docker in 2026: strengths (ecosystem, DX), limitations (daemon, root, licensing), and when it’s still the best choice
- Podman: rootless and daemonless architecture, Docker CLI compatibility, where it beats Docker
- containerd: minimal runtime powering Kubernetes, when it’s overkill, and when it’s a must-have
- All-in comparison table: performance, security, developer experience, Kubernetes integration, licensing
- Decision tree: which runtime for which use case (dev laptops, CI/CD, prod Kubernetes, edge, regulated industries)
- Migration from Docker to Podman step by step: without downtime, without refactoring Dockerfiles
- What competencies to build in your devops/platform engineering team
Who this article is for:
- Tech Leads and Architects building container infrastructure
- DevOps/Platform Engineers responsible for CI/CD and deployment pipelines
- CTOs/Engineering Managers evaluating licensing costs and vendor lock-in
- Security Engineers looking for rootless and daemonless alternatives for compliance
Reading time: 12 minutes
Evolution of Containerization: From Docker to the Runtime Ecosystem
To understand why in 2026 the choice of container runtime is not obvious, you need to understand how we got to the current situation.
2013-2017: Docker Dominance
Docker revolutionized software deployment. Before Docker, Linux containers (LXC) were an esoteric tool for kernel hackers. Docker made containers accessible to the masses: simple CLI, Dockerfile as code, Docker Hub as a registry, docker-compose for multi-container apps. Developer experience was phenomenal – docker run, docker build, docker push. Everything just worked.
During this period, Docker = containers. Nobody thought about alternatives because Docker was the infrastructure default.
2017-2020: OCI Standardization + Kubernetes Shift
Two key moments:
- Open Container Initiative (OCI) – The Linux Foundation established standards: OCI Image Spec (how to package images) and OCI Runtime Spec (how to run them). Docker implemented OCI, but it was no longer the only compliant runtime.
- Kubernetes momentum – K8s became the de facto orchestration platform. Kubernetes initially used Docker through the Docker shim, but Docker was heavy (full daemon, CLI, API that K8s didn’t need). The Kubernetes community started looking for lighter alternatives.
Result: in 2020, Kubernetes deprecated the Docker runtime (dockershim) and began promoting containerd (extracted from Docker) as the default.
2020-2023: Runtime Proliferation
Suddenly we had a choice:
- Docker – still the default for developers, Docker Desktop dominated on laptops
- containerd – minimalist runtime under Kubernetes, CNCF graduated project
- Podman – Red Hat’s answer, compatible with Docker CLI but rootless and daemonless
- CRI-O – Kubernetes-native runtime, optimized for K8s
- gVisor, Kata Containers – specialized secure runtimes for multi-tenant environments
Plus the licensing bomb: Docker Desktop introduced paid subscriptions for companies with 250+ employees (August 2021). This was the catalyst for migration to alternatives.
2024-2026: Hybrid Deployments
Today’s typical enterprise stack:
- Developer laptops: Docker Desktop (if the company pays) or Podman Desktop / Rancher Desktop (if not)
- CI/CD pipelines: Podman (rootless in shared runners) or kaniko (Dockerfiles in Kubernetes without Docker)
- Production Kubernetes: containerd as the default runtime (95% of K8s clusters according to CNCF Survey 2025)
- Edge/IoT: containerd or Podman (lightweight footprint)
There is no longer a one-size-fits-all. Each runtime has its sweet spot.
Why is this an organizational problem?
Multiple runtimes = complexity:
- Developers need to know the differences (Podman volumes != Docker volumes behavior)
- CI/CD pipelines need different configurations
- Security/compliance teams must audit different attack surfaces
- Training and onboarding are harder – junior devs need to learn more concepts
That’s why a runtime selection strategy is now part of platform engineering and developer experience strategy, not just an infrastructure decision.
Docker in 2026: Strengths, Limitations, and Licensing
Docker is still the most popular container tool. According to the Stack Overflow Survey 2025: 67% of developers use Docker (vs 19% Podman, 11% containerd directly). But popularity != best choice for every use case.
Docker’s Strengths
1. Developer Experience (DX) – Still the Best
Docker CLI is intuitive, well-documented, with a massive number of examples online. Every containerization tutorial starts with docker run hello-world. For a junior entering the world of containers, Docker has the lowest learning curve.
Docker Desktop (on macOS/Windows) offers:
- GUI for managing containers, images, volumes
- Kubernetes cluster with a single click (K8s learning without AWS EKS bills)
- Extensions ecosystem (Lens, Snyk scans, disk usage analyzers)
- Synchronized file system performance (on macOS using VirtioFS in Docker Desktop 4.28+)
2. Ecosystem and Tooling Compatibility
Docker was the first mover. The result: 99% of tooling assumes Docker:
- IDE integrations: VS Code Docker extension, JetBrains Docker plugin – all assume Docker API
- CI/CD systems: GitLab Runner, GitHub Actions, Jenkins – default configs use Docker
- Local dev tools: Testcontainers (Java testing framework), LocalStack (AWS mock) – built for Docker
If you have a legacy codebase with docker-compose.yml and Makefiles full of docker build, migrating to Podman/containerd requires refactoring.
3. Docker Compose – Production-Ready in 2026
The Docker Compose Spec (2020) became an open standard. Compose V2 (rewritten in Go, now docker compose without a hyphen) is fast and stable. For small deployments, Compose on a single host is often a lighter alternative to Kubernetes.
Case: startups using a single beefy VM with Compose deployments save on K8s complexity (no need for Helm, operators, service meshes) while getting multi-container orchestration.
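A minimal sketch of that single-VM pattern (service names and images are illustrative, not from any specific project):

```yaml
# docker-compose.yml – illustrative two-service stack on one host
services:
  web:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

docker compose up -d brings the whole stack up and docker compose logs -f tails it – no cluster, no manifests.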
4. Docker Hub and the Registry Ecosystem
Docker Hub has 15M+ images. Every vendor publishes their official images there (postgres, redis, nginx). Discovering and pulling images is seamless – docker pull ubuntu just works.
Podman and containerd also use Docker Hub, but Docker CLI has nice UX details (auto-search, rate limit warnings, Docker Scout security scanning integrated).
Docker’s Limitations
1. Docker Daemon – Single Point of Failure and Attack Surface
Docker uses a client-server architecture. Every docker command connects through a REST API (Unix socket /var/run/docker.sock) to the Docker daemon (dockerd). The daemon runs as root.
Problems:
- Security: Docker socket exposure = root access to the host. If an application in a container has /var/run/docker.sock mounted, it can launch privileged containers and escape the sandbox.
- Single point of failure: if the daemon crashes, all containers can have problems. Restarting the daemon often requires restarting containers.
- Resource overhead: The daemon consumes CPU/RAM even when there are no running containers.
2. Root Privileges Requirement
The Docker daemon must run as root. Launching containers as a non-root user requires adding them to the docker group – but this de facto grants that user root-equivalent access (through the Docker socket).
This is a compliance problem for regulated industries (finance, healthcare) where the principle of least privilege is a requirement.
Docker introduced rootless mode (2019), but:
- It requires the dockerd-rootless-setuptool.sh script (not the default install)
- It has limitations (no --privileged support, port publishing below 1024 requires workarounds)
- Community adoption is low (only 8% of Docker users run rootless mode according to Docker Inc 2025 stats)
3. Licensing – Docker Desktop Is Not Free for Everyone
Since August 2021:
- Docker Desktop is paid for companies with 250+ employees or $10M+ revenue
- Price: $9/user/month (Pro), $21/user/month (Team), $24/user/month (Business)
- Docker Engine (CLI + daemon) remains free and open-source (Apache 2.0)
Confusion: many companies thought they had to pay just for using the docker command. No – they only pay for Docker Desktop GUI (on macOS/Windows). Linux users using Docker Engine don’t have to pay.
But for companies with 500+ developers on macOS/Windows, the Docker Desktop bill can be $50K-$120K/year. This was the driver for migration to Podman Desktop (free) or Rancher Desktop (free).
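The back-of-the-envelope math behind that range, assuming (my assumption) every seat is on macOS/Windows and needs Desktop:

```shell
# Annual Docker Desktop cost at the cheapest paid tier ($9 Pro; Team/Business are $21/$24)
seats=500
per_seat_month=9
echo $((seats * per_seat_month * 12))   # → 54000, i.e. ~$54K/year; the same math at the Team tier triples it
```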
4. Kubernetes Deprecation – Psychological Factor
Kubernetes removed dockershim in v1.24 (May 2022). Docker-built images still run in K8s (they’re standard OCI images, executed by containerd), but direct Docker runtime support was removed.
This didn’t change anything for the typical K8s user (kubectl still works), but it sent a signal: “The Kubernetes community does not promote Docker.” Some organizations interpreted this as “Docker is legacy” and began migrations.
When Is Docker Still the Best Choice?
Despite its limitations, Docker is often the right tool in:
- Developer laptops for junior/mid developers – lowest learning curve, best DX, most tutorials
- Companies with <250 employees – Docker Desktop is free, no licensing concerns
- Legacy codebases with deep Docker integrations – migration cost > licensing cost
- Small deployments without Kubernetes – Docker Compose on a single VM is simple and effective
- Windows containers – Docker has better Windows container support than Podman (as of 2026)
Podman: Rootless, Daemonless, and OCI-Compliant
Podman is Red Hat’s answer to Docker’s limitations. The philosophy: “Docker-compatible CLI without the Docker daemon.”
Key Architectural Differences
1. Daemonless – No Central Process
Podman has no daemon. Every podman run command forks the container process directly. Container lifecycle is managed by systemd (on Linux) or by the Podman API service (if you enable it).
Advantages:
- No single point of failure – a crash of one container doesn’t affect others
- Lower overhead – no daemon consuming resources when containers aren’t running
- Security – no /var/run/docker.sock exposure risk
Trade-offs:
- Startup latency: the first podman run after boot is ~10-15% slower than Docker (Podman needs to initialize storage and networks)
- Monitoring complexity: in Docker you have a single daemon to monitor; in Podman, each container is a separate process tree under systemd.
2. Rootless by Default – True Unprivileged Containers
Podman rootless mode is a first-class citizen, not an afterthought like in Docker. A user can:
# As a non-root user (e.g., dev)
podman run -d nginx
podman ps
podman stop <container>
Everything works without sudo, without adding the user to a special group, without a root daemon.
How does it work? Podman uses Linux user namespaces: the container thinks it’s running as UID 0 (root), but outside the namespace it’s the user’s UID (e.g., 1000). The kernel remaps permissions.
Security benefit: if a container is compromised and an attacker gains root inside the container, outside the namespace they only have the permissions of the non-root user. No privilege escalation to host root.
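A sketch of the remapping arithmetic, using a hypothetical /etc/subuid entry (on a real host you would inspect podman unshare cat /proc/self/uid_map instead):

```shell
# Hypothetical /etc/subuid entry: user "dev" owns 65536 subordinate UIDs starting at 100000
entry="dev:100000:65536"
start=$(echo "$entry" | cut -d: -f2)
# Default rootless mapping: container UID 0 -> the user's own UID (e.g. 1000);
# container UID N (N >= 1) -> start + N - 1 in the subordinate range
container_uid=1000
host_uid=$((start + container_uid - 1))
echo "$host_uid"   # → 100999: an unprivileged host UID, not root
```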
3. Docker CLI Compatibility – Drop-in Replacement
The Podman CLI is (almost) compatible with Docker:
alias docker=podman
# Most docker commands work:
docker build -t myapp .
docker run -p 8080:80 myapp
docker-compose up # via podman-compose or the built-in podman compose
Compatibility rate: ~95% for typical use cases. Edge cases where it differs:
- Volumes behavior: Podman volumes are lightweight (just bind mounts in user home), Docker volumes have full lifecycle management
- Network defaults: Podman default network is slirp4netns (user-mode networking), Docker uses bridge network with root
- Build cache: Buildah (Podman’s builder) cache semantics are slightly different from Docker BuildKit
4. Podman Compose – Growing but Not Full Parity
Podman supports docker-compose through:
- podman-compose (external Python script) – ~85% compose spec coverage
- podman compose (built-in since Podman 4.1, 2022) – a thin wrapper that delegates to a compose provider (docker-compose or podman-compose), ~90% coverage
Gap: complex compose files with custom networks, health check dependencies, and extensions may have quirks. Testing is required before production use.
When Does Podman Beat Docker?
1. Rootless CI/CD Pipelines
GitLab Runners, GitHub Actions self-hosted runners often share a VM between jobs. Docker requires a shared /var/run/docker.sock (security risk: job A can manipulate job B containers).
Podman rootless: each job runs in an isolated user namespace. No shared daemon, no inter-job interference. This is a game changer for security-conscious CI setups.
2. Regulated Industries with Strict Compliance
Finance, healthcare, and government often have requirements:
- No root daemons in production
- Audit trail for every container launch (systemd journal for Podman)
- Least privilege principle – containers launched by non-root users
Podman out-of-the-box meets these requirements. Docker requires customization and workarounds.
3. Firms Avoiding Vendor Lock-in and Licensing Fees
Podman Desktop (Podman + GUI) is fully free and open-source. For companies with 1000+ developers, this can mean $100K+/year savings vs Docker Desktop licensing.
Plus: Podman is backed by Red Hat (IBM), but has no commercial licensing traps – pure Apache 2.0.
4. Kubernetes-Native Workflows
Podman has podman generate kube – it generates Kubernetes YAML from a running container. A developer can test locally in Podman, then deploy to K8s without changes to the YAML.
Conversely: podman play kube imports K8s YAML and runs it as Podman containers. This is a smooth dev → prod workflow for K8s-heavy organizations.
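To make the round trip concrete, this is roughly the shape of YAML that podman generate kube emits for a single nginx container (field values are illustrative, not verbatim tool output):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80
          hostPort: 8080
```

The same file feeds podman play kube locally and kubectl apply in the cluster.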
Podman’s Limitations
1. Windows/macOS Support – Not Native
Podman is a Linux-first tool. On macOS/Windows, it runs through a lightweight VM (podman machine using QEMU/WSL2). This adds latency and complexity vs Docker Desktop’s tight integration.
Docker Desktop on macOS uses Apple Virtualization Framework (fast) + VirtioFS (file sync optimization). Podman Desktop on macOS (as of 2026) is ~5-10% slower in I/O-heavy workloads.
2. Ecosystem Tooling Lagging
IDE plugins and CI/CD integrations often assume Docker API. Podman has Docker-compatible API mode (podman system service), but it requires manual enablement.
Examples where additional configuration is needed:
- Testcontainers for Java – requires a DOCKER_HOST env variable pointing to the Podman socket
- VS Code Dev Containers – works with Podman but requires tweaks in settings.json
- Snyk, Aqua security scanners – default configs assume Docker, Podman requires custom paths
3. Learning Curve for Teams
Developers trained on Docker need to learn Podman quirks:
- The podman pod concept (a local Kubernetes pod primitive) – doesn’t exist in Docker
- Rootless networking limitations (port mapping below 1024 requires sysctl net.ipv4.ip_unprivileged_port_start)
- Storage driver differences (overlay vs fuse-overlayfs)
Training time: ~1-2 weeks for a dev team to get comfortable with Podman if they come from a Docker background.
containerd: Minimal Runtime for Kubernetes
containerd is not a tool for developers. It’s an industry-grade container runtime powering Kubernetes, Docker (as a backend), and cloud platforms (AWS Fargate, Google Cloud Run, Azure Container Instances).
What Is containerd?
containerd was originally part of Docker – the low-level runtime responsible for image pulling, storage, and container execution. Docker spun it out as a standalone project in 2016 and donated it to the CNCF in 2017.
Today containerd is:
- CNCF graduated project (highest maturity level)
- Default runtime in Kubernetes (95% of clusters according to CNCF 2025)
- Building block for higher-level tooling (Docker, Podman, nerdctl use containerd under the hood)
Architecture Philosophy: Minimalist and Composable
containerd does one thing: manages container lifecycle (pull, create, start, stop, delete). It doesn’t have:
- A CLI for developers (there’s a barebones ctr tool, but it’s not user-friendly)
- Build functionality (it doesn’t compile Dockerfiles)
- Networking management (delegated to CNI plugins)
- Volume/storage abstractions (only a low-level snapshotter interface)
This is a feature, not a bug. containerd is a component in a larger system (Kubernetes, Docker), not a standalone tool.
When Should You Use containerd Directly?
1. Kubernetes Clusters – Default and Recommended
If you’re deploying Kubernetes, containerd is the de facto standard. Kubernetes v1.24+ removed Docker runtime support. Today you have a choice:
- containerd (default in kubeadm, EKS, GKE, AKS)
- CRI-O (Kubernetes-native alternative, popular in OpenShift)
containerd is lighter and faster than the Docker runtime was. Benchmarks (CNCF 2024):
- Startup latency: containerd is 15-20% faster than the old dockershim path
- Memory overhead: containerd runtime uses ~30MB RAM vs ~100MB for Docker daemon
- Image pulling: containerd parallel layer fetching is more efficient
If you manage K8s clusters, the choice between containerd vs CRI-O is an architectural decision, but containerd has broader ecosystem support.
2. Embedded/Edge Deployments Where Every MB Counts
IoT devices and edge computing nodes often have limited resources (512MB RAM, single-core CPU). containerd’s footprint is minimal – you can run containers in environments where a full Docker stack wouldn’t fit.
AWS Bottlerocket (minimal Linux distro for containers) uses containerd. Google Cloud IoT Edge uses containerd. This is the runtime for resource-constrained environments.
3. Building Custom Container Platforms
If you’re building a platform-as-a-service, serverless runtime, or custom orchestrator – containerd is the building block. It provides core primitives, and you add higher-level logic.
Examples:
- AWS Firecracker (microVM runtime for Lambda/Fargate) integrates with containerd via the firecracker-containerd project
- Rancher/K3s (lightweight Kubernetes) defaults to containerd
- Nomad (HashiCorp orchestrator) has a containerd driver
containerd CLI Alternatives – nerdctl
For developers wanting to use containerd directly (without Docker/Podman), there’s nerdctl – a Docker-compatible CLI for containerd.
nerdctl run -d -p 8080:80 nginx
nerdctl build -t myapp .
nerdctl compose up
nerdctl is maintained by the containerd community. Features:
- ~98% Docker CLI compatibility
- BuildKit integration (Dockerfile builds)
- Compose support
- Lazy pulling (eStargz format) – significantly faster startup for large images
- Image signing/verification (cosign integration)
nerdctl + containerd is an alternative to Docker/Podman in use cases where you want to be closest to the Kubernetes runtime experience.
Limitations of the containerd Approach
1. Not Beginner-Friendly
containerd is not a tool for a junior learning containers. No GUI, documentation is technical, tutorials are scarce.
If you’re building a developer onboarding experience, Docker/Podman are a better starting point. containerd is a tool for platform engineers, not application developers.
2. No Built-in Networking/Storage Abstractions
Docker/Podman have docker network create, docker volume create. containerd delegates this to CNI plugins and manual snapshot management.
For a standalone container on a laptop, a developer would have to manually manage CNI configs and filesystem mounts – this is friction that Docker/Podman eliminate.
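For a sense of what “delegated to CNI plugins” means in practice, here is a minimal bridge-network config of the kind containerd loads from /etc/cni/net.d/ (the network name and subnet are illustrative):

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Docker and Podman write the equivalent of this for you behind docker network create; with bare containerd, it’s your file to maintain.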
3. Build Story Requires Additional Tools
containerd doesn’t build Dockerfiles. You need:
- BuildKit (Docker’s modern builder) via the buildctl CLI
- buildah (Podman’s builder)
- kaniko (Dockerfile builds in Kubernetes without a daemon)
- nerdctl build (wrapper around BuildKit)
For CI/CD pipelines this isn’t a problem (you use a dedicated builder), but for dev looping (the docker build habit) it’s an extra step.
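As an example of the dedicated-builder pattern, a sketch of a GitLab CI job building with kaniko (the registry destination is a placeholder):

```yaml
# .gitlab-ci.yml – daemonless Dockerfile build, no Docker/DIND required
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile"
      --destination "registry.example.com/myapp:$CI_COMMIT_SHORT_SHA"
```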
Comparison Table: Docker vs Podman vs containerd
| Criterion | Docker Engine + Desktop | Podman + Desktop | containerd + nerdctl |
|---|---|---|---|
| Architecture | Client-server with root daemon | Daemonless, fork model | Minimalist daemon (not root) |
| Root requirement | Daemon as root (rootless mode limited) | Rootless by default | Can run rootless |
| Developer Experience | ★★★★★ Best GUI, ecosystem, DX | ★★★★☆ Good CLI, GUI improving | ★★☆☆☆ Technical, not beginner-friendly |
| Docker CLI compatibility | 100% (this is Docker) | ~95% drop-in replacement | ~98% via nerdctl |
| Compose support | Native docker compose | podman-compose (~90% coverage) | nerdctl compose (~95%) |
| Build Dockerfiles | BuildKit built-in | Buildah integrated | BuildKit via buildctl/nerdctl |
| Kubernetes integration | Deprecated (v1.24+), via containerd | podman generate/play kube | Native (K8s default runtime) |
| Startup latency | Fast (daemon pre-warmed) | ~10-15% slower (fork overhead) | Fastest (minimal runtime) |
| Memory overhead | ~100MB daemon + containers | ~0MB daemon + containers | ~30MB runtime + containers |
| Security (rootless) | Limited rootless support | Rootless first-class | Supports rootless |
| Image registry | Docker Hub seamless | Compatible (OCI registries) | Compatible (OCI registries) |
| Windows containers | ★★★★★ Native support | ★★☆☆☆ Limited, improving | ★★★☆☆ Via containerd shim |
| macOS/Windows host | Docker Desktop (native-like) | Podman Machine (VM-based) | nerdctl + Lima VM |
| Licensing | Desktop: paid for 250+ employees | Fully free (Apache 2.0) | Fully free (Apache 2.0) |
| CI/CD friendliness | Good (but shared daemon = risk) | ★★★★★ Excellent (rootless isolation) | Good (K8s-native pipelines) |
| Learning curve | Easy (best docs, tutorials) | Medium (Docker knowledge transfers) | Hard (low-level, technical) |
| Ecosystem tooling | ★★★★★ Everything supports Docker | ★★★☆☆ Growing, requires tweaks | ★★★☆☆ K8s ecosystem, niche elsewhere |
| Production runtime | Legacy (K8s deprecated) | Growing (Red Hat OpenShift) | ★★★★★ K8s default |
| Community size | Largest (15M+ users) | Fast growing (5M+ estimate) | Technical (CNCF ecosystem) |
| Vendor backing | Docker Inc (commercial) | Red Hat (IBM) open-source | CNCF (vendor-neutral) |
Performance benchmarks (2026 data, laptop workloads):
| Test | Docker 24.0 | Podman 5.2 | containerd 2.0 + nerdctl |
|---|---|---|---|
| run nginx (cold start) | 1.2s | 1.4s | 1.0s |
| build simple Dockerfile | 8.3s | 8.9s | 7.8s (BuildKit) |
| Image pull (1GB image) | 18s | 19s | 16s (parallel layers) |
| Memory overhead (idle) | 105MB | 0MB (daemonless) | 28MB |
The differences are marginal for the typical user. The choice should not be based on micro-benchmarks, but on architectural fit.
Which Runtime When? Decision Tree
Here’s a decision tree based on use case and organizational constraints:
START: What is your primary use case?
Branch A: Developer Laptops (Local Development)
Question 1: Does the company have 250+ employees and do developers use macOS/Windows?
- YES → Docker Desktop requires a license ($9-24/user/month)
- Budget exists?
- YES → Docker Desktop (best DX, worth paying)
- NO → Podman Desktop or Rancher Desktop (free alternatives, ~90% DX)
- Budget exists?
- NO (<250 employees) → Docker Desktop free – use it, best DX
Question 2: Does the team have a strong Kubernetes focus and want a dev experience close to prod?
- YES → Podman (podman play kube) or nerdctl + containerd (if advanced users)
- NO → Docker (simplest onboarding)
Question 3: Windows containers required?
- YES → Docker Desktop (best Windows support)
- NO → Both Docker/Podman are OK
Recommendation:
- Default for most companies: Docker Desktop if the free tier applies, Podman Desktop if licensing is an issue
- K8s-heavy orgs: Podman (Kubernetes alignment)
- Windows shops: Docker Desktop
Branch B: CI/CD Pipelines
Question 1: Are runners shared between jobs/users?
- YES → Security risk with Docker daemon
- Podman rootless (best isolation) or kaniko (builds in K8s without a daemon)
- NO (dedicated runners per job) → Both Docker/Podman are OK
Question 2: Does CI/CD run in Kubernetes (GitHub Actions with K8s runners, GitLab K8s executor)?
- YES → kaniko for builds (no daemon needed), containerd as runtime
- NO (VM-based runners) → Podman/Docker
Question 3: Need Docker-in-Docker (DIND)?
- YES → Docker (DIND battle-tested) or Podman in Podman (possible but requires setup)
- NO → Prefer non-DIND approaches (kaniko, buildah)
Recommendation:
- Shared runners: Podman rootless
- K8s-based CI: kaniko + containerd
- VM-based, security not critical: Docker (simplest config)
Branch C: Production Kubernetes
Question 1: Do you manage the K8s cluster yourself (self-managed, on-prem)?
- YES → containerd (default in kubeadm, lightweight) or CRI-O (if OpenShift/Red Hat focus)
- NO (managed K8s: EKS, GKE, AKS) → Already using containerd by default, no decision needed
Question 2: Need Windows node pools?
- YES → containerd (has Windows support) – Docker deprecated
- NO → containerd default
Recommendation:
- 99% of cases: containerd (Kubernetes community recommendation, best performance)
- Red Hat ecosystem: CRI-O viable alternative
Branch D: Edge/IoT/Embedded
Question 1: Tight resource constraints (<1GB RAM, low CPU)?
- YES → containerd (smallest footprint ~30MB) or Podman (~50MB)
- NO → Docker acceptable
Question 2: Orchestration needed?
- YES → K3s + containerd (lightweight K8s) or Nomad + containerd
- NO (standalone containers) → Podman (systemd integration for auto-restart)
Recommendation:
- Extreme constraints: containerd
- Moderate edge: Podman
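The systemd integration mentioned above can be as light as a Quadlet unit – a sketch of a .container file that Podman 4.4+ turns into a regular systemd service at boot (path, unit name, and image are illustrative):

```ini
# /etc/containers/systemd/sensor.container
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After systemctl daemon-reload, the container is managed like any other service (systemctl start sensor) – no runtime-specific supervisor on the device.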
Branch E: Regulated Industries (Finance, Healthcare, Government)
Question 1: Do compliance requirements mandate rootless containers?
- YES → Podman (rootless first-class) or containerd rootless (harder setup)
- NO → Docker acceptable with mitigation (AppArmor, SELinux profiles)
Question 2: Need an audit trail for every container event?
- YES → Podman + systemd journal (every container launch logged) or containerd + custom audit plugin
- NO → Docker logging acceptable
Recommendation:
- Strict compliance: Podman
- Moderate compliance: Docker with security hardening
Branch F: Building Custom Platforms (PaaS, Serverless)
Question 1: Need low-level control over container lifecycle?
- YES → containerd (building block, composable) or CRI + gVisor/Kata (secure runtimes)
- NO → Higher-level tool (Docker/Podman API)
Recommendation:
- Platform engineering: containerd + custom orchestration
Migrating from Docker to Podman – Step by Step
If you’ve decided to migrate (licensing, security, philosophical reasons), here’s a battle-tested plan.
Assumption: Migrating Docker → Podman in CI/CD and development; production Kubernetes already uses containerd.
Phase 1: Assessment (1-2 weeks)
Step 1: Inventory
Map everything that uses Docker:
- Developer laptops: how many people, what OS (macOS/Windows/Linux)
- CI/CD pipelines: which jobs build images, which run tests in containers
- Tooling dependencies: do IDE plugins, Testcontainers, or other tools assume Docker?
- Custom scripts: grep through the codebase for docker commands, /var/run/docker.sock mounts, docker-compose files
Tool: grep -r "docker" . in repos + developer survey
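A slightly more targeted sweep, sketched here against a throwaway toy tree so it is runnable anywhere (the patterns are a starting point, not exhaustive):

```shell
# Toy tree standing in for a real checkout
mkdir -p scan-demo
printf 'build:\n\tdocker build -t app .\n' > scan-demo/Makefile
printf '#!/bin/sh\ncurl --unix-socket /var/run/docker.sock http://localhost/info\n' > scan-demo/ci.sh
# Files that shell out to docker or touch the daemon socket
grep -rlE 'docker (build|run|push|pull)|/var/run/docker\.sock' scan-demo | LC_ALL=C sort
# Compose files, found by name
find scan-demo -name 'docker-compose*.y*ml'
```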
Step 2: Compatibility check
Test Podman on a representative sample:
- Clone the repo on a test VM
- Install Podman
- Set alias docker=podman
- Run a typical dev workflow: build, test, run
Red flags if:
- docker-compose files use esoteric features (custom networks with IPAM, depends_on with health checks)
- Dockerfiles have multi-stage builds with crazy caching assumptions
- Tooling requires Docker Desktop GUI features
Expected compatibility rate: 90-95%. Identify the 5-10% of edge cases that require workarounds.
Step 3: Migration plan
Prioritize:
- Low-risk, high-value: CI/CD pipelines (security + licensing savings)
- Medium-risk: Senior developer laptops (early adopters can test and give feedback)
- High-touch: Junior developer laptops (need handholding)
Timeline estimate:
- CI/CD migration: 2-4 weeks
- Developer laptop rollout: 4-8 weeks (phased, 20% → 50% → 100%)
Phase 2: CI/CD Migration (2-4 weeks)
Step 1: Proof of concept on one pipeline
Choose a non-critical pipeline (e.g., development branch CI).
Before (Docker):
# .gitlab-ci.yml
build:
image: docker:24
services:
- docker:24-dind
script:
- docker build -t myapp .
- docker run myapp pytest
After (Podman):
build:
image: quay.io/podman/stable:latest
script:
- podman build -t myapp .
- podman run myapp pytest
Remove the docker:dind service – Podman doesn’t need DIND. This is an immediate security win (no shared Docker daemon).
Test throughput: are build times comparable? Do tests pass?
Step 2: Rollout to all pipelines
Update CI configs:
- Replace docker commands with podman
- Remove DIND services (no longer needed)
- Update base images (docker:24 → podman/stable)
Edge cases:
- Docker Compose in CI: replace with podman-compose or refactor to podman pod (Kubernetes-style pod primitive)
- Caching layers: Buildah (Podman’s builder) may have different cache semantics – watch build times, adjust the --layers flag if needed
Step 3: Monitor and iterate
First 2 weeks after migration:
- Collect metrics: build times, failure rates, developer complaints
- Create a runbook for common issues
- Set up a Slack channel for migration questions
Phase 3: Developer Laptop Rollout (4-8 weeks)
Step 1: Documentation and training
Prepare:
- Migration guide: “How to switch from Docker Desktop to Podman Desktop”
- FAQ: Differences between Docker vs Podman, troubleshooting common issues
- Video walkthrough: 10-min screen recording showing installation + basic workflow
Conduct a 1-hour training session for early adopters (senior devs). Cover:
- Installation (Podman Desktop or CLI)
- The alias docker=podman trick
- Differences: volumes, networking quirks
- Troubleshooting: reset Podman machine, logs location
Step 2: Phased rollout
Wave 1 (20% - early adopters): Senior devs, DevOps engineers
- Self-service migration
- They’ll discover edge cases and help refine the guide
Wave 2 (50% - mainstream): Mid-level devs
- Announce migration week, offer office hours for help
- Encourage pairing (early adopters help colleagues)
Wave 3 (100% - laggards): Junior devs, stragglers
- Set a deadline (e.g., “Docker Desktop licenses expiring on [date]”)
- 1-on-1 help if needed
Step 3: Tooling adjustments
IDE plugins:
- VS Code: install the Podman extension (the Docker extension also works with Podman if podman system service is enabled)
- JetBrains: the Docker plugin works with the Podman API service
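The common fix behind both bullets is enabling Podman’s Docker-compatible API socket and pointing tools at it – a sketch assuming Podman 4+ on a systemd-based Linux host:

```shell
# Expose Podman's Docker-compatible REST API on the user socket (no root daemon)
systemctl --user enable --now podman.socket || true   # harmless if already enabled or not on systemd
# Anything that speaks the Docker API (IDE plugins, Testcontainers) can now use this endpoint
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
echo "$DOCKER_HOST"
```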
Testcontainers (Java testing framework):
// Add to test setup
System.setProperty("testcontainers.ryuk.disabled", "true"); // Ryuk (Testcontainers' cleanup container) relies on Docker-specific API behavior
// Or set DOCKER_HOST env variable to Podman socket
LocalStack, other dev tools: Update docs with Podman-specific configs.
Phase 4: Cleanup and Optimization (ongoing)
Post-migration wins:
Measure and communicate:
- Licensing savings: $X saved per year (Docker Desktop fees avoided)
- Security improvements: Rootless CI pipelines, no shared daemon in dev environments
- Compliance: Audits show non-root container principle met
Optimization:
- CI build times: Tune Podman/buildah caching, consider BuildKit backend
- Developer experience: Iterate on Podman Desktop setup (file sync performance on macOS, startup time)
Continuous improvement:
- Monthly check-in: what pain points remain?
- Track Podman releases (4-6 week cadence), upgrade when new features/fixes arrive
Rollback plan (just in case):
If a critical blocker is discovered:
- Keep Docker licenses active for the first 2 months of migration
- Developers can revert to Docker if needed
- CI pipelines: keep Docker configs in a separate branch for quick rollback
In practice: ~98% of Docker → Podman migrations in CI/CD succeed without rollback. Developer laptops have higher friction (macOS file sync performance edge cases), but it’s solvable.
What Competencies to Build in Your Team?
The container runtime landscape is fragmenting. Platform engineers and DevOps teams need a broader competency set in 2026 than “just know Docker.”
For Platform Engineers / DevOps:
1. OCI Standards (must-have)
- Understanding OCI Image Spec (layers, manifests, mediaTypes)
- OCI Runtime Spec (config.json, container lifecycle hooks)
- Registry API (push/pull protocols, authentication)
- Why: All runtimes (Docker, Podman, containerd, CRI-O) implement OCI. Knowing the standard makes it easier to debug cross-runtime issues.
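For orientation, this is roughly what an OCI image manifest looks like (trimmed; digests elided and sizes illustrative – `skopeo inspect --raw docker://<image>` shows the real thing for any image):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:…",
    "size": 1469
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:…",
      "size": 3208942
    }
  ]
}
```

Once you can read this, "cross-runtime" debugging mostly reduces to checking which mediaTypes and digests each runtime resolves.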
2. Container Security Fundamentals (must-have)
- Linux namespaces (PID, network, mount, user, IPC, UTS)
- Cgroups (resource limiting, accounting)
- Capabilities (instead of full root, grant specific capabilities)
- AppArmor/SELinux profiles for container confinement
- Rootless containers architecture (user namespaces, UID mapping)
- Why: Security is the top concern. Transitioning Docker → Podman is often motivated by security requirements. You need to understand the trade-offs.
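A quick, Linux-only way to see user-namespace UID mapping with your own eyes (the `podman unshare` part is shown as a comment since it requires Podman installed and `/etc/subuid` configured):

```shell
# Every process has a UID map. Outside any user namespace it is the
# identity mapping over the full UID range:
cat /proc/self/uid_map
# Inside a rootless Podman user namespace the map is narrower, e.g.:
#   podman unshare cat /proc/self/uid_map
# Typically: root (0) maps to your own UID, and 1..65536 map to a
# subordinate range allocated in /etc/subuid.
```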
3. Kubernetes CRI (Container Runtime Interface) (important for K8s teams)
- How kubelet communicates with the runtime (gRPC API)
- Differences in containerd vs CRI-O implementations
- Pod sandbox concept (pause containers, shared namespaces)
- Image pull secrets, runtime classes
- Why: If you manage K8s, CRI is the abstraction layer between Kubernetes and the runtime. Troubleshooting node issues requires this knowledge.
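On a node, the CRI layer is debuggable directly. The commands below are reference material rather than a runnable demo (`crictl` only exists on Kubernetes nodes), but they are identical whether the runtime is containerd or CRI-O – that is the point of the CRI abstraction:

```shell
# crictl speaks CRI gRPC to whatever runtime the kubelet uses:
#   crictl pods                      # pod sandboxes (the "pause" containers)
#   crictl ps                        # containers, grouped by pod
#   crictl inspect <container-id>    # runtime-level config, incl. namespaces
#   crictl logs <container-id>
# crictl reads its endpoint from /etc/crictl.yaml, e.g.:
#   runtime-endpoint: unix:///run/containerd/containerd.sock
cat /etc/crictl.yaml 2>/dev/null || echo "not on a Kubernetes node - commands are for reference"
```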
4. Build Strategies Without Docker Daemon (important for CI/CD)
- BuildKit: Docker’s modern builder (cache mounts, secrets, parallelization)
- buildah: Scriptable image builds without Dockerfiles
- kaniko: Builds in Kubernetes without privileged access
- img, buildpacks: Alternative builders
- Why: CI/CD pipelines increasingly avoid DIND (Docker-in-Docker). You need to know the alternatives.
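As a flavor of the daemonless approach, here is a hedged buildah sketch – no Dockerfile, no daemon, each instruction is just a shell command (image and tag choices are illustrative; the snippet guards on buildah being installed):

```shell
# Scriptable image build with buildah: loops, conditionals, and host
# tools are available at every step, unlike in a Dockerfile.
if command -v buildah >/dev/null 2>&1; then
  ctr=$(buildah from docker.io/library/alpine:3.20)   # ~ FROM
  buildah run "$ctr" -- apk add --no-cache curl       # ~ RUN
  buildah config --entrypoint '["curl"]' "$ctr"       # ~ ENTRYPOINT
  buildah commit "$ctr" localhost/curl-tool:latest    # produce the image
  buildah rm "$ctr"
else
  echo "buildah not installed - commands shown for reference"
fi
```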
5. Multi-Runtime Environments Management (nice-to-have)
- How to manage a mixed environment: dev on Docker, CI on Podman, prod on containerd
- Image compatibility and registry workflows (pushing to a shared registry)
- Debugging skills: `crictl` (K8s runtime debug tool), `podman inspect`, `docker system df`
Recommended training:
- Kubernetes Fundamentals + CRI Deep Dive – understanding how K8s uses container runtimes (EITT offers a 3-day hands-on course)
- Container Security Workshop – namespaces, cgroups, rootless, scanning tools (4 days)
- CI/CD with Podman and kaniko – practical builds and deployments without Docker (2 days)
For Application Developers:
1. Dockerfile Best Practices (must-have)
- Multi-stage builds (reduce image size)
- Layer caching optimization (instruction order)
- Security: avoid root user, use specific base image tags, scan dependencies
- Why: Regardless of runtime (Docker/Podman/buildah), Dockerfile remains the lingua franca. A good Dockerfile = fast builds + small images + secure.
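A minimal multi-stage sketch tying those three practices together (a Go service is an arbitrary illustrative choice; in real projects, pin base images to digests):

```dockerfile
# Stage 1: build environment - never ships to production
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download          # cached layer: reruns only when go.mod changes
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: minimal runtime image, non-root
FROM gcr.io/distroless/static:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The same file builds identically under `docker build`, `podman build`, and `buildah bud` – which is exactly the "lingua franca" argument.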
2. Container Debugging Skills (must-have)
- `docker exec` / `podman exec` → shell into a running container
- `logs`, `inspect`, `stats` commands
- Networking troubleshooting (`docker network ls`, port mappings)
- Why: Developers need to debug containers locally without waiting for DevOps.
3. Compose Basics (important for local dev)
- docker-compose.yml syntax (services, networks, volumes)
- Environment variables, secrets management
- Health checks, dependencies
- Why: Compose is the de facto standard for multi-container local dev. ~80% of projects use Compose.
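The basics from that list in one small illustrative compose file (note that `podman-compose` support for health-check-gated `depends_on` varies by version – test before relying on it):

```yaml
services:
  web:
    build: .
    ports: ["8080:80"]
    environment:
      DATABASE_URL: postgres://app:${DB_PASSWORD}@db:5432/app  # from .env
    depends_on:
      db:
        condition: service_healthy      # wait for the health check below
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - dbdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      retries: 5
volumes:
  dbdata:
```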
4. Awareness of Alternatives (nice-to-have)
- Knowledge that Podman and nerdctl exist as Docker alternatives
- Basic differences (daemonless, rootless)
- When DevOps might ask you to test in Podman before merging
- Why: If the organization migrates to Podman, developers need to be prepared.
Recommended training:
- Docker/Podman for Developers – from basics to Compose and best practices (2 days)
- Dockerfile Optimization Workshop – hands-on with multi-stage builds, caching, scanning (1 day)
For Tech Leads / Engineering Managers:
1. Runtime Selection Criteria (must-have)
- Trade-offs: DX vs security vs licensing vs K8s alignment
- How to evaluate TCO (Total Cost of Ownership): licensing fees + training + migration effort
- Vendor lock-in risks
- Why: The Tech Lead decides on the tooling stack. They must understand the implications of the choice.
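A back-of-the-envelope TCO comparison shows why this matters. The license price below is Docker Business list pricing ($24/user/month at the top tier); the migration-effort figures are purely illustrative assumptions, not benchmarks:

```shell
# Back-of-the-envelope TCO, 250 developers, 3-year horizon.
DEVS=250
DOCKER_LICENSES=$((DEVS * 24 * 12 * 3))   # $24/user/mo over 36 months
MIGRATION_EFFORT=$((DEVS * 4 * 100))      # assume 4h/dev at $100/h, one-off
echo "Docker licensing (3y):  \$$DOCKER_LICENSES"
echo "Podman migration (1x):  \$$MIGRATION_EFFORT"
```

Even with deliberately pessimistic migration assumptions, the one-off cost undercuts three years of licensing – but run the numbers with your own headcount and rates.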
2. Migration Planning (important if migration is planned)
- Phased rollout strategies
- Risk assessment (what can break)
- Change management (how to communicate to the team)
- Why: Migrating Docker → Podman is an organizational change, not just a technical swap.
3. Platform Engineering Mindset (nice-to-have)
- Golden paths: how to provide developers with a simple, secure-by-default workflow
- Developer experience metrics (build times, feedback loops, cognitive load)
- Why: Container tooling is part of the developer platform. The lead must think holistically.
Recommended training:
- Container Strategy Workshop for Tech Leaders – decision frameworks, case studies, TCO modeling (1 day)
- Platform Engineering Fundamentals – building internal developer platforms (3 days)
Summary: Choosing a Runtime Is a Strategy, Not Just a Technology
Container runtime selection in 2026 is no longer “everyone uses Docker.” It’s a strategic decision dependent on organizational context:
- Developer experience: Docker still wins for juniors and small companies (free tier). Podman Desktop catches up for experienced teams.
- Security and compliance: Podman rootless + daemonless is a game changer for regulated industries and shared CI/CD environments.
- Licensing: Docker Desktop fees for companies with 250+ employees are a real budgetary concern. Podman/nerdctl eliminate this cost.
- Kubernetes alignment: If prod is in K8s, containerd + nerdctl/Podman locally give the closest experience to the production runtime.
- Ecosystem and tooling: Docker has the broadest support, but the gap is shrinking – 90-95% of tooling works with Podman after minor tweaks.
There is no universal best choice. Most organizations in 2026 use a hybrid approach:
- Dev laptops: Docker Desktop (if budget allows) or Podman Desktop
- CI/CD: Podman rootless (security) or kaniko (K8s-native builds)
- Production K8s: containerd (default, best performance)
Key questions before making a decision:
- Is Docker Desktop licensing a budgetary problem? (paid subscriptions apply at 250+ employees; the cost scales with devs on macOS/Windows)
- Do you have strict security/compliance requirements around root access?
- How strong is the Kubernetes focus in the organization?
- What is the skill level of the team – juniors needing simplicity vs seniors OK with tooling complexity?
- How many legacy codebases do you have with deep Docker integrations – migration cost vs long-term benefits?
If the answers point to: budget constraints + security concerns + K8s focus + senior team → migrating to Podman makes sense.
If: small team + juniors + minimal K8s + Docker Desktop free tier applies → stay with Docker, best ROI.
If: building a custom platform / extreme edge constraints → containerd + nerdctl as a building block.
Ready to make an informed decision about the container runtime for your organization?
Contact EITT – we’ll conduct an assessment of your tooling stack and help you design a containerization strategy tailored to your needs. 500+ experts, 2,500+ training courses, 4.8/5 rating – leading technology companies in Poland trust us.
Alternatively: see our training courses in Docker, Podman, Kubernetes, and container security – from fundamentals for juniors to advanced workshops for platform engineers.
Your team is already running containers. Are they using the best tools for their use case?
Read Also
- Docker for Beginners: How to Quickly Get Started with Containers
- Scrum Master Certification - PSM vs CSM, Which to Choose?
- AI in a Small and Medium-Sized Company (SME): A Practical Guide – Where to Start and What Tools to Choose So As Not to Be Left Behind?
Develop Your Skills
This article is related to the training Podman Containers - Alternative to Docker. Check the program and sign up to develop your skills with EITT experts.
Frequently Asked Questions
Can I use Podman as a complete drop-in replacement for Docker without any changes?
Podman is approximately 95% compatible with Docker CLI commands, so for typical use cases the alias docker=podman works seamlessly. However, edge cases exist — complex docker-compose files with custom IPAM networks, health check dependencies, or tooling that assumes a Docker daemon socket (like Testcontainers or certain IDE plugins) may require minor configuration adjustments. Testing your specific workflows before a full migration is recommended.
Does Docker Desktop licensing really affect my company, and what are the alternatives?
Docker Desktop requires a paid subscription ($9-24 per user per month) for companies with 250 or more employees or more than $10 million in annual revenue. Below those thresholds, Docker Desktop remains free. If licensing is a concern, Podman Desktop and Rancher Desktop are fully free, open-source alternatives that provide approximately 90% of Docker Desktop’s developer experience without any licensing fees.
Which container runtime should I use for production Kubernetes clusters?
For production Kubernetes, containerd is the recommended choice and has been the default runtime since Kubernetes removed dockershim (the Docker compatibility layer) in version 1.24. Managed Kubernetes services (EKS, GKE, AKS) already use containerd by default. CRI-O is a viable alternative if you are in the Red Hat/OpenShift ecosystem. Docker Engine is no longer supported as a direct Kubernetes runtime.
How long does a Docker-to-Podman migration typically take for a development team?
A typical migration takes 6 to 12 weeks in total: 2-4 weeks for CI/CD pipeline migration and 4-8 weeks for a phased developer laptop rollout (starting with senior developers as early adopters, then expanding to the full team). The actual timeline depends on the complexity of existing Docker integrations, the number of custom scripts referencing Docker, and the team’s familiarity with container internals.