How to Run Claude Code, Codex, and Gemini as Containerized Homelab Services

Containerize AI CLI tools with Docker for remote SSH access and OpenAI-compatible APIs via Traefik. Vendor-independent, reproducible.

Claude Code, Codex, and Gemini CLI are powerful tools — until you realize they only run on the machine where you installed them. No remote access. No API endpoint. No way to query multiple models from the same workflow without juggling terminals and API keys across machines. You are locked to one vendor’s CLI on one laptop, and if you want to switch models or access your agent from another device, you start from scratch.

This post shows how to containerize all three CLIs as homelab services — each with SSH access for interactive use and an OpenAI-compatible API wrapper behind Traefik for programmatic access.

TL;DR

  • Each AI CLI gets two containers: one for interactive SSH access, one for an OpenAI-compatible /v1 API wrapper.
  • Traefik handles HTTPS termination and routing, so each model gets its own subdomain (claude., codex., gemini.).
  • The architecture is vendor-independent — adding a new model means copying the pattern, not rewriting the stack.
  • Everything runs on a single Docker host with Compose profiles, .env files for secrets, and a shared Traefik proxy network.

Why Containerize AI CLIs

The Problem with Local-Only AI Tools

AI CLIs are designed for local development. That creates three problems at homelab scale:

No remote access. You cannot SSH into Claude Code from your phone, a different workstation, or a CI pipeline. The CLI is bound to the terminal where you installed it.

Dependency conflicts. Claude Code needs a recent Node.js runtime. Codex needs its own Node.js version. Gemini needs another. Running all three on the same host means managing version conflicts across runtimes, global packages, and PATH entries.

Single-model lock-in. Each CLI only talks to its own vendor’s API. If you want to compare outputs across Claude, GPT, and Gemini for the same prompt, you need three separate terminal sessions with three separate configurations. There is no unified interface.

What Containerization Gives You

Isolation. Each CLI gets its own filesystem, runtime, and dependencies. No conflicts.

Remote access. SSH into any container from any device. Run Claude Code from your tablet.

API standardization. OpenAI-compatible wrappers give every model the same /v1/chat/completions interface. Your scripts do not care which model is behind the endpoint.
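Because every wrapper speaks the same /v1 surface, one loop can fan a single prompt out to all three models. A sketch, assuming the subdomains from this post and a bearer token if your wrapper enforces one (WRAPPER_API_KEY and "default" as the model name are placeholders — check your wrapper's docs):

```shell
# One payload, three models — the OpenAI-compatible surface is identical.
payload='{"model":"default","messages":[{"role":"user","content":"Summarize RFC 1918 in one sentence."}]}'

for host in claude.example.com codex.example.com gemini.example.com; do
  echo "== ${host} =="
  curl -s "https://${host}/v1/chat/completions" \
    -H "Authorization: Bearer ${WRAPPER_API_KEY:-changeme}" \
    -H "Content-Type: application/json" \
    -d "${payload}" || echo "request to ${host} failed"
done
```

The point is that the loop body never changes — only the hostname does.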

Reproducibility. The Dockerfile is the documentation. Anyone can rebuild the exact same environment.

This is the practical application of the multi-model architecture and vendor independence I have written about before — except now we are building the compute layer.

Architecture Overview

Every AI CLI follows a two-container pattern:

┌─────────────────────────────────────────────────────┐
│                   Docker Host                       │
│                                                     │
│  ┌───────────────┐       ┌──────────────────────┐   │
│  │ claude-code   │       │ claude-code-openai-  │   │
│  │ (SSH :2222)   │       │ wrapper (:8000)      │   │
│  └───────────────┘       └──────────┬───────────┘   │
│                                     │               │
│  ┌───────────────┐       ┌──────────┴───────────┐   │
│  │ codex         │       │ codex-openai-        │   │
│  │ (SSH :2223)   │       │ wrapper (:8787)      │   │
│  └───────────────┘       └──────────┬───────────┘   │
│                                     │               │
│  ┌───────────────┐       ┌──────────┴───────────┐   │
│  │ gemini        │       │ gemini-cli-openai-   │   │
│  │ (SSH :2224)   │       │ wrapper (:80)        │   │
│  └───────────────┘       └──────────┬───────────┘   │
│                                     │               │
│                          ┌──────────┴───────────┐   │
│                          │       Traefik        │   │
│                          │ claude.example.com   │   │
│                          │ codex.example.com    │   │
│                          │ gemini.example.com   │   │
│                          └──────────────────────┘   │
└─────────────────────────────────────────────────────┘

SSH containers expose a port on the host for interactive CLI sessions. They are not on the Traefik network — they are direct port-mapped for SSH access only.

Wrapper containers sit on the shared proxy network. Traefik reads their labels and routes HTTPS traffic to the correct container based on the subdomain.

Prerequisites

Before you start, you need:

  • Docker and Docker Compose installed on your homelab host.
  • Traefik running with a proxy Docker network already created (docker network create proxy). See What Is Traefik? for setup.
  • API keys for each provider: ANTHROPIC_API_KEY, OPENAI_API_KEY, GEMINI_API_KEY.
  • A domain with DNS records pointing claude., codex., and gemini. subdomains to your host.
  • Cloudflare (or another ACME provider) configured in Traefik for automatic TLS certificates.
  • An SSH public key for key-based authentication into the containers.

Store all secrets in a .env file in each service directory. Never commit .env files to version control.
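For example, the claude-code service's .env might look like this (values and the services/ layout are placeholders — the variable names come from the Compose file later in this post):

```shell
# Hypothetical .env for the claude-code service; all values are placeholders.
mkdir -p services/claude-code
cat > services/claude-code/.env <<'EOF'
ANTHROPIC_API_KEY=replace-me
SSH_PUBLIC_KEY="ssh-ed25519 AAAA-replace-me you@laptop"
CLAUDE_CODE_SSH_PORT=2222
EOF
chmod 600 services/claude-code/.env    # owner read/write only
echo 'services/*/.env' >> .gitignore   # keep secrets out of git
```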

Claude Code Container

Claude Code is the reference pattern. The other two CLIs follow the same structure with minor differences.

Dockerfile

FROM debian:bookworm-slim

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    bash ca-certificates curl git less \
    openssh-client openssh-server tini \
  && rm -rf /var/lib/apt/lists/*

RUN useradd --create-home --shell /bin/bash claude

ENV HOME=/home/claude
ENV PATH="${HOME}/.local/bin:${PATH}"

USER claude
COPY --chown=claude:claude install-command.txt /tmp/install-command.txt
RUN bash -lc "$(cat /tmp/install-command.txt)" \
  && rm -f /tmp/install-command.txt \
  && claude --version

USER root
RUN ln -sf /home/claude/.local/bin/claude /usr/local/bin/claude \
  && if ! grep -qxF 'export PATH="$HOME/.local/bin:$PATH"' \
       /home/claude/.bashrc; then \
       echo 'export PATH="$HOME/.local/bin:$PATH"' >> /home/claude/.bashrc; \
     fi

COPY start-sshd.sh /usr/local/bin/start-sshd.sh
RUN chmod 0755 /usr/local/bin/start-sshd.sh

WORKDIR /workspace
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/start-sshd.sh"]

Key decisions:

  • Non-root user. Claude Code installs and runs as the claude user. SSH access is restricted to this user via AllowUsers claude. Root login is disabled.
  • tini as PID 1. Prevents zombie processes. The SSH daemon runs as a child of tini, which properly reaps orphaned processes.
  • install-command.txt contains the install script. This keeps the Dockerfile vendor-neutral — you update the install command without modifying the Dockerfile itself.

SSH Entrypoint

The start-sshd.sh script handles key injection at container startup:

#!/usr/bin/env bash
set -euo pipefail

mkdir -p /var/run/sshd /home/claude/.ssh
touch /home/claude/.ssh/authorized_keys

if [[ -n "${SSH_PUBLIC_KEY:-}" ]]; then
  if ! grep -qxF "${SSH_PUBLIC_KEY}" /home/claude/.ssh/authorized_keys; then
    echo "${SSH_PUBLIC_KEY}" >> /home/claude/.ssh/authorized_keys
  fi
fi

# Tight ownership and permissions so sshd's StrictModes check accepts the key
chown -R claude:claude /home/claude/.ssh
chmod 0700 /home/claude/.ssh
chmod 0600 /home/claude/.ssh/authorized_keys

if [[ "${ENABLE_PASSWORD_AUTH:-false}" == "true" ]]; then
  echo "claude:${SSH_PASSWORD}" | chpasswd
  sed -ri 's/^#?PasswordAuthentication\s+.*/PasswordAuthentication yes/' \
    /etc/ssh/sshd_config
else
  sed -ri 's/^#?PasswordAuthentication\s+.*/PasswordAuthentication no/' \
    /etc/ssh/sshd_config
fi

# Harden: pubkey only, no root login, restrict to claude user
sed -ri 's/^#?PubkeyAuthentication\s+.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
sed -ri 's/^#?PermitRootLogin\s+.*/PermitRootLogin no/' /etc/ssh/sshd_config

ssh-keygen -A
exec /usr/sbin/sshd -D -e

The SSH_PUBLIC_KEY environment variable is injected via the Compose file. The script is idempotent — restarting the container does not duplicate keys.
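The grep -qxF guard is what makes restarts safe; you can verify the behavior in isolation (authorized_keys_demo is a scratch file):

```shell
# Simulate three container restarts appending the same key.
SSH_PUBLIC_KEY='ssh-ed25519 AAAA-demo-key user@laptop'
: > authorized_keys_demo
for restart in 1 2 3; do
  if ! grep -qxF "${SSH_PUBLIC_KEY}" authorized_keys_demo; then
    echo "${SSH_PUBLIC_KEY}" >> authorized_keys_demo
  fi
done
wc -l < authorized_keys_demo   # prints 1 — the key was appended exactly once
```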

Compose File

services:
  claude-code:
    profiles:
      - ssh
    build:
      context: .
      dockerfile: Dockerfile
    container_name: claude-code
    restart: unless-stopped
    stdin_open: true
    tty: true
    working_dir: /workspace
    environment:
      - SSH_PUBLIC_KEY=${SSH_PUBLIC_KEY:-}
      - ENABLE_PASSWORD_AUTH=${ENABLE_PASSWORD_AUTH:-false}
      - SSH_PASSWORD=${SSH_PASSWORD:-}
    ports:
      - ${CLAUDE_CODE_SSH_PORT:-2222}:22
    volumes:
      - ../../:/workspace
      - claude_code_home:/home/claude/.claude

  claude-code-openai-wrapper:
    image: registry.subdepthtech.org/homelab/claude-code-openai-wrapper:latest
    restart: unless-stopped
    env_file:
      - .env
    networks:
      - proxy
    command:
      [
        'poetry',
        'run',
        'uvicorn',
        'src.main:app',
        '--host',
        '0.0.0.0',
        '--port',
        '8000',
      ]
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy
      - traefik.http.routers.claude-wrapper.rule=Host(`claude.subdepthtech.org`)
      - traefik.http.routers.claude-wrapper.entrypoints=https
      - traefik.http.routers.claude-wrapper.tls=true
      - traefik.http.routers.claude-wrapper.tls.certresolver=cloudflare
      - traefik.http.services.claude-wrapper.loadbalancer.server.port=8000

networks:
  proxy:
    external: true

volumes:
  claude_code_home:

The profiles: [ssh] key means the SSH container only starts when you explicitly request it: docker compose --profile ssh up -d. The wrapper container runs by default. This keeps the always-on footprint minimal — you only spin up SSH when you need an interactive session.

Connect with: ssh claude@your-host -p 2222
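If you will be hopping between all three containers, SSH host aliases save typing. A sketch using this post's ports and users (replace your-host with your Docker host's address):

```shell
# Append aliases for the three SSH containers (ports from this post's Compose files).
mkdir -p "${HOME}/.ssh"
cat >> "${HOME}/.ssh/config" <<'EOF'
Host claude-code
    HostName your-host
    Port 2222
    User claude

Host codex
    HostName your-host
    Port 2223
    User root

Host gemini
    HostName your-host
    Port 2224
    User root
EOF
```

After that, ssh claude-code is all you need.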

Codex Container

Codex follows the same two-container pattern. The differences are worth calling out.

Key Differences from Claude Code

Base image: node:20-bookworm-slim instead of debian:bookworm-slim. Codex requires Node.js at build time, so it uses the official Node image directly rather than installing Node separately.

Runs as root. Unlike Claude Code’s dedicated claude user, Codex runs as root inside the container. The Dockerfile enables PermitRootLogin yes in the SSH config. This is a pragmatic choice for a lab — Codex’s install process expects root — but it means the SSH container has higher privilege.

Password auth by default. The Compose file sets SSH_PASSWORD: ${CODEX_SSH_PASSWORD:-codex} — a default password is provided. In production, you should override this or disable password auth entirely.
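One way to override the default is to write a random value into the service's .env before first start (variable name taken from the Compose file; assumes openssl is available on the host):

```shell
# Replace the "codex" default with a random password before first start.
printf 'CODEX_SSH_PASSWORD=%s\n' "$(openssl rand -base64 24)" >> .env
chmod 600 .env
```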

Wrapper port 8787. The upstream OpenAI-compatible wrapper image exposes port 8787 instead of 8000:

services:
  codex-openai-wrapper:
    image: registry.subdepthtech.org/homelab/codex-openai-wrapper:latest
    restart: unless-stopped
    env_file:
      - .env
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.docker.network=proxy
      - traefik.http.routers.codex-wrapper.rule=Host(`codex.subdepthtech.org`)
      - traefik.http.routers.codex-wrapper.entrypoints=https
      - traefik.http.routers.codex-wrapper.tls=true
      - traefik.http.routers.codex-wrapper.tls.certresolver=cloudflare
      - traefik.http.services.codex-wrapper.loadbalancer.server.port=8787

The Traefik labels follow the exact same pattern. Only the router name, hostname, and port change.

Gemini Container

Gemini is the most flexible of the three in its entrypoint design.

Key Differences

Runtime SSH toggle. The Gemini entrypoint script uses a DISABLE_SSH_PASSWORD_AUTH flag that flips at container start. This is the inverse of Claude Code’s ENABLE_PASSWORD_AUTH — password auth is on by default and must be explicitly disabled:

ssh_password="${SSH_PASSWORD:-changeme}"
echo "root:${ssh_password}" | chpasswd

if [[ "${DISABLE_SSH_PASSWORD_AUTH:-false}" == "true" ]]; then
  password_auth="no"
else
  password_auth="yes"
fi

cat > /etc/ssh/sshd_config.d/99-container.conf <<EOF
PermitRootLogin yes
PubkeyAuthentication yes
PasswordAuthentication ${password_auth}
UsePAM no
EOF

This approach writes a drop-in file under sshd_config.d instead of running sed against the main config. It is cleaner and achieves the same result.

Wrapper port 80. The Gemini wrapper listens on port 80 internally:

- traefik.http.services.gemini-wrapper.loadbalancer.server.port=80

Same base image as Codex. Both use node:20-bookworm-slim and run as root.

The Common Pattern

If you want to add a fourth model (say, Llama via Ollama CLI), here is the reusable recipe:

  1. Dockerfile: Start from a slim base, install the CLI, add openssh-server, copy an entrypoint script.
  2. Entrypoint script: Create SSH directories, inject keys from environment variables, configure sshd, exec sshd -D -e.
  3. Compose file: Two services — one with profiles: [ssh] and host port mapping, one on the proxy network with Traefik labels.
  4. Wrapper image: Build or find an OpenAI-compatible wrapper, push it to your private registry.
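The "copy the pattern" step really is mechanical. A sketch using a stand-in directory layout (real repos will differ — the stand-in compose.yaml below only exists to make the rename visible):

```shell
# Stand-in for the existing claude-code service directory.
mkdir -p claude-code
printf 'container_name: claude-code\nrouter: claude.example.com\n' > claude-code/compose.yaml

# New model = copy + rename (service, container, router, subdomain).
cp -r claude-code llama
sed -i 's/claude-code/llama/g; s/claude\./llama./g' llama/compose.yaml
cat llama/compose.yaml   # container_name: llama / router: llama.example.com
```

From there, swap install-command.txt for the new CLI's install command, pick a free SSH port, and bring it up with docker compose --profile ssh up -d --build.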

Comparison Table

                       Claude Code                    Codex                          Gemini
Base image             debian:bookworm-slim           node:20-bookworm-slim          node:20-bookworm-slim
Install method         install-command.txt as shell   install-command.txt as script  install-command.txt as script
SSH user               claude (non-root)              root                           root
PID 1                  tini                           shell entrypoint               shell entrypoint
Password auth default  Disabled                       Enabled (codex)                Enabled (changeme)
Wrapper port           8000                           8787                           80
Subdomain              claude.                        codex.                         gemini.

All wrapper images are stored in a private registry at registry.subdepthtech.org. This keeps your custom images off Docker Hub and lets you control versioning. If you do not run a private registry, you can build the wrapper images locally and reference them by tag.

This pattern connects directly to the broader Docker Swarm cluster architecture — the same containers can be promoted to Swarm services when you need multi-node scheduling. And once the compute layer is running, you can wire in persistent memory via MCP.

Security Considerations

Running AI CLIs in containers is not automatically secure. Here are the things to get right.

Default passwords are dangerous. Both Codex and Gemini ship with default SSH passwords (codex and changeme). If the SSH port is reachable from the network, anyone who knows the default gets a shell with root access. Change them immediately, or better, disable password auth and use key-based authentication only.

Non-root vs root. Claude Code’s approach — a dedicated user with PermitRootLogin no — is the stronger security posture. Codex and Gemini run as root because their install processes expect it. If you have time to harden these, create a non-root user and adjust the entrypoint.

API key management. All three services read API keys from .env files. These files should:

  • Never be committed to version control (add to .gitignore)
  • Have restrictive file permissions (chmod 600)
  • Be rotated periodically

Network isolation. SSH containers should not be on the proxy network. Notice that in the Compose files, only the wrapper services have networks: [proxy]. The SSH containers use host port mapping instead. This means Traefik cannot route to them, and they are not exposed to other containers on the proxy network.

Wrapper auth. The OpenAI-compatible wrappers should require bearer token authentication on every request. Without it, anyone who can reach the Traefik endpoint can make API calls on your account.

Principle of least privilege. Mount only the directories each container needs. The workspace volume (../../:/workspace) gives the container access to the entire homelab repo. If you only need a subset, narrow the mount.

For more on the security trade-offs of running AI agents in your lab, see 10 Lessons from Building an AI Agent Security Lab.

Summary

  • Remote access to AI CLIs — SSH into Claude Code, Codex, or Gemini from any device.
  • Two-container pattern — one container for interactive SSH, one for the OpenAI-compatible API wrapper.
  • Traefik labels handle HTTPS routing and TLS. Each model gets its own subdomain with zero manual certificate management.
  • Security basics matter — disable default passwords, prefer non-root users, isolate SSH containers from the proxy network, and never commit API keys.
  • The architecture is additive — adding a new model means copying the pattern, not rearchitecting the stack.
  • Vendor independence is a security control — if one provider has an outage, rate-limits you, or changes their terms, you switch subdomains and keep working.

What’s Next

The containers in this post handle the compute layer — giving each AI model a place to run and an API to talk to. But compute without memory means every session starts from zero.

The next layer is persistent context. MCP (Model Context Protocol) standardizes how AI models connect to external tools and data sources, so your agents can remember what they did and access your actual infrastructure. Pair that with a concrete implementation like an MCP memory server and these containers go from stateless tools to stateful agents.

Which CLI are you running in your homelab? Have you tried comparing outputs across models for the same prompt? I’d like to hear what patterns you have found.