Secure Remote Docker Deployments with Proton Pass CLI, Docker Contexts, and SSH
Idempotent remote Docker deploys over SSH with Proton Pass CLI secrets, including the security tradeoffs and mitigations that actually matter.
TL;DR
- setup-remote.sh handles one-time bootstrapping: dedicated SSH key, Docker context, remote proxy network, and pass-cli install/auth checks.
- deploy.sh is the repeatable path: preflight checks, pull secrets from pass://homelab/..., materialize temporary env files, then deploy with docker --context.
- The workflow is mostly idempotent, but the current --dry-run path still materializes secrets (and copies some to remote). Treat dry-run as “no compose up,” not “no secret handling.”
Remote Docker deploys tend to go wrong in one of two ways: they end up fragile, or they end up insecure.
The homelab scripts in this repo strike a practical middle ground. They avoid exposing a remote Docker TCP socket, keep secrets out of Git, and make repeated deploys predictable.
The workflow (evidence-based)
1) One-time setup: SSH + context + baseline checks
Run once (or re-run safely):
./scripts/setup-remote.sh user@remote-host
The setup script does seven practical things:
- Creates a dedicated ED25519 key at ~/.ssh/docker_homelab if missing.
- Adds that key to ssh-agent.
- Installs the public key on the remote host.
- Adds an SSH config host alias.
- Verifies remote Docker access.
- Creates Docker context homelab-remote if it does not exist.
- Ensures a remote proxy network and checks pass-cli auth state.
The idempotent context logic is straightforward:
if docker context inspect "$CONTEXT_NAME" >/dev/null 2>&1; then
  log "Docker context '$CONTEXT_NAME' already exists."
else
  docker context create "$CONTEXT_NAME" \
    --docker "host=ssh://${REMOTE_USER}@${REMOTE_HOST}"
fi
That is what you want in a setup script: detect first, then create.
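The same detect-then-create shape covers the remote proxy network. A minimal sketch, assuming the CONTEXT_NAME and log names from the snippet above and a hypothetical NETWORK_NAME variable:

# Sketch: ensure the shared proxy network exists on the remote daemon.
# Detect first, then create; NETWORK_NAME is an assumed variable name.
NETWORK_NAME="proxy"
if docker --context "$CONTEXT_NAME" network inspect "$NETWORK_NAME" >/dev/null 2>&1; then
  log "Remote network '$NETWORK_NAME' already exists."
else
  docker --context "$CONTEXT_NAME" network create "$NETWORK_NAME"
fi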
2) Secrets stay in Proton Pass, not in repo config
deploy.sh pulls secrets from a Proton Pass vault named homelab using pass:// URIs:
pass_read() {
local uri="pass://${VAULT}/$1"
local val
if ! val="$(pass-cli item read "$uri" 2>/dev/null)"; then
err "Failed to read secret: $uri"
return 1
fi
printf '%s' "$val"
}
Per-service secret materialization is explicit (traefik, authelia, paperless-ngx, gitea, pihole-unbound, nebulasync), and services without secrets get an empty .env.
That mapping is opinionated and explicit. Hidden secret conventions are where deployments break.
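As a hedged sketch of what that explicit mapping can look like (the materialize_env helper, secret item paths, and env keys below are illustrative assumptions, not the repo's exact code):

# Sketch: one explicit case per service; anything else gets an empty .env.
materialize_env() {
  local service="$1" env_file="$2"
  case "$service" in
    traefik)
      printf 'CF_DNS_API_TOKEN=%s\n' "$(pass_read traefik/cf-dns-api-token)" > "$env_file"
      ;;
    authelia)
      printf 'AUTHELIA_JWT_SECRET=%s\n' "$(pass_read authelia/jwt-secret)" > "$env_file"
      ;;
    *)
      : > "$env_file"   # services without secrets get an empty env file
      ;;
  esac
  chmod 600 "$env_file"
}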
3) Deployment is context-first and repeatable
Preview:
./scripts/deploy.sh all --dry-run
Deploy everything:
./scripts/deploy.sh all
Deploy one service:
./scripts/deploy.sh traefik
The apply step is the right shape:
docker --context "$CONTEXT_NAME" compose \
  -f "$compose_file" \
  --env-file "$merged_env" \
  up -d --remove-orphans
A nuance worth calling out: this repo standardizes on docker --context NAME compose .... The deployment guide documents this form to avoid unknown flag: --context failures.
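The distinction matters because --context is a flag on the top-level docker CLI, not on the compose subcommand:

docker compose --context homelab-remote up -d   # fails: unknown flag: --context
docker --context homelab-remote compose up -d   # works: the context selects the SSH-backed daemon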
4) Cleanup and ephemeral handling
Local temporary secrets are written into a restricted temporary directory and removed on exit via trap. On Linux, the script prefers /dev/shm (tmpfs).
That is the right direction. It is not perfect, but it is materially better than committing .env files or leaving secrets in long-lived working directories.
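A minimal sketch of that pattern, assuming variable and directory names rather than quoting the script verbatim:

# Sketch: RAM-backed temp dir on Linux, cleaned up on any exit path.
if [ -d /dev/shm ]; then
  SECRET_DIR="$(mktemp -d /dev/shm/deploy.XXXXXX)"
else
  SECRET_DIR="$(mktemp -d)"   # falls back to $TMPDIR (typically disk-backed on macOS)
fi
chmod 700 "$SECRET_DIR"
trap 'rm -rf "$SECRET_DIR"' EXIT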
Lessons learned
- Idempotence is more than “script can run twice.” Repeated runs should converge to the same operational state. The context and network checks do this well.
- Secret retrieval should be centralized and boring. pass_read gives one code path for errors, logging, and URI convention.
- Dry-run semantics need precision. In this implementation, --dry-run still fetches secrets and may SCP secret files for some services before returning.
- Context-based remote deploys are cleaner than shelling into remote hosts. docker --context ... compose ... keeps your deployment interface local and scriptable.
- Security controls are layered, not binary. Dedicated SSH key, Proton vault, temp secret directory, and chmod hardening each reduce risk; none are sufficient alone.
What I’d do differently
- Move the dry-run guard earlier. Right now dry-run happens after secret materialization and some remote copies. I would short-circuit before secret reads and SCP operations (see the sketch after this list).
- Treat SSH host key trust explicitly. ssh-keyscan is convenient, but it is still trust-on-first-use. I would require an out-of-band fingerprint verification step before appending to known_hosts.
- Tighten the remote secret lifecycle. For services that require file-based secrets on remote disk, add post-deploy rotation/cleanup logic or move to runtime secret mounts where possible.
- Make macOS temp handling explicit. The script comment says RAM-backed temp, but on macOS $TMPDIR is not guaranteed equivalent to Linux /dev/shm. I would document that distinction clearly.
- Add a real readiness check. Current health logic checks for “running” containers. I would add service-specific health endpoints for critical apps.
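To illustrate the first point, a hedged sketch of an early short-circuit, placed before any secret reads or remote copies (DRY_RUN, materialize_env, SECRET_DIR, and the compose file path are assumed names carried over from the sketches above):

# Sketch: bail out of a per-service deploy before touching secrets on --dry-run.
deploy_service() {
  local service="$1"
  if [ "${DRY_RUN:-false}" = "true" ]; then
    log "[dry-run] would deploy '$service' (no secrets read, nothing copied)"
    return 0
  fi
  materialize_env "$service" "$SECRET_DIR/$service.env"   # only reached on real runs
  docker --context "$CONTEXT_NAME" compose \
    -f "services/$service/docker-compose.yml" \
    --env-file "$SECRET_DIR/$service.env" \
    up -d --remove-orphans
}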
Security notes
- This workflow avoids exposing Docker over TCP and uses SSH transport for remote daemon access.
- A dedicated deploy key (docker_homelab) is used instead of reusing personal SSH keys.
- Proton Pass vault reads happen at runtime via pass-cli, avoiding hardcoded credentials in scripts.
- Linux deployments prefer tmpfs (/dev/shm) for ephemeral secret files; the cleanup trap removes temporary material.
- Remote secret files for traefik and authelia are copied with restrictive permissions (chmod 600), but they still exist on remote storage.
- The deploy user must reach Docker on the remote host; in practice, Docker group membership is high-trust access and should be treated as privileged.
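A hedged sketch of that remote copy step, assuming scp plus an explicit remote chmod; the SSH_ALIAS variable and destination paths are hypothetical, not the repo's exact commands:

# Sketch: copy a secret file to the remote host, then restrict it to the owner.
copy_secret() {
  local local_file="$1" remote_path="$2"
  scp -q "$local_file" "${SSH_ALIAS}:${remote_path}"
  ssh "$SSH_ALIAS" "chmod 600 '$remote_path'"
}

copy_secret "$SECRET_DIR/traefik.env" "/opt/homelab/traefik/.env"   # illustrative paths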
Final take
If you want remote Docker deploys that are both usable and safer than ad-hoc shell scripts, this pattern is practical:
- SSH transport
- Docker contexts
- Vault-backed secret retrieval
- Idempotent setup checks
The sharp edges are known and fixable. That is what “production-ready” looks like in a homelab: not perfect, but explicit, repeatable, and hardened where it counts.