Secure Remote Docker Deployments with Proton Pass CLI, Docker Contexts, and SSH
Idempotent remote Docker deploys over SSH with Proton Pass CLI secrets, including the security tradeoffs and mitigations that actually matter.
Remote Docker deploys tend to go wrong in one of two ways: fragile or insecure. You either end up with a pile of ad-hoc shell scripts that break on the second run, or you hardcode secrets into `.env` files and commit them to Git because “it’s just a homelab.” The scripts in this repo take a different approach: SSH transport instead of exposed TCP sockets, vault-backed secrets instead of plaintext files, and idempotent setup checks so repeated deploys converge to the same state.
TL;DR
- `setup-remote.sh` handles one-time bootstrapping: dedicated SSH key, Docker context, remote `proxy` network, and `pass-cli` install/auth checks.
- `deploy.sh` is the repeatable path: preflight checks, pull secrets from `pass://homelab/...`, materialize temporary env files, then deploy with `docker --context`.
- The workflow is mostly idempotent, but the current `--dry-run` path still materializes secrets (and copies some to remote). Treat dry-run as “no compose up,” not “no secret handling.”
What is Proton Pass CLI?
Proton Pass CLI (pass-cli) is a command-line tool for reading secrets from Proton Pass vaults. It lets you pull credentials at deploy time using pass:// URIs without storing them in files or environment variables. If you use Proton Pass as your password manager, pass-cli gives your scripts a way to retrieve secrets programmatically while keeping the vault as the single source of truth.
The workflow (evidence-based)
1) One-time setup: SSH + context + baseline checks
Run once (or re-run safely):
```shell
./scripts/setup-remote.sh [email protected]
```
The setup script does seven practical things:
- Creates a dedicated ED25519 key at `~/.ssh/docker_homelab` if missing.
- Adds that key to `ssh-agent`.
- Installs the public key on the remote host.
- Adds an SSH config host alias.
- Verifies remote Docker access.
- Creates Docker context `homelab-remote` if it does not exist.
- Ensures a remote `proxy` network and checks `pass-cli` auth state.
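The key-creation step follows the same detect-first shape as the rest of the script. Here is a sketch using standard OpenSSH flags; `ensure_deploy_key` is a hypothetical name, not the repo's actual function:

```shell
# Create the dedicated deploy key only if it does not already exist,
# so re-running setup never clobbers an installed key.
ensure_deploy_key() {
  key_path="${1:-$HOME/.ssh/docker_homelab}"
  if [ -f "$key_path" ]; then
    echo "Key already exists: $key_path"
  else
    mkdir -p "$(dirname "$key_path")"
    # -N "" = no passphrase (the key is agent-held); -C labels the key's purpose.
    ssh-keygen -t ed25519 -f "$key_path" -N "" -C "docker-homelab-deploy"
  fi
}
```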
The idempotent context logic is straightforward:
```shell
if docker context inspect "$CONTEXT_NAME" >/dev/null 2>&1; then
  log "Docker context '$CONTEXT_NAME' already exists."
else
  docker context create "$CONTEXT_NAME" \
    --docker "host=ssh://${REMOTE_USER}@${REMOTE_HOST}"
fi
```
That is what you want in a setup script: detect first, then create.
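The same detect-then-create pattern extends to the remote `proxy` network the setup script ensures. A sketch, with `ensure_remote_network` as an assumed function name:

```shell
CONTEXT_NAME="${CONTEXT_NAME:-homelab-remote}"
NETWORK_NAME="${NETWORK_NAME:-proxy}"

ensure_remote_network() {
  # Inspect first: if the network already exists on the remote daemon,
  # creating it again would fail, so re-runs just report and move on.
  if docker --context "$CONTEXT_NAME" network inspect "$NETWORK_NAME" >/dev/null 2>&1; then
    echo "Network '$NETWORK_NAME' already exists on context '$CONTEXT_NAME'."
  else
    docker --context "$CONTEXT_NAME" network create "$NETWORK_NAME"
  fi
}
```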
2) Secrets stay in Proton Pass, not in repo config
`deploy.sh` pulls secrets from a Proton Pass vault named `homelab` using `pass://` URIs:
```shell
pass_read() {
  local uri="pass://${VAULT}/$1"
  local val
  if ! val="$(pass-cli item read "$uri" 2>/dev/null)"; then
    err "Failed to read secret: $uri"
    return 1
  fi
  printf '%s' "$val"
}
```
Per-service secret materialization is explicit (`traefik`, `authelia`, `paperless-ngx`, `gitea`, `pihole-unbound`, `nebulasync`), and services without secrets get an empty `.env`.
That mapping is opinionated and explicit. Hidden secret conventions are where deployments break.
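As a sketch of what that materialization can look like, assuming the article's `pass_read` helper is in scope; `write_env` and the key names are hypothetical, not the repo's actual internals:

```shell
# Write one service's env file with restrictive permissions.
# Body runs in a subshell (parentheses) so the umask change stays local.
write_env() (
  svc="$1"; dest="$2"; shift 2
  umask 077                  # new files land as 0600
  : > "$dest"                # services without secrets get an empty env file
  for key in "$@"; do
    val="$(pass_read "${svc}/${key}")" || exit 1
    printf '%s=%s\n' "$key" "$val" >> "$dest"
  done
)
```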
How are you handling secrets in your homelab deploys? If you are still using .env files committed to Git or manually pasting credentials, consider what it would take to move to a vault-backed approach. The upfront cost is real, but the payoff is not having to rotate every secret when you accidentally push to a public repo.
3) Deployment is context-first and repeatable
Preview:

```shell
./scripts/deploy.sh all --dry-run
```

Deploy everything:

```shell
./scripts/deploy.sh all
```

Deploy one service:

```shell
./scripts/deploy.sh traefik
```
The apply step is the right shape:
```shell
docker --context "$CONTEXT_NAME" compose \
  -f "$compose_file" \
  --env-file "$merged_env" \
  up -d --remove-orphans
```
A nuance worth calling out: this repo standardizes on `docker --context NAME compose ...`. This form avoids the `unknown flag: --context` failures that occur with the older standalone `docker-compose` binary.
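A small preflight guard makes that failure mode explicit instead of letting it surface mid-deploy; a sketch, with the helper name assumed:

```shell
# Fail fast if the Compose v2 plugin is missing, since the legacy
# docker-compose binary does not understand the --context flag.
require_compose_v2() {
  if docker compose version >/dev/null 2>&1; then
    return 0
  fi
  echo "docker compose (v2 plugin) not found; 'docker --context ... compose' will not work" >&2
  return 1
}
```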
4) Cleanup and ephemeral handling
Local temporary secrets are written into a restricted temporary directory and removed on exit via `trap`. On Linux, the script prefers `/dev/shm` (tmpfs).

That is the right direction. It is not perfect, but it is materially better than committing `.env` files or leaving secrets in long-lived working directories.
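A minimal sketch of that tmpfs-first pattern with trap-based cleanup (the `SECRET_DIR` variable name is assumed):

```shell
# Prefer /dev/shm (RAM-backed on Linux) when it is a writable directory,
# otherwise fall back to the system temp dir.
if [ -d /dev/shm ] && [ -w /dev/shm ]; then
  SECRET_DIR="$(mktemp -d /dev/shm/deploy.XXXXXX)"
else
  SECRET_DIR="$(mktemp -d "${TMPDIR:-/tmp}/deploy.XXXXXX")"
fi
chmod 700 "$SECRET_DIR"
# Remove all materialized secrets on exit, including interrupted runs.
trap 'rm -rf "$SECRET_DIR"' EXIT INT TERM
```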
Lessons learned
- Idempotence is more than “script can run twice.” Repeated runs should converge to the same operational state. The context and network checks do this well.
- Secret retrieval should be centralized and boring. `pass_read` gives one code path for errors, logging, and URI convention.
- Dry-run semantics need precision. In this implementation, `--dry-run` still fetches secrets and may SCP secret files for some services before returning.
- Context-based remote deploys are cleaner than shelling into remote hosts. `docker --context ... compose ...` keeps your deployment interface local and scriptable.
- Security controls are layered, not binary. A dedicated SSH key, the Proton vault, the temp secret directory, and `chmod` hardening each reduce risk; none is sufficient alone.
What I’d do differently
- Move the dry-run guard earlier. Right now dry-run happens after secret materialization and some remote copies. I would short-circuit before secret reads and SCP operations.
- Treat SSH host key trust explicitly. `ssh-keyscan` is convenient, but it is still trust-on-first-use. I would require an out-of-band fingerprint verification step before appending to `known_hosts`.
- Tighten the remote secret lifecycle. For services that require file-based secrets on remote disk, add post-deploy rotation/cleanup logic or move to runtime secret mounts where possible.
- Make macOS temp handling explicit. The script comment says RAM-backed temp, but on macOS `$TMPDIR` is not guaranteed to be equivalent to Linux `/dev/shm`. I would document that distinction clearly.
- Add a real readiness check. The current health logic checks for “running” containers. I would add service-specific health endpoint checks for critical apps.
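The first item above can be sketched as a guard at the top of the per-service deploy path; `deploy_service`, `materialize_secrets`, and the compose file layout are assumed names here, not the repo's actual internals:

```shell
# Short-circuit dry-run before any secret reads or remote copies,
# so --dry-run really means "no secret handling" as well as "no compose up".
deploy_service() {
  svc="$1"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "[dry-run] would deploy $svc (no secrets read, nothing copied)"
    return 0
  fi
  materialize_secrets "$svc"   # only reached on a real deploy
  docker --context "${CONTEXT_NAME:-homelab-remote}" compose \
    -f "compose/$svc/docker-compose.yml" up -d --remove-orphans
}
```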
Security notes
- This workflow avoids exposing Docker over TCP and uses SSH transport for remote daemon access.
- A dedicated deploy key (`docker_homelab`) is used instead of reusing personal SSH keys.
- Proton Pass vault reads happen at runtime via `pass-cli`, avoiding hardcoded credentials in scripts.
- Linux deployments prefer tmpfs (`/dev/shm`) for ephemeral secret files; the cleanup trap removes temporary material.
- Remote secret files for `traefik` and `authelia` are copied with restrictive permissions (`chmod 600`), but they still exist on remote storage.
- The deploy user must reach Docker on the remote host; in practice, Docker group membership is high-trust access and should be treated as privileged.
Final take
If you want remote Docker deploys that are both usable and safer than ad-hoc shell scripts, this pattern is practical:
- SSH transport
- Docker contexts
- Vault-backed secret retrieval
- Idempotent setup checks
The sharp edges are known and fixable. That is what “production-ready” looks like in a homelab: not perfect, but explicit, repeatable, and hardened where it counts.
What’s Your Deploy Workflow?
If you have built a remote Docker deployment pipeline — whether with Proton Pass, Bitwarden CLI, SOPS, or something else entirely — I would like to hear how it compares. What vault or secret management approach are you using? Have you solved the dry-run secret leakage problem cleanly? Share your approach or ask questions about this setup.