Durable AI-Agent Memory in a Homelab Repo with MCP Setup/Check Scripts

Use setup/check scripts and a Dockerized MCP memory server to keep agent context durable while avoiding secret leakage into repo memory.


TL;DR

  • This repo now uses a global Docker MCP gateway (MCP_DOCKER) instead of repo-local MCP server config files.
  • scripts/mcp-setup and scripts/mcp-check enforce that repo-local MCP files are absent and global MCP_DOCKER is present.
  • Memory persistence is handled by the Docker MCP memory server's state, while repo isolation is enforced by naming (Repo:subdepthtech.com ...).
  • The contract in .agents/agents/AGENTS.md is the source of truth for safe memory usage and prefix rules.

Evidence used

This post is grounded in:

  • scripts/mcp-setup
  • scripts/mcp-check
  • .agents/agents/AGENTS.md
  • README.md
  • CLAUDE.md
  • .gitignore

The contract: memory is operational, not optional

The contract in .agents/agents/AGENTS.md is explicit:

  • Start of task: query memory for repo/component context.
  • During task: record key assumptions and decisions.
  • End of task: write a session summary.

It also explicitly bans secrets and PII from memory storage. That policy is the part many teams skip.
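
A concrete illustration of the prefix rule helps. The following shell sketch is not part of the repo; the wrapper function and its name are hypothetical, but it shows the kind of pre-write check the contract implies:

  # Minimal sketch: reject memory writes whose entity name lacks the repo prefix.
  # Assumption: entity names are plain strings passed through a wrapper script.
  REPO_PREFIX="Repo:subdepthtech.com"

  require_repo_prefix() {
    case "$1" in
      "$REPO_PREFIX"*) return 0 ;;
      *) echo "memory write rejected: name must start with $REPO_PREFIX" >&2
         return 1 ;;
    esac
  }

  require_repo_prefix "Repo:subdepthtech.com decision: use the global MCP_DOCKER gateway"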

How persistence is wired

At runtime, this repo relies on global MCP configuration:

  • Codex CLI global config defines MCP_DOCKER (docker mcp gateway run)
  • Gemini global config defines MCP_DOCKER (docker mcp gateway run)
  • Repo-local MCP config files are intentionally absent from this repository
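
In practice, the gateway is registered once per client rather than per repo. For Codex, a one-time registration looks roughly like this (the exact subcommand syntax is an assumption about the current Codex CLI; Gemini's global settings are configured analogously):

  # Register the global gateway with the Codex CLI (writes to the global config).
  codex mcp add MCP_DOCKER -- docker mcp gateway run

  # Confirm the client sees it.
  codex mcp list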

In the Docker MCP catalog, the memory server is backed by Docker-managed state. Operationally, that means persistence comes from the Docker MCP runtime's storage, not from a repo-bound MEMORY_FILE_PATH.
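
If you want to see where that state lives, inspecting Docker volumes is one way to look; the volume name below is a placeholder, since naming depends on the Docker MCP Toolkit version:

  # List volumes, then inspect the one backing the memory server.
  # <memory-volume-name> is a placeholder; find the real name in the listing.
  docker volume ls
  docker volume inspect <memory-volume-name>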

Setup + validation workflow

Initialize/verify local contract and tooling:

./scripts/mcp-setup

Then validate:

./scripts/mcp-check

What mcp-check verifies:

  • repo-local MCP config files do not exist (.mcp.json, .codex/config.toml, .gemini/settings.json)
  • global MCP_DOCKER is configured
  • codex mcp list includes MCP_DOCKER and excludes any standalone local memory server
  • canonical contract exists and includes Repo:subdepthtech.com scope rules
  • memory artifacts are gitignored
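
A simplified sketch of those checks (not the actual scripts/mcp-check, which does more) could look like this:

  #!/usr/bin/env bash
  # Simplified sketch of mcp-check-style validation; the real script differs.
  set -euo pipefail
  fail=0

  # 1. Repo-local MCP config files must be absent.
  for f in .mcp.json .codex/config.toml .gemini/settings.json; do
    if [ -e "$f" ]; then
      echo "FAIL: repo-local MCP config present: $f" >&2
      fail=1
    fi
  done

  # 2. The global gateway must be registered with the Codex client.
  if ! codex mcp list | grep -q 'MCP_DOCKER'; then
    echo "FAIL: MCP_DOCKER missing from codex mcp list" >&2
    fail=1
  fi

  # 3. Memory artifacts must be ignored by git.
  if ! git check-ignore -q .agents/mcp/memory/example.jsonl; then
    echo "FAIL: memory artifacts are not gitignored" >&2
    fail=1
  fi

  exit "$fail"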

Why this pattern is practical

  1. It removes per-repo MCP drift. With one global gateway, you avoid hand-maintaining three local client config files in every repo.

  2. It keeps one contract across multiple clients. .agents/agents/AGENTS.md defines one shared policy and prefix system for Claude, Codex, and Gemini.

  3. It still supports repo-level isolation. Prefix scoping (Repo:subdepthtech.com) keeps memory queries and writes scoped by convention (an audit example follows this list).

  4. It remains security-aware by default. Memory instructions explicitly prohibit credentials and PII.
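
On the read side, the same prefix makes exported memory auditable. The snippet below assumes a JSONL export where each line carries a name field; the file name and field layout are assumptions about the export shape:

  # Surface entries that fall outside this repo's scope; any output is
  # potential cross-repo contamination. memory-export.jsonl is hypothetical.
  grep -vE '"name": ?"Repo:subdepthtech\.com' memory-export.jsonl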

Lessons learned

  1. Validation matters as much as configuration. mcp-check catches mismatches between intent and runtime.
  2. One global gateway plus repo naming discipline is usually simpler than per-repo server config.
  3. Memory safety is a policy problem first; tooling enforces policy, it does not replace it.
  4. Reducing moving parts improves repeatability in multi-agent workflows.

What I’d do differently

  1. Add a CI check that fails if repo-local MCP files are reintroduced.
  2. Add machine-readable output to mcp-check so external tooling can gate on specific failures.
  3. Enforce Repo: prefix usage with a lightweight policy check before memory writes.
  4. Add optional secret-pattern linting around memory write operations (sketched after this list).
  5. Document fallback steps when Docker MCP gateway is unavailable.
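
A pre-write secret lint (item 4 above) could be as small as a pattern match. The function below is a hypothetical sketch; the patterns are illustrative, not exhaustive:

  # Hypothetical pre-write lint: refuse payloads that look like secrets.
  lint_memory_payload() {
    if grep -qE 'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----|ghp_[A-Za-z0-9]{36}' <<< "$1"; then
      echo "refusing memory write: payload matches a secret pattern" >&2
      return 1
    fi
  }

  lint_memory_payload "decision: rotate the reverse proxy config" && echo "ok"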

Security notes

  • Treat memory as sensitive operational metadata even without explicit credentials.
  • Follow .agents/agents/AGENTS.md: never store secrets, tokens, credentials, private keys, or PII.
  • Keep .agents/mcp/memory/ and *.jsonl gitignored to reduce accidental commits (see the guard snippet after this list).
  • Scope all memory reads/writes to Repo:subdepthtech.com to reduce cross-repo contamination.
  • If multiple agents write concurrently, use append discipline and operational checks to avoid corruption.
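
For the gitignore point above, an idempotent guard keeps those entries in place; adjust the patterns to match what the repo's .gitignore actually uses:

  # Idempotent guard: ensure memory artifacts stay ignored.
  grep -qxF '.agents/mcp/memory/' .gitignore || echo '.agents/mcp/memory/' >> .gitignore
  grep -qxF '*.jsonl' .gitignore || echo '*.jsonl' >> .gitignore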