Durable AI-Agent Memory in a Homelab Repo with MCP Setup/Check Scripts
Use setup/check scripts and a Dockerized MCP memory server to keep agent context durable while avoiding secret leakage into repo memory.
TL;DR
- This repo now uses the global Docker MCP gateway (`MCP_DOCKER`) instead of repo-local MCP server config files. `scripts/mcp-setup` and `scripts/mcp-check` enforce that repo-local MCP files are absent and the global `MCP_DOCKER` entry is present.
- Memory persistence is handled by Docker MCP memory server state, while repo isolation is enforced by naming (`Repo:subdepthtech.com ...`).
- The contract in `.agents/agents/AGENTS.md` is the source of truth for safe memory usage and prefix rules.
Evidence used
This post is grounded in:
- `scripts/mcp-setup`
- `scripts/mcp-check`
- `.agents/agents/AGENTS.md`
- `README.md`
- `CLAUDE.md`
- `.gitignore`
The contract: memory is operational, not optional
The contract in `.agents/agents/AGENTS.md` is explicit:
- Start of task: query memory for repo/component context.
- During task: record key assumptions and decisions.
- End of task: write a session summary.
It also explicitly bans secrets and PII from memory storage. That policy is the part many teams skip.
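The lifecycle above can be sketched in a few lines of shell. This is an illustrative sketch only: `mem_write` and the JSONL storage shape are hypothetical stand-ins for the MCP memory server's actual tool calls, not the repo's real tooling.

```shell
#!/usr/bin/env sh
# Illustrative sketch of the contract lifecycle; mem_write and the JSONL
# shape are hypothetical stand-ins for real MCP memory tool calls.
cd "$(mktemp -d)"
PREFIX='Repo:subdepthtech.com'

mem_write() {
  # Append one scoped observation as a JSONL line (illustrative storage shape).
  printf '{"scope":"%s","note":"%s"}\n' "$PREFIX" "$1" >> memory.jsonl
}

# Start of task: query memory for existing repo context.
grep -c "$PREFIX" memory.jsonl 2>/dev/null || echo "no prior context"

# During task: record a key decision.
mem_write "decision: route all MCP traffic through the global gateway"

# End of task: write a session summary.
mem_write "session summary: validated global MCP_DOCKER configuration"
```

The point is the shape of the discipline, not the storage format: every read and write carries the repo prefix, and every task ends with a summary entry.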
How persistence is wired
At runtime, this repo relies on global MCP configuration:
- Codex CLI global config defines `MCP_DOCKER` (`docker mcp gateway run`)
- Gemini global config defines `MCP_DOCKER` (`docker mcp gateway run`)
- Repo-local MCP files are intentionally not used in this repository
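For reference, a hedged sketch of what such a global entry can look like in Codex CLI's `~/.codex/config.toml`; verify the exact schema against your installed Codex CLI version, and note that the Gemini equivalent lives in its own global settings file:

```toml
# Hypothetical global Codex CLI entry (~/.codex/config.toml); check the
# mcp_servers schema against your Codex CLI version.
[mcp_servers.MCP_DOCKER]
command = "docker"
args = ["mcp", "gateway", "run"]
```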
In the Docker MCP catalog, the memory server is backed by Docker-managed state. Operationally, that means persistence comes from Docker MCP runtime/storage, not from a repo-bound `MEMORY_FILE_PATH`.
Setup + validation workflow
Initialize/verify local contract and tooling:
```shell
./scripts/mcp-setup
```
Then validate:
```shell
./scripts/mcp-check
```
What `mcp-check` verifies:
- repo-local MCP config files do not exist (`.mcp.json`, `.codex/config.toml`, `.gemini/settings.json`)
- global `MCP_DOCKER` is configured
- `codex mcp list` includes `MCP_DOCKER` and excludes a standalone local `memory` server
- the canonical contract exists and includes `Repo:subdepthtech.com` scope rules
- memory artifacts are gitignored
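A minimal sketch of the file-level portion of these checks, assuming the paths listed above; the real `scripts/mcp-check` may differ, and the `codex mcp list` check is omitted here so the sketch stays self-contained:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of mcp-check's file-level checks; the real script
# may differ. Runs in a scratch directory for demonstration.
set -u
cd "$(mktemp -d)"
printf '.agents/mcp/memory/\n*.jsonl\n' > .gitignore

check_no_local_mcp_config() {
  # Repo-local MCP config files must be absent.
  for f in .mcp.json .codex/config.toml .gemini/settings.json; do
    if [ -e "$f" ]; then
      echo "FAIL: repo-local MCP config found: $f"
      return 1
    fi
  done
  echo "OK: no repo-local MCP config"
}

check_memory_gitignored() {
  # Memory artifacts must be covered by .gitignore.
  if grep -q '^\.agents/mcp/memory/$' .gitignore 2>/dev/null; then
    echo "OK: memory artifacts gitignored"
  else
    echo "FAIL: memory artifacts not gitignored"
    return 1
  fi
}

check_no_local_mcp_config
check_memory_gitignored
```

Each check prints a single OK/FAIL line and returns a nonzero status on failure, which is what makes the script usable both interactively and as a gate.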
Why this pattern is practical
- It removes per-repo MCP drift. With one global gateway, you avoid hand-maintaining three local client config files in every repo.
- It keeps one contract across multiple clients. `.agents/agents/AGENTS.md` defines one shared policy and prefix system for Claude, Codex, and Gemini.
- It still supports repo-level isolation. Prefix scoping (`Repo:subdepthtech.com`) keeps memory queries and writes scoped by convention.
- It remains security-aware by default. Memory instructions explicitly prohibit credentials and PII.
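The prefix convention can also be enforced mechanically rather than by habit. A small sketch, assuming the `Repo:subdepthtech.com` prefix from the contract; `scoped_key` is a hypothetical helper, not part of the repo's scripts:

```shell
#!/usr/bin/env sh
# Hypothetical helper that guarantees memory keys carry the repo prefix.
scoped_key() {
  case "$1" in
    "Repo:subdepthtech.com"*) printf '%s\n' "$1" ;;  # already scoped: pass through
    *) printf 'Repo:subdepthtech.com %s\n' "$1" ;;   # otherwise: add the prefix
  esac
}

scoped_key "decision: use global MCP_DOCKER gateway"
scoped_key "Repo:subdepthtech.com session summary"
```

Routing every memory write through one such function turns "scoped by convention" into "scoped by construction".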
Lessons learned
- Validation matters as much as configuration: `mcp-check` catches mismatches between intent and runtime.
- One global gateway plus repo naming discipline is usually simpler than per-repo server config.
- Memory safety is a policy problem first; tooling enforces policy, it does not replace it.
- Reducing moving parts improves repeatability in multi-agent workflows.
What I’d do differently
- Add a CI check that fails if repo-local MCP files are reintroduced.
- Add machine-readable output to `mcp-check` so external tooling can gate on specific failures.
- Enforce `Repo:` prefix usage with a lightweight policy check before memory writes.
- Add optional secret-pattern linting around memory write operations.
- Document fallback steps when Docker MCP gateway is unavailable.
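The first item is a few lines of shell wired into any CI runner. A sketch, assuming the same three file names `mcp-check` looks for:

```shell
#!/usr/bin/env sh
# Hypothetical CI guard: fail the build if repo-local MCP config files
# reappear. File names match the mcp-check description above.
set -u

guard_no_local_mcp() {
  status=0
  for f in .mcp.json .codex/config.toml .gemini/settings.json; do
    if [ -e "$f" ]; then
      echo "ERROR: repo-local MCP config reintroduced: $f" >&2
      status=1
    fi
  done
  return "$status"
}

# Demo in a scratch repo: a clean tree passes, a reintroduced file fails.
cd "$(mktemp -d)"
if guard_no_local_mcp; then echo "clean"; fi
touch .mcp.json
if ! guard_no_local_mcp 2>/dev/null; then echo "blocked"; fi
```

Because the guard exits nonzero when any file is present, any CI system that fails the job on a nonzero step will block the change.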
Security notes
- Treat memory as sensitive operational metadata even without explicit credentials.
- Follow `.agents/agents/AGENTS.md`: never store secrets, tokens, credentials, private keys, or PII.
- Keep `.agents/mcp/memory/` and `*.jsonl` gitignored to reduce accidental commits.
- Scope all memory reads/writes to `Repo:subdepthtech.com` to reduce cross-repo contamination.
- If multiple agents write concurrently, use append discipline and operational checks to avoid corruption.
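The "never store secrets" rule can get a mechanical backstop before each write. A sketch with deliberately incomplete, illustrative patterns; `memory_lint` is hypothetical and no substitute for the policy itself:

```shell
#!/usr/bin/env sh
# Hypothetical pre-write lint; the patterns are illustrative examples,
# not an exhaustive secret detector.
memory_lint() {
  if printf '%s' "$1" | grep -Eq \
    'AKIA[0-9A-Z]{16}|-----BEGIN [A-Z ]*PRIVATE KEY-----|(api[_-]?key|token|password)[[:space:]]*[:=]'
  then
    echo "blocked: possible secret"
    return 1
  fi
  echo "ok"
}

memory_lint 'Repo:subdepthtech.com decision: use the global gateway'
memory_lint 'api_key=sk-not-a-real-key' || true
```

As the lessons above note, this is tooling enforcing policy, not replacing it: a lint catches the obvious cases, while the contract covers everything else.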