Archive
All Posts
Filter mission logs by category and tags. This index keeps every published post discoverable with stable `/posts/*` URLs.
24 results
homelab · Mar 28, 2026
Tmux + Ghostty: Build a Terminal Workflow You Actually Want to Use
A layered guide to tmux and Ghostty — from first install to floating popups, vim navigation, and a polished dark theme.
ai · Mar 21, 2026
Build a Personal AI Assistant with Markdown Files and Zero Code
A starter kit of markdown files that turns Claude into a personalized work assistant: no coding, no setup, just conversation.
homelab · Feb 12, 2026
How to Run Claude Code, Codex, and Gemini as Containerized Homelab Services
Containerize AI CLI tools with Docker for remote SSH access and OpenAI-compatible APIs via Traefik. Vendor-independent and reproducible.
ai · Feb 8, 2025
MCP (Model Context Protocol) Explained for AI Practitioners
MCP is Anthropic's open protocol for connecting AI models to external tools and data. Here are the core concepts and why it matters for agents.
tools · Feb 7, 2025
Pi-hole + Unbound: What They Do and Why They Replace Your ISP's DNS
Pi-hole blocks ads at the DNS level. Unbound resolves queries directly against root servers. Together they keep your DNS private and local.
tools · Feb 6, 2025
What Is CrowdSec and How It Adds Threat Intelligence to Your Homelab
CrowdSec is an open-source security engine with crowd-sourced threat intelligence. Here is what it does, how it works, and why it replaces fail2ban.
tools · Feb 5, 2025
What Is Paperless-ngx and Why Self-Host Your Documents
Paperless-ngx is an open-source document management system with OCR and full-text search. Here is why self-hosting it beats cloud alternatives.
homelab · Feb 4, 2025
What Is Traefik and Why It's the Go-To Reverse Proxy for Homelabs
Traefik auto-discovers Docker services, handles TLS, and routes traffic without manual config rewrites. Here is why homelabbers pick it.
homelab · Feb 3, 2025
Hardening Traefik with CrowdSec forwardAuth in a Homelab Reverse-Proxy Stack
A practical homelab guide: wire Traefik forwardAuth with CrowdSec, validate it, and handle the security tradeoffs before production.
homelab · Feb 2, 2025
Secure Remote Docker Deployments with Proton Pass CLI, Docker Contexts, and SSH
Idempotent remote Docker deploys over SSH with Proton Pass CLI secrets, including the security tradeoffs and mitigations that actually matter.
homelab · Feb 1, 2025
Pi-hole + Unbound Behind Traefik with a Clean /admin Redirect
How this homelab publishes Pi-hole admin via Traefik while keeping DNS local, with practical hardening steps for the risky defaults.
homelab · Jan 31, 2025
Running Paperless-ngx Behind Traefik with Internal Network Segmentation (Redis + Postgres)
A homelab-backed Paperless-ngx + Traefik deployment with segmented Redis/Postgres networks, concrete checks, and security hardening lessons.
ai · Jan 30, 2025
Durable AI-Agent Memory in a Homelab Repo with MCP Setup/Check Scripts
Use setup/check scripts and a Dockerized MCP memory server to keep agent context durable while avoiding secret leakage into repo memory.
homelab · Jan 29, 2025
Setting Up a Docker Swarm AI Agent Cluster for Security Research
Build an AI agent lab on Raspberry Pi with Docker Swarm: encrypted networks, HMAC auth, and security monitoring. Production-grade patterns on $600 hardware.
ai · Jan 28, 2025
10 Lessons from Building an AI Agent Security Lab
Lab lessons: prompt injection is unsolvable, vendor lock-in is an operational risk, and agility is a security control. Breaking systems teaches security faster than theory.
ai · Jan 27, 2025
AI Security Challenges We're Not Ready For
We're unprepared for autonomous agents, model poisoning, deepfakes, and AI arms races. Security frameworks, certifications, and playbooks lag behind capabilities.
learning · Jan 26, 2025
From USS Tennessee to AI Security: A Cybersecurity Journey
From USS Tennessee ISSM to AI security: how traditional cybersecurity expertise became both foundation and limitation for securing AI systems.
ai · Jan 25, 2025
How to Structure Data for AI Without Creating Security Nightmares
Balance AI context with security: structured data, sanitization, RAG, and least-privilege. Practical patterns for safe AI without data exfiltration risks.
ai · Jan 24, 2025
Building a Multi-Model AI System for Security and Agility
Multi-model architecture with Claude, GPT-4, and GLM enables rapid provider switching, cost optimization, and protection against vendor lock-in.
ai · Jan 23, 2025
Vendor Lock-In is Your Biggest AI Security Risk
Cloud AI providers control your infrastructure completely. Multi-vendor architecture isn't optional—it's a security control for operational resilience.
ai · Jan 22, 2025
I Monitored a Chinese AI Model for Bias. Here's What I Found.
GLM 4.6 monitoring revealed 12% geographic bias, narrative injection, and trust-building patterns. Empirical security research on lower-cost AI model behavior.
ai · Jan 21, 2025
Prompt Injection: The SQL Injection of AI (But Unsolvable)
Prompt injection is the defining LLM vulnerability with no parameterized query fix. Unlike SQL injection, it may be theoretically impossible to solve.
ai · Jan 20, 2025
Why AI Security Broke Traditional InfoSec Playbooks
Traditional CISSP frameworks fail against prompt injection and unsolvable AI vulnerabilities. Here's why agility matters more than stability in AI security.
web · Jan 19, 2025
Why We Chose Astro for Our Showcase Site
Exploring Astro's islands architecture, content collections, and why it's perfect for static sites with dynamic needs.