Category
AI
Model security, prompt risk, agent orchestration, and LLM operations.
11 posts
Mar 21, 2026
Build a Personal AI Assistant with Markdown Files and Zero Code
A starter kit of markdown files that turns Claude into a personalized work assistant -- no coding, no setup, just conversation.
Feb 8, 2025
MCP (Model Context Protocol) Explained for AI Practitioners
MCP is Anthropic's open protocol for connecting AI models to external tools and data. Here are the core concepts and why it matters for agents.
Jan 30, 2025
Durable AI-Agent Memory in a Homelab Repo with MCP Setup/Check Scripts
Use setup/check scripts and a Dockerized MCP memory server to keep agent context durable while avoiding secret leakage into repo memory.
Jan 28, 2025
10 Lessons from Building an AI Agent Security Lab
Lab lessons: prompt injection is unsolvable, vendor lock-in is an operational risk, and agility is a control. Breaking systems teaches security faster than theory.
Jan 27, 2025
AI Security Challenges We're Not Ready For
We're unprepared for autonomous agents, model poisoning, deepfakes, and AI arms races. Security frameworks, certifications, and playbooks lag behind capabilities.
Jan 25, 2025
How to Structure Data for AI Without Creating Security Nightmares
Balance AI context with security: structured data, sanitization, RAG, and least-privilege. Practical patterns for safe AI without data exfiltration risks.
Jan 24, 2025
Building a Multi-Model AI System for Security and Agility
Multi-model architecture with Claude, GPT-4, and GLM enables rapid provider switching, cost optimization, and protection against vendor lock-in.
Jan 23, 2025
Vendor Lock-In is Your Biggest AI Security Risk
Cloud AI providers control your infrastructure completely. Multi-vendor architecture isn't optional -- it's a security control for operational resilience.
Jan 22, 2025
I Monitored a Chinese AI Model for Bias. Here's What I Found.
GLM 4.6 monitoring revealed 12% geographic bias, narrative injection, and trust-building patterns. Empirical security research on lower-cost AI model behavior.
Jan 21, 2025
Prompt Injection: The SQL Injection of AI (But Unsolvable)
Prompt injection is the defining LLM vulnerability with no parameterized query fix. Unlike SQL injection, it may be theoretically impossible to solve.
Jan 20, 2025
Why AI Security Broke Traditional InfoSec Playbooks
Traditional CISSP frameworks fail against prompt injection and unsolvable AI vulnerabilities. Here's why agility matters more than stability in AI security.