AI

Model security, prompt risk, agent orchestration, and LLM operations.

9 posts

Feb 7, 2026

Durable AI-Agent Memory in a Homelab Repo with MCP Setup/Check Scripts

Use setup/check scripts and a Dockerized MCP memory server to keep agent context durable while avoiding secret leakage into repo memory.

Nov 13, 2025

10 Lessons from Building an AI Agent Security Lab

Lessons from the lab: prompt injection is unsolvable, vendor lock-in is an operational risk, and agility is a security control. Breaking systems teaches security faster than theory.

Nov 12, 2025

AI Security Challenges We're Not Ready For

We're unprepared for autonomous agents, model poisoning, deepfakes, and AI arms races. Security frameworks, certifications, and playbooks all lag behind AI capabilities.

Nov 10, 2025

How to Structure Data for AI Without Creating Security Nightmares

Balance AI context with security: structured data, sanitization, RAG, and least-privilege access. Practical patterns for safe AI without data exfiltration risks.

Nov 9, 2025

Building a Multi-Model AI System for Security and Agility

Multi-model architecture with Claude, GPT-4, and GLM enables rapid provider switching, cost optimization, and protection against vendor lock-in.

Nov 8, 2025

Vendor Lock-In Is Your Biggest AI Security Risk

Cloud AI providers control your infrastructure completely. Multi-vendor architecture isn't optional; it's a security control for operational resilience.

Nov 7, 2025

I Monitored a Chinese AI Model for Bias. Here's What I Found.

GLM 4.6 monitoring revealed 12% geographic bias, narrative injection, and trust-building patterns. Empirical security research on lower-cost AI model behavior.

Nov 6, 2025

Prompt Injection: The SQL Injection of AI (But Unsolvable)

Prompt injection is the defining LLM vulnerability, and it has no equivalent of the parameterized-query fix. Unlike SQL injection, it may be theoretically impossible to solve.

Nov 5, 2025

Why AI Security Broke Traditional InfoSec Playbooks

Traditional frameworks like CISSP fail against prompt injection and other unsolvable AI vulnerabilities. Here's why agility matters more than stability in AI security.