From USS Tennessee to AI Security: A Cybersecurity Journey
From USS Tennessee ISSM to AI security: how traditional cybersecurity expertise became both foundation and limitation for securing AI systems.
When I reported aboard the USS Tennessee in 2018, I walked into a world of dated but stable technology: Windows 7 workstations, 5-inch floppy disks still in operational use, tape backups, and legacy network infrastructure that had been humming along reliably for years.
By the time I left, we had pushed through a full technology refresh — modern Windows operating systems, a revamped network enterprise, contemporary storage, tighter security posture. It was textbook traditional IT security, done well.
The thing that surprised me most? The average end user barely noticed.
They had new computers, sure. The interface looked different. But their actual workflows? Mostly the same. Going from Windows 7 to Windows 10 felt incremental — same Office apps, similar file management, familiar processes.
That sums up how traditional IT has always worked: predictable update cycles, backward compatibility baked in, minimal user training because changes are gradual, and straightforward security (scan-patch-scan).
Then I started working with AI systems.
Everything changed.
The Traditional Security Foundation
My cybersecurity career grew from traditional InfoSec principles:
Information Systems Security Manager (ISSM) responsibilities:
- Risk assessment and management
- Security policy development and enforcement
- Compliance monitoring (NIST, DoD STIGs)
- Access control and least privilege implementation
- Incident response planning and execution
CISSP perspective:
- Security architecture and engineering
- Asset security and data protection
- Identity and access management
- Security operations and monitoring
- Software development security
Systems administration focus:
- Log monitoring and analysis
- Access control enforcement
- Configuration management
- Patch management
- Backup and recovery
That foundation gave me a solid grasp of security frameworks, compliance requirements, and day-to-day operational security. Those skills still matter.
But they weren’t enough for AI security.
The Speed Problem: Evolution vs Revolution
The USS Tennessee technology refresh was classic IT evolution: predictable, manageable, incremental.
Traditional IT timeline:
- Windows 7 (2009) to Windows 11 (2021): 12 years
- Office 2010 to Office 2024: 14 years
- Core workflows stayed stable the entire time
- Users adapted gradually
- Security controls evolved at a manageable pace
AI model timeline:
- GPT-3 (2020) to GPT-5 (2024): 4 years
- But real capability leaps happened in months
- Claude Skills.md, plugins, agent marketplaces: didn’t exist 6 months ago
- Users have to completely relearn workflows on a regular basis
- Security controls can’t keep up with capabilities
Imagine watching the jump from the original iPhone to the iPhone 16 happen overnight. Traditional security frameworks simply cannot keep pace with that rate of change.
Why I Started Working with AI
As a security professional, I saw a completely new threat landscape forming — one where my traditional expertise gave me a starting point but not answers:
Traditional security frameworks fell short:
- No CVE database for prompt injection
- No “patch” for architectural vulnerabilities
- No established audit frameworks
- No certification paths (CISSP doesn’t cover this)
- No incident response playbooks
Education couldn’t keep up:
- Academic programs lagged 2-3 years behind reality
- Vendor training focused on using AI, not securing it
- Security conferences had few AI-specific tracks
- Most guidance stayed theoretical, not practical
I needed to get my hands dirty:
- You can’t secure what you don’t understand
- Reading documentation only gets you so far
- Theory without practice leaves gaps
- Breaking systems teaches more than protecting them
So I started building AI systems myself — not to deploy them, but to understand their vulnerabilities from the inside out.
The Evolution into DevOps
Securing AI systems demanded skills I didn’t have as an ISSM/CISSP:
Infrastructure skills:
- Docker containerization and orchestration
- CI/CD pipeline design and security
- Network configuration and monitoring
- Secrets management at scale
- Infrastructure as code (Terraform, Ansible)
Development workflows:
- Git version control and branching strategies
- GitHub Actions for automated testing
- Code review processes
- Dependency management
- Application security testing
Why these matter for AI security:
- You can’t assess AI deployment security without understanding containers
- You can’t evaluate CI/CD risks without building pipelines yourself
- You can’t design network security without hands-on configuration experience
- You can’t audit agent systems without understanding how they interact with development workflows
Traditional security roles tend to separate “security” from “engineering.” That separation breaks down with AI security. You need both.
If you’ve made a similar transition — from traditional IT security into AI — what was the moment you realized the old playbook wasn’t going to cut it?
From Policy to Practice: The Gap
My background in policy roles taught me to think about risk, compliance, and governance. That thinking still has value. But AI security revealed that policy without technical understanding is incomplete — and often wrong.
Example: Prompt Injection Policy
Policy approach:
Policy 47.2: AI agents must not output credentials or sensitive data under any circumstances.
Enforcement: Security team will review AI outputs quarterly for compliance.
Reality:
- Prompt injection can bypass any policy statement
- Quarterly reviews are far too infrequent
- Without output filtering, sandboxing, and monitoring, this policy is unenforceable
- The person who wrote this policy clearly doesn’t understand how AI systems work
Technical approach:
# Implement actual controls
def query_ai_with_security(prompt, context):
# 1. Input validation
if contains_injection_patterns(prompt):
log_security_event("Potential injection attempt")
return sanitized_error_response()
# 2. Output filtering
response = ai_model.query(prompt, context)
filtered_response = redact_sensitive_data(response)
# 3. Sandboxing
if response_attempts_code_execution(response):
block_and_alert()
# 4. Monitoring
log_interaction(prompt, filtered_response, security_flags)
return filtered_response
Policy grounded in technical reality works. Policy written without understanding the implementation is theater.
What’s Different About AI Security
From my ISSM/CISSP perspective, AI represents genuinely new territory that upends core assumptions:
| Traditional IT Security | AI Security |
|---|---|
| Scan-patch-scan methodology works | No scanning for prompt injection; some vulnerabilities may be unsolvable |
| Update cycles measured in years | Models change overnight; capabilities evolve monthly |
| Configuration files are deterministic (same input = same output) | Natural language configuration is probabilistic (same input ≠ same output) |
| Vulnerabilities eventually get patches | Some architectural vulnerabilities may never be resolved |
| End users barely notice system upgrades | Upgrades can break entire workflows; require organizational retraining |
| Vendor changes require months to adapt | Vendor changes happen instantly; no adaptation time |
| Security through stability | Security through agility |
This demands a fundamentally different security mindset — one that accepts uncertainty, designs for containment over prevention, and prizes rapid adaptability above long-term stability.
The Security Professional’s Advantage
Approaching AI from a security and systems administration background — rather than as a software developer — actually gives you a real edge:
Security professionals already understand:
- Defense in depth (no single control is sufficient)
- Assume breach (design for compromise, not just prevention)
- Least privilege (minimize access by default)
- Monitoring and detection (you can’t prevent everything)
- Incident response (how to contain damage when attacks succeed)
Systems administrators already understand:
- How to build resilient architectures
- How to spot anomalous behavior
- How to implement access controls at scale
- How to maintain operational security under pressure
- How to balance security with usability
These perspectives matter deeply for AI security because:
- AI systems need security baked in from day one, not bolted on later
- Prompt injection is unsolvable, so containment is non-negotiable
- Model behavior shifts require continuous monitoring
- New vulnerability classes keep appearing
- Organizations have to respond fast when vendors make changes
Developers tend to optimize for functionality first, security second. Security professionals trained in adversarial thinking bring the skepticism and risk awareness that AI systems desperately need.
Lessons for Traditional Security Professionals
If you’re an ISSM, CISSP, CISM, or other traditional security professional looking at AI:
1. Your experience provides a foundation, not the full picture
Traditional security knowledge matters — defense in depth, least privilege, monitoring, incident response all carry forward. But they’re not enough on their own. AI demands new patterns built on those traditional principles.
2. Hands-on experience is non-negotiable
You cannot effectively secure AI systems by reading whitepapers and drafting policies. You have to:
- Build AI systems yourself (even small ones)
- Try to break them through prompt injection
- Understand how models actually work, not just conceptually
- Experiment with different configurations and observe the results
- See firsthand why traditional controls fall short
3. Accept that you’re starting over in some real ways
Your CISSP covered 8 domains thoroughly. None of them specifically address:
- Prompt injection and jailbreaking
- Model poisoning and backdoors
- Adversarial machine learning
- AI agent authentication
- Probabilistic configuration management
- Vendor model dependencies
You’re picking up a new specialty, not just extending what you already know.
4. Agility matters more than stability
Traditional security prizes stability: locked-down systems, change control, predictable environments. AI security calls for the opposite: rapid pivoting, continuous adaptation, architecting for vendor switching.
This will feel uncomfortable. Lean into it anyway.
5. Collaborate across disciplines
AI security needs security professionals, developers, data scientists, and operations teams pulling in the same direction. The traditional silo approach falls apart.
Learn enough about each discipline to hold a meaningful conversation. You don’t need to become an expert developer, but you do need to understand development workflows well enough to spot the security gaps.
Where I Am Now: Hands-On AI Security
My path from USS Tennessee to AI security has been a process of continuous learning and rethinking core assumptions:
What I’ve built:
- Distributed AI agent system on a Raspberry Pi cluster
- Multi-model architecture (Claude, GPT-4, GLM)
- Security monitoring for bias detection
- CI/CD pipeline with security testing
- Comprehensive logging and audit systems
What I’ve learned:
- Traditional frameworks give you a foundation, but they need real adaptation for AI
- Hands-on experimentation teaches more than study alone
- Policy without technical backing is security theater
- Agility itself is a security control for AI systems
- Vendor independence takes deliberate architectural decisions
- Some vulnerabilities may never be solved — design around them
What caught me off guard:
- How fast AI capabilities move (faster than I expected, even after expecting it)
- How subtle bias can be (it takes statistical analysis to surface)
- How SDKs can elevate lower-tier models (abstraction is powerful)
- How unprepared most organizations are (even ones with traditionally strong security programs)
- How little formal training exists (everyone is working this out together)
The Path Forward
For security professionals making this transition:
1. Start building now
Don’t wait until you feel ready. Build something small. Break it. Learn from what goes wrong.
2. Focus on fundamentals over specific models
Models change constantly. Understanding architectural patterns, threat vectors, and security controls will serve you far longer than memorizing specific model capabilities.
3. Share what you learn
The field is young enough that every practitioner’s experience matters. Write about what works and what doesn’t. Present at conferences. Help grow the community’s knowledge base.
4. Get comfortable with uncertainty
Traditional security looks for clean answers: “Is this secure?” AI security requires probabilistic thinking: “What’s the risk level? What’s our containment strategy?”
5. Stay current without chasing every shiny object
AI moves on a weekly cadence. You can’t absorb all of it. Anchor yourself in foundational concepts and practical implementation. Let others run after hype.
Conclusion: A New Discipline
The journey from USS Tennessee — securing stable, well-understood traditional IT systems — to AI security has meant a series of genuine paradigm shifts.
Traditional security principles still hold weight:
- Defense in depth
- Least privilege
- Monitoring and detection
- Assume breach
- Incident response
But they have to be applied in fundamentally new ways to systems that:
- Are probabilistic, not deterministic
- Evolve monthly, not yearly
- Carry unsolvable vulnerabilities
- Require vendor independence
- Demand organizational agility
AI security isn’t traditional security with AI bolted on. It’s a distinct discipline that calls for new mindsets, new skills, and new frameworks — all anchored in traditional foundations.
For security professionals willing to roll up their sleeves, sit with uncertainty, and question long-held assumptions, it’s a genuinely compelling frontier.
The field needs seasoned security professionals who can bring adversarial thinking, risk assessment, and operational discipline to AI systems. But those professionals have to be willing to learn again — not from scratch, but from a new starting point where traditional expertise is the floor, not the ceiling.
That’s the path I’m on. And if you’re reading this as a traditional security professional weighing whether to jump into AI security: yes, you should. The field needs you. Just come prepared to be challenged, surprised, and pushed to rethink what you thought you knew.
That’s what makes it worth doing.
Where Are You on This Path?
Whether you’re still deep in traditional InfoSec or already knee-deep in AI security, I’d like to hear where you are in this transition. What skills transferred cleanly? What did you have to learn from scratch? Share your journey — the field is small enough that every practitioner’s experience moves the whole community forward.