Why AI Security Broke Traditional InfoSec Playbooks
Traditional CISSP frameworks fail against prompt injection and unsolvable AI vulnerabilities. Here's why agility matters more than stability in AI security.
I spent years securing traditional IT systems—from Windows 7 deployments on the USS Tennessee to modern cloud infrastructure—and I’ve had to rethink nearly everything along the way. The hard truth: traditional IT security and AI security are not the same discipline.
They demand different mindsets, different tooling, and a completely different relationship with risk. If you’re reaching for your CISSP playbook to secure an AI system, you’re going to come up short.
The Traditional Security Playbook That Actually Worked
For decades, security professionals like me leaned on a proven methodology that delivered real, measurable results. Scan-patch-scan wasn’t just bureaucratic box-checking—it genuinely worked:
- Scan for vulnerabilities using tools like Nessus, Qualys, or OpenVAS
- Patch the identified vulnerabilities through vendor updates
- Scan again to verify remediation
- Repeat on a predictable schedule
This held up because the underlying systems had properties that made security tractable:
- Well-defined vulnerabilities with CVE numbers and documented exploits
- Available patches from vendors with clear installation procedures
- Deterministic behavior where the same input always produces the same output
- Predictable changes that could be tested before deployment
- Long support cycles that allowed for planning and resource allocation
The Comfort of Predictable Update Cycles
Traditional IT evolved at a pace organizations could absorb. Windows 7 (2009) to Windows 11 (2021) spanned twelve years, yet core workflows stayed largely intact. Users moved between versions with minimal retraining. Sysadmins could plan migrations years out.
Now look at AI: GPT-3 launched in 2020, ChatGPT hit mass adoption in 2022, GPT-4 was a massive capability leap in 2023, and by 2024 we had Claude 3.5 Sonnet and enterprise-grade AI agents. Four years of evolution that fundamentally changed how these systems operate, what they can do, and how people interact with them.
As of November 2025, agentic AI systems are maturing fast. Gartner named agentic AI the top technology trend of 2025, predicting that 33% of enterprise applications will include agentic AI by 2028—up from less than 1% in 2024[1]. That adoption velocity has no precedent in traditional IT.
Configuration Management: Deterministic vs Probabilistic
Traditional systems ran on configuration files that embodied everything we valued in security:
# Apache httpd.conf - Deterministic Configuration
ServerName example.com
Listen 443
SSLEngine on
SSLCertificateFile /path/to/cert.pem
SSLCertificateKeyFile /path/to/key.pem
Change one line here and you know exactly what happens. The behavior is deterministic, testable, predictable. Validate the config in staging, and it behaves identically in production.
Now look at how we “configure” AI systems:
# System prompt for AI agent
You are a helpful coding assistant. Always:
- Write secure code following OWASP guidelines
- Explain your reasoning clearly
- Ask for clarification when requirements are ambiguous
- Never execute commands that could harm the system
Swap a single word—“helpful” for “efficient”—and you can alter:
- The tone and style of every response
- How much detail the model provides
- How the system weighs thoroughness against speed
- What trade-offs it makes in security contexts
- How it reads ambiguous instructions
And there’s no deterministic way to test all possible impacts because AI systems are probabilistic by nature. The same prompt can yield different outputs. Prior context shapes behavior in ways you can’t fully predict. The underlying model can change without warning, shifting system behavior overnight.
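The contrast is easy to demonstrate with a toy sketch. The config parser and token sampler below are illustrative stand-ins, not real server or model code: the parser maps the same text to the same result every time, while the sampler, like a language model decoding at nonzero temperature, can return different tokens for the same input.

```python
import math
import random

def parse_config(text: str) -> dict:
    """Deterministic: the same config text always parses to the same result."""
    return dict(
        line.split(maxsplit=1)
        for line in text.strip().splitlines()
        if line and not line.startswith("#")
    )

def sample_next_token(logits: dict, temperature: float, rng: random.Random) -> str:
    """Probabilistic: sample a token from a softmax over illustrative logits.
    Higher temperature flattens the distribution, so the same 'prompt'
    (logits) can yield different tokens on different runs."""
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

config = "ServerName example.com\nListen 443"
assert parse_config(config) == parse_config(config)  # always identical

logits = {"secure": 2.0, "fast": 1.8, "simple": 1.5}
samples = {sample_next_token(logits, 1.0, random.Random(s)) for s in range(20)}
# across seeds, multiple distinct tokens appear for the same logits
```

The same property that makes the sampler useful (varied, context-sensitive output) is what makes exhaustive pre-deployment testing impossible.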
If you’ve tried applying traditional scan-patch-scan to an AI system, you already know where this is going.
Why Scan-Patch-Scan Fails for AI Security
The most fundamental departure from traditional security boils down to one fact: some AI vulnerabilities cannot be patched.
Prompt injection and prompt poisoning attacks don’t have CVE numbers. You can’t:
- Run a vulnerability scanner to detect them
- Apply a vendor patch to eliminate them
- Verify through testing that they’ve been resolved
- Add them to your vulnerability management tracking system
These vulnerabilities are baked into how large language models process information. Researchers from OpenAI, Anthropic, and Google DeepMind systematically evaluated twelve published defenses against prompt injection and found that “by systematically tuning and scaling general optimization techniques—gradient descent, reinforcement learning, random search, and human-guided exploration—we bypass 12 recent defenses (based on a diverse set of techniques) with attack success rate above 90% for most”[2].
Sit with that for a second. Academic researchers from the companies building these AI systems tested defensive measures and found adaptive attacks succeeding 90% of the time. This is not a patching problem. This is an architecture problem.
As an ISSM, that realization changes everything about how I approach system security. Traditional security assumed every vulnerability eventually gets a patch or a workaround. The timeline might stretch, but resolution was always theoretically on the table. With AI, you have to design around unsolvable vulnerabilities rather than waiting for a vendor fix.
The Model Change Problem: When Your Vendor Rewrites Your System
Traditional IT vendors gave us stability through long support cycles. Microsoft supported Windows 10 for a decade. That meant organizations could:
- Plan multi-year upgrade cycles
- Budget for transitions well in advance
- Train staff gradually
- Test thoroughly before rollout
- Maintain operational continuity
AI providers play by different rules entirely. They can—and do:
- Release new model versions with no warning or advance notice
- Change backend system prompts that you can’t see or control
- Modify safety filters that alter output characteristics
- Update pricing or rate limits that reshape your cost structure
- Deprecate model versions forcing immediate migrations
A real example: Claude 3.5 Sonnet replaced Claude 3 Sonnet within months in 2024. Organizations that had built systems, tuned prompts, and trained users on the earlier version suddenly had their entire AI infrastructure behaving differently. Reliable prompts started producing unexpected outputs. Error handling shifted. The personality and communication style changed.
Imagine Microsoft pushing a Windows update that rewrites the Start menu, overhauls file permissions, and changes how applications talk to the OS—all without beta testing, advance notice, or the option to defer.
By November 2025, the industry is paying close attention to “shadow AI”—unsanctioned AI tools deployed without IT oversight. IBM’s 2025 Cost of a Data Breach Report found that one in five organizations experienced a breach tied to shadow AI, costing an average of $670,000 more than a conventional breach[3]. This happens precisely because AI moves so fast that formal approval processes can’t keep up with what business units need.
The November 2025 Reality: New Frameworks Emerging
The cybersecurity industry has acknowledged that traditional frameworks don’t adequately cover AI security. As of November 2025, several important shifts have taken shape:
NIST AI Risk Management Framework Evolution: NIST published the AI Risk Management Framework (AI RMF 1.0) in January 2023 and followed up in July 2024 with NIST-AI-600-1, the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile[4]. This profile targets risks unique to generative AI and proposes risk management actions tied to organizational goals. NIST has layered in enhanced governance guidance aligned with enterprise risk and cybersecurity processes, sector-specific profiles including the Generative AI Profile, and tighter alignment with global regulations like the EU AI Act.
ISC2 Curriculum Updates: Facing the AI security skills gap head-on, ISC2 announced in July 2025 the launch of the ISC2 Building AI Strategy Certificate and six corresponding courses[5]. Their research showed over a third of surveyed cybersecurity professionals pointing to AI as the biggest skills shortfall on their teams. ISC2 also updated the CISSP exam effective April 15, 2024 to cover emerging technologies including artificial intelligence, blockchain, and IoT, while keeping the core eight-domain framework intact[6].
Real-World Incident Data: AI security incidents have escalated sharply. Q1 2025 alone saw 179 reported deepfake incidents—exceeding the total for all of 2024 by 19%[7]. AI now generates 40% of phishing emails targeting businesses[8]. And the 2025 IBM study found that 13% of organizations reported breaches of AI models or applications, with 97% of those breached organizations lacking proper AI access controls[3].
The End User Impact: Complete Workflow Disruption
When Windows 7 upgraded to Windows 10, users dealt with:
- A new Start menu design
- Some interface changes
- Largely unchanged core workflows
- One-hour training sessions covering the basics
When GPT-4 upgraded to GPT-4 Turbo, then GPT-4o, users faced:
- Fundamentally different prompting techniques
- Changed reasoning capabilities requiring prompt restructuring
- Altered safety boundaries affecting which queries work
- Different context window behaviors changing workflow patterns
- Modified rate limits impacting usage patterns
- Organization-wide retraining requirements
Take the Skills feature, configured through SKILL.md files, that Claude introduced in late 2025. Organizations that had adopted Claude months earlier suddenly needed to:
- Understand what SKILL.md files are and how they work
- Decide which skills to enable for different use cases
- Restructure existing system prompts around the skills framework
- Retrain users on the new interaction patterns
- Update security policies to account for new capability boundaries
That would be like Microsoft dropping a completely new authentication system into Windows 10 overnight and requiring every organization to rebuild their Active Directory domain structure from scratch.
From ISSM to AI Security: The Mindset Shifts Required
Years in traditional InfoSec roles—including time as a CISSP-certified ISSM—left me with assumptions I had to dismantle when I moved into AI security. Here are the specific mindset shifts that proved necessary:
1. Accept Unsolvable Vulnerabilities
Traditional mindset: Every vulnerability has a patch. The timeline may be long, but resolution is achievable. Your job is to identify, track, and remediate.
AI mindset: Some vulnerabilities are architectural and may never be solved. Your job is to design systems that contain damage when attacks succeed rather than preventing all attacks.
Practical implementation: Instead of trying to prevent all prompt injection attacks, build systems where:
- AI agents have minimal necessary privileges
- Sensitive operations require human approval
- Data access follows least-privilege principles
- Monitoring detects anomalous behavior patterns
- Isolation limits lateral movement after compromise
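One way to sketch that containment posture in code. Everything here is hypothetical (the tool names, the sensitive-operation list, and the approval callback are not from any real agent framework); the point is the shape: an explicit allowlist, a human-in-the-loop gate on sensitive calls, and an audit trail for forensics.

```python
# Hypothetical set of operations that always require human sign-off.
SENSITIVE = {"delete_file", "send_email", "run_shell"}

class ToolGate:
    """Least-privilege gate between an AI agent and its tools."""

    def __init__(self, allowed_tools, approve_fn):
        self.allowed = set(allowed_tools)  # explicit allowlist per agent
        self.approve = approve_fn          # human-in-the-loop callback
        self.audit_log = []                # every decision recorded

    def call(self, tool: str, args: dict):
        if tool not in self.allowed:
            self.audit_log.append(("denied", tool))
            raise PermissionError(f"{tool} is not in this agent's allowlist")
        if tool in SENSITIVE and not self.approve(tool, args):
            self.audit_log.append(("rejected", tool))
            raise PermissionError(f"{tool} requires human approval")
        self.audit_log.append(("allowed", tool))
        return f"executed {tool}"

# Agent may read files and (with approval) send email; nothing else.
gate = ToolGate(allowed_tools={"read_file", "send_email"},
                approve_fn=lambda tool, args: False)  # approver declines

gate.call("read_file", {"path": "notes.txt"})  # low-risk, allowlisted: permitted
```

Even when a prompt injection steers the agent, the blast radius is bounded by what the gate permits, not by what the attacker asks for.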
2. Build for Agility, Not Just Stability
Traditional mindset: Stability is security. Long-term consistency allows for thorough testing. Change introduces risk and should be tightly controlled.
AI mindset: Agility is a security control. The ability to pivot fast when models change or vulnerabilities surface is itself a defensive capability.
Practical implementation:
- Use abstraction layers (like Claude SDK or LangChain) that allow model switching
- Design prompts that work across multiple model providers
- Implement feature flags that enable rapid rollback
- Maintain vendor independence through multi-model architecture
- Test against multiple AI providers continuously
Recent industry moves back this up. At SentinelOne’s OneCon 2025 conference in November 2025, the company unveiled comprehensive AI security portfolio additions including Prompt Security for Employees, providing real-time monitoring and control of GenAI usage across thousands of platforms—specifically targeting shadow AI elimination[9]. That’s the industry recognizing that visibility and agility matter more than trying to lock everything down to a single approved platform.

3. Monitor Behavior, Not Just Logs
Traditional mindset: Monitor for known attack patterns. Signature-based detection identifies threats. Log analysis reveals IOCs (Indicators of Compromise).
AI mindset: Monitor for anomalous model behavior because attack patterns against AI systems are still emerging and signature-based detection can’t catch novel attacks.
Practical implementation:
- Track AI agent output patterns for deviations from baseline
- Monitor data access patterns for privilege escalation attempts
- Analyze prompt/response pairs for injection indicators
- Measure response latency for signs of model poisoning
- Log all tool usage and API calls for forensic analysis
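The first of those ideas can be sketched in a few lines. This monitor is a deliberately simple illustration (the length feature and z-score threshold are assumptions, not a production design): it builds a baseline from observed responses and flags any output that deviates sharply from it.

```python
import statistics

class OutputMonitor:
    """Flag responses whose length deviates sharply from a rolling baseline.
    Length is a stand-in for richer behavioral features."""

    def __init__(self, z_threshold: float = 3.0):
        self.lengths = []
        self.z = z_threshold

    def observe(self, response: str) -> bool:
        """Record a response; return True if it looks anomalous."""
        n = len(response)
        anomalous = False
        if len(self.lengths) >= 10:  # need a baseline before judging
            mean = statistics.fmean(self.lengths)
            stdev = statistics.pstdev(self.lengths) or 1.0
            anomalous = abs(n - mean) / stdev > self.z
        self.lengths.append(n)
        return anomalous

mon = OutputMonitor()
for _ in range(20):
    mon.observe("a normal-length answer " * 3)  # builds the baseline
mon.observe("ok")  # far below baseline: flagged as anomalous
```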
4. Design for Vendor Independence
Traditional mindset: Vendor relationships are long-term partnerships. Switching costs run high, so vendor lock-in is acceptable if the relationship holds.
AI mindset: Vendor lock-in is a security risk. Providers can change pricing, deprecate models, alter behavior, or shut down services with minimal notice. Build systems that can swap providers fast.
Practical implementation:
- Use provider-agnostic APIs and SDKs
- Abstract model-specific features behind interfaces
- Test regularly with alternative providers
- Document migration procedures
- Maintain cost/capability matrices across vendors
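The last item can be as simple as a table plus a selection function. The figures below are placeholders, not real vendor pricing; the point is keeping the comparison machine-readable so a migration decision is a query, not a research project.

```python
# Hypothetical cost/capability matrix; numbers are illustrative only.
PROVIDERS = {
    "provider_a": {"cost_per_1k_tokens": 0.010, "max_context": 200_000, "tool_use": True},
    "provider_b": {"cost_per_1k_tokens": 0.002, "max_context": 128_000, "tool_use": True},
    "provider_c": {"cost_per_1k_tokens": 0.001, "max_context": 32_000,  "tool_use": False},
}

def pick_provider(min_context: int, needs_tools: bool) -> str:
    """Cheapest provider meeting the requirements; raises if none qualify."""
    candidates = [
        (spec["cost_per_1k_tokens"], name)
        for name, spec in PROVIDERS.items()
        if spec["max_context"] >= min_context
        and (spec["tool_use"] or not needs_tools)
    ]
    if not candidates:
        raise LookupError("no provider satisfies the requirements")
    return min(candidates)[1]

pick_provider(min_context=100_000, needs_tools=True)  # cheapest qualifying option
```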
This concern isn’t hypothetical. OpenAI’s pricing structure changed multiple times in 2024, Anthropic introduced new model tiers with different capability profiles, and multiple AI providers experienced service outages that hit production systems.
The Statistics Don’t Lie: Traditional Security Isn’t Enough
As of November 2025, the numbers paint a clear picture of traditional security approaches falling short against AI-specific threats:
- Credential compromise at scale: June 2025 saw 16 billion login credentials exposed across 30+ datasets, many tied to AI platform accounts[10]
- Deepfake surge: Deepfake files grew from 500,000 in 2023 to a projected 8 million in 2025—a 900% CAGR[7]
- Traditional defenses obsolete: Adaptive, AI-generated malware has rendered signature-based defenses increasingly ineffective[8]
- Governance gap: 63% of breached organizations lacked adequate AI governance policies as of 2025[3]
- Financial impact: Organizations with heavy shadow AI use saw $670,000 higher breach costs than those with low or no shadow AI[3]
One data point stands out: the global average cost of a data breach fell 9% to $4.44 million in 2025—the first decline in five years[3]. But that drop didn’t come from better traditional security. It came from organizations deploying AI and automation in their security operations, saving an average of $1.9 million per breach and cutting the breach lifecycle by 80 days[3].
What This Means for Security Professionals
If you hold a CISSP, CISM, or ISSM credential in traditional IT security, here’s the uncomfortable truth: your certifications and experience are a solid foundation, but they don’t fully prepare you for AI security.
The skills that made you effective—systematic vulnerability management, change control, configuration management, compliance frameworks—still matter. But they’re not enough for a threat landscape where:
- Vulnerabilities may be unsolvable
- System behavior is probabilistic
- Vendors can reshape your infrastructure overnight
- Attack patterns are still emerging
- Traditional detection methods fall flat
You need to build new expertise from the ground up, the same way I did when I shifted from traditional ISSM work to AI security research. That means:
Learning how AI systems actually work at a technical level—not just security frameworks, but transformer architectures, attention mechanisms, and how language models process information.
Building hands-on experience with AI systems in controlled environments where you can test attacks and defenses without production risk.
Getting comfortable with uncomfortable uncertainty about which defenses will hold, because the research shows many published defenses crumble under adaptive attacks.
Designing for resilience over prevention, since entire classes of attack can’t be stopped with current architectures.
Keeping pace with rapid evolution, because frameworks, best practices, and even core capabilities shift within months, not years.
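On the first of those points, understanding how models process information, even a bare-bones sketch of attention helps make it concrete. The pure-Python scaled dot-product attention below is a teaching toy, not how real models are implemented (those are batched, multi-headed, and run on tensors), but the math is the same: score each key against the query, softmax the scores, and blend the values.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention:
    weights_i = softmax(q . k_i / sqrt(d)); output = sum_i weights_i * v_i."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]        # q aligns with the first key
values = [[10.0, 0.0], [0.0, 10.0]]
attention(q, keys, values)             # output leans toward the first value
```

Seeing that the output is just a soft, data-dependent blend of inputs also makes prompt injection intuitive: there is no hard boundary between "instructions" and "data" inside this computation.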
Cybersecurity has always demanded continuous learning, but AI security is a discontinuous shift. Incremental skill development won’t cut it. You have to rethink your assumptions about how systems work, what security actually means, and what’s realistically achievable.
The Path Forward: New Expertise for a New Threat Landscape
AI security is not “traditional security plus AI.” It’s a distinct discipline that requires different tools, processes, risk frameworks, and ways of thinking. As of November 2025, the security industry has started building out specialized AI security certifications (like ISC2’s AI Security Certificate and the Advanced in AI Security Management credential[5]), updated frameworks (like the NIST AI RMF and its Generative AI Profile[4]), and purpose-built tools (like SentinelOne’s Prompt Security portfolio[9]).
But frameworks and certifications trail operational reality. Organizations deploying AI today face threats the security industry is still learning to classify, let alone defend against. The most effective approach weaves together:
- Hands-on experimentation to understand how attacks actually play out
- Security-first architecture built on the assumption that attacks will succeed
- Multi-layer defense since no single control provides adequate protection
- Rapid iteration because the threat landscape shifts monthly
- Vendor independence to preserve operational resilience
You can’t secure AI systems with traditional InfoSec alone. But you also can’t throw out the fundamentals—defense in depth, least privilege, assume breach, continuous monitoring—they all still apply. The challenge is adapting those principles to a probabilistic, fast-moving threat landscape where some vulnerabilities may never have a fix.
That’s the path I’ve been walking since I moved from traditional ISSM work into AI security. It’s uncomfortable, uncertain, and forces you to accept that many questions don’t have good answers yet. But this is the future of cybersecurity—and it’s already here.
What’s Your Experience?
If you’ve made the jump from traditional InfoSec to AI security — or you’re in the middle of it — I’d like to hear what hit you hardest. Which assumptions did you have to unlearn? What frameworks or approaches are you building to fill the gaps? Drop a comment or reach out directly. The more practitioners sharing real experience, the faster we all get better at this.
Footnotes
- World Economic Forum - Non-Human Identities in AI Cybersecurity (link removed; see removed-links.md)