
Last night, the Cloud Security Alliance (CSA) published "The AI Vulnerability Storm: Building a Mythos-Ready Security Program," a 30-page CISO brief shaped over the weekend by more than 250 security leaders. Terra contributed to that document. We served as reviewers, and several of our perspectives were incorporated into the Priority Actions section.
It is the clearest practitioner-oriented document to emerge from the Mythos announcement and provides a strong foundation for how security leaders should think about this shift.
What follows focuses on the areas that will matter most as you decide what to do next.
Discovery Is No Longer the Constraint. Execution Is.
Security programs have historically been constrained by discovery. The primary challenge was identifying vulnerabilities, mapping systems, and understanding where risk existed.
Discovery is no longer the limiting factor. Execution is.
Adversaries can now move across attack surfaces more quickly, test more conditions, and iterate on attack paths in ways that were not previously practical. The result is not just an increase in findings. It is an increase in the speed and depth with which those findings can be exploited.
This changes the core question facing security teams. It is no longer enough to ask whether vulnerabilities can be found; the question is whether findings can be validated, prioritized, and remediated at the pace they now arrive. Recent public demonstrations have shown what autonomous systems can do in controlled conditions, but they say little about how those capabilities behave when run consistently in production environments. That is the harder problem.
It is also worth noting that a model capable of reasoning across the full stack – misconfigurations, identity, network architecture, and business logic failures – dramatically changes the scope, and the importance of the harness. We wrote about this in depth here.
When the Attack Surface Is Meaning, Not Code
The CSA Brief identifies unmanaged AI agent attack surfaces as a critical risk. One example of this class of failure comes from a vulnerability Terra disclosed on January 1, 2026, later filed as CVE-2026-25724.
The issue involved a symbolic link handling flaw in Anthropic's Claude Code. Under certain conditions, repository content could influence the agent to access files outside its intended scope.
The mechanics of the bypass matter less than what the vulnerability represents.
This was not a traditional exploit. There was no injection or memory corruption. The system behaved according to its design, but within a context that had been shaped in a way the system was not built to safely interpret. Repository structure, comments, and surrounding content influenced how the agent reasoned about what it should access.
In these systems, any content an agent can read becomes part of the attack surface. That includes documentation, naming conventions, and structural patterns that were never previously treated as security-relevant inputs.
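A minimal sketch of the kind of containment check a harness could enforce for this class of failure. The function name and workspace layout are illustrative assumptions, not a description of Anthropic's actual fix: the idea is simply that every path an agent wants to read is resolved, symlinks and all, before it is compared against the permitted scope.

```python
import os

def resolve_within(workspace_root: str, requested_path: str) -> str:
    """Resolve a path the agent wants to read, following symlinks,
    and refuse anything that lands outside the workspace root."""
    root = os.path.realpath(workspace_root)
    # realpath follows symbolic links, so a link inside the repository
    # pointing at a sensitive file resolves to its real target
    # before the containment check runs.
    target = os.path.realpath(os.path.join(root, requested_path))
    if os.path.commonpath([root, target]) != root:
        raise PermissionError(f"{requested_path!r} resolves outside the workspace")
    return target
```

The important design choice is checking the resolved target rather than the requested string: string-prefix checks on the unresolved path are exactly what symlink tricks bypass.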
This introduces a semantic layer of risk that does not fit cleanly into existing categories. It requires a different approach to both modeling and control. It is not yet fully captured in the Brief’s risk register.
The Harness Is the Security Boundary
One of Terra's contributions appears in Priority Action #4, which focuses on defending agent-based systems.
In agentic systems, the harness defines the security boundary. The harness includes prompts, tool definitions, retrieval pipelines, and escalation logic. It is the layer where intent is translated into behavior and where constraints are either enforced or left ambiguous.
Failures in these systems do not occur only because of insufficient permissions. They occur because the system is allowed to interpret and act on context without clearly defined limits.
Most organizations deploying agents today do not treat the harness as a core security control. It is often managed as configuration, adjusted over time, and not subjected to the same rigor as application logic or infrastructure.
That gap is where many of the most consequential failures will occur.
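To make "the harness as a security control" concrete, here is a small sketch of what enforcement at that layer can look like: an explicit tool allowlist plus an escalation policy for sensitive actions. The class and method names are hypothetical, not drawn from any particular agent framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Harness:
    """Treats the harness as an enforced boundary: every tool call
    passes through an allowlist and an escalation policy."""
    tools: dict = field(default_factory=dict)
    requires_approval: set = field(default_factory=set)
    # Default approver denies everything; deployments wire in a
    # human-in-the-loop callback here.
    approver: Callable[[str, dict], bool] = lambda name, args: False

    def register(self, name, fn, *, escalate=False):
        self.tools[name] = fn
        if escalate:
            self.requires_approval.add(name)

    def invoke(self, name, **args):
        if name not in self.tools:
            raise PermissionError(f"tool {name!r} is not in the allowlist")
        if name in self.requires_approval and not self.approver(name, args):
            raise PermissionError(f"tool {name!r} requires human approval")
        return self.tools[name](**args)
```

The point is not the specific API but where the decision lives: the limit is enforced in the harness itself, not left to the model's interpretation of context.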
The Bottleneck Has Already Shifted
Another contribution appears in Priority Action #11, which focuses on establishing a dedicated vulnerability operations function.
The constraint has shifted from discovery to prioritization.
Systems that rely on AI will generate findings at a rate that exceeds any team's ability to manually process them. Without a clear model for triage, organizations will accumulate more data without improving their security posture.
The distinction that matters is between theoretical vulnerability and validated exposure. A vulnerability that cannot be reached or exploited in a given environment does not carry the same urgency as one that sits on a direct path to a critical asset.
Security programs that do not account for this distinction will struggle to maintain signal as they scale.
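The distinction between theoretical vulnerability and validated exposure can be expressed as a triage rule. The fields and weights below are illustrative assumptions, not a scoring standard: the property worth preserving is that an unreachable finding is deprioritized regardless of its base severity.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: float           # base severity, e.g. a CVSS score (0-10)
    reachable: bool           # is there a network/identity path to it?
    exploit_validated: bool   # did a safe exploitation attempt succeed?
    asset_criticality: float  # 0-1, business weight of the affected asset

def triage_priority(f: Finding) -> float:
    """Rank validated exposure above theoretical vulnerability."""
    if not f.reachable:
        # Unreachable findings carry a fraction of their nominal urgency.
        return f.severity * 0.1 * f.asset_criticality
    weight = 1.0 if f.exploit_validated else 0.5
    return f.severity * weight * (0.5 + 0.5 * f.asset_criticality)
```

Under this rule, a validated medium-severity exposure on a critical asset outranks an unreachable critical-severity finding, which is the ordering a VulnOps function needs to defend.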
Where to Spend More Time
The Brief connects emerging risks to established frameworks in a way that allows organizations to begin operationalizing them. It forces an honest assessment of whether security functions are positioned to execute, not just advise. And its recognition of the human dimension – that burnout and attrition are operational risks, not soft concerns – is something most security documents avoid entirely.
As you work through it, there are a few areas worth slowing down on.
Accountability is one. The Brief is clear that agents require governance before deployment. What is harder to define, and what your program will need to answer, is the chain of ownership when an agent takes an action that leads to unintended consequences. That question needs to be resolved before systems go into production, not after an incident.
The asymmetry is worth naming directly. Attackers operate without governance constraints, change management boards, or liability exposure. When an autonomous defensive agent causes unintended disruption in a production environment, the CISO is personally exposed. That gap between what defenders can technically deploy and what they are permitted to deploy is not a process problem. It is a liability problem, and it needs to be addressed before deployment decisions are made, not after.
The semantic manipulation example discussed earlier is another. The Brief's risk register captures agent attack surface as a category. The practical implication – that any content an agent can read is a potential attack vector – deserves more time than a single reading allows.
And the distinction between theoretical vulnerability and validated exposure is worth returning to after you have read the Priority Actions section. That distinction should shape how you design your VulnOps function from the start.
What to Do With This
Treat agent deployment as an operational problem. The harness should be defined and governed with the same rigor as any other production system. Boundaries, escalation paths, and override mechanisms need to be established before these systems interact with sensitive environments.
Success metrics also need to change. Counting findings is no longer meaningful when discovery can be automated at scale. What matters is how effectively teams can identify and remediate issues that are actually exploitable.
Human accountability is the third piece. These systems can accelerate execution, but they should not operate without clear ownership of decisions that carry real consequences.
Where Terra Fits
Continuous security validation requires systems that can execute at attacker speed while remaining constrained by clearly defined boundaries. It requires distinguishing between what is theoretically possible and what is actually exploitable in a given environment. It also requires maintaining human oversight where decisions have real impact.
That is the model Terra is built to support. Our contributions to the CSA Brief reflect that perspective – the harness as a control layer, prioritization as the primary constraint, and validated exposure as the unit of meaningful work.
The CSA Brief is essential reading for security leaders working through these changes. The industry has made meaningful progress in understanding how AI affects vulnerability discovery. The next phase is defining how these capabilities are operated safely and consistently in real environments. That is the problem security teams need to solve now.
