What defenders should do with what offensive research surfaces: detection engineering, tool-output triage, and the structural gaps in AppSec programs — across the fifteen domains where mature programs are built.
Most AppSec investment goes into the domain that looked urgent last quarter, not the one that blocks the most attack paths. We study the tradeoffs — which detection controls actually reduce risk, and which are theater.
Most AppSec metrics are vanity numbers. We write about the handful that reflect program maturity — and why the rest of the dashboard is a distraction.
Agent permissions, prompt-injection guardrails, and model-file validation are new names for old SAMM problems — threat modeling, secure build, operational monitoring. Our research maps the existing maturity model onto the AI attack surface: MLOps access boundaries, agent sandboxing, and detection patterns for tool abuse in LLM applications.