Training the next generation
If AI agents take on the work of tier-one analysts, how do new SOC team members learn the ropes? Vinod Goje is optimistic.
Tier-one analyst work has traditionally been the training ground of security careers. The paradox of agentic AI is that while it relieves humans of repetitive triage, it also risks eroding the very “muscle memory” new analysts used to build by grinding through alerts.
But much of the rote triage, such as filtering out obvious false positives, cutting through duplicate alerts, and escalating routine phishing cases, teaches analysts little more than patience. AI excels at handling these menial tasks, allowing human analysts to focus on more complex challenges.
That transforms tier one from grunt work into a guided training ground: Instead of drowning in noise, new analysts study curated, AI-documented cases and learn by interrogating the agent’s rationale. So yes, if left unchecked, agentic AI could create a talent-pipeline gap. But used deliberately, it can actually accelerate skill development.
Pricing, value, and program design
Agentic AI capabilities and governance are important, of course, but one of the biggest drivers for adopting agentic AI in security comes down to economics. Security leaders want to know: How much money and time does this save us? The answer is not always straightforward.
“Pricing remains a friction point,” says Fifthelement.ai’s Garini. “Vendors are playing with usage-based models, but organizations are finding value when they tie spend to analyst hours saved rather than raw compute or API calls.”
Mindgard’s Glynn notes the variability in AI pricing models on the market today. “A charge can be per subscription, per seat, or per alert. Other vendors may offer usage-based plans, too,” he says. “Advanced agent systems are usually costly, as they have wider impact and opportunity of savings on analyst workloads.”