For example, if an agent is told to reduce “noise” in the security operations center, it might interpret this too literally and suppress valid alerts in its effort to streamline operations, leaving an organization blind to an active intrusion, Joyce says.
Agentic AI systems are designed to act independently, but without strong governance, this autonomy can quickly become a liability, Riboldi says. “A seemingly harmless agent given vague or poorly scoped instructions might overstep its boundaries, initiating workflows, altering data, or interacting with critical systems in unintended ways,” he says.
In an agentic AI environment, “there is a lot of autonomous action without oversight,” Mayham says. “Unlike traditional automation, agents make choices that could mean clicking links, sending emails, triggering workflows. And this is all based on probabilistic reasoning. When those choices go wrong, it’s hard to reconstruct why. We’ve seen [clients] of ours accidentally exposing sensitive internal URLs by misunderstanding what ‘safe to share’ means.”
