Cybercriminals aren’t just using AI — they’re weaponizing it. Deepfakes, automated phishing, and AI-written malware are emerging as some of the fastest-growing threats on the enterprise radar. According to Foundry’s 2025 Security Priorities Study, AI-enabled attacks now rank among the top concerns for security buyers, even as a majority of organizations are investing in or planning to invest in AI-driven defenses. The battle lines are clear: AI versus AI.
Recent CSO reporting paints an unsettling picture of what’s already happening. Autonomous AI agents are learning to execute full attack chains — from reconnaissance and exploitation to evasion and data theft — without human direction. Researchers have documented AI models used to generate extortion emails, launch ransomware, and discover new vulnerabilities in minutes. As one expert put it, attackers are “operating at computer speed and scale,” threatening to tilt the balance of power decisively in their favor.
For defenders, the answer isn’t to blindly match automation with automation. Security leaders interviewed by CSO describe a growing focus on treating AI as a “copilot, not an autopilot.” Well-governed AI can accelerate detection, triage, and containment, but it still depends on strong human oversight to stay effective. “The real win isn’t just speed,” one CISO told CSO. “It’s handling the routine stuff so analysts can focus on the complex and strategic problems that machines can’t.”