Like traditional data centers, AI data centers contain hardware, network, storage, data, and software components, making them targets of common cyberattacks: distributed denial of service (DDoS), ransomware, supply chain, and social engineering attacks. Data centers are also notoriously vulnerable to side-channel attacks – attacks that collect information about, or try to influence, a system's processes and execution – because data center hardware, from fans to central processing units (CPUs), can reveal sensitive information about CPU-level activity, data architecture, and usage. For example, in July 2025, AMD disclosed four new processor vulnerabilities that could enable side-channel attacks.
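To make the idea concrete, here is a minimal Python sketch of one classic side channel: timing. The secret, function names, and the comparison-count proxy for elapsed time are all hypothetical illustrations, not drawn from any real incident; the point is only that an early-exit comparison "runs longer" when a guess shares a longer correct prefix, which an attacker can observe.

```python
import hmac

SECRET = "hunter2"  # hypothetical secret, for illustration only

def leaky_compare(secret: str, guess: str) -> bool:
    # Early-exit comparison: runtime depends on how many leading
    # characters match, leaking information through timing.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def comparisons_used(secret: str, guess: str) -> int:
    # Deterministic stand-in for elapsed time: the number of character
    # comparisons performed before leaky_compare would return.
    if len(secret) != len(guess):
        return 0
    count = 0
    for a, b in zip(secret, guess):
        count += 1
        if a != b:
            break
    return count

def constant_time_compare(secret: str, guess: str) -> bool:
    # The standard defense: hmac.compare_digest takes time independent
    # of where (or whether) the inputs differ.
    return hmac.compare_digest(secret.encode(), guess.encode())
```

A guess with a correct prefix, such as `"hunter!"`, costs seven comparisons against this secret, while a completely wrong guess costs one; by iterating character by character, an attacker recovers the secret without ever seeing it, which is why constant-time comparison is the norm in security-sensitive code.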
The risks AI data centers face
Compared to traditional data centers, though, AI data centers face an expanded set of threats because of differences in their hardware, data, and purpose.
While large data centers use CPUs and graphics processing units (GPUs), AI data centers rely on GPUs because AI workloads demand more compute power and because GPUs support parallel operations. Application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs) are also powerful hardware options that can be customized to compute and process AI workloads efficiently. Google developed an ASIC specific to AI and deep learning called the Tensor Processing Unit (TPU). These more powerful resources are vulnerable to side-channel attacks just like CPUs: in January 2025, researchers disclosed TPUXtract, a TPU-specific side-channel attack that exploits data leaks to let threat actors infer an AI model's parameters.
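To see how a physical side channel can reveal model structure, consider a toy sketch (this is a loose illustration of the principle, not the actual TPUXtract technique). Assuming an attacker can observe a per-layer "cost" roughly proportional to multiply-accumulate operations, as a power or electromagnetic trace might provide, the hidden widths of a dense network fall out by simple division; all function names and the example model shape are hypothetical.

```python
def layer_cost(n_in: int, n_out: int) -> int:
    # Multiply-accumulate count for a dense layer: the quantity a
    # physical trace is assumed (hypothetically) to track.
    return n_in * n_out

def infer_widths(n_input: int, observed_costs: list[int]) -> list[int]:
    # Recover each hidden width from the leaked per-layer costs,
    # working forward from the known input size.
    widths = []
    n_in = n_input
    for cost in observed_costs:
        n_out = cost // n_in
        widths.append(n_out)
        n_in = n_out
    return widths

# Hypothetical victim model: 784 -> 128 -> 64 -> 10
costs = [layer_cost(784, 128), layer_cost(128, 64), layer_cost(64, 10)]
recovered = infer_widths(784, costs)  # recovers [128, 64, 10]
```

Real attacks must work from noisy analog measurements rather than exact operation counts, but the underlying logic is the same: observable execution cost encodes architectural parameters the operator never intended to expose.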
