    Guides

    AI GPUs will soon need more power than a small country, as HBM memory growth spirals out of control

By Techurz · June 19, 2025 · Updated: May 12, 2026 · 3 min read
(Image: HBM memory)


    • Future AI memory chips could demand more power than entire industrial zones combined
    • 6TB of memory in one GPU sounds amazing until you see the power draw
    • HBM8 stacks are impressive in theory, but terrifying in practice for any energy-conscious enterprise

    The relentless drive to expand AI processing power is ushering in a new era for memory technology, but it comes at a cost that raises practical and environmental concerns, experts have warned.

Research by the Korea Advanced Institute of Science & Technology (KAIST) and the Terabyte Interconnection and Package Laboratory (TERA) suggests that by 2035, AI GPU accelerators equipped with 6TB of HBM could become a reality.

    These developments, while technically impressive, also highlight the steep power demands and increasing complexity involved in pushing the boundaries of AI infrastructure.

    Rise in AI GPU memory capacity brings huge power consumption

The roadmap shows that the evolution from HBM4 to HBM8 will deliver major gains in bandwidth, memory stacking, and cooling techniques.

    Starting in 2026 with HBM4, Nvidia’s Rubin and AMD’s Instinct MI400 platforms will incorporate up to 432GB of memory, with bandwidths reaching nearly 20TB/s.

    This memory type employs direct-to-chip liquid cooling and custom packaging methods to handle power densities around 75 to 80W per stack.

    HBM5, projected for 2029, doubles the input/output lanes and moves toward immersion cooling, with up to 80GB per stack consuming 100W.


    However, the power requirements will continue to climb with HBM6, anticipated by 2032, which pushes bandwidth to 8TB/s and stack capacity to 120GB, each drawing up to 120W.

These figures quickly add up: full GPU packages are expected to consume up to 5,920W each, assuming 16 HBM6 stacks in a system.
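A rough decomposition makes the 5,920W package figure easier to read. The stack count and per-stack wattage come from the article; the split between memory and everything else is inferred here for illustration, not stated in the source.

```python
# Sanity check on the HBM6-era package power cited above.
# Inputs are the article's figures; the memory/compute split is inferred.
hbm_stacks = 16
watts_per_stack = 120          # HBM6 draw per stack
package_watts = 5_920          # full GPU package

memory_watts = hbm_stacks * watts_per_stack      # power attributable to HBM alone
remainder_watts = package_watts - memory_watts   # left for compute dies, I/O, losses

print(memory_watts)     # 1920
print(remainder_watts)  # 4000
```

In other words, even at 120W per stack, the memory accounts for roughly a third of the package budget under these assumptions.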

    By the time HBM7 and HBM8 arrive, the numbers stretch into previously unimaginable territory.

    HBM7, expected around 2035, triples bandwidth to 24TB/s and enables up to 192GB per stack. The architecture supports 32 memory stacks, pushing total memory capacity beyond 6TB, but the power demand reaches 15,360W per package.
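The HBM7 capacity claim checks out arithmetically, using only the per-stack and stack-count figures quoted above:

```python
# Capacity check for the HBM7 figures above (all inputs from the article).
stacks = 32
gb_per_stack = 192
total_gb = stacks * gb_per_stack

print(total_gb)  # 6144, i.e. just over 6 TB per package
```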

That estimated 15,360W per package represents roughly a sevenfold rise in just nine years.

    This means that a million of these in a data center would consume 15.36GW, a figure that roughly equals the UK’s entire onshore wind generation capacity in 2024.
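The data-center-scale figure follows directly from the per-package number; a million is the article's illustrative unit count, not a shipment forecast.

```python
# Scaling the 15,360 W per-package figure to a million deployed units.
packages = 1_000_000
watts_each = 15_360

gigawatts = packages * watts_each / 1e9
print(gigawatts)  # 15.36
```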

    HBM8, projected for 2038, further expands capacity and bandwidth with 64TB/s per stack and up to 240GB capacity, using 16,384 I/O and 32Gbps speeds.

    It also features coaxial TSV, embedded cooling, and double-sided interposers.
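The HBM8 bandwidth figure can be reproduced from the quoted lane count and signaling rate; note that the round 64TB/s only falls out under the binary convention (1TB = 1,024GB), which the article does not state explicitly.

```python
# Bandwidth check for HBM8: 16,384 I/O lanes at 32 Gb/s each.
io_lanes = 16_384
gbit_per_lane = 32

total_gbit_s = io_lanes * gbit_per_lane  # 524,288 Gb/s
gbyte_s = total_gbit_s / 8               # 65,536 GB/s
tbyte_s = gbyte_s / 1024                 # binary-TB convention

print(tbyte_s)  # 64.0
```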

    The growing demands of AI and large language model (LLM) inference have driven researchers to introduce concepts like HBF (High-Bandwidth Flash) and HBM-centric computing.

    These designs propose integrating NAND flash and LPDDR memory into the HBM stack, relying on new cooling methods and interconnects, but their feasibility and real-world efficiency remain to be proven.

