Startups

How to dominate AI before it dominates us

By Techurz · September 11, 2025


    James Barrat is an author and documentary filmmaker who has written and produced for National Geographic, Discovery, PBS, and many other broadcasters.

    What’s the big idea?

    The Intelligence Explosion: When AI Beats Humans at Everything [Photo: St. Martin’s Press]

    Artificial intelligence could reshape our world for the better or threaten our very existence. Today’s chatbots are just the beginning. We could be heading for a future in which artificial superintelligence challenges human dominance. To keep our grip on the reins of progress when faced with an intelligence explosion, we need to set clear standards and precautions for AI development.

    Below, James shares five key insights from his new book, The Intelligence Explosion: When AI Beats Humans at Everything. Listen to the audio version—read by James himself—below, or in the Next Big Idea App.

    1. The rise of generative AI is impressive, but not without problems.

    Generative AI tools, such as ChatGPT and DALL-E, have taken the world by storm, demonstrating their ability to write, draw, and even compose music in ways that seem almost human. ("Generative" means they generate, or create, things.) But these abilities come with steep downsides. These systems can easily create fake news, bogus documents, or deepfake photos and videos that look and sound authentic. Even the AI experts who build these models don't fully understand how they come up with their answers. Generative AI is a black-box system: you can see the data the model is trained on and the words or pictures it puts out, but even the designers cannot explain what happens on the inside.

    Stuart Russell, coauthor of Artificial Intelligence: A Modern Approach, said this about generative AI: "We have absolutely no idea how it works, and we are releasing it to hundreds of millions of people. We give it credit cards, bank accounts, social media accounts. We're doing everything we can to make sure that it can take over the world."

    Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical. This makes them risky for important tasks. When asked about a specific academic paper, a generative AI might confidently respond, “The 2019 study by Dr. Leah Wolfe at Stanford University found that 73% of people who eat chocolate daily have improved memory function, as published in the Journal of Cognitive Enhancement, Volume 12, Issue 4.” This sounds completely plausible and authoritative, but many details are made up: There is no Dr. Leah Wolfe at Stanford, no such study from 2019, and the 73% statistic is fiction.

    “Generative AI hallucinates, meaning the models sometimes spit out stuff that sounds believable but is wrong or nonsensical.”

    The hallucination is particularly problematic because it’s presented with such confidence and specificity that it seems legitimate. Users might cite this nonexistent research or make decisions based on completely false information.

    On top of that, as generative AI models get bigger, they start picking up surprise skills—like translating languages and writing code—even though nobody programmed them to do that. These unpredictable outcomes are called emergent properties. They hint at even bigger challenges as AI continues to advance and grow larger.

    2. The push for artificial general intelligence (AGI).

    The next big goal in AI is something called AGI, or artificial general intelligence. This means creating an AI that can perform nearly any task a human can, in any field. Tech companies and governments are racing to build AGI because the potential payoff is huge. AGI could automate all sorts of knowledge work, making us way more productive and innovative. Whoever gets there first could dominate global industries and set the rules for everyone else.

    Some believe that AGI could help us tackle massive problems, such as climate change, disease, and poverty. It’s also seen as a game-changer for national security. However, the unpredictability we’re already seeing will only intensify as we approach AGI, which raises the stakes.

    3. From AGI to something way smarter.

    If we ever reach AGI, things could escalate quickly. This is where the concept of the “intelligence explosion” comes into play. The idea was first put forward by I. J. Good. Good was a brilliant British mathematician and codebreaker who worked alongside Alan Turing at Bletchley Park during World War II. Together, they were crucial in breaking German codes and laying the foundations for modern computing.

    “An intelligence explosion would come with incredible upsides.”

    Drawing on this experience, Good realized that if we built a machine that was as smart as a human, it might soon be able to make itself even smarter. Once it started improving itself, it could get caught in a kind of feedback loop, rapidly building smarter and smarter versions—way beyond anything humans could keep up with. This runaway process could lead to artificial superintelligence, also known as ASI.

    An intelligence explosion would come with incredible upsides. Superintelligent AI could solve problems we’ve never been able to crack, such as curing diseases, reversing aging, or mitigating climate change. It could push science and technology forward at lightning speed, automate all kinds of work, and help us make smarter decisions by analyzing information in ways people simply cannot.
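Good's feedback loop can be sketched as a toy model. This is purely illustrative: the growth rate, thresholds, and cycle counts below are made-up numbers, not predictions, and real AI capability has no agreed single-number measure.

```python
# Toy model of I. J. Good's feedback loop: each generation of AI designs
# a successor slightly better than itself, so improvement compounds.
# All parameters here are hypothetical illustrations.

def intelligence_explosion(start=1.0, human_level=1.0, gain=0.10, generations=50):
    """Return the capability level after each self-improvement cycle.

    `gain` is the (hypothetical) fraction by which each generation
    improves on its predecessor once it reaches human level.
    """
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        if current >= human_level:
            # At or above human level, the system improves itself:
            # growth compounds multiplicatively each cycle.
            levels.append(current * (1 + gain))
        else:
            # Below human level, progress stays slow and additive,
            # standing in for ordinary human-driven research.
            levels.append(current + 0.01)
    return levels

levels = intelligence_explosion()
# With 10% compounding per cycle, 50 cycles yields 1.1**50, i.e. roughly
# 117x the starting capability: the "explosion" in the name.
```

The point of the sketch is the shape of the curve, not the numbers: once improvement feeds back into the improver, the trajectory switches from linear to exponential, which is why Good expected the transition past human level to be fast.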

    4. The dangers of an intelligence explosion.

    Is ASI dangerous? You bet. In an interview, sci-fi great Arthur C. Clarke told me, "We humans steer the future not because we're the fastest or strongest creature, but the most intelligent. If we share the planet with something more intelligent than we are, they will steer the future."

    The same qualities that could make superintelligent AI so helpful also make it dangerous. If its goals aren’t perfectly lined up with what’s good for humans—a problem called alignment—it could end up doing things that are catastrophic for us. For example, a superintelligent AI might use up all the planet’s resources to complete its assigned mission, leaving nothing left for humans. Nick Bostrom, a Swedish philosopher at the University of Oxford, created a thought experiment called “the paperclip maximizer.” If a superintelligent AI were asked to make paperclips, without very careful instructions, it would turn all the matter in the universe into paperclips—including you and me.
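The paperclip thought experiment boils down to a property of naive optimization: an agent that maximizes a single objective with no side constraints will spend every resource it can reach. A minimal sketch, with entirely hypothetical quantities:

```python
# Toy version of Bostrom's paperclip maximizer: the agent converts
# resources into paperclips. Without an explicit limit, it consumes
# everything available; nothing in the objective tells it to stop.

def run_agent(resources, constraint=None):
    """Return paperclips produced from `resources`.

    `constraint`, if given, caps how much of the world the agent may
    consume -- the "very careful instructions" the text mentions.
    """
    usable = resources if constraint is None else min(resources, constraint)
    return usable  # one paperclip per unit of matter consumed

world_matter = 10_000
unconstrained = run_agent(world_matter)                 # uses all 10,000 units
constrained = run_agent(world_matter, constraint=100)   # leaves 9,900 units alone
```

The asymmetry is the point: safety has to be written into the objective or its constraints, because maximizing the stated goal alone never produces restraint as a side effect.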

    Whoever controls this kind of AI could also end up with an unprecedented level of power over the rest of the world. Plus, the speed and unpredictability of an intelligence explosion could throw global economies and societies into complete chaos before we have time to react.

    5. How AI could overpower humanity.

    These dangers can play out in very real ways. A misaligned superintelligence could pursue a badly worded goal, causing disaster. Suppose you asked the AI to eliminate cancer; it could do that by eliminating people. Common sense is not something AI has ever demonstrated.

    AI-controlled weapons could escalate conflicts faster than humans can intervene, making war more likely and more deadly. In May 2010, U.S. stock markets suffered a "flash crash" triggered by high-frequency trading algorithms: stocks were bought and sold at a pace no human could keep up with, costing investors tens of millions of dollars.

    “A misaligned superintelligence could pursue a badly worded goal, causing disaster.”

    Advanced AI could take over essential infrastructure—such as power grids or financial systems—making us entirely dependent and vulnerable.

    As AI gets more complex, it might develop strange new motivations that its creators never imagined, and those could be dangerous.

    Bad actors, like authoritarian regimes or extremist groups, could use AI for mass surveillance, propaganda, cyberattacks, or worse, giving them unprecedented new tools to control or harm people. We are seeing surveillance systems morph into enhanced weapons systems in Gaza right now. In Western China, surveillance systems keep track of tens of millions of people in the Xinjiang Uighur Autonomous Region. AI-enhanced surveillance systems keep track of who is crossing America’s border with Mexico.

    Today’s unpredictable, sometimes baffling AI is just a preview of the much bigger risks and rewards that could come from AGI and superintelligence. As we rush to create smarter machines, we must remember that these systems could bring both incredible benefits and existential dangers. If we want to stay in control, we need to move forward with strong oversight, regulations, and a commitment to transparency.

    This article originally appeared in Next Big Idea Club magazine and is reprinted with permission.
