    Will AI think like humans? We’re not even close – and we’re asking the wrong question

By Techurz | July 24, 2025


(Image: Westend61/Getty Images)

Artificial intelligence may have impressive inference powers, but don't count on it to reach anything close to human reasoning anytime soon. The march to so-called artificial general intelligence (AGI), or AI capable of applying reasoning across changing tasks and environments the way humans do, is still a long way off. Large reasoning models (LRMs), while not perfect, offer a tentative step in that direction.

    In other words, don’t count on your meal-prep service robot to react appropriately to a kitchen fire or a pet jumping on the table and slurping up food. 

    Also: Meta’s new AI lab aims to deliver ‘personal superintelligence for everyone’ – whatever that means

The holy grail of AI has long been to think and reason like a human, and industry leaders and experts agree that we still have a long way to go before reaching that kind of intelligence. Large language models (LLMs) and their slightly more advanced LRM offspring operate on predictive analytics drawn from data patterns, not on complex human-like reasoning.

    Nevertheless, the chatter around AGI and LRMs keeps growing, and it was inevitable that the hype would far outpace the actual available technology. 

    “We’re currently in the middle of an AI success theatre plague,” said Robert Blumofe, chief technology officer and executive VP at Akamai. “There’s an illusion of progress created by headline-grabbing demos, anecdotal wins, and exaggerated capabilities. In reality, truly intelligent, thinking AI is a long ways away.”   

    A recent paper written by Apple researchers downplayed LRMs’ readiness. The researchers concluded that LRMs, as they currently stand, aren’t really conducting much reasoning above and beyond the standard LLMs now in widespread use. (My ZDNET colleagues Lester Mapp and Sabrina Ortiz provide excellent overviews of the paper’s findings.)

    Also: Apple’s ‘The Illusion of Thinking’ is shocking – but here’s what it missed

    LRMs are “derived from LLMs during the post-training phase, as seen in models like DeepSeek-R1,” said Xuedong Huang, chief technology officer at Zoom. “The current generation of LRMs optimizes only for the final answer, not the reasoning process itself, which can lead to flawed or hallucinated intermediate steps.” 

    LRMs employ step-by-step chains of thought, but “we must recognize that this does not equate to genuine cognition, it merely mimics it,” said Ivana Bartoletti, chief AI governance officer at Wipro. “It’s likely that chain-of-thought techniques will improve, but it’s important to stay grounded in our understanding of their current limitations.”  
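Bartoletti's point about mimicry can be made concrete. The sketch below is hypothetical (no vendor's API is shown): a chain-of-thought prompt is just extra text the model conditions on, so the "steps" it emits are part of its predicted output, not a verified derivation.

```python
# Minimal sketch (hypothetical, not any vendor's API): chain-of-thought
# "reasoning" is just extra text prepended to the prompt. The model still
# performs next-token prediction; the steps it emits come from the output
# distribution, not from a checked logical derivation.

def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Format a prompt, optionally asking for step-by-step output."""
    if chain_of_thought:
        return (
            f"Question: {question}\n"
            "Think step by step, then give the final answer.\n"
            "Steps:"
        )
    return f"Question: {question}\nAnswer:"

direct = build_prompt("What is 17 * 24?")
cot = build_prompt("What is 17 * 24?", chain_of_thought=True)

# The only difference is the text being conditioned on; nothing here checks
# that the intermediate steps are sound, which is why flawed or hallucinated
# steps can still precede a confident final answer.
```

This is why improving chain-of-thought output quality is not the same thing as adding genuine cognition: the verification step simply isn't part of the mechanism.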

    LRMs and LLMs are prediction engines, “not problem solvers,” Blumofe said. “Their reasoning is done by mimicking patterns, not by algorithmically solving problems. So it looks like logic, but doesn’t behave like logic. The future of reasoning in AI won’t come from LLMs or LRMs accessing better data or spending more time on reasoning. It requires a fundamentally different kind of architecture that doesn’t rely entirely on LLMs, but rather integrates more traditional technology tools with real-time user data and AI.”  
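One way to read Blumofe's architectural point is as a routing problem: send verifiable subproblems to deterministic tools and reserve the model for open-ended language tasks. The sketch below is illustrative only (the function names and the stub "LLM" are invented for this example, not a real framework).

```python
# Hedged sketch of the hybrid idea described above: rather than letting a
# language model mimic arithmetic via pattern-matching, route verifiable
# subproblems to a deterministic solver. All names are illustrative.
import re

def solve_arithmetic(expr: str) -> str:
    """Deterministic solver: evaluates simple 'a op b' integer expressions."""
    m = re.fullmatch(r"\s*(-?\d+)\s*([+\-*])\s*(-?\d+)\s*", expr)
    if not m:
        raise ValueError(f"not simple arithmetic: {expr!r}")
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str({"+": a + b, "-": a - b, "*": a * b}[op])

def call_llm(prompt: str) -> str:
    """Stand-in for a model call; a real system would query an LLM here."""
    return f"[LLM draft for: {prompt}]"

def answer(query: str) -> str:
    """Route: exact tools where the problem is verifiable, model otherwise."""
    try:
        return solve_arithmetic(query)   # behaves like logic
    except ValueError:
        return call_llm(query)           # falls back to prediction

# answer("17 * 24") is computed, not predicted: "408"
```

The design choice is the point: the arithmetic answer behaves like logic because it is produced by an algorithm, while the fallback path remains a prediction whose correctness is not guaranteed.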

    Also: 9 programming tasks you shouldn’t hand off to AI – and why

    Right now, a better term for AI’s reasoning capabilities may be “jagged intelligence,” said Caiming Xiong, vice president of AI research at Salesforce. “This is where AI systems excel at one task but fail spectacularly at another — particularly within enterprise use cases.” 

    What are the potential use cases for LRMs? And what’s the benefit of adopting and maintaining these models? For starters, use cases may look more like extensions of current LLMs. They will arise in a number of areas — but it’s complicated. “The next frontier of reasoning models are reasoning tasks that — unlike math or coding — are hard to verify automatically,” said Daniel Hoske, CTO at Cresta. 

    Currently, available LRMs cover most of the use cases of classic LLMs — such as “creative writing, planning, and coding,” said Petros Efstathopoulos, vice president of research at RSA Conference. “As LRMs continue to be improved and adopted, there will be a ceiling to what models can achieve independently and what the model-collapse boundaries will be. Future systems will better learn how to use and integrate external tools like search engines, physics simulation environments, and coding or security tools.”  

    Also: 5 tips for building foundation models for AI

    Early use cases for enterprise LRMs include contact centers and basic knowledge work. However, these implementations “are rife with subjective problems,” Hoske said. “Examples include troubleshooting technical issues, or planning and executing a multi-step task, given only higher-level goals with imperfect or partial knowledge.” As LRMs evolve, these capabilities may improve, he predicted. 

    Typically, “LRMs excel at tasks that are easily verifiable but difficult for humans to generate — areas like coding, complex QA, formal planning, and step-based problem solving,” said Huang. “These are precisely the domains where structured reasoning, even if synthetic, can outperform intuition or brute-force token prediction.”  
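The asymmetry Huang describes, tasks easy to verify but hard to generate, can be shown with a toy example not taken from the article: checking that proposed factors multiply back to a number is one line, while finding those factors requires a search.

```python
# Toy illustration of the generate/verify asymmetry: verification is cheap
# even when generation is a search. (Illustrative example, not from the
# article.)

def verify_factorization(n: int, factors: list[int]) -> bool:
    """Cheap check: do the proposed factors (all > 1) reproduce n?"""
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

def find_factorization(n: int) -> list[int]:
    """Harder direction: trial-division search for a prime factorization."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors
```

Domains with this shape let a training pipeline score candidate outputs automatically, which is one reason structured reasoning over verifiable tasks can beat brute-force token prediction.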

    Efstathopoulos reported seeing solid uses of AI in medical research, science, and data analysis. “LRM research results are encouraging, with models already capable of one-shot problem solving, tackling complex reasoning puzzles, planning, and refining responses mid-generation.” But it’s still early in the game for LRMs, which may or may not be the best path to fully reasoning AI. 

    Also: How AI agents can generate $450 billion by 2028 – and what stands in the way

Trust in the results coming out of LRMs can also be problematic, as it has been for classic LLMs. “What matters is if, beyond capabilities alone, these systems can reason consistently and reliably enough to be trusted beyond low-stakes tasks and into critical business decision-making,” Salesforce’s Xiong said. “Today’s LLMs, including those designed for reasoning, still fall short.”

    This doesn’t mean language models are useless, Xiong emphasized. “We’re successfully deploying them for coding assistance, content generation, and customer service automation where their current capabilities provide genuine value.”

    Human reasoning is not without immense flaws and bias, either. “We don’t need AI to think like us — we need it to think with us,” said Zoom’s Huang. “Human-style cognition brings cognitive biases and inefficiencies we may not want in machines. The goal is utility, not imitation. An LRM that can reason differently, more rigorously, or even just more transparently than humans might be more helpful in many real-world applications.”   

    Also: People don’t trust AI but they’re increasingly using it anyway

    The goal of LRMs, and ultimately AGI, is to “build toward AI that’s transparent about its limitations, reliable within defined capabilities, and designed to complement human intelligence rather than replace it,” Xiong said. Human oversight is essential, as is “recognition that human judgment, contextual understanding, and ethical reasoning remain irreplaceable,” he added. 

