
    Anthropic CEO claims AI models hallucinate less than humans

By Techurz · May 23, 2025

Anthropic CEO Dario Amodei believes today's AI models hallucinate (that is, make things up and present them as if they're true) at a lower rate than humans do. He made the claim during a press briefing at Anthropic's first developer event, Code with Claude, in San Francisco on Thursday.

    Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI — AI systems with human-level intelligence or better.

    “It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.

    Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that “the water is rising everywhere.”

    “Everyone’s always looking for these hard blocks on what [AI] can do,” said Amodei. “They’re nowhere to be seen. There’s no such thing.”

Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many "holes" and get too many obvious questions wrong. Hallucinations have also caused real problems for Anthropic itself: earlier this month, a lawyer representing the company was forced to apologize in court after using Claude to generate citations in a court filing, because the chatbot hallucinated names and titles.

    It’s difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems.

    However, there’s also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-gen reasoning models, and the company doesn’t really understand why.

    Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged the confidence with which AI models present untrue things as facts might be a problem.

    In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.

    Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.
