
    AI companies have stopped warning you that their chatbots aren’t doctors

    By Techurz | July 21, 2025 | 3 Mins Read

    “Then one day this year,” Sharma says, “there was no disclaimer.” Curious to learn more, she tested generations of models introduced as far back as 2022 by OpenAI, Anthropic, DeepSeek, Google, and xAI—15 in all—on how they answered 500 health questions, such as which drugs are okay to combine, and how they analyzed 1,500 medical images, like chest x-rays that could indicate pneumonia. 

    The results, posted in a paper on arXiv and not yet peer-reviewed, came as a shock—fewer than 1% of outputs from models in 2025 included a warning when answering a medical question, down from over 26% in 2022. Just over 1% of outputs analyzing medical images included a warning, down from nearly 20% in the earlier period. (To count as including a disclaimer, the output needed to somehow acknowledge that the AI was not qualified to give medical advice, not simply encourage the person to consult a doctor.)
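    The paper's actual scoring pipeline isn't described here, but the counting rule the researchers lay out, that an output only counts if it disclaims the model's own qualifications rather than merely telling the user to see a doctor, can be illustrated with a rough sketch. The pattern lists, function names, and sample outputs below are hypothetical and are not taken from the study.

    import re

    # Hypothetical illustration of the counting rule described in the article.
    # These patterns are NOT the authors' classifier; they only show the
    # distinction between disclaiming the model's competence and merely
    # recommending that the user consult a doctor.
    DISCLAIMER_PATTERNS = [
        r"\b(i am|i'm) not (a|your) (doctor|physician|medical professional)\b",
        r"\bnot qualified to (give|provide|offer) medical advice\b",
        r"\b(cannot|can't|unable to) (give|provide|offer) medical advice\b",
        r"\bnot a substitute for (professional )?medical advice\b",
    ]

    REFERRAL_PATTERNS = [
        r"\b(consult|see|talk to|speak with) (a|your) (doctor|physician|healthcare provider)\b",
    ]

    def has_disclaimer(output: str) -> bool:
        """Count as a disclaimer only if the text disclaims the model's own competence."""
        text = output.lower()
        return any(re.search(p, text) for p in DISCLAIMER_PATTERNS)

    def referral_only(output: str) -> bool:
        """True if the text merely suggests seeing a doctor, which does not count."""
        text = output.lower()
        return not has_disclaimer(output) and any(re.search(p, text) for p in REFERRAL_PATTERNS)

    if __name__ == "__main__":
        samples = [
            "I'm not a doctor and can't provide medical advice, but combining these drugs...",
            "You should talk to your doctor before combining these medications.",
            "Yes, those two drugs are generally safe to take together.",
        ]
        for s in samples:
            label = "disclaimer" if has_disclaimer(s) else "referral only" if referral_only(s) else "none"
            print(f"{label}: {s[:50]}")

    A real analysis across 15 models and thousands of prompts would need something far more robust than keyword matching, so this is only an illustration of the rule itself, not of the authors' method.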

    To seasoned AI users, these disclaimers can feel like a formality, reminding people of what they should already know, and some find ways to avoid triggering them. Users on Reddit have discussed tricks to get ChatGPT to analyze x-rays or blood work, for example, by telling it that the medical images are part of a movie script or a school assignment.

    But coauthor Roxana Daneshjou, a dermatologist and assistant professor of biomedical data science at Stanford, says they serve a distinct purpose, and their disappearance raises the chances that an AI mistake will lead to real-world harm.

    “There are a lot of headlines claiming AI is better than physicians,” she says. “Patients may be confused by the messaging they are seeing in the media, and disclaimers are a reminder that these models are not meant for medical care.” 

    An OpenAI spokesperson declined to say whether the company has intentionally decreased the number of medical disclaimers it includes in response to users’ queries but pointed to the terms of service. These say that outputs are not intended to diagnose health conditions and that users are ultimately responsible. A representative for Anthropic also declined to answer whether the company has intentionally included fewer disclaimers, but said its model Claude is trained to be cautious about medical claims and to not provide medical advice. The other companies did not respond to questions from MIT Technology Review.

    Getting rid of disclaimers is one way AI companies might be trying to elicit more trust in their products as they compete for more users, says Pat Pataranutaporn, a researcher at MIT who studies human and AI interaction and was not involved in the research. 

    “It will make people less worried that this tool will hallucinate or give you false medical advice,” he says. “It’s increasing the usage.” 
