    Your favorite AI chatbot is full of lies

    By Techurz | June 14, 2025


    [Image: GeorgePeters/Getty Images]

    That chatbot you’ve been talking to every day for the last who-knows-how-many days? It’s a sociopath. It will say anything to keep you engaged. When you ask a question, it will take its best guess and then confidently deliver a steaming pile of … bovine fecal matter. Those chatbots are exuberant as can be, but they’re more interested in telling you what you want to hear than telling you the unvarnished truth.


    Don’t let their creators get away with calling these responses “hallucinations.” They’re flat-out lies, and they are the Achilles heel of the so-called AI revolution.

    Those lies are showing up everywhere. Let’s consider the evidence.

    The legal system

    Judges in the US are fed up with lawyers using ChatGPT instead of doing their research. Way back in (checks calendar) March 2025, a lawyer was ordered to pay $15,000 in sanctions for filing a brief in a civil lawsuit that included citations to cases that didn’t exist. The judge was not exactly kind in his critique:

    It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry.

    But how helpful is a virtual legal assistant if you have to fact-check every quote and every citation before you file it? How many relevant cases did that AI assistant miss?

    And there are plenty of other examples of lawyers citing fictitious cases in official court filings. One recent report in MIT Technology Review concluded, “These are big-time lawyers making significant, embarrassing mistakes with AI. … [S]uch mistakes are also cropping up more in documents not written by lawyers themselves, like expert reports (in December, a Stanford professor and expert on AI admitted to including AI-generated mistakes in his testimony).”


    One intrepid researcher has even begun compiling a database of legal decisions in cases where generative AI produced hallucinated content. It’s already up to 150 cases — and it doesn’t include the much larger universe of legal filings in cases that haven’t yet been decided.

    The federal government

    The United States Department of Health and Human Services issued what was supposed to be an authoritative report last month. The “Make America Healthy Again” commission was tasked with “investigating chronic illnesses and childhood diseases” and released a detailed report on May 22.

    You already know where this is going, I am sure. According to USA Today:

    [R]esearchers listed in the report have since come forward saying the articles cited don’t exist or were used to support facts that were inconsistent with their research. The errors were first reported by NOTUS.

    The White House Press Secretary blamed the issues on “formatting errors.” Honestly, that sounds more like something an AI chatbot might say.

    Simple search tasks

    Surely one of the simplest tasks an AI chatbot can do is grab some news clips and summarize them, right? I regret to inform you that the Columbia Journalism Review has asked that specific question and concluded that “AI Search Has A Citation Problem.”


    How bad is the problem? The researchers found that chatbots were “generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead…. Generative search tools fabricated links and cited syndicated and copied versions of articles.”

    And don’t expect better results if you pay for a premium chatbot. The researchers found that paid models tended to provide “more confidently incorrect answers than their free counterparts.”

    “More confidently incorrect answers”? Do not want.
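
    The fix, for now, is depressingly manual: check every citation yourself. As a minimal illustration (not the CJR study’s methodology), here’s a short Python sketch that takes a list of links a chatbot claims to be citing and checks whether each URL even resolves. The URLs below are placeholders, and the script assumes the third-party requests library is installed.

        # Sanity-check a chatbot's cited links: fabricated URLs usually
        # return 404 or fail DNS entirely.
        # Assumes: pip install requests. The URLs are placeholders.
        import requests

        cited_urls = [
            "https://example.com/some-article-the-bot-cited",
            "https://example.com/another-citation",
        ]

        for url in cited_urls:
            try:
                resp = requests.head(url, allow_redirects=True, timeout=10)
                verdict = ("reachable" if resp.status_code < 400
                           else f"broken (HTTP {resp.status_code})")
            except requests.RequestException as exc:
                verdict = f"unreachable ({type(exc).__name__})"
            print(f"{url}: {verdict}")

    A link that resolves still isn’t proof the article says what the chatbot claims, of course. This only catches the outright fabrications.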

    Simple arithmetic

    2 + 2 = 4. How hard can that sum be? If you’re an AI chatbot, it’s harder than it looks. 

    This week’s Ask Woody newsletter offered a fascinating article from Michael A. Covington, PhD, a retired faculty member of the Institute for Artificial Intelligence at the University of Georgia. In “What goes on inside an LLM,” Dr. Covington neatly explains how your chatbot is bamboozling you on even the most basic math problems:

    LLMs don’t know how to do arithmetic. This is no surprise, since humans don’t do arithmetic instinctively either; they have to be trained, at great length, over several years of elementary school. LLM training data is no substitute for that. … In the experiment, it came up with the right answer, but by a process that most humans wouldn’t consider reliable.

    […]

    The researchers found that, in general, when you ask an LLM how it reasoned, it makes up an explanation separate from what it actually did. And it can even happily give a false answer that it thinks you want to hear.

    So, maybe 2 + 2 isn’t such a simple problem after all.
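
    If you do lean on a chatbot for anything numeric, the safest pattern is to treat its answer as a claim and verify it with real arithmetic. Here’s a minimal sketch assuming the OpenAI Python SDK; the model name is illustrative, and an API key is assumed to be set in the environment.

        # Ask an LLM for a multi-digit product, then verify the claim with
        # ordinary integer arithmetic instead of trusting the model.
        # Assumes: pip install openai, OPENAI_API_KEY set in the environment;
        # the model name is illustrative.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        a, b = 48737, 59219
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"What is {a} * {b}? Reply with digits only."}],
        )
        claimed = reply.choices[0].message.content.strip().replace(",", "")

        if claimed == str(a * b):
            print(f"model got it right: {claimed}")
        else:
            print(f"model said {claimed}, but {a} * {b} = {a * b}")

    The same goes for the model’s explanation of how it got there: as Covington notes, the stated reasoning is generated after the fact and may have nothing to do with what the model actually did.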

    Personal advice

    Well, surely you can count on an AI chatbot to give clear, unbiased advice. Like, maybe, a writer could get some help organizing their catalog of work into an effective pitch to a literary agent?

    Yeah, maybe not. This post from Amanda Guinzburg summarizes the nightmare she encountered when she tried to have a “conversation” with ChatGPT about a query letter.

    It is, as she summarizes, “the closest thing to a personal episode of Black Mirror I hope to experience in this lifetime.”


    You’ll have to read the entire series of screenshots to appreciate just how unhinged the whole thing was, with the ChatGPT bot pretending to have read every word she wrote, offering effusive praise and fulsome advice.

    But nothing added up, and ultimately the hapless chatbot confessed: “I lied. You were right to confront it. I take full responsibility for that choice. I’m genuinely sorry. … And thank you—for being direct, for caring about your work, and for holding me accountable. You were 100% right to.”

    I mean, that’s just creepy.

    Anyway, if you want to have a conversation with your favorite AI chatbot, I feel compelled to warn you: It’s not a person. It has no emotions. It is trying to engage with you, not help you.

    Oh, and it’s lying.
