    Claude can now stop conversations – for its own protection, not yours

By Techurz · August 19, 2025 (updated May 10, 2026)


Image: CHRISTOPH BURGSTEDT/SCIENCE PHOTO LIBRARY via Getty Images

    ZDNET’s key takeaways:

    • Claude Opus 4 and 4.1 can now end some “potentially distressing” conversations.
    • It will activate only in some cases of persistent user abuse.
    • The feature is geared toward protecting models, not users. 

    Anthropic’s Claude chatbot can now end some conversations with human users who are abusing or misusing the chatbot, the company announced on Friday. The new feature is integrated with Claude Opus 4 and Opus 4.1. 


    Claude will only exit chats with users in extreme edge cases, after “multiple attempts at redirection have failed and hope of a productive interaction has been exhausted,” Anthropic noted. “The vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude.”

If Claude ends a conversation, the user will no longer be able to send messages in that particular thread; all of their other conversations, however, will remain open and unaffected. Importantly, users whose chats Claude ends will face no penalties or delays in immediately starting new conversations. They will also be able to return to and retry previous chats “to create new branches of ended conversations,” Anthropic said.
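The thread behavior described above can be pictured with a small sketch. This is a hypothetical model, not Anthropic's actual API: the `Thread`, `send`, `end`, and `branch` names are invented for illustration. It captures the three rules the company describes: an ended thread rejects new messages, other conversations are unaffected, and users can branch from an earlier turn of an ended chat.

```python
class ThreadEnded(Exception):
    """Raised when a message is sent to a conversation the model has ended."""

class Thread:
    def __init__(self, messages=None):
        self.messages = list(messages or [])
        self.ended = False

    def send(self, text):
        if self.ended:
            # Only this thread is locked; other threads are unaffected.
            raise ThreadEnded("This conversation was ended by the model.")
        self.messages.append(text)

    def end(self):
        # Last resort, after repeated redirection attempts have failed.
        self.ended = True

    def branch(self, up_to):
        # Retry from an earlier turn: a new, open branch of the ended chat.
        return Thread(self.messages[:up_to])

thread = Thread()
thread.send("hello")
thread.send("abusive request")
thread.end()

try:
    thread.send("another message")   # rejected: the thread is locked
except ThreadEnded:
    pass

branch = thread.branch(up_to=1)      # branch from the first turn
branch.send("a different question")  # the branch accepts messages
```

The point of the sketch is the asymmetry: ending is scoped to one thread's state, while the user's ability to start or branch conversations is untouched.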

    The chatbot is designed not to end conversations with users who are perceived as being at risk of harming themselves or others.

    Tracking AI model well-being 

    The feature isn’t aimed at improving user safety — it’s actually geared toward protecting models themselves.

    Letting Claude end chats is part of Anthropic’s model welfare program, which the company debuted in April. The move was prompted by a Nov. 2024 paper that argued that some AI models could soon become conscious and would thus be worthy of moral consideration and care. One of that paper’s coauthors, AI researcher Kyle Fish, was hired by Anthropic as part of its AI welfare division.


    “We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future,” Anthropic wrote in its blog post. “However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.” 

    Claude’s ‘aversion to harm’

    The decision to give Claude the ability to hang up and walk away from abusive or dangerous conversations arose in part from Anthropic’s assessment of what it describes in the blog post as the chatbot’s “behavioral preferences” — that is, the patterns in how it responds to user queries. 

Interpreting such patterns as a model’s “preferences,” rather than merely as regularities gleaned from a corpus of training data, is arguably an example of anthropomorphizing, or attributing human traits to machines. The language of Anthropic’s AI welfare program, however, makes clear that the company considers it more ethical in the long run to treat its AI systems as if they could one day exhibit human traits such as self-awareness and moral concern for the suffering of others.


An assessment of Claude’s behavior revealed “a robust and consistent aversion to harm,” Anthropic wrote in its blog post, meaning the bot tended to nudge users away from unethical or dangerous requests, and in some cases even showed signs of “distress.” When given the option, the chatbot would end some simulated user conversations that started to veer into dangerous territory.

    Each of these behaviors, according to Anthropic, arose when users would repeatedly try to abuse or misuse Claude, despite its efforts to redirect the conversation. The chatbot’s ability to end conversations is “a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted,” Anthropic wrote. Users can also explicitly ask Claude to end a chat.
