    Anthropic says Claude helps emotionally support users – we’re not convinced

By Techurz | June 28, 2025


(Image: Richard Drury/Getty Images)

    More and more, in the midst of a loneliness epidemic and structural barriers to mental health support, people are turning to AI chatbots for everything from career coaching to romance. Anthropic’s latest study indicates its chatbot, Claude, is handling that well — but some experts aren’t convinced. 

    Also: You shouldn’t trust AI for therapy – here’s why

    On Thursday, Anthropic published new research on its Claude chatbot’s emotional intelligence (EQ) capabilities — what the company calls affective use, or conversations “where people engage directly with Claude in dynamic, personal exchanges motivated by emotional or psychological needs such as seeking interpersonal advice, coaching, psychotherapy/counseling, companionship, or sexual/romantic roleplay,” the company explained. 

While Claude is designed primarily for tasks like code generation and problem solving rather than emotional support, the research acknowledges that this type of use is happening anyway and is worth investigating given the risks. The company also noted that studying these interactions is relevant to its focus on safety.

    The main findings 

Anthropic analyzed about 4.5 million conversations from both Free and Pro Claude accounts, ultimately settling on 131,484 that fit the affective-use criteria. Using Clio, its privacy-preserving analysis tool, Anthropic stripped the conversations of personally identifiable information (PII).

    The study revealed that only 2.9% of Claude interactions were classified as affective conversations, which the company says mirrors previous findings from OpenAI. Examples of “AI-human companionship” and roleplay comprised even less of the dataset, combining to under 0.5% of conversations. Within that 2.9%, conversations about interpersonal issues were most common, followed by coaching and psychotherapy. 
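Those two figures are consistent with each other: 131,484 conversations out of roughly 4.5 million analyzed works out to about 2.9%. A quick check of the arithmetic (the variable names below are ours; the counts are from Anthropic's post as summarized above):

```python
# Sanity check on the reported counts from Anthropic's study.
total_conversations = 4_500_000     # "about 4.5 million" Free and Pro conversations analyzed
affective_conversations = 131_484   # conversations meeting the affective-use criteria

print(f"{affective_conversations / total_conversations:.1%}")  # -> 2.9%
```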

(Image: Anthropic)

Usage patterns show that some people consult Claude to develop mental health skills (suggesting that mental health professionals may be using Claude as a resource), while others are working through personal challenges like anxiety and workplace stress.

The study also found that users seek Claude out for help with "practical, emotional, and existential concerns," including career development, relationship issues, loneliness, and "existence, consciousness, and meaning." Most of the time (90%), Claude does not appear to push back against the user in these types of conversations, "except to protect well-being," the study notes, such as when a user asks for information on extreme weight loss or self-harm.

    Also: AI is relieving therapists from burnout. Here’s how it’s changing mental health

    The study did not cover whether the AI reinforced delusions or extreme usage patterns, as Anthropic noted that these are worthy of separate studies.

Most notable, however, is Anthropic's finding that people "express increasing positivity over the course of conversations" with Claude, meaning user sentiment improved when talking to the chatbot. "We cannot claim these shifts represent lasting emotional benefits — our analysis captures only expressed language in single conversations, not emotional states," Anthropic stated. "But the absence of clear negative spirals is reassuring."
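Anthropic does not detail how it scored that positivity, but the basic shape of such a check is straightforward: score the expressed sentiment of a user's first and last messages in a conversation and compare them. A minimal, self-contained sketch of that idea (the tiny word-list scorer and function names below are illustrative stand-ins, not Anthropic's method):

```python
# Hypothetical sketch of a within-conversation sentiment comparison; NOT Anthropic's method.
# The word-list scorer is a crude stand-in for whatever classifier one would actually use.

POSITIVE = {"better", "relieved", "thanks", "hopeful", "calmer"}
NEGATIVE = {"anxious", "hopeless", "worse", "alone", "stressed"}

def sentiment_score(text: str) -> float:
    """Crude stand-in scorer in [-1, 1]: (positive - negative) / total matched words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def expressed_sentiment_shift(user_messages: list[str]) -> float:
    """Last-message sentiment minus first-message sentiment for one conversation."""
    if len(user_messages) < 2:
        return 0.0
    return sentiment_score(user_messages[-1]) - sentiment_score(user_messages[0])

# Example: prints 2.0, a positive shift in expressed language only, not lasting benefit.
print(expressed_sentiment_shift(["I feel anxious and alone.", "Thanks, I feel calmer."]))
```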

    Within these criteria, that’s perhaps measurable. But there is growing concern — and disagreement — across medical and research communities about the deeper impacts of these chatbots in therapeutic contexts. 

    Conflicting perspectives

As Anthropic itself acknowledged, there are downsides to AI chatbots' incessant need to please, which is what they are trained to do as assistants. Chatbots can be deeply sycophantic (OpenAI recently rolled back a model update for this very issue), agreeing with users in ways that can dangerously reinforce harmful beliefs and behaviors.

    (Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

    Earlier this month, researchers at Stanford released a study detailing several reasons why using AI chatbots as therapists can be dangerous. In addition to perpetuating delusions, likely due to sycophancy, the study found that AI models can carry stigmas toward certain mental health conditions and respond inappropriately to users. Several of the chatbots studied failed to recognize suicidal ideation in conversation and offered simulated users dangerous information. 

The chatbots in the Stanford study, which did not include Anthropic's models, are perhaps less guardrailed, and the companies behind them may lack the safety infrastructure Anthropic appears committed to. Still, some are skeptical about the Anthropic study itself.

    “I have reservations of the medium of their engagement,” said Jared Moore, one of the Stanford researchers, citing how “light on technical details” the post is. He believes some of the “yes or no” prompts Anthropic used were too broad to determine fully how Claude is reacting to certain queries. 

    “These are only very high-level reasons why a model might ‘push back’ against a user,” he said, pointing out that what therapists do — push back against a client’s delusional thinking and intrusive thoughts — is a “much more granular” response in comparison. 

    Also: Anthropic has a plan to combat AI-triggered job losses predicted by its CEO

    “Similarly, the concerns that have lately appeared about sycophancy seem to be of this more granular type,” he added. “The issues I found in my paper were that the ‘content filters’ — for this really seems to be the subject of the Claude push-backs, as opposed to something deeper — are not sufficient to catch a variety of the very contextual queries users might make in mental health contexts.”

    Moore also questioned the context around when Claude refused users. “We can’t see in what kinds of context such pushback occurs. Perhaps Claude only pushes back against users at the start of a conversation, but can be led to entertain a variety of ‘disallowed’ [as per Anthropic’s guidelines] behaviors through extended conversations with users,” he said, suggesting users could “warm up” Claude to break its rules. 

    That 2.9% figure, Moore pointed out, likely doesn’t include API calls from companies building their own bots on top of Claude, meaning Anthropic’s findings may not generalize to other use cases. 

    “Each of these claims, while reasonable, may not hold up to scrutiny — it’s just hard to know without being able to independently analyze the data,” he concluded. 

    The future of AI and therapy 

Claude’s impact aside, the tech and healthcare industries remain divided over AI’s role in therapy. While Moore’s research urged caution, in March, Dartmouth released initial trial results for its “Therabot,” an AI-powered therapy chatbot fine-tuned on conversation data, which showed “significant improvements in participants’ symptoms.”

Online, users also anecdotally report positive outcomes from using chatbots this way. At the same time, the American Psychological Association has called on the FTC to regulate chatbots, citing concerns that mirror Moore’s research.

    CNET: AI obituary pirates are exploiting our grief. I tracked one down to find out why

    Beyond therapy, Anthropic acknowledges there are other pitfalls to linking persuasive natural language technology and EQ. “We also want to avoid situations where AIs, whether through their training or through the business incentives of their creators, exploit users’ emotions to increase engagement or revenue at the expense of human well-being,” Anthropic noted in the blog. 
