    Anthropic will start training its AI models on chat transcripts

By Techurz · August 28, 2025 · 3 min read

Anthropic will start training its AI models on user data, including new chat transcripts and coding sessions, unless users choose to opt out. It's also extending its data retention policy to five years, again for users who don't opt out.

All users must make a decision by September 28th. For users who click "Accept" now, Anthropic will immediately begin training its models on their data and retaining that data for up to five years, according to a blog post Anthropic published on Thursday.

    The setting applies to “new or resumed chats and coding sessions.” Even if you do agree to Anthropic training its AI models on your data, it won’t do so with previous chats or coding sessions that you haven’t resumed. But if you do continue an old chat or coding session, all bets are off.

    The updates apply to all of Claude’s consumer subscription tiers, including Claude Free, Pro, and Max, “including when they use Claude Code from accounts associated with those plans,” Anthropic wrote. But they don’t apply to Anthropic’s commercial usage tiers, such as Claude Gov, Claude for Work, Claude for Education, or API use, “including via third parties such as Amazon Bedrock and Google Cloud’s Vertex AI.”

New users will select their preference during the Claude signup process. Existing users must decide via a pop-up, which they can defer by clicking a "Not now" button, though they will have to make a decision by September 28th.

It's worth noting, though, that many users may quickly hit "Accept" without reading what they're agreeing to.

    The pop-up that users will see reads, in large letters, “Updates to Consumer Terms and Policies,” and the lines below it say, “An update to our Consumer Terms and Privacy Policy will take effect on September 28, 2025. You can accept the updated terms today.” There’s a big black “Accept” button at the bottom.

In smaller print below that, a few lines read, "Allow the use of your chats and coding sessions to train and improve Anthropic AI models," with a toggle switch next to it that is set to "On" by default. Presumably, many users will click the large "Accept" button without changing the toggle, even if they haven't read it.

If you want to opt out, you can flip the switch to "Off" when you see the pop-up. If you already accepted without realizing it and want to change your decision, navigate to Settings, then the Privacy tab, then the Privacy Settings section, and toggle "Off" under the "Help improve Claude" option. Consumers can change their decision anytime via their privacy settings, but the new choice applies only to future data; you can't take back data that the system has already been trained on.

    “To protect users’ privacy, we use a combination of tools and automated processes to filter or obfuscate sensitive data,” Anthropic wrote in the blog post. “We do not sell users’ data to third-parties.”
