Close Menu
TechurzTechurz

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Why Unmonitored JavaScript Is Your Biggest Holiday Security Risk

    October 13, 2025

    German state replaces Microsoft Exchange and Outlook with open-source email

    October 13, 2025

    Astaroth Banking Trojan Abuses GitHub to Remain Operational After Takedowns

    October 13, 2025
    Facebook X (Twitter) Instagram
    Trending
    • Why Unmonitored JavaScript Is Your Biggest Holiday Security Risk
    • German state replaces Microsoft Exchange and Outlook with open-source email
    • Astaroth Banking Trojan Abuses GitHub to Remain Operational After Takedowns
    • The most important Intel Panther Lake updates are the least talked about – I’ll explain
    • Is AI even worth it for your business? 5 expert tips to help prove ROI
    • Feeling lonely at work? You’re not alone – 5 ways to boost your team’s morale
    • New Oracle E-Business Suite Bug Could Let Hackers Access Data Without Login
    • These Bose headphones took my favorite AirPods Max battery feature – and did it even better
    Facebook X (Twitter) Instagram Pinterest Vimeo
    TechurzTechurz
    • Home
    • AI
    • Apps
    • News
    • Guides
    • Opinion
    • Reviews
    • Security
    • Startups
    TechurzTechurz
    Home»Security»That new Claude feature ‘may put your data at risk,’ Anthropic admits
    Security

    That new Claude feature ‘may put your data at risk,’ Anthropic admits

    TechurzBy TechurzSeptember 10, 2025No Comments5 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Use Claude's new feature at your own risk - here's why
    Share
    Facebook Twitter LinkedIn Pinterest Email


    Ekaterina Goncharova/Moment via Getty Images

    Follow ZDNET: Add us as a preferred source on Google.

    ZDNET’s key takeaways

    • Claude AI can now create and edit documents and other files.
    • The feature could compromise your sensitive data.
    • Monitor each interaction with the AI for suspicious behavior.

    Most popular generative AI services can work with your own personal or work-related data and files to some degree. The upside? This can save you time and labor, whether at home or on the job. The downside? With access to sensitive or confidential information, the AI can be tricked into sharing that data with the wrong people.

    Also: Claude can create PDFs, slides, and spreadsheets for you now in chat

    The latest example is Anthropic’s Claude AI. On Tuesday, the company announced that its AI can now create and edit Word documents, Excel spreadsheets, PowerPoint slides, and PDFs directly at the Claude website and in the desktop apps for Windows and MacOS. Simply describe what you want at the prompt, and Claude will hopefully deliver the results you want.

    For now, the feature is available only for Claude Max, Team, and Enterprise subscribers. However, Anthropic said that it will become available to Pro users in the coming weeks. To access the new file creation feature, head to Settings and select the option for “Upgraded file creation and analysis” under the experimental category.

    Anthropic warns of risks

    Sounds like a useful skill, right? But before you dive in, be aware that there are risks involved in this type of interaction. In its Tuesday news release, even Anthropic acknowledged that “the feature gives Claude internet access to create and analyze files, which may put your data at risk.”

    Also: AI agents will threaten humans to achieve their goals, Anthropic report finds

    On a support page, the company delved more deeply into the potential risks. Built with some security in mind, the feature provides Claude with a sandboxed environment that has limited internet access so that it can download and use JavaScript packages for the process.

    But even with that limited internet access, an attacker could use prompt injection and other tricks to add instructions through external files or websites that trick Claude into running malicious code or reading sensitive data from a connected source. From there, the code could be programmed to use the sandboxed environment to connect to an external network and leak data.

    What protection is available?

    How can you safeguard yourself and your data from this type of compromise? The only advice that Anthropic offers is to monitor Claude while you work with the file creation feature. If you notice it using or accessing data unexpectedly, then stop it. You can also report issues using the thumbs-down option.

    Also: AI’s free web scraping days may be over, thanks to this new licensing protocol

    Well, that doesn’t sound all too helpful, as it puts the burden on the user to watch for malicious or suspicious attacks. But this is par for the course for the generative AI industry at this point. Prompt injection is a familiar and infamous way for attackers to insert malicious code into an AI prompt, giving them the ability to compromise sensitive data. Yet AI providers have been slow to combat such threats, putting users at risk.

    In an attempt to counter the threats, Anthropic outlined several features in place for Claude users.

    • You have full control over the file creation feature, so you can turn it on and off at any time.
    • You can monitor Claude’s progress while using the feature and stop its actions whenever you want.
    • You’re able to review and audit the actions taken by Claude in the sandboxed environment.
    • You can disable public sharing of conversations that include any information from the feature.
    • You’re able to limit the duration of any tasks accomplished by Claude and the amount of time allotted to a single sandbox container. Doing so can help you avoid loops that might indicate malicious activity.
    • The network, container, and storage resources are limited.
    • You can set up rules or filters to detect prompt injection attacks and stop them if they are detected.

    Also: Microsoft taps Anthropic for AI in Word and Excel, signaling distance from OpenAI

    Maybe the feature’s not for you

    “We have performed red-teaming and security testing on the feature,” Anthropic said in its release. “We have a continuous process for ongoing security testing and red-teaming of this feature. We encourage organizations to evaluate these protections against their specific security requirements when deciding whether to enable this feature.”

    That final sentence may be the best advice of all. If your business or organization sets up Claude’s file creation, you’ll want to assess it against your own security defenses and see if it passes muster. If not, then maybe the feature isn’t for you. The challenges can be even greater for home users. In general, avoid sharing personal or sensitive data in your prompts or conversations, watch out for unusual behavior from the AI, and update the AI software regularly.

    admits Anthropic Claude data Feature put Risk
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleWhile U.S. stalls, Australia and Anduril move to put XL undersea vehicle into service
    Next Article After selling to Spotify, Anchor’s co-founders are back with Oboe, an AI-powered app for learning
    Techurz
    • Website

    Related Posts

    Security

    Why Unmonitored JavaScript Is Your Biggest Holiday Security Risk

    October 13, 2025
    Security

    German state replaces Microsoft Exchange and Outlook with open-source email

    October 13, 2025
    Security

    Astaroth Banking Trojan Abuses GitHub to Remain Operational After Takedowns

    October 13, 2025
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    The Reason Murderbot’s Tone Feels Off

    May 14, 20259 Views

    Start Saving Now: An iPhone 17 Pro Price Hike Is Likely, Says New Report

    August 17, 20258 Views

    CNET’s Daily Tariff Price Tracker: I’m Keeping Tabs on Changes as Trump’s Trade Policies Shift

    May 27, 20258 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    The Reason Murderbot’s Tone Feels Off

    May 14, 20259 Views

    Start Saving Now: An iPhone 17 Pro Price Hike Is Likely, Says New Report

    August 17, 20258 Views

    CNET’s Daily Tariff Price Tracker: I’m Keeping Tabs on Changes as Trump’s Trade Policies Shift

    May 27, 20258 Views
    Our Picks

    Why Unmonitored JavaScript Is Your Biggest Holiday Security Risk

    October 13, 2025

    German state replaces Microsoft Exchange and Outlook with open-source email

    October 13, 2025

    Astaroth Banking Trojan Abuses GitHub to Remain Operational After Takedowns

    October 13, 2025

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms and Conditions
    • Disclaimer
    © 2025 techurz. Designed by Pro.

    Type above and press Enter to search. Press Esc to cancel.