Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    OpenAI co-founder Greg Brockman takes charge of product strategy

    May 17, 2026

    Marketing operating system Nectar Social raises $30M Series A led by Menlo

    May 17, 2026

    The haves and have nots of the AI gold rush

    May 17, 2026
    Facebook X (Twitter) Instagram
    Tech Pulse
    • OpenAI co-founder Greg Brockman takes charge of product strategy
    • Marketing operating system Nectar Social raises $30M Series A led by Menlo
    • The haves and have nots of the AI gold rush
    • Meridian Ventures launched $35M fund to back MBA-deferred founders
    • Lovable just backed a company that’s looking to bring vibe coding to hardware
    X (Twitter) Pinterest YouTube LinkedIn WhatsApp
    Techurz
    • Home
    • AI Systems
    • Cyber Reality
    • Future Tech
    • Disruption Lab
    • Signals
    • Tech Pulse
    Techurz
    Home - Cyber Reality - That new Claude feature ‘may put your data at risk,’ Anthropic admits
    Cyber Reality

    That new Claude feature ‘may put your data at risk,’ Anthropic admits

    TechurzBy TechurzSeptember 10, 2025Updated:May 10, 2026No Comments5 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    Use Claude's new feature at your own risk - here's why
    Share
    Facebook Twitter LinkedIn Pinterest Email


    Ekaterina Goncharova/Moment via Getty Images

    Follow ZDNET: Add us as a preferred source on Google.

    ZDNET’s key takeaways

    • Claude AI can now create and edit documents and other files.
    • The feature could compromise your sensitive data.
    • Monitor each interaction with the AI for suspicious behavior.

    Most popular generative AI services can work with your own personal or work-related data and files to some degree. The upside? This can save you time and labor, whether at home or on the job. The downside? With access to sensitive or confidential information, the AI can be tricked into sharing that data with the wrong people.

    Also: Claude can create PDFs, slides, and spreadsheets for you now in chat

    The latest example is Anthropic’s Claude AI. On Tuesday, the company announced that its AI can now create and edit Word documents, Excel spreadsheets, PowerPoint slides, and PDFs directly at the Claude website and in the desktop apps for Windows and MacOS. Simply describe what you want at the prompt, and Claude will hopefully deliver the results you want.

    For now, the feature is available only for Claude Max, Team, and Enterprise subscribers. However, Anthropic said that it will become available to Pro users in the coming weeks. To access the new file creation feature, head to Settings and select the option for “Upgraded file creation and analysis” under the experimental category.

    Anthropic warns of risks

    Sounds like a useful skill, right? But before you dive in, be aware that there are risks involved in this type of interaction. In its Tuesday news release, even Anthropic acknowledged that “the feature gives Claude internet access to create and analyze files, which may put your data at risk.”

    Also: AI agents will threaten humans to achieve their goals, Anthropic report finds

    On a support page, the company delved more deeply into the potential risks. Built with some security in mind, the feature provides Claude with a sandboxed environment that has limited internet access so that it can download and use JavaScript packages for the process.

    But even with that limited internet access, an attacker could use prompt injection and other tricks to add instructions through external files or websites that trick Claude into running malicious code or reading sensitive data from a connected source. From there, the code could be programmed to use the sandboxed environment to connect to an external network and leak data.

    What protection is available?

    How can you safeguard yourself and your data from this type of compromise? The only advice that Anthropic offers is to monitor Claude while you work with the file creation feature. If you notice it using or accessing data unexpectedly, then stop it. You can also report issues using the thumbs-down option.

    Also: AI’s free web scraping days may be over, thanks to this new licensing protocol

    Well, that doesn’t sound all too helpful, as it puts the burden on the user to watch for malicious or suspicious attacks. But this is par for the course for the generative AI industry at this point. Prompt injection is a familiar and infamous way for attackers to insert malicious code into an AI prompt, giving them the ability to compromise sensitive data. Yet AI providers have been slow to combat such threats, putting users at risk.

    In an attempt to counter the threats, Anthropic outlined several features in place for Claude users.

    • You have full control over the file creation feature, so you can turn it on and off at any time.
    • You can monitor Claude’s progress while using the feature and stop its actions whenever you want.
    • You’re able to review and audit the actions taken by Claude in the sandboxed environment.
    • You can disable public sharing of conversations that include any information from the feature.
    • You’re able to limit the duration of any tasks accomplished by Claude and the amount of time allotted to a single sandbox container. Doing so can help you avoid loops that might indicate malicious activity.
    • The network, container, and storage resources are limited.
    • You can set up rules or filters to detect prompt injection attacks and stop them if they are detected.

    Also: Microsoft taps Anthropic for AI in Word and Excel, signaling distance from OpenAI

    Maybe the feature’s not for you

    “We have performed red-teaming and security testing on the feature,” Anthropic said in its release. “We have a continuous process for ongoing security testing and red-teaming of this feature. We encourage organizations to evaluate these protections against their specific security requirements when deciding whether to enable this feature.”

    That final sentence may be the best advice of all. If your business or organization sets up Claude’s file creation, you’ll want to assess it against your own security defenses and see if it passes muster. If not, then maybe the feature isn’t for you. The challenges can be even greater for home users. In general, avoid sharing personal or sensitive data in your prompts or conversations, watch out for unusual behavior from the AI, and update the AI software regularly.

    admits Anthropic Claude data Feature put Risk
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous ArticleWhile U.S. stalls, Australia and Anduril move to put XL undersea vehicle into service
    Next Article After selling to Spotify, Anchor’s co-founders are back with Oboe, an AI-powered app for learning
    Techurz
    • Website

    Related Posts

    Opinion

    Clio’s $500M milestone arrives just as Anthropic ups the ante

    May 14, 2026
    Opinion

    Korea’s biggest manufacturers back Config, the TSMC of robot data

    May 11, 2026
    Opinion

    Altara secures $7M to bridge the data gap that’s slowing down physical sciences

    May 6, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    College social app Fizz expands into grocery delivery

    September 3, 20252,288 Views

    A Former Apple Luminary Sets Out to Create the Ultimate GPU Software

    September 25, 202516 Views

    The Reason Murderbot’s Tone Feels Off

    May 14, 202512 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    College social app Fizz expands into grocery delivery

    September 3, 20252,288 Views

    A Former Apple Luminary Sets Out to Create the Ultimate GPU Software

    September 25, 202516 Views

    The Reason Murderbot’s Tone Feels Off

    May 14, 202512 Views
    Our Picks

    OpenAI co-founder Greg Brockman takes charge of product strategy

    May 17, 2026

    Marketing operating system Nectar Social raises $30M Series A led by Menlo

    May 17, 2026

    The haves and have nots of the AI gold rush

    May 17, 2026

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms and Conditions
    • Disclaimer
    © 2026 techurz. Designed by Pro.

    Type above and press Enter to search. Press Esc to cancel.