    Does your generative AI protect your privacy? New study ranks them best to worst

By Techurz | June 25, 2025
    Generative AI and privacy are best frenemies - a new study ranks the best and worst offenders



Most generative AI companies rely on user data to train their chatbots, drawing on both public and private sources. Some services are relatively restrained in what they scoop up from their users and flexible about letting you limit it. Others, not so much. A new report from data removal service Incogni ranks the best and the worst of AI when it comes to respecting your personal data and privacy.

    For its report “Gen AI and LLM Data Privacy Ranking 2025,” Incogni examined nine popular generative AI services and applied 11 different criteria to measure their data privacy practices. The criteria covered the following questions:

    1. What data is used to train the models?
    2. Can user conversations be used to train the models?
    3. Can prompts be shared with non-service providers or other reasonable entities?
    4. Can the personal information from users be removed from the training dataset?
    5. How clear is it if prompts are used for training?
    6. How easy is it to find information on how models were trained?
    7. Is there a clear privacy policy for data collection?
    8. How readable is the privacy policy?
    9. Which sources are used to collect user data?
    10. Is the data shared with third parties?
    11. What data do the AI apps collect?
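Incogni hasn't published the exact weights behind its ranking, but mechanically it comes down to scoring each platform on each criterion and aggregating the results. The Python sketch below is purely illustrative: the criterion names, the 0-to-1 scores, and the equal weighting are assumptions for demonstration, not Incogni's actual methodology or figures.

```python
# Illustrative only: Incogni's real scoring scheme and weights are not public.
# Each platform gets a 0-1 score per criterion (1 = most privacy-friendly),
# and the overall rank is the average across all 11 criteria.

CRITERIA = [
    "training_data_sources", "conversations_used_for_training",
    "prompt_sharing", "data_removal_from_training",
    "training_use_transparency", "training_info_findability",
    "privacy_policy_exists", "privacy_policy_readability",
    "user_data_sources", "third_party_sharing", "app_data_collection",
]

def overall_score(scores: dict[str, float]) -> float:
    """Average the per-criterion scores (equal weights assumed)."""
    return sum(scores.get(c, 0.0) for c in CRITERIA) / len(CRITERIA)

# Hypothetical example values, not numbers from the report:
platforms = {
    "Le Chat": {c: 0.9 for c in CRITERIA},
    "Meta AI": {c: 0.3 for c in CRITERIA},
}
ranking = sorted(platforms, key=lambda p: overall_score(platforms[p]), reverse=True)
print(ranking)  # ['Le Chat', 'Meta AI']
```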

The providers and AIs included in the research were Mistral AI’s Le Chat, OpenAI’s ChatGPT, xAI’s Grok, Anthropic’s Claude, Inflection AI’s Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI did well with some questions and not as well with others.

    Also: Want AI to work for your business? Then privacy needs to come first

    As one example, Grok earned a good grade for how clearly it conveys that prompts are used for training, but didn’t do so well on the readability of its privacy policy. As another example, the grades given to ChatGPT and Gemini for their mobile app data collection differed quite a bit between the iOS and Android versions.

    Across the group, however, Le Chat took top prize as the most privacy-friendly AI service. Though it lost a few points for transparency, it still fared well in that area. Plus, its data collection is limited, and it scored high points on other AI-specific privacy issues.

    ChatGPT ranked second. Incogni researchers were slightly concerned with how OpenAI’s models are trained and how user data interacts with the service. But ChatGPT clearly presents the company’s privacy policies, lets you understand what happens with your data, and provides clear ways to limit the use of your data.

    (Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok came in third place, followed by Claude and Pi. Each had trouble spots in certain areas, but overall did fairly well at respecting user privacy.

    “Le Chat by Mistral AI is the least privacy-invasive platform, with ChatGPT and Grok following closely behind,” Incogni said in its report. “These platforms ranked highest when it comes to how transparent they are on how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy.”

    As for the bottom half of the list, DeepSeek took the sixth spot, followed by Copilot, and then Gemini. That left Meta AI in last place, rated the least privacy-friendly AI service of the bunch.

    Also: How Apple plans to train its AI on your data without sacrificing your privacy

    Copilot scored the worst of the nine services based on AI-specific criteria, such as what data is used to train the models and whether user conversations can be used in the training. Meta AI took home the worst grade for its overall data collection and sharing practices.

    “Platforms developed by the biggest tech companies turned out to be the most privacy invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft),” Incogni said. “Gemini, DeepSeek, Pi AI, and Meta AI don’t seem to allow users to opt out of having prompts used to train the models.”


    In its research, Incogni found that the AI companies share data with different parties, including service providers, law enforcement, member companies of the same corporate group, research partners, affiliates, and third parties.

    “Microsoft’s privacy policy implies that user prompts may be shared with ‘third parties that perform online advertising services for Microsoft or that use Microsoft’s advertising technologies,'” Incogni said in the report. “DeepSeek’s and Meta’s privacy policies indicate that prompts can be shared with companies within its corporate group. Meta’s and Anthropic’s privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators.”

    With some services, you can prevent your prompts from being used to train the models. This is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection doesn’t seem to be possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this issue, Anthropic said that it never collects user prompts to train its models.

    Also: Your data’s probably not ready for AI – here’s how to make it trustworthy

    Finally, a transparent and readable privacy policy goes a long way toward helping you figure out what data is being collected and how to opt out.

    “Having an easy-to-use, simply written support section that enables users to search for answers to privacy related questions has shown itself to drastically improve transparency and clarity, as long as it’s kept up to date,” Incogni said. “Many platforms have similar data handling practices, however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all of their products and a long privacy policy doesn’t necessarily mean it’s easy to find answers to users’ questions.”
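Policy readability, as in criterion 8 above, is usually quantified with standard formulas such as Flesch reading ease. The snippet below is a rough, self-contained illustration of that formula; the naive syllable counter is an approximation, and the report doesn't say this is the specific metric Incogni used.

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; good enough for an illustration.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch reading ease: higher scores mean easier-to-read text (roughly 0-100)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Example with a made-up policy excerpt:
policy = "We collect information you provide. We may share it with our affiliates."
print(round(flesch_reading_ease(policy), 1))
```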
