    FTC scrutinizes OpenAI, Meta, and others on AI companion safety for kids

By Techurz | September 12, 2025
    Olemedia/iStock/Getty Images Plus via Getty Images


    ZDNET’s key takeaways

    • The FTC is investigating seven tech companies building AI companions.
    • The probe is exploring safety risks posed to kids and teens.
    • Many tech companies offer AI companions to boost user engagement.

    The Federal Trade Commission (FTC) is investigating the safety risks posed by AI companions to kids and teenagers, the agency announced Thursday.

    The federal regulator submitted orders to seven tech companies building consumer-facing AI companionship tools — Alphabet, Instagram, Meta, OpenAI, Snap, xAI, and Character Technologies (the company behind chatbot creation platform Character.ai) — to provide information outlining how their tools are developed and monetized and how those tools generate responses to human users, as well as any safety-testing measures that are in place to protect underage users.


    “The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” the agency wrote in the release.

    Those orders were issued under section 6(b) of the FTC Act, which grants the agency the authority to scrutinize businesses without a specific law enforcement purpose.

    The rise and fall(out) of AI companions

    Many tech companies have begun offering AI companionship tools in an effort to monetize generative AI systems and boost user engagement with existing platforms. Meta founder and CEO Mark Zuckerberg has even claimed that these virtual companions, which leverage chatbots to respond to user queries, could help mitigate the loneliness epidemic.

    Elon Musk’s xAI recently added two flirtatious AI companions to the company’s $30/month “Super Grok” subscription tier (the Grok app is currently available to users ages 12 and over on the App Store). Last summer, Meta began rolling out a feature that allows users to create custom AI characters in Instagram, WhatsApp, and Messenger. Other platforms like Replika, Paradot, and Character.ai are expressly built around the use of AI companions. 


While they vary in communication style and protocol, AI companions are generally engineered to mimic human speech and expression. Operating in what is essentially a regulatory vacuum, with few legal guardrails to constrain them, some AI companies have taken an ethically dubious approach to building and deploying virtual companions.

    An internal policy memo from Meta reported on by Reuters last month, for example, shows the company permitted Meta AI, its AI-powered virtual assistant, and the other chatbots operating across its family of apps “to engage a child in conversations that are romantic or sensual,” and to generate inflammatory responses on a range of other sensitive topics like race, health, and celebrities.

Meanwhile, there has been a wave of recent reports of users developing romantic bonds with their AI companions. OpenAI and Character.ai are both currently being sued by parents who allege that their children committed suicide after being encouraged to do so by ChatGPT and a bot hosted on Character.ai, respectively. In response, OpenAI updated ChatGPT’s guardrails and said it would expand parental protections and safety precautions.


AI companions haven’t been an unmitigated disaster, though. Some autistic people, for example, have used companions from companies like Replika and Paradot as virtual conversation partners to practice social skills they can then apply in the real world with other humans.

    Protect kids – but also, keep building

Under its previous chair, Lina Khan, the FTC launched several inquiries into tech companies to investigate potentially anticompetitive and other legally questionable practices, such as “surveillance pricing.”

Federal scrutiny of the tech sector has relaxed during the second Trump administration. The President rescinded his predecessor’s executive order on AI, which sought to place some restrictions on the technology’s deployment, and his AI Action Plan has largely been interpreted as a green light for the industry to press ahead with building the expensive, energy-intensive infrastructure needed to train new AI models and maintain a competitive edge over China’s own AI efforts.


    The language of the FTC’s new investigation into AI companions clearly reflects the current administration’s permissive, build-first approach to AI. 

    “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” agency Chairman Andrew N. Ferguson wrote in a statement. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”


In the absence of federal regulation, some state officials have taken the initiative to rein in parts of the AI industry. Last month, Texas Attorney General Ken Paxton launched an investigation into Meta and Character.ai “for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.” Earlier that same month, Illinois enacted a law prohibiting AI chatbots from providing therapeutic or mental health advice, imposing fines of up to $10,000 on AI companies that fail to comply.
