    OpenAI and Anthropic are getting cozy with government. What could possibly go wrong?

By Techurz | June 10, 2025


While private enterprise is adopting AI rapidly across its workflows, government isn’t far behind. The U.K. government has said early trials of AI-powered productivity tools can shave two weeks of labor off a year’s work, and AI companies are adapting to meet that demand. More than 1,700 AI use cases have been recorded in the U.S. government, long before Elon Musk’s DOGE entered the equation and accelerated AI adoption throughout the public sector. Federal policies on AI adoption and procurement introduced in April have pushed the trend further.

    It’s unsurprising that big tech companies are rolling out their own specialist models to meet that demand. Anthropic, the maker of the Claude chatbot, announced last week a series of models tailored for use by government employees. These include features such as the ability to handle classified materials and understand some of the bureaucratic language that plagues official documents. Anthropic has said its models are already deployed by agencies “at the highest level of U.S. national security, and access to these models is limited to those who operate in such classified environments.”

The announcement follows a similar one by OpenAI, the maker of ChatGPT, which released its own government-tailored AI models in January to “streamline government agencies’ access to OpenAI’s frontier models.”

    But AI experts worry about governments becoming overly reliant on AI models, which can hallucinate information, inherit biases that discriminate against certain groups at scale, or steer policy in misguided directions. They also express concern over governments being locked into specific providers, who may later increase prices that taxpayers would be left to fund.

    “I worry about governments using this kind of technology and relying on tech companies, and in particular, tech companies who have proven to be quite untrustworthy,” says Carissa Véliz, who researches AI ethics at the University of Oxford. She points out that the generative AI revolution so far, sparked by the November 2022 release of ChatGPT, has seen governments scrambling to retrofit rules and regulations in areas such as copyright to accommodate tech companies after they’ve bent those rules. “It just shows a power relationship there that doesn’t look good for government,” says Véliz. “Government is supposed to be the legislator, the one making the rules and enforcing the rules.”

    Beyond those moral concerns, she also worries about the financial stakes involved. “There’s just a sheer dependency on a company that has financial interests, that is based in a different country, in a situation in which geopolitics is getting quite complicated,” says Véliz, explaining why countries outside the United States might hesitate to sign on to use ClaudeGov or ChatGPT Gov. It’s the same argument the U.S. uses about overreliance on TikTok, which has Chinese ties, amid fears that figures like Donald Trump could pressure U.S.-based firms to act in politically motivated ways.

OpenAI didn’t respond to Fast Company’s request for comment. A spokesperson for Anthropic says the company is committed to transparency, citing published work on model risks, a detailed system card, and collaborations with the U.S. and U.K. governments to test AI systems.

    Some fear that AI companies are securing “those big DoD bucks,” as programmer Ashe Dryden put it on Mastodon, and could perpetuate that revenue by fostering dependency on their specific models. The rollout of these models reflects broader shifts in the tech landscape that increasingly tie government, national security, and technology together. For example, defense tech firm Anduril recently raised $5 billion in a new funding round that values the company at over $30 billion.

    Others have argued that the release of these government-specific models by AI companies “isn’t [about] national security. This is narrative laundering,” as one LinkedIn commenter put it. The idea is that these moves echo the norms already set by big government rather than challenging them, potentially reinforcing existing issues.

    “I’ve always been a sceptic of a single supplier for IT services, and this is no exception,” says Andres Guadamuz, an AI researcher at the University of Sussex. Guadamuz believes the development of government-specific AI models is still in its early phase, and urges decision-makers to pause before signing deals. “Governments should keep their options open,” he says. “Particularly with a crowded AI market, large entities such as the government can have a better negotiating position.”

