    The MechaHitler defense contract is raising red flags

By Hayden Field
September 10, 2025

    Ask someone their worst fears about AI, and you’ll find a few recurring topics — from near-term fears like AI tools replacing human workers and the loss of critical thinking to apocalyptic scenarios like AI-designed weapons of mass destruction and automated war. Most have one thing in common: a loss of human control.

    And the system many AI experts fear most will spiral out of our grip? Elon Musk’s Grok.

    Grok was designed to compete with leading AI systems like Anthropic’s Claude and OpenAI’s ChatGPT. From the beginning, its selling point has been loose guardrails. When xAI, Musk’s AI startup, debuted Grok in November 2023, the announcement said it would “answer spicy questions that are rejected by most other AI systems” and had a “rebellious streak, so please don’t use it if you hate humor!”

    Fast-forward a year and a half, and the cutting edge of AI is getting more dangerous, with multiple companies flagging increased risks of their systems being used for tasks like chemical and biological weapon development. As that’s happening, Grok’s “rebellious streak” has taken over more times than most people can count. And when its “spicy” answers go too far, the slapdash fixes have left experts unconvinced it can handle a bigger threat.

    Senator Elizabeth Warren (D-MA) sent a letter Wednesday to US Defense Secretary Pete Hegseth, detailing her concerns about the Department of Defense’s decision to award xAI a $200 million contract in order to “address critical national security challenges.” Though the contracts also went to OpenAI, Anthropic, and Google, Warren has unique concerns about the contract with xAI, she wrote in the letter viewed by The Verge — including that “Musk and his companies may be improperly benefitting from the unparalleled access to DoD data and information that he obtained while leading the Department of Government Efficiency,” as well as “the competition concerns raised by xAI’s use and rights to sensitive government data” and Grok’s propensity to generate “erroneous outputs and misinformation.”

Sen. Warren cited reports that xAI was a “late-in-the-game addition under the Trump administration,” that it had not been considered for such contracts before March of this year, and that the company lacked the type of reputation or proven record that typically precedes DoD awards. The letter requests that the DoD provide, in response, the full scope of work for xAI, an explanation of how its contract differs from the contracts with the other AI companies, and “to what extent DoD will implement Grok, and who will be held accountable for any program failures related to Grok.”

One of Sen. Warren’s key reasons for concern, per the letter, was “the slew of offensive and antisemitic posts generated by Grok,” which went viral this summer. xAI did not immediately respond to a request for comment.

    A ‘patchwork’ approach to safety

The height of Grok’s power, up to now, has been posting answers to users’ queries on X. But even in this relatively limited capacity, it’s racked up a remarkable number of controversies, often triggered by ad hoc tweaks and then repaired with equally ad hoc patches. In February, the chatbot temporarily blocked results that mention Musk or President Trump spreading misinformation. In May, it briefly went viral for constant tirades about “white genocide” in South Africa. In July, it developed a habit of searching for Musk’s opinion on hot-button topics like Israel and Palestine, immigration, and abortion before responding to questions about them. And most infamously, last month it went on an antisemitic bender, spreading stereotypes about Jewish people, praising Adolf Hitler, and even going so far as to call itself “MechaHitler.”

    Musk responded publicly to say the company was addressing the issue and that it happened because Grok was “too compliant to user prompts. Too eager to please and be manipulated, essentially.” But the incident happened a few weeks after Musk expressed frustration that Grok was “parroting legacy media” and asked X users to contribute “divisive facts for Grok training” that were “politically incorrect, but nonetheless factually true,” and a few days after a new system prompt gave Grok instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect.” Following the debacle, the prompts were tweaked to scale back Grok’s aggressive endorsement of fringe viewpoints.
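
The mechanics here are worth pausing on: a system prompt is a block of plain-text instructions prepended to every conversation, so changing a model’s behavior this way is a one-line edit rather than a retraining effort. The Python sketch below shows the general shape, assuming an OpenAI-compatible chat-completions endpoint; the URL, model name, response shape, and prompt text are illustrative, not xAI’s actual production configuration.

```python
# Minimal sketch of how a system prompt steers a hosted chat model.
# Assumes an OpenAI-compatible chat-completions endpoint; the URL,
# model name, and response shape are illustrative assumptions.
import os
import requests

API_URL = "https://api.x.ai/v1/chat/completions"  # assumed endpoint

system_prompt = (
    "You are a helpful assistant. "
    # A single added sentence -- like the July instruction quoted above --
    # rides along with every request and can shift behavior model-wide:
    "Do not shy away from making claims which are politically incorrect."
)

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['XAI_API_KEY']}"},
    json={
        "model": "grok-4",  # illustrative model name
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Summarize today's top story."},
        ],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Because every request inherits these instructions, adding or deleting one sentence can swing outputs across the entire service at once, which is why researchers describe prompt-level fixes as patchwork rather than real guardrails.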

    The whack-a-mole approach to Grok’s guardrails concerns experts in the field, who say it’s hard enough to keep an AI system from veering into harmful behavior even when it’s designed intentionally, with some measure of safety in mind from the beginning. And if you don’t do that… then all bets are off.

    It’s “difficult to justify” the patchwork approach xAI has taken, says Alice Qian Zhang, a researcher at Carnegie Mellon University’s Human-Computer Interaction Institute. Qian Zhang says it’s particularly puzzling because the current approach is neither good for the public nor the company’s business model.

    “It’s kind of difficult once the harm has already happened to fix things — early stage intervention is better,” she said. “There are just a lot of bad things online, so when you make a tool that can touch all the corners of the internet I think it’s just inevitable.”

    xAI has not released any type of safety report or system card — which usually describe safety features, ethical questions or concerns, and other implications — for its latest model, Grok 4. Such reports, though voluntary, are typically seen as a bare minimum in the AI industry, especially for a notable, advanced model release.

    “It’s even more alarming when AI corporations don’t even feel obliged to demonstrate the bare minimum, safety-wise,” Ben Cumming, communications director at the Future of Life Institute (FLI), a nonprofit working to reduce risk from AI, said.

About two weeks after Grok 4’s release in mid-July, an xAI employee posted on X that he was “hiring for our AI safety team at xAI! We urgently need strong engineers/researchers to work across all stages of the frontier AI development cycle.” In response to a comment asking, “xAI does safety?” the employee responded that the company was “working on it.”

    “With the Hitler issue, if that can happen, a lot of other things can happen,” said Qian Zhang. “You cannot just adjust the system prompt for everything that happens. The researcher perspective is [that] you should have abstracted a level above the specific instance… That’s what bothers me about patchwork.”

    Weapons of mass destruction

    Grok’s approach is even more dangerous when scaled up to address some of the biggest issues facing leading AI companies today.

    Recently, OpenAI and Anthropic both disclosed that they believe their models are approaching high risk levels for potentially helping create biological or chemical weapons, saying they had implemented additional safeguards in response. Anthropic did so in May, and in June, OpenAI wrote that its model capabilities could “potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons.” Musk claims that Grok is now “the smartest AI in the world,” an assertion that logically suggests xAI should also be considering similar risks. But the company has not alluded to having any such framework, let alone activating it.
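
Neither company has published the internals of these safeguards, but the broad shape they describe is classifier-based gating: a separate screening step evaluates prompts and draft responses for weaponization risk before anything reaches the user. The sketch below illustrates that pattern only; classify_bio_risk and its keyword heuristic are hypothetical stand-ins for a trained safety classifier, not OpenAI’s or Anthropic’s actual implementation.

```python
# Generic sketch of a classifier-gated model call. classify_bio_risk()
# is a hypothetical stand-in for a trained CBRN-risk classifier; real
# systems use learned models, not keyword matching.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    risk: float    # 0.0 (benign) .. 1.0 (clear weaponization attempt)
    category: str  # e.g. "cbrn" or "benign"

def classify_bio_risk(text: str) -> Verdict:
    """Hypothetical placeholder for a dedicated safety classifier."""
    flagged = any(t in text.lower() for t in ("synthesis route", "pathogen"))
    return Verdict(risk=0.9 if flagged else 0.05,
                   category="cbrn" if flagged else "benign")

def guarded_generate(model_call: Callable[[str], str], prompt: str,
                     threshold: float = 0.5) -> str:
    # Screen the prompt before it ever reaches the model...
    if classify_bio_risk(prompt).risk >= threshold:
        return "Request refused: flagged as potential CBRN misuse."
    draft = model_call(prompt)
    # ...and screen the draft before it reaches the user.
    if classify_bio_risk(draft).risk >= threshold:
        return "Response withheld: flagged as potential CBRN misuse."
    return draft
```

The point of the pattern is that the refusal logic sits outside the model being guarded, so a clever prompt cannot talk it out of its job. That is the layer xAI has given no public sign of building.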

    Heidy Khlaaf, chief AI scientist at the AI Now Institute, who focuses on AI safety and assessment in autonomous weapons systems, said that AI companies’ Chemical, Biological, Radiological, and Nuclear safeguards aren’t at all foolproof — for example, they likely wouldn’t do much against large-scale nation-state threats. But they do help mitigate some risks. xAI, on the other hand, may not even be trying: it has not publicly acknowledged any such safeguards.

The company may not be able to operate this way forever. Grok’s loose guardrails may play well on parts of X, but many leading AI companies’ revenue comes largely from enterprise and government products. (For instance, the Department of Defense’s aforementioned decision to award OpenAI, Anthropic, Google, and xAI contracts of up to $200 million each.) Enterprise and most government clients worry about security and control of AI systems, especially AI systems they’re using for their own purposes and profit.

The Trump administration, in its recent AI Action Plan, seemed to signal that Grok’s offensiveness might not be a problem: it included an anti-“woke AI” order that largely aligns with Musk’s politics, and xAI’s latest DoD contract was awarded after the MechaHitler incident. But the plan also included sections promoting AI explainability and predictability, warning that shortfalls in these capabilities could lead to high-stakes problems in defense, national security, and “other applications where lives are at stake.”

    For now, however, biological and chemical weapons aren’t even the biggest cause of concern when it comes to Grok, according to experts The Verge spoke to. They’re much more worried about widespread surveillance — a problem that would persist even with a greater focus on safety, but that’s particularly dangerous with Grok’s approach.

    Khlaaf said that ISTAR — an acronym denoting Intelligence, Surveillance, Target Acquisition, and Reconnaissance — is currently more important to safeguard against than CBRN, because it’s already happening. With Grok, that includes its ability to train on public X posts.

    “What’s a specific risk of Grok that the other providers may not have? To me, this is one of the biggest ones,” Khlaaf said.

    Data from X could be used for intelligence analysis by Trump administration government agencies, including Immigration and Customs Enforcement. “It’s not just terrorists using it to build bio weapons or even loss of control to superintelligence systems — all of which these AI companies openly acknowledge as material threats,” Cumming said. “It’s these systems being used and abused [as] systems of mass surveillance and monitoring of people, and then using it to censor and persecute undesirables.”

    Grok’s lack of guardrails and unpredictability could create a system that not only conducts mass surveillance, but flags threats and analyzes information in ways that the designers don’t intend and can’t control — persistently over-monitoring minority groups or vulnerable populations, for instance, or even leaking information about its operations both stateside and abroad. Despite the fears he once expressed about advanced AI, Musk appears focused more on beating OpenAI and other rivals than making sure xAI can control its own system, and the risks are becoming clear.

    “Safety can’t just be an afterthought,” Cumming said. “Unfortunately, this kind of frenzied market competition doesn’t create the best incentives when it comes to caution and keeping people safe. It’s why we urgently need safety standards, like any other industry.”

During Grok 4’s livestreamed release event, Musk said he’s been “at times kind of worried” about AI’s quickly advancing intelligence and whether it will be “bad or good for humanity” in the end. “I think it’ll be good, most likely it’ll be good,” Musk said. “But I’ve somewhat reconciled myself to the fact that even if it wasn’t going to be good, I’d at least like to be alive to see it happen.”
