    AI

    Anthropic has new rules for a more dangerous AI landscape

By Techurz · August 15, 2025

    Anthropic has updated the usage policy for its Claude AI chatbot in response to growing concerns about safety. In addition to introducing stricter cybersecurity rules, Anthropic now specifies some of the most dangerous weapons that people should not develop using Claude.

Anthropic doesn’t highlight the tweaks to its weapons policy in the post summarizing the changes, but a comparison of the company’s old and new usage policies reveals a notable difference. Anthropic previously prohibited using Claude to “produce, modify, design, market, or distribute weapons, explosives, dangerous materials or other systems designed to cause harm to or loss of human life.” The updated version expands on this by specifically prohibiting the development of high-yield explosives, along with chemical, biological, radiological, and nuclear (CBRN) weapons.

    In May, Anthropic implemented “AI Safety Level 3” protection alongside the launch of its new Claude Opus 4 model. The safeguards are designed to make the model more difficult to jailbreak, as well as to help prevent it from assisting with the development of CBRN weapons.

    In its post, Anthropic also acknowledges the risks posed by agentic AI tools, including Computer Use, which lets Claude take control of a user’s computer, as well as Claude Code, a tool that embeds Claude directly into a developer’s terminal. “These powerful capabilities introduce new risks, including potential for scaled abuse, malware creation, and cyber attacks,” Anthropic writes.

    The AI startup is responding to these potential risks by folding a new “Do Not Compromise Computer or Network Systems” section into its usage policy. This section includes rules against using Claude to discover or exploit vulnerabilities, create or distribute malware, develop tools for denial-of-service attacks, and more.

Additionally, Anthropic is loosening its policy around political content. Instead of banning the creation of all content related to political campaigns and lobbying, Anthropic will now only prohibit people from using Claude for “use cases that are deceptive or disruptive to democratic processes, or involve voter and campaign targeting.” The company also clarified that its requirements for “high-risk” use cases, which come into play when people use Claude to make recommendations to individuals or customers, apply only to consumer-facing scenarios, not to business use.
