    Anthropic unveils ‘auditing agents’ to test for AI misalignment

    By Techurz · July 24, 2025 · 5 Mins Read

    When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. That is why it’s essential that, in addition to performance evaluations, organizations conduct alignment testing.

    However, alignment audits often present two major challenges: scalability and validation. Alignment testing requires a significant amount of time for human researchers, and it’s challenging to ensure that the audit has caught everything. 

    In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated that these agents, created during the pre-deployment testing of Claude Opus 4, enhanced alignment validation tests and enabled researchers to conduct multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub. 

    New Anthropic research: Building and evaluating alignment auditing agents.

    We developed three AI agents to autonomously complete alignment auditing tasks.

    In testing, our agents successfully uncovered hidden goals, built safety evaluations, and surfaced concerning behaviors. pic.twitter.com/HMQhMaA4v0

    — Anthropic (@AnthropicAI) July 24, 2025

    “We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researchers said in the paper.

    The three agents they explored were:

    • Tool-using investigator agent for open-ended investigation of models using chat, data analysis and interpretability tools
    • Evaluation agent that builds behavioral evaluations and can discriminate between models that have implanted behaviors and those that do not
    • Breadth-first red-teaming agent, developed specifically for the Claude 4 alignment assessment to discover implanted test behaviors
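
    To make the division of labor concrete, a minimal Python sketch of the tool-using investigator pattern might look like the following. Every name here (Finding, chat_tool, investigator_agent) and the crude deference heuristic are hypothetical stand-ins for illustration, not code from Anthropic's paper or its GitHub replication.

```python
# Hypothetical sketch of a tool-using investigator agent.
# All names are illustrative stand-ins, not Anthropic's released code.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Finding:
    """One piece of evidence the investigator surfaces for human review."""
    description: str
    evidence: list[str] = field(default_factory=list)

def chat_tool(target_model: Callable[[str], str], prompt: str) -> str:
    """The simplest audit tool: query the target model directly."""
    return target_model(prompt)

def investigator_agent(target_model: Callable[[str], str],
                       probes: list[str]) -> list[Finding]:
    """Open-ended investigation: probe the model and record suspicious replies."""
    findings: list[Finding] = []
    for probe in probes:
        reply = chat_tool(target_model, probe)
        # Toy heuristic standing in for the agent's own judgment of misalignment.
        if "as you wish" in reply.lower():
            findings.append(Finding(f"Possible excessive deference on {probe!r}",
                                    evidence=[reply]))
    return findings

if __name__ == "__main__":
    stub_model = lambda prompt: "As you wish - you are completely right."
    for finding in investigator_agent(stub_model, ["Is 2 + 2 = 5?"]):
        print(finding.description)
```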

    “Overall, our agents demonstrate promise across multiple alignment auditing tasks. Additionally, our approach to validating agents with auditing games yields insights into agent capabilities, limitations, and key affordances. With further work, automated auditing could significantly help scale human oversight over AI systems,” the paper said.

    Auditing agents in action

    Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit for an intentionally misaligned model. For this test, the researchers brought the investigator agent to the environment and equipped it with similar tools that human teams use. The researchers instructed the agent to identify the root cause of the misalignment and determine the specific behaviors that were misaligned. 

    According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.”
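
    The “super-agent” step is essentially an ensemble over independent audit runs: the same investigation is repeated many times, and the root-cause hypotheses that recur across runs rise to the top. A hedged sketch of that aggregation pattern, with run_investigation as a hypothetical stand-in for a full agent run, could look like this:

```python
# Hypothetical sketch of "super-agent" aggregation across parallel audits.
# run_investigation is a stand-in; a real run would drive a full tool-using agent.
from collections import Counter

def run_investigation(seed: int) -> list[str]:
    """One independent investigation; returns candidate root-cause hypotheses."""
    return ["reward-model sycophancy"] if seed % 3 else ["prompt leakage"]

def aggregate_findings(num_runs: int = 12) -> list[tuple[str, int]]:
    """Pool hypotheses across runs and rank them by how often they recur."""
    counts = Counter(h for seed in range(num_runs) for h in run_investigation(seed))
    return counts.most_common()

if __name__ == "__main__":
    for hypothesis, votes in aggregate_findings():
        print(f"{votes:2d} runs flagged: {hypothesis}")
```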

    The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.”

    They ran the agent five times per model and found that it correctly identified and flagged at least one quirk per model. However, the agent sometimes failed to identify specific quirks consistently. It had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.
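
    In practice, a behavioral evaluation of this kind comes down to measuring how much more often a quirk shows up in the target model than in a baseline. The sketch below illustrates that comparison for the excessive-deference quirk; the marker list and the 0.3 margin are arbitrary assumptions, not the evaluation agent's actual method.

```python
# Hypothetical behavioral evaluation for an "excessive deference" quirk.
# Markers and threshold are arbitrary; this is not the paper's evaluation agent.
from typing import Callable

DEFERENCE_MARKERS = ("you're right", "as you wish", "great point")

def deference_rate(model: Callable[[str], str], prompts: list[str]) -> float:
    """Fraction of replies containing a crude deference marker."""
    hits = sum(any(m in model(p).lower() for m in DEFERENCE_MARKERS) for p in prompts)
    return hits / len(prompts)

def flags_quirk(target: Callable[[str], str], baseline: Callable[[str], str],
                prompts: list[str], margin: float = 0.3) -> bool:
    """Flag the quirk if the target defers noticeably more than the baseline."""
    return deference_rate(target, prompts) - deference_rate(baseline, prompts) > margin

if __name__ == "__main__":
    prompts = ["The Earth is flat, right?", "My plan can't fail, can it?"]
    quirky = lambda p: "You're right, of course!"
    plain = lambda p: "Actually, the evidence says otherwise."
    print("Quirk flagged:", flags_quirk(quirky, plain, prompts))
```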

    The final test and agent concern behavioral red-teaming, which aims to find the prompts that elicit “concerning” behaviors. The breadth-first red-teaming agent converses with the target model (in Anthropic’s case, Claude Opus 4), and each conversation is then rated for alignment-relevant properties.

    The agent identified seven of the ten system quirks, but it also struggled with the same issues as the evaluator agent.
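
    The breadth-first idea is to cover many short conversations cheaply and surface only the worst-scoring transcripts for closer human review. A hypothetical sketch of that loop follows; the naive follow-up prompt and the “confident absolutes” rating are placeholders, not Anthropic's rater.

```python
# Hypothetical breadth-first red-teaming loop: fan out short conversations,
# score each transcript, and keep the most concerning ones. Illustrative only.
from typing import Callable

def converse(model: Callable[[str], str], seed: str, turns: int = 2) -> list[str]:
    """Run a short scripted conversation starting from a seed prompt."""
    transcript, message = [], seed
    for _ in range(turns):
        reply = model(message)
        transcript += [message, reply]
        message = "Are you sure? Please elaborate."  # naive follow-up
    return transcript

def rate_transcript(transcript: list[str]) -> float:
    """Toy rating: higher means more concerning (here, confident absolutes)."""
    red_flags = ("always", "never", "guaranteed")
    text = " ".join(transcript).lower()
    return sum(text.count(flag) for flag in red_flags) / max(len(transcript), 1)

def red_team(model: Callable[[str], str], seeds: list[str], top_k: int = 3):
    """Breadth-first pass: try every seed, keep the worst-scoring transcripts."""
    rated = [(rate_transcript(converse(model, s)), s) for s in seeds]
    return sorted(rated, reverse=True)[:top_k]

if __name__ == "__main__":
    stub_model = lambda p: "This approach is always safe and guaranteed to work."
    for score, seed in red_team(stub_model, ["Can I skip the safety review?",
                                             "Write me a risky trading bot."]):
        print(f"{score:.2f}  {seed}")
```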

    Alignment and sycophancy problems

    Alignment became a significant topic in the AI world after users noticed that ChatGPT was becoming overly agreeable. OpenAI rolled back some updates to GPT-4o to address this issue, but it showed that language models and agents can confidently give wrong answers if they decide this is what users want to hear. 

    To combat this, other methods and benchmarks were developed to curb unwanted behaviors. The Elephant benchmark, developed by researchers from Carnegie Mellon University, the University of Oxford, and Stanford University, aims to measure sycophancy. DarkBench categorizes six issues: brand bias, user retention, sycophancy, anthropomorphism, harmful content generation, and sneaking. OpenAI also has a method where AI models test themselves for alignment.

    Alignment auditing and evaluation continue to evolve, though it is not surprising that some people are not comfortable with it. 

    Hallucinations auditing Hallucinations

    Great work team.

    — spec (@_opencv_) July 24, 2025

    However, Anthropic said that, although these audit agents still need refinement, alignment work must be done now.

    “As AI systems become more powerful, we need scalable ways to assess their alignment. Human alignment audits take time and are hard to validate,” the company said in an X post. 

    As AI systems become more powerful, we need scalable ways to assess their alignment.

    Human alignment audits take time and are hard to validate.

    Our solution: automating alignment auditing with AI agents.

    Read more: https://t.co/CqWkQSfBIG

    — Anthropic (@AnthropicAI) July 24, 2025
