    Stop guessing why your LLMs break: Anthropic’s new tool shows you exactly what goes wrong

By Techurz · June 5, 2025 · 5 Mins Read

Large language models (LLMs) are transforming how enterprises operate, but their “black box” nature often leaves teams grappling with unpredictability. Addressing this critical challenge, Anthropic recently open-sourced its circuit tracing tool, allowing developers and researchers to directly understand and control models’ inner workings.

The tool lets researchers investigate unexplained errors and unexpected behaviors in open-weight models. It can also help with granular fine-tuning of LLMs for specific internal functions.

    Understanding the AI’s inner logic

The circuit tracing tool is based on “mechanistic interpretability,” a burgeoning field dedicated to understanding how AI models function through their internal activations rather than merely observing their inputs and outputs.

While Anthropic’s initial research on circuit tracing applied this methodology to its own Claude 3.5 Haiku model, the open-sourced tool extends the capability to open-weight models. Anthropic’s team has already used the tool to trace circuits in models like Gemma-2-2b and Llama-3.2-1b and has released a Colab notebook that demonstrates how to use the library on open models.

    The core of the tool lies in generating attribution graphs, causal maps that trace the interactions between features as the model processes information and generates an output. (Features are internal activation patterns of the model that can be roughly mapped to understandable concepts.) It is like obtaining a detailed wiring diagram of an AI’s internal thought process. More importantly, the tool enables “intervention experiments,” allowing researchers to directly modify these internal features and observe how changes in the AI’s internal states impact its external responses, making it possible to debug models.
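The intervention idea can be sketched in a few lines. The toy example below is not Anthropic's tool or API: it clamps one hidden "feature" of a tiny two-layer network to a fixed value and measures how the output shifts, the same patch-and-observe logic that circuit tracing applies to learned transformer features.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden "features"
W2 = rng.normal(size=(8, 2))   # hidden "features" -> output

def forward(x, clamp=None):
    """Run the toy model; optionally clamp one hidden feature to a value."""
    h = np.maximum(x @ W1, 0.0)      # ReLU hidden activations
    if clamp is not None:
        idx, value = clamp
        h = h.copy()
        h[:, idx] = value            # the intervention
    return h @ W2

x = rng.normal(size=(1, 4))
baseline = forward(x)
patched = forward(x, clamp=(3, 5.0))

# Causal effect of forcing feature 3 to a fixed value:
effect = float(np.abs(patched - baseline).sum())
print(f"output change from clamping feature 3: {effect:.4f}")
```

A nonzero effect means the clamped feature causally contributes to the output for this input; sweeping the clamp value or the feature index is the debugging loop the real tool enables on model-scale features.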

    The tool integrates with Neuronpedia, an open platform for understanding and experimentation with neural networks. 

    Circuit tracing on Neuronpedia (source: Anthropic blog)

    Practicalities and future impact for enterprise AI

    While Anthropic’s circuit tracing tool is a great step toward explainable and controllable AI, it has practical challenges, including high memory costs associated with running the tool and the inherent complexity of interpreting the detailed attribution graphs.

However, these challenges are typical of cutting-edge research. Mechanistic interpretability is a fast-growing field, and most major AI labs are developing methods to investigate the inner workings of large language models. By open-sourcing the circuit tracing tool, Anthropic enables the community to build interpretability tools that are more scalable, automated, and accessible to a wider array of users, opening the way for practical applications of all the effort going into understanding LLMs.

    As the tooling matures, the ability to understand why an LLM makes a certain decision can translate into practical benefits for enterprises. 

    Circuit tracing explains how LLMs perform sophisticated multi-step reasoning. For example, in their study, the researchers were able to trace how a model inferred “Texas” from “Dallas” before arriving at “Austin” as the capital. It also revealed advanced planning mechanisms, like a model pre-selecting rhyming words in a poem to guide line composition. Enterprises can use these insights to analyze how their models tackle complex tasks like data analysis or legal reasoning. Pinpointing internal planning or reasoning steps allows for targeted optimization, improving efficiency and accuracy in complex business processes.

    Source: Anthropic
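The multi-step path above can be pictured as a small causal graph. The node names and attribution weights below are invented for illustration; the real tool derives them from model activations.

```python
# Edges map (source, target) -> attribution strength (illustrative values).
graph = {
    ("token: Dallas", "feature: Texas"): 0.9,
    ("token: Dallas", "feature: city"): 0.4,
    ("feature: Texas", "feature: state capital"): 0.8,
    ("feature: city", "feature: state capital"): 0.2,
    ("feature: state capital", "output: Austin"): 0.95,
}

def strongest_path(graph, start, goal, path=None, strength=1.0):
    """Depth-first search for the path with the largest product of edge weights."""
    path = (path or []) + [start]
    if start == goal:
        return strength, path
    best = (0.0, [])
    for (src, dst), w in graph.items():
        if src == start and dst not in path:
            candidate = strongest_path(graph, dst, goal, path, strength * w)
            if candidate[0] > best[0]:
                best = candidate
    return best

strength, path = strongest_path(graph, "token: Dallas", "output: Austin")
print(" -> ".join(path), f"(strength {strength:.3f})")
```

Reading off the strongest chain (here Dallas to Texas to "state capital" to Austin) is the kind of intermediate-step pinpointing the article describes for optimizing complex reasoning tasks.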

Furthermore, circuit tracing offers better clarity into numerical operations. In their study, the researchers uncovered how models handle arithmetic, like 36+59=95, not through simple algorithms but via parallel pathways and “lookup table” features for digits. Enterprises can use such insights to audit the internal computations leading to numerical results, identify the origin of errors and implement targeted fixes to ensure data integrity and calculation accuracy within their open-source LLMs.
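The finding can be mimicked with an explicit toy adder: one pathway memorizes the ones digit of every digit pair (the "lookup table"), a parallel pathway tracks only coarse magnitude, and the readout combines the two. The real circuits are learned features, not code like this.

```python
# "Lookup table" pathway: memorized ones digit for every digit pair.
ones_lookup = {(a, b): (a + b) % 10 for a in range(10) for b in range(10)}

def toy_add(x, y):
    # Precise, local pathway: look up the ones digit of the sum.
    ones = ones_lookup[(x % 10, y % 10)]
    # Coarse, global pathway: a magnitude feature that only resolves the
    # sum to its tens band (standing in for a learned approximation).
    magnitude = ((x + y) // 10) * 10
    # The readout combines both pathways into the final answer.
    return magnitude + ones

print(toy_add(36, 59))  # 95
```

The point of the decomposition is auditability: an error in the final number can be attributed to either the digit-lookup pathway or the magnitude pathway, which is the kind of targeted fault isolation the article describes.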

    For global deployments, the tool provides insights into multilingual consistency. Anthropic’s previous research shows that models employ both language-specific and abstract, language-independent “universal mental language” circuits, with larger models demonstrating greater generalization. This can potentially help debug localization challenges when deploying models across different languages.

    Finally, the tool can help combat hallucinations and improve factual grounding. The research revealed that models have “default refusal circuits” for unknown queries, which are suppressed by “known answer” features. Hallucinations can occur when this inhibitory circuit “misfires.” 

    Source: Anthropic
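The described gating can be caricatured as a default that only a sufficiently strong "known answer" signal overrides; a misfire is the case where that signal activates without knowledge to back it. All names, values and thresholds here are invented for illustration.

```python
def respond(known_answer_activation, can_actually_answer, threshold=0.5):
    """Toy gate: refusal is the default unless a 'known answer' feature suppresses it."""
    refusal_active = known_answer_activation < threshold   # default refusal circuit
    if refusal_active:
        return "I don't know."
    # Refusal suppressed: output a real answer if one exists, else confabulate.
    return "answer" if can_actually_answer else "hallucination"

print(respond(0.1, False))  # default refusal fires: "I don't know."
print(respond(0.9, True))   # correctly suppressed: "answer"
print(respond(0.9, False))  # misfire: "hallucination"
```

The third case is the failure mode the research identifies: the hallucination arises not from the answer pathway itself but from the suppression signal firing when it should not.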

    Beyond debugging existing issues, this mechanistic understanding unlocks new avenues for fine-tuning LLMs. Instead of merely adjusting output behavior through trial and error, enterprises can identify and target the specific internal mechanisms driving desired or undesired traits. For instance, understanding how a model’s “Assistant persona” inadvertently incorporates hidden reward model biases, as shown in Anthropic’s research, allows developers to precisely re-tune the internal circuits responsible for alignment, leading to more robust and ethically consistent AI deployments.
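One way such targeted re-tuning could look in practice is activation steering: if analysis isolates a direction in activation space associated with an unwanted trait, that component can be projected out of the hidden state at inference time. The vectors below are synthetic stand-ins for directions a real interpretability analysis would supply.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = rng.normal(size=(8,))          # a hidden activation vector
bias_direction = rng.normal(size=(8,))  # direction tied to the unwanted trait
bias_direction /= np.linalg.norm(bias_direction)

# Remove the component of the hidden state along the unwanted direction.
steered = hidden - (hidden @ bias_direction) * bias_direction

print(f"alignment before: {hidden @ bias_direction:.4f}")
print(f"alignment after:  {steered @ bias_direction:.4f}")
```

Unlike output-level fine-tuning, the edit here touches only the identified mechanism, leaving the rest of the hidden state, and hence unrelated behaviors, unchanged.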

As LLMs integrate into critical enterprise functions, their transparency, interpretability and control become paramount. This new generation of tools can help bridge the gap between AI’s powerful capabilities and human understanding, building foundational trust and ensuring that enterprises can deploy AI systems that are reliable, auditable and aligned with their strategic objectives.
