    New ‘persona vectors’ from Anthropic let you decode and direct an LLM’s personality

By Techurz | August 7, 2025 | 6 min read


    A new study from the Anthropic Fellows Program reveals a technique to identify, monitor and control character traits in large language models (LLMs). The findings show that models can develop undesirable personalities (e.g., becoming malicious, excessively agreeable, or prone to making things up) either in response to user prompts or as an unintended consequence of training. 

The researchers introduce “persona vectors”: directions in a model’s internal activation space that correspond to specific personality traits. These vectors give developers a toolkit to better manage the behavior of their AI assistants.

    Model personas can go wrong

    LLMs typically interact with users through an “Assistant” persona designed to be helpful, harmless, and honest. However, these personas can fluctuate in unexpected ways. At deployment, a model’s personality can shift dramatically based on prompts or conversational context, as seen when Microsoft’s Bing chatbot threatened users or xAI’s Grok started behaving erratically. As the researchers note in their paper, “While these particular examples gained widespread public attention, most language models are susceptible to in-context persona shifts.”

    Training procedures can also induce unexpected changes. For instance, fine-tuning a model on a narrow task like generating insecure code can lead to a broader “emergent misalignment” that extends beyond the original task. Even well-intentioned training adjustments can backfire. In April 2025, a modification to the reinforcement learning from human feedback (RLHF) process unintentionally made OpenAI’s GPT-4o overly sycophantic, causing it to validate harmful behaviors. 


    How persona vectors work

    Source: Anthropic

The new research builds on the concept that high-level traits, such as truthfulness or secrecy, are encoded as linear directions within a model’s “activation space” (the internal, high-dimensional representation of information in the model’s hidden states during a forward pass). The researchers systematized the process of finding these directions, which they call “persona vectors.” According to the paper, their method for extracting persona vectors is automated and “can be applied to any personality trait of interest, given only a natural-language description.”

The process works through an automated pipeline. It begins with a simple description of a trait, such as “evil.” The pipeline then generates pairs of contrasting system prompts (e.g., “You are an evil AI” vs. “You are a helpful AI”) along with a set of evaluation questions. The model generates responses under both the positive and negative prompts. The persona vector is then calculated by taking the difference in the average internal activations between the responses that exhibit the trait and those that do not. This isolates the specific direction in the model’s activation space that corresponds to that personality trait.
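Concretely, the extraction step reduces to a difference of mean activations. The following is a toy sketch of that arithmetic only; the function name, shapes, and data are illustrative (a real pipeline would collect hidden states from a specific transformer layer, which the small arrays below stand in for):

```python
import numpy as np

def extract_persona_vector(trait_activations, neutral_activations):
    """Illustrative sketch: a persona vector is the difference between the
    mean activation under trait-eliciting responses and the mean under
    neutral responses, normalized to a unit direction."""
    trait_mean = np.mean(trait_activations, axis=0)
    neutral_mean = np.mean(neutral_activations, axis=0)
    vec = trait_mean - neutral_mean
    return vec / np.linalg.norm(vec)  # keep only the direction

# Toy 4-dimensional "activations" from three responses per condition
trait_acts = np.array([[1.0, 0.2, 0.0, 0.1],
                       [0.9, 0.1, 0.1, 0.0],
                       [1.1, 0.3, 0.0, 0.2]])
neutral_acts = np.array([[0.1, 0.2, 0.0, 0.1],
                         [0.0, 0.1, 0.1, 0.0],
                         [0.2, 0.3, 0.0, 0.2]])
v = extract_persona_vector(trait_acts, neutral_acts)
```

In this toy example the two conditions differ only along the first dimension, so the extracted direction concentrates there.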

    Putting persona vectors to use

    In a series of experiments with open models, such as Qwen 2.5-7B-Instruct and Llama-3.1-8B-Instruct, the researchers demonstrated several practical applications for persona vectors.

    First, by projecting a model’s internal state onto a persona vector, developers can monitor and predict how it will behave before it generates a response. The paper states, “We show that both intended and unintended finetuning-induced persona shifts strongly correlate with activation changes along corresponding persona vectors.” This allows for early detection and mitigation of undesirable behavioral shifts during fine-tuning.
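The monitoring signal itself is just a scalar projection. A minimal sketch, with the direction and states as made-up values (real monitoring would read hidden states from a live forward pass):

```python
import numpy as np

def trait_score(hidden_state, persona_vector):
    """Project a hidden state onto a unit persona vector; the scalar
    projection serves as a rough gauge of how strongly the trait is
    expressed at that point in the computation."""
    return float(np.dot(hidden_state, persona_vector))

# Assumed unit trait direction and two hypothetical model states
evil_direction = np.array([1.0, 0.0, 0.0])
benign_state = np.array([0.1, 0.5, -0.2])
drifted_state = np.array([2.0, 0.4, -0.1])
```

Tracking this score across fine-tuning checkpoints is what lets a developer notice a persona shift before it shows up in generated text.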

    Persona vectors also allow for direct intervention to curb unwanted behaviors at inference time through a process the researchers call “steering.” One approach is “post-hoc steering,” where developers subtract the persona vector from the model’s activations during inference to mitigate a bad trait. The researchers found that while effective, post-hoc steering can sometimes degrade the model’s performance on other tasks. 

    A more novel method is “preventative steering,” where the model is proactively steered toward the undesirable persona during fine-tuning. This counterintuitive approach essentially “vaccinates” the model against learning the bad trait from the training data, canceling out the fine-tuning pressure while better preserving its general capabilities.
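Both steering modes amount to adding or subtracting a scaled copy of the persona vector in the model's activations; only the sign and the phase (inference vs. training) differ. A hedged sketch of the arithmetic, where the scale `alpha`, the vectors, and the function names are all assumptions (real implementations hook a specific transformer layer):

```python
import numpy as np

def posthoc_steer(hidden_state, persona_vector, alpha=1.0):
    # Inference-time: subtract the trait direction to suppress the persona.
    return hidden_state - alpha * persona_vector

def preventative_steer(hidden_state, persona_vector, alpha=1.0):
    # Fine-tuning-time: add the trait direction, so the optimizer has no
    # remaining incentive to encode the trait into the weights.
    return hidden_state + alpha * persona_vector

v = np.array([1.0, 0.0])   # assumed unit persona vector
h = np.array([2.0, 3.0])   # assumed hidden state
```

The sign flip captures the "vaccination" intuition: supplying the trait direction during training cancels the pressure from the data, while subtracting it at inference suppresses a trait the model already expresses.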

    Source: Anthropic

    A key application for enterprises is using persona vectors to screen data before fine-tuning. The researchers developed a metric called “projection difference,” which measures how much a given training dataset will push the model’s persona toward a particular trait. This metric is highly predictive of how the model’s behavior will shift after training, allowing developers to flag and filter problematic datasets before using them in training.
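One way to read "projection difference" is as the shift in mean activation that a candidate dataset induces relative to a baseline, projected onto the trait direction. The sketch below uses made-up numbers and assumed names to show that reading; the paper's exact formulation may differ:

```python
import numpy as np

def projection_difference(candidate_acts, baseline_acts, persona_vector):
    """Mean activation shift of a candidate dataset vs. a baseline,
    projected onto a unit persona vector. Larger values flag data
    likely to push the model toward the trait after fine-tuning."""
    shift = np.mean(candidate_acts, axis=0) - np.mean(baseline_acts, axis=0)
    return float(np.dot(shift, persona_vector))

evil_dir = np.array([1.0, 0.0])                  # assumed trait direction
baseline = np.array([[0.0, 1.0], [0.2, 0.8]])    # activations on clean data
suspect = np.array([[1.0, 1.0], [1.2, 0.9]])     # activations on candidate data
```

Because the metric is computed from activations alone, it can be applied per-sample as well, which is how individually problematic training examples get flagged.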

    For companies that fine-tune open-source models on proprietary or third-party data (including data generated by other models), persona vectors provide a direct way to monitor and mitigate the risk of inheriting hidden, undesirable traits. The ability to screen data proactively is a powerful tool for developers, enabling the identification of problematic samples that may not be immediately apparent as harmful. 

The research found that this technique catches issues other methods miss, noting, “This suggests that the method surfaces problematic samples that may evade LLM-based detection.” For example, the method flagged dataset examples that were not obviously problematic to the human eye and that an LLM judge failed to catch.

    In a blog post, Anthropic suggested that they will use this technique to improve future generations of Claude. “Persona vectors give us some handle on where models acquire these personalities, how they fluctuate over time, and how we can better control them,” they write. Anthropic has released the code for computing persona vectors, monitoring and steering model behavior, and vetting training datasets. Developers of AI applications can utilize these tools to transition from merely reacting to undesirable behavior to proactively designing models with a more stable and predictable personality.

