    New 1.5B router model achieves 93% accuracy without costly retraining

    By Techurz | July 8, 2025

    Researchers at Katanemo Labs have introduced Arch-Router, a new routing model and framework designed to intelligently map user queries to the most suitable large language model (LLM). 

    For enterprises building products that rely on multiple LLMs, Arch-Router aims to solve a key challenge: how to direct queries to the best model for the job without relying on rigid logic or costly retraining every time something changes.

    The challenges of LLM routing

    As the number of LLMs grows, developers are moving from single-model setups to multi-model systems that use the unique strengths of each model for specific tasks (e.g., code generation, text summarization, or image editing). 

    LLM routing has emerged as a key technique for building and deploying these systems, acting as a traffic controller that directs each user query to the most appropriate model.

    Existing routing methods generally fall into two categories: “task-based routing,” where queries are routed based on predefined tasks, and “performance-based routing,” which seeks an optimal balance between cost and performance.

    However, task-based routing struggles with unclear or shifting user intentions, particularly in multi-turn conversations. Performance-based routing, on the other hand, rigidly prioritizes benchmark scores, often neglecting real-world user preferences, and adapts poorly to new models unless it undergoes costly fine-tuning.

    More fundamentally, as the Katanemo Labs researchers note in their paper, “existing routing approaches have limitations in real-world use. They typically optimize for benchmark performance while neglecting human preferences driven by subjective evaluation criteria.” 

    The researchers highlight the need for routing systems that “align with subjective human preferences, offer more transparency, and remain easily adaptable as models and use cases evolve.”

    A new framework for preference-aligned routing

    To address these limitations, the researchers propose a “preference-aligned routing” framework that matches queries to routing policies based on user-defined preferences.

    In this framework, users define their routing policies in natural language using a “Domain-Action Taxonomy.” This is a two-level hierarchy that reflects how people naturally describe tasks, starting with a general topic (the Domain, such as “legal” or “finance”) and narrowing to a specific task (the Action, such as “summarization” or “code generation”). 

    Each of these policies is then linked to a preferred model, allowing developers to make routing decisions based on real-world needs rather than just benchmark scores. As the paper states, “This taxonomy serves as a mental model to help users define clear and structured routing policies.”
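    To make the taxonomy concrete, here is a minimal sketch of what such a policy set could look like. The policy names, descriptions, and model identifiers below are hypothetical illustrations, not Katanemo's actual configuration format.

```python
# Hypothetical routing policies following the Domain-Action Taxonomy.
# Names, descriptions, and model identifiers are illustrative only.
ROUTING_POLICIES = [
    {
        "name": "finance_summarization",   # Domain: finance, Action: summarization
        "description": "Summarize earnings reports, filings, and market updates.",
        "model": "claude-3-7-sonnet",
    },
    {
        "name": "legal_contract_review",   # Domain: legal, Action: contract review
        "description": "Answer questions about clauses in contracts and agreements.",
        "model": "gpt-4o",
    },
    {
        "name": "image_editing",           # Domain: creative, Action: image editing
        "description": "Edit, crop, or restyle user-supplied images.",
        "model": "gemini-2.5-pro",
    },
]
```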

    The routing process happens in two stages. First, a preference-aligned router model takes the user query and the full set of policies and selects the most appropriate policy. Second, a mapping function connects that selected policy to its designated LLM. 

    Because the model selection logic is separated from the policy, models can be added, removed, or swapped simply by editing the routing policies, without any need to retrain or modify the router itself. This decoupling provides the flexibility required for practical deployments, where models and use cases are constantly evolving.
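    A minimal sketch of that two-stage flow, assuming the hypothetical ROUTING_POLICIES list above, might look like the following. The select_policy function merely stands in for the router model (a trivial keyword match is used so the sketch runs), while the mapping step is a plain dictionary lookup, so swapping a model only means editing the policy list.

```python
def select_policy(query: str, policies: list[dict]) -> str:
    """Stage 1 (stand-in): return the name of the best-matching policy.
    In the real system this decision is made by the Arch-Router model;
    a keyword match is used here only so the sketch runs end to end."""
    for policy in policies:
        if any(word in query.lower() for word in policy["name"].split("_")):
            return policy["name"]
    return policies[0]["name"]  # fall back to the first policy

def route(query: str, policies: list[dict]) -> str:
    """Stage 2: map the selected policy to its designated LLM."""
    chosen = select_policy(query, policies)
    model_by_policy = {p["name"]: p["model"] for p in policies}
    return model_by_policy[chosen]

# Adding or swapping a model only touches ROUTING_POLICIES;
# neither function needs retraining or code changes.
print(route("Please summarize this quarterly finance report", ROUTING_POLICIES))
```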

    Figure: Preference-aligned routing framework (Source: arXiv)

    The policy selection is powered by Arch-Router, a compact 1.5B parameter language model fine-tuned for preference-aligned routing. Arch-Router receives the user query and the complete set of policy descriptions within its prompt. It then generates the identifier of the best-matching policy. 

    Since the policies are part of the input, the system can adapt to new or modified routes at inference time through in-context learning and without retraining. This generative approach allows Arch-Router to use its pre-trained knowledge to understand the semantics of both the query and the policies, and to process the entire conversation history at once.
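    The exact prompt template is not published in the article, so the sketch below only illustrates the general idea: every policy description is placed in context alongside the query, and the model is asked to emit nothing but a policy identifier.

```python
def build_router_prompt(query: str, policies: list[dict]) -> str:
    """Assemble an in-context prompt for a preference-aligned router:
    all policy descriptions are included in the input, so new or edited
    policies take effect immediately, without retraining the router."""
    policy_lines = "\n".join(f"- {p['name']}: {p['description']}" for p in policies)
    return (
        "Select the single policy that best matches the user query.\n"
        "Reply with the policy name only.\n\n"
        f"Policies:\n{policy_lines}\n\n"
        f"Query: {query}\n"
        "Policy:"
    )

# The router's completion is then just a short identifier such as
# "image_editing", which keeps the generated output small.
```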

    A common concern with including extensive policies in a prompt is the potential for increased latency. However, the researchers designed Arch-Router to be highly efficient. “While the length of routing policies can get long, we can easily increase the context window of Arch-Router with minimal impact on latency,” explains Salman Paracha, co-author of the paper and Founder/CEO of Katanemo Labs. He notes that latency is primarily driven by the length of the output, and for Arch-Router, the output is simply the short name of a routing policy, like “image_editing” or “document_creation.”

    Arch-Router in action

    To build Arch-Router, the researchers fine-tuned a 1.5B parameter version of the Qwen 2.5 model on a curated dataset of 43,000 examples. They then tested its performance against state-of-the-art proprietary models from OpenAI, Anthropic and Google on four public datasets designed to evaluate conversational AI systems.

    The results show that Arch-Router achieves the highest overall routing score of 93.17%, surpassing all other models, including top proprietary ones, by an average of 7.71%. The model’s advantage grew with longer conversations, demonstrating its strong ability to track context over multiple turns. 

    Figure: Arch-Router vs. other models (Source: arXiv)

    In practice, this approach is already being applied in several scenarios, according to Paracha. For example, in open-source coding tools, developers use Arch-Router to direct different stages of their workflow, such as “code design,” “code understanding,” and “code generation,” to the LLMs best suited for each task. Similarly, enterprises can route document creation requests to a model like Claude 3.7 Sonnet while sending image editing tasks to Gemini 2.5 Pro. 

    The system is also ideal “for personal assistants in various domains, where users have a diversity of tasks from text summarization to factoid queries,” Paracha said, adding that “in those cases, Arch-Router can help developers unify and improve the overall user experience.”

    This framework is integrated with Arch, Katanemo Labs’ AI-native proxy server for agents, which allows developers to implement sophisticated traffic-shaping rules. For instance, when integrating a new LLM, a team can send a small portion of traffic for a specific routing policy to the new model, verify its performance with internal metrics, and then fully transition traffic with confidence. The company is also working to integrate its tools with evaluation platforms to further streamline this process for enterprise developers.
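    That traffic-shaping step could be approximated with a simple weighted split per routing policy, as sketched below. The policy name, model identifiers, and the 95/5 split are assumptions for illustration, not Arch's actual configuration.

```python
import random

# Hypothetical canary split for one routing policy: most requests keep
# going to the incumbent model while a small share goes to the candidate.
CANARY_SPLIT = {
    "document_creation": [
        ("claude-3-7-sonnet", 0.95),   # incumbent model
        ("new-candidate-llm", 0.05),   # model under evaluation
    ],
}

def pick_model(policy_name: str) -> str:
    """Choose a backend model for the given policy according to its split."""
    candidates = CANARY_SPLIT[policy_name]
    models, weights = zip(*candidates)
    return random.choices(models, weights=weights, k=1)[0]
```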

    Ultimately, the goal is to move beyond siloed AI implementations. “Arch-Router—and Arch more broadly—helps developers and enterprises move from fragmented LLM implementations to a unified, policy-driven system,” says Paracha. “In scenarios where user tasks are diverse, our framework helps turn that task and LLM fragmentation into a unified experience, making the final product feel seamless to the end user.”
