    Open-source MCPEval makes protocol-level agent testing plug-and-play

    By Techurz | July 24, 2025

    Enterprises are beginning to adopt the Model Context Protocol (MCP) primarily to help agents identify and use the right tools. Researchers from Salesforce, however, have found another use for the technology: evaluating the AI agents themselves.

    The researchers unveiled MCPEval, a new method and open-source toolkit built on the MCP architecture that tests agent performance when using tools. They noted that current evaluation methods for agents are limited because they “often relied on static, pre-defined tasks, thus failing to capture the interactive real-world agentic workflows.”

    “MCPEval goes beyond traditional success/failure metrics by systematically collecting detailed task trajectories and protocol interaction data, creating unprecedented visibility into agent behavior and generating valuable datasets for iterative improvement,” the researchers said in the paper. “Additionally, because both task creation and verification are fully automated, the resulting high-quality trajectories can be immediately leveraged for rapid fine-tuning and continual improvement of agent models. The comprehensive evaluation reports generated by MCPEval also provide actionable insights towards the correctness of agent-platform communication at a granular level.”

    MCPEval differentiates itself by being a fully automated process, which the researchers claimed allows for rapid evaluation of new MCP tools and servers. It gathers information on how agents interact with tools within an MCP server, generates synthetic data, and creates a database for benchmarking agents. Users can choose which MCP servers, and which tools within those servers, to test the agent’s performance on.
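    To make that concrete, the sketch below shows one way such a benchmark record could be structured, pairing a generated task with the ground-truth tool calls that solve it and the trajectory the agent actually produced. The class and field names are illustrative assumptions, not MCPEval’s actual schema.

```python
# Illustrative only; not MCPEval's actual data model.
from dataclasses import dataclass, field


@dataclass
class ToolCall:
    tool_name: str        # name of a tool exposed by the MCP server
    arguments: dict       # arguments the agent passed to that tool


@dataclass
class TaskRecord:
    description: str                      # natural-language task generated by an LLM
    ground_truth_calls: list[ToolCall]    # verified calls that complete the task
    observed_calls: list[ToolCall] = field(default_factory=list)  # what the agent did
    success: bool = False                 # whether observed calls matched ground truth
```

    Records like this are roughly what the quoted “task trajectories and protocol interaction data” would boil down to, and a collection of them can double as both a benchmark and a fine-tuning dataset.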

    Shelby Heinecke, senior AI research manager at Salesforce and one of the paper’s authors, told VentureBeat that it is challenging to obtain accurate data on agent performance, particularly for agents in domain-specific roles. 

    “We’ve gotten to the point where if you look across the tech industry, a lot of us have figured out how to deploy them. We now need to figure out how to evaluate them properly,” Heinecke said. “MCP is a very new idea, a very new paradigm. So, it’s great that agents are gonna have access to tools, but we again need to evaluate the agents on those tools. That’s exactly what MCPEval is all about.”

    How it works

    MCPEval’s framework follows a task generation, verification, and model evaluation design. Because it works with multiple large language models (LLMs), users can choose models they are already familiar with, and agents can be evaluated against a variety of LLMs available on the market.

    Enterprises can access MCPEval through an open-source toolkit released by Salesforce. Through a dashboard, users configure the server by selecting a model, which then automatically generates tasks for the agent to follow within the chosen MCP server. 

    Once the user verifies the tasks, MCPEval determines the tool calls needed to complete them, which serve as the ground truth for the test. Users then choose which model they prefer to run the evaluation with, and MCPEval generates a report on how well the agent and the test model functioned in accessing and using these tools.
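    Put together, the loop the toolkit automates looks roughly like the sketch below. This is a simplification written under assumptions: the callables passed in stand for the LLM-driven task generation, verification, and agent-execution steps described above, and none of the names are actual MCPEval APIs.

```python
# Hypothetical simplification of the workflow described above; the callables
# stand in for LLM-driven steps and are not part of the MCPEval toolkit.
from typing import Any, Callable


def evaluate_agent(
    generate_tasks: Callable[[], list[dict[str, Any]]],   # tasks with ground-truth tool calls
    verify_task: Callable[[dict[str, Any]], bool],        # user/LLM verification step
    run_agent: Callable[[str], list[dict[str, Any]]],     # agent's tool calls for a task
) -> dict[str, Any]:
    """Score an agent by comparing its tool calls against verified ground truth."""
    # 1. Generate candidate tasks and keep only the verified ones.
    tasks = [t for t in generate_tasks() if verify_task(t)]

    # 2. Run the agent on each task against the same MCP server and check
    #    whether its tool calls match the ground-truth sequence exactly.
    exact_matches = 0
    for task in tasks:
        observed = run_agent(task["description"])
        if observed == task["ground_truth_calls"]:
            exact_matches += 1

    # 3. Aggregate into a simple report.
    return {
        "tasks_evaluated": len(tasks),
        "exact_match_rate": exact_matches / max(len(tasks), 1),
    }
```

    A real run would also need to score partial matches and argument correctness, which is where the granular, protocol-level reporting the researchers describe comes in.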

    MCPEval not only gathers data to benchmark agents, Heinecke said, but can also identify gaps in agent performance. The information gleaned from evaluating agents through MCPEval serves not only to test performance but also to train the agents for future use.

    “We see MCPEval growing into a one-stop shop for evaluating and fixing your agents,” Heinecke said. 

    She added that what makes MCPEval stand out from other agent evaluators is that it brings the testing to the same environment in which the agent will be working. Agents are evaluated on how well they access tools within the MCP server to which they will likely be deployed. 
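    Grounding an evaluation in that environment starts with connecting to the target MCP server and seeing exactly which tools it exposes. The minimal sketch below assumes the official MCP Python SDK’s stdio client; the server command and script name are placeholders, not anything from MCPEval.

```python
# Minimal sketch: connect to the same MCP server the agent will use and list
# the tools it exposes. Assumes the official `mcp` Python SDK; the server
# command and script name below are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def list_server_tools() -> None:
    server = StdioServerParameters(command="python", args=["my_mcp_server.py"])
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.list_tools()
            for tool in result.tools:
                print(tool.name, "-", tool.description)


if __name__ == "__main__":
    asyncio.run(list_server_tools())
```

    Task generation and scoring then run against this same tool inventory, which is what keeps the benchmark aligned with the agent’s eventual deployment environment.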

    The paper noted that in experiments, GPT-4 models often provided the best evaluation results. 

    Evaluating agent performance

    The need for enterprises to test and monitor agent performance has led to a boom in frameworks and techniques. Some platforms offer testing along with several methods for evaluating both short-term and long-term agent performance.

    AI agents perform tasks on behalf of users, often without the need for a human to prompt them. So far, agents have proven useful, but they can get overwhelmed by the sheer number of tools at their disposal.

    Galileo, a startup, offers a framework that enables enterprises to assess the quality of an agent’s tool selection and identify errors. Salesforce launched capabilities on its Agentforce dashboard to test agents. Researchers from Singapore Management University released AgentSpec to achieve and monitor agent reliability. Several academic studies on MCP evaluation have also been published, including MCP-Radar and MCPWorld.

    MCP-Radar, developed by researchers from the University of Massachusetts Amherst and Xi’an Jiaotong University, focuses on more general domain skills, such as software engineering or mathematics. This framework prioritizes efficiency and parameter accuracy. 

    On the other hand, MCPWorld from Beijing University of Posts and Telecommunications brings benchmarking to graphical user interfaces, APIs, and other computer-use agents.

    Heinecke said ultimately, how agents are evaluated will depend on the company and the use case. However, what is crucial is that enterprises select the most suitable evaluation framework for their specific needs. For enterprises, she suggested considering a domain-specific framework to thoroughly test how agents function in real-world scenarios.

    “There’s value in each of these evaluation frameworks, and these are great starting points as they give some early signal to how strong the agent is,” Heinecke said. “But I think the most important evaluation is your domain-specific evaluation and coming up with evaluation data that reflects the environment in which the agent is going to be operating in.”
