
    How procedural memory can cut the cost and complexity of AI agents

By Techurz | August 27, 2025 | 7 min read

    A new technique from Zhejiang University and Alibaba Group gives large language model (LLM) agents a dynamic memory, making them more efficient and effective at complex tasks. The technique, called Memp, provides agents with a “procedural memory” that is continuously updated as they gain experience, much like how humans learn from practice.

    Memp creates a lifelong learning framework where agents don’t have to start from scratch for every new task. Instead, they become progressively better and more efficient as they encounter new situations in real-world environments, a key requirement for reliable enterprise automation.

    The case for procedural memory in AI agents

    LLM agents hold promise for automating complex, multi-step business processes. In practice, though, these long-horizon tasks can be fragile. The researchers point out that unpredictable events like network glitches, user interface changes or shifting data schemas can derail the entire process. For current agents, this often means starting over every time, which can be time-consuming and costly.

Meanwhile, many complex tasks, despite surface differences, share deep structural commonalities. Instead of relearning these patterns every time, the researchers argue, an agent should be able to extract and reuse its experience from past successes and failures. This requires a specific “procedural memory,” which in humans is the long-term memory responsible for skills that become automatic with practice, such as typing or riding a bike.


    Starting from scratch (top) vs using procedural memory (bottom) (source: arXiv)

    Current agent systems often lack this capability. Their procedural knowledge is typically hand-crafted by developers, stored in rigid prompt templates or embedded within the model’s parameters, which are expensive and slow to update. Even existing memory-augmented frameworks provide only coarse abstractions and don’t adequately address how skills should be built, indexed, corrected and eventually pruned over an agent’s lifecycle.

    Consequently, the researchers note in their paper, “there is no principled way to quantify how efficiently an agent evolves its procedural repertoire or to guarantee that new experiences improve rather than erode performance.”

    How Memp works

    Memp is a task-agnostic framework that treats procedural memory as a core component to be optimized. It consists of three key stages that work in a continuous loop: building, retrieving, and updating memory.
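
To make the loop concrete, here is a minimal, hypothetical sketch of how the three stages could fit together around an agent. The Trajectory and ProceduralMemory classes and the agent.solve() call are illustrative names for this article, not the authors' code.

```python
# Hypothetical sketch of a Memp-style loop: build, retrieve, update (illustrative names only).
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    task: str          # natural-language task description
    steps: list[str]   # actions the agent took
    success: bool      # outcome signal


@dataclass
class ProceduralMemory:
    entries: list[Trajectory] = field(default_factory=list)

    def build(self, trajectory: Trajectory) -> None:
        """Store a new experience."""
        self.entries.append(trajectory)

    def retrieve(self, task: str) -> Trajectory | None:
        """Return the most relevant past experience (naive keyword overlap here)."""
        if not self.entries:
            return None
        words = set(task.lower().split())
        return max(self.entries, key=lambda t: len(words & set(t.task.lower().split())))

    def update(self, trajectory: Trajectory) -> None:
        """Fold the latest outcome back into memory (simplest strategy: just add it)."""
        self.build(trajectory)


def run_task(agent, memory: ProceduralMemory, task: str) -> Trajectory:
    prior = memory.retrieve(task)          # retrieve: reuse the closest past experience, if any
    trajectory = agent.solve(task, prior)  # the agent (hypothetical interface) conditions on it
    memory.update(trajectory)              # update: memory evolves with the new outcome
    return trajectory
```

The interesting design choices sit inside retrieve and update, which the following paragraphs describe.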

Memories are built from an agent’s past experiences, or “trajectories.” The researchers explored storing these memories in two formats: as verbatim, step-by-step actions, or as higher-level, script-like abstractions distilled from those actions. For retrieval, the agent searches its memory for the most relevant past experience when given a new task. The team experimented with different methods, such as vector search to match the new task’s description against past queries, or keyword extraction to find the best fit.
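
As a rough illustration of the retrieval side, the sketch below stores procedures keyed by an embedding of the task description and returns the nearest ones by cosine similarity. The embed() function is a toy stand-in for a real embedding model, and none of these names come from the paper.

```python
# Illustrative vector-search retrieval over stored procedures (not the paper's code).
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model; here just a deterministic-per-text toy vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)


class MemoryStore:
    def __init__(self) -> None:
        self.keys: list[np.ndarray] = []   # embeddings of past task descriptions
        self.values: list[str] = []        # stored procedures: verbatim steps or distilled scripts

    def add(self, task: str, procedure: str) -> None:
        self.keys.append(embed(task))
        self.values.append(procedure)

    def retrieve(self, task: str, k: int = 1) -> list[str]:
        """Return the k stored procedures most similar to the new task, by cosine similarity."""
        if not self.keys:
            return []
        q = embed(task)
        sims = [float(q @ key / (np.linalg.norm(q) * np.linalg.norm(key))) for key in self.keys]
        top = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)[:k]
        return [self.values[i] for i in top]


store = MemoryStore()
store.add("book a 3-day trip to Paris", "1. search flights 2. filter by budget 3. reserve hotel ...")
print(store.retrieve("plan a weekend trip to Rome"))
```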

    The most critical component is the update mechanism. Memp introduces several strategies to ensure the agent’s memory evolves. As an agent completes more tasks, its memory can be updated by simply adding the new experience, filtering for only successful outcomes or, most effectively, reflecting on failures to correct and revise the original memory.
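
The three update strategies described above (append everything, keep only successes, or reflect on failures and revise the stored procedure) can be sketched roughly as follows; the reflect() callable standing in for an LLM reflection prompt is hypothetical.

```python
# Rough sketch of three memory-update strategies (illustrative, not the authors' implementation).
from dataclasses import dataclass
from typing import Callable


@dataclass
class Experience:
    task: str
    procedure: str   # the steps the agent followed
    success: bool


def update_append(memory: list[Experience], exp: Experience) -> None:
    """Strategy 1: add every new experience, successful or not."""
    memory.append(exp)


def update_filter_success(memory: list[Experience], exp: Experience) -> None:
    """Strategy 2: keep only experiences that ended in success."""
    if exp.success:
        memory.append(exp)


def update_reflect(memory: list[Experience], exp: Experience,
                   reflect: Callable[[str, str], str]) -> None:
    """Strategy 3: on failure, revise the matching stored procedure instead of discarding the attempt.
    `reflect` is a placeholder for an LLM call that rewrites a procedure given the failed attempt."""
    if exp.success:
        memory.append(exp)
        return
    for i, old in enumerate(memory):
        if old.task == exp.task:
            memory[i] = Experience(old.task, reflect(old.procedure, exp.procedure), old.success)
            return
    memory.append(exp)  # no prior procedure to revise; store the failure for later reflection
```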

    Memp framework (source: arXiv)

    This focus on dynamic, evolving memory places Memp within a growing field of research aimed at making AI agents more reliable for long-term tasks. The work parallels other efforts, such as Mem0, which consolidates key information from long conversations into structured facts and knowledge graphs to ensure consistency. Similarly, A-MEM enables agents to autonomously create and link “memory notes” from their interactions, forming a complex knowledge structure over time.

    However, co-author Runnan Fang highlights a critical distinction between Memp and other frameworks.

    “Mem0 and A-MEM are excellent works… but they focus on remembering salient content within a single trajectory or conversation,” Fang commented to VentureBeat. In essence, they help an agent remember “what” happened. “Memp, by contrast, targets cross-trajectory procedural memory.” It focuses on “how-to” knowledge that can be generalized across similar tasks, preventing the agent from re-exploring from scratch each time. 

“By distilling past successful workflows into reusable procedural priors, Memp raises success rates and shortens steps,” Fang added. “Crucially, we also introduce an update mechanism so that this procedural memory keeps improving — after all, practice makes perfect for agents too.”

    Overcoming the ‘cold-start’ problem

    While the concept of learning from past trajectories is powerful, it raises a practical question: How does an agent build its initial memory when there are no perfect examples to learn from? The researchers address this “cold-start” problem with a pragmatic approach.

Fang explained that developers can first define a robust evaluation metric instead of requiring a perfect “gold” trajectory upfront. This metric, which can be rule-based or even another LLM, scores the quality of an agent’s performance. “Once that metric is in place, we let state-of-the-art models explore within the agent workflow and retain the trajectories that achieve the highest scores,” Fang said. This process rapidly bootstraps an initial set of useful memories, allowing a new agent to get up to speed without extensive manual programming.
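
Under those assumptions, the bootstrapping step could look something like this sketch, where explore() and score() are placeholders for a real agent rollout and for the rule-based or LLM-based metric Fang describes.

```python
# Illustrative cold-start bootstrapping: keep only the highest-scoring exploratory trajectories.
from typing import Callable


def bootstrap_memory(
    tasks: list[str],
    explore: Callable[[str], str],       # runs a state-of-the-art model on a task, returns a trajectory
    score: Callable[[str, str], float],  # rule-based or LLM-based quality metric in [0, 1]
    attempts_per_task: int = 4,
    keep_threshold: float = 0.8,
) -> dict[str, str]:
    """Return an initial task -> best-trajectory memory without any hand-written gold examples."""
    memory: dict[str, str] = {}
    for task in tasks:
        candidates = [explore(task) for _ in range(attempts_per_task)]
        scored = [(score(task, traj), traj) for traj in candidates]
        best_score, best = max(scored, key=lambda pair: pair[0])
        if best_score >= keep_threshold:
            memory[task] = best
    return memory
```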

    Memp in action

    To test the framework, the team implemented Memp on top of powerful LLMs like GPT-4o, Claude 3.5 Sonnet and Qwen2.5, evaluating them on complex tasks like household chores in the ALFWorld benchmark and information-seeking in TravelPlanner. The results showed that building and retrieving procedural memory allowed an agent to distill and reuse its prior experience effectively.

    During testing, agents equipped with Memp not only achieved higher success rates but became much more efficient. They eliminated fruitless exploration and trial-and-error, leading to a substantial reduction in both the number of steps and the token consumption required to complete a task.

Using procedural memory (right) helps agents accomplish tasks in fewer steps and with fewer tokens (source: arXiv)

    One of the most significant findings for enterprise applications is that procedural memory is transferable. In one experiment, procedural memory generated by the powerful GPT-4o was given to a much smaller model, Qwen2.5-14B. The smaller model saw a significant boost in performance, improving its success rate and reducing the steps needed to complete tasks.

    According to Fang, this works because smaller models often handle simple, single-step actions well but falter when it comes to long-horizon planning and reasoning. The procedural memory from the larger model effectively fills this capability gap. This suggests that knowledge can be acquired using a state-of-the-art model, then deployed on smaller, more cost-effective models without losing the benefits of that experience.
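
In code, transfer of this kind is mostly a matter of building the memory with the stronger model and then handing the same store to a cheaper model at inference time; the sketch below shows that shape, with the model callable and prompt format as assumptions rather than anything from the paper.

```python
# Illustrative memory transfer: build procedural memory with a strong model, reuse it with a smaller one.
from typing import Callable

Model = Callable[[str], str]  # prompt in, completion out


def solve_with_memory(model: Model, memory: dict[str, str], task: str) -> str:
    """Prepend the most relevant stored procedure to the prompt before calling the (smaller) model."""
    hint = memory.get(task, "")  # a real system would retrieve by similarity, not exact match
    prompt = f"Relevant procedure from past experience:\n{hint}\n\nNew task: {task}"
    return model(prompt)


# Intended usage, assuming the memory was built once with a strong model (e.g. GPT-4o):
#   memory = bootstrap_memory(tasks, explore=strong_model_rollout, score=metric)
#   answer = solve_with_memory(small_model, memory, "plan a 3-day trip within budget")
```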

    Toward truly autonomous agents

    By equipping agents with memory-update mechanisms, the Memp framework allows them to continuously build and refine their procedural knowledge while operating in a live environment. The researchers found this endowed the agent with a “continual, almost linear mastery of the task.”

    However, the path to full autonomy requires overcoming another hurdle: Many real-world tasks, such as producing a research report, lack a simple success signal. To continuously improve, an agent needs to know if it did a good job. Fang says the future lies in using LLMs themselves as judges.

    “Today we often combine powerful models with hand-crafted rules to compute completion scores,” he notes. “This works, but hand-written rules are brittle and hard to generalize.”

    An LLM-as-judge could provide the nuanced, supervisory feedback needed for an agent to self-correct on complex, subjective tasks. This would make the entire learning loop more scalable and robust, marking a critical step toward building the resilient, adaptable and truly autonomous AI workers needed for sophisticated enterprise automation.
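
A minimal LLM-as-judge sketch, assuming some chat-completion client is available behind the llm() callable, might look like this:

```python
# Illustrative LLM-as-judge scoring for tasks without a simple success signal (placeholder llm() client).
import re
from typing import Callable


def judge(llm: Callable[[str], str], task: str, output: str) -> float:
    """Ask a model to grade an agent's output and return a score in [0, 1]."""
    prompt = (
        "You are grading an AI agent's work.\n"
        f"Task: {task}\n"
        f"Agent output: {output}\n"
        "Reply with a single integer from 0 (useless) to 10 (excellent)."
    )
    reply = llm(prompt)
    match = re.search(r"\d+", reply)
    return min(int(match.group()), 10) / 10 if match else 0.0
```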
