
    Ai2’s MolmoAct model ‘thinks in 3D’ to challenge Nvidia and Google in robotics AI

    By Techurz | August 14, 2025

    Physical AI, where robotics and foundation models come together, is a fast-growing space, with companies like Nvidia, Google and Meta releasing research and experimenting with melding large language models (LLMs) with robots. 

    New research from the Allen Institute for AI (Ai2) aims to challenge Nvidia and Google in physical AI with the release of MolmoAct 7B, a new open-source model that allows robots to “reason in space.” MolmoAct, based on Ai2’s open-source Molmo, “thinks” in three dimensions, and Ai2 is also releasing its training data. The model carries an Apache 2.0 license, while the datasets are licensed under CC BY-4.0. 

    Ai2 classifies MolmoAct as an Action Reasoning Model, in which foundation models reason about actions within a physical, 3D space.

    What this means is that MolmoAct can use its reasoning capabilities to understand the physical world, plan how it occupies space and then take that action. 


    “MolmoAct has reasoning in 3D space capabilities versus traditional vision-language-action (VLA) models,” Ai2 told VentureBeat in an email. “Most robotics models are VLAs that don’t think or reason in space, but MolmoAct has this capability, making it more performant and generalizable from an architectural standpoint.”

    Physical understanding 

    Since robots exist in the physical world, Ai2 claims MolmoAct helps robots take in their surroundings and make better decisions on how to interact with them. 

    “MolmoAct could be applied anywhere a machine would need to reason about its physical surroundings,” the company said. “We think about it mainly in a home setting because that’s where the greatest challenge lies for robotics, because there things are irregular and constantly changing, but MolmoAct can be applied anywhere.”

    MolmoAct understands the physical world by outputting “spatially grounded perception tokens,” which are pretrained and extracted using a vector-quantized variational autoencoder, a model that converts data inputs, such as video, into tokens. The company said these tokens differ from those used by VLAs in that they are not text inputs. 

    These tokens give MolmoAct spatial understanding and let it encode geometric structure; from them, the model estimates the distances between objects. 

    Once it has estimated distances, MolmoAct predicts a sequence of “image-space” waypoints, points in the scene through which it can plot a path. Only then does the model begin outputting specific actions, such as dropping an arm by a few inches or stretching out. 
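    The passage above describes a three-stage pipeline: perception tokens, then image-space waypoints, then low-level actions. Below is a minimal Python sketch of that control flow. Every name and the quantization logic here are purely illustrative assumptions (the real model uses a learned VQ-VAE codebook and a transformer policy, not these toy functions):

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Waypoint:
        """An image-space point the robot can route through."""
        x: int
        y: int

    def encode_perception(depth_map: List[List[float]]) -> List[int]:
        # Stage 1: turn raw sensor input into discrete "perception tokens".
        # A toy quantizer standing in for the learned VQ-VAE codebook.
        return [min(int(d * 10), 255) for row in depth_map for d in row]

    def plan_waypoints(tokens: List[int], n: int = 3) -> List[Waypoint]:
        # Stage 2: pick image-space waypoints; here we simply steer toward
        # the token with the smallest estimated distance.
        nearest = tokens.index(min(tokens))
        return [Waypoint(x=nearest, y=step) for step in range(n)]

    def to_actions(waypoints: List[Waypoint]) -> List[str]:
        # Stage 3: decode waypoints into low-level motion commands.
        return [f"move_to({w.x}, {w.y})" for w in waypoints]

    depth = [[0.5, 1.2], [0.3, 2.0]]  # toy 2x2 depth map
    actions = to_actions(plan_waypoints(encode_perception(depth)))
    print(actions)
    ```

    The point of the sketch is the ordering: actions come last, only after spatial perception and path planning, which is what distinguishes an Action Reasoning Model from a VLA that maps inputs to actions directly.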

    Ai2’s researchers said they were able to get the model to adapt to different embodiments (i.e., either a mechanical arm or a humanoid robot) “with only minimal fine-tuning.”

    Benchmark testing conducted by Ai2 showed MolmoAct 7B had a task success rate of 72.1%, beating models from Google, Microsoft and Nvidia. 

    A small step forward

    Ai2’s research is the latest to take advantage of the unique benefits of LLMs and vision-language models (VLMs), especially as the pace of innovation in generative AI continues to grow. Experts in the field see work from Ai2 and other tech companies as building blocks. 

    Alan Fern, professor at the Oregon State University College of Engineering, told VentureBeat that Ai2’s research “represents a natural progression in enhancing VLMs for robotics and physical reasoning.”

    “While I wouldn’t call it revolutionary, it’s an important step forward in the development of more capable 3D physical reasoning models,” Fern said. “Their focus on truly 3D scene understanding, as opposed to relying on 2D models, marks a notable shift in the right direction. They’ve made improvements over prior models, but these benchmarks still fall short of capturing real-world complexity and remain relatively controlled and toyish in nature.”

    He added that while there’s still room for improvement on the benchmarks, he is “eager to test this new model on some of our physical reasoning tasks.” 

    Daniel Maturana, co-founder of the start-up Gather AI, praised the openness of the data, noting that “this is great news because developing and training these models is expensive, so this is a strong foundation to build on and fine-tune for other academic labs and even for dedicated hobbyists.”

    Increasing interest in physical AI

    It has been a long-held dream for many developers and computer scientists to create more intelligent, or at least more spatially aware, robots. 

    However, building robots that quickly process what they can “see” and move and react smoothly is difficult. Before the advent of LLMs, scientists had to code every single movement, which meant a lot of work and little flexibility in the kinds of robotic actions possible. Now, LLM-based methods let robots (or at least robotic arms) determine the next possible actions to take based on the objects they are interacting with.

    Google Research’s SayCan helps a robot reason about tasks using an LLM, enabling the robot to determine the sequence of movements required to achieve a goal. Meta and New York University’s OK-Robot uses visual language models for movement planning and object manipulation.

    Hugging Face released a $299 desktop robot in an effort to democratize robotics development. Nvidia, which proclaimed physical AI to be the next big trend, released several models to fast-track robotic training, including Cosmos-Transfer1. 

    OSU’s Fern said there’s more interest in physical AI even though demos remain limited. However, the quest to achieve general physical intelligence, which eliminates the need to individually program actions for robots, is becoming easier. 

    “The landscape is more challenging now, with less low-hanging fruit. On the other hand, large physical intelligence models are still in their early stages and are much more ripe for rapid advancements, which makes this space particularly exciting,” he said. 
