Product Hunt
MolmoAct 2
Open robotics model that reasons in 3D before acting
MolmoAct 2 is an open Action Reasoning Model that reasons in 3D before directing robot actions, handles bimanual tasks without per-task fine-tuning, and runs up to 37x faster than MolmoAct. For robotics researchers and ML engineers.
Developer & User Discourse
[Redacted] • May 9, 2026
I wonder how this dataset handles the variability in real-world object interactions—does it include failure cases or only successful demonstrations? That could be huge for robust policy learning.
[Redacted] • May 9, 2026
Really useful for generalist training of industrial robots. Covering robotic arm manipulation and handling the inverse kinematics is usually a big hassle. Would definitely explore this model for robot training.
[Redacted] • May 6, 2026
700 hours of bimanual robot demonstrations, all open, is the kind of training resource the robotics field has been missing.

What it is: MolmoAct 2 is an open Action Reasoning Model from Ai2 that reasons in 3D before directing physical robot actions, trained in part on the MolmoAct 2-Bimanual YAM dataset, the largest open-source bimanual robotics dataset released to date.

Most robotics foundation models are trained on proprietary data that no one outside the lab can inspect or build on. That makes reproducing results nearly impossible and limits who can meaningfully contribute to the field. Ai2 built MolmoAct 2 differently, starting with the data. The MolmoAct 2-Bimanual YAM dataset covers 700 hours of two-arm manipulation demonstrations: folding towels, scanning groceries, clearing tables, charging smartphones, and more. It contains over 30 times the robot data used to train the original MolmoAct.

What makes it different: Bimanual capability is baked into the base model rather than added through per-task fine-tuning. The language annotations were reannotated to increase unique instruction labels from 71,000 to around 146,000, which makes the model more robust to real-world phrasing variation. The dataset was supplemented with a broader mix covering different arms, camera setups, and control schemes so the model generalises beyond the training hardware.

Key features:
- 700-hour MolmoAct 2-Bimanual YAM dataset, fully open
- Native bimanual manipulation without per-task fine-tuning
- Reannotated language instructions for phrasing robustness
- MolmoAct 2-Think variant with adaptive depth perception tokens
- Reference hardware setup published: YAM arms, overhead and close-up cameras, tabletop workspace

Benefits:
- Researchers can study, reproduce, and build on the training data directly (see the loading sketch below)
- Dataset covers varied arms, cameras, and control schemes for broader generalisation
- Open action tokenizer released alongside model weights (illustrated in the second sketch below)
- Training code coming soon under open-source license

Who it's for: Robotics researchers and ML engineers who need open training data and reproducible recipes to build or improve manipulation models.

The data problem in robotics AI is as significant as the model problem. Releasing both together is what makes this launch worth tracking.
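As a rough illustration of what "build on the training data directly" could mean in practice, here is a minimal sketch of streaming one episode, assuming the dataset is published on the Hugging Face Hub. The repo id "allenai/molmoact-2-bimanual-yam" and the field names ("instruction" etc.) are guesses for illustration only; check Ai2's release page for the actual identifiers and schema.

```python
# Hypothetical sketch: stream one episode from the bimanual dataset.
# The repo id and field names are assumptions, not confirmed identifiers.
from datasets import load_dataset

ds = load_dataset(
    "allenai/molmoact-2-bimanual-yam",  # assumed repo id
    split="train",
    streaming=True,  # avoid downloading 700 hours of demos up front
)

for episode in ds.take(1):
    print(sorted(episode.keys()))      # discover the real schema first
    print(episode.get("instruction"))  # e.g. a natural-language task label
```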
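The open action tokenizer is the piece that lets a language model emit robot actions as ordinary tokens. Ai2's exact scheme isn't described in this launch post, but the standard technique in this model family is uniform binning of normalized continuous actions into a discrete vocabulary. The sketch below illustrates that general idea only; the bin count, action range, and 14-DoF layout are made up for the example.

```python
import numpy as np

# Generic action-tokenization sketch (NOT Ai2's released tokenizer):
# map continuous control values in [LOW, HIGH] to integer token ids
# by uniform binning, so a language model can predict them as tokens.
N_BINS = 256          # assumed vocabulary size per action dimension
LOW, HIGH = -1.0, 1.0  # assumed normalized action range

def tokenize(actions: np.ndarray) -> np.ndarray:
    """Continuous actions -> integer token ids in [0, N_BINS - 1]."""
    clipped = np.clip(actions, LOW, HIGH)
    ids = np.floor((clipped - LOW) / (HIGH - LOW) * N_BINS).astype(int)
    return np.minimum(ids, N_BINS - 1)  # x == HIGH falls in the last bin

def detokenize(tokens: np.ndarray) -> np.ndarray:
    """Token ids -> bin-center continuous values."""
    return LOW + (tokens + 0.5) * (HIGH - LOW) / N_BINS

# Example: a 14-DoF bimanual action (7 joints per arm) round-trips with
# error bounded by half a bin width, (HIGH - LOW) / (2 * N_BINS) ~= 0.004.
a = np.random.uniform(LOW, HIGH, size=14)
print(np.max(np.abs(detokenize(tokenize(a)) - a)))
```

The round-trip error is bounded by half a bin width, which is why vocabulary size trades off directly against control resolution in this style of tokenizer.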