RLWRLD, a physical AI company building robotics foundation models for dexterous manipulation, unveiled RLDX-1 at "Dexterity Night in SF," introducing a model designed to help humanoid robots perform contact-rich tasks such as grasping, pouring and tool use.
The company also reported benchmark results across humanoid tabletop, kitchen manipulation and real-world coffee-pouring evaluations, and said the model runs across multiple robot embodiments, including WIRobotics' Allex humanoid, Franka Research 3 and OpenArm.
At the launch event, Amit Goel, head of robotics ecosystem and edge AI product at Nvidia, took the stage and said: "RLWRLD is one of the core partners in the physical AI ecosystem we're building at Nvidia."
RLDX-1 was developed on the Nvidia stack – including Nvidia Isaac GR00T, Nvidia Isaac Lab, Nvidia Isaac Sim and cuRobo for simulation; Nvidia AI infrastructure with Hopper GPUs for training; and Nvidia Jetson AGX Thor with Nvidia TensorRT for inference.
RLWRLD's debut event brought together senior leaders from Nvidia alongside top humanoid hardware and AI infrastructure companies – including WIRobotics, Enactic, Origami Robotics and Proception AI – signaling the formation of a new alliance built around dexterous manipulation.

RLWRLD bills RLDX-1 as the world's first foundation model designed from the ground up with a "Dexterity-First" philosophy – a direct response to what CEO Junghee Ryu calls the real bottleneck of the humanoid era.
"Robot AI so far has been stuck on 'seeing' and 'talking,'" Ryu said during his keynote. "If robots are going to do real work in factories, kitchens and warehouses, they need to grasp, feel and hold on. RLDX-1 was built from day one to fill that gap."
Outperforming Isaac GR00T N1.6 and breaking the 70-point barrier
In a technical session led by RLWRLD chief scientist Jinwoo Shin, also a professor at KAIST, the company demonstrated RLDX-1 across three high-bar evaluations:
GR-1 Tabletop (humanoid-specific): RLDX-1 outperformed Isaac GR00T N1.6 by 10.7 percentage points.
RoboCasa Kitchen: RLDX-1 scored 70.6 – the first Vision-Language-Action (VLA) model in the world to break the 70-point mark on the long-horizon, contact-rich benchmark.
Coffee-pouring on WIRobotics' Allex humanoid: RLDX-1 achieved a 70.8 percent success rate – roughly double that of competing models.
The model's core innovation is a Multi-Stream Action Transformer (MSAT) architecture that gives vision, language, action, tactile and memory signals their own independent streams before fusing them through joint attention – a design Shin says is essential for tasks involving dynamic weight shifts, such as pouring liquid from a pot into a cup.
A live-recorded demo of the "Pot-to-Cup Pouring" task – in which the robot hand sensed the changing weight of the vessel in real time – drew audible applause from the audience.
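The multi-stream idea can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not RLWRLD's implementation: the modality token counts, feature widths, per-modality projections and single-head attention are all hypothetical stand-ins for what a real MSAT would use.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # shared model width (assumed, illustrative only)

def stream_encoder(x, w):
    """Project one modality into the shared width. Each modality keeps
    its own independent weights, i.e. its own 'stream'."""
    return np.tanh(x @ w)

def joint_attention(tokens):
    """Single-head scaled dot-product self-attention over the fused
    token set, so every modality can attend to every other."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

# Hypothetical per-modality inputs: (num_tokens, feature_dim)
modalities = {
    "vision":   rng.normal(size=(8, 32)),
    "language": rng.normal(size=(5, 24)),
    "action":   rng.normal(size=(4, 12)),
    "tactile":  rng.normal(size=(6, 10)),
    "memory":   rng.normal(size=(3, 20)),
}

# Independent streams: a separate projection per modality
streams = {name: stream_encoder(x, rng.normal(size=(x.shape[1], D)))
           for name, x in modalities.items()}

# Fuse: concatenate all stream tokens, then attend jointly across them
fused = joint_attention(np.concatenate(list(streams.values()), axis=0))
print(fused.shape)  # (26, 16): 8+5+4+6+3 tokens at the shared width
```

The point of the structure, per Shin's description, is that tactile and memory signals are first-class token streams rather than features bolted onto a vision-language backbone, so the fusion step can weight them when the physics of the task (such as a shifting liquid load) demands it.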
Humanoid CEOs converge: 'The hand is the next inflection point'
The latter half of the evening featured product showcases and a candid panel discussion with humanoid startup CEOs Hiroto Yamamoto of Enactic, Quanting Xie of Origami Robotics and Jay Li of Proception AI.
The panel converged on three themes shaping the next phase of humanoid robotics: the importance of cross-embodiment architectures not locked to a single hardware vendor; the structural moat created by real-world industrial data partnerships; and the growing global standards race forming around robotics foundation models (RFMs).
RLDX-1 already runs across multiple embodiments from a single backbone – including the WIRobotics Allex humanoid, the Franka Research 3 collaborative robot and the open-source OpenArm platform – a capability the panelists called "a compelling collaboration model from a hardware company's perspective."
Toward a 4D+ world model: 'Seeing what pixels can't capture'
Closing the event, Ryu framed RLDX-1 as a starting point rather than a destination.
"The information that isn't captured in pixels will never appear in your dataset, no matter how much video you collect," he said.
"Today is the starting point of a long roadmap – one we're walking together with the global humanoid partners in this room – toward a 4D+ world model."
RLWRLD's next-generation 4D+ world model goes beyond vision, language and action to jointly predict and generate contact, torque and robot state along a temporal axis – directly simulating physical information that conventional video-based world models cannot capture.
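In schematic terms, "jointly predict contact, torque and robot state along a temporal axis" means one dynamics model rolls a combined state vector forward in time, rather than separate per-signal predictors. The sketch below is a toy NumPy stand-in under assumed dimensions (4 contact channels, 7 torque channels, 14 robot-state channels) with a placeholder transition function; it illustrates only the joint-rollout structure, not RLWRLD's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layout of one timestep's combined state: the quantities
# the article says a 4D+ world model would predict beyond pixels.
STATE = {"contact": 4, "torque": 7, "robot_state": 14}  # assumed dims
TOTAL = sum(STATE.values())  # 25

W = rng.normal(scale=0.1, size=(TOTAL, TOTAL))  # stand-in dynamics

def rollout(x0, steps):
    """Autoregressively predict contact, torque and robot state as one
    vector per step - a joint temporal model, not per-signal models."""
    xs, x = [], x0
    for _ in range(steps):
        x = np.tanh(x @ W)  # placeholder one-step transition
        xs.append(x)
    return np.stack(xs)

traj = rollout(rng.normal(size=TOTAL), steps=10)
print(traj.shape)  # (10, 25): 10 predicted steps of the joint state
```

Because contact and torque never appear in pixels, a video-only world model has no supervision signal for them; carrying them in the predicted state is the distinction Ryu is drawing.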
Backed by major global corporates – Japan and Korea events to follow
RLWRLD has raised funding from a roster of major enterprises across multiple sectors, including SK Telecom, LG Electronics, CJ Logistics, Lotte, KDDI, ANA Holdings, Mitsui Chemicals and Shimadzu Corporation.
The company is currently running joint benchmark development, proof-of-concept (PoC) and Robotics Transformation (RX) projects with more than ten large enterprise partners.
Following the US debut, RLWRLD plans to host RLDX-1 launch events in Japan and Korea in the coming weeks.
