How NVIDIA trains robots in virtual reality

GR00T 2.0 lives through 10,000 years of simulated experience in a few days, then works in the real world with no additional tuning: Zero-Shot Transfer.

Author: Michael Kokin

At GTC 2026, NVIDIA showed a demo of Reinforcement Learning in Omniverse using the GR00T 2.0 model. Dozens of robot types, from humanoids to industrial manipulators, train in a virtual environment via RL and then work in the real world. GR00T 2.0 adapts to non-standard situations: it catches falling objects and works alongside humans. It comes across less as a set of algorithms and more as an AI that understands the physics of the space around it.

Omniverse creates a precise digital twin of reality: friction, gravity, and lighting behave as they do in the physical world. Thousands of robot copies run inside it simultaneously, each trying different approaches. The RL agent earns a reward for correct actions and a penalty for mistakes. What would take years in reality takes hours here.
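The training loop described above can be sketched in a few dozen lines. This is an illustration, not NVIDIA's actual API: the toy "reach the goal" dynamics, reward, environment count, and the random-search policy update are all invented for the example. Real Omniverse/Isaac training runs thousands of GPU-parallel physics instances with far richer rewards, but the shape of the loop is the same: many copies explore in parallel, returns are scored, and the policy is updated toward what worked.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ENVS, DIM, STEPS = 512, 3, 40  # hypothetical sizes for the sketch

def rollout(W):
    """Run one episode in every virtual robot copy at once.

    W: (N_ENVS, DIM, DIM) -- a separate candidate linear policy per copy.
    State: end-effector position; goal: move it to the origin.
    """
    pos = rng.normal(size=(N_ENVS, DIM))          # randomized start positions
    ret = np.zeros(N_ENVS)
    for _ in range(STEPS):
        act = np.einsum("nij,nj->ni", W, pos)     # per-env linear policy
        pos = pos + 0.1 * np.clip(act, -1, 1)     # toy dynamics
        ret -= np.linalg.norm(pos, axis=1)        # reward: stay near the goal
    return ret

# Each copy explores a slightly different policy; the best performers
# are averaged into the next candidate (a crude cross-entropy-style update).
mean_w = np.zeros((DIM, DIM))
for it in range(10):
    W = mean_w + 0.3 * rng.normal(size=(N_ENVS, DIM, DIM))
    ret = rollout(W)
    elite = W[np.argsort(ret)[-50:]]              # top-50 policies this round
    mean_w = elite.mean(axis=0)
```

Swap the toy dynamics for a GPU physics engine and the random search for PPO, and this is essentially what "thousands of robot copies, reward for correct actions, penalty for mistakes" means in practice.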

The most fascinating part is Zero-Shot Transfer. The simulation is accurate enough that a robot trained in the virtual world immediately works in the real one with no additional tuning. Previously, each robot had to be programmed by hand for a specific operation; GR00T 2.0 makes them general-purpose.
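NVIDIA hasn't published the exact recipe, but the standard technique behind this kind of sim-to-real transfer is domain randomization: vary the physics parameters across the simulated copies so that the real world ends up looking like just one more sample from the training distribution. A minimal sketch, with parameter names and ranges invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def randomized_physics(n_envs):
    """Sample a different physics configuration for every robot copy.

    Ranges here are invented for the example; real setups randomize
    dozens of parameters (joint damping, sensor noise, textures,
    camera pose, actuation latency, ...).
    """
    return {
        "friction": rng.uniform(0.4, 1.2, size=n_envs),   # surface friction
        "mass_kg": rng.uniform(0.8, 1.2, size=n_envs),    # payload mass scale
        "gravity": rng.uniform(9.6, 10.0, size=n_envs),   # m/s^2, slight wobble
        "delay_ms": rng.integers(0, 30, size=n_envs),     # actuation latency
    }

params = randomized_physics(1024)
# A policy trained against all 1024 variants at once cannot overfit any
# single simulator, so the real world falls inside the training range.
```

The better the base simulation (the "digital twin"), the narrower these ranges can be, which is exactly why Omniverse's physical accuracy matters for zero-shot deployment.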

Jensen Huang called Omniverse a "factory for producing intelligence": physical skills are encoded in neural networks before the robot takes its first step in reality.

Simulation-based training has been actively attracting investors since last year (in the latest YC batch, at least three startups were working on it, in construction and logistics). The next step is for robots not just to learn in simulation, but to build it themselves: to imagine, the way humans do, what would happen if they took a given action. That's where this is all heading.