Robots can learn new skills using Cornell’s RHyME AI by watching a single video.

Context: Traditionally, teaching robots new skills has been slow and tedious, requiring hours of step-by-step demonstrations for even the simplest tasks. When a robot encountered an unexpected event, such as dropping a tool or hitting an unanticipated obstruction, its progress would often grind to a halt. This inflexibility has long limited the practical use of robots in environments where unpredictable events are the norm.

Cornell University researchers are charting a new course with RHyME, an artificial intelligence framework that dramatically streamlines robot learning. RHyME, which stands for Retrieval of Hybrid Imitation Under Mismatched Execution, allows robots to learn new skills just by watching a single video. This is a radical departure from the tedious data collection and flawless replication previously required for skill development.

The key advance is RHyME’s ability to translate human demonstrations into robot actions. Unlike humans, robots have traditionally needed precise, rigid instructions to succeed; even small differences in how a human and a robot perform a task can derail the learning process.

RHyME solves this problem by letting a robot draw on a memory of previously observed actions. When shown a demonstration, such as placing a mug in a sink, the robot searches its memory bank for similar actions, like picking up a glass or putting an object down. Even if it has never seen that exact scenario, it can still learn the new task from these familiar fragments. This approach makes robots more flexible and efficient: RHyME requires only 30 minutes of robot-specific training data, compared with the thousands of hours demanded by earlier methods. In laboratory tests, robots trained with RHyME performed tasks more than 50 percent better than those trained with traditional techniques.
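To picture the retrieval idea described above, here is a minimal sketch (not Cornell’s actual implementation). It assumes a hypothetical video encoder, embed_clip, and a memory bank of previously recorded robot clips, and matches each segment of a human demonstration to the most similar stored robot action by cosine similarity.

```python
# Illustrative sketch only: a toy version of retrieval-based imitation,
# not the RHyME codebase. embed_clip and the memory-bank layout are
# hypothetical stand-ins.
import numpy as np

def embed_clip(clip: np.ndarray) -> np.ndarray:
    """Stand-in for a learned video encoder: map a clip (frames x features)
    to a single feature vector by pooling over frames."""
    return clip.mean(axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def retrieve_robot_segments(human_demo_clips, robot_memory_bank):
    """For each segment of the human video, find the most similar
    previously recorded robot clip in the memory bank."""
    retrieved = []
    for clip in human_demo_clips:
        query = embed_clip(clip)
        best = max(robot_memory_bank,
                   key=lambda entry: cosine_similarity(query, entry["embedding"]))
        retrieved.append(best["robot_clip"])
    # The stitched sequence of familiar robot fragments approximates the
    # unseen task and could then supervise policy training.
    return retrieved

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 3 human-demo segments and a 5-entry robot memory bank.
    demo = [rng.normal(size=(10, 16)) for _ in range(3)]
    bank = [{"embedding": rng.normal(size=16), "robot_clip": f"robot_clip_{i}"}
            for i in range(5)]
    print(retrieve_robot_segments(demo, bank))
```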

The research team, led by doctoral student Kushal Kedia and assistant professor Sanjiban Choudhury, will present its findings at the IEEE International Conference on Robotics and Automation in Atlanta. Their collaborators are Prithwish Dan, Angela Chao, and Maximus Pace. The project received funding from Google, OpenAI, and the US Office of Naval Research.
