
Users can now correct robot actions using a new interactive system


Researchers are now using pre-trained generative AI models to teach robots how to perform complex tasks. These models are powerful because a single model can handle many different tasks.

During training, the models are only exposed to feasible robot actions, allowing them to learn valid movement paths, or trajectories, for robots. But these trajectories are not always suited to the needs of users in real-world scenarios.

To fix these issues, it is often necessary to collect new data and retrain the model. This can be expensive, slow, and require advanced machine-learning abilities. Imagine being able to correct the robot’s actions in real-time with a simple interaction.


Thanks to a framework created by MIT and NVIDIA researchers, this scenario could become a real possibility. Their framework could make robots more adaptable and user-friendly, allowing people to correct robot behavior in real time with simple interactions. The technique eliminates the need to collect new data or retrain the robot's machine-learning model: the robot responds to real-time, intuitive human feedback and selects an action sequence that closely matches the user's intention.

During testing, the framework's success rate was 21 percent higher than that of an alternative method.

Felix Yanwei Wang, a graduate student in electrical engineering and computer science (EECS) and lead author of the research paper, explains:

“We can’t expect laypeople to collect data and fine-tune neural network models. Consumers will expect the robot to work right out of the box, and if it doesn’t, they will want an intuitive way to customize it. This is the challenge that we took on in this work. We want to allow users to interact with robots without introducing these kinds of mistakes, so we can get a behavior that is more aligned with user intent during deployment, but also valid and achievable.”

MIT researchers developed an interactive system that allows laypeople to easily fix a robot’s behavior using simple interactions. As Felix Yanwei Wang demonstrates, a user could guide a robot by pointing at an object on a screen.
Photo credit: Melanie Gonick, MIT.

The framework offers three simple ways for users to guide the robot's actions: they can point at the desired object in the robot's camera view, trace a path for the robot to follow on a screen, or physically move the robot's arm. Physically moving the robot is the most precise method, since it avoids the information lost when translating a 2D image into a 3D action.

To prevent the robot from making invalid moves, such as colliding with objects, the researchers developed a sampling method. It lets the robot choose, from a set of valid options, the action sequence that best matches the user's intention.
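The idea can be sketched in a few lines: draw candidate trajectories from the pre-trained policy, discard samples that hit an obstacle, and keep the valid sample closest to the user's traced path. Everything here (the random-walk "policy", the circular obstacle, the distance metric) is a toy stand-in for illustration, not the method from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectories(n=100, length=20):
    """Stand-in for a pre-trained generative policy: each sample is a
    sequence of 2D waypoints produced by a short random walk."""
    starts = rng.uniform(0.0, 1.0, size=(n, 1, 2))
    steps = rng.normal(0.0, 0.05, size=(n, length, 2)).cumsum(axis=1)
    return starts + steps  # shape (n, length, 2)

def is_valid(traj, obstacle_center=(0.5, 0.5), radius=0.1):
    """Reject any trajectory whose waypoints enter a toy circular obstacle."""
    d = np.linalg.norm(traj - np.asarray(obstacle_center), axis=-1)
    return bool(np.all(d > radius))

def closest_to_intent(trajs, user_sketch):
    """Among the valid samples, return the one whose waypoints lie
    nearest (on average) to the path the user traced."""
    valid = [t for t in trajs if is_valid(t)]
    dists = [np.mean(np.linalg.norm(t - user_sketch, axis=-1)) for t in valid]
    return valid[int(np.argmin(dists))]

# Hypothetical user input: a straight path traced across the screen.
user_sketch = np.linspace([0.1, 0.1], [0.9, 0.9], 20)
best = closest_to_intent(sample_trajectories(), user_sketch)
```

Because every candidate comes from the policy itself and invalid ones are filtered out before selection, the chosen trajectory stays feasible while still bending toward the user's intent.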

Rather than imposing the user's instructions directly, the robot balances the user's feedback against its own learned behavior. This balance allows the robot to adapt while remaining within safe limits.
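One simple way to express such a balance is to score each candidate by combining the policy's own preference with closeness to the user's feedback, then pick the highest-scoring candidate. The weighting below is a minimal sketch with made-up numbers; `beta` and the candidate values are hypothetical, not taken from the paper.

```python
def combined_score(policy_logprob, intent_distance, beta=0.5):
    """Blend the policy's preference (log-probability of a candidate
    trajectory) with how far that candidate strays from the user's
    feedback. beta is a hypothetical trade-off weight."""
    return policy_logprob - beta * intent_distance

# Three hypothetical candidates: (log-probability, distance to user intent).
candidates = [(-1.0, 0.2), (-0.5, 1.5), (-2.0, 0.1)]
scores = [combined_score(lp, d) for lp, d in candidates]
best_idx = max(range(len(scores)), key=scores.__getitem__)
```

With these numbers, the winner is the candidate that is reasonably likely under the policy *and* close to the user's intent, rather than the one that maximizes either criterion alone.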

Tests with a robot arm in a toy kitchen, as well as in simulation, showed that this method outperformed alternatives. It may not always finish a task on the first attempt, but it lets users correct errors in real time, avoiding the delays caused by waiting for a task to finish before giving new instructions.

The researchers now aim to speed up the sampling procedure while maintaining or improving its performance.

Journal Reference:

  1. Yanwei Wang, Lirui Wang, Yilun Du, Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino, Dieter Fox, Julie Shah. Inference-Time Policy Steering through Human Interactions. arXiv:2411.16627




