A robot that can change its own shape takes a significant step toward the kind of adaptability animals show in unpredictable situations. Researchers in the field of modular self-reconfigurable robots (MSRR) argue that a machine able to alter its structure to suit different tasks gains a qualitatively new level of autonomy. Roboticist Hadas Kress-Gazit of Cornell University explains that a robot's ability to "know itself" and adjust its form to the environment is crucial for tackling complex challenges.
In a recent study published in Science Robotics, Kress-Gazit and her team demonstrated a breakthrough: a modular robot that autonomously reconfigures itself to solve problems posed by a changing environment. To make this possible, the researchers tightly constrained both the test environment and the robot's repertoire of actions.
The concept of self-reconfigurable robots isn't entirely new. As Mark Yim of the University of Pennsylvania pointed out, such ideas have been around at least since the days of the Transformers. What is novel in this work is the depth of the robot's understanding of its own capabilities. The experiment used a robot called SMORES-EP, made up of cube-like modules that attach magnetically. Many MSRRs operate in a decentralized manner, with each module sharing in both decision-making and movement; this study instead adopted a centralized design, equipping one module with a webcam on a small mast (a "Sauron-like eye") that provides a unified view of the surroundings. A central processor uses that view to direct all the modules.
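To make the contrast with decentralized designs concrete, here is a minimal Python sketch of a centralized control loop of this kind. All names here (CentralController, Module, the command strings) are illustrative assumptions, not the team's actual software.

```python
# Hypothetical sketch of a centralized MSRR control loop: one "eye" module
# perceives, a single planner decides, and every module receives commands.

from dataclasses import dataclass


@dataclass
class Module:
    """One cube-like module; in SMORES-EP, modules attach magnetically."""
    module_id: int

    def execute(self, command: str) -> None:
        # Placeholder for sending a low-level actuator command to this module.
        print(f"module {self.module_id}: {command}")


class CentralController:
    """Single decision-maker, unlike decentralized MSRRs in which every
    module shares in planning as well as movement."""

    def __init__(self, modules):
        self.modules = modules

    def sense(self) -> dict:
        # The mast-mounted webcam gives one unified view of the surroundings;
        # this stub stands in for real perception.
        return {"obstacle_ahead": False}

    def step(self) -> None:
        view = self.sense()
        command = "stop" if view["obstacle_ahead"] else "drive_forward"
        for module in self.modules:  # one brain directs all the modules
            module.execute(command)


controller = CentralController([Module(i) for i in range(4)])
controller.step()
```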
To prepare the robot for its tests, the team developed a comprehensive software library containing a variety of actions—from simple driving maneuvers to tasks like object collection—and the corresponding configurations needed to perform them. In a series of controlled laboratory challenges, the robot had to identify colored, tagged objects within a cluttered test area and then move them to designated locations. In one scenario, the robot needed to navigate a tunnel; in another, it had to reach high to stamp a box.
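One way to picture such a library is as a mapping from high-level behaviors to the configurations and low-level controllers that realize them. The sketch below is a hypothetical illustration; the configuration names and entries are assumptions, not the published library.

```python
# Illustrative behavior library: each high-level action maps to a robot
# configuration able to perform it, plus the controller to run in that shape.
# The specific entries and names here are assumed for illustration.

BEHAVIOR_LIBRARY = {
    "drive":        {"configuration": "Car",       "controller": "diff_drive"},
    "pick_object":  {"configuration": "Scorpion",  "controller": "grip_and_lift"},
    "cross_tunnel": {"configuration": "Proboscis", "controller": "low_profile_drive"},
    "stamp_high":   {"configuration": "Snake",     "controller": "raise_and_press"},
}


def lookup(action: str) -> dict:
    """Return the configuration/controller pair needed for an action."""
    try:
        return BEHAVIOR_LIBRARY[action]
    except KeyError:
        raise ValueError(f"no library entry for action {action!r}")


print(lookup("cross_tunnel"))
```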
The robot's strategy was straightforward. Using the camera feed, the planning software analyzed the environment and selected the configuration from its library best suited to the challenge at hand. Although the system was never directly compared with a decentralized version, the researchers documented its occasional shortcomings: just over 40 percent of the errors stemmed from low-level hardware issues, such as actuator failures, with perception problems and occasional human errors also contributing to the failures.
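In outline, that decision loop amounts to: perceive the scene, match its properties against the behavior library, and reconfigure if the required configuration differs from the current one. The following sketch makes those steps explicit, with stand-in perception and selection rules rather than the paper's actual planner.

```python
# Simplified perceive-select-reconfigure loop. The scene "properties" and the
# selection rule are stand-ins for the paper's vision and planning stack.

CONFIG_FOR_ACTION = {  # hypothetical, as in the library sketch above
    "drive": "Car",
    "cross_tunnel": "Proboscis",
    "stamp_high": "Snake",
}


def perceive(camera_frame) -> dict:
    # Stand-in for real perception over the mast camera's feed.
    return {"narrow_passage": True, "target_height_cm": 5}


def select_action(scene: dict) -> str:
    if scene.get("narrow_passage"):
        return "cross_tunnel"
    if scene.get("target_height_cm", 0) > 20:
        return "stamp_high"
    return "drive"


def plan_step(camera_frame, current_config: str) -> str:
    action = select_action(perceive(camera_frame))
    needed = CONFIG_FOR_ACTION[action]
    if needed != current_config:  # only reconfigure when the shape must change
        print(f"reconfiguring: {current_config} -> {needed}")
    return needed


plan_step(camera_frame=None, current_config="Car")
```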
Previous work by this research group focused on robots that modify their environment by moving objects or constructing ramps. However, the current study’s integration of sensory perception, high-level planning, and modular hardware marks a significant leap forward. Pinhas Ben-Tzvi of Virginia Tech remarked that with a clear task description, a modular robot can autonomously explore an unknown environment, determine when to reconfigure, and manipulate objects to complete its mission.
Looking ahead, the team suggests that future improvements could involve incorporating more detailed sensor data—such as input from a wheel that detects unexpected obstacles like unusually high steps—into the planning software. Mauro Dragone from Heriot-Watt University in Edinburgh emphasized the need to test these systems in less structured, more challenging real-world environments to fully demonstrate their practical value.
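As a rough illustration of that idea, the sketch below shows how a wheel-level reading might feed back into planning; the threshold, names, and logic are purely hypothetical, describing possible future work rather than the published system.

```python
# Hypothetical sketch of folding low-level sensor data into planning: a wheel
# sensor that reports an unexpectedly high step triggers replanning.

MAX_DRIVABLE_STEP_CM = 2.0  # assumed capability limit for a driving config


def on_wheel_feedback(step_height_cm: float, current_config: str) -> str:
    """Decide whether an unexpected obstacle should interrupt the plan."""
    if step_height_cm > MAX_DRIVABLE_STEP_CM and current_config == "Car":
        # Here the planner would be re-invoked to pick a configuration that
        # can climb, instead of blindly continuing to drive.
        return "replan"
    return "continue"


print(on_wheel_feedback(step_height_cm=4.5, current_config="Car"))
```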