
Robots are starting to make decisions in the operating room


In 2020, the Smart Tissue Autonomous Robot performed laparoscopic surgery on a live animal autonomously for the first time.

Picture a scene from the not-too-distant future. In a bright, high-tech operating room, a sleek robotic arm stands next to the table. The autonomous robot will not operate entirely on its own, but it will assist in the upcoming operation, performing key tasks with greater precision and lower risk.

The patient is one of the more than 150,000 people diagnosed with colon cancer in the United States each year. The only cure is to remove the affected part of the colon, which is best done with a minimally invasive laparoscopic technique that uses surgical tools and a thin video camera inserted through tiny incisions. It is a difficult operation, and the surgeon's experience, skill, and technique are important factors influencing surgical outcomes and complications, which occur in up to 16 percent of cases. These complications can reduce the patient's life expectancy and increase the risk of death. The hope is that an autonomous robotic surgical system will improve those odds.



This video shows the Smart Tissue Autonomous Robot in action, demonstrating the laparoscopic suturing of a small intestine.

In this surgery, the robot will handle the tasks that demand the highest level of accuracy. The surgeon will first control the robot's movements by hand to remove the cancerous tissue, and will then supervise the robot as it autonomously sews the remaining healthy colon back together. Using real-time imaging and planning, the robot will place each stitch with submillimeter accuracy that human hands cannot match. The result is a stronger, more uniform suture line that is less likely to leak, a dangerous complication that can occur if the connection does not heal properly.

Although autonomous robots have not yet operated on humans in the way we've just described, we now possess the tools needed for this futuristic style of surgery. Our team, based in coauthor Axel Krieger's robotics lab at Johns Hopkins University, is dedicated to developing robots that can perform complex, repetitive surgical tasks with greater accuracy and consistency than the best surgeons. Soon, a patient may hear a new greeting: "The robot will see you now."

The History of Surgical Robots

Robot-assisted surgery dates back to 1985, when a team at Long Beach Memorial Medical Center in California used an industrial robot arm to assist in a brain biopsy. The procedure was successful, but Westinghouse, the robot's maker, halted further surgeries, arguing that the robot had been designed for industrial work and lacked the necessary safety features. Despite this setback, surgical robots continued to develop. In 1994, U.S. regulators approved the first surgical robotic system, the Automated Endoscopic System for Optimized Positioning (AESOP), a voice-controlled robot arm for positioning laparoscopic cameras. In 2000, the da Vinci system was introduced: a teleoperated robot that allows surgeons to control tiny instruments from a console.


The first version of STAR sutured a piece of small intestine pulled up through an opening.

Ryan Decker

Surgeons, a cautious group, were initially slow to adopt the technology. In 2012, robots were used in less than 2 percent of general surgeries in the United States; by 2018, that figure had risen to 15 percent. For certain procedures, such as prostate gland removal, robots are now used in more than 90 percent of U.S. surgeries. For many other operations, though, the benefits are still uncertain, and some experts question the overall usefulness of robotic assistance, because the robots are expensive and the surgeons who use them require specialized training.

Autonomous robotic systems that can handle discrete tasks on their own, however, could deliver better performance while requiring less specialized training. Surgery demands deep medical expertise, a steady hand, and spectacular precision, and it takes years of intensive training to perform specialized procedures safely; there is little room for error. Autonomous robotic systems can help meet surgery's high demands for safety and consistency by performing routine tasks, avoiding mistakes, and eventually even completing operations without human intervention.

Innovation is clearly needed: the supply of surgeons is shrinking while the number of patients who need surgery keeps growing. A 2024 report from the Association of American Medical Colleges projected a shortage of up to 19,900 surgeons in the United States by 2036. Autonomous robots could give many more people access to high-quality surgery. So why aren't autonomous surgeries being performed already?

When we think of robots at work, we usually imagine them performing factory tasks like sorting packages or building cars. Robots excel in these environments because conditions are controlled and the tasks vary little. On an auto assembly line, for example, robots place the exact same parts in the exact same locations on every car. But the complexity of surgical procedures, with their dynamic interactions among soft tissues, blood vessels, and organs, does not translate easily to robotic automation. Every surgical scenario is unpredictable and requires real-time decisions. It's the same reason we don't yet see robots throughout daily life: the world is full of surprises and demands adaptation on the fly.

Developing robots capable of navigating the intricacies of the human body is a formidable challenge that requires sophisticated mechanical design, innovative imaging techniques, and most recently, advanced artificial-intelligence algorithms. These algorithms must be able to process real-time data to adapt to the unpredictable body environment.

A Surgical Bot That Can Work on Its Own

The year 2016 marked a significant milestone for our field, when one of our robotic systems performed the world's first autonomous soft-tissue surgery on a living animal. Called the Smart Tissue Autonomous Robot, or STAR, the system used a commercially available robot arm to sew tissue in the small intestine of a pig while being supervised by a surgeon. The robot moved autonomously between suturing positions along the tissue edge and waited for the surgeon's approval before placing each stitch. This control strategy, called supervised autonomy, ensures that surgeons remain engaged while critical tasks are automated.
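
To make the idea of supervised autonomy concrete, here is a minimal sketch of how such a control loop might be organized. It is not STAR's actual software, and every object and method name in it is a hypothetical placeholder; it only illustrates the pattern of autonomous motion punctuated by human approval gates.

```python
# A minimal sketch of a supervised-autonomy control loop. The planner, robot,
# and console objects are hypothetical stand-ins, not STAR's real software.

def supervised_suturing(planner, robot, surgeon_console):
    plan = planner.generate_suture_plan()      # ordered list of stitch targets
    for target in plan:
        robot.move_to(target)                  # autonomous repositioning
        # Approval gate: the critical action waits for the surgeon.
        if surgeon_console.approve(f"Place stitch at {target}?"):
            robot.place_stitch(target)
        else:
            # The surgeon can nudge the target or skip this stitch entirely.
            adjusted = surgeon_console.adjust_target(target)
            if adjusted is not None:
                robot.place_stitch(adjusted)
```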

STAR's suturing marked the first time a robotic system had demonstrated autonomous surgical performance objectively superior to the standard of care. Compared with human surgeons, STAR achieved more uniform suture spacing, which creates a stronger and more durable suture line that can withstand greater pressure within the intestine without leaking. It was a breakthrough achievement.

Such leaks can be the most dangerous complication of any type of gastrointestinal surgery. Up to 20 percent of patients who undergo surgery to reconnect their colon develop a leak, which can lead to life-threatening infections and may require further surgery.


The 2016 STAR system sutures the small intestine with a single robotic arm. Behind the robot, a screen shows near-infrared and 3D imaging side by side.

Ryan Decker

Before this 2016 surgery, autonomous soft-tissue surgery was considered a fantasy of science fiction. Because soft tissue constantly shifts and contorts, the surgical field changes each time the tissue is touched, and it’s impossible to use presurgical imaging to guide a robot’s motion. We had also been stymied by the state of surgical imaging. The best cameras that were compatible with surgical scopes—the long, thin tubes used to view internal surgeries—lacked the quantifiable depth information that autonomous robots need for navigation.

Critical innovations in surgical tools and imaging made the STAR robot a success. For instance, the system sutured with a curved needle, simplifying the motion needed to pass a needle through tissue. Additionally, a new design allowed a single robotic arm to both guide the needle and control the suture tension, so there was no risk of tools colliding in the surgical field.

But the most important innovation that made STAR possible was the use of a novel dual-camera system that enabled real-time tracking of the intestine during surgery. The first camera provided color images and quantifiable three-dimensional information about the surgical field. Using this information, the system created surgical plans by imaging the intestinal tissue and identifying the optimal locations for the stitches to yield the desired suture spacing. But at the time, the imaging rate of the system was limited to five frames per second—not fast enough for real-time application.
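
As a rough illustration of that planning step, the sketch below spaces stitch targets evenly along a detected tissue edge to achieve a desired pitch. It is a simplifying assumption of ours, not the published algorithm. (How we solved the speed limitation is described next.)

```python
import numpy as np

def plan_suture_points(edge_points, desired_spacing_mm):
    """Place stitch targets at roughly equal intervals along a tissue edge.

    edge_points: (N, 3) array of 3D points along the detected incision edge,
                 ordered from one end to the other (e.g., from the 3D camera).
    desired_spacing_mm: target distance between adjacent stitches.
    """
    edge_points = np.asarray(edge_points, dtype=float)
    # Cumulative arc length along the edge.
    seg_lengths = np.linalg.norm(np.diff(edge_points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_lengths)])
    total_length = arc[-1]
    n_stitches = max(2, int(round(total_length / desired_spacing_mm)) + 1)
    target_arcs = np.linspace(0.0, total_length, n_stitches)
    # Interpolate x, y, z separately at the evenly spaced arc lengths.
    return np.column_stack([np.interp(target_arcs, arc, edge_points[:, k])
                            for k in range(3)])
```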

To solve this limitation, we introduced a second, near-infrared camera that took about 20 images per second to track the positions of near-infrared markers placed on the target tissue. When the position of a given marker moved too much from one frame to the next, the system would pause and update the surgical plan based on data from the slower camera, which produced three-dimensional images. This strategy enabled STAR to track the soft-tissue deformations in two-dimensional space in real time, updating the three-dimensional surgical plan only when tissue movement jeopardized its success.
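
The logic of that dual-camera strategy can be pictured with a small sketch like the one below. The motion threshold and all of the helper objects are illustrative assumptions on our part, not the system's actual implementation.

```python
import numpy as np

# Fast 2D marker tracking triggers a slower 3D replanning step only when the
# tissue has shifted too far. Cameras, planner, and robot are hypothetical.

MOTION_THRESHOLD_PX = 15   # assumed pixel threshold for "too much" motion

def track_and_replan(nir_camera, rgbd_camera, planner, robot):
    prev_markers = nir_camera.marker_positions()        # ~20 Hz, 2D pixel coords
    plan = planner.plan_from_3d(rgbd_camera.capture())  # ~5 Hz, 3D imaging
    while plan:                                         # plan: list of stitches
        markers = nir_camera.marker_positions()
        drift = max(np.linalg.norm(m - p) for m, p in zip(markers, prev_markers))
        if drift > MOTION_THRESHOLD_PX:
            # Tissue moved too much: pause and rebuild the 3D surgical plan.
            plan = planner.plan_from_3d(rgbd_camera.capture())
            prev_markers = markers
            continue
        robot.execute_stitch(plan.pop(0))
```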


This version of STAR could place a suture at the correct location on the first try a little more than half the time. In practice, this meant that the STAR system needed a human to move the suture needle—after it had already pierced the skin—once every 2.37 stitches. That rate was nearly on par with how frequently human surgeons have to correct the needle position when manually controlling a robot: once every 2.27 stitches. The number of stitches applied per needle adjustment is a critical metric for quantifying how much collateral tissue is damaged during a surgery. In general, the fewer times tissue is pierced during surgery (which corresponds to a higher number of sutures per adjustment), the better the surgical outcomes for the patient.
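
To spell out the metric, here it is in code form. The stitch and adjustment counts in the example are invented purely to show the calculation; only the reported rates of roughly 2.37 and 2.27 come from the experiments described above.

```python
def stitches_per_adjustment(total_stitches, needle_adjustments):
    """Higher is better: fewer needle corrections per stitch generally means
    less collateral damage to the surrounding tissue."""
    return total_stitches / needle_adjustments

# Hypothetical counts chosen only to illustrate the arithmetic.
print(stitches_per_adjustment(19, 8))   # -> 2.375, roughly STAR's rate of 2.37
```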

For its time, the STAR system was a revolutionary achievement. However, its size and limited dexterity hindered doctors’ enthusiasm, and it was never used on a human patient. STAR’s imaging system was much bigger than the cameras and endoscopes used in laparoscopic surgeries, so it could perform intestinal suturing only through an open surgical technique in which the intestine is pulled up through a skin incision. To modify STAR for laparoscopic surgeries, we needed another round of innovation in surgical imaging and planning.

Improving STAR’s Surgical Autonomy

In 2020 (results published in 2022), the next generation of STAR set another record in the world of soft-tissue surgery: the first autonomous laparoscopic surgery in a live animal (again, intestinal surgery in a pig). The system featured a new endoscope that generates three-dimensional images of the surgical scene in real time by illuminating tissue with patterns of light and measuring how the patterns are distorted. What's more, the endoscope's dimensions were small enough to allow the camera to fit within the opening used for the laparoscopic procedure.
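
The structured-light principle behind such an endoscope can be sketched with simple triangulation. The code below is our own illustration with assumed calibration values; it is not the device's actual reconstruction pipeline.

```python
import numpy as np

def depth_from_structured_light(observed_cols, projected_cols,
                                focal_px, baseline_mm):
    """Triangulate depth for points matched between a projected light pattern
    and the camera image.

    observed_cols:  pixel column where each projected stripe appears on the sensor
    projected_cols: column from which the projector emitted that stripe
    focal_px:       camera focal length in pixels (assumed calibration value)
    baseline_mm:    camera-projector separation (assumed calibration value)

    Treating the projector as a second "camera", depth follows the standard
    stereo relation z = f * b / disparity.
    """
    disparity = np.asarray(observed_cols, float) - np.asarray(projected_cols, float)
    disparity = np.where(np.abs(disparity) < 1e-6, np.nan, disparity)  # avoid /0
    return focal_px * baseline_mm / disparity
```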


The autonomy afforded by the 2020 STAR system allows surgeons to take a step back from the surgical field [top]. Axel Krieger [bottom] takes a close look at STAR’s suturing.

Max Aguilera Hellweg

Adapting STAR for a laparoscopic approach affected every part of the system. For instance, these procedures take place within the limited workspace of the patient's abdomen, so we had to add a second robotic arm to maintain the proper tension in the suturing thread—all while avoiding collisions with the suturing arm. To help STAR autonomously manipulate thread and to keep the suture from tangling with completed stitches, we added a second joint to the robot's surgical tools, which enabled wristlike motions.

Now that the intestine was to be sutured laparoscopically, the tissue had to be held in place with temporary sutures so that STAR's endoscope could visualize it—a step commonly done in the nonrobotic equivalent of this procedure. But with the intestine anchored to the abdominal wall, the tissue moved with each breath of the animal. To compensate for this movement, we used machine learning to detect and measure the motions caused by each breath, then direct the robot to the right suture location. In these procedures, STAR generated options for the surgical plan before the first stitch, detected and compensated for motion within the abdomen, and completed most suturing motions in the surgical plan without surgeon input. This control strategy, called task autonomy, is a fundamental step toward the full surgical autonomy we envision for future systems.
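
One simplified way to picture that breathing compensation, offered purely as an assumption on our part rather than the team's actual machine-learning model, is to fit a periodic model to the observed tissue motion and add the predicted offset to each suture target.

```python
import numpy as np

def fit_breathing_offset(timestamps, displacements, breaths_per_min=15):
    """Fit a sinusoid a*sin(wt) + b*cos(wt) + c to the displacement of a
    tracked tissue point, then return a predictor for future offsets.

    The fixed respiration rate and the least-squares sinusoid are simplifying
    assumptions for illustration, not STAR's published method.
    """
    w = 2.0 * np.pi * breaths_per_min / 60.0
    t = np.asarray(timestamps, dtype=float)
    A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(displacements, float), rcond=None)
    return lambda t_new: a * np.sin(w * t_new) + b * np.cos(w * t_new) + c

# Usage sketch: shift each planned suture target by the predicted offset
# at the moment the stitch will be placed.
# predict = fit_breathing_offset(times, observed_displacements)
# corrected_target = planned_target + predict(time_of_stitch)
```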

While the original STAR's method of tissue detection still relied on near-infrared markers, recent advances in deep learning have enabled autonomous tissue tracking without these markers. Machine-learning techniques in image processing also allowed us to shrink the endoscope to 10 millimeters in diameter while enabling simultaneous three-dimensional imaging and tissue tracking in real time, with the same accuracy as STAR's earlier cameras.

All these advances enabled STAR to make fine adjustments during an operation, reducing the number of corrective actions required from the surgeon. In practice, this new STAR system can autonomously complete 5.88 stitches before a surgeon needs to adjust the needle position—a much better outcome than what a surgeon can achieve when operating a robot manually for the entire procedure, guiding the needle through every stitch. By comparison, when human surgeons perform laparoscopic surgery without any robotic assistance, they adjust their needle position after almost every stitch.

AI and machine learning methods will likely continue to play a prominent role as researchers push the boundaries of what surgical jobs can be completed using task automation. Eventually, these methods could lead to a more complete type of automation that has eluded surgical robots—so far.

The Future of Robotic Surgery

With each technical advance, autonomous surgical robots inch closer to the operating room. But to make these robots more usable in clinical settings, we'll need to equip the machines with the tools to see, hear, and maneuver more like a human. Robots can use computer vision to interpret visual data, natural-language processing to understand spoken instructions, and advanced motor control for precise movements. Integrating these systems will mean that a surgeon can verbally instruct the robot to "grasp the tissue on the left" or "tie a knot here," for instance. In traditional robotic surgery systems, by contrast, each action has to be described using complex mathematical equations.
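
To give a feel for what such an interface might look like at its very simplest, here is a toy sketch. Everything in it, from the keyword matching to the robot functions, is an invented placeholder rather than a real surgical-robot API; a practical system would rely on far more capable language and vision models.

```python
# Toy sketch of mapping spoken commands to robot actions. The vocabulary,
# parsing, and robot methods below are hypothetical, for illustration only.

def handle_command(text, robot):
    text = text.lower()
    if "grasp" in text:
        side = "left" if "left" in text else "right"
        robot.grasp_tissue(region=side)
    elif "knot" in text:
        robot.tie_knot(at=robot.current_tool_position())
    elif "stitch" in text or "suture" in text:
        robot.place_stitch(robot.next_planned_target())
    else:
        robot.ask_for_clarification(text)

# handle_command("grasp the tissue on the left", robot)
# handle_command("tie a knot here", robot)
```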


Specialized imaging enables STAR’s laparoscopic suturing. The purple dots here show the system’s proposed suture locations.

Hamed Saeidi

To build such robots, we'll need general-purpose robotic controllers capable of learning from vast datasets of surgical procedures. These controllers will observe expert surgeons during their training and learn how to adapt to unpredictable situations, such as soft-tissue deformation during surgery. Unlike the consoles used in today's robotic surgeries, which give human surgeons direct control, this future robot controller will use AI to autonomously manage the robot's movements and decision-making during surgical tasks, reducing the need for constant human input—while keeping the robot under a surgeon's supervision.

Surgical robots operating on human patients will gather a vast amount of data and, eventually, the robotic systems can train on that data to learn how to handle tasks they weren't explicitly taught. Because these robots operate in controlled environments and perform repetitive tasks, they can continuously learn from new data, improving their algorithms. The challenge, however, is in gathering this data across various platforms, as medical data is sensitive and bound by strict privacy regulations. For robots to reach their full potential, we'll need extensive collaboration across hospitals, universities, and industries to train these intelligent machines.

As autonomous robots make their way into the clinical world, we’ll face increasingly complex questions about accountability when something goes wrong. The surgeon is traditionally accountable for all aspects of the patient’s care, but if a robot acts independently, it’s unclear whether liability would fall on the surgeon, the manufacturer of the robotic hardware, or the developers of the software. If a robot’s misinterpretation of data causes a surgical error, for example, is the surgeon at fault for not intervening, or does the blame lie with the technology providers? Clear guidelines and regulations will be essential to navigate these scenarios and ensure that patient safety remains the top priority. As these technologies become more prevalent, it’s also important that patients be fully informed about the use of autonomous systems, including the potential benefits and the associated risks.

A scenario in which patients are routinely greeted by a surgeon and an autonomous robotic assistant is no longer a distant possibility, thanks to the imaging and control technologies being developed today. And when patients begin to benefit from these advancements, autonomous robots in the operating room won’t just be a possibility but a new standard in medicine.
