
Okay so I need you to actually sit with this one for a second.
A robot performed surgery. Alone. Nobody controlling it, nobody guiding it, nobody at a console. Just a machine, a patient, and an algorithm making every single decision in real time.
I'm a final-year medical student. I've watched surgeries. I've seen how much concentration, intuition, and honestly just raw experience go into what happens in that theatre. So when I tell you this stopped me cold, I mean it.
Let me back up though.
Surgery has always been about the human hand. Not just the technical skill, but the judgment behind it. Reading a surgical field isn't something you can explain easily; experienced surgeons will tell you it's almost instinctive after a while. You see something that doesn't look right and you adjust before you can even articulate why.
For all that brilliance though, surgeons are still human. They fatigue. They have bad days. And in huge parts of the world, especially across Africa and Southeast Asia, there just aren't enough of them. Patients wait. Conditions worsen. Some don't make it.
Robotic surgery was the first serious attempt to address part of that problem. Most people have heard of the da Vinci system at this point. Surgeon sits at a console, their movements get translated into precise robotic action inside the patient's body. Smaller incisions, quicker recovery, fewer complications. Genuinely transformative when it arrived.
But the surgeon was still there running everything. The robot did what it was told, nothing more.
Then Johns Hopkins did something that I think is genuinely one of the most significant moments in surgical history and it barely made mainstream news.
In 2022, their team built STAR, the Smart Tissue Autonomous Robot, and it performed soft tissue surgery on a pig without any human guidance whatsoever. Then in July 2025 they published results on their next system, and this is the part that got me: it watched video recordings of expert surgeons performing gallbladder removals. Studied them. Learned from them. Then it performed the same procedure independently on human-like surgical models: 17 steps, eight separate times, with 100% accuracy.
When tissue got obscured it adapted. When its starting position changed it adjusted. It didn't hesitate.
Now here's something that gets lost whenever this story gets covered.
People focus on the robotic arm. That's not really the story. The story is about what the robot can see. These systems use AI-powered imaging that reads the surgical field in real time, processing information continuously and informing every movement as the procedure unfolds. The robot isn't executing a fixed plan. It's responding to what's in front of it as things change.
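To make that distinction concrete, here is a toy sketch of a perception-driven control loop. This is purely illustrative; every name in it (`Observation`, `perceive`, `plan_next_move`) is invented for this example, and it has nothing to do with the actual architecture of STAR or any real surgical system. The point it demonstrates is just the one above: each action is chosen from the current observation, not read off a pre-scripted sequence.

```python
# Illustrative only: a toy perception-action loop. All names here are
# invented for this sketch, not taken from any real surgical system.

from dataclasses import dataclass

@dataclass
class Observation:
    tissue_visible: bool
    tool_offset_mm: float  # how far the tool sits from its intended target

def perceive(frame_id: int) -> Observation:
    # Stand-in for the imaging model: in a real system this would run
    # a vision model over the live video or radiographic feed.
    return Observation(tissue_visible=(frame_id % 3 != 0),
                       tool_offset_mm=5.0 / (frame_id + 1))

def plan_next_move(obs: Observation) -> str:
    # The key property: the next action depends on the *current*
    # observation, not on a fixed script of moves.
    if not obs.tissue_visible:
        return "reposition_camera"
    if obs.tool_offset_mm > 1.0:
        return "correct_toward_target"
    return "proceed_with_step"

actions = [plan_next_move(perceive(frame)) for frame in range(6)]
print(actions)
```

Run it and the same loop produces different actions as the observations change, which is the whole idea: obscured tissue triggers a repositioning step, a drifting tool triggers a correction, and only then does the procedure advance.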
But here's what I keep thinking about.
The Johns Hopkins system learned from videos. That's remarkable, genuinely. But videos show you what surgery generally looks like, not what this specific patient looks like on this specific day. Every human body is different. Positions shift. Anatomy varies in ways that even experienced surgeons find surprising sometimes.
So what if the robot were also receiving live radiographic imaging throughout the entire procedure? Real-time, AI-processed X-rays, fluoroscopy, intraoperative CT, feeding the system continuously so it knows exactly where it is inside that specific body at every single moment. Not working from memory of what a gallbladder removal usually looks like. Actually seeing your gallbladder, your blood vessels, your specific anatomy, live, and making decisions based on that.
That's the version of autonomous surgery that I think makes the leap from impressive to genuinely reliable. Because the gap between a learned procedure and a live human body is exactly where things go wrong. Live imaging closes that gap.
And that's also where my brain starts asking the uncomfortable questions.
Because when something goes wrong, and I say when, not if, because no system is perfect, who is responsible? The engineer who wrote the algorithm? The hospital that approved its deployment? The company that manufactured it? The surgeon who was technically present but whose hands never touched anything?
We don't have a clean answer to that right now. Medical liability law was built on the assumption that a human being is always the final decision maker in that room. That assumption is being quietly dismantled and I'm not sure enough people in medicine are paying attention to it yet.
There's also something else worth saying. These systems learn from data. The quality and diversity of that data determines how well the robot performs across different patients, different body types, different anatomical variations. If the training data is narrow the robot's judgment will be narrow too. That's not a hypothetical, it's a pattern we've already seen play out in medical AI.
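You can see the narrow-data failure mode in miniature with a deliberately silly example. Nothing here is medical data or a real model; it just shows how a rule fitted to a narrow sample treats perfectly ordinary variation as anomalous the moment it leaves the range it was trained on.

```python
# Toy illustration (invented numbers, not medical data): a "model" that
# learns its notion of normal from a narrow training sample generalizes
# badly to the wider variation it will actually encounter.

narrow_training = [8.0, 8.5, 9.0, 9.2, 9.5]      # measurements from one narrow population
wider_reality   = [6.0, 7.0, 8.0, 9.0, 10.5, 12.0]  # the variation actually out there

# "Training": treat anything within the observed range as normal.
lo, hi = min(narrow_training), max(narrow_training)

flagged = [x for x in wider_reality if not (lo <= x <= hi)]
print(flagged)  # ordinary anatomical variation gets flagged as abnormal
```

Four of the six real-world values fall outside what the narrow sample taught the rule to expect. Swap "range check" for a learned surgical policy and "measurements" for anatomy, and that is the pattern the medical AI literature keeps documenting.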
None of this means the technology isn't worth pursuing. The idea that a system like this, especially one enhanced with live radiographic guidance, could one day operate in a rural clinic in Northern Nigeria or a field hospital in a conflict zone, performing procedures that currently require specialists nobody there has access to, that possibility genuinely matters.
But we're moving fast. Faster than the conversations around accountability, consent, and equity can keep up.
And that gap between what the technology can do and what we've actually figured out about how to use it responsibly, that's the thing I keep coming back to.