Artificial intelligence is going to change medicine. That much is no longer controversial.
This is the third of six nuggets in my series, After AI.
AI will read scans faster than radiologists, spot anomalies earlier, reduce missed diagnoses, standardize protocols, lower certain costs, and expand access to expertise that was once scarce or unevenly distributed. All of that is good news, and it will save lives.
But somewhere along the way, an important distinction has started to blur. We are beginning to talk as if medicine itself scales.
It does not.
Diagnosis scales. Decision trees scale. Risk models scale. Clinical guidance scales. These are precisely the kinds of problems machines are built to solve, and AI will solve them well.
Care Does Not Scale
Care still looks like a clinician sitting longer than the schedule allows. A therapist repeating the same movement again and again. A nurse noticing what the vitals do not show. A parent being patiently and repeatedly taught how to help when no one else is in the room. No algorithm does that work. No system optimizes it. And no dashboard captures why it matters.
AI will get extraordinarily good at being right. That is not the same thing as being helpful.
Healing has always required things that resist automation.
- Trust
- Patience
- Judgment
- Adaptation
- Relationship
Two patients with the same diagnosis often need entirely different care. Progress is rarely linear. Compliance is emotional before it is rational. Outcomes are shaped as much by how safe someone feels as by how correct the plan is.
Let’s Ask an Expert
My wife has practiced physical therapy for 40 years. Here are her thoughts:
A physical therapist doesn’t just follow a routine. They assess, treat, and reassess constantly. With trained hands, they can feel changes in mobility, muscle tone, tissue tension, and movement quality over time. Treatment is individualized because no two bodies respond the same way.
Medicine Has Always Lived at the Intersection of Science and Service
AI dramatically advances the science. It does almost nothing for the service, unless humans decide that part still matters.
The subtle risk as AI becomes widespread in medicine isn’t that machines will replace clinicians, but that institutions will eliminate human work because it’s harder to measure. Time spent with patients can’t be scaled. Reassurance isn’t easily captured in performance metrics. Care remains costly because it is inherently personal.
The temptation will be to celebrate diagnostic breakthroughs while quietly downplaying everything that comes afterward. That would be a serious mistake.
Here’s the counterintuitive truth: as AI improves diagnosis, care becomes more valuable, not less. The more confident we are about what’s happening, the more essential it is to help someone live with it.
This is especially true in pediatrics, rehabilitation, chronic conditions, and recovery that takes place over months or years. These are fields where progress is measured in small steps, where encouragement counts, and where coming back tomorrow is the work. These are not edge cases; they are the heart of healthcare.
After AI, Human Work Doesn’t Disappear — It Becomes Sharper
Machines will help us know faster. Humans will help us heal more slowly. The slow part is not a flaw. It is the feature.
Despite all our technology, medicine still takes place in rooms, between people, in moments that can’t be rushed. AI will make medicine smarter. Only humans can keep it humane.
That’s not nostalgia. It’s realism.
Next week, I will discuss design: AI can generate endless options, but only human taste can decide what belongs.
