In aviation, the path to safety was paved not by better engines but by better communication. In the aftermath of several deadly airline accidents in the 1970s, the problem was not mechanical; it was human. Pilots and crew failed to speak up, share critical observations, or override the autopilot when judgment said otherwise. These breakdowns led to a transformative idea: Crew Resource Management (CRM), a system first proposed by NASA in 1979 and later adopted by airlines to train crews in effective communication, situational awareness, and decision-making in high-pressure situations.¹
The lesson still applies: technology must serve the expert, not replace them.
Healthcare now faces a similar inflection point with artificial intelligence. The tools are powerful, yes, but incomplete. They are trained on limited datasets, exposed to systemic biases, and prone to delivering incorrect or misleading output. AI can streamline administrative tasks, support diagnosis, and even predict patient deterioration. But it cannot, on its own, make medicine better.
That is our job.
Too often, AI is presented as a solution to healthcare’s biggest challenges: clinician burnout, rising costs, and care inequities. But the reality is more complex. AI is not a fix—it is a force multiplier. It makes good systems more efficient, and flawed systems faster at failing. In both cases, it requires human expertise to guide it.
To deploy AI effectively, we must:
- Train clinicians to recognize where AI enhances decisions—and where it distorts them
- Redesign workflows to include human checkpoints, not bypass them
- Hold vendors accountable for transparency, testing, and explainability
- Incorporate feedback loops that flag drift, bias, or unintended consequences
This is not caution for its own sake. It is a responsibility. In healthcare, the cost of error is not lost profit—it is lost trust, and sometimes, lost lives.
Looking Ahead
Healthcare AI can help us reach more patients more efficiently and with greater precision. But it can only succeed if we lead its integration—clinically, ethically, and operationally. That leadership starts with recognizing the role AI is meant to play: not to replace the clinician, but to amplify clinical judgment, support overburdened teams, and reinforce the human connections at the heart of care.
This fall, I will continue to share strategies for integrating AI in ways that preserve the patient-physician relationship, maintain public trust, and improve outcomes across the system.
In the meantime, I invite you to explore two resources:
- The Ask Dr. Barry chatbot, which includes all newsletter articles, blog posts, and my books: barrychaiken.com/ask-dr-barry
- My healthcare AI book, Future Healthcare 2050, available in a signed deluxe edition exclusively at barrychaiken.com/fh2050 and in standard editions at Barnes & Noble and Amazon
Let us move forward not with fear or blind enthusiasm but with purpose, precision, and a clear-eyed understanding that AI cannot replace us, though it can make us better.