As 2025 draws to a close, I remain encouraged by the quiet, deliberate progress we have made in applying artificial intelligence to healthcare. This has not been a year of breathtaking breakthroughs or sensational headlines. Instead, it has been a year of foundation-building—one that prepared healthcare for the transformative years still to come in this decade.
Most of AI’s progress has occurred not at the bedside but within the infrastructure of care—largely invisible to patients yet profoundly reshaping how care is delivered. Ambient listening technologies now capture and summarize clinical encounters, reducing the documentation burden that has weighed heavily on physicians for years. Digital assistants and chatbots respond to routine patient questions, manage scheduling, and bridge communication gaps that once led to frustration and inefficiency.
Tools such as OpenEvidence go further, analyzing medical literature in real time to help physicians identify potential diagnoses, review evidence-based treatments, and produce personalized explanatory documents for patients. Rather than dictating choices, these tools serve as intelligent collaborators—bringing vast information to the clinician’s fingertips while leaving judgment, compassion, and accountability squarely in human hands.
AI’s visible impact on patient outcomes remains modest, but its invisible influence is profound. Administrative friction is declining. Decision support is strengthening. Most importantly, clinicians are reclaiming the time and focus that bureaucracy once stole. The next chapter of progress will build on this groundwork, ensuring that these technologies move from improving workflow to transforming clinical care itself.
These advances mark the beginning, not the culmination, of AI’s impact on medicine. This is not a moment for celebration, but for resolve—a moment to ensure that our progress continues to reflect the moral core of medicine.
Our Trust Covenant
Medicine has always depended on trust—an ancient form of human connection that predates both science and technology. At its heart lies the trust covenant between caregivers and those who seek their help. It defines the moral architecture of medicine: not a contract written on paper, but a bond inscribed in the conscience of every clinician and the expectation of every patient.
Patients entrust their lives, privacy, and hopes to clinicians, believing each decision is guided by competence, compassion, and integrity. This trust is freely given, not negotiated. It arises instinctively from human vulnerability and the belief that caregivers—and the institutions that support them—will always do what is right. This faith is remarkably stable, enduring across generations unless broken by neglect or deception.
That covenant carries obligations for both sides. Caregivers pledge to act selflessly, to relieve suffering even at personal cost. Patients, in turn, accept risk in ways that make progress possible. Each year, thousands of people enroll in clinical trials, knowing the therapy under study may never benefit them personally. They do so both to seek hope for themselves and to contribute knowledge that may help others. Their willingness to confront uncertainty is an act of quiet heroism—a living expression of the trust covenant. Through their courage, medicine learns, evolves, and fulfills its enduring promise.
History reminds us of both the fragility and the power of this trust. When anesthesia was first introduced in the nineteenth century, physicians and patients faced the unknown together. Surgeons risked their reputations; patients risked their lives. Families gathered outside operating theaters, torn between hope and dread, uncertain whether relief or tragedy would emerge. Out of that shared faith grew one of humanity’s most significant advances—the transformation of surgery from an agonizing ordeal into a humane, compassionate practice.
So it must be with AI. The patient still steps into the clinical encounter, assuming that the clinician, institution, and now algorithm are aligned in service of their well-being. That expectation is both a privilege and a responsibility. The trust covenant remains the foundation of care, constant even as medicine evolves.
Building Guardrails for Tomorrow
Technology cannot sustain the covenant on its own; it requires human stewardship. In Future Healthcare 2050, I argued that the next leap in quality and safety will come not from more powerful AI tools, but from the systems that oversee them. We must treat AI as we would any potent therapy—continuously tested, monitored, and improved.
Clinicians must remain at the center of this oversight. They interpret AI’s findings, recognize its limits, and take moral responsibility for its use. Continuous testing must become as routine as monitoring infection rates or medication interactions. Data-science teams and clinicians must work together, examining performance across populations, identifying bias, and refining outputs to ensure equity and reliability.
To achieve this, healthcare requires new forms of collaboration. One proposal, my Healthcare AI Review and Transparency (HART) model, envisions a public-private partnership in which government agencies, academic centers, technology firms, and patient representatives share accountability. Like the cooperative institutions created in the early twentieth century to regulate food and drug safety, such a framework could provide the guardrails innovation needs to flourish safely.
The Birth of the FDA
The HART model offers one vision for that stewardship. History shows that such structures can transform public anxiety into trust. At the dawn of the twentieth century, industrial chemistry outpaced public oversight. Dangerous elixirs and mislabeled tonics filled the market, promising miracle cures but delivering harm. Outrage following Upton Sinclair’s The Jungle and a string of harms from adulterated medicines led to the Pure Food and Drug Act of 1906, the foundation of what became the Food and Drug Administration. The FDA did not stifle innovation; it made innovation credible. By insisting on transparency, labeling, and testing, it transformed public suspicion into confidence and paved the way for the modern pharmaceutical industry. Healthcare now needs equally transparent oversight to earn for AI the confidence the FDA once earned for medicine.
AI stands today where pharmacology stood then—full of promise yet vulnerable to misuse. Establishing trusted oversight now will determine whether AI becomes healthcare’s greatest instrument or its next cautionary tale.
The Trust Compact
As intelligent systems become embedded in clinical practice, the ancient trust covenant is evolving into a trust compact—a shared understanding among clinicians, patients, and technology. This compact does not replace the covenant; it expands it.
Patients are becoming active participants rather than passive recipients of care. AI tools now explain imaging findings, interpret laboratory trends, and help individuals manage chronic conditions. A patient newly diagnosed with diabetes, for example, can use an AI-enabled health coach to understand how diet, medication, and exercise interact, while their clinician oversees treatment adjustments. Participation becomes easier, conversations more informed, and outcomes more lasting.
Clinicians, in turn, gain patients as partners who are engaged and better prepared. When patients arrive with an understanding of their conditions, the conversation shifts from explanation to collaboration. AI becomes the translator that connects data to human experience.
The Printing Press and Shared Knowledge
In the fifteenth century, the printing press shattered the monopoly of knowledge once held by scribes and scholars. For the first time, ideas could circulate freely, and literacy became a gateway to empowerment. The result was not chaos but enlightenment—a society capable of questioning, learning, and contributing. Ideas that once belonged to the few became shared by the many, altering not just how people learned but how they participated in shaping society.
Healthcare now stands at a similar threshold. AI, like the printing press, can democratize understanding. It can make medical information comprehensible, contextual, and personal, enabling patients to participate as informed partners with their clinicians. The clinician remains essential—interpreting, guiding, and caring—but knowledge itself becomes shared property. The trust compact serves as the moral foundation for this new literacy of health.
The Human-Centered Future
Looking ahead, I am confident that AI will deepen, not diminish, medicine’s humanity. Used wisely, it can reduce administrative burdens, improve diagnostic accuracy, and extend access to care for those who need it most. It can help allocate resources more rationally, highlight disparities, and measure quality in ways previously impossible. Yet these gains will mean little if we neglect the principles that give healthcare its soul.
The trust covenant—our promise to act with empathy, integrity, and respect—remains unbroken. The trust compact—our invitation for patients to share that promise—defines the path forward.
Next year, I will explore this relationship further in my forthcoming book, I, Patient, which examines how individuals can navigate the healthcare system as informed and empowered partners. For now, we close 2025 with optimism: AI is beginning to restore time, reduce friction, and create space for what matters most—the relationship between those who seek care and those who provide it.
As medicine moves from a trust covenant to a trust compact, we carry forward the same values that have guided us for centuries—empathy, integrity, and respect for every life we touch. With AI as our ally, patients can join caregivers as true partners in care: sharing knowledge, making informed decisions, and accepting shared responsibility. We will shape a future where compassion and intelligence work side by side, and health becomes a journey we navigate together.








