AI is changing healthcare. I help you use it safely.

Free, weekly. Written by a physician and AI expert. Plainspoken, evidence-based, skeptical of hype. Each issue distills what's real, what's risky, and what's actually useful for patients, families, and clinicians.

Launching May 2026.



Who writes this


Maia Hightower, MD, MPH, MBA

Healthcare is a human right. AI governance is how we protect it.

Dr. Maia Hightower is a physician executive who has spent more than fifteen years leading AI strategy inside major health systems. She has helped govern, deploy, and pressure-test AI in some of the highest-stakes settings in American medicine, from academic medical centers to the safety-net clinics where the technology actually lands. Ask Dr. Maia is her direct line to patients, families, and clinicians who are now using these tools without that infrastructure behind them.

She is co-author of HAIRA (Healthcare AI Governance Readiness Assessment), a peer-reviewed maturity model published in npj Digital Medicine in 2026.¹ Her work has been published in NEJM AI, Health Affairs, JAMA, and npj Digital Medicine. She founded Equality AI, a pioneer in AI quality and safety for healthcare systems. Before that, she served as Chief Digital and Technology Officer at UChicago Medicine and Chief Medical Information Officer at University of Utah Health and University of Iowa Healthcare. She holds an MD, an MPH, and an MBA, a triple credential that spans clinical medicine, population health, and the business of health systems.

The thread through all of it is the same. The governance gap for AI is largest where the stakes are highest, in the rural clinic, the underfunded community hospital, the bedroom of the teenager already using ChatGPT to talk through a panic attack. The patients who have the most to gain from AI in healthcare, and the most to lose when it fails, are usually the ones with the least institutional protection between them and the algorithm. This newsletter exists to help close that gap, one reader at a time.

  • Co-author, HAIRA (Healthcare AI Governance Readiness Assessment), npj Digital Medicine, 2026.
  • Published in NEJM AI, Health Affairs, JAMA, and npj Digital Medicine.
  • Founder, Equality AI.
  • Former Chief Digital and Technology Officer, UChicago Medicine.
  • Former Chief Medical Information Officer, University of Utah Health and University of Iowa Healthcare.
  • MD, MPH, MBA.

Why this exists

I've lived the paradox.

I was on one of the first teams to deploy autonomous AI for diabetic retinopathy screening in a major health system. I watched it find disease in patients who would have gone undiagnosed, in clinics that couldn't otherwise have offered the test. AI in healthcare can do extraordinary things to expand access.

I've also seen the harm. When a model is trained on patients who don't look like the patient in front of it, the model gets things wrong. The risk isn't abstract. It shows up in the exam room, in the prescription, in the diagnosis that came too late.

Most AI tools were not tested on patients who look like you. We are each unique. There is no digital twin. Anyone can be underrepresented in the training data, and therefore at risk of AI harm: by insurance status, by zip code, by age, by language, by income, by health condition, by who their parents are.

One of the fastest-growing groups at risk is uninsured and underinsured Americans. They are also the heaviest users of free consumer AI health tools, often as a substitute for care they cannot otherwise access.

Patients are the last line of defense against AI harm in healthcare. That sounds like a strong claim. It is also the truth of how the system works right now. The first line of defense is supposed to be federal regulation. The second is state government. The third is the institution where care is delivered. When those lines hold, patients are protected. When they don't, the patient is the one in the room with the algorithm.

Ask Dr. Maia exists to give every patient, not just the privileged few, the clinical knowledge to navigate the AI health era safely. The mission is simple: democratize access to safe AI in healthcare. The newsletter is free, always. The paid subscription tier supports access for those who need it most.


Read the latest issue

Issue 1 · May 2026

The mental health AI guide I wish I'd had ten years ago

The evidence for these tools is real, the harms are documented, and the people who need them most are the ones the tools serve worst. Here is what the data actually shows.

Read the issue →
Browse the full archive


What you'll get

The Newsletter

A weekly dispatch on AI in health. Short, sourced, written to be read in under ten minutes.

Tool Reviews

Independent, physician-led evaluations of consumer AI health tools. Methodology published, scores tracked over time, harm flags posted when something changes.

Regulatory Watch

Plain-language briefs on FDA actions, federal guidance, state legislation, and policy shifts that affect what reaches your phone.

Action Guides

Step-by-step playbooks for using AI safely in your own care and for knowing when to call a real doctor.


A note on your privacy

Data privacy is one of our grounding principles. We will never sell your email. We will never sell your reading behavior. Sponsor disclosures are explicit and never involve sharing subscriber data.


  1. Hussein R, Hightower M, Beaulieu-Jones B, et al. Healthcare AI Governance Readiness Assessment (HAIRA): a peer-reviewed maturity model. npj Digital Medicine. 2026;9:236.