AI is changing healthcare. I help you use it safely.

Free, weekly. Written by a physician and AI expert. Plainspoken, evidence-based, skeptical of hype. Each issue distills what's real, what's risky, and what's actually useful for patients, families, and clinicians.

Launching May 2026.


Who writes this


Maia Hightower, MD, MPH, MBA

Dr. Maia Hightower is a physician executive who has spent more than fifteen years leading AI strategy at major health systems. Ask Dr. Maia is her direct line to patients, families, and clinicians trying to make sense of it all.

  • Former Chief Digital and Technology Officer, UChicago Medicine.
  • Former Chief Medical Information Officer, University of Utah Health.
  • Founded Equality AI, a pioneer in AI quality and safety for healthcare systems.
  • Longtime advocate for equitable AI: AI for good, AI that benefits everyone, with no one left behind.
  • Co-author of a peer-reviewed framework for evaluating AI risk in healthcare.1

Why this exists

I've lived the paradox.

I was on one of the first teams to deploy autonomous AI for diabetic retinopathy screening in a major health system. I watched it find disease in patients who would have gone undiagnosed, in clinics that couldn't otherwise have offered the test. AI in healthcare can do extraordinary things to expand access.

I've also seen the harm. When a model is trained on patients who don't look like the patient in front of it, the model gets things wrong. The risk isn't abstract. It shows up in the exam room, in the prescription, in the diagnosis that came too late.

Most AI tools were not tested on patients who look like you. We are each unique; there is no digital twin. Under the wrong circumstances, anyone can be at risk of AI harm because they are underrepresented in the training data: by zip code, by age, by language, by income, by health condition, by who their parents are.

The first line of defense for AI safety in healthcare is supposed to be federal regulation. The second is state government. The third is the institution where care is delivered. When those lines hold, patients are protected. When they don't, the patient is the last line of defense.

Ask Dr. Maia exists to give every patient, not just the privileged few, the clinical knowledge to navigate the AI health era safely. The mission is simple: democratize access to safe AI in healthcare. The newsletter is free, always. The subscription tier supports access for those who need it most.


What you'll get

The Newsletter

A weekly dispatch on AI in health. Short, sourced, written to be read in under ten minutes.

Tool Reviews

Independent, physician-led evaluations of consumer AI health tools. Methodology published, scores tracked over time, harm flags posted when something changes.

Regulatory Watch

Plain-language briefs on FDA actions, federal guidance, state legislation, and policy shifts that affect what reaches your phone.

Action Guides

Step-by-step playbooks for using AI safely in your own care, and knowing when to call a real doctor.


A note on your privacy

Data privacy is one of the grounding principles of this brand. We will never sell your email. We will never sell your reading behavior. Sponsor disclosures are explicit and never involve sharing subscriber data. We are deliberately forgoing certain commercial paths to protect this.


  1. Hussein R, Hightower M, Beaulieu-Jones B, et al. Health AI Risk Assessment (HAIRA) framework. npj Digital Medicine. 2026;9:236.