In a recent blog post, Gary Marcus (whom I consider an excellent source for thoughtful, technically informed assessment of AI progress) sounded an alarm. In four recently published assessments in different medical journals, AI chatbots were tested on the quality of their medical advice. They failed miserably, and that is a warning sign for anyone who turns to AI for medical information.

The four studies that Marcus cites were all peer reviewed: one appeared in the American Medical Association's JAMA Network Open, one in a journal of the British Medical Association, and two in Nature Medicine.

You can read Marcus’ blog post here.

A fundamental problem, Marcus writes, is that today's AI chatbots don't know how to conduct a diagnostic interview. They often don't know what questions to ask or how to interpret the answers, so they miss things a doctor would pick up. At the same time, they sound very authoritative.

Marcus also links to the story of a very specific case, described in the New York Times of April 13, 2026. A man named Ben Riley discovered that his father had decided to trust AI over his oncologist. Ben knew the ways AI can be wrong (he published a newsletter on the topic), but he could not convince his father to listen to his doctor's advice. As a result, Ben's father died from a form of leukemia that would have been treatable if treatment had started when the doctor advised. The NYT article about that sad story is here.

The lesson here is that today's AI is too unreliable to serve as the basis for important decisions: medical decisions, certainly, but also financial decisions and decisions about relationships. Someday that may change, but to me that day seems years in the future, if it ever arrives.