Red-Teaming Your Health Plan: Demetri Giannikopoulos on Responsible AI, the Cures Act, and an Adversarial Approach to AI Outputs

Demetri's AI use pattern inverts the common assumption. For MS management he trusts his Johns Hopkins care team and does not lean on AI. Where LLMs have been "absolutely priceless" is the administrative layer — specifically, decoding US health insurance.

One Tool, One Job: How a Stage 4 Cancer Patient Is Building Infrastructure the Health System Won't

Russ's story is the clearest case study in the series of a patient building durable, purpose-specific AI infrastructure to do what the health system will not: maintain a coherent, longitudinal, cross-specialty view of his own care. He tracks chemo symptoms daily in Claude and mirrors them into a Google Doc because chatbots silently lose long-term memory (he lost roughly half of three months of chemo data to memory failures across two tools); he runs a weekly "cancer smasher" Claude project that scans new literature and trials against his own tumor profile; and he uses NotebookLM as a closed-corpus analyst over his doctors' notes and blood results.

The Research Patient: Dale on Agency, Custom GPTs, and the Concealment That Nearly Killed Him

After Dale Atkinson decided to fight his diagnosis, his approach was research-first, AI-second. He used ChatGPT as a literature triage layer — feeding in his diagnosis and medical letters, asking "if you were in my shoes, where would you start?" and getting a reading list. He spent six weeks (two to three hours a day) building a custom GPT trained only on the self-curated corpus of papers he found relevant — explicitly instructed not to search the open web, required to cite at least five sources, required to explain its sourcing.

The Two-AI War in the Consulting Room: What Happens When Parents Bring ChatGPT to the Hospital and Contradict Medical AI Decision Support

Diana Ferro works on AI infrastructure, rare diseases, and, importantly, the International Alliance of Pediatric Centers on AI at a major pediatric hospital in Italy. Unlike the patient voices earlier in the Agentic Patient series, she sits on the other side of the consulting-room door. Her concerns are sharper, more specific, and more uncomfortable. She is not against patient AI use. She is watching what happens when desperate parents, teenagers in crisis, and sycophantic chatbots meet in a pediatric setting, and she is trying to build the guardrails in real time.
