Is AI Actually Killing People Who Ask It for Medical Advice, or Do People Just Not Know How to Use AI?

amy 15/05/2026

The Dual Perspective: From the Stethoscope to the Terminal

As someone who spends half their time in clinical practice evaluating complex patient pathologies and the other half writing code to automate data pipelines, I find myself at a unique, albeit uncomfortable, crossroads. I remember a specific night on call when a patient arrived in the ER with severe abdominal pain.

Before I could even finish my initial palpation, he handed me his phone. “AI says it’s likely mesenteric ischemia,” he whispered, his eyes wide with a terror that didn’t quite match the clinical presentation. He wasn’t just sick; he was mentally paralyzed by a probabilistic model that had no idea who he was.

Is AI killing people? Or is it our own ignorance and the misuse of these tools that poses the true threat? In the tech world, we call it “garbage in, garbage out.” In medicine, we call it a “misdiagnosis.” When these two worlds collide, the stakes are not just a buggy deployment; they are mortality and morbidity.

The Digital Triage: From Google to Generative AI

For two decades, “Dr. Google” was the primary adversary of the clinical consultation. We all know the ritual: a patient feels a twinge, searches for “headache,” and by the third page of results, they are convinced they have a rare brain tumor. However, Google is a library; AI chatbots such as ChatGPT, Claude, and Gemini are conversationalists.

When you use a search engine, you are presented with a static list of sources. The human brain must still do the heavy lifting of sorting through the links, checking the credibility of the site (e.g., Mayo Clinic vs. a random forum), and synthesizing the information. The danger is present, but the friction of manual searching provides a natural speed bump for the psyche.

Generative AI removes this friction. It provides a single, authoritative-sounding answer. It doesn’t give you sources to check; it gives you a narrative to believe. This transition from searching to interacting fundamentally changes how patients perceive medical risk.

The Cognitive Trap: Why AI is More Dangerous Than a Search Engine

Technologically, Large Language Models (LLMs) like ChatGPT are stochastic parrots. They predict the next most likely token in a sequence based on vast amounts of training data; they do not possess “understanding” (a minimal sketch of this next-token mechanic follows the list below). This creates a deceptive layer of competence that is far more dangerous than a simple Google search, for three specific reasons:

  1. The Authority Bias: LLMs are trained to be polite, helpful, and confident. In a medical context, confidence without clinical grounding is a recipe for disaster. A patient is more likely to trust a coherent paragraph than a disjointed list of search results.
  2. The Hallucination Factor: AI can “hallucinate” medical studies, dosages, or contraindications that do not exist. While a Google link leads to a real (if potentially irrelevant) page, an AI can create a plausible-sounding fiction that a layperson has no way to verify.
  3. Contextual Blindness: As a doctor, I look at a patient’s skin turgor, the way they shift in their seat, and the subtle scent of ketones on their breath. AI is blind to the physical reality. It only knows the text the patient chooses to provide, which is often filtered by the patient’s own biases and ignorance.
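
To make the “next most likely token” point concrete, here is a minimal sketch using the Hugging Face transformers library and the small, open gpt2 model. The model and the prompt are purely illustrative assumptions on my part (gpt2 is not the model behind any commercial chatbot); the point is only to show that “answering” is, mechanically, ranking candidate next tokens by probability.

```python
# Minimal illustration of next-token prediction.
# Assumes `transformers` and `torch` are installed; gpt2 is used only as a
# small open stand-in -- it is NOT the model powering any commercial chatbot.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Severe abdominal pain after eating is most likely caused by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p = {prob.item():.3f}")
```

Whichever token scores highest gets emitted, the sequence grows by one word-piece, and the loop repeats. There is no anatomy, no examination, and no patient anywhere in that loop.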

The Psychological Echo Chamber: Anthropomorphism and the Patient’s Psyche

Perhaps the most insidious danger lies in the psychological effect of interacting with a chatbot. Humans are evolutionarily hardwired to anthropomorphize things that speak to us. When a chatbot says, “I understand your concern,” it triggers a pseudo-empathetic bond.

The Fragility of the Patient’s Psyche

When someone is ill, they are in a state of vulnerability and heightened suggestibility. Interacting with an AI in this state creates an echo chamber. If a patient is anxious about a specific condition, they may lead the AI with biased questions (“Don’t these symptoms sound like Lupus?”). The AI, programmed to be “helpful,” will often follow that lead, confirming the patient’s worst fears or, worse, giving them a false sense of security.

This “chatbot-patient” relationship can lead to:

  • Cyberchondria: An escalation of health anxiety fueled by the immediate, conversational nature of AI responses.
  • Medical Gaslighting (Self-Inflicted): A patient might ignore real, physical “red flag” symptoms because the AI told them it was “likely just stress.”
  • Dependency: Patients may begin to consult the AI for every minor sensation, eroding their own interoceptive awareness and their trust in professional medical institutions.

The Persistence of the Practitioner: Why Doctors Remain Irreplaceable

In the software world, we have “automated testing,” but we still require a Senior Engineer to perform a code review before anything hits production. In healthcare, the doctor is that final, essential reviewer.

Why doctors are still relevant:

  • Moral Agency: An AI cannot be held legally or ethically accountable for a death. A doctor carries the weight of responsibility, a burden that ensures a level of care no algorithm can replicate.
  • Tacit Knowledge: Much of medicine is “know-how” that isn’t written in textbooks—the “gut feeling” when a patient looks “off” despite normal vitals. AI only knows explicit knowledge (data).
  • Complex Synthesis: A doctor doesn’t just look at a symptom; they look at a life. They consider the patient’s socio-economic status, their family history, and their psychological resilience.

The human conclusion is clear: AI is a tool for the expert, but a trap for the ignorant. The “True Way” forward is not to replace the doctor, but to use the doctor as the interface between the AI’s data and the patient’s life.

The Risks: 7 Deadly Sins of AI Self-Diagnosis

To understand the danger, we must categorize the ways in which ignorance turns a tool into a weapon:

  1. Leading the Witness: Phrasing questions in a way that forces the AI to confirm a specific diagnosis.
  2. Dosage Deviation: Asking the AI for medication adjustments, leading to toxicity or sub-therapeutic dosing.
  3. The “Good News” Filter: Ignoring the AI’s disclaimer to “see a doctor” because the rest of the text was reassuring.
  4. Symptom Stripping: Providing only the symptoms that fit a self-chosen narrative while omitting the “boring” but critical signs.
  5. Data Over-Reliance: Treating a 70% probability from a model as a medical certainty (a short base-rate calculation after this list shows why that is misleading).
  6. Privacy Naivety: Inputting highly sensitive health data into models that may use that data for future training.
  7. Delayed Intervention: Using the AI as a “wait and see” mechanism for conditions where time-to-treatment is the primary factor in survival (e.g., stroke or MI).
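
On sin #5, a quick worked example with hypothetical numbers (not real clinical data): even a checker that is right 90% of the time when a disease is present and 90% of the time when it is absent gives very different odds once you account for how rare the disease actually is.

```python
# Hypothetical numbers only -- a base-rate illustration, not real clinical data.
prevalence = 0.001      # 1 in 1,000 people actually have the condition
sensitivity = 0.90      # P(checker says "yes" | disease present)
specificity = 0.90      # P(checker says "no"  | disease absent)

# Bayes' theorem: P(disease | positive) =
#   P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | checker says yes) = {p_disease_given_positive:.1%}")
# -> roughly 0.9%, nowhere near the "90% confident" feeling the answer conveys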

What Can You Do? Actionable Guardrails for the Digital Patient

We cannot stop the tide of technology, but we can arm ourselves with digital literacy. If you or your loved ones use AI for health inquiries, follow these protocols:

  • Treat it as a “Pre-Search” Tool: Use AI to help you articulate your symptoms before you see a doctor, not to replace the visit.
  • Verify with Primary Sources: If ChatGPT suggests a condition, immediately check it against reputable medical databases like PubMed or the CDC (see the small script sketch after this list).
  • Explicitly Ask for Counter-Arguments: Force the AI out of its “helpfulness” by asking, “What are the reasons why this might NOT be the condition you suggested?”
  • Check the Cutoff: Always remember that an AI’s knowledge has a training cutoff date. It may not be aware of recent drug recalls or new outbreak data.
  • Maintain the Physical Hierarchy: If your body tells you one thing and the screen tells you another, trust your body and seek professional help.
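
As one concrete way to do the “verify with primary sources” step, the sketch below queries NCBI’s public E-utilities esearch endpoint to count PubMed records for a term the chatbot suggested. The endpoint and parameters are the standard esearch interface; the search term and the way the result is used are purely illustrative, and a hit count only tells you the condition exists in the literature, it is not a diagnosis.

```python
# Sketch: sanity-check a chatbot-suggested condition against PubMed.
# Uses NCBI's public E-utilities `esearch` endpoint; the term is illustrative.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_hit_count(term: str) -> int:
    """Return the number of PubMed records matching `term`."""
    response = requests.get(
        ESEARCH_URL,
        params={"db": "pubmed", "term": term, "retmode": "json", "retmax": 0},
        timeout=10,
    )
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

if __name__ == "__main__":
    term = "mesenteric ischemia[Title/Abstract]"
    print(f"PubMed records for {term!r}: {pubmed_hit_count(term)}")
```

If a condition the chatbot names barely appears in the indexed literature, that is a strong hint it was hallucinated; if it does appear, the abstracts themselves are still a better second opinion than the chatbot’s paraphrase.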

FAQs

Q: Is it ever okay to use ChatGPT for medical questions?

A: Yes, for educational purposes (e.g., “Explain how insulin works”) or for organizing your thoughts. It should never be used for active diagnosis or treatment planning.

Q: Why does the AI sound so much more empathetic than my real doctor?

A: Because it is a language model designed to mirror human politeness. Your doctor is a human managing high-stress, high-volume workloads where clinical efficiency often takes priority over linguistic flourishes.

Q: Can AI help doctors?

A: Absolutely. For a trained professional, AI is an incredible tool for differential diagnosis brainstorming and summarizing vast amounts of research. The danger is only when the “ignorant” (the untrained) use it without a filter.


Final Thought: The Human Conclusion

The problem isn’t that AI is “killing people”; it’s that we are attempting to use a mathematical calculator for the human soul and body. We are treating a prediction engine as a source of truth.

The approach to this problem isn’t to ban the AI, but to re-humanize the medical encounter. We must use technology to handle the paperwork so that doctors can return to being healers. Technology can give us the “what,” but only a human—with all our flaws, ethics, and physical presence—can provide the “why” and the “how.” Ignorance is the disease; education and human-centric care are the only cures.