ChatGPT Health: A Dangerous Distraction in Disguise. Why This Isn’t Innovation, It’s Risky Overreach


By Hamza Musa | Physician, Developer, Open-Source Advocate | Founder of BookLore & medevel.com

For years, I have been writing about, and warning against, the dangers of AI self-diagnosis. I have heard the stories. People are actually hurting themselves, especially in the realm of mental health, and the consequences are severe.

Treating AI as if it were a doctor or a healer is not just a mistake; it is reckless. It is the equivalent of a three-year-old child trying to perform open-heart surgery on his own grandfather.

Let’s cut through the hype: OpenAI just dropped ChatGPT Health, a new “dedicated experience” for managing your health data with AI. They’re touting it as a secure, privacy-first space where you can connect Apple Health, medical records via b.well, and wellness apps like MyFitnessPal or Function.

It sounds impressive on paper.

But here’s what I see when I look under the hood: a well-marketed digital illusion that dangerously blurs the line between information and medical advice.

And as both a doctor who’s spent years treating real patients, and a software developer who’s debugged systems that fail silently, I have to ask:
Is this really helping people? Or are we handing them a false sense of control over something they simply cannot understand?

Let’s be clear: No AI is a doctor.

Not GPT-5. Not o3. Not any model trained on public forums, Wikipedia, or Reddit threads.

You can’t diagnose diabetes from a single HbA1c of 7.8. You can’t assess heart risk from a lipid panel alone. You can’t interpret a CT scan of the lungs without clinical context.

Yet ChatGPT Health invites users to ask things like:

“Give me a realistic diet plan based on my GLP-1 use.”
“What should I talk to my doctor about before my annual physical?”
“Based on my history, which insurance plan is best?”

These aren’t questions for a chatbot. They’re questions for a clinician.

And when someone walks into a doctor’s office saying, “I asked ChatGPT, and it said I’m fine,” that’s not empowerment, that’s a breakdown in care.

The Illusion of Control

The biggest lie here is the promise of “empowerment.” Yes, you can connect your Apple Health data. Yes, you can upload lab reports. But what do you do with that information?

Most people don’t know how to read a CBC, ESR, or TSH. They don’t understand what “mild elevation” means. They don’t realize that a single number tells no story; only a pattern does.
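To make that concrete, here is a toy Python illustration. The numbers are invented and this is not clinical logic; it only shows that the same latest reading sits inside very different trajectories.

```python
# Toy illustration of "a single number tells no story; only a pattern does".
# Values are invented; this is not a clinical rule, just the observation that
# the same final reading means different things inside different series.
def describe_trend(series: list[float]) -> str:
    latest = series[-1]
    if len(series) < 2:
        return f"Latest value {latest}: no history, so no story."
    direction = "rising" if latest > series[0] else "falling"
    return f"Latest value {latest} at the end of a {direction} series: {series}"


print(describe_trend([8.9, 8.3, 7.8]))  # same 7.8, downward trajectory
print(describe_trend([6.1, 6.9, 7.8]))  # same 7.8, upward trajectory
print(describe_trend([7.8]))            # a single number on its own
```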

So instead of empowering people, we’re enabling self-diagnosis by proxy, where AI becomes the gatekeeper to your own health.

That’s not patient-centered care. That’s digital paternalism, telling people they’re in charge while quietly removing their ability to truly understand.

Privacy? Don’t Bet Your Data on It.

OpenAI says: “Health conversations aren’t used to train models.”
They say: “Data is isolated.”
They say: “You can disconnect anytime.”

Fine. But let’s be real:

  • You’re connecting to third-party services (b.well, Function, MyFitnessPal) that may not be bound by HIPAA or GDPR.
  • These apps collect metadata: your sleep patterns, eating habits, activity levels. That data could be used to infer conditions even if it is never explicitly shared.
  • And once your data enters the system, how do you really know it stays there?

No audit trail. No transparency. Just promises. In healthcare, trust isn’t built on marketing. It’s built on accountability.
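For contrast, here is a minimal sketch of what an audit trail could look like. Everything in it is hypothetical, the class, the field names, the example actor; it is not how ChatGPT Health, b.well, or any of these services actually work, only an illustration of the kind of accountability that marketing promises don’t provide.

```python
# Hypothetical sketch: an append-only, hash-chained audit log recording who accessed
# which health record, when, and why. Purely illustrative; not any vendor's real design.
import hashlib
import json
import time


class AuditLog:
    def __init__(self):
        self._entries = []
        self._last_hash = "genesis"

    def record_access(self, actor: str, record_id: str, purpose: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "actor": actor,          # which service or user touched the data
            "record_id": record_id,  # which health record was read
            "purpose": purpose,      # declared reason for the access
            "prev_hash": self._last_hash,
        }
        # Chain each entry to the previous one so tampering is detectable.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain and confirm no entry was altered or removed."""
        prev = "genesis"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != entry["hash"]:
                return False
        return True


log = AuditLog()
log.record_access("wellness-app-sync", "lab_panel_2026_01", "display summary")
print(log.verify())  # True while the log is intact
```

The point is not the hashing trick; it is that accountability is something you can inspect and verify, not something you take on faith.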

Why This Feels Like a Mistake

As a developer, I appreciate the ambition behind building tools that help people manage their health. But this isn’t a tool; it’s a consumer product masquerading as medicine.

We already have a crisis of misinformation in health. People Google symptoms, find worst-case scenarios, and panic. Now we’re giving them an AI that feels authoritative, but delivers no clinical grounding.

And worse: it encourages users to skip doctors altogether.

I’ve seen patients walk away from ERs because ChatGPT told them their chest pain was “likely anxiety.” Two days later, they were in the ICU with a STEMI.

This isn’t innovation. This is dangerous negligence.

What Should Be Done Instead?

  1. No direct integration of personal health data into LLMs without clinical oversight.
  2. AI tools must be regulated like medical devices, not launched as “beta features.”
  3. Patients should never be encouraged to make decisions based on AI-generated summaries.
  4. Healthcare tech should augment clinicians—not replace them.

If you want to build a tool that helps patients prepare for appointments, great. But frame it honestly: “Here’s a list of common questions. Discuss these with your doctor.” Not: “Ask AI to explain your bloodwork.”
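As a sketch of that honest framing, here is a toy Python helper. The test names and question lists are invented; the only point is that the tool hands back prompts to discuss with a clinician and refuses to interpret anything.

```python
# Hypothetical appointment-prep helper: returns generic questions to raise with a
# doctor, never a diagnosis or an interpretation of the uploaded values.
PREP_QUESTIONS = {
    "hba1c": [
        "How has this value changed since my last visit?",
        "Does my medication or diet plan need adjusting?",
    ],
    "lipid_panel": [
        "How do these numbers fit with my overall cardiovascular risk?",
        "Are lifestyle changes or medication worth discussing?",
    ],
}


def prepare_for_appointment(uploaded_tests: list[str]) -> list[str]:
    """Return discussion prompts for the doctor; no interpretation of results."""
    prompts = []
    for test in uploaded_tests:
        prompts.extend(
            PREP_QUESTIONS.get(
                test.lower(),
                [f"Ask your doctor to walk you through your {test} result."],
            )
        )
    prompts.append("Bring the original reports; your clinician interprets them in context.")
    return prompts


print("\n".join(prepare_for_appointment(["HbA1c", "lipid_panel"])))
```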

Final Thought

Technology has a role in healthcare, but only when it serves humanity rather than replacing it.

We don’t need more AI that pretends to be smart. We need more doctors who listen. More systems that protect patients. More tools that reduce burden, not increase confusion.

Until ChatGPT Health is proven safe, clinically validated, and regulated like a medical device, it shouldn’t be trusted with anything resembling medical decision-making.

Calling it “health” is misleading.
Calling it “secure” without proof is reckless.
Calling it “empowering” when it removes agency? That’s a joke.

And if we keep pretending otherwise, we’ll pay the price in lives.


Hamza Mousa, MD | Software Developer | Linux Enthusiast | Founder, medevel.com
“The best code doesn’t solve problems, it reveals them.”