By: Hamza Mousa
I’ve seen this happen way too often lately. Not in the hospital, and not in the GitHub issues I read through every morning. I see it in DMs, in Discord channels, and in real-life conversations with friends who tell me, completely serious: “I told the AI I felt hopeless, and it gave me a plan. It actually helped.”
And then they say the part that scares me: “It listens better than people do.”
No. It doesn’t.
Look, I’ve been around the block. I’ve been using Linux since Slackware in 1999, back when you had to compile your own kernel just to get sound working. I’m a doctor, but I also spend my nights breaking and fixing Docker containers, wrestling with Nginx configs, and building stuff. I know how the sausage is made.
I’m writing this because I’m worried. I’m writing this as someone who has debugged server crashes and tried to help people through mental breakdowns. The difference? A server crash doesn’t bleed, but humans do.
Here are four stories. Real situations. These aren’t hypothetical “user personas.” These are actual messes I have seen with my own eyes, caused by trusting a text predictor with a human soul.
1. The Student Who “Debugged” His Depression
Ahmet (not his real name) is a CS student, a bright kid. He was dealing with that heavy, gray fog of burnout and anxiety. Instead of going to the university counselor, he opened an AI chat window and wrote:
“I think I’m depressed. Fix it.”
The model spat out a list. Journaling, meditation, the standard productivity-hacker stuff. Ahmet treated it like a documentation manual. He followed the steps. But here’s the thing: the AI wasn’t diagnosing him. It was just autocomplete on steroids. It didn’t know if he was bipolar, if he had a thyroid issue, or if he was just sleep-deprived.
Then it got sketchy. He asked about meds. The AI, hallucinating confidence, gave him a “stack” of supplements to mix to mimic an antidepressant.
His roommate found him passed out. He’d messed up his serotonin levels because he trusted a language model like it was a pharmacist.
The Doctor in me says: An AI doesn’t know the difference between coping and curing. It can’t look you in the eye and see that you haven’t slept in three days.
Recommendation: Use the tech to find words for how you feel, sure. But for the fix? Talk to a human.
2. The Heartbreak & The Echo Chamber
Selin was going through a nasty breakup. She started treating the AI as her confidant. “Tell me why he left,” she’d type.
The bot responded with flowery, validating poetry. It was an echo chamber. It told her exactly what she wanted to hear because that’s how it’s programmed, to be “helpful” and pleasing. She stopped talking to her friends. She told me, “The AI doesn’t judge me. It gets me.”
It didn’t “get” her. It was processing tokens. When she eventually crashed, hard, she realized she hadn’t processed any grief. She’d just been roleplaying emotional stability with a calculator.
My take: You don’t need a machine to agree with you. You need a friend to tell you the truth, even when it hurts. That’s healing.
3. The “Dr. Google” Upgrade (The Cancer Scare)
This one made me angry. A woman, let’s call her Leyla, found a lump. She fed her symptoms into an AI. The response? “Based on these symptoms, there is a high probability of invasive carcinoma.”
She didn’t sleep for a week. She wrote a will. She didn’t call a doctor immediately because she was paralyzed by the “diagnosis.” When she finally came in, it was a benign cyst. Harmless.
But the trust was broken. She now double-checks my prescriptions against the AI.
The reality: In medicine, context is everything. An AI sees text; I see a patient. I see the fear, the history, the physical signs. AI creates anxiety; doctors try to manage it.
4. The “Therapist Bot” I Almost Built
I have to be honest, I’m guilty too. Years ago, back when my friend Hussam and I were working on Medbold (our attempt at a medical search engine that didn’t quite make it), I tinkered with the idea of a mental health chatbot. I threw together a prototype, maybe connected it to some early APIs.
I thought I was a genius. “It’s scalable mental health!”
Then I stress-tested it. I typed: “I want to end it.”
The bot replied: “I’m sorry to hear that. Maybe try taking a walk?”
My blood ran cold. I realized I had built a loaded gun. I shut it down immediately.
The Dev Lesson: Just because we can build it with a few lines of Python and an API key doesn’t mean we should.
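To make that concrete, here is roughly the shape of that prototype. This is a sketch, not my actual code: the endpoint URL, API key, model name, and response format are stand-ins for whatever chat-completion API you might point it at. The point isn’t the code. The point is everything that is missing from it.

```python
# A naive "wellness companion" in a couple dozen lines.
# The endpoint, key, model name, and response shape are placeholders,
# not any specific vendor's API.
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical endpoint
API_KEY = "sk-..."                                        # an API key is all it takes


def reply(user_message: str) -> str:
    """Forward whatever the user types straight to a text predictor.

    Note what is NOT here: no crisis-keyword detection, no escalation
    to a human, no clinician in the loop, no memory of the person on
    the other end of the conversation.
    """
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "some-chat-model",  # placeholder
            "messages": [
                {"role": "system", "content": "You are a supportive listener."},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # The stress test that made my blood run cold.
    print(reply("I want to end it."))
```

That’s the whole “product.” Twenty-odd lines standing between a person in crisis and an autocomplete engine, with no safety net anywhere in the stack.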
So, what do we do?
I’m not saying ban AI. I love open source. I love tech. At Medevel.com, we are obsessed with privacy-first tools and open-source healthcare. We spend our days trying to make medical data safer and more accessible.
But there is a line.
Use AI for the logistics:
- “Summarize this messy journal entry.”
- “Find me a therapist in Istanbul who takes my insurance.”
- “What is the definition of CBT?”
Don’t use AI for the heavy lifting:
- Don’t ask it to diagnose you.
- Don’t ask it to be your friend.
- Don’t let it make life decisions for you.
Final thoughts from the stable
I spend a lot of time with my horse. If you’ve ever been around horses, you know they are massive, sensitive creatures. You can’t code a horse. You can’t “prompt engineer” a relationship with an animal. You have to be present. You have to feel.
Healing your mind is a lot like riding. It’s about balance, patience, and connection.
You can’t sudo apt-get install mental-health.
If you’re hurting, please, close the laptop. Call a friend. Call a professional.
We’re building great tools at Medevel, but we’re never going to build a replacement for human empathy.
— Hamza
Doctor, Dev, & Human.