When Sam Altman recently admitted that we are still in a “primitive age” regarding digital memory, he wasn’t just describing a technical bottleneck at OpenAI. Unintentionally, he was announcing the end of the honeymoon phase between modern medicine and Large Language Models (LLMs).
I view this landscape through a dual lens: one eye in the operating room as a medical doctor, and the other on the command line as a developer and open-source advocate. From this vantage point, the healthcare sector looks like the biggest victim of what I call the “Linguistic Illusion.” We are currently pouring billions into systems that are brilliant at talking about disease but are fundamentally incapable of understanding the patient.
Here is why our obsession with digital eloquence needs to end, and why the future belongs to “Physical Intelligence.”
The Trap of the “Digital Placebo”
The fundamental flaw of LLMs in healthcare isn’t that they make mistakes; it’s that they are probabilistic machines designed to fake confidence.
In the realm of mental health, the current wave of “AI Therapist” applications represents a potential existential risk. A language model does not understand depression as a physiological or behavioral state; it simply predicts the next most probable token that sounds “empathetic.” It is a Digital Placebo: it might soothe a patient momentarily with flowery language, but it is blind to the acoustic changes in voice tone, the micro-expressions, and the behavioral patterns that signal suicidal ideation.
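To make that mechanism concrete, here is a deliberately toy Python sketch of the only operation a language model performs: sampling the next plausible token. The candidate words and their probabilities are invented purely for illustration.

```python
import random

# Toy sketch: a real LLM scores tens of thousands of candidate tokens
# with a neural network, but the core move is the same -- sample the
# next plausible word. These candidates and probabilities are invented.
candidates = {
    "I'm": 0.34,   # -> "I'm so sorry you feel this way..."
    "That": 0.28,  # -> "That sounds really difficult..."
    "It": 0.19,
    "Have": 0.11,
    "You": 0.08,
}

def sample_next_token(probs):
    """Pick the next word in proportion to its estimated probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# A "response" is just this step repeated, one token at a time.
# No variable here encodes vital signs, affect, or suicide risk.
print(sample_next_token(candidates))
```

The output can sound caring; the mechanism cannot care, because nothing in the loop models the patient.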
The Catastrophe of Self-Diagnosis
As a clinician, I have watched a worrying trend explode: patients arriving at my clinic armed with “diagnoses” from ChatGPT, Gemini, or Copilot.
When you ask a chatbot about your symptoms, you aren’t getting a medical analysis. You are getting a probability-weighted remix of patterns in the medical texts it was trained on. In medicine, an AI “hallucination” isn’t a quirky software bug we can patch later; it is a matter of life and death. A doctor doesn’t need eloquence; they need truth. The current generation of AI offers the former in abundance while struggling dangerously with the latter.
The Operating Room Doesn’t Need Poets
The next real revolution requires a shift from the “Chat Economy” to the “Action Economy.”
A future surgical robot does not need to know how to write a sonnet about a scalpel, nor does it need to summarize a medical report in the style of Shakespeare. What it needs is Physical Intelligence (a short code sketch follows this list):
- Understanding tissue resistance (haptics).
- Calculating precise cutting angles in real-time.
- Mastering fluid dynamics and pressure.
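To make the contrast concrete, here is a minimal, purely illustrative Python sketch: physical intelligence is a feedback loop over sensed resistance, not a sentence generator. The tissue model and every constant are invented; a real surgical controller would use impedance control, force/torque sensing, and redundant safety interlocks.

```python
# Toy proportional controller: adapt cutting force as tissue resistance
# changes. All numbers and the tissue model are invented for illustration.

TARGET_DEPTH_MM = 2.0  # desired incision depth
GAIN = 0.5             # proportional gain

def incision_depth(force_n: float, stiffness: float) -> float:
    """Toy tissue model: softer tissue (lower stiffness) yields more depth."""
    return force_n / stiffness

force_n = 1.0
for stiffness in (0.6, 0.6, 1.4, 1.4, 2.2, 2.2):  # tissue densifies mid-cut
    depth = incision_depth(force_n, stiffness)
    error_mm = TARGET_DEPTH_MM - depth
    force_n += GAIN * error_mm * stiffness  # haptic feedback closes the loop
    print(f"stiffness={stiffness:.1f} -> force={force_n:.2f} N, depth={depth:.2f} mm")
```

The point is not the code; it is that the loop’s inputs and outputs are physical quantities, and so are its failure modes.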
This is the promise of World Models. We need to move from “Software” that lives on screens to “Smartware” that interacts with biological and physical reality. Betting on teaching AI “grammar rules” rather than “Newton’s laws” is an investment in the past.
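As a rough illustration of what betting on “Newton’s laws” means in practice, here is a toy forward model with placeholder physics of my own: given a state and an action, it predicts the next state. That state-prediction primitive is what a planner needs, and it is exactly what next-token prediction does not provide.

```python
from dataclasses import dataclass

# Toy world model: one step of simplified Newtonian physics (unit mass,
# Euler integration). Everything here is a placeholder for illustration.

@dataclass
class State:
    position_m: float
    velocity_m_s: float

def world_model_step(s: State, force_n: float, dt: float = 0.01) -> State:
    """Predict the next physical state from (state, action): F = ma, m = 1 kg."""
    return State(
        position_m=s.position_m + s.velocity_m_s * dt,
        velocity_m_s=s.velocity_m_s + force_n * dt,
    )

# Planning means rolling the model forward and judging actions by their
# predicted physical outcomes -- a move a pure token predictor cannot make.
s = State(position_m=0.0, velocity_m_s=0.0)
for _ in range(100):  # simulate one second of pushing with 2 N
    s = world_model_step(s, force_n=2.0)
print(f"predicted position after 1 s: {s.position_m:.2f} m")  # ~0.99 m
```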
$15 Trillion Looking for a Body
While Silicon Valley remains fixated on digitizing information (automating reports, summarizing electronic health records), the “elephant in the room” is the physical economy: a $15 trillion sector comprising hospitals, pharmaceutical logistics, and elderly care.
Robots that can navigate chaotic hospital corridors or physically assist a stroke patient require intelligence that grasps 3D space and time (spatiotemporal dynamics), not just a language model that regurgitates protocols. We need intelligence that can “work with its hands,” not just a chatty management consultant.
The Real Fear: Tech Feudalism
As a Linux user since the 90s and a lifelong open-source advocate, this is what keeps me up at night.
Training on text was expensive, but training on physics simulations and World Models (as NVIDIA and Tesla are currently attempting) requires astronomical computing power. If LLMs concentrated power in a handful of companies like OpenAI, World Models risk creating a hermetically sealed “Tech Feudalism.”
The critical question for us as doctors and developers is this: Will we see a “Linux” for medical World Models? Or are we destined to be mere tenants, renting reality from a few tech giants who possess the exclusive power to simulate the physical world?
The Bottom Line
The strategy of stacking ever more text data has reached the point of diminishing returns. The solution isn’t more data; it’s deeper understanding.
At Medevel, and in my own practice, I have stopped being impressed by an AI’s ability to write. The future belongs to those who build accurate physical models that understand the human body and the laws of the world it inhabits.