The Dangerous Imbalance: AI in the Hands of Patients but Not Clinicians
Algorithms now sit in the waiting room.
A 48-year-old man arrives in the ED with chest pain. He is pale, anxious, and holding a printed transcript from ChatGPT. The AI told him he was likely having a heart attack. His ECG is normal, but his anxiety is fixed, framed in the binary terms the AI used. Calming him while working through the differential takes time that could have gone to the next patient in line.
Sam Altman, chief executive of OpenAI, has called ChatGPT “most of the time… a better diagnostician than most doctors”, yet says he would not trust it without a human doctor in the loop. In Australian hospitals, the opposite is emerging: patients can consult AI tools such as ChatGPT and Claude freely, while clinicians are barred from using them in practice.
A Debate in the Dark
This is algorithmic anchoring: patients latch onto a conclusion from an authoritative-sounding AI source and struggle to let go. Without access to the same tool, the clinician is debating an unseen third voice in the room. Sometimes the harm is immediate. One patient delayed attending the ED because an AI tool had labelled their symptoms “non-urgent”; they arrived septic.
If both patient and doctor can see the same AI-generated medical advice, why should only one be allowed to use it?
What AI Misses in a Clinical Encounter
Diagnosis in the ED is never just about the data. A doctor reads tone, body language, a flicker of hesitation. These cues shape the line of questioning, the pace of investigations, and the order of treatment. The algorithm sees numbers. The doctor sees the patient’s partner wringing their hands.
The Risk to Trust
If patients see AI as faster, more certain, and free from bias, the clinician risks becoming the second opinion, one already under suspicion. In a domain where trust must be won in minutes, that is dangerous. If trust erodes, patients will treat the doctor as a hurdle, not a partner.
Clintix View
Banning clinician access while allowing patient use is a safety hazard. Governance, not prohibition, is the solution: vetted tools, clear safeguards, and the ability for both patient and doctor to interrogate AI together.
AI will enter the consultation room. The only question is whether doctors will be part of the conversation when it does.
Coming next in Part 2: Why Australia’s lack of national AI policy for healthcare is creating confusion, deepening inequality, and leaving patients unprotected when they upload their medical data to AI platforms — and what we can learn from countries already getting it right.