Dispatching the Future

In emergency care, the chain of survival doesn’t begin in the emergency department.
It begins with a phone call.

And increasingly, that call is being answered not just by a human, but by artificial intelligence.

Emergency dispatch centres are trialling AI systems that analyse voice, breathing patterns, tone, and keywords in real time. These tools are trained to identify cardiac arrest, trauma, stroke, and sepsis before the caller even finishes their sentence.

It’s remarkable. Algorithms, listening quietly in the background, can detect abnormal breathing or the rising panic in a parent’s voice. Some are accurate enough to recognise a cardiac arrest over the phone faster than trained human dispatchers. Others predict the likelihood of a high-acuity event and suggest resource allocation - before any paramedic is even dispatched.

This is not speculative. It’s happening now.

In Tel Aviv, MDGo integrates vehicle sensor data with crash analytics to predict internal injuries before paramedics arrive. In the United States, platforms like RapidSOS pull real-time data from smartphones, wearables, and connected cars, pushing this information directly to emergency services. A fall detected by a smartwatch or a car crash detected by onboard telemetry can trigger a prehospital response without a human voice being involved at all.

AI, in this space, acts as a sentinel - watching, listening, deciding.

And the potential benefits are obvious: faster recognition of emergencies, better triage, more efficient allocation of scarce EMS resources, and earlier hospital pre-notification. The promise is a streamlined, intelligent system that connects people in crisis to the right care - faster than ever before.

But there are risks. Subtle ones.

Dispatch is a messy, emotional, context-rich process. A mother screaming over a lifeless child. A confused elderly caller describing chest pain in metaphors. A teenager using slang for overdose. These aren’t clean datasets - they’re linguistic chaos, shaped by culture, language, fear, and misunderstanding.

An AI system trained on one linguistic population may misinterpret another.
A model optimised for a clean audio signal may falter in a noisy urban environment.
A pattern recognised as benign in one region may be deadly in another.

And the consequences of error are acute:

  • A false negative, and the ambulance doesn’t arrive.

  • A false positive, and a limited resource is diverted from a genuine emergency.

  • Or worse, a biased system that systematically misclassifies distress in marginalised communities.
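To make those stakes concrete, here is a minimal sketch, in Python with invented toy data, of why these two errors are in tension: a triage model's decision threshold can be lowered to eliminate false negatives, but only at the cost of more false positives, and vice versa. The function, the scores, and the labels are all hypothetical, not drawn from any deployed dispatch system.

```python
# Hypothetical illustration of a dispatch triage threshold trade-off.
# Scores and labels below are invented toy data, not real call data.

def confusion(scores, labels, threshold):
    """Count (true pos, false pos, true neg, false neg) at a threshold."""
    tp = fp = tn = fn = 0
    for score, is_emergency in zip(scores, labels):
        flagged = score >= threshold
        if flagged and is_emergency:
            tp += 1          # correctly dispatched
        elif flagged and not is_emergency:
            fp += 1          # resource diverted unnecessarily
        elif not flagged and is_emergency:
            fn += 1          # the ambulance doesn't arrive
        else:
            tn += 1          # correctly stood down

    return tp, fp, tn, fn

# Toy "cardiac arrest probability" scores for ten calls.
scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.35, 0.30, 0.20, 0.10, 0.05]
labels = [True, True, True, False, True, False, False, False, False, False]

# A lenient threshold misses nothing but diverts three crews.
print(confusion(scores, labels, 0.30))   # → (4, 3, 3, 0)

# A strict threshold frees resources but misses one real arrest.
print(confusion(scores, labels, 0.50))   # → (3, 1, 5, 1)
```

There is no threshold in this toy example that avoids both errors at once; tuning the system is always a value judgement about which error a community can better tolerate.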

Then there is the issue of accountability.
If a dispatcher follows an AI’s recommendation and harm occurs, who bears the responsibility?
If an algorithm silently overrides human intuition, and someone dies, is it a systems failure - or an ethical failure?

These are not abstract questions. They are pressing.
Because the technology is already being deployed.

The way forward is not to abandon AI in dispatch, but to embed it wisely:

  • As a supportive tool, not an invisible authority.

  • As a partner that prompts better human decision-making, not one that replaces it.

  • With constant audit, contextual training, and clear pathways for human override.

  • And with culturally sensitive datasets that reflect the diversity of the people we serve.

AI has extraordinary potential to reshape the start of the emergency care journey.
It could reduce response times, predict deterioration before arrival, and ensure patients are taken to the right place the first time.

But that promise will only be realised if we remain vigilant - not just about what the machine hears, but about what it doesn’t understand.

Because the best emergency systems are not just fast.
They are attentive, adaptive, and human-centred.
And in that, AI still has much to learn.
