Jurassic Tech: When AI Outruns the System

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
— Dr Ian Malcolm, Jurassic Park (1993)

In Jurassic Park, the brilliance of genetic engineering collides with the absence of boundaries, unleashing consequences no one can fully control. The same cautionary tale applies to artificial intelligence in Australian healthcare today.

The TGA’s 2025 report, Clarifying and Strengthening the Regulation of Medical Device Software including Artificial Intelligence, is our version of the perimeter fence. It is necessary, but not yet electrified. While regulation is being modernised, AI systems are already loose in the hospital ecosystem. They are influencing decisions, shaping workflows and sometimes operating in ways never intended.

The question is no longer whether we can regulate AI, but whether we can implement it safely.

What the TGA Got Right

To its credit, the Therapeutic Goods Administration has strengthened the foundation:

  • AI tools that meet the definition of a medical device must be regulated for safety, quality and performance.

  • Key definitions, including ‘supply’, ‘manufacturer’ and ‘sponsor’, are being reviewed to reflect digital and virtual environments.

  • A risk-based, technology-agnostic approach is retained, aligning with international regulators.

  • Commitments have been made to develop guidance for adaptive AI, open datasets and model transparency.

  • Harmonisation with EU and FDA frameworks remains a core strategy.

It’s a solid regulatory perimeter. But a perimeter is not a playbook. It won’t help when the systems are already inside.

What’s Missing: No National AI Implementation Framework

The dinosaurs in Jurassic Park weren’t dangerous because they existed. They became dangerous because no one planned how to manage them.

In healthcare, we are making the same mistake. Australia has no national framework to guide how AI is adopted, monitored, retired or governed across real-world clinical settings. The consequences are mounting.

1. Inconsistent Adoption Across Health Services

Right now, hospitals are writing their own AI rules — or have none at all. There is no national baseline for validation, oversight or consent.

Result: Unequal care. One emergency department may benefit from AI-driven triage. Another may avoid it altogether out of uncertainty or risk aversion.

2. Clinician Confusion and Legal Risk

Who is responsible when an AI tool makes a bad call? What counts as informed consent when a predictive model influences decision-making?

Result: Some clinicians over-rely on AI. Others ignore it entirely. Neither approach is safe.

3. No System-Wide Safety Monitoring

Adaptive AI can evolve after deployment. Yet there is no national system to detect performance drift, latent bias or emerging safety concerns.

Result: Degraded or misapplied AI may influence care without structured detection or mitigation.
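
To make “performance drift” concrete: the simplest form of post-deployment monitoring compares the distribution of a model’s outputs in live use against the distribution it was validated on. The sketch below uses the population stability index (PSI), one common drift statistic; the data, threshold and names are illustrative assumptions, not anything the TGA currently prescribes.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """PSI between two score distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate."""
    # Bin edges come from the baseline (validation-time) scores.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    # Clip live scores into the baseline range so every score lands in a bin.
    live = np.clip(live, edges[0], edges[-1])

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)

    # Small epsilon avoids log-of-zero when a bin is empty in one distribution.
    eps = 1e-6
    base_pct = np.clip(base_pct, eps, None)
    live_pct = np.clip(live_pct, eps, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Illustrative data only: risk scores at validation vs. six months later.
rng = np.random.default_rng(42)
validation_scores = rng.beta(2, 5, 10_000)  # distribution the tool was approved on
current_scores = rng.beta(3, 4, 2_000)      # distribution observed in live use

psi = population_stability_index(validation_scores, current_scores)
if psi > 0.25:
    print(f"PSI = {psi:.3f}: significant drift, escalate for clinical review")
```

None of this is technically hard. The gap is that no Australian body currently requires it, receives the outputs or acts on them.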

4. Innovation Gridlock

Developers face unclear approval pathways. Startups and research teams struggle to navigate fragmented procurement and integration processes.

Result: Promising Australian tools are delayed or diverted to overseas markets with clearer frameworks.

5. Ethics and Equity Gaps

There is no formal process to test AI models for cultural safety, equity across populations or representativeness of training data.

Result: AI risks embedding systemic bias and excluding the groups most in need of healthcare innovation.
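
In practice, an equity check need not be exotic. The most basic version is a subgroup audit: report a model’s performance for each population separately, rather than as a single headline figure, before the tool is deployed. A minimal sketch, with entirely synthetic data and hypothetical group labels:

```python
import numpy as np

def sensitivity_by_group(y_true, y_pred, groups):
    """Recall (sensitivity) per subgroup: of the genuine positives in each
    group, what fraction did the model actually flag?"""
    return {
        g: float(y_pred[(groups == g) & (y_true == 1)].mean())
        for g in np.unique(groups)
    }

# Synthetic data only: labels, predictions and groupings are invented.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1_000)
groups = rng.choice(["metro", "regional", "remote"], 1_000, p=[0.6, 0.3, 0.1])

# Simulate a model trained mostly on metropolitan data: it detects 90%
# of metro cases but only 70% of regional and remote ones.
hit_rate = np.where(groups == "metro", 0.9, 0.7)
y_pred = ((y_true == 1) & (rng.random(1_000) < hit_rate)).astype(int)

for group, sens in sensitivity_by_group(y_true, y_pred, groups).items():
    print(f"{group:>9}: sensitivity = {sens:.2f}")
```

A single aggregate accuracy number would hide exactly the gap this prints out.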

What Needs to Happen Now

We don’t need more fences. We need a park operations manual.

Australia urgently needs a national AI implementation framework that can:

  • Define clear roles for hospitals, clinicians and vendors

  • Establish governance for adoption, oversight and decommissioning

  • Require transparency, explainability and real-world validation

  • Embed equity and safety into procurement and deployment

  • Enable innovation sandboxes for testing novel models in safe environments

Without this, we are left hoping the velociraptors don’t find the gaps.

Conclusion: AI Is Already Here. Are We Ready?

The TGA has done what it can within its regulatory remit. But AI is no longer confined to research labs or pilot programs. It is already embedded in emergency departments, outpatient clinics and ICUs, often without clear oversight, governance or real-world validation.

In one of Jurassic Park’s most unforgettable scenes, a jeep tears through the darkness, tyres skidding on slick jungle roads. The T-Rex is loose. Power has failed. Warning lights flash. Mud flies. The passengers are jolted as the vehicle bucks and accelerates, pursued by something bigger, faster and more powerful than they ever imagined. The mirror reads: “Objects in mirror are closer than they appear.” No one is steering. There is no map.

We’re already in that jeep. Whether we stay ahead depends not on how fast the AI moves, but on whether anyone takes the wheel.
