1ST PLACE — TAMPA BAY HEALTHCARE AI HACKATHON 2024

How We Won the Tampa Bay Healthcare AI Hackathon — and What We Learned

Our team built a clinical prior authorization AI in 28 hours. Here's what worked, what almost didn't, and why the compliance layer was the hardest part.

LockedIn Labs Engineering Team · November 15, 2024 · 8 min read

The brief was simple: reduce prior authorization turnaround time for a regional health system. 28 hours. Go.

We'd entered the Tampa Bay Innovation Center's Healthcare AI Hackathon with a team of four — two engineers, one ML specialist, and one compliance architect. That last role is what separated us from the other teams.

The Problem With Most Healthcare AI Demos

Every team at the hackathon built something that worked on demo data.

The problem with healthcare AI isn't the model. It's the context. A prior auth request contains ICD-10 codes, CPT codes, clinical notes, payer-specific criteria, and a paper trail that needs to be audit-ready for CMS.

Most teams treated the AI as the product. We treated the compliance layer as the product, and made AI the engine.

What We Built

Our system had three components:

1. Document ingestion layer

OCR + classification pipeline that extracted structured data from unstructured prior auth forms. Trained on 12 clinical document types. Ran locally (no PHI left the health system's environment).
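A rough sketch of that flow: classify the OCR'd text first, then extract fields appropriate to the document type. The `classify` and `extract` callables here are hypothetical stand-ins for the actual models, which the post doesn't describe in detail; the type names are illustrative (the real system handled 12).

```python
from dataclasses import dataclass, field

# Illustrative subset of supported clinical document types
# (the real pipeline was trained on 12).
DOCUMENT_TYPES = {"prior_auth_form", "clinical_note", "lab_report"}

@dataclass
class ExtractedDocument:
    doc_type: str
    fields: dict = field(default_factory=dict)

def ingest(raw_text: str, classify, extract) -> ExtractedDocument:
    """Classify OCR'd text, then extract structured fields.

    `classify` and `extract` stand in for local models; because
    everything runs inside the health system's environment,
    no PHI ever leaves it.
    """
    doc_type = classify(raw_text)
    if doc_type not in DOCUMENT_TYPES:
        raise ValueError(f"unsupported document type: {doc_type}")
    return ExtractedDocument(doc_type, extract(raw_text, doc_type))
```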

2. Prior auth decision engine

A fine-tuned classification model that matched patient criteria against payer-specific guidelines. Not a black box. Every decision came with a confidence score and the specific criteria matched or missed.
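The "not a black box" part can be made concrete: the decision object carries exactly which criteria were met and which were missed. In this sketch the payer criteria are simple predicates and the confidence is rule coverage; both are illustrative stand-ins for the fine-tuned model described above.

```python
from dataclasses import dataclass

@dataclass
class AuthDecision:
    approved: bool
    confidence: float
    criteria_met: list
    criteria_missed: list

def evaluate(patient: dict, payer_criteria: dict) -> AuthDecision:
    """Match a patient record against payer-specific criteria.

    `payer_criteria` maps a criterion name to a predicate over the
    patient record. The result names every matched and missed
    criterion, so a reviewer can see exactly why a request was
    approved or flagged.
    """
    met = [name for name, check in payer_criteria.items() if check(patient)]
    missed = [name for name in payer_criteria if name not in met]
    confidence = len(met) / len(payer_criteria) if payer_criteria else 0.0
    return AuthDecision(approved=not missed, confidence=confidence,
                        criteria_met=met, criteria_missed=missed)
```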

3. Audit trail generator

Every model decision was logged to an immutable audit record: timestamp, input hash, model version, decision, confidence, and a human-readable explanation. This was the piece that made the judges stop and ask questions.
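One common way to make such a log tamper-evident is hash chaining: each record includes a hash of the previous one, so any retroactive edit breaks the chain. The post doesn't describe the exact mechanism the team used; this is a minimal sketch of the idea with the fields listed above.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only audit log; each record hashes its predecessor,
    so a retroactive edit anywhere breaks the chain."""

    def __init__(self):
        self.records = []

    def log(self, input_payload: dict, model_version: str,
            decision: str, confidence: float, explanation: str) -> dict:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "timestamp": time.time(),
            # Hash of the input, not the input itself: no PHI in the log.
            "input_hash": hashlib.sha256(
                json.dumps(input_payload, sort_keys=True).encode()).hexdigest(),
            "model_version": model_version,
            "decision": decision,
            "confidence": confidence,
            "explanation": explanation,
            "prev_hash": prev_hash,
        }
        record["record_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Walk the chain; any edited or reordered record fails."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "record_hash"}
            if r["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if expected != r["record_hash"]:
                return False
            prev = r["record_hash"]
        return True
```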

The Compliance Layer Was The Hardest Part

Three hours in, our ML engineer had a working prototype. We spent the other 25 hours on compliance.

Not because we're paranoid — because that's where real healthcare AI lives or dies.

CMS requires that prior auth decisions be explainable to both providers and patients. HIPAA requires that any system touching PHI be access-controlled, logged, and auditable. Our audit trail wasn't a nice-to-have. It was the product.

We also built a role-based access layer. The model could be queried only through the clinical staff system's service credential, not by individual users. That matters for compliance with HIPAA's minimum necessary standard.
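Reduced to its core, the idea is that scopes are granted to systems rather than people. The system IDs and scope names below are hypothetical; this is a sketch of the access check, not the team's implementation.

```python
# Access is granted to calling *systems*, not individual clinicians,
# in line with HIPAA's minimum necessary standard. Names are illustrative.
AUTHORIZED_SYSTEMS = {
    "clinical-staff-system": {"prior_auth:query"},
}

def authorize(system_id: str, scope: str) -> bool:
    """Allow a call only if the calling system holds the scope."""
    return scope in AUTHORIZED_SYSTEMS.get(system_id, set())
```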

Results and What the Judges Said

Across 200 prior auth requests processed:

- 4.3s average processing time per request
- 87% agreement with human reviewer decisions
- 91% of cases resolved same-day (vs. a 3.2-day baseline)

The judges — a VP from BayCare, a compliance attorney, and a healthcare IT architect — said the same thing: most teams forgot the compliance layer existed.

We didn't win because our model was the best. We won because our system could actually be deployed in a real hospital without triggering a CMS audit.

What We'd Do Differently

The OCR pipeline was the biggest bottleneck. We used an off-the-shelf model that struggled with handwritten physician notes. In production, that's a hard problem — a lot of clinical documentation is still handwritten or poorly structured.

We'd also integrate directly with an EHR (we used mock data), which adds weeks of Epic or Cerner integration work that a hackathon environment glosses over.

Why This Matters for Enterprise Healthcare AI

The lesson isn't “compliance is important” — everyone knows that. The lesson is: compliance architecture should drive your AI architecture, not the other way around.

Start with the audit trail. Then build the model around what you can actually log, explain, and defend.

That's how you build healthcare AI that makes it past the legal team. That's how you build systems for 12 million patients without a breach.

The hackathon proved the concept. Production proved the principle.