
Medical record chronology automation - UK medico-legal platform

A UK SaaS platform used by HM Courts and Tribunals, instructing solicitors, and around 180 medical experts. We built the AI pipeline that turns hundreds of pages of clinical records into a structured, searchable timeline.

The problem

Medico-legal case work depends on chronology. Before a medical expert can form an opinion, before a solicitor can build an argument, someone has to read every page of a patient's clinical records and construct a timeline - every diagnosis, every treatment, every hospital admission, every test result, in order.

For a typical case bundle that means 200-400 pages of GP records, hospital letters, discharge summaries, and imaging reports, many of them handwritten or low-quality scans. An experienced clinician or paralegal doing this manually takes 2-4 hours per bundle. Multiply that across dozens of active cases and the time cost is significant - and the risk of missing something buried on page 340 is real.

The client already handled the case management side. What they needed was a way to automate the chronology extraction itself, reliably enough to be used in live court proceedings.

What we built

The core of the system is a document processing pipeline that takes an encrypted PDF, extracts clinical events from it, and outputs a structured, dated chronology. Each event carries a confidence score, a source page reference, and links to related events elsewhere in the document.
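As a sketch, the record for one such event might look like the following; the field names are illustrative, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClinicalEvent:
    """One entry in the extracted chronology (illustrative schema)."""
    event_date: date            # when the event happened, per the record
    event_type: str             # one of the 18 categories, e.g. "diagnosis"
    description: str            # short summary of the event
    confidence: float           # 0.0-1.0 extraction confidence
    source_page: int            # page in the original bundle
    related_event_ids: list[str] = field(default_factory=list)  # cross-references
```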

Document quality in medico-legal bundles varies considerably. Standard OCR handles most typed records well, but older notes, faxed referrals, and low-resolution scans need a different approach. The pipeline uses a two-stage extraction strategy: a fast GPU-based pass that handles the majority of documents, falling back to a medical vision model for pages that score below a quality threshold. This keeps latency low for the common case without sacrificing accuracy on the difficult ones.
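The routing itself is simple once each page carries a quality score from the fast pass. A minimal sketch, assuming hypothetical fast_ocr and vision_fallback callables and an illustrative threshold:

```python
from typing import Callable, Tuple

QUALITY_THRESHOLD = 0.80  # illustrative; the real cutoff would be tuned empirically

def extract_page_text(
    page_image: bytes,
    fast_ocr: Callable[[bytes], Tuple[str, float]],  # returns (text, quality score)
    vision_fallback: Callable[[bytes], str],         # slower medical vision model
) -> str:
    """Two-stage extraction: fast GPU OCR for the common case,
    vision-model fallback only for pages below the quality threshold."""
    text, quality = fast_ocr(page_image)
    if quality >= QUALITY_THRESHOLD:
        return text
    # handwritten notes, faxed referrals, low-resolution scans land here
    return vision_fallback(page_image)
```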

Large documents need chunked processing to stay within context limits and to allow parallelism. The pipeline breaks documents into overlapping segments, processes them concurrently, and then deduplicates events that appear in adjacent chunks - an important step given that the same clinical event often gets referenced multiple times across a bundle (the original letter, a GP summary, a discharge note).
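A sketch of that chunk-and-merge step, using the standard library's difflib for the fuzzy match; chunk size, overlap, and similarity cutoff are illustrative values, not the production settings:

```python
from concurrent.futures import ThreadPoolExecutor
from difflib import SequenceMatcher
from typing import Callable

CHUNK_SIZE, OVERLAP = 20, 4  # pages per chunk / pages shared with the next chunk; illustrative

def chunk_pages(pages: list[str]) -> list[list[str]]:
    """Split a document into overlapping page segments."""
    step = CHUNK_SIZE - OVERLAP
    return [pages[i:i + CHUNK_SIZE] for i in range(0, len(pages), step)]

def dedupe(events: list[dict], cutoff: float = 0.9) -> list[dict]:
    """Drop events whose text closely matches an already-kept event.
    Catches the same admission appearing in an original letter, a GP
    summary, and a discharge note, or twice across a chunk overlap.
    A stricter version would also require the dates to agree."""
    kept: list[dict] = []
    for ev in events:
        if not any(SequenceMatcher(None, ev["text"], k["text"]).ratio() >= cutoff
                   for k in kept):
            kept.append(ev)
    return kept

def process_document(pages: list[str],
                     extract_events: Callable[[list[str]], list[dict]]) -> list[dict]:
    """Process overlapping chunks concurrently, then merge and deduplicate."""
    with ThreadPoolExecutor() as pool:
        per_chunk = pool.map(extract_events, chunk_pages(pages))
    return dedupe([ev for chunk in per_chunk for ev in chunk])
```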

// pipeline stages

1. ingest: encrypted pdf received, integrity verified
2. extraction: fast gpu ocr pass
   - low-quality pages flagged
   - fallback: medical vision model
3. chunking: overlapping segments, parallel processing; timeout scaling by document size (see the sketch after this list)
4. event extraction: llm identifies 18 clinical event types
   - diagnoses, treatments, medications
   - tests, procedures, admissions
   - referrals, follow-ups, +10 more
5. deduplication: fuzzy matching across chunk boundaries
6. output: structured chronology (date, event type, confidence score, page ref, related events)
7. indexing: vector embeddings for semantic search
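The timeout scaling in stage 3 is a small but necessary guard: a limit tuned for a 40-page bundle would kill a 400-page one mid-run. A linear rule along these lines (constants illustrative) is enough:

```python
def chunk_timeout(num_pages: int,
                  base_s: float = 60.0,
                  per_page_s: float = 3.0,
                  cap_s: float = 1800.0) -> float:
    """Scale the processing timeout with document size.
    All constants are illustrative, not the platform's real settings."""
    return min(base_s + per_page_s * num_pages, cap_s)
```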

The extraction stage classifies events into 18 categories - diagnoses, treatments, medications, diagnostic tests, procedures, hospital admissions, referrals, and more. Each extracted event gets a confidence score based on the clarity of the source text and how unambiguous the date reference is. Low-confidence events are surfaced to the reviewer rather than silently included.
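The triage itself reduces to a threshold split over the event records sketched earlier; the cutoff value here is an assumption:

```python
REVIEW_THRESHOLD = 0.75  # illustrative cutoff

def triage(events: list[ClinicalEvent]) -> tuple[list[ClinicalEvent], list[ClinicalEvent]]:
    """Split extracted events into auto-accepted and flagged-for-review.
    Nothing is silently dropped: low-confidence events go to the reviewer."""
    accepted = [e for e in events if e.confidence >= REVIEW_THRESHOLD]
    flagged = [e for e in events if e.confidence < REVIEW_THRESHOLD]
    return accepted, flagged
```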

Once a chronology is extracted, it is indexed using vector embeddings, giving users semantic search over the full timeline. Searching "shoulder surgery" will surface relevant entries even where the record uses different terminology.
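A minimal sketch of that search path, assuming a locally hosted embedding model (consistent with the platform's no-third-party-API constraint) injected as an embed callable:

```python
import numpy as np
from typing import Callable

def build_index(entries: list[str],
                embed: Callable[[list[str]], np.ndarray]) -> np.ndarray:
    """Embed every chronology entry once; rows are L2-normalised vectors."""
    vecs = embed(entries)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def search(query: str, index: np.ndarray, entries: list[str],
           embed: Callable[[list[str]], np.ndarray], k: int = 5) -> list[str]:
    """Cosine-similarity search: 'shoulder surgery' can match an entry
    reading 'rotator cuff repair' if the model places them nearby."""
    q = embed([query])[0]
    q = q / np.linalg.norm(q)
    scores = index @ q
    return [entries[i] for i in np.argsort(scores)[::-1][:k]]
```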

Users see the extraction happening in real time via a websocket connection - progress updates as each section of the document is processed, rather than a wait screen followed by a result.
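On the server side this is a thin streaming endpoint. The sketch below assumes FastAPI and a hypothetical job_updates async generator; the case study does not name the actual framework:

```python
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws/progress/{job_id}")
async def progress(websocket: WebSocket, job_id: str):
    """Push a progress update as each chunk completes,
    instead of a wait screen followed by a single result."""
    await websocket.accept()
    async for update in job_updates(job_id):  # hypothetical async generator
        await websocket.send_json({
            "job_id": job_id,
            "chunks_done": update.done,
            "chunks_total": update.total,
        })
    await websocket.close()
```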

Results

8-15 min: per document, vs 2-4 hrs manual
400+ pages: maximum document size handled
live cases: in use by medical experts and solicitors

Documents that previously required 2-4 hours of manual review by a clinician or paralegal are now processed in 8-15 minutes, depending on size and scan quality. The system handles documents up to 400 pages and is in active use on real cases submitted by medical experts and instructing solicitors on the platform.

The time saving matters, but the consistency matters just as much. Manual chronology construction is vulnerable to reviewer fatigue across long documents. The pipeline applies the same attention to page 1 and page 390.

Data residency and compliance

Patient data is among the most sensitive categories of personal data in UK law. The entire system runs on self-hosted infrastructure within the UK. No patient records are sent to third-party model APIs. Encryption is applied at rest and in transit throughout the pipeline - from initial upload through to the final chronology output. This approach meets the standards required for handling court bundle materials and is consistent with NHS data security expectations.
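For the at-rest side, symmetric encryption of the stored bundle is the core move. A minimal sketch using the Python cryptography package's Fernet - the platform's actual scheme and key management are not specified here:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, held in a KMS/HSM, never beside the data
f = Fernet(key)

with open("bundle.pdf", "rb") as fh:
    ciphertext = f.encrypt(fh.read())  # what actually lands on disk

plaintext = f.decrypt(ciphertext)      # decrypted only inside the pipeline
```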

The infrastructure design was driven partly by the client's obligations to HM Courts and Tribunals and to the medical experts who rely on the platform. A cloud-API-first approach would have been faster to prototype but was incompatible with those requirements from the start.

// similar problem?

Working with high-volume documents?

If you have a process that involves reading and extracting information from large document sets, we should talk. We will tell you honestly what is and is not tractable with current AI.

start a conversation →