Dumbo

The elephant in the room: AI and legal advice

An AI system just beat doctors at diagnosis.
More accurate.
More cost-effective.
What does this mean for lawyers?

A study led by Microsoft AI - Sequential Diagnosis with Language Models (June 2025) - tested a multi-agent system against practising UK and US physicians.
The task? Work through clinical vignettes, one step at a time, to identify the correct diagnosis.

This wasn’t patients self-diagnosing with ChatGPT.
There were no patients.

Nor was it one clever model.
It was a team of agents:
One to generate hypotheses.
One to choose tests.
One to challenge assumptions.
One to manage cost.
All coordinated by an orchestrator.

The output was scored by another AI - The Judge - using a rubric built by doctors.

It’s not hard to imagine an orchestrator for legal advice.

Much of what we do is precisely the kind of reasoning that can be scaffolded.

Picture this:

  • A Red Flag Agent, trained on thousands of DD reports, flagging material risks.

  • A What’s Market Agent, benchmarking clause positions by deal type and jurisdiction.

  • A Timeline Agent, managing inputs and nudging toward artificial urgency - no change there, then.

  • An Inversion Thinking Agent, scanning the term sheet and asking: what’s the fastest way this deal could fall apart, so we know exactly what to obsessively avoid?

  • An SPA Drafting Agent, trained on curated deal precedents and approved know-how documents, swapping provisions with speed and fluency.

  • A Legal Orchestrator, sitting on top - coordinating outputs, packaging advice.
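For the technically curious, the pattern above is simple to sketch. This is a hypothetical toy in Python, not any real system: the agent names echo the list above, and each agent's one-line "finding" is a placeholder for what a real model would produce.

```python
# A minimal sketch of the orchestrator pattern described above.
# Every agent name and behaviour here is a hypothetical placeholder.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Agent:
    name: str
    run: Callable[[str], str]  # takes a matter summary, returns a finding


def orchestrate(matter: str, agents: List[Agent]) -> str:
    """Collect each agent's finding and package them into one advice note."""
    findings = [f"- {a.name}: {a.run(matter)}" for a in agents]
    return "\n".join([f"Advice note: {matter}"] + findings)


agents = [
    Agent("Red Flag", lambda m: f"material risks flagged in {m}"),
    Agent("What's Market", lambda m: f"clause positions benchmarked for {m}"),
    Agent("Inversion Thinking", lambda m: f"fastest failure modes of {m} listed"),
]

note = orchestrate("Project Dumbo SPA", agents)
print(note)
```

The orchestrator does nothing clever here, and that is the point: the coordination layer is the easy part. The hard part is what goes inside each agent.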

If that sounds familiar, it should.
These roles already exist.
They just have names. And titles. And billable targets.

Which makes what’s left feel narrower but sharper.

On a recent deal, the legal drafting wasn't what demanded the best of me.
It wasn't why the client called.
The issues weren't really legal at all.
They ran deeper:
Who needed to win?
Who couldn’t afford to be seen backing down?
When is a red line really red?

The battles weren’t won in mark-ups.
They were settled before we even got there:
On a lonesome walk through a field.
Across the lunch table.
By picking up the phone.

AI can increasingly reason like us.
But it can’t map pride, fear, ambition -
not yet.
So what does it mean for lawyers?
That we may need to redefine where our edge really lies.

✍️ Note to self:
The more AI thinks like us,
the more it shows what it can’t feel.