
Legal AI is usually framed as a model problem. Better models. Larger models. More capable models. The assumption is that if the technology is powerful enough, usefulness will follow. The empirical evidence suggests a different conclusion. Legal AI does not fail because models are insufficiently advanced. It fails because the dominant metaphor is wrong. The most effective legal AI behaves less like an automated system and more like a mentor.

This insight emerged during a series of empirical classroom pilots run through Product Law Hub using an AI-based legal coach called Frankie. The pilots were designed to observe how users develop judgment-based legal skills when working alongside AI. The findings draw on quantitative engagement data and qualitative interviews conducted throughout the course. What consistently produced better learning outcomes was not authority, speed, or completeness. It was collaboration.
Automation Is The Wrong Aspiration

Much of legal AI development is oriented around automation. Reduce effort. Eliminate steps. Deliver answers faster. That framing works for clerical or repetitive tasks. It breaks down when the task is judgment. Judgment cannot be automated without being diminished. It requires context, prioritization, and explanation. When AI systems attempt to replace those processes with outputs, they strip away the very work that produces expertise.

In the classroom pilot, authority-driven interactions exposed this limitation quickly. When the AI behaved like a tool that delivered conclusions, engagement dropped. Users deferred rather than reasoned. Learning slowed. The model was capable. The interaction was wrong.
Mentorship Is How Lawyers Actually Learn

Lawyers do not develop judgment by being handed answers. They develop it through guided struggle. A senior lawyer asks questions, challenges assumptions, and explains why something matters. They do not solve the problem for you unless it is necessary.

The most effective AI interactions in the pilot mirrored that dynamic. When the system asked clarifying questions, surfaced tradeoffs, and prompted users to articulate reasoning before responding, engagement increased. Quantitative data showed longer sessions and more iterative exchanges. Interviews revealed greater confidence and stronger retention. The AI did not become smarter. It became more mentor-like.
Authority Shuts Learning Down

One of the clearest contrasts in the data was between collaborative and authoritative modes. When the AI asserted answers early or framed guidance as definitive, users disengaged. They moved faster but learned less.

This is not surprising. Authority short-circuits curiosity. Once an answer is presented as final, there is little incentive to explore alternatives or test assumptions.

In contrast, when the AI withheld judgment and instead invited reasoning, users stayed cognitively involved. They treated the interaction as a conversation rather than a transaction. Legal AI that defaults to authority undermines its own value.
Collaboration Scales Better Than Control

There is a temptation to believe that authoritative AI is safer. Clear answers feel controllable. Collaborative systems feel messy. The pilot suggests the opposite.

Collaborative AI produced more durable learning and more trust. Users were better able to explain their reasoning and adapt it across scenarios. Control may reduce short-term risk. It increases long-term dependence. Mentorship builds capability.

This distinction matters as AI becomes embedded in training and workflows. Systems that act as authorities create passive users. Systems that act as mentors create better lawyers.
Why Models Keep Getting The Metaphor Wrong

Part of the problem is language. We talk about models, not relationships. We optimize for outputs, not interactions. We evaluate correctness, not growth.

Mentorship does not fit neatly into benchmark metrics. It is harder to demo. It takes longer to show value. But it aligns far more closely with how legal expertise actually develops.

The Product Law Hub pilot made this visible by stripping away performance theater. Students did not care how fast the AI responded. They cared whether it engaged with their thinking.
Mentors Adapt. Models Repeat.

Another insight from the pilot was how quickly trust eroded when the AI repeated itself or applied the same framework regardless of context. Repetition signaled inattention. Users disengaged.

Mentors do not repeat scripts. They adapt. They notice what the learner already understands and adjust accordingly. When the AI adapted its approach based on prior exchanges, users attributed greater intelligence to it, even when its substantive guidance was constrained. Trust followed attentiveness, not sophistication.
The Cost Of Choosing The Wrong Metaphor

Choosing automation as the dominant metaphor for legal AI carries a cost. It encourages tools that optimize for speed over understanding and authority over engagement. Those tools may look impressive but fail quietly in practice.

Choosing mentorship as the metaphor changes design priorities. It emphasizes questioning over answering, adaptation over uniformity, and explanation over assertion. The classroom data suggests that this shift is not philosophical. It is practical.
What This Means For Builders And Buyers

For builders, the takeaway is clear. Stop asking how much the model can do. Start asking how it behaves when a user is uncertain, wrong, or exploring.

For buyers, the question is not how many tasks a system can automate. It is whether the system helps lawyers think better over time. Legal AI will be judged not by its outputs, but by its influence on judgment.
The Future Of Legal AI Is Relational

The most important lesson from the empirical classroom work is that legal AI succeeds when it respects how lawyers learn. That learning is relational. It is iterative. It depends on challenge and explanation.

Models will continue to improve. That is inevitable. What is not inevitable is how we choose to deploy them. If legal AI continues to chase automation, it will keep disappointing. If it embraces mentorship, it has a chance to become something far more valuable.

Legal AI does not need to replace lawyers. It needs to teach them how to think.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks; her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
