
Legal AI is often sold as a training accelerator. Give junior lawyers faster answers, cleaner summaries, and clearer issue spotting, and they will ramp more quickly. That theory is tidy. It is also wrong.

In practice, many legal AI tools are quietly eroding the very skills junior lawyers most need to develop. Not because the tools are inaccurate, but because they collapse judgment into answers too early in the learning curve. When that happens, junior lawyers stop thinking before they have learned how.
This dynamic became hard to ignore during a series of empirical classroom pilots run through Product Law Hub using an AI-based product law coach called Frankie. The pilots were conducted in a product counseling course and designed to observe, not market, how law students and early-career lawyers interact with AI when learning judgment-based legal skills. The findings were based on a mix of quantitative engagement data and qualitative interviews conducted during and after the course. What emerged should worry law firms investing heavily in AI as a training solution.
Junior Lawyers Already Struggle With Confidence And Framing
Anyone who has supervised junior lawyers knows the pattern. They are often technically capable but hesitant. They look for the “right” answer instead of learning how to frame a problem, assess tradeoffs, and explain risk in context.

Confidence does not come from correctness alone. It comes from repeated exposure to uncertainty and the experience of reasoning through it. AI tools that jump straight to answers short-circuit that process. They remove the productive discomfort that forces a junior lawyer to ask, “What am I missing?” or “Why does this matter to the business?” Over time, that matters more than speed.
In the classroom pilot, this showed up quickly. When the AI behaved like an answer engine, delivering conclusions without first engaging the student’s reasoning, engagement dropped. Quantitative usage data showed shorter sessions and fewer follow-up interactions. Students moved on faster, but they did not go deeper.
When AI Answers Too Fast, Thinking Stops
The most striking finding from the pilot was not about accuracy. The AI’s legal guidance was generally sound. The problem was timing. When students were given answers before they had articulated their own reasoning, many disengaged.

In interviews, several described feeling less confident, not more. They deferred to the system’s output without fully understanding why it was correct. Others described a subtle sense that their own analysis no longer mattered.

This is exactly the opposite of what junior lawyers need. Early in their careers, they need to build judgment muscles, not outsource them. AI that answers too quickly trains deference instead of reasoning.
In contrast, when the AI forced students to slow down by asking clarifying questions or prompting them to articulate tradeoffs before responding, engagement increased. Students stayed longer, revised their thinking, and were more willing to defend their conclusions. The difference was not intelligence. It was design.
Confidence Erosion Is Easy To Miss And Hard To Fix
One of the more concerning qualitative signals from the pilot was how easily confidence eroded when AI interactions felt overly directive. Several students reported that they second-guessed themselves more after using the system in answer-forward modes. Even when they agreed with the output, they felt less ownership over the reasoning.
In a firm setting, this kind of erosion is easy to miss. Junior lawyers may appear productive. They may turn work around faster. But over time, they become overly reliant on tools to tell them what to think. That dependence shows up later, when they struggle to explain their reasoning to a partner, a client, or a regulator. AI did not create this risk, but it amplifies it.
Training Environments Reveal What Practice Hides
Classrooms are unusually good at surfacing these dynamics because learners have fewer incentives to hide confusion. They disengage visibly. They complain. They stop using the tool. In practice, junior lawyers adapt instead. They comply, even if the tool is making them worse.
That is why the Product Law Hub pilot is instructive beyond education. It offers an early warning signal for what will happen as AI tools are embedded deeper into firm training and workflows. If a tool discourages reasoning in a low-stakes learning environment, it will do the same under billable pressure.
The Problem Is Not AI. It Is How We Deploy It.
None of this argues against AI in legal training. It argues against lazy deployment. AI can support junior lawyers when it behaves like a mentor instead of an oracle.

The most effective interactions in the pilot occurred when the system asked questions before giving answers, explained why an issue mattered in context, and made tradeoffs explicit instead of hiding them. Those design choices kept the human in the loop cognitively, not just procedurally. They reinforced the idea that judgment is something you build, not something you receive.
What Firms Should Take Seriously
If firms want AI to help junior lawyers improve, they need to be honest about what they are optimizing for. Speed is easy to buy. Judgment is not. Tools that prioritize instant answers may look efficient in demos, but they risk producing lawyers who are faster and less capable at the same time. That is not a trade most firms would accept if they saw it clearly.
The classroom data suggests a simple but uncomfortable truth. AI does not automatically make junior lawyers better. In many cases, it makes them worse, unless it is deliberately designed to slow them down, challenge them, and force them to think.

That may feel counterintuitive in a profession obsessed with efficiency. But judgment has never been built quickly. AI should not pretend otherwise.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks; her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
