
There is a familiar anxiety running through legal education and law firms alike. If AI can analyze issues, draft language, and flag risks, what happens to legal judgment? Is it being replaced, diminished, or quietly outsourced? The more uncomfortable answer is different. AI is not replacing legal judgment. It is exposing how little of it we explicitly teach.

This became clear during a series of empirical classroom pilots run through Product Law Hub using an AI-based legal coach called Frankie. The pilots were conducted in a product counseling course and designed to observe how students develop judgment-based legal skills when working alongside AI. The findings draw on quantitative engagement data and qualitative interviews conducted throughout the course.

What emerged was not a story about automation. It was a story about instruction.
We Talk About Judgment, But We Rarely Teach It

Legal education and law firm training both emphasize judgment as a defining professional skill. We expect lawyers to know how to weigh risks, frame advice, and make tradeoffs under uncertainty. Yet much of legal training focuses on correctness. Did you spot the issue? Did you cite the right authority? Did you reach a defensible conclusion? Judgment is assumed to emerge along the way.

In the classroom pilot, that assumption was tested directly. Students were given realistic scenarios and asked to work through them with AI support. The difference in outcomes turned not on whether the AI provided the right answer, but on how it explained the answer.
‘Why This Matters’ Changed Everything

The strongest learning gains occurred when the AI explained why an answer mattered in context, not simply whether it was correct. When feedback connected legal analysis to business impact, stakeholder priorities, or downstream consequences, students retained more and engaged more deeply. Quantitative data showed longer session times and higher completion rates when explanations tied legal issues to product decisions. Interviews confirmed that students felt more confident explaining their reasoning, not just reaching conclusions.

By contrast, when feedback stopped at correctness, learning stalled. Students moved on quickly, but they struggled to articulate why an issue mattered or how it should be framed for a non-legal audience.

This distinction is easy to overlook because correctness is measurable. Judgment is not. AI made that gap visible.
Framing Is Learned, Not Inferred

One of the most consistent improvements observed during the pilot was in framing. Students became better at explaining tradeoffs, prioritizing risks, and tailoring advice to context when the AI modeled that behavior explicitly.

This did not happen because the AI was smarter than the students. It happened because it made the reasoning process legible. It showed how legal considerations connect to product timelines, customer impact, and business strategy.

In practice, this is what senior lawyers do instinctively. They do not recite doctrine. They translate it. Yet that translation step is rarely taught systematically. AI forced it into the open.
The Myth That Judgment Cannot Be Taught

There is a persistent belief in legal culture that judgment is something you absorb through experience, not something that can be taught directly. Experience certainly matters. But the pilot suggests that judgment can be accelerated when it is made explicit.

Students improved fastest when the AI articulated the reasoning path, not just the destination. They learned how to think about tradeoffs, not just how to reach outcomes. That learning transferred across scenarios.

This should matter to firms struggling with training. If judgment were truly untouchable, AI would have little to contribute. Instead, the data suggests that AI can support judgment development when it is designed to surface reasoning rather than obscure it.
Education And Practice Are Closer Than We Admit

One of the more interesting aspects of the pilot was how closely classroom dynamics mirrored practice. The same behaviors that supported learning also supported credibility. Systems that explained context built trust. Systems that collapsed nuance undermined it.

This alignment matters because it challenges the idea that education and practice require fundamentally different tools. They require the same thing: support for reasoning, not shortcuts around it.

Law schools and firms often talk past each other about preparedness. The pilot suggests a shared opportunity. Both environments struggle to teach judgment explicitly. AI did not create that gap. It revealed it.
What AI Makes Impossible To Ignore

Before AI, gaps in judgment training were easier to hide. Senior lawyers compensated. Juniors learned slowly. Feedback was uneven. AI interactions, by contrast, are immediate and observable. When a system explains why something matters, learning accelerates. When it does not, the absence is obvious.

That visibility is uncomfortable, but valuable. The Product Law Hub pilot did not show that AI can replace judgment. It showed that we have been relying on implicit learning for too long. AI forces us to decide whether we are willing to teach what we claim to value.
The Real Lesson For The Profession

The real lesson from these findings is not about technology. It is about intention. If we want lawyers who can exercise judgment, we have to teach judgment. That means explaining tradeoffs, modeling reasoning, and connecting legal analysis to real-world consequences. AI can help with that, but only if we stop using it as an answer machine.

AI did not expose a weakness in lawyers. It exposed a weakness in how we train them. That is a problem worth solving, with or without technology.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks; her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
