
Legal AI tools are usually sold as if lawyers are interchangeable. Same interface. Same prompts. Same outputs. The assumption is that if the technology works, everyone will benefit equally. That assumption is wrong, and it is one of the main reasons legal AI adoption keeps stalling inside firms.

This became especially clear during a series of empirical classroom pilots run through Product Law Hub using an AI-based legal coach called Frankie. The pilots were designed to observe how users at different experience levels interact with AI when learning judgment-based legal skills. The findings were based on a combination of quantitative engagement data and qualitative interviews.

What emerged was a sharp divide. Junior users wanted structure and reassurance. More advanced users wanted challenge and ambiguity. One system could not satisfy both, and when it tried, it frustrated everyone.
Legal AI Assumes A Uniform Lawyer Who Does Not Exist
Most legal AI tools are built around an implicit user model. That user is competent but unsure, wants guidance, and values efficiency over exploration. That model maps loosely to a junior lawyer. It does not map to a senior associate, counsel, or partner.

In the classroom pilot, this mismatch surfaced quickly. Early-stage users responded well to structured prompts, checklists, and staged reasoning. They wanted to know what mattered, what to consider next, and whether they were missing something obvious. Structure helped them orient themselves and reduced anxiety.

More experienced users reacted very differently. They described the same structure as constraining. They wanted the system to push back, surface edge cases, and challenge assumptions. When the AI behaved like a tutor, they disengaged.

The problem was not the AI’s intelligence. It was the assumption that one interaction mode could serve everyone.
Divergent Behavior Showed Up In The Data
This divide was not anecdotal. Quantitative usage patterns diverged sharply by experience level. Less experienced users spent more time in structured modes and followed prompts sequentially. More advanced users exited sessions earlier when interactions felt overly guided.

Interview feedback reinforced the data. Junior users described the AI as helpful when it reduced uncertainty. Senior users described the same behavior as unhelpful when it removed ambiguity. One group wanted guardrails. The other wanted sparring. These are not preferences you can average away.
One-Size AI Fails Quietly In Firms
In law firms, this seniority problem often goes unaddressed because failure is subtle. Junior lawyers may continue using the tool even if it limits growth, because they are grateful for guidance. Senior lawyers may stop using it quietly, dismissing it as “not for me.”

From the outside, adoption looks mixed but acceptable. In reality, the tool is underserving both groups. Juniors are not developing judgment as quickly as they should. Seniors are not getting value at all.

The classroom setting made this visible because disengagement was immediate and explicit. In practice, it shows up months later as stalled usage and quiet abandonment.
Structure And Ambiguity Are Not Opposites. They Are Stage-Specific.
One of the most important insights from the pilot was that structure and ambiguity are not competing values. They are appropriate at different stages of development.

Junior lawyers benefit from structured guidance early on, especially when learning how to spot issues and frame risks. But that structure must fade. If it does not, it becomes a ceiling rather than a scaffold.

Senior lawyers need ambiguity to sharpen judgment. They want tools that surface competing considerations, not tools that tell them what to do. When AI eliminates uncertainty too early, it removes the very terrain where senior judgment operates.

Legal AI that ignores this progression will always feel misaligned.
Vendors Are Not The Only Ones Responsible
It is easy to blame vendors for this problem, but buyers play a role as well. Firms often ask for a single system that “works for everyone” because it is easier to procure, train, and manage. That convenience comes at a cost.

By insisting on uniformity, firms reinforce the fiction that lawyers at different stages need the same kind of support. The result is technology that is broadly deployed and narrowly useful.

The Product Law Hub pilot suggests a different approach. AI systems should adapt to the user’s experience level and agency preference, not flatten them. That is harder to build and harder to buy, but it is the only path that respects how lawyers actually work.
Why This Matters More As AI Becomes Embedded
As AI moves from optional tool to embedded infrastructure, the seniority problem becomes more consequential. Tools that junior lawyers rely on shape how they learn to think. Tools that senior lawyers reject shape whether institutional knowledge is reinforced or lost.

Ignoring experience-level differences does not just affect adoption. It affects talent development.
The Uncomfortable Takeaway
The uncomfortable lesson from the classroom data is that legal AI does not fail because it is not smart enough. It fails because it is not differentiated enough. Lawyers are not interchangeable users. They never have been. Systems that pretend otherwise will continue to disappoint, no matter how sophisticated the underlying models become.

Until legal AI acknowledges the seniority problem and designs for it explicitly, firms will keep buying tools that look promising, deploy broadly, and quietly fail where it matters most.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks. Her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
