
Legal AI vendors talk about trust constantly. Transparent models. Responsible AI principles. Guardrails and disclosures. Yet many lawyers distrust legal AI not because it is unsafe or unethical, but because it feels inattentive. That distinction matters more than most discussions of trust acknowledge.

This became clear during a series of empirical classroom pilots run through Product Law Hub using an AI-based legal coach called Frankie. The pilots were designed to observe how users build or lose trust in AI systems while learning judgment-based legal skills. The findings draw on quantitative engagement data and qualitative interviews conducted during and after the course.

What emerged was counterintuitive. Users were willing to tolerate difficulty, ambiguity, and even uncertainty. What they did not tolerate was repetition, generic responses, and overstructured interactions that made the system feel inattentive to context.

Lawyers Do Not Trust Politeness. They Trust Judgment.

Many legal AI systems are designed to be agreeable. They explain patiently. They reassure users. They avoid friction. On paper, that looks like good UX. In practice, it often has the opposite effect.

During the pilot, users consistently reported lower trust when the AI behaved in overly “helpful” ways. Repeating the same guidance in slightly different words. Offering generic checklists regardless of context. Steering users toward safe, obvious answers without engaging the substance of the problem.

These interactions felt polite, but not thoughtful. Users described them as shallow or inattentive. Trust eroded quickly.

By contrast, when the AI challenged assumptions, surfaced competing considerations, or forced users to grapple with ambiguity, trust increased. Even when the interaction was harder, users felt the system was paying attention.

Repetition Erodes Trust Faster Than Difficulty

One of the clearest quantitative signals from the pilot was that trust dropped more sharply in response to repetition than to hard questions. Sessions shortened when users encountered recycled prompts or familiar phrasing. Follow-up engagement declined even when the underlying legal issue was manageable.

Interviews reinforced this pattern. Users were explicit that difficulty was not the problem. In fact, many welcomed it. What frustrated them was the sense that the system was not responding to their specific inputs.

For lawyers, repetition is a red flag. It signals that a tool is not reasoning, but pattern-matching. Once that perception takes hold, trust is hard to recover.

Overstructuring Can Feel Like Disengagement

Another trust killer was overstructuring. Checklists and frameworks were helpful early, especially for less experienced users. But when structure persisted regardless of context, it began to feel like the system was ignoring nuance.

Users described these interactions as “going through the motions.” The AI was doing what it was programmed to do, not what the situation required. That distinction matters deeply in legal work, where credibility often turns on whether advice reflects situational awareness.

Overstructuring is often justified as a safety measure. In reality, it can undermine trust by signaling that the system is not capable of adapting.

Realism Builds Trust Better Than Reassurance

One of the strongest trust-building signals in the pilot was realism. Users consistently preferred fewer, richer scenarios over large numbers of simplified questions. Role-play exercises that incorporated stakeholder pushback, incomplete information, and messy tradeoffs felt credible.

Importantly, these scenarios were not easier. They were harder. But they felt real. When the AI engaged with that complexity instead of smoothing it away, users trusted it more. When it defaulted to generic explanations or abstract advice, trust declined.

This mirrors how trust works between lawyers. We trust colleagues who acknowledge uncertainty and wrestle with complexity. We distrust those who offer tidy answers to messy problems.

Bugs Matter Less Than Behavior

Another surprising insight was how users reacted to technical imperfections. Minor bugs or rough edges were noticed, but they were not decisive. What mattered more was how the system behaved in response. If the AI adapted, acknowledged limitations, or adjusted its approach, trust was preserved. If it repeated itself or ignored context, trust evaporated.

This has implications for how legal AI teams prioritize development. Fixing every edge case matters less than ensuring the system behaves attentively when things are imperfect.

Trust Is Earned Through Resistance, Not Agreement

The most trusted interactions in the pilot shared one feature: the AI resisted the user in some way. It asked follow-up questions. It surfaced alternative views. It declined to collapse complexity into a single answer. That resistance signaled judgment.

In legal work, trust is not built by agreeing. It is built by demonstrating that you understand what is at stake and are willing to engage with it honestly. AI systems that optimize for smoothness miss this entirely.

Why Responsible AI Rhetoric Misses the Point

Much of the current conversation about trust in legal AI focuses on ethics, bias, and transparency. Those issues matter. But they are not the primary drivers of day-to-day trust for lawyers. Behavior is.

Lawyers trust systems that feel attentive, situationally aware, and willing to challenge them. They distrust systems that feel generic, repetitive, or overly eager to please.

The Product Law Hub pilot suggests that trust in legal AI is less about assurances and more about interaction design. Systems that push back thoughtfully earn credibility. Systems that try too hard to be helpful lose it.

Until legal AI builders and buyers internalize that distinction, they will keep investing in tools that look responsible on paper and feel untrustworthy in practice.

Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks; her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
