
There’s lots of talk about the impact of GenAI and LLMs on the practice of law and what they will do to everything from workflows to business models to young lawyer training. But one thing that’s not talked about much is the impact GenAI will have on client relationships.

Clients have always come to their lawyer believing in the rightness of their cause. Now they will come with information from a third player: an LLM tool. Whether that information is right or wrong, it’s going to affect the trust between lawyer and client. And lawyers had better be ready.
To State the Obvious, Clients Are Using GenAI for Legal Questions
The large accounting firm Deloitte recently surveyed the top 100 Dutch law firms to determine the state of AI adoption in day-to-day operations. (The fact that the survey was done by an accounting firm that itself offers legal services, at least in some jurisdictions, ought to give lawyers pause.) The survey looked at a variety of things, including strategy, training, and, importantly, client expectations.
Here’s what Deloitte discovered about law firm clients: 60% of the firms report that clients are now using AI tools to perform simple legal tasks. As a result, clients are expecting faster turnaround times from their lawyers, transparency about AI risk, and, of course, lower fees. Significantly, only 3% of the firms had seen no change in client expectations.
What this means is that clients will not only use the tools to perform “simple” tasks; they are also going to use them more and more for pure legal advice and strategy, often even before they see a lawyer. This poses all kinds of practical problems, particularly given that GenAI tools hallucinate, give wrong answers and advice, and often tell clients what they want to hear.
The Practical Problems
If a client talks to a GenAI tool, gets bad advice, and then acts on it to their detriment, that’s a real problem. The client may very well place themselves unknowingly in harm’s way. And by postponing seeking human legal advice, the client may make their position even worse. There’s also the discoverability problem: what the client tells their favorite bot may itself be discoverable, as I have written before.

So, by the time the client does finally see a lawyer, that lawyer may have to spend time cleaning up a mess. That will likely cost the client more money, not less, in the long run. But the practical problems may be the least of it.
A Human Relations Problem
The human relations dynamic plays out in concrete ways trial lawyers will recognize immediately. Think of this: there’s a dispute with conflicting testimony. The client thinks their version will prevail in front of a jury, and the bot supports them. The lawyer looks at the testimony and knows intuitively that the client’s version will not convince the jury for a whole host of reasons: body language, jurors’ perceptions, and bias. How will the lawyer ever persuade the client (and their bot) that the client’s version will not prevail?
Since time immemorial, clients have come to a lawyer convinced of the merits of their matter. That their version of the facts is the most convincing. That their strategy for what their lawyer ought to do is the best. Whether it’s a family law client or a sophisticated businessperson, most of the time clients think they know more than their lawyers. Even in the best of times, this has placed the lawyer in a difficult spot.
Pointing out to a stubborn client that their theory and strategy are wrong is always dicey. Say too little and the client gets the wrong idea about their case. That wrong idea will only fester and grow over the course of the case and can lead to horrible results and trauma later.
I have seen it happen so many times: the lawyer gives the client the idea they are right, and a year later, when a good settlement offer comes around, the client balks because they think their case is better than it is.
But if the lawyer says too much, that’s also a problem. I’ve heard too many people complain that their lawyer “wasn’t on their side” because the lawyer was overly blunt in their assessment. It erodes trust.
But now we have a third player in the mix: a GenAI bot that may be flat-out wrong in its assessment of a case or problem. Moreover, it may be telling the client what they want to hear. And when a client tells their story to their favorite bot, they are going to tell it in the most favorable way.
So, if a client wasn’t already convinced of the merits of the case before, they now have “evidence” from the bot. The result? It’s going to be harder to disabuse them of what the bot has told them, and the lawyer’s job will get a whole lot harder.
Another problem: if a client listened to a bot before they came to see the lawyer, they are probably going to listen to one throughout the matter. So every call the lawyer makes, every recommendation they offer, might be reviewed by a bot.
But the crux of the matter is that law and legal strategy are always gray areas, even more so than other disciplines like medicine. And when it comes to strategy calls, the lawyer and the client only ever know the result of the strategy that was adopted, not of the ones that weren’t. So the second-guessing never ends.
Add the fee issue on top of this. The client believes, based on the bot, that the work the lawyer needs to do is not necessary. It’s a simple, slam-dunk case that shouldn’t cost as much as it does. But the lawyer has to clean up the mess that wrong advice may have caused. The lawyer has to spend time convincing the client of reality and of what needs to be done. All of that takes time and increases cost. In the meantime, a case and a relationship turn into a nightmare.
Bottom line: if lawyers aren’t careful, they will face an erosion of trust in the attorney-client relationship as AI’s judgment and advice are substituted for their own. That trust has always been the bedrock not only of the relationship but of getting the best result.
It Need Not Be Insurmountable
It’s not a hopeless situation. But it does require an understanding of the problem and greater education all the way around.
First and foremost, if there ever was a reason for lawyers to become educated about AI and its risks (and benefits), it is, ironically, to bolster the level of trust in the human side of things. A lawyer has to be ready to explain to the client not only why the bot is wrong when it is, but also that it’s inherent in the structure of LLMs to make mistakes and to tell the prompter what they want to hear. Lawyers also need to be ready to tell clients, before problems develop, about the risks of creating discovery trails. A lawyer can’t do all that without that knowledge themselves.
On the flip side, lawyers need to realize that GenAI tools often give sound answers. You can’t argue with the result of a prompt when the result is right; doing so will not breed trust, much less yield good outcomes. There is a time and place for GenAI tools, and lawyers must use them to their own and their clients’ benefit.
All that being said, good lawyers know the law, they understand exposure, and they know how best to navigate it. Now more than ever, they will need to understand their clients and be adept at explaining all of those things to them.
And know this: clients will have more information than ever before, so we had better be on our toes. Gone are the days when a lawyer could just say, “This is what we are going to do,” and expect the client to accept it.
Years ago, I was called upon to explain the intricacies of class actions to a room full of insurance executives. I already knew a lot about class actions, but I spent hours practicing what I was going to say to a group that a) was skeptical and b) had no understanding of class actions and their often counterintuitive peculiarities. At the end of the discussion, there was silence, and then one of them said one word: brilliant. That cemented their trust in me.
In the days of GenAI, it is just that kind of trust, built through preparation, knowledge, and understanding the client, that lawyers will need to earn by doing what GenAI can’t.
Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
