Washington Post Analysis Shows We Are Talking Too Much And Getting Questionable Advice From LLMs – And It May All Be Discoverable

The jury is still out on how much and how soon GenAI will impact the legal profession, as I pointed out in a recent article. But one thing is certain: GenAI is affecting what people are revealing, the questions they're asking, and what advice they're receiving. The implications for lawyers, or perhaps more accurately, their clients, are downright scary.

People are talking too much and getting wrong advice that's memorialized for future use and discovery.

I have sounded this alarm before. And now a recent Washington Post analysis of some 47,000 ChatGPT conversations validates many of these concerns in alarming ways.


The Post Analysis

Here's what the Post found:

  • While most people use the tool to get specific information, more than 1 in 10 use it for more abstract discussions.
  • Most people use the tool not for work but for very personal purposes.
  • Emotional conversations are common, and people are sharing personal information about their lives.
  • The way ChatGPT is designed encourages intimacy and the sharing of personal things. Techniques that make the tool seem more helpful and engaging have been found to also make it more likely to say what the user wants to hear.
  • About 10% of the chats analyzed show people talking about emotions. OpenAI has estimated that about 1 million people show signs of becoming emotionally reliant on it.
  • People are sharing personally identifiable information, mental health issues, and medical information.
  • People are asking the chatbot to prepare letters and drafts of all sorts of documents.
  • ChatGPT begins its responses with "yes" or "correct" more than 10 times as often as it starts with "no."

And of course, it still hallucinates. While the analysis focused on ChatGPT conversations, there can be little doubt that other public, and perhaps closed, LLMs are being used in many of the same ways and doing the same things.


The Problem

That means there's a lot of scary stuff out there that could, of course, be open to discovery in judicial and regulatory proceedings. Indeed, as I've previously written, OpenAI's CEO Sam Altman has recognized that the company would have to comply with subpoenas. And government agencies like law enforcement can seek access to private conversations with an LLM as well.

What the Post analysis tells me, though, is that people aren't recognizing this danger. They seem to think that the stuff they put in and get out is private. Indeed, the Post got the 47,000 conversations because people created sharable links to their chats that were then preserved in the Internet Archive. OpenAI has since removed the option that made shared conversations discoverable with a mere Google search, because people had accidentally made some chats public. That's troubling in and of itself.

Worse, the answers ChatGPT gives, since they tell the user what the user wants to hear, are often wrong. One thing I learned in my years practicing law is that clients usually start out convinced they are right. (Most never really change their minds.) Their mindset when their lawyer tells them they are wrong is that they would have received the answer they wanted if only they had a better lawyer.

Now we have that problem on steroids. The client walks in convinced they are right and thinking that their position has been confirmed by ChatGPT.

Perhaps even worse, people may be acting on the advice they get from LLMs, getting themselves into even more trouble. Clients often held back from acting on something because they knew enough to know they should consult a lawyer first. But since that was expensive, they simply didn't act, out of an exercise of caution. Now they have what they think is confirmation. A green light.


Here’s
Where
We
Are

Put these facts together: people putting discoverable and potentially damaging material into an LLM thinking it's private (which LLMs encourage), and LLMs telling users what they want to hear or making up answers that users believe and may even act upon. Combine that with some common situations, and it becomes clear why these factors should be concerning to lawyers.

It doesn't take much to foresee a C-suite officer, for example, trying to solve a thorny personnel problem by brainstorming with ChatGPT, commenting on its responses in a back-and-forth manner that creates a paper trail for a future wrongful termination case.

Or a disgruntled spouse venting in a conversation that becomes public in a divorce or custody proceeding. Or people seeking advice on how to hide documents. Or how to avoid discovery. Or taking advice on how to avoid paying taxes.

Or someone in a fit of rage writing something threatening even though they were just venting. And then getting charged with terroristic threatening.

I could go on and on.

And don't forget, the tools are going to get better.


An Added Issue

I am sure that the Post got access to the 47,000 conversations in a legitimate way. But that access also seemed pretty easy, and it carried the risk that some participants didn't realize their conversations were public.

And that makes me uneasy. As we have seen over and over in the digital world, what many think is private somehow becomes public. I worry that many of the millions of conversations with LLMs might end up being not private at all, whether through legitimate or illegitimate means.


What’s
a
Lawyer
to
Do

Back in the early days of eDiscovery, many lawyers pushed to educate their clients about the perils of not being careful with what they said in emails, texts, and other digital tools. Even so, people still screw up and say things they shouldn't, thinking or assuming that just because it's digital, it's somehow private. Now we have a tool that in essence eggs you on to say or do something you perhaps shouldn't, and then helps you do it.

It’s
incumbent
on
all
of
us

lawyers,
legal
professionals,
vendors,
and
even
LLM
developers

to
do
all
we
can
to
make
ordinary
people
aware
of
the
dangers.
There
can
be
little
doubt
that
savvy
lawyers
will
use
the
proclivity
of
people
to
say
too
much
to
their
favorite
bot
to
their
advantage
in
litigation
and
discovery,
as
will
government
investigative
and
regulatory
entities.

Based on experience, I know many aren't going to get the message. But that doesn't mean we shouldn't try. We need to lead the way in training our clients about the risks before the damage is done, not after. We need to sound the alarm in ways they can understand.

The Post analysis is a start toward an educational process. We owe it to our clients to do more. And don't forget, we are ethically and practically bound to understand the risks and benefits of relevant technology. It's hard to run and hide from the relevance of GenAI anymore.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.