
Lawyers Using ChatGPT: Let’s Be Careful – Above the Law

(Photo by Jakub Porzycki/NurPhoto via Getty Images)

There’s a danger lurking when it comes to the legal profession’s use of GenAI tools. Because they are so easy and tempting to use to get answers quickly, we too often forget about the risks, especially when those risks are not well publicized.

Believe me, I know. You’re faced with a deadline. You rush to ChatGPT to get an answer without thinking through whether you may be putting confidential material or information about your client in the prompt. Or you convince yourself you have disguised it. Or you think no one will ever know anyway. Or you start off well, but in follow-up prompts you feel compelled to add more to get results. And you assume the privacy toggle will protect you. And boom, stuff you shouldn’t have revealed is in there.

But that’s exactly what’s happening when it comes to inputting confidential client data, or semi-confidential data, into ChatGPT and other public-facing AI tools. Lawyers and legal professionals are lulled into thinking it’s no big deal and there won’t be any harm or consequences. But that’s not so.

Part of this complacency stems from assuming that toggling off the switch that lets the tool train on your inputs protects confidential material from an ethical and practical standpoint. That’s an incorrect assumption.

Part of the complacency comes from the lack of recent publicity about this risk. Early on, warnings about placing confidential material in an LLM’s hands were front and center. But as publicity about hallucinations and inaccuracies increased, the dangers of putting client confidences, or anything close to a client confidence, into a public system have gotten less fanfare.

And finally, many have gotten so used to using the tools for so many things that they aren’t as vigilant as they once were, or should be.


The Privacy Switch

Certainly, it’s good practice to use the privacy settings most public tools offer: telling the tool not to use your inputs to train the system, and using the temporary chat feature so that the tool presumably won’t save anything from the chat. But that does not protect the material in ways consistent with your ethical and client responsibilities.

First, there is no contractual commitment on the part of the tool provider to keep material confidential, or much of anything else; only that it won’t use the material to train. Second, most tools retain conversations for some time period no matter what (ChatGPT for 30 days, for example) for safety-related and other monitoring. That means you have no control over the data you have put in. Third, the provider owns the infrastructure and servers on which the tool runs, and your data is transmitted to those servers, over which you have no control. Next, using the privacy settings doesn’t mean the data is deleted. It’s there, and you have no control over access.

And finally, there is no guarantee that human review will never occur, no commitment to eliminate metadata or logging information, and no audit feature should you need to establish confidentiality. And certainly, these settings don’t ensure compliance with HIPAA and other privacy-related requirements.


Our Responsibilities to Our Clients

All of which leads back to what is required of lawyers. These requirements take two forms: the ethical rules that protect client materials, and the need to provide adequate protections so that the attorney-client and work-product privileges aren’t waived.

Turning first to ethics, ABA Model Rule 1.6(c) says, “A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.” So, does relying on the not-to-train provision and the commitment not to save your chat fall within the reasonable-efforts umbrella? While few courts have ruled, most bar association opinions say no.

At the very least, reasonable protection would require a specific contractual commitment to keep the material confidential, to isolate it and not commingle it with that of other users, to define the data retention and deletion terms, and to spell out with much more specificity what can be done with the data, similar to what is required of cloud storage providers, e-discovery vendors, and practice management systems.

Beyond the ethical question, there is a practical privilege-related concern: lawyers need to ensure that confidential materials are protected from discovery through the attorney-client and work-product privileges. While courts are beginning to look at these issues, as I have discussed, at the very least there is a substantial risk that these privileges are waived by placing the material in a public system.

Waiver hinges on whether the confidentiality of the material is adequately safeguarded and whether, in revealing the information, you have a reasonable expectation that it will be kept private. Given the various ways that material provided to a public LLM could leak out, it’s hard to say that this standard is met. If your reasonable expectation hinges on the naked representation that the material won’t be used to train, it’s pretty damn weak. A representation, by the way, from those who have no obligation to, understanding of, or even concern for lawyers’ duties to their clients.


Some Bedrock Rules

Certainly, there are many fine and safe uses of public tools. They are inexpensive, can save time, and can make you a better lawyer in many ways. But as our reliance on them increases, we often forget some bedrock principles and risks. Don’t put client names in the prompt. Use hypotheticals that don’t reveal sufficient information for someone to identify the client.

Strip anything that could identify the client or the matter, along with facts that could be used to piece together client information. Keep in mind the discovery-related risks when you place something in the chat. A good rule is the New York Times test: if your prompt appeared in the Times, would you feel comfortable?
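For readers who pipe prompts to a tool programmatically, the stripping step above can be sketched as a redaction pass that runs before anything leaves your machine. This is a minimal illustration only, with made-up names and matter details; a real redaction list must be built per matter by a human, and no script substitutes for reviewing the prompt yourself.

```python
import re

# Hypothetical identifiers for this matter, mapped to neutral placeholders.
# Building this list is a human judgment call, not an automated one.
REDACTIONS = {
    "Acme Widgets, Inc.": "[CLIENT]",
    "Jane Roe": "[OPPOSING PARTY]",
    "Case No. 24-cv-1234": "[MATTER NO.]",
}

# Catch email addresses that slip through the explicit list.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(prompt: str) -> str:
    """Replace known identifiers and any email addresses with placeholders."""
    for term, placeholder in REDACTIONS.items():
        prompt = prompt.replace(term, placeholder)
    return EMAIL_RE.sub("[EMAIL]", prompt)

draft = ("Our client Acme Widgets, Inc. was sued by Jane Roe "
         "(jroe@example.com) in Case No. 24-cv-1234. Summarize common "
         "statute of limitations defenses.")
print(redact(draft))
```

The point of the sketch is the ordering: redaction happens locally, before the prompt is sent anywhere, and what remains should pass the New York Times test on its own.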

Remember that your ethical obligation is not just to protect client secrets. Under Rule 1.6, it’s to not reveal information relating to the representation of a client. That’s broader than client secrets alone, and it makes double-checking your prompt critical.

Bottom line: if you think it may be wrong to put something in a prompt, it’s wrong.


Let’s Be Careful Out There

Let’s not be lulled into complacency, relying on nothing more than some vague commitment not to train on our information to meet our serious obligations to protect our clients.

Years ago, there was a television cop show entitled Hill Street Blues. Each episode began with a daily briefing by the precinct captain about the day’s events. He ended his briefing with the words, “Let’s be careful out there.”

The biggest risk is always forgetting there is a big risk.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.