
On February 20th, my article warning about the dangers of clients using GenAI tools and creating discoverable information was published. Unbeknownst to me, the day before the article was published, a ruling from the Southern District of New York affirmed my very fears. So once again, I get to say, "I told you so."
Contrary to what many think, just because someone puts something into a GenAI tool doesn't mean it's private. In United States of America v. Bradley Heppner, Judge Rakoff ruled that certain written exchanges Heppner had with the GenAI platform Claude were not protected from disclosure to the government by either the work product or attorney-client privilege.
Without Suggestion from Counsel

The key concept cited by Judge Rakoff was that Heppner consulted Claude without any suggestion or direction of counsel when he: a) outlined for Claude his defense strategy and sought comments, b) outlined the law and facts he might argue, and c) asked what the other side might argue. He then gave what he had learned from Claude to his counsel. Heppner argued that he did all this in anticipation of speaking with his lawyer to get legal advice.
The Attorney-Client Privilege

According to the court, application of the attorney-client privilege requires a communication between a client and their lawyer that was intended to be and was kept confidential, and that was made for the purpose of obtaining legal advice. Judge Rakoff made short work of Heppner's attorney-client privilege argument.
First of all, the communications between Heppner and Claude were not between lawyer and client but between a client and a GenAI platform. Second, the communications were not confidential. Under the terms of use, it was clear that Claude collects data from those who use it and then uses those communications for training purposes. Heppner was thus clearly on notice of the lack of confidentiality and of the fact that any input data could be disclosed to others. As a result, said the court, Heppner had no reasonable expectation of privacy.
The court noted Heppner's argument that he consulted with Claude intending to give the results to counsel to obtain advice later. But that argument rang hollow since Heppner didn't tell counsel in advance that he was going to do it, and his lawyer didn't know he had done it.
The Work Product Privilege

The work product privilege is designed to protect and shelter the mental processes and thinking of an attorney in representing their client and in anticipation of litigation. But the key, said the court, is that the material needs to be prepared by the attorney. Certainly, said the court, the privilege may apply if the work is done by an agent at the direction of the attorney.
At first blush, that sounds like it might save the Heppner communications from disclosure. But once again, Heppner didn't communicate with Claude under the direction of his lawyer or, again, even with his lawyer's knowledge. So there was no way either Heppner or Claude was acting as an agent of the lawyer. Even if the material was prepared in anticipation of litigation, the privilege doesn't apply; nor did the material reflect the lawyer's strategy or mental processes.
Lessons Learned

I have been warning about the impact of throwing caution to the wind when inputting sensitive material into GenAI tools. First, it's clear that what a client puts into a GenAI tool and what they get out before they see a lawyer is fair game for discovery.
Nor, under the court's analysis in Heppner, will such material be protected once litigation is commenced unless the use was directed by the lawyer. That doesn't change by saying afterwards, "Well, I was going to give it to my lawyer." Right. How convenient.
Granted, some systems provide the option to direct the tool not to disclose the information to others or use it for training. But it still pays to read the terms of use very carefully before placing confidential material into the platform.
Oh, and by the way, ignoring the privilege issue for the moment: under the ethical confidentiality rules (Model Rule 1.6), it's not just confidential material we need to protect. It's information "relating to the representation of a client." That's a little broader.
Nor will ignorance of the terms of use be an excuse. Terms of use matter, and it's clear that, lawyer or not, they had better be read. So, as I have discussed before, we as lawyers need to educate our clients about these basic principles if we want to protect them down the road.
But What About the Lawyers?

But what about us lawyers? It's been said over and over that we shouldn't put client confidential material into an open or public system, and that we need to be careful in directing our clients to use the tools as well.
Merely telling a client to look something up on ChatGPT doesn't make what they input or get back privileged if the other criteria are not met. But more and more, I see lawyers themselves going to public GenAI tools to do many of the things Heppner was doing: brainstorming their cases and strategies.
Will the work product privilege apply to that material? Certainly, if the lawyer inputs the material, the platform might be considered an agent. Assuming that the material is being prepared or obtained in anticipation of litigation and contains or references the lawyer's mental processes and strategies, there should be no problem, right? Maybe.
As I have discussed, the issue is whether the privilege is waived by placing the material in a public platform where, as with Claude, the material is retained by the platform and used for training.
In thinking about how this issue might come up, assume that you use Claude and ask it for help evaluating your strategy. Assume that what you say is relevant to the case itself (which, granted, could be a tall order for the other side to show), and your opponent moves to compel production of the material. You make your arguments, and your adversary says with a sly smile, "I actually asked ChatGPT what it thought about this. Here is what it said":
Placing thoughts and ideas into a large language model (LLM) like ChatGPT could potentially waive work product protection. If a lawyer uses a public, consumer-facing LLM (like ChatGPT or Copilot) without a confidentiality agreement or enterprise-level protections, inputting sensitive legal analysis or impressions might be considered disclosure to a third party. If the provider reserves the right to retain, review, or use the input data, a court might find that confidentiality was not preserved.
Nothing like having your own tool stuck up your you-know-what. As the judge says, you've got 10 days to produce your prompts and the outputs.
What's the Point?

The point is not to avoid GenAI tools but to use them knowledgeably, understanding the risks to you and to your client. You can't do that by sticking your head in the sand about GenAI.
You need to carefully read the terms of use. You need to train yourself. Beyond what you do for yourself, you also need to educate your clients. You need to think about what you're putting in and getting out, and weigh the risks. As a profession, we don't want our clients or ourselves ending up like Mr. Heppner.
Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
