Litigants trying to understand their legal situation with the help of AI are either totally fine or totally screwed. Welcome to the modern practice of law!
Earlier this month, Judge Jed Rakoff of the Southern District of New York ruled in United States v. Heppner that 31 documents that a criminal defendant generated using the consumer version of Anthropic’s Claude were not protected by attorney-client privilege or the work product doctrine. Meanwhile, Magistrate Judge Anthony P. Patti of the Eastern District of Michigan heard a substantially similar discovery dispute and concluded in Warner v. Gilbarco, Inc. that of course the other side can’t seize the litigant’s legal work just because it went through a large language model.
In Heppner, the defendant had already engaged counsel and queried the AI on his own to prepare materials for a meeting with his lawyers. By contrast, the party in Warner represented herself and used AI to prepare her own case.
The fact that Warner acted as her own counsel and that the searches directly reflect her legal strategy goes some way toward explaining the distinction, but it doesn’t go quite far enough. The Heppner decision treated AI as a non-lawyer third party whose terms of service acknowledge that inputs may not remain confidential. Those issues don’t change just because the party is acting as their own counsel.
Judge Rakoff identified a Claude ping as a third-party disclosure. Judge Patti drew a distinction, based on the D.C. Circuit in United States v. American Telephone & Telegraph Co., that voluntary disclosure to a third party does not, by itself, waive work product protection. To defeat the work product doctrine, Judge Patti ruled, the party has to disclose the material directly to an adversary or in some way likely to reach the adversary’s hands. So unless you’re litigating against Anthropic, you would be fine.
That’s where Judge Rakoff’s opinion holds to the letter of the law in a way that undermines the spirit in a world of AI tools. The Heppner confidentiality analysis pointed to Anthropic’s privacy policy and found no reasonable expectation of confidentiality, because the company asserts that it can collect user data, train models on it, and disclose information to government authorities and third parties.
Therefore, Rakoff reasoned, sharing information with Claude is like discussing your legal strategy in a crowded room. Except every major cloud service has substantially identical terms. If the client saves emails and documents on Microsoft OneDrive or something, have they waived all protections? If the client uses Gmail, they arguably waive privilege under this reasoning.
The Heppner analysis makes sense in the abstract, but practically we can’t allow our new cloud-based reality to vitiate traditional protections.
And that’s if you think an AI product is a third party at all, a concept that Judge Patti wasn’t sold on: “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.”
Had Heppner taken information that he received from his attorneys and gone to the local law library or even run standard Google searches, we wouldn’t be having this discussion. But these days, Google pumps your searches into its AI anyway… does that make a client’s internet search to figure out that legalese the lawyer just said on the call presumptively discoverable? That can’t be right.
It gets even worse when you realize Copilot is baked into Microsoft Office and Google’s Gemini is embedded in Workspace. The notes a client takes of an attorney meeting are traditionally protected, but if boilerplate terms of service for cloud applications can defeat the expectation of privacy, all bets are off.
These are previously untested applications of rules that were pretty clear before running aground on the jagged rocks of technology. As the Warner opinion notes:
Additionally, the Court agrees with Plaintiff that the pursuit of this information is “a distraction from the merits of this case[,]” and that Defendants’ theory, which is supported by no case law but only a Law360 article posing rhetorical questions, “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”
A Law360 article? If it were an Above the Law article maybe, but come on.
It’s worth noting, as Jennifer Ellis observed, that Judge Patti handles discovery disputes every day and has more intimate experience with the ways technology plays hell with the letter of the law. Judge Rakoff doesn’t spend his whole day on these complications.
As these cases proliferate, expect to see a divide between the magistrate judges and the district judges.
But Heppner is, for the time being, the go-to standard of the most important federal court in the country. Every Biglaw firm has already blasted out a client alert ruminating on its implications. Clients interested in AI are advised to “use enterprise tools,” though that’s unlikely to resolve the underlying problem.
Unless (or until) the AI bubble bursts so spectacularly that we’re back to our tried and true tools, the question remains whether courts should treat AI chat history as the equivalent of shouting a legal strategy in Times Square.
For now, the proper advice is that clients shouldn’t risk talking about their cases with AI. And maybe save everything locally. And maybe don’t run internet searches with Gemini. You know what? Maybe just don’t use a computer at all.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.