
You’re getting ready to make a document production to the other side. You’re worried, though, that the other side may use GenAI tools on the documents without ensuring they are protected from public disclosure.

You ask to see the other side’s policies just to be sure. They refuse. You ask the judge for a protective order since some of your documents contain trade secrets. The other side argues you are just delaying production and trying to make it hard for them to find and review documents. The judge denies your motion. Six months later, the documents turn up in the ChatGPT database. You move for sanctions, but the economic damage is already done.

The Reality

Think this couldn’t happen? Think again. We live in a world of interconnected GenAI tools where inadvertent or unintentional disclosure can easily happen. And their ease of use makes the temptation to use these tools, and the likelihood that someone will, pretty great. Moreover, it takes just one slip-up for documents to be jeopardized. Finally, while you may be able to control your own shop, you have little control once the documents leave your hands.

I talked to Matt Mahon, VP of Customer Experience at Level Legal, recently about these very problems. Level Legal is an e-discovery provider; I have written about the company before. I have found its people to be some of the most insightful in the business. And it’s refreshing to find a company in the space that is long on substance and short on hype. Mahon has thought a lot about the problems GenAI poses in the discovery context.

Mahon agreed that using GenAI tools to review your own documents pre-production is fine as long as you have a good policy in place, you train your people thoroughly, and “consistently remind team members of how to use the tools.”

But he gave me several examples of how your documents could end up public once they get into the other side’s hands.

Some Problematic Examples

Mobile phones are a prime risk, says Mahon. “It’s easy to download an email attachment to your phone, import it into your ChatGPT app, and risk a potential breach.”

Another common example: someone saves a photo of a document to their phone’s photo app to review later or for use in a deposition. Mahon legitimately asks, “Where does the photo go when you do that? I couldn’t tell you for sure. Some apps could allow an LLM to learn from the picture.” Moreover, as he points out, apps update their policies all the time, and users often don’t know what new permissions may have been added.

Another example: someone on the other side uses a GenAI tool to summarize some documents, letting the proverbial horse out of the barn.

Mahon also talked about the risk of providing documents to experts, which further widens the field of risk. The expert might, for example, use a GenAI tool to work more efficiently, making the documents public in the process.

Or what if the expert, or even one of the lawyers on the other side, uses a GenAI bot on their email to help organize it and help with replies and calendaring? If that email has a sensitive document attached as a PDF — picture an associate sending a hot PDF document to a partner with a “look at this” note — the document is now in the public domain.

Mahon also told me that tools like Dropbox may allow LLM tools to run in the background on stored documents. “These connections between different systems and applications can result in data getting downloaded in all sorts of ways.”

Yet another looming risk is posed by the proliferation of AI agents, says Mahon. “Agents can be installed on systems that have full file system access. Others are monitoring emails, some of which may contain attorney-client privileged communications, and these AI agents are reading those emails too, which would potentially jeopardize confidentiality and risk waiving privilege.”

So many ways that sensitive documents can go public, many of them inadvertently. And there aren’t a whole lot of good solutions.

Solutions Remain Elusive

There are some ways to reduce the risk, but none are foolproof. The parties could, by agreement, provide for and ensure reasonable protections. Or they could retain a third-party provider to hold the documents and allow their use subject to certain parameters, according to Mahon. And there’s always the option of seeking judicial intervention.

But these solutions are all too often difficult to obtain. A fundamental problem is that even today, after years of dealing with e-discovery, too many practitioners don’t understand it, don’t want to deal with it, and remain ignorant of its basic principles. They don’t get e-discovery in general, much less the increased risk GenAI poses. Moreover, lawyers and legal professionals aren’t exactly known for being proactive with technology. All of this makes agreement difficult.

The problem is also compounded by our adversary system. Trying to ensure reasonable protections by agreement is almost always going to be met with opposition, given that firms use so many different systems and some have more protections in place than others. And agreement requires revealing information about what a law firm is doing internally, always a sensitive topic. The problems are compounded even more where one side in a case has lots of documents and the other side few, as in most personal injury cases.

And most lawyers aren’t going to be happy about being forced to use a third-party provider, which adds time and effort to reviewing and using documents.

Trying to talk a judge into intervening is also problematic. Most judges hate discovery disputes since they inevitably devolve into “he said, she said” arguments. And the constant gamesmanship from both sides leads judges to either punt the issue back to the parties or maintain the status quo by not entering orders requiring anything beyond what is spelled out in the rules.

But Mahon says without some sort of protections in place, parties’ privacy could be at risk in virtually every case.

Granted, many documents produced in litigation carry no privacy expectation in any event. And we are all used to less privacy in general. But some documents — like those containing trade secrets — are highly sensitive, and making them public is a real economic threat to businesses and individuals. Medical records are also sensitive and are additionally covered by various privacy regulations.

What’s Needed

What’s really needed is for reputable think tanks like Sedona to become involved and offer guidance. What’s really needed is for rulemaking bodies to offer procedural and discovery rules that clearly state expectations and requirements.

Yet thus far, the rulemaking bodies ignore real risks like GenAI and deepfakes and concentrate on things like requiring lay witnesses who use GenAI materials to satisfy expert witness standards. While that may be of some value, it ignores far more serious threats, as I have written.

Stronger rules and statements from well-respected bodies on how to protect discovery documents would at least provide a baseline from which judges could evaluate protection requests.

It would validate the idea that firms working with documents obtained through discovery need to adequately protect those documents from disclosure. It would sensitize courts, and lawyers for that matter, to the very real risk of inadvertent disclosure.

It has long been the case that turning over digital documents to the other side risks disclosure of sensitive information. Now, though, with the advent of GenAI and GenAI agents, that risk is compounded exponentially.

As a profession, we can’t hide our heads in the sand and ignore that reality.

Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
