Deepfakes: A Problem In Search Of A Problem? – Above the Law

I asked a room full of lawyers and legal professionals recently how many of them had come across deepfakes in litigation. Not a single hand went up. Is the deepfake phenomenon a problem that's really not one? Or is it like the hallucinated case citation problem once was: skepticism that hadn't caught up with reality?

I was giving a presentation on deepfakes with the esteemed jurist, Xavier Rodriguez, at the ABA's TECHSHOW to some 50 or so lawyers and legal professionals when I asked my deepfakes question. Judge Rodriguez is a federal district judge for the Western District of Texas and a leading voice on technology and AI in the federal judiciary. I have written before about the threat of AI-generated deepfakes and, like Judge Rodriguez, fear their impact on our judicial system.

The fact that not one person raised a hand is significant. Granted, the sample size was small, but TECHSHOW typically draws some pretty savvy tech people and litigators. So, of anyone, they should be well aware of and sensitive to the potential problem.

We shouldn't have been all that surprised that no hands went up, though. After all, the Advisory Committee on Evidence Rules, which proposes changes to the Federal Rules of Evidence, recently rejected a change to Federal Rule 901 that would have strengthened authentication rules. One major reason: the Committee reportedly thought the change was premature given that there were so few reported cases involving deepfake evidence. The Committee opted for a wait-and-see approach.

But with all the publicity about deepfakes and the dangers they pose to our judicial system and society, you have to ask why they aren't showing up more. Is the deepfake just a problem in search of a problem (to paraphrase the saying about a solution in search of a problem)?


What's the Why?

There could be several reasons that we apparently aren't yet seeing a deepfake problem in our courtrooms.

Maybe litigants aren't yet savvy enough to create the kind of deepfake that passes the realistic-looking test one would need for litigation. For those with some tech knowledge, it seems pretty easy to create a convincing fake. But for those with less tech background, maybe it isn't.

Or perhaps litigants still have enough respect for, and outright fear of, a black-robed judge to shy away from brazenly offering fake evidence. After all, committing what is in essence perjury should give anyone pause.

Or maybe, as one litigator whom I know well and respect told me after the presentation, deepfakes are occurring, but lawyers and judges aren't catching them. After all, the photography and audio recording industries have conditioned us for years to believe that what you see in a picture or hear in a recording is, in fact, real. So, we assume things are real today when they aren't.

And the ability to use AI to create extremely realistic but fake evidence is a fairly recent phenomenon. It burst upon us quickly and continues to develop rapidly. So, our minds have not yet caught up with the fact that its use could create a problem for our litigation system.


It's Easy and Tempting
I tend to doubt the first two reasons because it's so easy to manufacture convincing evidence. Certainly, in the criminal law arena, the opportunity for defendants to use deepfakes would seem ripe for the taking: manufacturing a picture to establish an alibi, or creating an audio recording to suggest someone else committed the crime. The list could go on and on. Indeed, prosecutors have told me that they are in fact very worried about just this.

Another area ripe for abuse is family law. Someone seeking a TRO creates an audio recording suggesting, for example, domestic abuse. That puts a judge in a tough spot, since dismissing the recording as a fake could have devastating consequences if the judge is wrong.

But it's not just the bad guys. Even well-meaning people might be tempted to cross the line. Over my career, I saw litigants and witnesses constantly convince themselves of a version of the facts that was just not correct. Their minds would embellish the version they wanted and add things that simply didn't happen. Indeed, it's often not conscious; it's human nature.

And now it would be an easy line to cross from mental embellishment to manufactured proof. I once had a case that turned on whether a fire protection device was or was not present in a building. One person was sure it wasn't there when in fact it was. In the age of deepfakes, it would be easy to create a picture showing what the mind's eye was certain was true: a scene without the device.

Or suppose one side had, say, a picture showing the device was there, and their adversary concluded that picture was fake. The temptation to counter it with another fake one would be high.


Skepticism vs. Reality

Which brings us back to the unscientific poll at our presentation and the Rules Committee's attitude: why aren't we seeing the problem in our courtrooms?

Judge Rodriguez made a good point in our discussion that I mentioned above: there is a presumption of validity for photos, recordings, and videos. It's the notion that a picture is worth a thousand words. So, skepticism about what we see has not yet caught up with reality. At least not in the courtroom.

But more and more people are rightfully questioning what they see on social media and elsewhere. There is increased publicity about the deepfake phenomenon as the development of AI has created greater opportunity for realistic fakes. Maybe we just aren't there yet.

It's like hallucinated cases. Most people knew that LLMs could hallucinate from the time they burst on the scene. Yet it wasn't until later that the first hallucinated citation popped up in a courtroom. Now it happens all the time.

The reality is that lawyers and judges have not yet realized that virtually any piece of evidence whose realism we have taken for granted could now be fake. And that routine authentication may need to take on a whole new meaning.

But if it does, litigation will turn into a sideshow of battles over whether any piece of evidence is real. And, even worse, again as Judge Rodriguez pointed out, fact finders (judges and juries) won't or can't believe any piece of evidence. That could turn a litigation system designed for fact finding on its head: endless fights, with no one believing anything they see or hear.

That threat is real. Waiting and seeing is not an option.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.