AI, Deepfakes, And Litigation: It’s Not Always What It Seems

A willingness to adapt to the changing times is essential in today’s rapidly evolving technology-driven environment. This is all the more important as artificial intelligence (AI) advances occur at an exponential rate, forcing our courts into uncharted territory rife with AI-altered evidence like deepfake videos.

For example, in a recent case in the state of Washington, a King County Superior Court judge ruled on the admissibility of AI-enhanced video in a triple murder prosecution. The defense sought to enter into evidence a cellphone video that had been enhanced using AI technology.

Judge Leroy McCullough expressed concern about the lack of transparency regarding the AI editing tool’s algorithms before precluding the admission of the altered video. He determined that the “admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable process used by the AI model.”

That case is but one example of the emerging dilemma facing our trial courts. Determining the admissibility of videos created using AI tools presents a challenge even for the most technology-adept judges, of which there are relatively few. Grappling with these issues has been all the more problematic in the absence of existing guidance or updated evidentiary rules. Fortunately, help is on the way in the form of ethics guidance and proposed evidentiary rule amendments.

A recent report issued by the New York State Bar Association’s Task Force on Artificial Intelligence on April 6 discussed the issue of AI-created evidence and current efforts to address it. The 91-page Report and Recommendations of the New York State Bar Association Task Force on Artificial Intelligence addressed a wide range of issues, including: 1) the evolution of AI and generative AI, 2) its risks and benefits, 3) how it is impacting society and the practice of law, and 4) ethics guidelines and recommendations for lawyers who use these tools.

One area of focus was the impact of AI-created deepfake evidence on trials. The task force acknowledged the challenge presented by synthetic evidence, explaining that “(d)eciding issues of relevance, reliability, admissibility and authenticity may still not prevent deepfake evidence from being presented in court and to a jury.”

According to the task force, the threat of AI-created deepfake evidence is significant and may impact the administration of justice in ways never before seen. As generative AI tools advance, their output is increasingly sophisticated and deceptive, making it incredibly difficult for triers of fact to “determine truth from lies as they confront deepfakes.” Efforts are underway on both a national and state level to address these concerns.

First, the Advisory Committee for the Federal Rules of Evidence is considering a proposal by former U.S. District Judge Paul Grimm and Dr. Maura R. Grossman of the University of Waterloo. Their suggestion is to revise the Rule 901(b)(9) standard for admissible evidence from “accurate” to “valid and reliable.”

The new rule would read as follows (the proposal strikes “an accurate” from subsection (A) and adds the remaining new language, including all of subsection (B)):

(A) evidence describing it and showing that it produces a valid and reliable result; and

(B) if the proponent concedes that the item was generated by artificial intelligence, additional evidence that:

(i) describes the software or program that was used; and

(ii) shows that it produced valid and reliable results in this instance.

The advisory committee is also recommending the addition of a new rule, 901(c), to address the threat posed by deepfakes:

901(c) Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

Similarly, in New York, Assemblyman Clyde Vanel has introduced bill A 8110, which would amend the Criminal Procedure Law and the Civil Practice Law and Rules (CPLR) to address the admissibility of evidence created or processed by artificial intelligence.

He suggests distinguishing between evidence “created” by AI, where the AI produces new information from existing information, and evidence “processed” by AI, where the AI produces a conclusion based on existing information.

He posits that evidence “created” by AI would not be admissible absent independent evidence that “establishes the reliability and accuracy of the AI used to create the evidence.” Evidence “processed” by AI would require that the reliability and accuracy of the AI used be established prior to admission of the AI output into evidence.

Legislative changes aside, there are other ways to adapt to the changes wrought by AI. Now, more than ever, technology competence requires embracing and learning about this rapidly advancing technology and its impact on the practice of law at all levels of the profession, from lawyers and law students to judges and regulators. This includes understanding how existing laws and regulations apply, and whether new ones are needed to address emerging issues that have the potential to reduce the effectiveness of the judicial process.

The evolving landscape of AI presents both opportunities and challenges for the legal system. While AI-powered tools can enhance efficiency and analysis, AI-created evidence like deepfakes poses a significant threat to the truth-finding process.

The efforts underway, from proposed rule changes to increased education, demonstrate a proactive approach to addressing these concerns. As AI continues to advance, a multipronged strategy that combines legal reforms, technological literacy within the legal profession, and a commitment to continuous learning is needed to ensure a fair and just legal system in the age of artificial intelligence.





Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She’s been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She’s easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at niki.black@mycase.com.