
It’s the morning of the first day of trial. Your opponent calls her first witness, who testifies about a video he says was taken at the accident scene. The video clearly shows your client running the red light. The witness points at the screen and says the video is a fair and accurate depiction of what he observed. The judge nods knowingly and looks at you. Your client is tugging at your sleeve and whispering something. There is something vaguely off about the footage. What do you do? How do you attack the presumed authenticity? Should you? What if the video is more or less authentic but has been enhanced? Should you mention that? How?

Deepfakes and evidence created or enhanced by AI are going to become increasingly prevalent. There are numerous examples like the one above, but few solutions or answers for lawyers, for judges who serve as evidentiary gatekeepers, and for jurors who are often the ultimate decision-makers in court.

That’s why what the Visual Evidence Lab at the University of Colorado Boulder recently did is important. In April of this year, the Lab gathered some 20 experts from academia, law, media forensics, journalism, and human rights practice for a full-day discussion of the use of video and AI, and of the problems AI is creating, and could create, in our courtrooms. The group released a report entitled Video’s Day in Court: Advancing Equitable Legal Usage of Visual Technologies and AI.

While the group’s focus was on video evidence, much of what was discussed applies to other forms of non-documentary evidence. The group talked about three key things:

- systematic public access to and storage of video evidence;
- guidelines for how judges and juries interpret video evidence, designed to mitigate bias and promote proper interpretation; and
- the issues AI poses for video evidence, with the goal of better establishing and ensuring its reliability and integrity.

The Access Problem

The group was concerned about access because, unlike documentary evidence, video evidence is haphazardly stored. Why does that matter? It prevents researchers and others from grasping the scope of the problem and the risks it poses.

It also precludes any meaningful analysis of the characteristics that might indicate a deepfake: “These visual materials cannot become a proper part of common-law jurisprudence, either, because lawyers and judges are not able to refer in any reasoned fashion to decisions of other courts regarding comparable videos.”

Frankly, I had not thought of this issue. But as we shall see, the inability to understand the scope and magnitude of the problem hampers the ability to deal with it systematically. You can’t solve a problem with anecdotes instead of facts. But anecdotes are all we have right now. And the access problem is only the beginning.

The Interpretation Problem

The impact of video evidence is different from that of documentary evidence, in ways that are often misunderstood. There is ample psychology research showing that the perception of video evidence can be more selective, biased, and shaped by what the report calls motivated reasoning, that is, using the evidence to support a preexisting conclusion.

In addition, the video medium itself can be manipulated to shape interpretation. Playback speed, for example, can alter the perception of video evidence: slowing footage down can make the depicted action seem more deliberate. Other factors, including camera angle and field of view, matter as well.

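How little it takes to do this is worth seeing. Below is a minimal sketch in Python using the OpenCV library (the file names and the half-speed factor are my own illustrative assumptions, not anything drawn from the report). It copies a video’s frames untouched and simply stamps a lower frame rate on the output, so identical footage plays back slower.

```python
import cv2  # pip install opencv-python


def change_playback_speed(src: str, dst: str, factor: float) -> None:
    """Re-encode `src` so it plays back at `factor` times original speed.

    The pixel data is copied unchanged; only the frames-per-second
    written into the output file changes, so a factor of 0.5 makes
    the same footage take twice as long to watch.
    """
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"),
                          fps * factor, size)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out.write(frame)  # every frame passes through untouched
    cap.release()
    out.release()


# Hypothetical example: a half-speed copy of accident-scene footage.
change_playback_speed("scene.mp4", "scene_slow.mp4", 0.5)
```

Not a single pixel differs between the two files, yet the slowed copy can leave a very different impression of how deliberate the depicted action was.
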
The report concludes, “Despite the multiple factors shaping interpretation and decision-making, judges, lawyers, and jurors are largely unaware of the various influences on how they construe what they see in a video.”

Put bluntly, video evidence, by its very nature, shapes decision-making in ways that other evidence does not. Yet there is precious little study of how this plays out in the courtroom, or of how altering or enhancing a video can change that reasoning. Without that research, it’s hard to know what is fair, and hard to define what impartial decision-making even means. For example, is it fair for a jury to be presented with an enhanced video that better demonstrates a bloody and brutal injury? Or does that place jurors too close to the victim and interfere with fairness?

The Impact of AI

All of these issues are compounded by AI, the report concluded. It’s hard to confidently determine whether a video accurately depicts what it is being offered to show, which is the standard test of authenticity. Three concerns arise:

- the difficulty of detecting and verifying AI-created media;
- the uncertainty about what kinds of enhancement are permissible in court; and
- the fear that deepfakes will become more prevalent.

Here’s the problem: as the report notes, the Advisory Committee on Evidence Rules decided in May of this year that no changes were necessary to Federal Rule of Evidence 901, which governs authenticity. Why? Because the Committee concluded that so few deepfakes had been offered as evidence. (Of course, that assumes that all “deepfakes” had been found and labeled, and that the labeling was recorded in a way that could be accessed, which gets back to the first problem.)

The Lab report notes:

The central challenge is how to establish robust authentication standards that can withstand scrutiny, without simultaneously creating verification systems that compromise people’s right to confront evidence or endanger the human rights of media creators and witnesses.

The report also noted that courts have long allowed the use and admission of technologically enhanced media like enlarged photos and interactive 3D models. But AI tools bring new levels of enhancement not seen before. Moreover, the ease of use and affordability of these tools make them ubiquitous.

Changes to resolution, brightness, contrast, sharpness, and other features (adjustments we all make every day, by the way) allow video evidence, and photographic evidence for that matter, to be presented in new and persuasive ways.

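To make the ease-of-use point concrete, here is a minimal sketch in Python using the Pillow imaging library (the file name and enhancement factors are illustrative assumptions on my part): three lines are enough to brighten, boost the contrast of, and sharpen a frame exported from a video.

```python
from PIL import Image, ImageEnhance  # pip install Pillow

# Hypothetical frame exported from a video; file name is illustrative.
frame = Image.open("frame_0123.png")

# Each enhancer takes a factor: 1.0 leaves the image unchanged,
# and values above 1.0 exaggerate the named property.
frame = ImageEnhance.Brightness(frame).enhance(1.4)  # noticeably brighter
frame = ImageEnhance.Contrast(frame).enhance(1.3)    # stronger contrast
frame = ImageEnhance.Sharpness(frame).enhance(2.0)   # crisper edges

frame.save("frame_0123_enhanced.png")
```

None of this requires expertise or expensive software, which is the point: the same tools that make routine touch-ups ubiquitous can quietly change what a fact-finder sees.
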
Here’s a real-world example of a problem with video. In a previous life, I was a swim official. One of the calls a swimming official makes in relay events is whether a swimmer left the blocks before his teammate touched the wall. The only way to make that call is to stand right next to the block. I can’t tell you how many times a spectator would come to me with a video taken 30 yards away to dispute a call.

That video, of course, was not an accurate depiction of what actually happened. But the spectator would nonetheless extrapolate from it what he was sure had happened.

The question is: at what point do those kinds of enhancements cross the line between what is convenient and proper and become a deepfake? We have no firm, universal rules to determine this. Without such rules, inequalities arise that undermine the consistent application of the rule of law.

There is, by the way, a proposed Federal Rule of Evidence 707 that would apply the Daubert reliability standard to determine the admissibility of AI-enhanced and AI-generated evidence. It is open for public comment until February 2026.

All of this, combined with the fear that deepfakes will become more and more prevalent, raises issues of evidentiary integrity, says the report.

What Is There to Do?

The Colorado gang didn’t just stop at identifying a problem; they came up with several recommendations to move us toward solutions:

- The development of standards for labeling, storing, securing, and archiving video evidence. This would include a data strategy along with a decentralized architecture that would enable use and analysis of that data.
- The development of visual evidence training for judges (e.g., how to probe and ask relevant questions) to better perform their role as gatekeepers.
- The development of research-based guidance to help jurors better evaluate video evidence.
- Systematic research into the prevalence of deepfakes in court to develop safeguards for AI-generated evidence.
- The issuance of ethics opinions on the offering of known or suspected AI-generated or AI-enhanced evidence.

According to the report:

Judges must be prepared to handle cases involving AI-generated and AI-enhanced video evidence. Improving notice and disclosure for AI-enhanced evidence can help safeguard reliability without further exacerbating the inequality of access to justice.

The Report’s Conclusion

The report concluded as follows:

The development of a long-term infrastructure for storing and accessing evidentiary videos, research-based training for judges, instructions for jurors, and safeguards for the admission of AI-based evidence will advance the consistent and fair use of video and AI technologies in the pursuit of justice.

Some Final Thoughts

Yes, the report is short on concrete, practical solutions. It’s one thing to say we need to do things like educate judges. It’s another to create the training modules and roundtables that actually do it. The former is easy; the latter is harder.

But what the Lab has done is a start. It’s a studied, inclusive, and fair examination of a problem that’s only going to get worse without action. While the devil is often in the details, you don’t get to the details without understanding the problem you are trying to solve.

That’s what the Colorado group is doing. That’s what we need more of if we, as a profession, are going to successfully confront the problem. Until we get serious about understanding the scope of this problem, we’re just playing courtroom roulette with the truth.

Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.