
Police Wonder If AI Bodycam Reports Are Accurate After Model Transforms Officer Into A Frog – Above the Law

We’ve spent the better part of the last two years documenting the parade of attorneys firing off AI-muddled filings. It’s been a field day for bar disciplinary authorities and an endless source of content for us. But at the end of the day, lawyers confidently citing Martinez v. Maldonia Airways have, for the most part, been caught and laughed at. Once lawyers are involved in a case, there are often opposing counsel and, more importantly, judges who will check their work.

It’s not foolproof. There are, of course, asymmetric cases where one party has superior legal resources and could slip one past a judge when opposing counsel doesn’t see it. That said, in the majority of cases involving hallucinated filings, there’s someone to catch the lazy lawyers with their pants down and a large language model in their hands.

It gets much more dangerous when law enforcement gets into AI hallucinations. Like these Utah cops auditioning for Paw Patrol with Chase’s new amphibious partner “Croaker,” a new addition who doesn’t have time to play it by the book but, dammit, he gets results!

HEBER CITY, Utah – An artificial intelligence that writes police reports had some explaining to do earlier this month after it claimed a Heber City officer had shape-shifted into a frog.

All Cops Are Bullfrogs, as they say.

The culprit behind the officer’s miraculous transformation is the AI credulously accepting the magic of cinema, just like those Nicole Kidman AMC ads tell us we should.

“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,’” Sgt. Keel told FOX 13 News. “That’s when we learned the importance of correcting these AI-generated reports.”

So someone was watching The Princess and the Frog in the background, and the AI, which processes audio but apparently lacks the discernment of a moderately alert golden retriever, wove Disney’s magical narrative into the official record. As ominous warnings about the dangers of massive police budgets and reckless technological advances go, consider this the proverbial “frog in the pot of water” moment… and the water was just run through a data center to generate 400 words about how to sit in a library.

By the way, the implication of the above quote is that before turning one of the officers into a frog, the department had NOT considered “the importance of correcting these AI-generated reports.”

The cops had been using an AI tool called “Draft One” from Axon (the Taser people) that uses AI models to transform body camera audio into police reports. Apparently, the department was also testing a program that uses AI to generate reports from the footage itself, and that one was not fooled into conflating the officer with Prince Naveen.

Futurism notes that Draft One already faced serious criticism for its role in policing:

Critics also argue that the tool could be used to introduce deniability and make officers less accountable in case mistakes were to fall through the cracks. According to a recent investigation by the Electronic Frontier Foundation, Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public.”

According to records obtained by the group, “it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.”

Add in that generative AI systems have been repeatedly shown to perpetuate racial and gender biases, and that’s before they started teaching Grok to preach about White Genocide and digitally undress children. These models are trained on data sets that inevitably reflect society’s existing prejudices. Then consider what happens when that technology is deployed by law enforcement, an institution with its own thoroughly documented history of acting upon racial bias. It’s just stacking bias on top of bias.

And it’s doing it in the name of “efficiency,” which is certainly one argument you can make for embracing prejudices.

This should all give everyone pause that the software is already, sigh, jumping to conclusions elsewhere. A frog in a police report is funny because it’s falsifiable… presumably. “Suspect appeared nervous and evasive” is not. The real concern rests in the less comical areas where the AI shades police reports in directions that are prejudicial, unfalsifiable, and completely invisible to review. Criminal cases already present the most glaring asymmetries in the legal system.

When bots are falsifying the record and making it look “neutral,” there’s not much hope for the average defendant.




Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.