
It’s Not An AI Hallucination – It’s Lazy Editing Of A Human Paralegal – Above the Law

There are now over 1,000 AI hallucination cases and counting around the world, according to one researcher. Covering hallucinations has become its own subgenre of legal journalism at this point, a growth industry rivaling the artificial intelligence industry itself. So, occasionally, we need a story to come along and remind everyone of the inconvenient truth that these professionally embarrassing mistakes aren’t the fault of the technology so much as a crucial operator error.

A new sanctions order out of the District of New Jersey in Gutierrez v. Lorenzo Food Group (flagged by Rob Freund, a must-follow for AI hallucination news) sets the stage with a familiar tale for those following the AI hallucination beat. A brief opposing a motion to dismiss contained incorrect citations and quotations attributed to the wrong cases. The brief also included citations to cases that had been bad law for decades. The court and defense counsel both identified the problems, and everyone began the countdown to the next big AI hallucination benchslap.

Except it never arrived.

Because after months of investigation (including conflicting affidavits, finger-pointing between colleagues, and an evidentiary hearing), Judge Evelyn Padin concluded that no one used generative AI at all. Instead, an unlucky paralegal had been substantively drafting the brief and, when a former associate told her that the brief needed to have Third Circuit citations (logically, as the case was in the Third Circuit), she took that instruction and, as Judge Padin observes, “made the regrettable decision to attribute quotations that were actually from cases outside the Third Circuit to cases within the Third Circuit.” The quotes had appeared in earlier drafts, and when told that they needed to be Third Circuit cites, the paralegal “seemingly swapped in the Third Circuit citations, making it appear as if the quotations came from those Third Circuit cases.”

Humans can hallucinate too!

The court was admirably direct about why this distinction doesn’t actually matter:

Whether GAI was used in drafting the MTD Opposition is not central to this Court’s decision because regardless of whether it was a person or a large language model that made these errors, the attorney responsible for filing the brief has an obligation to ensure that the arguments and contentions made within it are accurate and supported by existing law.

Artificial intelligence may accelerate the process of uncovering lawyers who take thorough editing for granted, but the mistake, in either event, is a human failure to check their work.

Attorney Geoffrey Mott, who signed the brief, reviewed exactly one draft of the opposition (the initial one) and, the court found, never looked at it again. As the paralegal made disastrous citation changes, seemingly no lawyer doubled back to cite check the final brief. The court noted that Mott’s assertion that he “thoroughly reviewed” the brief “at the very best, strain[s] credulity.”

But the cover-up, as always, made things worse. When the court first flagged the problems, Mott and the paralegal filed affidavits blaming the former associate for inserting the bad citations, as opposed to just giving the misunderstood instruction. The court was “deeply troubled” by this approach and didn’t sugarcoat it:

Mr. Mott was disappointingly slow to take any real ownership over these errors. The Court might have avoided a hearing (and Mr. Mott might have avoided monetary sanctions) had he promptly conducted a thorough inquiry and provided the Court with a holistic and accurate representation of the facts the first time he was ordered to do so.

Mott got hit with monetary sanctions (the amount TBD once defense counsel submits its fee certification) and was ordered to complete two CLE courses on ethics and AI. The AI CLE requirement might seem counterintuitive as redress for an entirely human error, but the court pointed to Mott’s repeated claims at the hearing that he was unfamiliar with generative AI, and decided he should figure it out.

AI catastrophes draw attention these days, whether it’s Butler Snow getting kicked off Alabama prison matters after senior partners failed to check their team’s work, or the Goldberg Segalla meltdown that started with one fake cite and metastasized into a systemic disaster. But in all those cases, the real error is between the keyboard and the chair. And when that’s the nature of the bug, it doesn’t matter whether the issue originated from the computer or a misguided human.






Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.