
Warning Party To Stop Citing Fake AI Cases Is Not, In Fact, Bias – Above the Law

Phony cases continue to proliferate across the docket. This recent explosion stems from the advent of artificial intelligence tools, with over 700 instances of embarrassing hallucinations working their way into filings so far. The problem will inevitably get worse since these AI tools are eager to provide users with whatever answer they desire, even if it's wholly made-up garbage.

That's not entirely the fault of the AI. A non-savvy user is more likely to prompt the tool in ways that incentivize the algorithms to produce results that match the user's request. Ask a large language model for key landlord-tenant citations and it will often do a decent job. Ask it to provide case citations for the proposition that "my bonkers argument is actually correct," and it has a much higher chance of going off the rails.

Some tools have more robust safeguards than others, but, at the end of the day, a large language model wants to give the user what they want. That's trouble if the user makes the wrong ask and isn't careful about checking the work of their semi-random word generator.

While lawyers keep screwing this up, the pro se litigant presents a vector for hallucinatory infection. They're already up against it with a system they don't fully understand, and AI provides easy, seemingly right answers. If AI is mansplaining-as-a-service (exceedingly confident, regardless of accuracy), then its most trusting victims will be people just trying to figure out how to enforce their rights.

And it's a problem bound to get worse, because AI is cheap and lawyers are expensive.

That said, once the courts warn a litigant to stop using AI, that should be the end of it. One litigant, however, went the other direction and claimed the court's warning proved its bias against his case.

This argument fared… poorly.

Finally, Plaintiffs [sic] objects to the Magistrate Judge's "criticism" of his use of artificial intelligence to cite to non-existent case law and errors in other citations. Id. at 3 (citing Non-Final R&R at 2-4). Notwithstanding that a review of Plaintiffs "citations" proves the Magistrate Judge's point, the warning given by the Magistrate Judge with respect to Plaintiffs future filings had no impact on the full analysis conducted by the Magistrate Judge on each of Plaintiff's claims.

Rob Freund (who flagged the opinion on the platform formerly known as Twitter) offered the friendly advice: "If a judge calls you out for citing 'non-existent case law,' filing an objection in response is probably not the play."

The plaintiff's specific objection was that the magistrate judge's warning was unclear:

No Specific Misquotes; Opposing Misstatements, Overlooked [Doc. 65 p. 2] R&R Error: Vague "AI-generated, incorrect laws" claim [citing Doc. 12-1] without examples.

The problem with this claim is that there were, in fact, multiple specific examples of false citations. They were laid out by opposing counsel in its motion. Most notably, the plaintiff cited "Solomon v. Norwest Corp., 546 S.E.2d 330 (Ga. 2001)," prompting opposing counsel to write:

The citation of "546 S.E.2d 330" is actually for the case Nunley v. Nunley, 248 Ga. App. 208, 546 S.E.2d 330 (2001), involving a hen farm partnership.

Talk about laying an egg.


(Full
opinion
on
the
next
page…)




Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.