
Another
day,
another
fictitious
case
citation.
Blink,
and
you’ll
miss
the
most
recent
comedy
of
AI-created
errors.
Since
the
release
of
ChatGPT
in
late
2022,
the
frequency
of
court
submissions
riddled
with
AI-hallucinated
gibberish
has
increased
exponentially.
Now,
more
than
three
years
later,
it
seems
that
not
a
week
goes
by
without
a
headline
about
yet
another
case
in
which
a
lawyer
has
submitted
briefs
to
the
court
peppered with AI-generated fictions.
One
of
the
standout
features
of
many
of
these
cases
is
that
the
attorneys
double
down,
rather
than
admitting
the
error
of
their
ways.
Sometimes,
they
even
submit
responsive
papers
in
defense
of
their
actions
that
include
hallucinations.
Of
course,
it’s
one
thing
to
read
about
these
shenanigans,
but
seeing
the
audacity
in
action
during
an
appellate
argument?
Priceless.
And
appalling.
In
equal
proportions.
I present to you the attorney for
the
appellant
in
Deutsche
Bank
National
Trust
Company
v.
Jean
LeTennier,
CV-23-0713
(2026),
whose
absolute
chutzpah
was
on
full
display
—
captured
on
video
on
October
16,
2025,
during
an
oral
argument
before
the
New
York
Appellate
Division,
Third
Department.
About
one-third
of
the
way
into
the
video,
at
6:30,
he
faced
a
very
hot
bench.
The
judges
were
collectively,
and
understandably,
piqued
by
both
the
number
of
hallucinations
contained
in
his
submissions
and
the
fact
that
he
seemed
to
be
entirely
unbothered
by
his
own
fictions.
When
asked
about
his
response
to
the
allegations
that
he’d
used
AI,
he
appeared
to
be
oblivious
to
the
very
admissions
he’d
included
in
his
responsive
papers.
Rather
than
answering
their
pointed
questions
about
the
errors,
he
deflected
and
tried
to
avoid
the
topic
entirely*:
The Court: The citations used in some of the cases are not real citations (to) real cases.

Counsel: (T)hat was never asserted to me.

The Court: You acknowledged it in your reply brief that some of the cases were not accurate.

Counsel: The issue that I believe is really important is the fact that fraud was…

The Court: It’s important to us, so I guess you’re going to have to deal with what’s important to us.
The
judges
were
not
deterred
by
his
obfuscation.
They
persisted
in
their
questioning,
forcing
him
to
concede
that
there
were
fake
cases
cited
in
his
briefs.
Undeterred,
he
then
informed
the
panel
that
the
AI
issue
was
immaterial,
apparently
deciding
that
patronizing
the
bench
was
a
winning
tactical
move:
The Court: Did you, as an attorney, write a brief and submit it to this court?

Counsel: Yes.

The Court: Okay, so you own it, right?

Counsel: Yes.

The Court: Okay, in that brief, we’re telling you, and you are aware … not only were you made aware by the court, but by your adversary, that there are citations that are not real cases.

Counsel: That’s not germane to the fact that the SEC is telling this court …
Once
that
approach
proved
ineffective,
he
switched
gears
once
again.
When
directly
questioned
about
his
AI
use,
he
played
coy,
acting
like
a
petulant
adolescent
who’d
been
caught
drinking.
Finding
that
tactic
to
be
futile,
he
once
again
reverted
to
his
previously
unsuccessful
strategy
of
mansplaining
to
the
bench:
The Court: So, I guess I’m asking you, did you use AI to do the brief, and are these hallucinated cases, or did you miscite cases? Because you didn’t give us any corrections …

Counsel: AI is a tool that I think all of us use these days.

The Court: So that would be a “yes, I used AI.”

Counsel: Well, not exactly. I mean, yeah, I used AI.

The Court: Okay. You’ve got to check AI, right?

Counsel: I do.

The Court: Well, evidently not too well, right?

Counsel: It seems like we’re not able to focus on the issue that they brought it …
When
it
became
apparent
that
condescension
wasn’t
working,
he
retreated
to
a
defense
of
statistical
probability.
The
exchange
that
followed
captures
the
surreal
moment
when
he
attempted
to
treat
a
‘mostly
accurate’
brief
as
a
job
well
done:
The Court: Because we’re on this side of the bench … that’s why we’re asking you about the citations in your brief that you provided to us and the response when it was pointed out that these are AI citations.

Counsel: I believe that the citations that I used were accurate, like 90% were accurate, some of them which really aren’t necessarily germane to the issues at hand …

The Court: Okay, your time is up! Thank you.
Needless
to
say,
the
court
was
unimpressed
with
his
assertion
that
a
90%
accuracy
rate
was
a
passing
grade
for
the
truth,
dismissing
his
argument
in
its
written
decision,
issued
in
January:
“(D)uring
oral
argument
defense
counsel
estimated
that
90%
of
the
citations
he
used
were
accurate,
which,
even
if
it
were
true,
is
simply
unacceptable
by
any
measure
of
candor
to
any
court.”
In
the
court’s
eyes,
a
brief
that
is
only
10%
imaginary
is
still
100%
problematic,
especially
when
that
small
slice
of
fiction
accounted
for
“at
least
23
fabricated
legal
authorities
across
five
filings
…
(and
misrepresenting)
the
holdings
of
several
real
cases
as
being
dispositive
in
his
favor
—
when
they
were
not.”
His
stubborn
resistance
to
reality
was
rewarded
with $5,000 in sanctions for his refusal to take accountability for his actions:
“(H)is
reliance
on
fabricated
legal
authorities
grew
more
prolific
as
this
appeal
proceeded
…
Rather
than
taking
remedial
measures
or
expressing
remorse,
defense
counsel
essentially
doubled
down
during
oral
argument
on
his
reliance
of
fake
legal
authorities
as
not
‘germane’
to
the
appeal.”
Importantly,
the
court
acknowledged
that
AI
does
have
a
place
in
litigation,
as
long
as
attorneys
and
staff
are
sufficiently
trained
and
carefully
check
their
work
for
accuracy
before
submitting
it
to
the
court:
“As
with
the
work
from
a
paralegal,
intern
or
another
attorney,
the
use
of
GenAI
in
no
way
abrogates
an
attorney’s
or
litigant’s
obligation
to
fact
check
and
cite
check
every
document
filed
with
a
court.
To
do
otherwise
may
be
sanctionable
…”
It’s
a
bold
new
era
for
the
legal
profession:
one
where
‘mostly
accurate’
is
a
tactical
hill
to
die
on
and
23
imaginary
cases
are
just
‘minor’
details.
As
it
turns
out,
that
final
10%
is
the
difference
between
a
winning
argument
and
a
$5,000
audacity
tax.
*
The
court
transcript
excerpts
have
been
lightly
edited
for
readability
and
flow.
Nicole
Black is
a
Rochester,
New
York
attorney
and
Principal
Legal
Insight
Strategist
at 8am, the team behind MyCase,
LawPay,
CasePeer,
and
DocketWise.
She’s
been blogging since
2005,
has
written
a weekly
column for
the
Daily
Record
since
2007,
is
the
author
of Cloud
Computing
for
Lawyers,
co-authors Social
Media
for
Lawyers:
the
Next
Frontier,
and
co-authors Criminal
Law
in
New
York.
She’s
easily
distracted
by
the
potential
of
bright
and
shiny
tech
gadgets,
along
with
good
food
and
wine.
You
can
follow
her
on
Twitter
at @nikiblack and
she
can
be
reached
at [email protected].
