
Judges To Generative AI: You’re Out Of Order! – Above the Law

Technology and courts have never been a great match. After all, judges are concerned with precedent, while the technology adoption process is future-facing. Certainly, the pandemic helped reduce the friction somewhat when, out of necessity, the courts acclimated to e-filing and virtual court proceedings. Some judges even acknowledged the value and convenience of online appearances for certain matters.

Despite the notable adaptation and changing attitudes, our judicial counterparts were wholly unprepared for the generative AI (GenAI) tsunami and its impact on the practice of law. Since the release of GPT-4 a year ago, lawyers have increasingly relied on the output of GenAI tools to draft and submit pleadings and memoranda to the courts, often without a full understanding of the technology and its capabilities.

As a result, many lawyers have made the news for submitting court filings with GenAI-generated fake case citations despite being repeatedly warned to carefully review all GenAI output. Judging by the uptick in headlines since the start of 2024, more lawyers than ever are relying on GenAI tools for legal research but are utterly failing to meet their ethical obligations of basic competence:

  • A lawyer was sanctioned by the Massachusetts Superior Court for filing multiple memoranda replete with false case citations (2/12/24).
  • A British Columbia lawyer was reprimanded and ordered to pay the costs opposing counsel incurred in discovering that cited precedent was an AI "hallucination" (2/20/24).
  • A Florida attorney was suspended by the U.S. District Court for the Middle District of Florida for filing submissions relying on false case citations (3/8/24).
  • An attorney appearing as a pro se litigant was called out by the court for submitting false case citations for the second time, and the case was dismissed following a grant of summary judgment on other grounds (3/21/24).
  • The 9th Circuit summarily dismissed a case without addressing the merits because of the attorney's reliance on fake cases (3/22/24).

To be clear, these situations reveal an inability to meet basic competence obligations. This isn't an issue of technology competence. Instead, these instances highlight a failure by lawyers to carefully review documents submitted to the court.

It is this dereliction that is spurring judges into action. Courts across the country are grappling with this issue, with many judges signing orders that govern GenAI usage in their courtrooms. RAILS (Responsible AI in Legal Services) has compiled a running list of these efforts. Forty-eight documents are being tracked (for now), which include orders, guidelines, and rules that are either in progress or have already been issued.

The approaches to regulating GenAI usage run the gamut and include: providing guidance about GenAI usage, requiring the disclosure of the use of GenAI, or banning its use altogether. As explained in the RAILS declaration statement, this haphazard and varied set of tactics, while well-intentioned, fails to provide much-needed consistency:

(T)he sheer number of these orders and lack of uniformity [in] their provisions can cause considerable confusion to litigants and practitioners who may have to appear in many different courts.

In other words, there's disorder in the courts, with no clarity in sight.

Given the increasing number of fake case citations, one thing is clear: the phenomenon is evidence of an even deeper problem. Namely, some lawyers are not reviewing their work before submitting it to the court, and this is not a new occurrence.

Historically, cases that seemed out of place within the context of a document could be dismissed as mistakes or misinterpretations of case law. Now, however, false case citations undeniably demonstrate a failure to review submitted work. This type of negligence undoubtedly existed before, but it's much easier to prove when the cited cases are nonexistent.

In other words, the problem isn't the technology; it's attorney competency. This is a preexisting issue, and knee-jerk reactions to GenAI are not the solution. While it's tempting to react hastily to disruptive technologies, the legal community would be better served by developing robust, uniform educational guidelines on responsible AI use and emphasizing the timeless principles of careful review and thorough legal analysis.





Nicole Black is a Rochester, New York attorney and Director of Business and Community Relations at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: the Next Frontier, and co-authors Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter at @nikiblack and she can be reached at niki.black@mycase.com.
