
Elite Law Schools Are Offering Classes On Responsible AI Use – Above the Law

It’s okay, this used to confuse me too.

Convenience and laziness go hand in hand like great power and responsibility. We’ve seen the consequences of lazy lawyers and judges using widely accessible large language models irresponsibly. Citing to nonexistent cases can get you cemented in the annals of the New York Times and on Above The Law’s Point and Laugh Wall™. Mike Lindell’s lawyers earned a sanction after shoddy LLM use, Alabama lost its chosen lawyers over fake cases, and a judge took the time to (professionally) mock a lawyer whose apology for using AI included purple prose that would make William Faulkner blush. It would be one thing if it were just lawyers showing poor judgment, but judges have hopped on the trend too: one trial judge managed to mail in their job so hard that an “AI hallucination” became good law for a short while.

At this point, the general public would benefit if someone stepped in to save lawyers and judges from completely outsourcing their jobs to LLMs. Who better to intervene than law schools?

Bloomberg Law has coverage:

Incidents of AI-generated errors in legal citations have increased the pressure on law schools to teach responsible use of the technology.

The University of Chicago, University of Pennsylvania, and Yale law schools are among those augmenting curricula. In new or updated classes, schools are training their students to understand the AI tools’ limitations and to check their work.

“You can never give enough reminders and enough instruction to people about the fact that you cannot use AI to replace human judgment, human research, human writing skills, and a human’s job to verify whether something is actually true or not,” said William Hubbard, deputy dean of University of Chicago Law School.

This is an amazing heuristic to have. One, because it directly counters Elon Musk’s sentiment that feeding Grok all prior precedent will replace judges, but also because it refocuses agency back on what matters: the person with a JD responsible for advocating on their client’s behalf. Language like “AI hallucinations” does a phenomenal job of covering up the real issue behind the negligence that allows errors to make their way into briefs and caselaw: PEBCAK.
I’ll admit it doesn’t roll off the tongue quite as nicely as “AI hallucination” does, but it’s a better alternative:

Remember: bad AI cites don’t make the AI look nearly as bad as they make you look lazy. You, and your school, should have known better than to let that happen.


Top Law Schools Boost AI Training as Legal Citation Errors Grow [Bloomberg Law]


Earlier: For The Love Of All That Is Holy, Stop Blaming ChatGPT For This Bad Brief

Trial Court Decides Case Based On AI-Hallucinated Caselaw

T14 Law School Actually Wants You To Use AI In The Application Process



Chris Williams became a social media manager and assistant editor for Above the Law in June 2021. Prior to joining the staff, he moonlighted as a minor Memelord™ in the Facebook group Law School Memes for Edgy T14s. He endured Missouri long enough to graduate from Washington University in St. Louis School of Law. He is a former boatbuilder who is learning to swim, is interested in critical race theory, philosophy, and humor, and has a love for cycling that occasionally annoys his peers. You can reach him by email at [email protected] and by tweet at @WritesForRent.