Most of the time, when a lawyer unwittingly cites a bunch of fake cases spit out by artificial intelligence, it's because they never bothered to figure out how the product worked or even superficially consider the ethical implications. They plead with the judge that they're just a humble scribe of Ashurbanipal who couldn't possibly grasp the powerful forces involved in asking a mansplaining-as-a-service bot to magic up some cases. As an excuse it doesn't always work, but tales of ignorance have, thus far, stayed many a judge's hand.
But when the hallucinations come from a lawyer who once published the article "Artificial Intelligence in the Legal Profession: Ethical Considerations," there's not a ton of wiggle room.
Goldberg Segalla's Danielle Malaty, who authored the article about ethics, is now out after taking responsibility for a fake cite in a Chicago Housing Authority filing asking the judge to reconsider a jury's $24 million verdict in a lead paint poisoning case.
The Authority is said to have learned about the lead paint hazard in 1992, and it's hard to contest liability for a harm you've known about since End of the Road charted. But the firm struck gold with an Illinois Supreme Court cite, Mack v. Anderson, that could not have supported the CHA's argument better… because it was invented out of thin microchips by ChatGPT.

From the Chicago Tribune:
At the hearing, Danielle Malaty, the attorney responsible for the mistake, told the judge she did not think ChatGPT could create fictitious legal citations and did not check to ensure the case was legitimate. Three other Goldberg Segalla attorneys then reviewed the draft motion — including Mason, who served as the final reviewer — as well as CHA's in-house counsel, before it was filed with the court.

Malaty was terminated from Goldberg Segalla, where she had been a partner, following her use of AI. The firm, at the time, had an AI policy that banned its use.
How did this happen? Was the firm huffing the same lead paint that Chicago Housing doesn't want to pay for foisting on kids?
According to the Tribune account, lead counsel on the case, Larry Mason, said that "An exhaustive investigation revealed that one attorney, in direct violation of Goldberg Segalla's AI use policy, used AI technology and failed to verify the AI citation before including the case and surrounding sentence describing its fictitious holding."
Not quite sure what this policy even means… has the firm banned "AI" generally? Because that's dumb. It's going to be embedded in the guts of everything lawyers do soon enough — a general objection to AI is like lawyers in the '90s informing the court that they're committed to never allowing online legal research. Hopefully the policy is more nuanced than Mason suggests, because blanket policies, paradoxically, only encourage lawyers to go rogue.
But more important than the "AI policy" is the part where "Three other Goldberg Segalla attorneys then reviewed the draft motion — including Mason, who served as the final reviewer." Don't blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that?
Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might've ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.
Malaty will shoulder most of the blame as the link in the workflow who should've known better. That said, her article about AI ethics, written last year, doesn't actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations.
Meanwhile, "CHA continues to contest the ruling and is seeking a verdict in its favor, a new trial on liability or a new trial on damages or to lower the verdict." Maybe Claude can give them an out.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
