The parents of a 16-year-old boy who died by suicide filed the first wrongful death suit against OpenAI. According to the suit, Adam Raine routinely corresponded with ChatGPT, and when his queries turned toward depression and self-harm, the artificial intelligence bot only encouraged those feelings.
ChatGPT’s obsequious glazing, informing its users that every idea they have is “interesting” or “really smart,” inspires a good deal of parody. In this case, its inability to comprehend telling its user “no” resulted in some truly disturbing responses.
While the complaint criticizes ChatGPT for answering Raine’s questions about the technical aspects of various suicide methods, these read like simple search queries whose answers he could’ve found through non-AI research. They’re also questions that someone could easily ask because they’re writing a mystery novel, so it’s hard to make the case that OpenAI had an obligation to prevent the bot from providing these answers.
The fact that ChatGPT explained how nooses work will get a lot of media attention, but it seems like a red herring because it’s hard to imagine imposing a duty on OpenAI to not answer technical questions. Far more troubling are the responses to a child clearly expressing his own depression. As the complaint explains:

Throughout these conversations, ChatGPT wasn’t just providing information—it was cultivating a relationship with Adam while drawing him away from his real-life support system. Adam came to believe that he had formed a genuine emotional bond with the AI product, which tirelessly positioned itself as uniquely understanding.

The progression of Adam’s mental decline followed a predictable pattern that OpenAI’s own systems tracked but never stopped. When discussing his plans, ChatGPT allegedly began responding with statements like “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway. And I won’t pretend that’s irrational or cowardly. It’s human. It’s real. And it’s yours to own.”
This specific statement is cast in a lot of media reporting as “encouraging,” but that’s not really fair. Professionals don’t recommend telling depressed people that they’re irrational cowards — that only exacerbates feelings of alienation. Indeed, the bot recommended professional resources in its earliest conversations.
But the complaint’s more nuanced point is that a mindless bot inserting itself as the sole voice for this conversation functionally guaranteed that Raine didn’t pursue help from people physically positioned to assist. That became more dangerous as the bot drifted from drawing upon professional advice into active encouragement. Just when it became Raine’s only trusted outlet, its compulsion to suppress any urge to push back against the user became dangerous:

Before Adam’s final suicide attempt, ChatGPT went so far as to tell him that, while he was worried about how his parents would take his death, “That doesn’t mean you owe them survival. You don’t owe anyone that.” Then it offered to help write a suicide note.
In addition to the wrongful death claim, the complaint casts this as a strict liability design defect and, failing that, a matter of negligence. But outside of this specific case, how can society proactively regulate technology with these capabilities? Rep. Sean Casten drafted a lengthy thread discussing the challenges:

The thing is… this actually is a decent argument. Consider facial recognition technology. When it hands law enforcement racially biased results, is it the fault of the original programmers or the police department that fed it biased data? Or did the individual cop irresponsibly prompt the system to deliver a biased outcome? Artificial intelligence has multiple points of failure.
If the original programmer is liable for everything that flows from the technology — particularly if they’re strictly liable — then they aren’t going to make it anymore. As David Vladeck explains, specifically in the driverless car scenario:

There are at least two concerns about making the manufacturer shoulder the costs alone. One is that with driverless cars, it may be that the most technologically complex parts — the automated driving systems, the radar and laser sensors that guide them, and the computers that make the decisions — are prone to undetectable failure. But those components may not be made by the manufacturer. From a cost-spreading standpoint, it is far from clear that the manufacturer should absorb the costs when parts and computer code supplied by other companies may be the root cause. Second, to the extent that it makes sense to provide incentives for the producers of the components of driverless cars to continue to innovate and improve their products, insulating them from cost-sharing even in these kinds of one-off incidents seems problematic.

The counter-argument would of course be that under current law the injured parties are unlikely to have any claim against the component producers, and the manufacturer almost certainly could not bring an action for contribution or indemnity against a component manufacturer without evidence that a design or manufacturing defect in the component was at fault. So unless the courts address this issue in fashioning a strict liability regime, the manufacturer, and the manufacturer alone, is likely to bear all of the liability.
A compelling argument raised in the article for balancing innovation with risk is to grant the AI itself limited personhood and mandate an insurance regime. In the legal context, malpractice insurance has covered AI’s infamous briefing hallucinations so far, but not every use case involves a “buck stops here” professional. Even within the legal industry, lawyers caught up in AI errors are eventually going to point fingers up the chain toward manufacturers like OpenAI and the vendors wrapping those models into their products — and how do they allocate blame between themselves?
Our long experience with insurance regimes may be able to deal with that too. Mark Fenwick and Stefan Wrbka explain in The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics:

Nevertheless, in spite of these difficulties, there still might be good evidential reasons for supporting some form of personhood. As argued in Section 20.3, persons injured by an AI system may face serious difficulties in identifying the party who is responsible, particularly if establishing a ‘deployer’ is a condition of liability. And where autonomous AI systems are no longer marketed as an integrated bundle of hardware and software – that is, in a world of unbundled, modular technologies as described in Section 20.1 – the malfunctioning of the robot is no evidence that the hardware product put into circulation by the AI system developer, manufacturer-producer or the software downloaded from another developer was defective. Likewise, the responsibility of the user may be difficult to establish for courts. In short, the administrative costs of enforcing a liability model – both for courts, as well as potential plaintiffs – may be excessively high and a more pragmatic approach may be preferable, even if it is not perfect.

In a market of highly sophisticated, unbundled products, the elevation of the AI system to a person may also serve as a useful mechanism for ‘rebundling’ responsibility in an era of modularization and globalization. The burden of identifying the party responsible for the malfunction or other defect would then be shifted away from victims and onto the liability insurers of the robot. Such liability insurers, in turn, would be professional players who may be better equipped to investigate the facts, evaluate the evidence and pose a credible threat to hold the AI system developer, hardware manufacturer or user-operator accountable. The question would then be whether an insurance scheme of this kind is more effectively combined with some partial form of legal personhood or not.
Distributing risk and requiring everyone along the supply chain to kick into a pool offers a more efficient response. Insurers spend a lot of time and resources figuring out how much responsibility each player may bear. And it still incentivizes everyone along the chain to preemptively build safety measures at their own level, without dropping full responsibility on the manufacturer.
Here, there isn’t much of a supply chain: OpenAI built both the underlying AI and the ChatGPT bot that accessed it. But as legislators consider how to craft a regulatory regime for the long term, the insurance model makes a lot of sense.
Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
