Sotomayor
during
day
two
of
her
Senate
confirmation
hearing
(Photo
by
NICHOLAS
KAMM/AFP/Getty
Images)
“You
must
learn
to
master
the
dangerous
hallucination
machine
to
do
good
in
the
world”
sounds
like
an
opening
line
from
a
Young
Adult
novel
about
the
folly
of
rapid
technological
advancement.
It
also
summarizes
a
Supreme
Court
justice’s
advice
to
law
students
on
becoming
fluent
in
AI
usage.
Law.com
has
coverage:
AI
systems
are
the
“new
revolution”
in
the
legal
profession,
as
the
advent
of
computers
were
for
lawyers
in
the
latter
half
of
the
20th
century,
Sotomayor
said
Thursday
at
the
University
of
Alabama
School
of
Law…

“For
every
student
in
this
room,
do
not
graduate
this
institution
without
learning
how
to
master
AI
as
a
tool[.]
…
“AI
is
a
sophisticated
human,”
Sotomayor
said.
“All
of
its
input
is
input
from
human
beings.
And
because
it
is
that,
it
has
the
potential
to
perpetuate
the
very
best
in
us
and
the
very
worst
in
us.”

That
makes
it
particularly
dangerous
in
“judging
the
complexity
of
human
endeavors
and
in
human
situations[.]”
There
is
more
at
stake
than
AI’s
tendency
to
lean into human, all-too-human errors.
We
are
the
gamble.
I
think
that
Sotomayor’s
best
and
worst
in
us
is
poetic
flair
that
covers
up
a
far
more
ominous
reality:
we
have
no
idea
how
rapidly
incorporating
AI
will
impact
us
in
the
long
run.
Let’s
bracket
the
immediate
question
of
whether
AI
is
actually
a
“sophisticated
human”
(she’s
a
jurist,
not
a
biologist
after
all)
and
instead
think
of
it
as
a
tool;
is
there
any
room
left
for
Luddites
in
the
profession?
It
sounds
nice
to
recommend
AI
mastery
—
whatever
that
means
—
but
what
will
be
the
subsequent
consequences
of
10,000
hours
typing
away
at
a
delusion-encouraging
black
box?
While the reports may be a little over-sensationalized,
it
is
still
worth
considering
the
risks
of
AI-induced
psychosis
developing
in
communities
targeted
with
adapt-or-die
rhetoric.
We
also
don’t
know
the
long-term
effects
of
regular
AI
use
on
neuroplasticity.
The
hard-fought
talent
of
thinking
like
a
lawyer
could
be
replaced
with
thinking
like
a
Harvey
or
Claude
prompter.
There are also the material conditions of the labor.
Lawyers,
especially
ones
in
Biglaw,
work
very
long
hours
and
are
often
sleep
deprived.
One
attorney
said
that
in
his
40
years
of
practice
he
never
slept
more
than
3-4
hours
at
a
time.
How
does
using
AI
when
exhausted
change
us?
And
what
do
you
do
when
the
cure-all
is
the
source
of
the
exhaustion?
And
this
isn’t
to
say
that
AI
is
bad
or that
anyone
advocating
for
its
adoption
is
a
snake
oil
salesman
trying
to
make
their
money
before
the
bubble
bursts.
But
it
is
also
naive
to
suggest
that
some
of
that
isn’t
happening
or
that
our
infrastructure
is prepared
to
deal
with
it.
Take
the
process
of
editing
documents.
Most lawyers have had the experience of submitting a brief they thought was going to change the direction of whatever body of law they were practicing, only to get it handed back covered in red ink.
The
social
dynamics
of
getting
your
work
edited
by
a
superior
suck,
but
it
strengthens
your
writing
ability
and
helps
to
build
a
sense
of
interdependent
teamwork.
Your
work
product
may
end
up
good
enough
after
a
couple of rinse cycles
in
an
LLM,
but
what
happens
to
training
and
team
building
after
AI
enters
the
fold?
Do
you
get
the
same
degree
of
editing
from
your
immediate
team?
Probably
not
—
your
partner
or
supervising
attorney
will
tell
you
to
run
it
through
the
firm’s
proprietary
AI
program
a
couple
of
times
before
you
even
dream
of
sending
it
over.
That’s
a
recipe
for
alienation.
What
long-term
effects
will
that
have
on
the
industry?
We’re
already
starting
to
see
firms
pivot
toward
lateral
hiring
rather
than
bothering
to
train
budding
lawyers.
How
sustainable
is
that
really?
If
you
use
AI
of
your
own
volition
or
because
your
firm
forces
you
to,
Godspeed.
Please
make
an
effort
to
make
sure
you
don’t
inadvertently
dull
your
critical
thinking
skills
along
the
way.
And
if
you
go
the
Amy
Coney
Barrett
route
and
abstain
from
using
AI
in
your
work,
you
aren’t
alone.
You
should
also
brush
up
on
the
First
Amendment.
Justice
Sotomayor
Says
AI
Can
Be
‘Very
Dangerous’
But
Tells
Law
Students
to
Master
It
[Law.com]
Earlier:
AI
Won’t
Replace
Lawyers
But
Can
Create
Critical
Shortage
Of
Good
Ones

Chris
Williams
became
a
social
media
manager
and
assistant
editor
for
Above
the
Law
in
June
2021.
Prior
to
joining
the
staff,
he
moonlighted
as
a
minor
Memelord™
in
the
Facebook
group Law
School
Memes
for
Edgy
T14s.
He
endured
Missouri
long
enough
to
graduate
from
Washington
University
in
St.
Louis
School
of
Law.
He
is
a
former
boat
builder
who
is
learning
to
swim
and
is
interested
in
rhetoric,
Spinozists
and
humor.
Getting
back
into
cycling
wouldn’t
hurt
either.
You
can
reach
him
by
email
at
[email protected]
and
by
Tweet/Bluesky
at @WritesForRent.