
Artificial intelligence is no longer an abstract or experimental technology for lawyers – it is rapidly becoming core infrastructure for law practice, courts, legal education and access-to-justice efforts, and the legal profession must now shift its focus from whether to use AI to how to govern, supervise and integrate it responsibly.

That is the central conclusion of a report released yesterday by the American Bar Association’s Task Force on Law and Artificial Intelligence, a 56-page assessment that takes stock of how AI is already reshaping the profession and discusses the risks, opportunities and unresolved challenges that lie ahead.

The report, Addressing the Legal Challenges of AI: Year 2 Report on the Impact of AI on the Practice of Law, arrives at what the Task Force calls a “pivotal moment” for the profession. AI adoption has accelerated dramatically over the past year, pushing lawyers, judges, regulators and educators into unfamiliar terrain that demands new ethical frameworks, governance models and competencies.

“AI is no longer an abstract concept,” William R. Bay, the ABA’s immediate past president, writes in the report’s introduction. “AI has become key to reshaping the way we practice, serve our clients, and safeguard the rule of law.”

It is called the “year 2” report because this is the second and final year of the Task Force, which the ABA convened to study the evolving landscape of AI in law. The ABA Center for Innovation will now be responsible for carrying out its findings and recommendations.
From ‘Whether’ to ‘How’

The report includes sections on the implications of AI for the rule of law, law practice, the courts, access to justice, legal education, governance, risk management and ethics. Peppered throughout are Task Force members’ and advisors’ answers to the question, “What do you think the most important development/improvement or challenge will be in the application of AI to the law in the next two years?”

One of the most striking shifts identified in the report is how quickly the profession’s posture toward generative AI has evolved. Just a year ago, the dominant concerns centered on whether lawyers should use AI at all, with debates focused heavily on confidentiality, competence and the risk of hallucinated citations. That debate has largely given way to a more pragmatic question: How should AI be used – and governed – in real legal workflows?

According to the Task Force, early AI adoption concentrated on relatively low-risk tasks such as summarizing documents, extracting insights from large datasets, drafting routine communications, and preparing first drafts of memos and client alerts. But the report observes that more advanced uses are now emerging, including “agentic” systems that chain together multiple tasks and operate with increasing autonomy.

“[A]s the platforms become more sophisticated and begin to chain together tasks – whether called robotic process automation or agentic AI – lawyers’ creativity in exploring the bounds of AI tools presents interesting challenges for the legal profession and for the innovation teams supporting them,” the report says.

Among those challenges is the widening gap between firms and organizations that can afford secure, enterprise-grade AI systems and those that cannot. The report warns of a growing stratification between technology “haves” and “have-nots,” driven by licensing costs, infrastructure demands and a shortage of staff with the technical expertise to deploy AI effectively.
For Courts, Opportunities and Risks

The report devotes substantial attention to the judiciary, where AI is creating both efficiency gains and profound new risks. It focuses on the Guidelines for U.S. Judicial Officers Regarding the Responsible Use of Artificial Intelligence, which the Task Force developed this year through a working group of judges and legal technologists.

As I wrote about the guidelines when they came out, they emphasize a core principle: AI may assist judges, but it can never replace judicial judgment. Judges remain solely responsible for decisions issued in their names, and AI outputs must always be independently verified.

The report also highlights the growing threat posed by AI-generated disinformation and deepfakes, which Chief Justice John Roberts has identified as a direct danger to judicial independence and public trust. Judges are increasingly confronting questions about how to authenticate evidence, how to respond to claims that legitimate evidence is fabricated, and whether existing rules of evidence are adequate for AI-generated material.
Some Progress in A2J

Perhaps the most optimistic section of the report focuses on access to justice, where the Task Force finds tangible progress since its first-year assessment. Gen AI, the report concludes, is beginning to demonstrate real potential to expand access to legal help by increasing the productivity of legal aid organizations and delivering understandable legal information directly to self-represented litigants.

The Task Force points to more than 100 documented AI use cases in legal aid settings, as discussed by Colleen V. Chien and Miriam Kim of the Center for Law and Technology at Berkeley Law School in their article, Generative AI and Legal Aid: Results from a Field Study and 100 Use Cases to Bridge the Access to Justice Gap, 57 Loy. L.A. L. Rev. 903 (2025).

The Task Force also points to corporate initiatives that aim to make advanced legal technology available at reduced or no cost to public-interest organizations, expressly citing Thomson Reuters’ AI for Justice program and Everlaw’s Everlaw for Good.

At the same time, the report cautions that high subscription costs for the most reliable AI tools risk widening, rather than narrowing, the justice gap if access-to-justice organizations are priced out. “Financial accessibility to the access-to-justice community must be raised and addressed regularly with legal AI developers,” the report says.
Law Schools Race to Keep Up

Law schools, meanwhile, are moving quickly, but unevenly, to integrate AI into legal education. A Task Force survey found that more than half of responding law schools now offer AI-related courses, and more than 80% provide hands-on opportunities through clinics or labs.

The report highlights programs at schools such as Case Western Reserve, Suffolk University, Vanderbilt, Stanford, and Georgetown, which are experimenting with AI-powered simulations, legal aid tools, and even mandatory AI certifications for students.

Yet faculty leaders quoted in the report acknowledge that they face a persistent challenge: The technology is evolving so rapidly that much of what students learn today may be outdated by the time they graduate.

“I tell students right up front that half of the substantive material that we cover in this class is probably going to be outdated by the time that you graduate,” says Mark Williams, Vanderbilt Law professor and co-director of Vanderbilt Law’s AI Law Lab (VAILL), in the report.
Governance, Risk and Liability

The report emphasizes that, as AI becomes embedded in legal services and business operations, AI governance is emerging as a central responsibility for lawyers. Drawing on frameworks such as the NIST AI Risk Management Framework, the Task Force stresses the need for organizational strategies that address data quality, transparency, accountability and human oversight across the AI lifecycle.

The report also explores unresolved questions around liability. When AI-driven decisions cause harm, who is responsible? Is it the developer, the data provider, the deployer or the human who relied on the output? The Task Force suggests that courts may ultimately resolve many of these questions incrementally, through traditional common-law adjudication, rather than comprehensive regulation.

“AI adds a new variable in determining fault and will likely lead to new liability frameworks and increased litigation,” the report says.
Ethics Rulings Provide Guidance

Ethics guidance has begun to catch up with AI, the report says, outlining how lawyers can ethically implement AI in their practices. In July 2024, the ABA issued Formal Opinion 512 on lawyers’ use of generative AI. Since then, dozens of states and courts have released their own opinions, policies and rules, many of which are listed in the report.
Beyond the Immediate Horizon

In its final sections, the report urges the profession not to become so focused on short-term implementation challenges that it neglects the longer-term implications of increasingly powerful AI systems. Several contributors warn that sudden advances toward human-level or super-human AI capabilities could leave legal institutions unprepared, with potentially catastrophic consequences if governance frameworks lag behind technology.

“Lawyers will provide critical aid to AI governance efforts,” writes Task Force advisor Stephen Wu in the report, “by promoting legal compliance, managing legal risks, and, most importantly, preserving the rule of law in the development, use, and behavior of AI systems.”

This is the final report from the Task Force, which the ABA convened as a two-year project to study the rapidly evolving landscape of AI in law. The responsibility for carrying out its findings and recommendations now shifts to the ABA Center for Innovation.

But the takeaway from this year-two report is clear: AI is no longer on the horizon of legal practice. It is already here – and the profession’s response to it will shape the future of law.
