
When using any technology — including AI — a lawyer “must independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.” That language appears in a new comment to Rule 1.1 on competence proposed by the State Bar of California’s Standing Committee on Professional Responsibility and Conduct (COPRAC) as part of a package of AI-related amendments to six of the state’s Rules of Professional Conduct. The proposed changes would, for the first time, write specific AI obligations into California’s rules. The changes span the rules on competence, client communication, confidentiality, candor toward tribunals, and supervision of both lawyers and other staff.
Unfortunately, I am reporting this a bit late, as the public comment period on the proposals closed yesterday, May 4. But the rulemaking process is still in its early stages, and the amendments are far from final. For anyone tracking how bar regulators are treating AI in legal practice, these proposals are worth a close read regardless.
Initiated by Supreme Court
The rulemaking was set in motion by the California Supreme Court itself. In an Aug. 22, 2025, letter to the state bar’s interim executive director, the court’s clerk and executive officer directed COPRAC to consider whether the guiding principles from the bar’s November 2023 “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” should be incorporated into the formal rules. The court also directed the bar to consider guidance specifically addressing “agentic AI” tools — systems that can plan and execute tasks with little or no human intervention.
COPRAC approved the proposed amendments at its March 13, 2026, meeting and opened the 45-day comment period. Rather than drafting a standalone AI rule, the committee wove new language into six existing rules, reflecting a view that AI sharpens existing ethical duties rather than creating entirely new ones.
Whereas California’s 2023 practical guidance was a “living document” with no binding authority, these proposed amendments would change that by making AI-specific obligations part of the enforceable rules. Most states that have addressed AI in legal practice have done so through ethics opinions, which carry persuasive but not always disciplinary force. California’s approach, if finalized, would be more muscular.
I have tracked the adoption of the duty of technology competence across jurisdictions on a dedicated page on this blog. These proposals are among the most detailed and comprehensive AI-specific rule amendments I have seen any state bar put forward.
Amendments to Rule 1.1, Competence
The existing rule requires lawyers to maintain learning and skill sufficient for competent representation. The proposed amendments add two new comments specific to AI.
The first simply extends the existing technology-competence language to make explicit that the duty to stay abreast of “the benefits and risks associated with relevant technology” includes artificial intelligence.
The second, and more consequential, comment states that when using technology, including AI, a lawyer “must independently review, verify, and exercise professional judgment regarding any output generated by the technology that is used in connection with representing a client.”
In other words, under this proposed change, the lawyer must personally and independently evaluate the output of AI tools before relying on it. There is no carve-out for routine tasks or low-stakes matters.
Amendments to Rule 1.4, Communication with Clients
A new Comment 5 to Rule 1.4 addresses when lawyers must disclose their use of AI to clients. The proposed language provides that when a lawyer’s use of technology, including AI, “presents a significant risk or materially affects the scope, cost, manner, or decision-making process of representation,” the lawyer must communicate “sufficient information regarding the use of technology to permit the client to make informed decisions regarding the representation.”
The comment adds that lawyers must continue to evaluate their communication obligations throughout a representation based on “the novelty of the technology, risks associated with the use of the technology, scope of the representation, and sophistication of the client.”
Notably, this does not create a blanket disclosure requirement every time a lawyer uses AI. The trigger is a “significant risk” or “material” effect on the representation. More routine use may not require affirmative disclosure, depending on the circumstances. But the obligation is ongoing — it must be reassessed as the representation evolves.
Amendments to Rule 1.6, Confidential Information of a Client
The confidentiality rule, which prohibits lawyers from revealing confidential client information, gets a new Comment 2 that expands the definition of “reveal” to encompass AI use.
Under the proposed language, “reveal” includes “exposing confidential information to technological systems, including artificial intelligence tools, where such exposure creates a material risk that the information may be accessed, retained, or used, whether by the technological system or another user of that technological system, in a manner inconsistent with the lawyer’s duty of confidentiality.”
This means that inputting client information into an AI tool — even if the lawyer never intends for anyone else to see it — can constitute a revelation of confidential information under the rules if there is a material risk the system or its other users could access, retain or use that data.
Lawyers using cloud-based AI tools with unclear or unfavorable data retention and training policies need to pay attention to this.
Amendments to Rule 3.3, Candor Toward the Tribunal
This amendment directly addresses the AI hallucination problem that has generated judicial sanctions and considerable alarm across the profession.
A new Comment 3 states that “a lawyer’s duty of candor towards the tribunal includes the obligation to verify the accuracy and existence of cited authorities, including ensuring no cited authority is fabricated, misstated, or taken out of context, before submission to a tribunal, including any cited authorities generated or assisted by artificial intelligence or other technological tools.”
The existing rule already prohibits knowingly misquoting authority or citing overruled decisions. The new comment makes explicit that AI-generated citations are not exempt from those obligations, and that the verification duty extends specifically to fabricated, misstated or decontextualized authority.
In the wake of now-notorious sanctions cases involving AI-hallucinated citations, this comment codifies what many courts have already been saying in their opinions.
Amendments to Rule 5.1, Responsibilities of Managerial and Supervisory Lawyers
The proposed amendment adds AI governance to the list of matters that managerial lawyers at law firms must address through internal policies and procedures.
The existing comment already refers to policies for conflicts, calendaring and client funds. The new language adds that managerial lawyers must make reasonable efforts to establish procedures “governing the use of artificial intelligence, in accordance with the Rules of Professional Conduct.”
If this rule is finalized, law firm leaders, practice group chairs and managing partners will need to ensure their firms have actual, functioning AI governance policies, not just aspirational statements.
Amendments to Rule 5.3, Responsibilities Regarding Nonlawyer Assistants
A corresponding amendment to the rule on supervising nonlawyer personnel adds AI to the scope of supervision.
The existing comment states that lawyers must give nonlawyer assistants “appropriate instruction and supervision concerning all ethical aspects of their employment.” The proposed amendment adds “including the use of technology in the provision of legal services, such as artificial intelligence.”
This extends the AI supervision obligation to paralegals, legal assistants, law clerks and any other staff who use AI tools in their work. Given that AI tools are proliferating throughout law firm operations at every level, this makes sense as a practical clarification.
The Takeaway
A few things stand out to me about California’s approach.
First, by embedding these obligations in the enforceable rules rather than in guidance documents, these changes would underscore and make explicit ethical duties that are already implicit in the existing rules. While some might argue that modifying the existing rules is unnecessary, plenty of lawyers out there have been proving them wrong.
Second, the independent verification requirement in Rule 1.1 is worth emphasizing. It does not say lawyers should generally be careful with AI output. It says they must independently review, verify and exercise professional judgment regarding any output used in client representation. That is a strict standard, and one that cuts against any casual reliance on AI-generated work product.
Third, the confidentiality amendment’s expansion of “reveal” is practically significant. Lawyers accustomed to thinking of confidentiality as a disclosure-to-humans concept will need to rethink how they select and use AI tools in light of this definition.
Finally, while the proposals do not explicitly address agentic AI, as the court suggested in the letter that spurred these revisions, they do address it implicitly. The independent verification requirement in Rule 1.1 and the supervisory obligations in Rules 5.1 and 5.3 are directly relevant to agentic workflows. If a lawyer deploys an AI agent that researches, drafts and revises a brief with limited oversight, these rules would squarely apply.
Although the comment period has closed, the rulemaking process continues. COPRAC will review public input and could modify the proposals before they advance. The California Supreme Court ultimately has authority over the Rules of Professional Conduct. Whether and when these amendments might take effect remains to be seen.
