
In case anyone hasn’t been listening carefully, Comment 8 to Model Rule 1.1 requires lawyers to understand not just the risks of technology but also the benefits. And the word “benefits” appears first. We have an ethical duty (yes, duty) to understand and leverage the benefits of technology.

Ethics and Benefits

Let’s talk about the notion of benefits. Comment 8 to Model Rule 1.1 is the oft-cited source when people preach about the risks of technology. But in doing so, they ignore the additional requirement: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study ….” Not only does the word “benefits” appear; it appears first.

Comment 8 has been adopted in most states, and even where it hasn’t been, there seems to be little question that competency these days requires the consideration and use of technology. Being competent can’t mean wringing our hands over the risks of technology and concluding it shouldn’t be used. Understanding the benefits and taking advantage of them is an ethical requirement.

And the word “benefits” means the positive capabilities of technologies like AI to improve the practice: using technology to work more efficiently and save costs, using tools like AI to enhance client service, using data analytics for better insights and outcomes, predicting case outcomes and judicial tendencies, making better use of technology in the courtroom to achieve better results for clients, practicing preventative lawyering. I could go on and on.

But that message gets lost, particularly at legal tech conferences.

Legal Tech Conference Speak

A friend of mine recently spoke at a legal AI conference. Speaking last, my friend noticed that every other speaker had focused on the risks and dangers of using AI. You know the drill: hallucinations, loss of confidentiality, the need for accurate prompts, the need to check the outputs, etc. My friend took a different tack and talked about what AI could do: how it could be used to be more efficient, precise, and accurate in particular practice areas.

I was the first speaker at a recent legal AI conference as well. I spoke about ethics and AI; toward the end of my talk, I realized that I too had not spent enough time talking about our ethical duty to understand and leverage the benefits. Of course, I was followed by a slew of people doing just what the speakers at my friend’s conference did: talking about the problems, the risks, and the need for caution.

Some were vendors who seemed to be saying something like, “Lawyers, don’t try this at home. AI should only be used in conjunction with a licensed professional.” Of course, the vendors weren’t licensed professionals in the true sense. But the message was clear: lawyers shouldn’t use AI without the help of someone who really knows what they are doing. And that message leads lawyers to shy away from such a “dangerous” tool.

The Only Thing You Need to Know Is That There’s Not That Much to Know

And it’s wrong. I have another friend who is not a lawyer but who hires them. She uses ChatGPT extensively for all sorts of things. When I told her about my conference, she scoffed: “The only thing you need to know about AI is that there is really not that much to know.” She meant, of course, that we lawyers tend to get all balled up in how many angels (or risks) can dance on the head of a pin instead of just rolling up our sleeves, using the product, and learning as we go.

Get A Grip

Get a grip. The truth is there are only a couple of things you need to know about using AI:

- It makes mistakes. Check the results.
- Don’t put client confidences in it.

I’m amazed at how we make this so complicated. No one in their right mind would put client confidences in a Google search. No lawyer in their right mind would take the websites that Google returns in response to a search and use them without reviewing them.

Yes, there have been numerous instances of lawyers taking the results of prompts and not checking the cites, only to be embarrassed later. Yes, it shouldn’t happen. Yes, they were dumb. But how many examples are there of dumb lawyers commingling funds, using client funds for their own expenses, violating conflict-of-interest standards, missing deadlines, and displaying plain incompetence?

It happens every day, but we don’t say using bank accounts is too risky because a dumb lawyer might commingle funds. It’s the missed cites that get all the attention.

Here’s a good example: a recent AP article reported that a French data scientist and lawyer has catalogued at least 490 court filings containing hallucinations in the past six months. But buried in the article was the fact that the majority of those instances occurred in cases where the plaintiffs were representing themselves rather than being represented by lawyers. That fact got lost in the headlines.

Bottom line: we can’t let the fact that there are dumb lawyers making stupid mistakes blind us to the benefits that AI brings.

How to Get There

Another point: don’t get hung up on thinking the tools are too hard and complicated to use, as some would have you believe. Start by using the tools for anything and everything. Start with personal and inconsequential stuff. Then build. It’s on-the-job training.

You don’t learn to play the guitar by reading about all the risks of an electric guitar. You learn by playing it, or trying to, until you become competent. You didn’t learn how to try a case well by reading about it. You learned by trying cases. By making mistakes.

It’s All About Our Clients

And make no mistake: when we talk about our ethical duty of competency, which requires understanding, being aware of, and taking advantage of technology, we are also talking about our ethical duty to benefit our clients, not just ourselves. We are talking about things like making our fees reasonable (Model Rule 1.5), rendering candid and professional advice (Model Rule 2.1), keeping our clients informed (Model Rule 1.4), acting with reasonable diligence and dedication to the interests of our clients (Model Rule 1.3), and serving our clients’ best interests.

If you can use AI to get to a better legal answer to a thorny litigation question in a fraction of the time, and advise your client of the accompanying risks and exposure more promptly, your client is the beneficiary. And it is, or should be, all about them.

So, What’s the Point Really?

Yes, we have to know the risks. But we can’t be blind to the benefits of things like AI. Getting to those benefits doesn’t require a host of consultants or years of study and handwringing. It means getting a rudimentary knowledge of the tools and then using them to wrap your arms around how they can benefit your practice. That takes a little time and effort, but the benefits can be worth it.

And just remember two things: don’t put client confidences in a prompt, and check the cites. That’s pretty much all you need to know. Now let’s get to work. Open up an AI tool and ask it a question. Ask it to do something for you. You might be amazed at what you get.

Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
