
One of the big worries in the age of AI is training younger lawyers, as I and others have discussed before. But a new LexisNexis study suggests we’ve been focusing on the wrong thing: the more critical training may need to be of experienced lawyers instead. Training the trainers.
Key Findings

LexisNexis surveyed 873 legal professionals in the U.K. in January of this year. Like many things with GenAI, the resulting study was long on questions but short on answers. That’s not necessarily a criticism. The truth is we just don’t know a lot of the answers yet. Which makes the notion of training the trainers perhaps key.
Here are some key statistics from the study:
- 65% say legal AI tools allow them to work faster.
- 72% believe that younger lawyers using GenAI will have trouble developing reasoning and critical thinking skills.
- 69% worry that new lawyers lack “verification and source-checking skills.”
Okay. Not terribly surprising, although one would think verification and cite-checking skills would be one thing beginning lawyers ought to be able to do. Perhaps the concern is not that they don’t have the skills but that they won’t use them. These findings align with expectations.
Indeed, they are remarkably consistent with a white paper by one of LexisNexis’ main competitors, Thomson Reuters, about which I wrote back in December of last year. But the study revealed something more troubling.
But Wait, There’s More

Here’s one surprising finding: only 29% believe that AI helps them produce higher quality work, and only 2% — 2%! — believe AI strengthens their learning.
Think about what this suggests: the main value of AI is that it helps us produce work faster, not necessarily better. And almost all agree that its use doesn’t help them learn. Meaning it doesn’t help them be better lawyers.
The study identifies the problem: “poring over lengthy contracts, interrogating every clause, and immersing yourself in case law may not be glamourous, but these tasks have traditionally been how legal judgement is formed.”
And if this means AI will do a lot of this work and those skills aren’t developed, the long-term result, assuming these statistics reflect reality and not just impressions, can only mean one thing: lower quality lawyering.
A Thinking Partner?

The study does propose a solution: getting young lawyers to treat GenAI as a “thinking partner” in doing legal work.
Like a lot of platitudes, it’s a catchy phrase, but the study is a little short on how we get harried associates under time stress to do just that.
Indeed, it will be too easy to let a “thinking partner” do the thinking for you, especially when pressed for time.
And if you don’t already have critical thinking skills and confidence in your own thinking, it’s likely you will just accept what a bot tells you as right. Overreliance and lack of learning.
As one of the study participants put it, “No critical reasoning, no belief in themselves and no confidence.”
So, What’s the Answer?

As one of my mentors once put it, the problem is the problem. And as reflected by the alignment of the LexisNexis and Thomson Reuters studies, certain realities are clear.
For example, young lawyers are going to use GenAI no matter what we do.
Another reality, as one of the LexisNexis participants put it: “we need to be deliberate about how we build judgment and strategic thinking alongside technical capability.”
Certainly true, but how do we do that? “How do firms redesign early legal careers so judgment is built, not bypassed? How do they embed verification, accuracy and critical reasoning in an AI-enabled workflow?”
Reading through the comments from the participants, I couldn’t help but think there is no consensus.
Some say more collaboration. Some say being clear on objectives and providing context. Some say teaching how to prompt. Some say guided decision making and structured feedback. Some say embedding the right mindset.
I even know a lawyer who runs a small firm who once told me we could just ban the use of GenAI until a lawyer has two or three years of experience. Nice idea, but not very enforceable or practical as client demands for the use of time-saving GenAI tools ratchet up.
All good ideas, but they all assume that more experienced lawyers can provide just these kinds of skills. And that assumption may not necessarily be correct.
Sure, more experienced lawyers who are familiar with GenAI tools and know how to use them can formulate better prompts and spot AI slop when they see it. But how many experienced lawyers have that underlying familiarity?
And if they don’t have it, how can they mentor younger lawyers correctly?
So, we need to start with the notion that older, more experienced lawyers need AI training just as much as, if not more than, younger lawyers. “They need to understand the strengths and limitations of AI, feel confident using it responsibly, and know how to review and refine outputs.”
Armed with these skills, then and only then can they mentor young lawyers using GenAI tools to develop the critical thinking skills and abilities they will need to perform well in the future.
Moreover, firms will need to realize that mentoring takes time and an investment in the future that, let’s face it, a lot of firms are not known for.
A Real Life Example

How could this work in real life? In today’s world, an associate is asked to do a first draft of a brief. They take it to the partner, who marks it up with a red pen and sends it back.
But in the future, more will be needed from the partner. The partner will need to be able to spot whether it looks like the associate over-relied on GenAI or whether the cites don’t look or sound right.
The partner will need to sit down with the associate and explain how they caught that overuse, why it didn’t look right, and the consequences of that, both with the client and the courts.
The Commitment

That takes a commitment that’s not there right now. The study points out an interesting gap in this regard about technology perceptions: 51% of associates say keeping pace with technological developments is a top challenge, while only 34% of leaders say the same.
In other words, while associates see technology as a top challenge, their leaders don’t share that concern. How can they lead and train if they aren’t equally invested in understanding what’s coming?
So, let’s start there: instead of looking at what we need to do to train young lawyers about technology that’s changing weekly if not daily, let’s develop a mindset among more experienced lawyers about technology and its impact.
To get there we need an attitudinal change: more experienced lawyers need to commit to learning and keeping up with technology.
They need to commit to mentoring new lawyers in different and more intense ways.
It means they need to seek to define what good lawyering is on a more consistent basis, as Jordan Furlong, one of the most astute observers of the legal scene, recently talked about. (Furlong will be a keynote speaker at the upcoming ABA TechShow in March. His topic: The Lawyers We’ll Need: Preparing the Legal Profession for a Post-AI World.)
They need to commit to their own training, something that they heretofore have not spent much time or energy on.
Law firms will have to recognize that future lawyers aren’t going to learn to be good lawyers in traditional ways.
They have to recognize that young lawyers are going to use GenAI tools and that this doesn’t necessarily mean they are going to produce better work faster. In fact, it may mean just the opposite.
And they have to know that young lawyers are going to make mistakes, mistakes that may be different from those of the past and that need to be spotted and fixed.
That takes an investment in the long-term development of lawyers in a hands-on way.
If we want young lawyers to develop the skills experienced lawyers have, let’s start with training the trainers.
Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
