Thomson Reuters White Paper: The Future Is Here – It’s Just Not Evenly Distributed – Above the Law

AI here, AI there, AI everywhere. That seems to be the trend. But are we willing to cede good lawyer skills to a bot? That seems to be a risk, according to a white paper from Thomson Reuters.

There’s a famous quote attributed to the science fiction writer William Gibson: “The future is already here – it’s just not evenly distributed.” The white paper demonstrates this very point: AI is eroding critical thinking skills at an alarming rate. The future will be distributed to those who figure out how to retain and enhance these skills.


The Paper

The white paper amplifies a troubling trend that I have discussed before: AI is eroding lawyers’ critical thinking skills. Reading the paper confirms what many, including me, have feared: “As AI becomes more capable, lawyers risk becoming less so.” Without these critical thinking skills, a lawyer simply cannot exercise the analytical skills needed to identify and define legal problems, much less find solutions.

The paper was written by Valerie McConnell, Thomson Reuters VP of solutions engineering and a former litigator, and Lance Odegard, Thomson Reuters director of legaltech platform services.


The Current Threat

The findings should scare the hell out of seasoned lawyers.

The headline? Research from the SBS Swiss Business School found significant correlations between AI use and cognitive offloading on the one hand and a lack of critical thinking on the other. Critical thinking down, cognitive offloading up.

McConnell says that “cognitive muscles can atrophy when lawyers become too dependent on automated analysis.” Odegard adds an even more concerning point: AI is different from previous technologies, given its speed and depth. And the fact that it can perform some cognitive tasks creates a greater risk of overreliance on it.

I recently attended a panel discussion of law librarians on the use of AI in their law firms. One telling remark: more experienced lawyers were able to form better prompts because they understood and could better articulate the problem than less experienced ones. And they could quickly determine whether the output was bogus: when it didn’t look or sound quite right. They developed these skills through a critical way of thinking built from seeing patterns and prior experiences. AI short-circuits and replaces those pattern-recognition experiences.

The classic example is the AI tool that explains a legal concept with certainty, but the explanation doesn’t look right to an experienced lawyer who has dealt with that concept and understands how and why it was developed.


The Accelerated Risks Of Agentic AI

But there’s more danger ahead, according to the paper. Agentic AI can perceive its environment, plan and execute complex multistep workflows, make real-time decisions and adapt strategies, and proactively pursue goals, all without human input. This means, according to the paper, that agentic AI could intensify cognitive offloading. In other words, we turn off our brains and let AI do the thinking for us. And as discussed before, we don’t have a clue how it is doing all this.

McConnell and Odegard believe agentic AI creates “unprecedented professional responsibility challenges.” How can lawyers ethically supervise these systems? What levels of competency will we expect and demand from human lawyers? How will lawyers ethically communicate with clients about strategies developed by the “black box”? Lawyers have an ethical duty to explain the risks and benefits of strategic options: how can we do that when those risks and benefits are developed in ways we don’t understand?

I recently wrote about the phenomenon of legal tech companies buying law firms and the danger of a diminished role for the lawyer in the loop. Agentic AI magnifies these dangers significantly.


Do We Need Critical Thinking?

As with any “truism,” it’s always useful to pause and reflect on whether it really is one: how much will future lawyers even need critical thinking skills when AI can do it for them?

McConnell and Odegard certainly believe that future lawyers will need these skills. They believe that AI cannot replicate these skills, nor can it yet replace the creativity and nuanced understanding of a good human lawyer.

I agree with them on this point. I see it frequently as AI spits out solutions as if handed down from above. And it sticks to its guns even when wrong. The fact that the tools are so easy and quick to use also makes it pretty tempting to just accept what they say without thinking it over. This is especially true for busy lawyers.

And that’s one reason we are continuing to see hallucinated cases cited in briefs and even judicial opinions.

But what happens when we rely on the bot instead of our own instincts born of experience? A few years ago, I trusted the handling of a significant hearing to local counsel. The day before the hearing, after talking to the local counsel, I got the feeling that something was not quite right. So, I quickly hopped on a plane and went to the hearing myself. Good thing: the local counsel didn’t show and instead sent a first-year associate to handle the critical hearing. I doubt a bot would have picked up that nuance.


The Risks For Future Generations

McConnell and Odegard also warn that overreliance on AI to replace these skills will erode younger lawyers’ development. It may result in lawyers depending too much on AI instead of thinking for themselves. It may produce “lawyers skilled at managing AI but lacking independent strategic thinking.”

I too have discussed this very real problem. Doing what many call scut work as a young lawyer was boring and tedious, but it helped you begin to see patterns that could be helpful later in similar circumstances.

But now we are urged to dump these tasks into a chatbot and forget them. The result in 10 years? Minds full of mush. The old notion of thinking like a lawyer may be replaced by thinking like a bot.

Another danger: the erosion of legal education. According to the paper, “students increasingly arrive with diminished critical thinking skills due to pre-law AI exposure while expecting to use AI tools throughout their careers.” If we don’t take steps to disrupt that expectation, we can be sure that these students, when they become lawyers, will continue to use AI tools in exactly the same way.


Can The Risks Be Managed?

To be fair, McConnell and Odegard believe these risks can all be managed by responsible use of existing AI tools. That may be true, but as with most technology, some lawyers and legal professionals will figure out how to do this and become future superstars. Many will not. And maybe that’s OK, since many legal jobs and much of the work done by humans will be replaced by AI.

Certainly, AI will allow lawyers and legal professionals to do the high-end work for which they were trained. But let’s be real here: there is not enough demand for the high-end work to go around. And many lawyers and legal professionals are not that good at it.


The Future: It Won’t Be Evenly Distributed

So, want to prepare for the future? Figure out how to encourage and develop critical thinking skills in your workforce in the age of AI. Figure out what to do when the only work to be done is high-end thinking. That means preparing for a law firm that looks very different from today’s.

Get ready for the future: it’s not going to be evenly distributed.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.