Like Lawyers In Pompeii: Is Legal Ignoring The Coming AI Trust Crisis? (Part III)

Picture this: A senior partner at a major firm now spends her evenings personally checking every citation in briefs drafted by associates. Or local counsel poring over the cites in a brief sent by national counsel. Or an overworked judge having to review the work of their clerk for accuracy. Why? Because none of them can trust that someone else hasn't used ChatGPT.

I have previously written about the risk that the legal AI volcano may be about to erupt, both because of an infrastructure gap and because the savings from AI tools will be more than offset by the cost of verifying their output, as discussed in a Cornell study.

But there's one more reason for concern: the reality of the verification requirement is creating a situation that's not sustainable. Lawyers simply can't check every citation themselves to ensure the necessary verification; the time and cost burden is too great. So not only will the cost of verifying exceed the AI savings, it will create a systemic breakdown of the trust relationships through which we have gotten work done for decades. This creates an impossible situation that threatens the entire AI adoption thesis.


Why the Bubble May Burst (Part III)

Why does the verification burden suggest that the AI bubble may be about to burst, and the volcano erupt? The way most lawyers and many judges traditionally work has been to rely on others for things like drafting and research. The associate. The law clerk. The national counsel. Indeed, there are reports of hallucinations contained in judicial opinions where the research and drafting were done by law clerks who, unbeknownst to the judges, used an LLM to assist in their work.

But we are already seeing that reliance break down as those with less experience and training take the easy way out and rely on ChatGPT, resulting in hallucinations and inaccuracies in important papers with far-ranging results. It only takes one slip-up by a super busy but otherwise high-quality associate who resorts to ChatGPT to lead to financial penalties for the senior lawyer and firm, if not worse.

The fact that the use of hallucinated and inaccurate cases is occurring so often suggests that more and more people are using LLMs to do things they should not be doing. And that suggests that the trust between partners and associates, local and national counsel, and judges and their clerks may erode if the use of AI continues on its present course.


The Risks May Be Too Great to Trust

As also pointed out in the Cornell study, because law requires such a high degree of accuracy, the impact and exposure from hallucinations are indeed significant, as discussed before. Courts are imposing large fines. There are ethical concerns. There is the publicity and embarrassment of the lawyers and their firms. There is the potential loss of business and even malpractice claims.

And as pointed out in the Cornell study, hallucinations in judicial opinions can have a cascading effect.

Because of the high risks, can any lawyer ever justify not verifying every citation in every pleading they sign? Can any judge? Given the risks and the number of reported cases, can anyone signing a pleading rely on someone else's representation that no AI tools were used in their work?

Consider the implications of this. Every lawyer signing every pleading and every judge signing every opinion must verify the citations and the output for accuracy. Rely on an associate to draft a brief and do the research? Check their cites. Rely on your law clerk to draft an opinion? Check the cites. Get a brief from national counsel when you're local counsel? Check the cites. It's not an excuse to say to the judge or the client that your ace associate dropped the ball and used ChatGPT a little too much.

But every lawyer verifying everything is simply not a workable or cost-effective system. And it's certainly not one that yields the savings that are being touted. In fact, it may end up being a more costly system.

It's not that AI is now too big to fail. It's that the risk of its use is too big to trust.


But What About Humans?

Why? When we rely on humans for these kinds of tasks, we have some element of trust in how they approach things, how they process problems and information. The likelihood a human will make up a fictitious case is pretty low: they understand the repercussions pretty well. ChatGPT doesn't.

The chances of a citation being inaccurate and not supporting the proposition for which it is offered are perhaps higher, but still low. They're certainly not as high as with AI. It's the consistency in thinking patterns, the transparency, that allows us to place that trust and reliance in fellow humans.

But that's not the case with AI. The verification problem destroys the trust in the output of anyone and everyone. The costs of verification are too great. The disruption to the process too great.

When I was an associate, I knew the cost of screwing up. I would never have dreamed of creating a fictitious case citation. None of us would. But in the age of AI, is it realistic to expect that overworked associates won't resort to an LLM in an unguarded moment? And picture local counsel getting a brief at 4 p.m. for a 5 p.m. filing, with no time to verify dozens of citations from lawyers they've never met. (And who might not get paid to verify anyway.)


What Can We Do?

No doubt AI is a good tool for some things. But as its flaws get exposed and the risks of its use are magnified, we may see the clock turned back on the riskier use cases. We may see the realization that it is simply not a viable tool where the risks of being wrong are not tolerable.

When the volcano of problems erupts, law firms and courts may conclude that the expensive tools causing the harm should be put away. But before the volcano erupts, smart lawyers may want to think twice about investing too heavily in AI, thinking it's a panacea for all the problems that beset the system, or buying into the hype. We're lawyers; risk avoidance and skepticism are what we do best. Don't leave them at the door just because it's AI that's knocking.

That rumbling sound you are hearing? That may be the volcano.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.




Melissa “Rogo” Rogozinski is an operations-driven executive with more than three decades of experience scaling high-growth legal-tech startups and B2B organizations. A trusted partner to CEOs and founders, Rogo aligns systems, product, marketing, sales, and client success into a unified, performance-focused engine that accelerates organizational maturity. Connect with Rogo on LinkedIn.