
In 2002, MCI WorldCom was charged with the largest accounting fraud in U.S. history. The company capitalized ordinary operating expenses, making it appear as if it were allocating more toward future business investments when it was not. The financials reported to investors were analogous to AI hallucinations: the numbers weren’t real.
Current Developments With Legal Hallucinations

In 2023, the first brief to include AI-generated cases was filed. It wasn’t the first time a brief had been submitted with mistakes, but it set a precedent: convincing AI-generated errors could now occur far more often. Since that first case, there have been over 1,200 reported instances of hallucinations making their way into filings.
In 2025, Graciela Dela Torre filed dozens of documents as a pro se litigant, allegedly using ChatGPT. Her filings related to a recently dismissed case with her insurance company. Some documents included hallucinated cases, but the sheer volume of filings was itself burdensome. This March, Nippon Insurance sued OpenAI over those filings, claiming, among other things, that OpenAI was engaging in the unauthorized practice of law.
Last week, a brief filed with the Sixth District Court of Appeals contained real citations but quoted sentences that do not appear in the cited sources. The brief was filed by an attorney who used a reputable legal research vendor’s AI offering.
AI Governance Is More Important Than Ever

The unfortunate reality is that hallucinations are a feature of LLM systems, not a bug. And they are very convincing. Law firms need to ensure they have strong processes around the use of AI.
Staff training needs to be ongoing. Greater emphasis needs to be placed on reviewing materials, precisely because creating documents with AI is faster and easier. That review process needs to define who reviews, when reviews happen, and against what standard.
In short, firms need to self-monitor to ensure hallucinations of all types don’t find their way into work product. This is not limited to court filings, either: contracts and legal advice provided to clients can also include hallucinations.
Legal Should Take A Cue From The Accounting Industry

The idea that critical errors can be hidden and need fact-checking is not unique to legal. Investors have long relied on a company’s financials and bookkeeping, and the accounting industry has been built around the need for trust in the numbers.
Not only are there Generally Accepted Accounting Principles (GAAP) that guide accounting, but there are also protocols for auditors to follow when independently attesting to the integrity of the numbers.
The issues with MCI WorldCom and Enron resulted in the passage of Sarbanes-Oxley. The accounting industry, which had previously been self-regulated, became regulated as a result.
Now, auditors review both the accuracy of the numbers and the processes used to produce them. If the processes are shaky, an auditor may be required to call out weaknesses in the controls the company has put in place. Large businesses also have internal auditors who serve as checks and balances, identifying issues before they reach an independent auditor.
Trust

Trust is at the foundation of our financial markets, and of our legal system as well. If it becomes harder to trust and validate the veracity of a legal document, what does that mean for our justice system?
It’s my view that we are at an inflection point where law firms and attorneys must up their game in how they review their work. With agentic solutions and client pressures, the amount of AI-assisted work product created will increase tenfold, or perhaps a hundredfold.
Validating citations and using Shepard’s or KeyCite is table stakes.
There are now independent systems on the market that can help with citation verification and hallucination detection.
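As a sketch of what automated quote verification can look like, the check below normalizes a quoted passage and the text of the cited source, then flags quotes that do not appear verbatim. This is a minimal illustration, not any vendor’s method; `fetch_source` is a hypothetical callable that returns the full text of a cited opinion.

```python
import re


def normalize(text: str) -> str:
    """Lowercase and strip punctuation/extra whitespace so minor
    formatting differences don't cause false mismatches."""
    text = re.sub(r"[^a-z0-9]+", " ", text.lower())
    return " ".join(text.split())


def quote_appears(quote: str, source_text: str) -> bool:
    """Return True if the normalized quote occurs verbatim in the source."""
    return normalize(quote) in normalize(source_text)


def check_brief(quotes_by_citation, fetch_source):
    """Flag every quoted passage that cannot be found in its cited source.

    `quotes_by_citation` maps a citation string to the quotes attributed
    to it; `fetch_source` (hypothetical) maps a citation to full text.
    """
    problems = []
    for citation, quotes in quotes_by_citation.items():
        source = fetch_source(citation)
        for quote in quotes:
            if not quote_appears(quote, source):
                problems.append((citation, quote))
    return problems
```

A check this simple only catches quotes that are absent outright; paraphrased or subtly altered quotations would still need human or model-assisted review.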
More firms should incorporate processes that interrogate work product and use adversarial approaches to root out issues and errors before a court, opposing counsel, or a client does. AI solutions can be adapted to support this function.
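One way to picture such an adversarial approach: a second model is prompted to disprove each claim in a draft, and anything it objects to is routed to a human. The sketch below is model-agnostic and hypothetical; `reviewer` stands in for whatever LLM client a firm actually uses, injected so the workflow itself stays testable.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Finding:
    """One objection raised by the adversarial reviewer."""
    claim: str
    objection: str


def adversarial_review(draft_claims: List[str],
                       reviewer: Callable[[str], str]) -> List[Finding]:
    """Run a second, adversarial model over each claim in a draft.

    `reviewer` is any callable (e.g., a wrapper around an LLM prompted
    to disprove the claim); by convention here it returns an empty
    string when it finds nothing to object to.
    """
    findings = []
    for claim in draft_claims:
        objection = reviewer(claim)
        if objection:
            findings.append(Finding(claim=claim, objection=objection))
    return findings
```

Injecting the reviewer also makes it easy to swap in a stricter model, or a battery of reviewers, without changing the workflow code.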
Organizationally, firms should perhaps consider an internal audit function that is structurally independent of practice areas, similar to those in corporations. Dare I suggest that there may be a need for systems that provide confidential, independent validation, similar to the role of financial auditors?
Shared Problems Benefit From Collaboration

Innovation will drive the development of solutions, especially if standards emerge. Each firm must solve the problem for itself, but leaders across the industry can leverage associations to work together on shared problems and best practices. (For example, the SALI Alliance is an existing forum used for data standards.)
Rule
11
And
Attorney
Ethics
The
ABA
has
provided
initial
guidance
on
professional
standards
for
AI
under
Rule
11.
Lawyers
know
what
they
are
responsible
for,
but
they
must
decide
how
to
meet
those
standards
because
there
is
no
formal
operational
guidance.
The accounting profession has GAAP as guidelines for accounting and financial reporting. Perhaps the ABA might eventually offer similar guidance on operationalizing Rule 11.
What guidance should exist? When should specific guidance begin to be offered? Can it start through industry collaboration? Here are a few ideas for consideration:
- Citation verification and fact-checking?
- Adversarial AI review (a second model tasked with disproving the first)?
- Sampling protocols for high-volume activity (e.g., mass tort, e-discovery summaries)?
- Document-level confidence scoring?
- Confidential and independent review?
- Human sign-off tied to defined review thresholds?
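To make one of these concrete: a sampling protocol for high-volume work can be as simple as deterministic, hash-based selection, so that the same documents are always chosen for a given review period and the sample is auditable after the fact. This is an illustrative sketch, not an endorsed standard; the `salt` naming a review period is an assumption.

```python
import hashlib


def selected_for_review(doc_id: str, rate: float, salt: str = "2025-Q1") -> bool:
    """Deterministically select roughly `rate` of documents for human review.

    Hashing the salted document ID gives a reproducible, auditable
    choice: the same doc_id and salt always yield the same answer,
    and changing the salt reshuffles the sample for a new period.
    """
    digest = hashlib.sha256(f"{salt}:{doc_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate


# Example: pull ~10% of a batch of e-discovery summaries for review.
batch = [f"DOC-{i:05d}" for i in range(1000)]
sample = [doc for doc in batch if selected_for_review(doc, 0.10)]
```

Because selection depends only on the ID and the salt, an auditor can later re-derive exactly which documents should have been reviewed.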
I’ve written elsewhere that innovation leads and regulation follows. But if AI innovation is going to cause friction or undermine trust in a way that can impede justice, then bar associations, regulators, or the courts may need to step in earlier.
Pro Se Litigants

Maybe courts need to consider some minimum standards before a pro se litigant can file using AI? Should there be a requirement and a mechanism to disclose that AI was used in creating a filing? Perhaps the federal system or a state court could offer a service for pro se litigants to use before filing? That could mitigate some of the downsides while supporting greater access to justice.
Summary

The current legal system is engineered for accuracy at the speed at which humans create documents. AI breaks that balance, automating more drafting and generating work product at a scale that overwhelms traditional validation methods. The review process needs automation to keep up.
Accounting has faced automation and complexity while adapting to maintain trust. Similarly, legal professionals will need tools that support more automated content creation while maintaining trust in the documents they produce.
Law firms need to protect their reputations and their clients, and the legal profession needs to ensure legal documents can be trusted. Just as investors need to have confidence in financial reporting, the legal industry will need greater confidence that hallucinations are manageable when AI is part of work-product creation.
The review of AI-generated work product may be the greatest systemic limitation the legal industry faces in AI adoption today.
Ken Crutchfield has over 40 years of experience in legal, tax, and other industries. Throughout his career, he has focused on growth, innovation, and business transformation. His consulting practice advises investors, legal tech startups, and others. A strategic thinker who understands markets and how to create products that meet customer needs, he has worked in startups and large enterprises and has served in general management capacities in six businesses.

Ken has a pulse on the trends affecting the market. Whether it was the Internet in the 1980s or generative AI, he understands technology and how it can impact business. Crutchfield started his career as an intern with LexisNexis and has worked at Thomson Reuters, Bloomberg, Dun & Bradstreet, and Wolters Kluwer. Ken has an MBA and holds a B.S. in Electrical Engineering from The Ohio State University.
