
In 2026, the use of generative artificial intelligence (AI) in law firms is becoming commonplace.
Data
from
the
soon-to-be-released
8am
2026
Legal
Industry
Report
shows
that
the
majority
of
legal
professionals
have
personally
used
generative
AI
for
work-related
purposes.
If
you’re
one
of
those
lawyers,
you’ve
undoubtedly
discovered
that
AI
tools, both legal-specific and general-purpose,
can
rapidly
draft
legal
briefs
that,
at
first
glance,
are
thorough
and
convincing.
But
look
closer,
and
you’ll
find
that
the
output
often
includes
inaccurate
information,
including
fake
case
citations,
misquotes,
and
misstated
legal
principles.
Unfortunately, not all lawyers take that closer look,
ultimately
failing
to
realize
that
what
appears
to
be
a
high-quality
legal
document
is,
in
fact,
a
house
of
hallucinated
cards.
Overwhelmed
by
looming
deadlines
and
full
caseloads,
they’ve
conducted
cursory
reviews
of
AI-drafted,
mistake-ridden
briefs
and
unknowingly
submitted
them
to
the
courts.
Don’t
make
that
mistake!
AI can assist with traditional legal research, but it does not replace the need to verify every cited authority.
You
must
review
the
cases,
laws,
and
regulations
to
confirm
their
accuracy
and
applicability
to
the
issues
at
hand.
But
don’t
take
my
word
for
it.
Let’s
see
what
the
judges
have
to
say
about
your
ethical
obligations
when
using
AI
tools
as
part
of
the
legal
research
process.
First,
there’s
Special
Master
Michael
R.
Wilner,
former
United
States
Magistrate
Judge
for
the
Central
District
of
California.
He
recently
expressed
his
frustration
with
briefs
submitted
to
the
court
that
included
“bogus
AI-generated
research.”
In
Lacey
v.
State
Farm
General
Ins.
Co.,
Case
No.
CV
24-5205
FMO
(C.D.
Cal.
May
5,
2025),
he
granted
the
motion
to
strike
the
offending
attorneys’
supplemental
briefs,
denied
their
discovery
motion,
and
imposed
sanctions
in
the
amount
of
$26,100
to
reimburse
the
court
for
its
time
and
$5,000
in
fees
for
opposing
counsel.
The
Special
Master
explained
the
rationale
for
his
decision:
“The
initial,
undisclosed
use
of
AI
products
to
generate
the
first
draft
of
the
brief
was
flat-out
wrong.
Even
with
recent
advances,
no
reasonably
competent
attorney
should
out-source
research
and
writing
to
this
technology
—
particularly
without
any
attempt
to
verify
the
accuracy
of
that
material.
And
sending
that
material
to
other
lawyers
without
disclosing
its
sketchy
AI
origins
realistically
put
those
professionals
in
harm’s
way.”
United States Magistrate Judge Mark J. Dinsmore of the Southern District of Indiana was similarly displeased in
Mid
Cent.
Operating
Eng’rs
Health
&
Welfare
Fund
v.
HoosierVac
LLC,
No.
2:24-cv-00326-JPH-MJD
(S.D.
Ind.
Feb.
21,
2025).
He recommended that the attorney before the court, who had submitted hallucinated briefs on three occasions, be personally sanctioned in the amount of $15,000.
According
to
Judge
Dinsmore,
“It
is
one
thing
to
use
AI
to
assist
with
initial
research,
and
even
non-legal
AI
programs
may
provide
a
helpful
30,000-foot
view.
It
is
an
entirely
different
thing,
however,
to
rely
on
the
output
of
a
generative
AI
program
without
verifying
the
current
treatment
or
validity
—
or,
indeed,
the
very
existence
—
of
the
case
presented.
Confirming
a
case
is
good
law
is
a
basic,
routine
matter
and
something
to
be
expected
from
a
practicing
attorney.”
The
need
to
carefully
review
material
cited
in
AI-generated
legal
documents
was
also
emphasized
in
N.Z.
v.
Fenix
Int’l
Ltd.,
8:24-cv-01655-FWS-SSC
(C.D.
Cal.
December
25,
2025).
The
court
determined
that
sanctions
were
appropriate
because
the
attorney
“used
ChatGPT
to
assist
in
drafting
the
opposition
briefs
but
failed
to
verify
the
validity
of
the
AI-generated
material
…
and
failed
to
realize
when,
and
to
what extent,
ChatGPT
was
modifying
her
research/writing
—
supplementing
and/or
cross-pollinating
concepts
and
authorities.”
Courts
and
ethics
committees
have
made
one
point
unmistakably
clear:
using
AI
does
not
change
your
duties
or
lower
the
standard
of
competence.
Every
case,
citation,
and
legal
proposition
must
still
be
read,
checked,
and
confirmed
as
part
of
your
professional
and
ethical
obligations.
Generative
AI
does
not
exercise
legal
judgment,
and
it
cannot
tell
you
what
the
law
is,
whether
a
case
exists,
or
whether
it
applies
to
your
facts.
That
responsibility
remains
with
the
lawyer.
You
bear
responsibility
for
the
finished
work
product,
and
AI
has
not
changed
that
fact.
The
buck
stops
with
you.
Nicole
Black is
a
Rochester,
New
York
attorney
and
Principal
Legal
Insight
Strategist
at 8am, the team behind MyCase, LawPay, CasePeer, and DocketWise.
She’s
been blogging since
2005,
has
written
a weekly
column for
the
Daily
Record
since
2007,
is
the
author
of Cloud
Computing
for
Lawyers,
and co-authors Social Media for Lawyers: the Next Frontier and Criminal Law in New York.
She’s
easily
distracted
by
the
potential
of
bright
and
shiny
tech
gadgets,
along
with
good
food
and
wine.
You
can
follow
her
on
Twitter
at @nikiblack and
she
can
be
reached
at [email protected].
