Survey Finds Majority of Federal Judges Have Used AI in Their Work, But Daily Use Remains Rare

A first-of-its-kind random-sample survey of federal judges has found that more than 60% have used generative artificial intelligence tools in their judicial work, though fewer than one in four use these tools on a daily or weekly basis.

The study, conducted by researchers at Northwestern University in collaboration with the New York City Bar Association, provides an empirical snapshot of how AI is being integrated (and not integrated) into federal court chambers.

The research, “Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges,” forthcoming in Volume 27 of The Sedona Conference Journal, surveyed 502 randomly selected bankruptcy, magistrate, district court and court of appeals judges in late 2025. Of those, 112 responded, for a 22.3% response rate.

The survey found that 61.6% of responding judges use at least one AI tool in their judicial work. Of those, however, few use it frequently. Only 5.4% reported daily use, while 17% use AI tools weekly. Another 19.6% use AI monthly, and the same percentage use it rarely. The remaining 38.4% reported never using any of the listed AI tools in their work.

“Although a majority of responding judges at least occasionally use AI tools in their judicial work, relatively few report using AI on a daily or weekly basis,” the report states. “This pattern suggests that AI is present in federal judicial chambers but not yet a routine, embedded part of most judges’ decision-making processes.”

A Preference for Legal AI Tools

The survey found a clear preference among judges for legal-specific AI tools integrated into established research platforms rather than general-purpose AI systems such as ChatGPT.

That said, while Westlaw AI-Assisted Research or Deep Research was the most commonly used tool, with 38.4% of judges reporting some level of use, ChatGPT came second at 28.6%.

However, the frequency of use differs between legal-specific and general tools. For legal-specific AI tools, 5.4% of judges reported daily use and 9.8% reported weekly use. For general-purpose AI tools, only 0.9% reported daily use and 9.8% reported weekly use.

“This pattern indicates that vendor familiarity and perceived reliability may strongly shape which AI tools judges are willing to deploy in chambers,” the report notes.

Other AI tools showed minimal adoption. Anthropic’s Claude was used by only 0.9% of judges, all at a frequency of “rarely.” Harvey and Legora showed 0% usage across all responding judges. Vincent AI (vLex) similarly showed only 0.9% rare usage.

Legal Research Dominates Usage

When asked about specific applications, judges overwhelmingly pointed to legal research as their primary AI use case. Thirty percent of judges reported using AI to conduct legal research, making it the most common application by a significant margin.

Document review came in second at 15.5%, followed by drafting documents not filed in cases (7.3%), summarizing text or audio (7.3%), and preparing case timelines or chronologies (5.5%).

Notably, judges reported minimal use of AI for drafting or editing documents that are filed in cases. Only 1.8% reported using AI to draft filed documents such as orders, opinions or judgments, and 2.7% reported using AI to edit such documents.

This contrasts with higher rates for non-filed documents: 7.3% use AI to draft letters, emails or articles, and 4.5% use AI to edit such materials.

The survey also found that 1.8% of judges reported using AI to “make decisions,” while 4.5% reported using AI to “inform decisions.”

Staff Show Similar Patterns

Judges reported slightly less AI use by themselves than by others in their chambers. While 50.9% of judges said they do not use AI in their work, a somewhat lower 45% reported that others in their chambers do not use AI.

Legal research remained the top use case for chambers staff at 39.8%, followed by document review at 16.7%. The patterns largely mirrored judges’ own usage, though judges reported that staff use AI for legal research approximately 10 percentage points more frequently than judges themselves do.

Several judges indicated uncertainty about how their staff actually use AI. One responded simply, “I am not certain whether they use any type of AI.” Another recounted an incident where “my law clerk wrote a memo for me, and then after she finished, out of curiosity, she asked AI to write a memo on the same question. Of the 11 cases AI cited in its version, 10 of them were fake.”

Training Gap Identified

The survey revealed what the researchers describe as “unmet demand” for AI training in the judiciary. Nearly half of judges (45.5%) reported that AI training had not been provided by court administration, and an additional 15.7% were unsure whether training had been offered.

Among the 38.9% who recalled training being offered, a significant majority (73.8%) attended. This suggests that when training is provided and visible, judges are receptive to it.

Training availability and attendance varied by judge type. Magistrate judges reported the highest rate of attending training at 40%, followed by bankruptcy judges at 36.7%. District court judges reported attending at a lower rate of 16.7%.

Chambers Policies: A Mixed Picture

The survey found no dominant approach to AI governance within chambers. Approximately one-third of judges either permit and encourage (7.4%) or permit (25.9%) AI use by those working in their chambers. Another third either formally prohibit (20.4%) or discourage but do not formally prohibit (17.6%) AI use.

One in four judges (24.1%) reported having no official policy on AI use. If those who merely discourage AI without formal prohibition are included, 41.7% of judges lack an official AI policy.

Several judges who selected “permitted” or “permitted and encouraged” described significant limitations. One wrote: “I have a firm policy, though, against AI generating content of orders, opinions, or communications.”

Another specified that AI is “permitted and encouraged, but within very narrow guardrails. Only as part of Westlaw or Lexis research tools, and only to summarize voluminous materials.”

Similarly, some judges who selected “formally prohibited” carved out exceptions. One noted: “My clerks can use AI for legal research (Westlaw) but not for other functions.”

Another wrote: “It’s fine to use for something like a poem celebrating a birthday or anniversary. But I do not permit it for case-related work.”

Personal Use Correlates with Professional

The survey found a statistically significant correlation between judges’ personal and professional AI use. Applying a chi-square test of independence, the researchers found what they described as “strong statistical evidence” of an association, and a follow-up measure of effect size, Cramér’s V, indicated that the association between personal and professional use was moderate in strength.
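
For readers curious how those two statistics fit together, here is a minimal Python sketch computing a chi-square test and Cramér’s V from a small contingency table. The counts are invented for illustration (the study’s raw responses are not public), but the mechanics match the tests the report names: the chi-square p-value speaks to whether an association exists, while Cramér’s V rescales the chi-square statistic into a 0-to-1 measure of its strength, where values around 0.3 to 0.5 are conventionally read as moderate.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x3 table: rows = personal-use frequency bands,
# columns = professional-use frequency bands. Counts are illustrative only.
table = np.array([
    [20, 10,  4],
    [ 8, 25,  9],
    [ 4,  9, 23],
])

chi2, p_value, dof, _ = chi2_contingency(table)

# Cramér's V: chi-square normalized by sample size and table dimensions.
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, Cramér's V = {cramers_v:.2f}")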

Overall, 38% of judges reported using AI daily or weekly outside of work. When asked about personal AI uses, judges described a wide range of applications: trip planning, restaurant recommendations, general knowledge searches, drafting personal correspondence and household questions.

One judge who uses AI daily outside work wrote: “I use them every day to get answers to questions as they pop up throughout the day. I do not ever use AI to work on my cases.”

One in five judges (20.4%) reported never using AI in either their personal lives or their work.

A Split Between Optimism and Concern

When asked about their general outlook on AI’s potential for the judiciary, judges were nearly evenly divided. Slightly more than 43% expressed optimism (13% very optimistic, 30.6% somewhat optimistic), while approximately 42% expressed concern (13.9% very concerned, 27.8% somewhat concerned). Another 14.8% were neutral.

The free-response comments revealed recurring themes on both sides.

Optimistic judges emphasized efficiency gains and research capabilities. One wrote: “Summarizing trial transcripts and voluminous documents and pinpointing instances of specific testimony in a closed universe environment is a huge time saver.”

Another noted: “I believe it will be a significant benefit to conserving judicial resources. So long as accuracy can be confirmed.”

Concerned judges focused primarily on hallucinations and skill atrophy. One wrote: “The consistent reports of zombie cases and other instances where AI conjures law or facts is terrifying and forms the basis for how we use AI in chambers.”

Another expressed worry about broader effects: “My [spouse] teaches and has sensitized me to the harmful effects that AI is having on students’ ability to think and write for themselves. The undergraduate students of 2025 are the law clerks of 2030, so yes, I’m concerned.”

Several judges expressed mixed feelings. One neutral respondent wrote: “I’m optimistic that AI can help us become more efficient …, but I am highly concerned that AI is causing younger generations of lawyers and laypeople not to think critically and to lose essential research and writing skills.”

One very concerned judge wrote: “If I had published an opinion with hallucinated citations, I’d have to give serious consideration to resigning.”

Differences Across Judge Types

The survey revealed variations in AI adoption and attitudes across different categories of federal judges, though the researchers caution that some findings (particularly for court of appeals judges, where only six responded) should be viewed as anecdotal rather than representative.

Bankruptcy judges showed the highest rate of daily or weekly AI use at 32.2%, compared to 21.9% for magistrate judges and 13.9% for district court judges. Conversely, 46.5% of district court judges reported never using AI in their work, compared to 35.5% of bankruptcy judges and 37.5% of magistrate judges.

On outlook, magistrate judges were more optimistic than concerned (46.7% versus 30%), while bankruptcy judges (50% concerned versus 40% optimistic) and district court judges (47.6% concerned versus 40.5% optimistic) leaned toward concern.

Other AI Tools and Use Cases

When given the opportunity to describe other AI tools and uses, some judges identified applications beyond the survey’s listed options. One judge mentioned using Speechify, an AI-based text-to-speech tool, on a weekly basis. Several described using AI for preparing presentations, talks and CLE program outlines – activities related to but distinct from case work.

One judge raised a definitional question: “It depends on how you define AI tools. I assume you’re referring to Generative AI. Even assuming it’s Gen AI you’re concerned with, would text prediction be included?”

Limitations Acknowledged

The researchers acknowledged several limitations. The 112-judge sample, while providing a foundation for analysis, carries a margin of error of approximately ±9% at a 95% confidence level for the overall findings. Margins of error are larger for specific judge types, and findings for court of appeals judges (six respondents) cannot be considered representative.
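
As a rough sanity check on that ±9% figure, the worst-case margin of error for a proportion can be computed directly, here with a finite population correction for the 1,738-judge population described in the methodology section. This is a back-of-the-envelope sketch, not the authors’ published calculation:

import math

def margin_of_error(n, N, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error at roughly 95% confidence,
    with a finite population correction for sampling without replacement."""
    standard_error = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((N - n) / (N - 1))
    return z * standard_error * fpc

# 112 respondents drawn from a population of 1,738 federal judges
print(f"±{margin_of_error(112, 1738):.1%}")  # prints ±9.0%, matching the report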

The researchers also noted potential biases, including self-selection (judges with strong opinions about AI may have been more likely to respond) and social desirability bias (judges might under- or over-report AI use based on how they perceive such use is viewed).

The study was limited to federal judges and did not include Supreme Court justices, Court of International Trade judges, or state court judges.

Methodology

The survey was conducted between Dec. 1 and Dec. 19, 2025. Researchers used a stratified random sampling method, selecting approximately 29% of judges from each category (bankruptcy, magistrate, district court and court of appeals) from a compiled population of 1,738 federal judges.
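
In code terms, stratification of this kind simply draws the same fraction from each judge category so that no category is over- or under-represented in the sample. A minimal sketch, with made-up roster sizes that happen to sum to 1,738 (the actual rosters and any random seed are the researchers’ own):

import random

def stratified_sample(judges_by_type, fraction=0.29, seed=2025):
    """Draw the same fraction of judges from each category."""
    rng = random.Random(seed)
    return {
        judge_type: rng.sample(roster, round(len(roster) * fraction))
        for judge_type, roster in judges_by_type.items()
    }

# Hypothetical roster sizes summing to 1,738
population = {
    "bankruptcy": [f"B{i}" for i in range(315)],
    "magistrate": [f"M{i}" for i in range(520)],
    "district":   [f"D{i}" for i in range(724)],
    "appeals":    [f"A{i}" for i in range(179)],
}
print({k: len(v) for k, v in stratified_sample(population).items()})
# -> roughly 29% of each category, about 504 judges in total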

The survey featured both multiple-choice and free-response questions and was approved by Northwestern University’s Institutional Review Board. Only the Northwestern researchers had access to the unprocessed data; other authors and collaborators received only aggregated visualizations and de-identified individual responses.

The research was conducted by Anika Jaitley, research assistant for the Law and Technology Initiative at Northwestern University Pritzker School of Law; Daniel W. Linna Jr., professor of instruction and director of Law and Technology Initiatives at Pritzker; U.S. District Judge Xavier Rodriguez of the Western District of Texas; V.S. Subrahmanian, Walter P. Murphy professor of computer science at Northwestern University and Buffett faculty fellow at Northwestern’s Buffett Institute for Global Affairs; and Siyu Tao, law student and research assistant at Pritzker.

HIPAA Security Rule 2026: The Law Firm Compliance Checklist

The 2025 HIPAA Security Rule updates take effect in 2026 and introduce stricter technical safeguard requirements for any entity that handles electronic protected health information (ePHI), including law firms that receive medical records.

This checklist from our friends at LlamaLab helps your firm assess exposure and close gaps before OCR enforcement ramps up.

Get the checklist today!

Prestigious Biglaw Firm Ups The Ante On Six-Figure Bonuses For Federal Law Clerks

One of the many benefits of having a federal clerkship is the extra bonus you’ll receive if you decide to head to a Biglaw firm post-clerkship. Many elite firms really want people with highly demanding clerkship experience to work for them, and that’s why the high end of bonuses for federal clerks who decide to join some Biglaw and boutique firms post-clerkship can reach six figures. To that end, we’ve got exciting news about yet another litigation powerhouse that’s boosted its clerkship signing bonus.

Susman Godfrey – a firm that recently decided to reject Biglaw’s broken recruiting model in favor of a solution that should work for law students and law firms alike – has also announced that it will increase its signing bonus for all federal district or appellate clerkships to $180,000 (up from $125,000). For two or more qualifying clerkships, the firm will offer an additional $20,000, for a total signing bonus of $200,000 (up from $150,000). Click here to see more information on the firm’s U.S. compensation and benefits.

“Every one of our associates has completed at least one (and sometimes several) federal clerkships,” Hunter Vance, partner and co-chair of the firm’s employment committee, told Above the Law. “This increase in clerkship bonuses reflects the firm’s commitment to paying these superstar associates what they are worth: at the very top of the market.”

So, which other firms are offering six-figure bonuses to former clerks? Susman now joins Hueston Hennigan in offering a market-leading $180,000 bonus to federal clerks who join the firm. Quinn Emanuel offers an impressive $175,000 bonus for a single year of clerkship experience, with an additional $25,000 if the applicant completes a second qualifying clerkship. Boies Schiller offers a $150,000 bonus for all federal clerkships, or $175,000 for those who have completed multiple clerkships. Plaintiffs firm Dovel & Luner offers $140,000 as a clerkship bonus. Cravath offers clerkship bonuses of $125,000, while those who have completed a clerkship of two years or two one-year clerkships will receive a bonus of $150,000. Munger Tolles pays a bonus of $125,000 for a single federal clerkship, and $150,000 for those with two federal clerkships under their belt. Fish & Richardson offers a $115,000 clerkship bonus, but that only applies to folks with Federal Circuit experience and it requires two years of service as a clerk. Robins Kaplan offers $100,000 bonuses to former federal clerks.

With the rush on top talent in a still hot lateral market, what are the other firms waiting for? Don’t they want to capture some of the magic that former federal clerks can offer? If you have information about any firm’s clerkship bonuses, you should email us or text us (646-820-8477) with all the details. Thanks.





Staci Zaretsky is the managing editor of Above the Law, where she’s worked since 2011. She’d love to hear from you, so please feel free to email her with any tips, questions, comments, or critiques. You can follow her on Bluesky, X/Twitter, and Threads, or connect with her on LinkedIn.

Judicial Nominee’s Twitter Fingers Come Back To Haunt Her

Last week, Trump judicial nominee Kara Westercamp had the ignominious task of apologizing for her social media use in her appearance before the Senate Judiciary Committee, which is deeply millennial-coded. Westercamp, currently serving in the White House Counsel’s Office, is Donald Trump’s pick for a lifetime appointment on the U.S. Court of International Trade. And yes, the subject of her questionable Twitter account absolutely came up.

According to reporting from Balls & Strikes, Westercamp’s social media history is a deeply online hodgepodge of far-right talking points. And though, yes, she took the CYA step of protecting her tweets, the internet has a way of remembering (it’s the Wayback Machine).

Also scattered across Westercamp’s Twitter timeline between October 2016 and February 2023 are tweets and retweets that (among many other things) question the results of the 2020 election, parrot transphobic talking points, sympathize with January 6 insurrectionists, and generally express unbridled enthusiasm for Trump and his political movement.

Plus she refers to Senator Mitch McConnell as “Cocaine Mitch,” a nickname that, while not exactly obscure in certain corners of the internet, tends to raise eyebrows when you’re asking that same Senate to hand you a lifetime appointment. She also took swings at Senate Democrats as well as Lindsey Graham and Susan Collins, proving once again that bipartisan snark is still… snark.

To her credit (or at least to her survival instincts), Westercamp came to the hearing prepared to eat a healthy portion of crow.

“I do sincerely apologize for those posts,” she told the committee, emphasizing they were made in her “personal capacity.” She added that she has “seriously considered” deactivating her X account.

But the real trouble started when the committee’s ranking member, Dick Durbin, turned the conversation to January 6. Specifically, Westercamp’s apparent amplification of posts downplaying the violence of the Capitol attack.

Westercamp insisted she condemns the violence of that day, but when pressed on whether she would reject conspiracy theories suggesting law enforcement, rather than rioters, were responsible, she sidestepped. The retweets, she explained, came from “people I don’t know,” and she now regrets sharing them.

That’s… not exactly the full-throated rejection of conspiracy nonsense one might hope for from a would-be federal judge. Or, frankly, from anyone with a law license.

Look, everyone understands that lawyers are human beings who occasionally say controversial things online. (Some of us even make a career out of it.) But there’s a difference between a stray hot take and a pattern of posts that call into question your judgment… especially when you’re angling for a lifetime gig interpreting federal law.




Kathryn Rubino is a Senior Editor at Above the Law, host of The Jabot podcast, and co-host of Thinking Like A Lawyer. AtL tipsters are the best, so please connect with her. Feel free to email her with any tips, questions, or comments and follow her on Twitter @Kathryn1 or Mastodon @[email protected].

Judge grants Anthropic preliminary injunction but Pentagon CTO says ban still stands

WASHINGTON – Federal Judge Rita Lin issued a sweeping preliminary injunction in Anthropic’s favor Thursday, the latest move in the weeks-long conflict between the AI company and the US government.

“The record strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that [the government’s] real motive was unlawful retaliation,” Lin, who was appointed by former President Joe Biden, wrote in the 48-page order [PDF].


By granting the preliminary injunction, she found Anthropic was “likely to succeed” in its lawsuit against the government, and therefore the 17 federal agencies named as defendants (from the Pentagon to the National Endowment for the Humanities) are not allowed to implement the orders designating Anthropic as a supply chain risk until the lawsuit is decided.

After Anthropic refused to accept new contract language allowing “all lawful use” of its Claude AI by the military, President Donald Trump in a Feb. 27 Truth Social post directed federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology,” and Defense Secretary Pete Hegseth posted on X that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Then on March 4, two formal letters from the administration simultaneously designated Anthropic as a Supply Chain Risk under two statutes: Title 41, Section 4713 (41 USC 4713), which covers the federal government as a whole (covering Trump’s order), and Title 10, Section 3252, which spells out a streamlined process for use solely by the Department of Defense (covering Hegseth’s).

While the official designation was less harsh than envisioned, Anthropic CEO Dario Amodei said he still intended to sue the government to overturn the decision. The company filed two separate lawsuits: a general one in the Northern District of California and one in the DC Circuit specifically on the Sec. 4713 designation.


Just hours after Thursday’s injunction in the California case, Undersecretary of Defense and Chief Technology Officer Emil Michael, the Pentagon’s point man in the dispute, posted on X that Lin’s order contained dozens of factual errors and that “the Supply Chain Risk designation is in full force and effect” under Sec. 4713, which he claimed was not subject to her jurisdiction in any case. When asked for comment, a Pentagon spokesperson referred Breaking Defense to Michael’s X posts.


An Anthropic spokesperson told Breaking Defense the company is “still waiting on the decision on the DC circuit.

“We’re grateful to the [California] court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” the spokesperson said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Legal opinion on Lin’s order is divided.

“Some smart lawyers I’ve talked to about this think that Judge Lin’s injunction basically just doesn’t cover the other (41 USC 4713) designation at all and that only a DC Circuit stay could affect that designation,” said Charlie Bullock, a senior fellow at the Institute for Law and AI. “So under that theory, Anthropic is in a pretty similar position today to the position they were in [Wednesday], practically speaking.”



RELATED: Trump admin’s comments could undermine case against Anthropic in court: Experts

On the other hand, Bullock said in an email to Breaking Defense, “Judge Lin’s order can be interpreted to enjoin DoW [the Department of War] from enforcing the 4713 designation.”


In her ruling Thursday, Lin imposed a seven-day stay on her own order, meaning her preliminary injunction doesn’t go into effect for a week.

“That’s not too uncommon,” said Sean Timmons of Tully Rinckey, a former military JAG who now regularly represents current and former servicemembers against the government. “It gives everybody time to file appropriate pleadings for either reconsideration or appellate intervention.”


In this case, the appellate court for Lin’s ruling would be the federal 9th Circuit, which is comprised of appointees from across the political spectrum, Timmons said, making its rulings harder to predict. “I don’t think they’d be inclined to grant the government relief,” he added.

Of the lawsuit overall, he said, “this could drag out for a year or two. In the meantime damages continue to incur, and the government could be liable for a breach of contract and ultimately payment for the money lost by Anthropic.”

Morning Docket: 03.31.26

* Over half of all federal judges report using AI as part of their workflow. [Reuters]

* Kirkland snags several Latham lawyers in Houston. [American Lawyer]

* New wrinkle in Charlie Kirk case as defense argues ATF couldn’t match bullet to alleged shooter’s gun. [Politico]

* Judge invokes Kafka in Defense Department press credential dust-up. [Law360]

* Meta counsel says AI has changed the outsourcing game. Notice how she didn’t say “the Metaverse” has changed anything. [Legaltech News]

* Baldoni lawyers handed a little benchslap in Lively case. [Bloomberg Law News]

Thank You For Your Service. Now Get Out! – See Also

Judge Gets Mad At IT Guy For Doing His Job: Gotta love a judicial temper tantrum and a hot mic!

Can The President Put Lawyers On A Leash?: Fighting the executive orders is a big deal for the profession.

Democrats Mad Trump Didn’t Steal More Earlier: Feckless party.

Trump Lawyers Cite Confederate Legal Theorists To Attack Birthright Citizenship: Citing Alexander Porter Morse was a definite choice.

General Counsel Nearly Doubles His Salary After Closing Major Acquisition

(Image via Getty)

Ed. Note: Welcome to our daily feature Trivia Question of the Day!

Capital One General Counsel Matthew Cooper’s compensation package went up a stunning 93% last year in recognition of closing the acquisition of which rival credit card company?

Hint: Cooper’s payday rose to $15.6 million in 2025, and he took over the task of integrating the newly acquired business into Capital One.

See the answer on the next page.

Why Realistic Scenarios Matter More Than More AI

Legal AI is often evaluated by scale. Bigger models. More data. Longer lists of capabilities. Demos emphasize volume: how many questions a system can answer, how many issues it can spot, how fast it can respond.

That framing misses the real constraint.

The problem with most legal AI tools is not that they are insufficiently powerful. It is that they are insufficiently grounded in realistic scenarios. More AI does not compensate for shallow context.

This became clear during a series of empirical classroom pilots run through Product Law Hub using an AI-based legal coach called Frankie. The pilots were designed to observe how users engage with AI while learning judgment-based legal skills. The findings draw on quantitative engagement data and qualitative interviews conducted during and after the course.

The signal was consistent. Fewer, richer scenarios produced deeper engagement, stronger reasoning, and higher trust than high-volume question sets ever did.


Volume Looks Impressive. Scenarios Do The Work.

In demos, volume is persuasive. A system that can answer dozens of questions in seconds feels powerful. Buyers infer competence from speed and breadth.

In the classroom, that illusion collapsed quickly.

When students were presented with large numbers of short, repetitive prompts, engagement dropped. Sessions shortened. Follow-up questions declined. Interviews revealed a common reaction: the interactions felt mechanical, even when the content was correct.

By contrast, when students were given fewer scenarios with richer context, they stayed longer and worked harder. They revisited assumptions, asked clarifying questions, and refined their analysis. The difference was not sophistication of the model. It was quality of the situation.


Ambiguity Invites Judgment

The most effective scenarios shared a common feature. They were ambiguous.

Exercises that included stakeholder disagreement, incomplete information, or competing incentives consistently outperformed cleaner hypotheticals. Students leaned in when they had to decide what mattered, not when they were asked to identify what applied.

Quantitative data showed higher completion rates and longer session times for these scenarios. Qualitative interviews confirmed that students found them more credible and more useful. They felt closer to real work.

Legal judgment does not emerge from clean facts. It emerges from tension. AI that avoids ambiguity to simplify interactions undermines the very skill it claims to support.


Repetition Erodes Trust Faster Than Difficulty

One of the more counterintuitive findings was how users responded to difficulty versus repetition. Hard problems did not drive disengagement. Repetitive ones did.

When scenarios reused the same structure or language, users quickly lost trust. Even minor variations felt shallow. The system appeared inattentive, as though it were pattern-matching rather than reasoning.

In contrast, users tolerated complexity and uncertainty when the scenario felt authentic. They did not expect the AI to make the problem easier. They expected it to take the problem seriously.

This distinction matters for buyers evaluating tools. A demo that showcases dozens of similar questions may signal capability, but it does not predict sustained use.


Realism Is Not About Polish

It is tempting to equate realism with polish. Better UX. Cleaner flows. More reassuring language. The pilot suggests the opposite.

Realism came from friction. Stakeholders who disagreed. Constraints that could not be optimized away. Tradeoffs that had no clean resolution. When the AI engaged with those elements instead of smoothing them over, users trusted it more.

This mirrors real legal work. Lawyers trust colleagues who acknowledge uncertainty and wrestle with it. They distrust those who offer tidy answers to messy problems.

AI that prioritizes smoothness over substance feels less credible, not more.


Scenario Quality Shapes Learning And Trust

The classroom setting made visible something that is harder to detect in practice. Scenario quality shapes not just learning outcomes, but trust in the system itself.

When scenarios felt generic, users disengaged cognitively. When scenarios felt grounded, users attributed more intelligence to the system, even when its responses were constrained.

Trust followed attention. Systems that appeared to understand the situation earned credibility. Systems that recycled patterns lost it.

This has implications beyond education. In firms, scenario quality influences whether lawyers treat AI as a serious tool or a novelty. High-volume outputs cannot compensate for shallow context.


Why Buyers Should Rethink Evaluation Criteria

Legal tech buyers often ask how many use cases a tool supports. A better question is how well it handles one difficult case.

The Product Law Hub pilot suggests that depth beats breadth when it comes to judgment-based work. Tools that invest in realistic, high-fidelity scenarios deliver more value than tools that chase coverage.

That may require different procurement thinking. Scenario design is harder to evaluate than feature lists. It does not demo well in five minutes. But it predicts long-term usefulness far better than model size.


The Quiet Cost Of Shallow Scenarios

The cost of shallow scenarios is not just wasted time. It is missed development.

Junior lawyers do not build judgment by answering dozens of simplified questions. They build it by grappling with realistic situations that force prioritization and explanation. AI that substitutes volume for realism accelerates output without accelerating growth.

The classroom data made this visible early. In practice, the cost shows up later as stalled development and diminished confidence.


The Takeaway Vendors Do Not Want To Hear

The uncomfortable takeaway from the pilot is that scenario design matters more than AI sophistication. Bigger models will not fix shallow context. Faster answers will not build judgment.

Legal AI that succeeds will not be defined by how much it can do, but by how well it can inhabit realistic situations and resist the urge to oversimplify them.

More AI is easy to sell. Better scenarios are harder to build. The data suggests they are worth the effort.




Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty.

A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics.

She has authored several books on legal innovation and technology, delivered six TEDx talks, and her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.

Who’s Worse Than Trump?

(Photo by Mark Wilson/Getty Images)

I was chatting with a guy at a wedding recently who told me that Donald Trump was the best president in American history.

You have to be careful these days. Was he pulling my leg? Was he serious?

I explored: “What do you have in mind?”

“Trump moved the U.S. embassy in Israel from Tel Aviv to Jerusalem, and he negotiated the Abraham Accords.”

Got it. If you’re a strong supporter of Israel, and you ignore everything else, Trump’s great.

I headed to the bar for another drink.

If you ask me, Trump is the worst president in American history. In 12 short months, Trump has changed the world in ways that it will take decades, or longer, to recover from. A year ago, American medical research was the envy of the world. No more. And one can’t easily replace the institutional knowledge that has departed government agencies.

A year ago, America had allies around the world who we could count on for support. No more. And now that countries have learned that the United States is untrustworthy, it’s not clear that we’ll ever be able to restore our reputation.

A year ago, most countries were showing restraint in their pursuit of nuclear weapons. I’m pretty sure those days are over. If European countries in NATO and Asian allies can no longer trust American security guarantees (and Trump has made clear that they can’t), then Germany and Poland and South Korea and Japan, not to mention less prominent countries, will think again about developing nuclear deterrents.

Trump has changed the world, perhaps permanently, for the worse. There’s no going back.

I’m not talking about political foolishness (pardoning the January 6 crowd) or economic foolishness (imposing, and then lifting, and then imposing again, tariffs). Those are short-term harms, from which the country might recover. I’m talking about long-term damage that may be irreparable.

And that’s giving Trump the benefit of the doubt about whether he’ll try to do something venal in connection with this year’s midterm elections.

Despite all that, I guarantee (guarantee) that at some point during the 2028 presidential election campaign, Democrats will say that one or more Republican candidates are “worse than Trump.”

These are Democrats who scream that Trump is a fascist, compare him to Hitler, and believe that the United States may never again have a fair election.

I guarantee you they’ll say the 2028 candidate will be even worse.

On what basis do I give this guarantee?

First, there’s the logic of political campaigns. Every Republican candidate is the worst ever. Bush lied and people died; no one could be worse. Romney strapped his dog to the roof of a car; no one could be worse. Trump ran companies into bankruptcy and had no government experience; how could things get worse?

Folks always go nuts in political campaigns to gain an advantage over the other side. Democrats will do it again in 2028.

Second, there’s past precedent. In 2024, it looked for a while like Ron DeSantis was gaining ground and might become the Republican presidential nominee. So what did Democrats say about him?

There was the opinion piece in the Los Angeles Times describing a DeSantis presidency as a “catastrophe in other ways,” potentially more dangerous in certain respects than Trump. There were political forums in which folks wrote that “DeSantis would be worse than Trump.” There were Facebook discussion groups in which people explained that DeSantis’s politics were to the right of Trump, and he was a more effective spokesman for those views, so DeSantis was the greater of two evils.

Finally, I guarantee that people will say that other candidates are worse than Trump because they already are doing precisely that. A few weeks ago, in an opinion piece for Fox News, Hugh Hewitt canvassed the many places in which people have already said that JD Vance is worse than Trump. Vance appears calmer, can speak more crisply, is more reactionary, and would be more effective. He’s worse than Trump!

No, no, no.

Trump is terrible.

Some spineless lickspittle Republican who showered praise on Trump during cabinet meetings or didn’t have the gumption to call him out in Congress is surely a disgusting human being. That person may be stupid or may be smart, may be trustworthy or not, may be a felon or have been found liable for damages for sexual assault, and there may be many other reasons to oppose him (or her).

But having proclaimed that Trump is the worst, and with all the evidence to back it up, the 2028 Democratic campaign must be run on other issues.

No matter how bad the Republican candidate in 2028, that person is not worse than Trump.

It’s simply impossible.




Mark Herrmann spent 17 years as a partner at a leading international law firm and later oversaw litigation, compliance and employment matters at a large international company. He is the author of The Curmudgeon’s Guide to Practicing Law and Drug and Device Product Liability Litigation Strategy (affiliate links). You can reach him by email at [email protected].