
WASHINGTON — The official designation of Anthropic as a “supply chain risk,” delivered to the company Wednesday, imposed much milder penalties on the AI giant than Defense Secretary Pete Hegseth originally threatened, Anthropic CEO Dario Amodei said Thursday on the company’s website.
After Anthropic’s refusal to accept new contract language allowing “all lawful use” of its Claude chatbot by the military, Hegseth declared on Feb. 27 that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” That same day, President Donald Trump declared that all federal agencies, not just the Defense Department, would “IMMEDIATELY CEASE all use of Anthropic’s technology,” albeit over “a Six Month phase out period.”
The actual terms of the March 4 designation, however, were much narrower, Amodei said Thursday. “The vast majority of our customers are unaffected by a supply chain risk designation,” he wrote. “It plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
Microsoft, which uses Anthropic’s Claude in its software suite, concurred in statements to several news outlets. “Our lawyers have studied the designation and have concluded that Anthropic products, including Claude, can remain available to our customers — other than the Department of War — through platforms such as M365, GitHub, and Microsoft’s AI Foundry and that we can continue to work with Anthropic on non-defense related projects,” a company spokesperson told CNBC.
Despite the ban being less harsh than feared, Amodei said he still intends to sue the government to overturn the designation. “We do not believe this action is legally sound, and we see no choice but to challenge it in court.”
At the same time, he struck a conciliatory, even apologetic tone in public statements. “I want to completely apologize,” he told The Economist, for harsh denunciations of the Pentagon and rival OpenAI he sent Anthropic employees that then were leaked to the press. He added that “we had been having productive conversations with the Department of War over the last several days.”
In response, Pentagon CTO Emil Michael, the Undersecretary for Research and Engineering, shared in an X post Thursday that “there is no active @DeptofWar negotiation with @AnthropicAI.” Michael, a former tech exec himself, previously told reporters it was “undemocratic” for the company to “dictate” restrictions on the military’s use of AI that went beyond the laws Congress had passed.
Michael, Hegseth, Pentagon spokesman Sean Parnell and other Pentagon officials have publicly denounced Anthropic for insisting on limitations beyond those already in law and regulation on the use of AI for mass surveillance and autonomous weapons.
What Now?
With such mixed signals coming from both sides, experts who spoke to Breaking Defense struggled to predict what would happen next. But two of the three doubted that the supply chain risk designation would stand up in court.
Hegseth’s initial threat last week was simply more than the law allows, said Paul Scharre, a former Army Ranger who’s now executive vice president of the Center for a New American Security. “What Hegseth said on Friday [Feb. 27] is just not what the supply chain risk designation means,” Scharre told Breaking Defense. “It means no one can use Anthropic tools when executing a DoD contract.”
But even the narrower ban actually imposed on Anthropic in this week’s official letter would probably not hold up in court, he went on: The law was written to keep foreign companies from sabotaging the military supply chain, not to punish American companies for not doing business on the Pentagon’s terms.
“I fully expect Dario to take legal action,” agreed Jack Shanahan, an AI consultant and commentator. Shanahan, who previously led the military’s AI-powered Project Maven and then the Pentagon’s Joint AI Center, told Breaking Defense, “He has way too much at stake to be booted out of every government contract. There are billions of dollars at stake here.
“The early expert consensus is that the most draconian punishment — supply chain risk — won’t hold up in court,” he added.

But too much damage has already been done to the often-rocky relationship between the Pentagon and Silicon Valley, Shanahan lamented, undoing a decade of bridge-building. “This supply chain risk designation will go down in history as a real technology low point of this administration,” Shanahan said. “You cannot, for a second, claim you want to ‘dominate globally in AI’ while simultaneously burying a shiv in the heart of one of the biggest and most important AI companies in the world. Xi Jinping is thrilled.”
Shanahan’s successor at the Joint AI Center, however, had a more optimistic take. “I think that there’s a rapprochement here that is in the making,” said Michael Groen, now working as an advisor to industry.
Even if Anthropic can’t come to terms with the Pentagon, “at the end of the day, there will be plenty of AI capabilities and companies that want to work with Defense, that want to make sure that we have the best capability, that share the values of responsible technology and responsible warfighting,” Groen told Breaking Defense, pointing to the military’s long tradition of regulating its own use of technology, including AI.
“We can do this,” Groen said. “It’s natural that we have some of these dust-ups, [but] it’s shameful if our technology leaders and our military leaders can’t come to a place that supports our young warfighters — and also does it morally and ethically.”
