
WASHINGTON — The Pentagon’s Chief Technology Officer today weighed in on a reported clash between the Department of Defense and AI giant Anthropic, publicly rejecting as undemocratic what he called the company’s attempts to limit military use of its Claude AI.

“Congress writes bills, the president signs them, agencies write regulations, and people comply, and we’ve always complied,” Under Secretary Emil Michael told reporters after his remarks to the Microelectronics Commons consortium.

“What we’re not going to do is let any one company dictate a new set of policies above and beyond what Congress has passed,” Michael said. “That is not democratic. That is giving any one company control over what new policies are, and that’s for the president, that’s for Congress, and that’s for the agencies to determine how to implement those rules.”

Last summer, the Pentagon’s Chief Digital & AI Office awarded Anthropic, Google, xAI, and OpenAI contracts worth up to $200 million apiece to customize their popular generative AI applications for military use. Classified versions of Anthropic’s Claude AI are also available to Defense Department personnel through Amazon and Palantir, Semafor has reported.

But, according to a January report in the Wall Street Journal, Anthropic’s policies forbidding Claude’s use in weapons or surveillance programs had created a rift with the Pentagon that put its contract at risk.

The Journal also reported that at least one instance of Claude was used to help plan the raid that captured Venezuelan strongman Nicolas Maduro.

In the Journal’s report on the Maduro raid, an Anthropic spokesperson declined to discuss that specific operation but said, “Any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed.”

Those policies prohibit using the AI to “produce, modify, design, or illegally acquire weapons” or to “track a person’s physical location, emotional state, or communication without their consent, including using our products for … battlefield management applications.” (Note the prohibition covers any person, not just US citizens.)

The disagreement has reportedly risen to the attention of Defense Secretary Pete Hegseth, who’s pushed the Pentagon to embrace AI but also chafes against outside restrictions on the military.

An unnamed senior Pentagon official even told Axios that Hegseth was “close” to designating the company a “supply chain risk,” a draconian measure which could require any company doing business with the Defense Department — including giant corporations like Microsoft, Google, and Amazon — to cut all ties with Anthropic, including any use of Claude.

An official statement from Pentagon chief spokesman Sean Parnell was more restrained, telling The Hill that “The Department of War’s relationship with Anthropic is being reviewed.”

Michael today refrained from making threats and said he hoped for Anthropic’s success, even while emphasizing that current government safeguards should be enough.

“We have a robust set of laws about surveillance in this country that have been run through the democratic process,” he said.

“In terms of autonomy, again, [there] are lots of regulations that have been promulgated for years in the Department,” he added, covering such questions as, “if a drone swarm is coming at a military base, what are your options to take it down if the human reaction time is not fast enough?”

Anthropic did not immediately respond to Breaking Defense’s request for a response to Michael’s remarks.

‘We Want Guardrails,’ But …

Despite the impasse over usage policies, Michael explicitly said that he considered Anthropic one of America’s “national champions” in AI and he hoped the company would drop its restrictions and keep working with the military, much as Google did after an internal revolt led it to withdraw from the military’s Project Maven in 2018.

“The great news in AI is that the United States is leading,” Michael told the annual meeting of the Microelectronics Commons, a public-private consortium of chipmakers, academics, and others that work with the Defense Department.

“We have at least four — no, probably more — true national champions that are investing, between them, a trillion dollars over the next several years in facilities, in R&D.”

When a reporter asked him after those remarks about the future of Anthropic’s Pentagon contracts, Michael swiftly pivoted to the positive: “The Secretary has said the relationship is under review, so it’s under review. We want all our American champion AI companies to succeed. I want Anthropic, xAI, OpenAI, Google to succeed.”

“We want to take advantage of all the capabilities that … I believe will be world changing,” he went on. “And if you think back to 2018 where Google didn’t want to have the Department of War use its cloud business, this is a similar moment.”

AI should include appropriate safeguards against misuse, even by the Defense Department, Michael added — but the definition of “misuse” can’t be so broad as to block lawful military functions.

“We want guardrails,” he said. “We need the guardrails tuned for military applications. You can’t have an AI company sell AI to the Department of War, and don’t let it do Department of War things, because we’re in the business of defending the country and defending our troops.”
