
The White House released “America’s AI Action Plan” last week, outlining federal policy recommendations designed to advance the nation’s status as a leader in international AI diplomacy and security. The plan seeks to cement American AI dominance mainly through deregulation, the expansion of AI infrastructure and a “try-first” culture. Here are some measures included in the plan:
- Deregulation: The plan aims to repeal state and local rules that hinder AI development — and federal funding may also be withheld from states with restrictive AI regulations.
- Innovation: The proposal seeks to establish government-run regulatory sandboxes — safe environments in which companies can test new technologies.
- Infrastructure: The White House’s plan calls for a rapid buildout of the country’s AI infrastructure and offers companies tax incentives to do so. This also includes fast-tracking permits for data centers and expanding the power grid.
- Data: The plan seeks to create industry-specific data usage guidelines to accelerate AI deployment in critical sectors like healthcare, agriculture and energy.
Leaders in the healthcare AI space are cautiously optimistic about the action plan’s pro-innovation stance, and they’re grateful that it advocates for better AI infrastructure and data exchange standards. However, experts still have some concerns, such as the plan’s lack of focus on AI safety and patient consent, as well as its failure to mention key healthcare regulatory bodies. Overall, experts believe the plan will end up being a net positive for the advancement of healthcare AI — but they do think it could use some edits.
Deregulation of data centers
Ahmed Elsayyad — CEO of Ostro, which sells AI-powered engagement technology to life sciences companies — views the plan as a generally beneficial move for AI startups. This is mainly due to the plan’s emphasis on deregulating infrastructure like data centers, energy grids and semiconductor capacity, he said.
Training and running AI models requires enormous amounts of computing power, which translates to high energy consumption — and some states are trying to address that rising consumption.
Local governments and communities have considered regulating data center buildouts due to concerns about the strain on power grids and the environmental impact — but the White House’s AI action plan aims to eliminate these regulatory barriers, Elsayyad noted.
No details on AI safety
However, Elsayyad is concerned about the plan’s lack of attention to AI safety. He expected the plan to place greater emphasis on safety because it’s a major priority within the AI research community, with leading companies like OpenAI and Anthropic dedicating significant computing resources to safety efforts.
“OpenAI famously said that they’re going to allocate 20% of their computational resources for AI safety research,” Elsayyad stated.
He noted that AI safety is a “major talking point” in the digital health community. For instance, responsible AI use is a frequently discussed topic at industry events, and organizations focused on AI safety in healthcare — such as the Coalition for Health AI and the Digital Medicine Society — have attracted thousands of members.
Elsayyad said he was surprised that the new federal action plan doesn’t mention AI safety, and he believes incorporating language and funding around it would have made the plan more balanced.
He isn’t alone in noticing that AI safety is conspicuously absent from the White House plan — Adam Farren, CEO of EHR platform Canvas Medical, was also stunned by the omission.
“I think that there needs to be a push to require AI solution providers to provide transparent benchmarks and evaluations of the safety of what they are providing on the clinical front lines, and it feels like that was missing from what was released,” Farren declared.
He noted that AI is fundamentally probabilistic and needs continuous evaluation. He argued in favor of mandatory frameworks to assess AI’s safety and accuracy, especially in higher-stakes use cases like medication recommendations and diagnostics.
No mention of the ONC
The action plan also fails to mention the Office of the National Coordinator for Health Information Technology (ONC), despite naming “tons” of other agencies and regulatory bodies, Farren pointed out. This surprised him, given that the ONC is the primary regulatory body responsible for all matters related to health IT and providers’ medical records.
“[The ONC] is just not mentioned anywhere. That seems like a miss to me because one of the fastest-growing applications of AI right now in healthcare is the AI scribe. Doctors are using it when they see a patient to transcribe the visit — and it’s fundamentally a software product that should sit underneath the ONC, which has experience regulating these products,” Farren remarked.
Ambient scribes are just one of the many AI tools being integrated into providers’ software systems, he added. For example, providers are adopting AI models to improve clinical decision-making, flag medication errors and streamline coding.
Call for technical standards
Leigh Burchell, chair of the EHR Association and vice president of policy and public affairs at Altera Digital Health, views the plan as largely positive, particularly its focus on innovation and its acknowledgement of the need for technical standards.
Technical data standards — such as those developed by organizations like HL7 and overseen by the National Institute of Standards and Technology (NIST) — ensure that healthcare’s software systems can exchange and interpret data consistently and accurately.
These standards allow AI tools to more easily integrate with the EHR, as well as use clinical data in a way that is useful for providers, Burchell said.
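To make the interoperability point concrete, here is a minimal sketch (mine, not from the article): HL7’s FHIR standard defines a common JSON shape for clinical records, so any system — including an AI tool — can read a Patient resource the same way regardless of which vendor produced it. The field names below follow the FHIR Patient resource; the sample data itself is invented.

```python
# Minimal sketch of why a shared data standard helps: because HL7's FHIR
# standard fixes the JSON shape of a "Patient" resource, any consumer can
# parse it with the same code. Sample data below is invented.
import json

raw = """
{
  "resourceType": "Patient",
  "name": [{"family": "Rivera", "given": ["Ana"]}],
  "birthDate": "1984-03-09"
}
"""

def summarize_patient(resource_json: str) -> str:
    """Return a one-line summary of a FHIR Patient resource."""
    resource = json.loads(resource_json)
    if resource.get("resourceType") != "Patient":
        raise ValueError("expected a Patient resource")
    name = resource["name"][0]
    full_name = " ".join(name.get("given", []) + [name.get("family", "")])
    return f"{full_name.strip()} (born {resource.get('birthDate', 'unknown')})"

print(summarize_patient(raw))  # Ana Rivera (born 1984-03-09)
```

The same `summarize_patient` function works against any conformant system’s output, which is the integration benefit Burchell describes.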
“We do need standards. Technology in healthcare is complex, and it’s about exchanging information in ways that it can be consumed easily on the other end — and so that it can be acted on. That takes standards,” she declared.
Without standards, AI systems risk miscommunication and poor performance across different settings, Burchell added.
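That miscommunication risk can be illustrated with a toy example (hypothetical values, not from the article): two systems report the same blood-glucose reading in different units. A downstream tool comparing the raw numbers would see wildly different values; normalizing to one agreed unit first — exactly what a shared data standard enforces — recovers the match.

```python
# Toy illustration: the same glucose reading expressed in mg/dL by one
# system and in mmol/L by another. Comparing raw numbers misleads;
# normalizing to a shared unit, as a data standard would mandate, does not.

MGDL_PER_MMOLL = 18.0  # approximate conversion factor for glucose

def to_mg_dl(value: float, unit: str) -> float:
    """Normalize a glucose reading to mg/dL."""
    if unit == "mg/dL":
        return value
    if unit == "mmol/L":
        return value * MGDL_PER_MMOLL
    raise ValueError(f"unknown unit: {unit}")

site_a = {"value": 99.0, "unit": "mg/dL"}  # one hospital's format
site_b = {"value": 5.5, "unit": "mmol/L"}  # another hospital's format

# Raw comparison wrongly suggests two very different readings...
print(abs(site_a["value"] - site_b["value"]))  # 93.5

# ...but after unit normalization the readings agree.
a = to_mg_dl(site_a["value"], site_a["unit"])
b = to_mg_dl(site_b["value"], site_b["unit"])
print(abs(a - b))  # 0.0
```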
Little regard for patient consent
Burchell also raised concerns that the AI action plan doesn’t adequately address patient consent — particularly whether patients have a say in how their data is used or shared for AI purposes.
“We’ve seen states pass laws about how AI should be regulated. Where should there be transparency? Where should there be information about the training data that was used? Should patients be notified when AI is used in their diagnostic process or in their treatment determination? This doesn’t really address that,” she explained.
In fact, the plan suggests that the federal government could, in the future, withhold funds from states that pass regulations that get in the way of AI innovation, Burchell pointed out.
But without clear federal rules, states must fill the gap with their own AI laws — which creates a fragmented, burdensome landscape, she noted. To solve this problem, she called for a coherent federal framework that provides consistent guardrails on issues like transparency and patient consent.
While the White House’s AI action plan lays the groundwork for faster innovation, Burchell and other experts agree it must be accompanied by stronger safeguards to ensure the responsible and equitable use of AI in healthcare.
