
AI agents (autonomous, task-specific systems designed to perform functions with little or no human intervention) are gaining traction in the healthcare world. The industry is under massive pressure to lower costs without compromising care quality, and health tech experts believe agentic AI could be a scalable solution to this arduous goal.
However, this AI category comes with greater risk than its predecessors, according to one cybersecurity and data privacy attorney.
Lily Li, founder of the law firm Metaverse Law, noted that agentic AI systems are, by definition, designed to handle actions on a consumer's or organization's behalf, which takes the human out of the loop for potentially important decisions or tasks.
“If there are hallucinations or errors in the output, or bias in training data, this error will have a real-world impact,” she declared.
For instance, an AI agent may make errors such as refilling a prescription incorrectly or mismanaging emergency department triage, potentially leading to injury or even death, Li said.
These hypothetical scenarios highlight the gray area that arises when responsibility shifts away from licensed providers.
“Even in situations where the AI agent makes the ‘right’ medical decision, but a patient does not respond well to treatment, it is unclear whether existing medical malpractice insurance would cover claims if no licensed physician was involved,” Li remarked.
She noted that healthcare leaders are operating in a complex area. In her view, society needs to address the potential risks of agentic AI, but only to the extent that these tools contribute to excess deaths or increased harm relative to a similarly situated human physician.
Li also pointed out that cybercriminals could take advantage of agentic AI systems to launch new types of attacks.
To help avoid these dangers, healthcare organizations should incorporate agentic AI-specific risks into their risk assessment models and policies, she recommended.
“Healthcare organizations should first review the quality of underlying data to remove existing errors and bias in coding, billing and decision making that will feed into what the model learns. Then, ensure that there are guardrails on the types of actions the AI can take, such as rate limitations on AI requests, geographic restrictions on where requests come from, and filters for malicious behavior,” Li stated.
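The guardrails Li describes can be pictured as a policy layer that sits between the agent and any action it tries to take. The sketch below is a minimal, hypothetical illustration (the action names, region policy, and limits are invented for the example, not drawn from any real deployment), combining an action allowlist, a per-minute rate limit, and a geographic restriction:

```python
import time
from collections import deque

# Hypothetical guardrail policy: which actions the agent may take,
# where requests may originate, and how fast they may arrive.
ALLOWED_ACTIONS = {"schedule_appointment", "send_reminder"}  # illustrative names
ALLOWED_REGIONS = {"US"}                                     # illustrative policy
MAX_REQUESTS_PER_MINUTE = 10

_request_times = deque()  # timestamps of recently approved requests

def guard(action: str, region: str) -> bool:
    """Return True only if the agent's request passes every guardrail."""
    now = time.monotonic()
    # Drop timestamps that have aged out of the 60-second window.
    while _request_times and now - _request_times[0] > 60:
        _request_times.popleft()
    if len(_request_times) >= MAX_REQUESTS_PER_MINUTE:
        return False  # rate limit exceeded
    if action not in ALLOWED_ACTIONS:
        return False  # action outside the agent's permitted scope
    if region not in ALLOWED_REGIONS:
        return False  # request from a disallowed geography
    _request_times.append(now)
    return True
```

A real system would enforce these checks server-side, outside the agent's control, so a compromised or hallucinating agent cannot bypass its own limits.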
She also urged AI companies to adopt standard communication protocols among their AI agents, which would allow for encryption and identity verification to prevent the malicious use of these tools.
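The identity-verification piece of such a protocol can be sketched with message authentication: each agent signs its messages so the receiver can confirm who sent them and that nothing was altered in transit. This is a minimal illustration assuming the two agents share a secret key provisioned out of band; a production protocol would layer this on TLS with proper key management rather than a hardcoded key.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative only; never hardcode keys in practice

def sign_message(payload: dict) -> dict:
    """Wrap a payload in an envelope carrying an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": payload, "sig": tag}

def verify_message(envelope: dict) -> bool:
    """Recompute the signature and check it in constant time."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

A receiving agent that rejects unverifiable envelopes cannot be steered by messages injected from outside the trusted set of agents, which is the class of attack Li warns about.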
In Li’s eyes, the future of agentic AI in healthcare might depend less on its technical capabilities and more on how well the industry can build trust and accountability around the use of these models.
Photo: Weiquan Lin, Getty Images
