This
month,
two
of
the
hottest
AI
companies
in
San
Francisco
announced major pushes
into
healthcare
—
moves
that
experts
say
were
not
only
inevitable,
but
also
timely
and
high-stakes.
These
AI
rivals
—
Anthropic
and
OpenAI,
the
makers
of
the
widely
used
large
language
models
Claude
and
ChatGPT,
respectively
—
unveiled
new
suites
of
tools
for
healthcare
organizations
and
everyday
consumers.
These
moves
reflect
a
shift
in
how
patients
are
accessing
medical
guidance
—
one
that
experts
agree
is
expanding
access
to
information
while
raising
new
questions
about
trust
and
control.
What
these
healthcare
expansions
could
mean
for
startups
Anthropic
and
OpenAI’s
healthcare
buildouts
are
forcing
startups
across
the
health
tech
market
to
reassess
where
they
truly
have
defensible
advantages,
one
investor
pointed
out.
Kamal
Singh,
senior
vice
president
at
WestBridge
Capital,
thinks
consumer
wellness
and
nutrition
startups
are
the
most
vulnerable,
saying
that
these
types
of
broad,
chat-based
platforms
are
likely
to
be
commoditized.
Startups
offering
nutrition
or
wellness
advice
without
deep
specialization
now
face
weakened
value
propositions
—
given
that
Claude
and
ChatGPT
have
massive
distribution
and
habitual
usage,
he
pointed
out.
Examples include
apps
like
Noom,
Fay
and
Zoe.
Others
will
probably
remain
insulated
—
or
even
strengthened
—
depending
on
how
robust
their
models
are,
Singh
said.
In
his
view,
companies
focused
on
specialized
clinical
areas,
such
as
chronic
disease
management,
will
be
far
more
resilient
to
large
tech
incumbents
entering
the
space.
These
types
of
companies
rely
on
deep
patient
data,
longitudinal
insights
and
disease-specific
expertise
—
capabilities that general-purpose tech companies have yet to prove they can replicate at scale,
Singh
remarked.
He
also
pointed
to
care
coordination
and
care
management
as
areas
where
startups
can
maintain
an
edge,
particularly
when
they
combine
AI
with
human
clinicians.
Rather than competing directly with large language models, startups should differentiate by prioritizing outcomes and delivering end-to-end care experiences, Singh believes.
Another
emerging
battleground
is
AI-driven
primary
care.
Singh
said
this
category
sits
between
consumer
wellness
and
specialized
medicine
—
sophisticated
enough
to
resist
full
commoditization,
but
still
vulnerable
to
pressure
from
popular
AI
platforms.
“On
the
startup
side,
you
don’t
really
have
any
winners
yet
—
there
are
a
couple
of
companies
like
Counsel
Health,
who
are
kind
of
inching
towards
that
goal,
but
these
announcements
make
it
a
very
interesting
dynamic
there,”
he
declared.
Counsel
Health
is
a
virtual
care
company
that
combines
AI
with
human
physicians
to
give
users
quick,
personalized
medical
advice.
To
survive,
Singh
said
startups
in
this
space
will
need
creative
business
models,
including
hybrid
approaches
that
integrate
real
clinicians
with
AI-powered
guidance.
The
inevitable
rise
of
AI
as
healthcare’s
front
door
It
was
inevitable
that
OpenAI
and
Anthropic
would
deepen
their
presence
in
healthcare.
Trends in user activity made this all but certain — hundreds of millions of people per week were already turning to the companies' chatbots with health-related questions.
“Almost
5%
of
their
traffic
is
healthcare-related.
There
are
about
40
million
unique
healthcare
questions
asked
by
users
in
a
day.
Given
that,
it
really
does
seem
that
they’re
in
the
healthcare
business,
and
so
if
they’re
seeing
that
much
traffic
to
their
sites
related
to
healthcare,
they
had
to
increase
their
capabilities
in
that
space,”
explained
healthcare
AI
expert
Saurabh
Gombar.
So
what
did
Anthropic
and
OpenAI
actually
roll
out?
OpenAI
launched
two
new
offerings.
One
is
ChatGPT
Health,
a
dedicated
health
experience
within
ChatGPT
that
combines
a
user’s
personal
health
information
with
the
company’s
AI,
promising to help people better manage their health and wellness.
The
other
is
OpenAI
for
Healthcare,
a
suite
of
AI
tools
designed
to
help
healthcare
providers
reduce
administrative
burden
and
improve
care
planning.
OpenAI
also
announced
its
acquisition
of
medical
records
startup
Torch
this
month
—
a
deal
that
is
reportedly
worth
$100
million.
Anthropic
followed
with
a
healthcare
splash
of
its
own,
unveiling
a
new
suite
of
Claude
tools.
The
company
is
releasing
new
agent
capabilities
for
tasks
like
prior
authorization,
healthcare
billing
and
clinical
trial
workflows,
as
well
as
letting
its
paid
users
connect
and
query
their
personal
medical
records
to
get
summaries,
explanations
and
guidance
for
doctor
visits.
Gombar
believes
that
large
language
models
are
becoming
the
new
“front
door”
to
healthcare.
“The
LLMs
are
now
becoming
the
front
door
for
medical
advice
and
treatment
options,
and
the
actual
provider
is
becoming
the
second
opinion.
Because
chatbots
are
easier
to
interact
with,
and
they’re
free,
and
you
don’t
have
to
schedule
around
them,”
Gombar
stated.
Gombar
is
a
clinical
instructor
at
Stanford
Health
Care
and
chief
medical
officer
and
co-founder
of
Atropos
Health,
a
healthcare
AI
startup
that
generates
real-world
evidence
at
the
bedside.
In
his
eyes,
tech
companies
developing
public-facing
chatbots
are
already
in
the
healthcare
business,
whether
they
formally
acknowledge
it
or
not.
This
could
fundamentally
alter
the
physician-patient
relationship.
Gombar
noted
that
clinicians
are
beginning
to
see
more
and
more
patients
who
arrive
already
convinced
they
need
specific
tests
or
treatments
based
on
chatbot
advice.
He
thinks
traditional
providers
have
limited
control
over
this
shift,
given that
consumer
behavior
is
clearly
changing
at
a
rapid
pace.
Not
only
has
the
use
of
chatbots
like
ChatGPT
and
Claude
skyrocketed
in
the
past
couple
of
years,
but
Americans
are
also
finding
it
more
difficult
to
access
healthcare
amid
sweeping
Medicaid
cuts
and
a
worsening
labor
shortage.
The
risks
of
chatbots
in
medicine
The
rise
of
large
language
models
in
healthcare
is
already
well
underway,
but
that
doesn’t
mean
there
aren’t
risks
involved.
Asking
for
medical
guidance
from
an
intelligent
software
program
is
very
different
from
asking
for
a
recipe
—
wrong
answers
can
cause
real
harm.
Traditional
healthcare
providers
have
accountability
mechanisms
—
such
as
medical
malpractice
rules,
audit
trails
and
liability
protocols
—
while
chatbots
rely
heavily
on
disclaimers
that
say
their
outputs
should
not
be
considered
medical
advice,
Gombar
pointed
out.
However,
in
practice,
many
users
treat
chatbot
responses
as
actual
medical
advice,
often
without
cross-checking
with
other
sources
or
their
providers,
he
added.
Gombar
hopes
companies
like
Anthropic
and
OpenAI
move
beyond
disclaimers
and
take
greater
responsibility
for
how
their
tools
handle
medical
information.
In
the
future,
he
would
like
to
see
them
be
more
transparent
about
the
limitations
of
their
systems
—
including
how
often
they
hallucinate,
when
answers
are
not
grounded
in
strong
evidence
and
when
medical
evidence
itself
is
uncertain
or
incomplete.
He
also
suggested
that
large
language
models
be
designed
to
more
clearly
communicate
uncertainty
and
gaps
in
knowledge,
rather
than
presenting
speculative
answers
with
unwarranted
confidence.
Aside from accuracy, there are also data privacy concerns: consumers' distrust of Big Tech companies and their data practices continues to grow.
Anthropic
said
that
its
new
health
products
are
designed
with
strict
safeguards
around
user
consent
and
data
protection.
“Users
give
express
consent
to
integrate
their
data
with
full
information
about
how
Anthropic
protects
that
data
in
our
consumer
health
data
privacy
policy.
Anthropic
does
not
train
on
user
health
data.
Period.
We
also
protect
sensitive
health
data
from
inadvertent
sharing
to
other
integrated
model
context
protocols
by
requiring
user
consent
to
each
integration
in
conversations
where
integrated
health
data
is
being
discussed.
Users
can
disconnect
the
integration
any
time
in
settings,”
an
Anthropic
spokesperson
explained
in
an
emailed
statement.
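The consent model the spokesperson describes, in which health data flows to a connected integration only after the user approves that specific integration, and consent can be revoked at any time in settings, can be illustrated with a short sketch. The Python below is a hypothetical illustration of that gating pattern only, not Anthropic's implementation; names like ConsentRegistry and forward_to_integration are invented for this example.

# Hypothetical sketch of per-integration consent gating, loosely modeled
# on the behavior described above. All names are invented for illustration;
# this is not Anthropic's actual code or API.

class ConsentRegistry:
    """Tracks which integrations a user has explicitly approved."""

    def __init__(self) -> None:
        self._approved: set[str] = set()

    def grant(self, integration: str) -> None:
        self._approved.add(integration)

    def revoke(self, integration: str) -> None:
        # Mirrors "users can disconnect the integration any time in settings."
        self._approved.discard(integration)

    def is_approved(self, integration: str) -> bool:
        return integration in self._approved


def forward_to_integration(payload: dict, integration: str,
                           consents: ConsentRegistry,
                           health_data_in_conversation: bool) -> bool:
    # Forward data only when no health data is in play, or when the user
    # has consented to this specific integration in a health context.
    if health_data_in_conversation and not consents.is_approved(integration):
        print(f"Blocked: no consent for '{integration}' in a health context.")
        return False
    print(f"Forwarded payload to '{integration}'.")
    return True


if __name__ == "__main__":
    consents = ConsentRegistry()
    record = {"summary": "lab results discussed"}

    # Blocked: the conversation involves health data and this integration
    # has not been approved yet.
    forward_to_integration(record, "calendar-tool", consents, True)

    # After an explicit grant, the same call succeeds.
    consents.grant("calendar-tool")
    forward_to_integration(record, "calendar-tool", consents, True)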
Even
before
it
rolled
out
ChatGPT
Health,
OpenAI
had
been
building
user
data
protections
across
ChatGPT,
including
permanent
deletion
of
chats
from
OpenAI’s
systems
within
30
days
and
training
its
models
not
to
retain
personal
information
from
user
chats,
a
company
spokesperson
said
in
a
statement.
For
its
new
consumer
health
offering,
OpenAI
has
added
more
encryption
protections,
and isolated
the
chats
to
keep
health
conversations
and
memory
protected
and
compartmentalized.
Conversations
in
ChatGPT
Health
are
not
used
to
train
its
foundation
models,
the
spokesperson
said.
As
for
OpenAI’s
new
platform
for
healthcare
providers,
customers
will
have
full
control
over
their
data.
When
clinicians
enter
patient
information,
for
example,
it
will
stay
within
the
organization’s
secure
workspace
and
will
not
be
used
for
model
training.
Making
AI
work
for
clinicians
and
patients
By
releasing
tools
for
consumers
as
well
as
for
healthcare
providers,
OpenAI
is
signaling
that
it
understands
consumers
have
different
needs
and
goals
than
hospitals.
Patients
want
general
guidance
and
convenience,
while
providers
need
accurate,
actionable
information
that
can
be
safely
integrated
into
the
clinical
record,
noted
Kevin
Erdal,
senior
vice
president
of
transformation
and
innovation
services
at
Nordic,
a
health
and
technology
consultancy.
When
deploying
new
large
language
models,
he
recommended that
hospitals
watch
out
for
shadow
workflows.
“Clinicians
may
start
informally
relying
on
patient-generated
summaries
or
AI-assisted
interpretations
without
clear
standards
for
validation
or
documentation.
If
no
one
validates
where
patient-reported
information
came
from,
or
oversees
how
that
information
is
reviewed,
incorporated
or
rejected,
risk
quietly
accumulates,”
Erdal
said.
When
it
comes
to
Anthropic
and
OpenAI’s
consumer-facing
healthcare
tools,
the
biggest
risk
isn’t
misinformation
so
much
as
missing
context,
he
remarked.
“Context,
intent
and
reasoning
can
live
in
a
chat
while
the
clinical
record
captures
only
the
outcome,
weakening
care
continuity
and
the
trust
between
patient
and
provider,”
Erdal
stated.
This
gap
in
context
underscores
why
consumer-facing
chatbots
are
ill-suited
for
clinician
use.
For
hospitals
and
other
providers,
Erdal
thinks
the
right
response
to
the
rise
of
consumer-facing
healthcare
AI
is
integration.
“It
will
look
like
health
systems
accepting
that
these
tools
already
exist,
and
designing
responsible
ways
to
absorb
their
output
without
fragmenting
care.
The
bar
is
continuity,
and
the
patient/provider
relationship
is
what’s
at
stake,”
he
declared.
If
consumer-facing
AI
models
help
patients
walk
into
healthcare
interactions
more
informed
and
better
prepared,
but their providers are unprepared to integrate that information into the conversation in a thoughtful, deliberate way,
access
to
healthcare
information
improves
while
trust
drops
off,
Erdal
explained.
At
a
deeper
level,
OpenAI
and
Anthropic’s
healthcare
push
reflects
a
broader
shift
in
the
healthcare
industry.
The
question
is
no
longer
whether
AI
will
become
part
of
the
patient
journey
—
it’s
clear
that
the
shift
is
already
underway.
The
real
question
is
who
will
control
it,
who
will
be
accountable
for
it,
and
how
much
influence
it
will
have
over
decisions
that
were
once
firmly
in
the
hands
of
clinicians.
Experts
agree
that
the
companies
that
adapt
—
by
integrating
AI
thoughtfully,
strengthening
trust
and
clarifying
responsibility
—
may
help
build
a
more
accessible
healthcare
system.
Those
that
don’t
may
find
themselves
left
behind.
Photo:
Pakorn
Supajitsoontorn,
Getty
Images