
AMA Calls on Congress to Improve Safeguards for AI Mental Health Chatbots – MedCity News

As AI chatbots become more popular in mental healthcare, the American Medical Association is urging Congress to strengthen safeguards.

The organization sent letters to the Congressional Artificial Intelligence Caucus, the Congressional Digital Health Caucus and the Senate Artificial Intelligence Caucus. The letters follow numerous reports of AI chatbots encouraging suicide and self-harm among vulnerable populations.

Congress held hearings on the role of AI in mental health last year, which “emphasized several critical mental health concerns, including emotional dependency on AI systems, the potential distortion of reality through prolonged interaction with chatbots, and the current lack of consistent safety protocols,” the AMA said in the letters.

These hearings showed the need for “immediate attention” to ensure AI tools don’t harm those seeking mental health support, the letter added.

That said, the AMA acknowledged that AI tools could be valuable in mental health care if used safely.

“Across the country, patients persistently struggle to access mental health care, either for reasons of access or affordability,” the AMA said. “Well-designed AI-enabled tools may serve as supportive resources that expand access to evidence-based information, facilitate early identification of mental health concerns, and connect individuals with appropriate clinical services. When developed and deployed within clear regulatory guardrails, these technologies have the potential to complement, not replace, clinicians and help mitigate persistent workforce shortages and other access issues.”

The AMA provided several recommendations for AI chatbot safeguards, including:

  • Improve transparency: Require chatbots to clearly disclose that users are communicating with AI, and ban systems from presenting themselves as licensed clinicians.
  • Create clear regulatory boundaries: Prevent chatbots from diagnosing or treating mental health conditions without appropriate regulatory review. The AMA calls on Congress to direct agencies to create a “modern, risk-based oversight framework and clarify when AI tools qualify as medical devices.”
  • Improve oversight: Require ongoing safety monitoring, reporting of adverse events and strict standards for technology used by children and adolescents.
  • Protect privacy and security: The AMA called for rigid data protection standards, such as limits on data collection and retention and clear user consent for data use.
  • Limit commercial use: Ban advertising on mental health chatbots, especially for minors.

“AI-enabled tools may help expand access to mental health resources and support innovation in health care delivery, but they lack consistent safeguards against serious risks, including emotional dependency, misinformation, and inadequate crisis response,” said Dr. John Whyte, AMA CEO, in a statement. “With thoughtful oversight and accountability, policymakers can support innovation and ensure technologies prioritize patient safety, strengthen public trust, and responsibly complement—not replace—clinical care.”

Photo: Witthaya Prasongsin, Getty Images