OpenAI is quietly waging its own brand of legal ethics enforcement — and the target isn't lawyers or law firms, it's ChatGPT accounts masquerading as them. According to reporting by Legal Cheek, OpenAI has banned a cluster of ChatGPT accounts tied to bogus law firms and fake "lawyers" running scam-recovery schemes, aka Operation False Witness.

The playbook is familiar: polished websites, convincing attorney bios, legal-sounding emails, promises to recover stolen funds — and then requests for upfront fees, often in crypto. The twist? AI helped generate the professional gloss, from firm profiles to client communications, making the operations look far more legitimate than your average internet scam.
Here are some additional details:

- OpenAI says the network promoted at least six supposed law firms.
- In some instances, it claims the fraudsters went further, impersonating real lawyers and even law enforcement bodies.
- Accounts were used to translate messages, rewrite communications into "American English", produce text "in the style of a lawyer", and even generate fake supporting documents.

The tech giant says it will continue to disrupt malicious uses of its models and ban accounts linked to scams and impersonation.
OpenAI removing the accounts is great, but the larger issue for the legal industry is obvious: when AI can effortlessly produce credible-sounding legal content, the barrier to running a fake firm drops dramatically. Innovation is great. Fake lawyers powered by machine learning? Less so.
OpenAI bans ChatGPT accounts linked to fake law firms and lawyers [Legal Cheek]

Staci Zaretsky is the managing editor of Above the Law, where she's worked since 2011. She'd love to hear from you, so please feel free to email her with any tips, questions, comments, or critiques. You can follow her on Bluesky, X/Twitter, and Threads, or connect with her on LinkedIn.
