
Bot’s Not Nice – Above the Law

We know, in a world of uncertainties, AI is coming for all of us in different ways. Trying to keep up with all the changes (and I am not even mentioning our overseas adventures) is exhausting, overwhelming, and frustrating. How to cope? More reliance on AI?

The Wall Street Journal recently ran an article comparing three large language models (Claude, Gemini, and OpenAI) in a kind of LLM legal writing Olympics.

The results were fascinating. Each of the three competitors was better in some ways and worse in others. Each bot had quirks of its own. How to tell a bot from a human?

In this admittedly unscientific test, one way to tell a bot from a human was vocabulary. If it sounds like "a panicked college freshman trying to sound profound," it's a bot. If the article, memo, or document starts out by telling the reader what it's about, it's a bot.

All three bots hedged, reluctant to give opinions: "on the one hand ... on the other." That wishy-washy language is not what clients are paying for. They are paying for our opinions and our advice, with available options about how to proceed. Clients want clear direction and advice; save the erudition for law review articles.

The time will come, sooner rather than later, when bot writing will be essentially indistinguishable from what we humans write. It's about to become a lot more difficult to tell the real from the artificial.

You are not a bot, so don't write like one. Clients do not want to read (or pay for) pages and pages of legal gobbledygook that, in the end, only confuses the reader while the meter runs. Perhaps for law review articles and other scholarly compositions, more is more, but for the everyday lawyer who is just trying to KISS (Keep It Simple, Stupid), twisting yourself into a legal literary pretzel does no one any good, especially the reader. Get to the point quickly before eyes glaze over and the reader snores.

On another AI topic, is a lawsuit really final even if it's been settled and the case dismissed with prejudice? No, not according to ChatGPT, a font of legal (mis)information (ahem).

Nippon Life Insurance has sued OpenAI in federal court in Chicago, alleging that OpenAI engaged in UPL, that is, the unauthorized practice of law. The basis? ChatGPT advised the settling plaintiff in the underlying disability case that she could reopen that dismissed lawsuit. (She had a case of settler's remorse, not that any settling party has ever felt that way.) Nippon's complaint alleges that ChatGPT is not an attorney and therefore cannot give legal advice.

The plaintiff thought that her attorney (a human, not a bot) had given her bad advice about whether she could indeed reopen the dismissed case. So, she went "attorney shopping" and looked to ChatGPT for advice. Guess what? ChatGPT told the woman that she had indeed been given wrong advice. The woman fired her counsel, looked solely to AI for advice, and moved to reopen the closed case. After that motion was denied, she filed a new case and dozens of motions, allegedly using AI again, including a hallucinated case. OpenAI says that Nippon's case lacks merit. Really? Who is responsible for a bot's conduct? Certainly not the bot, at least not so far.

On how many levels is this scary? Let me count some of the ways. UPL is a big problem for bar disciplinary agencies. Too many nonbarred peeps in the field. How to enforce UPL against a bot? That's like trying to nail Jell-O to a tree. How could the disciplinary process be used to outlaw the use of AI? Should it? How can lawyers protect themselves, if at all, from AI dissing their advice, resulting in an unhappy client who fires the lawyer and then files a complaint with the bar based on that allegedly bad advice, which, in this case, was correct advice? How does a court order a bot to pay a Rule 11 sanction? Is your head spinning yet?

Reliance on incorrect information from ChatGPT or any other bot that leads to frivolous lawsuits, both in court and in unjustified bar discipline cases, only makes the legal system grind ever more slowly and leads to even more crap filings. Is reliance on a bot merely general legal information or specific legal advice?

Pass the Pepto, please. Or an Excedrin. Or maybe both. Perhaps a bot can suggest what to take.

Or would that be practicing medicine without a license?




Jill Switzer has been an active member of the State Bar of California for over 40 years. She remembers practicing law in a kinder, gentler time. She's had a diverse legal career, including stints as a deputy district attorney, a solo practice, and several senior in-house gigs. She now mediates full-time, which gives her the opportunity to see dinosaurs, millennials, and those in-between interact; it's not always civil. You can reach her by email at [email protected].