When the American Arbitration Association (AAA) recently announced that it would be launching an AI-powered arbitrator in November, it raised the question of the future role of AI in litigation. Indeed, it could suggest a possible future that many litigators still insist will never arrive.

I often give presentations on the use of AI in litigation and the impact it could and will have.

I frequently hear from older litigators that they aren’t all that concerned about what AI could do to their practices. After all, they reason, litigators have to effectively persuade other humans. They need empathy and sympathy, and must read body language and subtleties in others. And they have to have the proverbial gut instinct. None of these things does AI have. Yet.

That may be true, I say. But have you considered the possibility that, in the future, the decision maker is, itself, an AI tool? How necessary will litigators be when all the relevant information is fed into a bot which then makes a decision? What will the litigator’s job be? How realistic is this?

The AAA Announcement

And lest we think that AI decision making is far-fetched, eBay has been using an AI bot to resolve disputes between buyers and sellers for some time. Then came the AAA announcement that it would be launching its AI-powered arbitrator in November.

The AI arbitrator will, for now, be deciding documents-only construction defect cases, although in the future, according to AAA, it may be used for insurance cases, specifically high-volume but low-dollar-amount payer-provider disputes.

In an interview on Bob Ambrogi’s podcast, Bridget McCormack, AAA’s president and CEO, claimed that use of the tool would reduce the cost of construction cases by some 30-50% and the time required to litigate and resolve a case by 25-35%. She expects improvement over time.

It’s All About Cost v. Exposure

It’s those metrics that stand out. Particularly for arbitration, but for all litigation, cost and time are critical. Lots of disputes go unresolved because of these two factors. And businesses and insurance companies would tell you that the transactional costs of litigation are substantial.

In thinking about whether AI decision making in litigation is realistic, consider the following: I was talking to a general counsel recently about AI and its impact. I asked her: if she were given the option of having an AI tool decide a case without so much cost, would she agree? Her answer, even a few months ago, was “Absolutely. If I could refer any case where the amount at stake was less than, say, $50k, I would do it in a heartbeat.”

Why? Because she was spending more in legal fees and transactional costs for those low-exposure cases than they were worth. So even if the AI bot might get a few cases wrong, or achieve a result worse than what a human lawyer might achieve, it doesn’t matter all that much in the long run. It’s why insurance companies are willing to pay lawyers with low hourly rates: the difference between an A job and a C job doesn’t affect the overall result that much. So why pay any more in legal fees than you have to?

Cases Ripe for AI Decision Making

If that’s the case, there are certain kinds of cases that might be ideal for this kind of decision making.

I talked recently with Sarannah McMurtry, Executive Vice President and General Counsel of First Acceptance Insurance Company. First Acceptance provides nonstandard auto insurance and specializes in coverage for high-risk drivers who may not qualify for traditional policies.

First Acceptance is in the business of claims that are often lower exposure, the kinds of cases, mentioned earlier by the GC, that could be ripe for AI decision making. These are cases where the cost of litigating could easily outweigh the exposure. Perhaps not surprisingly, then, McMurtry told me that AI is “going to revolutionize the insurance business from rate, claims, intake.”

McMurtry agreed that certain types of claims would be better candidates for some portion of AI review and decision making: claims with estimates, photos, and other documentation of property damage that AI could examine for an initial decision, for example. AI could also help determine which claims could go straight through for payment, saving time and cost.

Another key area that might be ripe for AI decision making is insurance subrogation. For those unfamiliar, subrogation claims occur when one carrier pays a claim and then seeks recovery from some other entity, often, in the automobile context, another insurance carrier. For those claims, AI decision making may make sense. According to McMurtry, “where you have a defined submission process, having those claims decided by AI makes sense. For one thing, it’s cost effective. It allows your people to do other things. And you’re not impacting claimants. It’s just simply a transaction between the two insurance companies to allocate that risk appropriately.”

Some Roadblocks

But there are roadblocks. For insurance companies like First Acceptance, the biggest roadblock is the specter of bad faith. Insurance companies have a duty to deal with policyholders in good faith. A breach of that duty can turn a minor claim into one that may result in a catastrophic nuclear verdict, since the damages can far exceed the policy limits. McMurtry explains: “We’re very, very cautious about where we want to use something like AI or insert a tool that would not be human reviewed. A tool that helps with the initial evaluation is valuable, but there still must be a significant human touch in the process.”

She explained that if an AI tool approved a payout quickly, great. But if it denied a claim, that would be much tougher.

And Then There Is That Bias Thing

I also discussed the bias problem with McMurtry. The problem, she says, is that the data going into AI models often comes from humans with their own biases. So the models will always have some bias. She agreed the trick will be getting the AI decision maker to a level of acceptable bias, keeping in mind that human decision makers have biases too.

Indeed, many of our procedural and evidentiary safeguards in litigation are designed to minimize human bias. We will have to figure out what kinds of guardrails need to be in place to reduce bias to that acceptable level if AI decision making is to be used, and in what contexts.

Other Open Questions

As with the use of any AI tool, particularly in dispute resolution, there remain open questions:

• How do we correct errors and allow for appeal?
• What about transparency and explainability?
• What should the regulatory and ethical frameworks be?
• Who bears liability for AI mistakes?

Where Are We?

Going back to the AAA announcement, it’s important to remember that, particularly with businesses, arbitration is an agreed-to dispute resolution technique. Indeed, I recently wrote about a tool from Arbitrus.ai. The tool is essentially an AI decision maker: where the parties agree, Arbitrus.ai can be used to resolve any disputes arising out of the contract.

And that’s the key issue, at least for now. Where the parties agree that a dispute or disputes can be resolved by AI, great. It makes sense from a cost and time perspective. But where they don’t, there’s no way we can use an AI decision maker.

It’s much like the right to a jury trial: the parties can agree to waive their right to trial by jury but can’t be forced to. The danger is that AI decision making might be forced by contract on those who don’t want it but have little bargaining power. We have seen this often where large companies attempt to force arbitration through contract terms.

It Depends

So, yes, AI dispute resolution may hold promise in litigation. Whether it will depends.

It can’t be forced on unwilling parties. It makes the most sense for low-exposure disputes, particularly between businesses with equal bargaining power. But like everything with AI, we need guardrails. For now, consent must remain the cornerstone.

We must ensure that consent is truly voluntary, not coerced through adhesion contracts that leave consumers with no real choice.

Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.