
AI In The Courtroom: Will We Trade The Rule Of Law For Efficiency’s Sake? – Above the Law

What happens when a judge relies on a GenAI tool in formulating their decision on a key issue, particularly one that could impact the GenAI providers?

It’s not only law firms and legal departments that are adopting GenAI systems without fully understanding what they can and cannot do; court systems may also be tempted to adopt these tools to short-circuit workloads in the face of limited resources. And that poses risks and concerns for the rule of law, a notion that hinges on accuracy, fairness, and public perception.


The Role of UNESCO

That’s why the work of organizations like UNESCO (the United Nations Educational, Scientific and Cultural Organization) is important.

UNESCO is an agency that attempts to foster international cooperation in various fields. It often sets standards, develops programs, and creates global networks. One such network is devoted to the development of Guidelines for the use of AI in courts. A recent UNESCO publication discussed the programs being developed to assist courts and tribunals in the use of AI. According to the publication, “The Guidelines provide principles and recommendations to courts and judges on how AI systems may be designed, procured and used to strengthen access to justice, human rights, and protect judicial independence.”


What Are The Risks?

The publication identified three risks that resonate given the current political climate:

  • Technology is in the hands of private companies that have little concern for judicial independence. These companies’ primary motive is making a profit, not ensuring fairness and transparency in judicial decisions.
  • Relatedly, there is the opportunity for subtle influence and manipulation of judicial decisions. As the publication puts it, “Even supportive AI functions, such as document summarization, can shape the facts considered in judgments. When judges use AI outputs, its dataset limitations can inadvertently affect legal reasoning.” What happens if that occurs?
  • There is public pressure on courts to adopt AI tools without sufficient safeguards. How can this pressure be tempered in favor of rational decision making when it comes to AI adoption by courts?


The Risks Are Not Theoretical, They’re Real

These dangers and risks are real.

First, the tech companies trumpeting AI tools are growing more and more powerful. They create tools that can hallucinate or offer outputs that are inaccurate. Yet the public drumbeat constantly repeats the refrain of all the wonders of these tools and how they can help humanity and the law, without recognizing the inherent risks, particularly to the judiciary. The lack of any watchdogs on judicial use is concerning.

Secondly, given this power and the potential lack of understanding by judicial users of the risks and biases of the tools, there is the opportunity for mischief and influence by the vendors to achieve their ends. Let’s say a judge is confronted with an issue that could impact a significant AI player. Could the tools be manipulated, perhaps subtly, to increase the chances of a favorable ruling? Who would know?

How would that be dealt with? In today’s political climate, where corporations have significant control over all kinds of things, from what we are allowed to see to what we can say on their controlled sites, the risk of influence is certainly not insignificant.


Judge Scott Schlegel, an appellate judge from Louisiana and one of the leading voices on the impact of AI on the judiciary, recently raised a similar point. What if there were hidden or white text in legal documents designed to lead AI tools to make certain recommendations and reasoning? What if the tools themselves were biased to reach or suggest certain decisions?

Indeed, we are already hearing of judges citing to cases that don’t exist. Who should catch this? Should judges be required to disclose that they (or their clerks) have used GenAI tools? Otherwise, who would necessarily know? How would (or could) the legitimacy of an impacted decision be determined?


The Pressures to Use AI in the Courtroom

And then there is the pressure on the judiciary to adopt these tools. The AI hype machine is in overdrive. We constantly hear of all the wondrous things GenAI can achieve. Will legislatures be tempted to mandate adoption of these tools to reduce the costs of a court system? Would overworked and understaffed judges be tempted to use AI tools to move cases, relying on vendor promises of what these tools can do?

Not to mention the public perception of a court system already under siege: what happens to that perception as more and more judges cite to cases that don’t exist, or to cases that do not stand for the proposition asserted? Courts often adopt the reasoning in the briefs of the successful party. What if those briefs are wrong or contain errors? How will those issues be dealt with?

What about bias in the models themselves? If a bias impacts a judicial decision, how will we deal with it? What will be the appropriate appellate standards? Do we need some new ones to deal with AI influence on judicial decision-making?


Why It Matters

That’s why what UNESCO is doing is important. It’s offering guidelines. It’s putting together teams of experts. It’s asking the hard questions. It’s trying to make us all see the risks before GenAI tools impact the rule of law, instead of reacting to them afterward.

The rule of law is too important to our society, our way of living, and our economic standards not to ask these hard questions. How can we deal with the concepts of fairness and due process when some of the decision-making, even if only in small bites, is ceded to GenAI?

How can we ensure transparency in judicial decision-making when it comes to AI? We already have problems knowing how judicial decisions are sometimes reached. With AI, we have yet another transparency barrier as we struggle to know what a judge relied on. Should judges be required to say whether they relied on GenAI tools, and to what extent, in decision-making?

We need to foresee and prepare for what AI could bring. From all indications, UNESCO is doing just that. But we need more. We need federal courts to lead the way in thinking about these issues. We need bar associations to step up and demand training and standards. We need to ensure our judiciary gets the training and the resources to understand and deal with both the benefits and risks of technology, just as lawyers and legal professionals are expected to.

There’s too much at stake not to.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.