
Law360 Using AI Bias Detector To Make Sure Stories Don’t Accidentally Tell The Truth – Above the Law

The biggest story in journalism right now is that CBS News agreed to give Donald Trump $16 million in a legally blessed bribe. The great sin of "The House That Edward R. Murrow Built" involved 60 Minutes airing a run-of-the-mill interview with Kamala Harris that made her look like a competent public servant with years of experience. Since Trump's interviews, regardless of editing, sound like a dementia patient navigating a law school cold call, he decided CBS had committed consumer fraud because Harris spoke in complete sentences.

But apparently we weren't done with today's "dystopian assault on freedom of the press" news! And it came after an unlikely target: Law360. I certainly didn't have "legal industry trade publication" on my censorship BINGO card. Then again, Biglaw lateral moves have suddenly become political stories, so perhaps this marks inevitable cowardice creep reaching the legal press.

But the part of this story that elevates it from ominous development for civil liberties to comi-tragic is that Law360 is owned by LexisNexis, and therefore the agent of Law360's doom is… an AI algorithm! A new bias-detecting ChatGPT wrapper slapped together by some LexisNexis product engineers, probably taken away from actually useful work to build a degenerative AI to strip news articles of any semblance of value. 2025, man… Does. Not. Miss.
NiemanLab, Harvard's digital journalism center, reports that Law360 has ordered its reporters to run their stories through an AI bias detector designed for "applying a neutral voice to copy" and to be mandatory for "headline drafting, story tagging, and 'article refinement and editing.'"

As one might imagine, the journalists, represented by the Law360 union, object to this half-baked idea. A policy this ethically bankrupt could only arise from non-journalist executive input.

The announcement came a few weeks after an executive at Law360's parent company accused the newsroom of liberal political bias in its coverage of the Trump administration. At an April town hall meeting, Teresa Harmon, vice president of legal news at LexisNexis, cited unspecified reader complaints as evidence of editorial bias.

Giving uncritical weight to squeaky-wheel complaints, especially in an environment where a government official weaponized his followers to act on their every grievance up to and including STORMING THE FUCKING CAPITOL, is a dunderheaded management strategy only an MBA could come up with. But it's almost certainly a cynical one. If we all start writing complaints that the headlines are neutered doublespeak, will Law360 be ordered to reverse course? I'm incredulous.

While the article notes that there's not an established throughline from those remarks to the implementation of the policy, it speaks to a mindset that clearly got out of hand.

But let's put aside the wisdom of the policy and focus on the fact that the bias detector is also terrible at its job. Because that's just a little bit more fun. Only at a tech company could someone think that generative AI tools being developed for dedicated legal work tasks could be bolted onto the editorial process of a news publication.

Generative AI is a powerful tool in the same way a screwdriver is a powerful tool. But you wouldn't use a screwdriver to do your taxes. Yet that's the thinking involved in bringing AI into an editorial process. To borrow from the TV series Veep, it's like using a croissant as a dildo: "It doesn't do the job, and it makes a fucking MESS!"

She also criticized the headline of a March 28 story, "DOGE officials arrive at SEC with unclear agenda," as an example. In the same town hall, Harmon suggested that the still-experimental bias indicator might be an effective solution to this problem, according to two employees in attendance.

But… DOGE officials did arrive at the SEC with an unclear agenda. The White House couldn't be clear about who was running DOGE, let alone its agenda. This is just a factual statement that, if anything, is biased in favor of DOGE, since its suspected agenda to steal data and hamper regulation was about as disguised as three raccoons in a trench coat.

The report notes another story about the Trump decision to mobilize the California National Guard:

Several sentences in the story were flagged as biased, including this one: "It's the first time in 60 years that a president has mobilized a state's National Guard without receiving a request to do so from the state's governor." According to the bias indicator, this sentence is "framing the action as unprecedented in a way that might subtly critique the administration." It was best to give more context to "balance the tone."

It was the first time in 60 years, though! That is the relevant context. As is the juxtaposition with the civil rights era, since the last time a president did this, it was to push back against segregationists, while this time it was about breaking up a conga line. Absent that context, the story strips a radical encroachment on state sovereignty of its newsworthiness.

The algorithm also apparently wanted the article to tone down its characterization of Judge Breyer's response:

Another line was flagged for suggesting Judge Charles Breyer had "pushed back" against the federal government in his ruling, an opinion which had called the president's deployment of the National Guard the act of "a monarchist." Rather than "pushed back," the bias indicator suggested a milder word, like "disagreed."

This new bot would have reported Watergate as a tenant association dispute.

In another example, BiasBot told Law360 that its coverage of a case should "state the facts of the lawsuit without suggesting its broader implications." Given that the law is still ostensibly a function of precedent, reporting on caselaw is… all about broader implications. It's kind of the whole reason LexisNexis is in business, actually!

As a sometimes tech reporter, I have great relationships with the LexisNexis folks working to make the legal profession more efficient. But that's because my contacts aren't the people trying to micromanage news coverage to make sure every article earns the right-wing podcaster seal of approval as "fair." It seems to me the company might need to get control of its rogue unit.

There are, admittedly, opportunities to leverage generative AI in the journalist workflow. Detecting bias is not one of them, for several reasons. The most straightforward and technical is that generative AI tools are designed to give the user pleasing answers come hell or high water. It's how AI hallucinates cases to match the user's research query. So if you build an AI to "detect bias," it guarantees that it will find some bias. Probably 4 or 5 bulleted examples, no matter what. Does it really have a problem with "pushed back," or was that just something it grabbed to fill its answer quota?

But the more philosophical answer is that objective facts often have a lean. When 99 percent of climate scientists say climate change is real, do news outlets have to give equal time to Professor Daniel Plainview about the medicinal benefits of drinking crude oil? Because the algorithm can't handle that nuance. Based on the examples in the NiemanLab piece, it's just performing the barest level of sentiment analysis and flagging phrasing that carries even the slightest impact beyond the superficial. But that in and of itself is an act of bias. I used to tell deponents not to speculate because if they don't know something, no matter how much they think they're helping, they're actually lying if they don't admit that they don't know.

The flip side is also true. A news report that says Charles Breyer had a tepid disagreement with the DOJ is, in fact, a lie. And it's not any less of a lie because you asked the robot to say the lie for you.

Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you're interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.