
Why Making Social Media Companies Liable For User Content Doesn’t Do What Many People Think It Will – Above the Law

Brazil’s Supreme Court appears close to ruling that social media companies should be liable for content hosted on their platforms—a move that would represent a significant departure from the country’s pioneering Marco Civil internet law. While this approach has obvious appeal to people frustrated with platform failures, it’s likely to backfire in ways that make the underlying problems worse, not better.

The core issue is that most people fundamentally misunderstand both how content moderation works and what drives platform incentives. There’s a persistent myth that companies could achieve near-perfect moderation if they just “tried harder” or faced sufficient legal consequences. This ignores the mathematical reality of what happens when you attempt to moderate billions of pieces of content daily, and it misunderstands how liability actually changes corporate behavior.

Part of the confusion, I think, stems from people’s failure to understand the impossibility of doing content moderation well at scale. There is a deeply mistaken assumption that social media platforms could do perfect (or very good) content moderation if they just tried harder or had more incentive to do better. Without denying that some entities (*cough* ExTwitter *cough*) have made it clear they don’t care at all, most others do try to get this right, and discover over and over again how impossible that is.

Yes, we can all point to depressing examples of platform failures where it seems obvious that things should have been done differently. But those failures don’t happen because “the laws don’t require it.” They happen because it’s impossible to do this well at scale. Some people will always disagree with how a decision comes out, and other times there are no “right” answers. Also, sometimes there’s just too much going on at once, and no legal regime in the world can possibly fix that.

Given all of that, what we really want are better overall incentives for the companies to do better. Some people (again, falsely) seem to think the only incentives are regulatory. But that’s not true. Incentives come in all sorts of shapes and sizes—and much more powerful than regulations are things like the users themselves, along with advertisers and other business partners.

Importantly, content moderation is also a constantly moving and evolving issue. People who are trying to game the system are constantly adjusting. New kinds of problems arise out of nowhere. If you’ve never done content moderation, you have no idea how many “edge cases” there are. Most people—incorrectly—assume that most decisions are easy calls and you may occasionally come across a tougher one.

But there are constant edge cases, unique scenarios, and unclear situations. Because of this, every service provider will make many, many mistakes every day. There’s no way around this. It’s partly the law of large numbers. It’s partly the fact that humans are fallible. It’s partly the fact that decisions need to be made quickly without full information. And a lot of it is that those making the decisions just don’t know what the “right” approach is.

The way to get better is constant adjusting and experimenting. Moderation teams need to be adaptable. They need to be able to respond quickly. And they need the freedom to experiment with new approaches to deal with bad actors trying to abuse the system.


Putting legal liability on the platform makes all of that more difficult

Now, here’s where my concerns about the potential ruling in Brazil come in: if there is legal liability, it creates a scenario that is actually less likely to lead to good outcomes. First, it effectively requires companies to replace moderators with lawyers. If your company is now making decisions that carry significant legal liability, those decisions likely require a much higher level of expertise. Even worse, it creates a job that most people with law degrees are unlikely to want.

Every social media company has at least some lawyers who work with their trust & safety teams to review the really challenging cases, but when legal liability could accrue for every decision, the problem becomes much, much worse.

More importantly, though, it makes it way more difficult for trust & safety teams to experiment and adapt. Once things include the potential of legal liability, it becomes much more important for the companies to have some sort of plausible deniability—some way to express to a judge “look, we’re doing the same thing we always have, the same thing every company has always done” to cover themselves in court.

But that means these trust & safety efforts get hardened into place, and teams are less able to adapt or to experiment with better ways to fight evolving threats. It’s a disaster for companies that want to do the right thing.

The next problem with such a regime is that it creates a real heckler’s veto. If anyone complains about anything, companies are quick to take it down, because the risk of ruinous liability just isn’t worth it. And we now have decades of evidence showing that increasing liability on platforms leads to massive overblocking of information. I recognize that some people feel this is acceptable collateral damage… right up until it impacts them.

This dynamic should sound familiar to anyone who’s studied internet censorship. It’s exactly how China’s Great Firewall originally operated—not through explicit rules about what was forbidden, but by telling service providers that the punishment would be severe if anything “bad” got through. The government created deliberate uncertainty about where the line was, knowing that companies would respond with massive overblocking to avoid potentially ruinous consequences. The result was far more comprehensive censorship than direct government mandates could have achieved.

Brazil’s proposed approach follows this same playbook, just with a different enforcement mechanism. Rather than government officials making vague threats, it would be civil liability creating the same incentive structure: when in doubt, take it down, because the cost of being wrong is too high.

People may be okay with that, but I would think that in a country with a history of dictatorships and censorship, people would want to be a bit more cautious before handing the government a similarly powerful tool of suppression.

It’s especially disappointing in Brazil, which a decade ago put together the Marco Civil, an internet civil rights law that was designed to protect user rights and civil liberties—including around intermediary liability. The Marco Civil remains an example of more thoughtful internet lawmaking (way better than we’ve seen almost anywhere else, including the US). So this latest move feels like backsliding.

Either way, the longer-term fear is that this would actually limit the ability of smaller, more competitive social media players to operate in Brazil, as it will be way too risky. The biggest players (Meta) aren’t likely to leave, but they have buildings full of lawyers who can fight these lawsuits (and often, likely, win). A study we conducted a few years back detailed how, as countries ratcheted up their intermediary liability, the end result was, repeatedly, fewer online places to speak.

That doesn’t actually improve the social media experience at all. It just gives more of it to the biggest players with the worst track records. Sure, a few lawsuits may extract some cash from these companies for failing to be perfect, but it’s not like they can wave a magic wand and not let any “criminal” content exist. That’s not how any of this works.


Some responses to issues raised by critics

When I wrote about this in a brief Bluesky thread, I received hundreds of responses—many quite angry—that revealed some common misunderstandings about my position. I’ll take the blame for not expressing myself as clearly as I should have, and I’m hoping the points above lay out the argument more clearly regarding how this could backfire in dangerous ways. But, since some of the points were repeated at me over and over again (sometimes with clever insults), I thought it would be good to address some of the arguments directly:


“But social media is bad, so if this gets rid of all of it, that’s good.” I get that many people hate social media (though there was some irony in people sending those messages to me on social media). But really, what most people hate is what they see on social media. And as I keep explaining, the way we fix that is with more experimentation and more user agency—not handing everything over to Mark Zuckerberg and Elon Musk or the government.


“Brazil doesn’t have a First Amendment, so shut up and stop with your colonialist attitude.” I got this one repeatedly and it’s… weird? I never suggested Brazil had a First Amendment, nor that it should implement the equivalent or import American free speech laws. I simply pointed out the inevitable impact of increasing intermediary liability on speech. You can decide (as per the comment above) that you’re fine with this, but it has nothing to do with my feelings about the First Amendment; I was just pointing out what the consequences of this one change to the law might be.


“Existing social media is REALLY BAD, so we need to do this.” This is the classic “something must be done, this is something, we will do this” response. I’m not saying nothing must be done. I’m just saying this particular approach will have significant consequences that it would help people to think through.


“It only applies to content after it’s been adjudicated as criminal.” I got that one a few times from people. But, from my reading, that’s not true at all. That’s what the existing law was. These rulings would expand it greatly from what I can tell. Indeed, the article notes how this would change things from existing law:


The current legislation states social media companies can only be held responsible if they do not remove hazardous content after a court order.

[….]

Platforms need to be pro-active in regulating content, said Alvaro Palma de Jorge, a law professor at the Rio-based Getulio Vargas Foundation, a think tank and university.

“They need to adopt certain precautions that are not compatible with simply waiting for a judge to eventually issue a decision ordering the removal of that content,” Palma de Jorge said.


“You’re an anarchocapitalist who believes that there should be no laws at all, so fuck off.” This one actually got sent to me a bunch of times in various forms. I even got added to a block list of anarchocapitalists. Really not sure how to respond to that one other than saying “um, no, just look at anything I’ve written for the past two and a half decades.”


“America is a fucking mess right now, so clearly what you are pushing for doesn’t work.” This one was the weirdest of all. Some people sending variations on this pointed to multiple horrific examples of US officials trampling on Americans’ free speech, saying “see? this is what you support!”—as if I support those things, rather than consistently fighting back against them. Part of the reason I’m suggesting this kind of liability can be problematic is because I want to stop other countries from heading down a path that gives governments the power to stifle speech like the US is doing now.

I get that many people are—reasonably!—frustrated about the terrible state of the world right now. And many people are equally frustrated by the state of internet discourse. I am too. But that doesn’t mean any solution will help. Many will make things much worse. And the solution Brazil is moving towards seems quite likely to make the situation worse there.



