Everyone Cheering The Social Media Addiction Verdicts Against Meta Should Understand What They’re Actually Cheering For

from the bad-defendants-make-bad-law dept

First things first: Meta is a terrible company that has spent years making terrible decisions and being terrible at explaining the challenges of social media trust & safety, all while prioritizing growth metrics over user safety. If you’ve been reading Techdirt for any length of time, you know we’ve been critical of the company for years. Mark Zuckerberg deserves zero benefit of the doubt.

So when a New Mexico jury ordered Meta to pay $375 million on Tuesday for “enabling child exploitation” on its platforms, and a California jury found Meta and YouTube liable on Wednesday for designing addictive products that supposedly harmed a young user, awarding $6 million in total damages, the reaction from a lot of people was essentially: good, screw ’em, they deserve it.

And on a visceral, emotional level? Sure. Meta deserves to feel bad. Zuckerberg deserves to feel bad.

But if you care about the internet — if you care about free speech online, about small platforms, about privacy, about the ability for anyone other than a handful of tech giants to operate a website where users can post things — these two verdicts should scare the hell out of you.

Because the legal theories that were used to nail Meta this week don’t stay neatly confined to companies you don’t like. They will be weaponized against everyone. And they will functionally destroy Section 230 as a meaningful protection, not by repealing it, but by making it irrelevant.

Let me explain.
The “Design” Theory That Ate Section 230
For years, Section 230 has served as the legal backbone of the internet. If you’re a regular Techdirt reader, you know this. But in case you’re not familiar, here’s the short version: it says that if a user posts something on a website, the website can’t be sued for that user’s content. The person who created the content is liable for it, not the platform that hosted it. That’s it. That’s the core of it.
It serves one key purpose: putting liability on the party who actually commits the violative act. It applies to every website and every user of every website, from Meta down to the smallest forum or blog with a comments section, or the person who retweets a post or sends an email.
Plaintiffs’ lawyers have been trying to get around Section 230 for years, and these two cases suggest they’ve finally found a formula that works: don’t sue over the content on the platform. Sue over the design of the platform itself. Argue that features like infinite scroll, autoplay, algorithmic recommendations, and notification systems are “product design” choices that are addictive and harmful, separate and apart from whatever content flows through them.
The trial judge in the California case bought this argument, ruling that because the claims were about “product design and other non-speech issues,” Section 230 didn’t apply. The New Mexico court reached a similar conclusion. Both cases then went to trial.

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.
Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems.

Is anyone addicted? Is anyone harmed? Is anyone suing? Of course not.
Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.
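To see just how content-neutral these features are, consider what an “infinite scroll” amounts to on the backend. The sketch below is purely illustrative (hypothetical names, not any platform’s real code): a pagination function that returns the next slice of a feed without ever looking at what the items contain.

```python
# Illustrative sketch of an "infinite scroll" backend. Nothing here
# inspects the content: paint-drying videos and anything else flow
# through identical code.

def next_page(feed: list[dict], cursor: int, page_size: int = 10) -> tuple[list[dict], int]:
    """Return the next slice of the feed plus the updated cursor."""
    page = feed[cursor:cursor + page_size]
    return page, cursor + len(page)

# The client calls this repeatedly as the user keeps scrolling.
feed = [{"id": i, "media": f"clip_{i}.mp4"} for i in range(100)]
page, cursor = next_page(feed, cursor=0)
```

Whether that code is “addictive” depends entirely on what ends up in the feed, which is exactly the third-party content Section 230 was written to cover.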
As Eric Goldman pointed out in his response to the verdicts:

The lower court rejected Section 230’s application to large parts of the plaintiffs’ case, holding that the claims sought to impose liability on how social media services configured their offerings and not third-party content. But social media’s offerings consist of third-party content, and the configurations were publishers’ editorial decisions about how to present it. So the line between first-party “design” choices and publication decisions about third-party content seems illusory to me.
If every editorial decision about how to present third-party content is now a “design choice” subject to product liability, Section 230 protects effectively nothing.

Every website makes decisions about how to display user content. Every search engine ranks results. Every email provider filters spam. Every forum has a sorting algorithm, even if it’s just “newest first.” All of those are “design choices” that could, theoretically, be blamed for some downstream harm.
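For a sense of how mundane a forum’s “sorting algorithm” can be, here is what “newest first” amounts to, as a minimal sketch with made-up data:

```python
# "Newest first": the entire "sorting algorithm" of many comment sections.
comments = [
    {"text": "First!", "posted_at": 1},
    {"text": "A reply", "posted_at": 2},
]
newest_first = sorted(comments, key=lambda c: c["posted_at"], reverse=True)
```

Under the design-liability theory, even that single sorted() call is a product decision a lawyer could point to.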
The whole point of Section 230 was to keep platforms from being held liable for harms that flow from user-generated content. The “design” theory accomplishes exactly what 230 was meant to prevent — it just uses different words to get there.

Bad defendants make bad law. Meta is unsympathetic. It’s understandable why they get so much hate. It’s understandable why people (including those on juries) are willing to accept legal theories against them that would be obviously problematic if applied to anyone else. But legal precedent doesn’t care about your feelings toward the defendant. What works against Meta works against everyone.
The Return Of Stratton Oakmont
If this all sounds familiar, it should. This is almost exactly the legal landscape that existed before Section 230 was passed in 1996, and the reason Congress felt it needed to act.

In the early 1990s, Prodigy ran an online service with message boards and made the decision to moderate them to create a more “family-friendly” environment. In the resulting lawsuit, Stratton Oakmont v. Prodigy, the court ruled that because Prodigy had made editorial choices about what to allow, it was acting as a publisher and could therefore be held liable for everything users posted that it failed to catch.

The perverse incentive was obvious: moderate, and you’re on the hook for everything you miss. Don’t moderate at all, and you’re safer.

Congress recognized that this was insane — it punished companies for trying to do the right thing — and passed Section 230 to fix it. The law explicitly said that platforms could moderate content without being treated as the publisher or speaker of that content.
And, as multiple courts rightly decided, this was designed to apply to all publisher activity of a platform — every editorial decision, every way to display content. The whole point was to allow online services and users to feel free to make decisions regarding other people’s content, including how to display it, without facing liability for that content.

And a critical but often overlooked function of Section 230 is that it provides a procedural shield: it lets platforms get baseless lawsuits dismissed early, before the ruinous costs of discovery and trial.
These two verdicts effectively bring us back to Stratton Oakmont territory through the back door. By recharacterizing platform liability as “product design” liability rather than content liability, plaintiffs’ lawyers have found a way to nullify Section 230 without anyone having to vote to repeal it.

Every design decision — moderation algorithms, recommendation systems, notification settings, even the order in which posts appear — can now be characterized by some lawyer as a “defective product” rather than an editorial choice about third-party content.

Except this time, instead of people being horrified by the implications, they’re cheering.
The Trial Is the Punishment
The dollar amounts in these cases tell an interesting story if you pay attention. The California jury awarded $6 million total — $4.2 million from Meta, $1.8 million from YouTube. For companies that bring in tens of billions in quarterly revenue, that’s effectively nothing. It’s not even a slap on the wrist. Meta will barely notice.

But that’s exactly the problem. The real cost here is the process. The California trial lasted six weeks. The New Mexico trial lasted nearly seven. Both involved extensive discovery, depositions of top executives including Zuckerberg himself, production of enormous volumes of internal documents, and armies of lawyers on both sides.

Meta can afford that. Google can afford that. You know who can’t? Basically everyone else who runs a platform where users post things.
And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering.

If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

The California case is just the first of multiple “bellwether” trials scheduled in the near future. Hundreds of federal cases are lined up behind those. There are over 1,600 plaintiffs in the consolidated California litigation alone.
As Goldman noted:

Together, these rulings indicate that juries are willing to impose major liability on social media providers based on claims of social media addiction. That liability exposure jeopardizes the entire social media industry. There are thousands of other plaintiffs with pending claims; and with potentially millions of dollars at stake for each victim, many more will emerge. The total amount of damages at issue could be many tens of billions of dollars.
This is the Stratton Oakmont problem all over again, but worse. At least in 1995, only companies that moderated faced liability. Now, any company that makes any “design choice” about how to present user content — which is to say, literally every platform on the internet — is potentially on the hook if any harm comes to any user that some lawyer can claim stemmed from using that service.

The lawsuit becomes a weapon regardless of outcome, because the cost of defending yourself is ruinous for anyone who isn’t a trillion-dollar company.
The Encryption Problem: Where “Design Liability” Leads
If the “design choices create liability” framework seems worrying in the abstract, the New Mexico case provides a concrete example of where it leads in practice.

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm. The state is now seeking court-mandated changes, including “protecting minors from encrypted communications that shield bad actors.”
Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

End-to-end encryption protects billions of people from surveillance, data breaches, authoritarian governments, stalkers, and domestic abusers. It’s one of the most important privacy and security tools ordinary people have. Every major security expert and civil liberties organization in the world has argued for stronger encryption, not weaker.

But under the “design liability” theory, implementing encryption becomes evidence of negligence, because a small number of bad actors also use encrypted communications.
The logic applies to literally every communication tool ever invented. Predators also use the postal service, telephones, and in-person conversation. The encryption itself harms no one. Like infinite scroll and autoplay, it is inert without the choices of bad actors — choices made by people, not by the platform’s design.
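That inertness is visible in the code itself. Here is a minimal sketch of end-to-end encryption using the PyNaCl library (purely illustrative, not Messenger’s actual implementation): the encryption step never inspects the plaintext, so the same few lines protect an abuse survivor’s messages and, unavoidably, anyone else’s.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# The Box never inspects the plaintext; it is content-neutral by design.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob: only Bob's private key can open it,
# which is the whole point, for good actors and bad alike.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"any message at all")

receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"any message at all"
```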
The incentive this creates goes far beyond encryption, and it’s bad. If any product improvement that protects the majority of users can be held against you because a tiny fraction of bad actors exploit it, companies will simply stop making those improvements. Why add encryption if it becomes Exhibit A in a future lawsuit? Why implement any privacy-protective feature if a plaintiff’s lawyer will characterize it as “shielding bad actors”?
And it gets worse. Some of the most damaging evidence in both trials came from internal company documents where employees raised concerns about safety risks and discussed tradeoffs. These were played up in the media (and the courtroom) as “smoking guns.” But that means no company is going to allow anyone to raise concerns ever again. That’s very, very bad.

In a sane legal environment, you want companies to have these internal debates. You want engineers and safety teams to flag potential risks, wrestle with difficult tradeoffs, and document their reasoning. But when those good-faith deliberations become plaintiff’s exhibits presented to a jury as proof that “they knew and did it anyway,” the rational corporate response is to stop putting anything in writing. Stop doing risk assessments. Stop asking hard questions internally.

The lesson every general counsel in Silicon Valley is learning right now: ignorance is safer than inquiry. That makes everyone less safe, not more.
The Causation Problem
We also need to talk about the actual evidence of harm in these cases, because it’s thinner than most people realize.

The California plaintiff, known as KGM, testified that she began using YouTube at age 6 and Instagram at age 9, and that her social media use caused depression, self-harm, body dysmorphic disorder, and social phobia. Those are real and serious harms that genuinely happened to a real person, and no one should minimize her suffering.

But as Goldman noted:
KGM’s life was full of trauma. The social media defendants argued that the harms she suffered were due to that trauma and not her social media usage. (Indeed, there was some evidence that social media helped KGM cope with her trauma). It is highly likely that most or all of the other plaintiffs in the social media addiction cases have sources of trauma in their lives that might negate the responsibility of social media.
The jury was asked whether the companies’ negligence was “a substantial factor” in causing harm. Not the factor. Not the primary factor. A substantial factor. This standard is doing enormous work here, and nobody in the coverage seems to be paying attention to it.
In most product liability cases, causation is relatively straightforward: the car’s brakes failed, the car crashed, the plaintiff was injured. You can trace a mechanical chain of events. There needs to be a clear causal chain between the product and the harm.

But what’s the equivalent chain here? The plaintiff scrolled Instagram, saw content that made her feel bad about her body, developed body dysmorphic disorder? Which content? Which scroll session? How do you isolate the “design” from the specific posts she saw, the comments she read, the accounts she followed?
With a standard that loose, applied to a teenager with multiple documented sources of trauma in her life, how do you disentangle what was caused by social media and what was caused by everything else?

The honest answer is: you can’t. And neither could the jury, not with any scientific rigor. They made a judgment call based on vibes and sympathy — which is what juries do, but it’s a terrifying foundation for reshaping internet law.
The research on social media’s causal relationship to teen mental health problems is incredibly weak. Over and over and over again, researchers have tried to find a causal link. And failed. Every time.

Lots of people (including people connected to both of these cases) keep comparing social media to things like cigarettes or lead paint. But, as we’ve discussed, that’s a horrible comparison.
Cigarettes cause cancer regardless of what else is happening in a smoker’s life. Lead paint causes neurological damage regardless of a child’s home environment. Social media is not like that. The relationship between social media use and mental health outcomes is complex, highly individual, and mediated by dozens of confounding factors that researchers are still trying to untangle.

And, also, neither cigarettes nor lead paint are speech. The issues involving social media are all about speech. And yes, speech can be powerful. It can both delight and offend. It can make people feel wonderful or horrible. But we protect speech, in part, because it’s so powerful.
But a jury doesn’t need to untangle those factors. A jury just needs to feel that a sympathetic plaintiff was harmed and that a deeply unsympathetic defendant probably had something to do with it. And when the defendant is Mark Zuckerberg, that’s a very easy emotional call to make.

Which is exactly why this is so dangerous as precedent.
If “a substantial factor” is the standard, and the defendant’s internal documents showing employees discussing concerns about safety count as proof of wrongdoing, then essentially any plaintiff who used social media and experienced mental health difficulties has a viable lawsuit. Multiply that by every teenager in America and you start to see the scale of the problem.

Then recognize that this applies to everything on the internet, not just the companies you hate.
A Discord server for a gaming community uses a bot to surface active conversations — design choice. A small forum for chronic illness patients sends email notifications when someone replies to your post — design choice. A blog lets readers comment on articles and notifies writers when they do — design choice. A local news site has a comments section that displays newest-first — design choice.

Every one of these could theoretically be characterized as “features that increase engagement” and therefore potential vectors of liability.
And the claims of “addiction” are even worse. As we’ve discussed, studies show very little support for the idea that “social media addiction” is a real thing, but many people believe it is. And it’s not difficult for a lawyer to turn anything that makes people want to use a service more into a claimed “addictive” feature. Oh, that forum has added gifs? That makes people use it more! Sue!

Yes, some of these may sound crazy, but lawyers are going to start suing everyone, and the sites you like are going to be doing everything they can to appease them, which will involve making services way worse.
Who’s Not in the Room
There’s also something that got zero attention in either trial: the people for whom social media is genuinely, meaningfully beneficial. Goldman’s observation on this deserves to be read carefully:

Due to the legal pressure from the jury verdicts and the enacted and pending legislation, the social media industry faces existential legal liability and inevitably will need to reconfigure their core offerings if they can’t get broad-based relief on appeal. While any reconfiguration of social media offerings may help some victims, the changes will almost certainly harm many other communities that rely upon and derive important benefits from social media today. Those other communities didn’t have any voice in the trial; and their voices are at risk of being silenced on social media as well.
LGBTQ+ teenagers in hostile communities who find support and connection online. People with rare diseases who find communities of fellow patients. Activists in authoritarian countries who use social media to organize. Artists and creators who built careers on these platforms. People with disabilities who rely on social media as their primary social outlet.

None of them were in that courtroom. None of them had a voice in the proceedings that will reshape the platforms they depend on.
When platforms are forced to “reconfigure their core offerings” to reduce liability — which could mean anything from removing algorithmic recommendations to eliminating features that enable connection and discovery — the costs won’t fall evenly. Meta and Google will survive. They’ll make their products blander, less useful, and more locked down. It’s the users who relied on those features who will pay the price.
Bad Defendants Make Bad Law
Both Meta and YouTube have said they will appeal, and they have plausible grounds. The product liability theory applied to what are fundamentally speech platforms raises serious First Amendment questions. The Section 230 issue — whether “design choices” about presenting third-party content are really just editorial decisions that 230 was designed to protect — will almost certainly get a serious look from appellate courts. The causation questions are genuinely unresolved.
But appeals take years. In the meantime, every plaintiffs’ attorney in America now has a proven template for suing any social media platform. The bellwether structure means more trials are already scheduled — the next California state court one is in July, with a similar federal case starting in June. The litigation flood has started, and 230’s procedural protection — the ability to get these cases dismissed before they become multi-million-dollar ordeals — has already been neutralized.
Goldman is right to frame this as existential:

There are thousands of other plaintiffs with pending claims; and with potentially millions of dollars at stake for each victim, many more will emerge. The total amount of damages at issue could be many tens of billions of dollars.
None of this means the harms kids face don’t deserve serious attention. They do. There are ways to address legitimate concerns about teen mental health that don’t require treating every editorial decision about third-party content as a defective product — but they involve hard, unglamorous work, like actually funding mental health care for young people. But suing Meta is more fun!

Meta can absorb tens of billions. But this legal theory doesn’t apply only to Meta. It applies to every platform that makes “design choices” about how to present content — which, again, is every platform. The next wave of lawsuits won’t just target trillion-dollar companies. They’ll target anyone with a recommendation algorithm, a notification system, or an infinite scroll feature, which in 2025 is basically everyone.
We got Section 230 because Congress looked at the Stratton Oakmont decision and realized the legal system had created a set of incentives that would destroy the open internet. The incentive now is arguably worse: not just “don’t moderate” but “don’t build anything that makes user-generated content engaging, discoverable, or easy to access, because if someone is harmed by that content, the way you presented it makes you liable.”
I get why people are cheering. Meta is a bad company that has made bad choices and treated its users badly. Zuckerberg has earned most of the contempt coming his way. Kids have been genuinely harmed, and the instinct to want someone powerful to be held accountable is about as human as it gets.
But bad defendants make bad law. And the law being made here — that platforms are liable for the “design” of how they present the third-party content that is their entire reason for existing — will not stay confined to companies you don’t like. It will be used against every website, every app, every platform, every small operator who ever made a choice about how to display user-generated content. It will make Section 230 a dead letter without anyone having to vote to repeal it.
It will create a legal environment where only the largest companies can afford to operate, because only they can absorb the cost of endless litigation.

What you won’t get out of this is anything approaching “accountability.” You’ll get overly lawyered-up systems that prevent you from doing useful things online, and eventually the end of the open internet — cheered on by people who think they’re punishing a bully but are actually handing the bully’s biggest competitors a death sentence.