
How One 1990s Browser Decision Created Big Tech’s Data Monopolies (And How We Might Finally Fix It) – Above the Law

There’s a fundamental architectural flaw in how the internet works that most people have never heard of, but it explains nearly every frustration you have with modern technology. Why your photos are trapped in Apple’s ecosystem. Why you can’t easily move data between apps. Why every promising new service starts from scratch, knowing nothing about you. And most importantly, why AI, for all its revolutionary potential, risks making Big Tech even bigger instead of putting powerful tools in your hands.

Former Google and Stripe executive Alex Komoroske (who recently wrote for us about why the future of AI need not be centralized) has written an equally brilliant analysis that traces all of these problems back to something called the “same origin paradigm”: a quick security fix that Netscape’s browser team implemented one night in the 1990s that somehow became the invisible physics governing all modern software.

The same origin paradigm is simple but devastating: Every website and app exists in its own completely isolated universe. Amazon and Google might as well be on different planets as far as your browser is concerned. The Instagram app and the Uber app on your phone can never directly share information. This isolation was meant to keep you safe, but it created something Komoroske calls “the aggregation ratchet”: a system where data naturally flows toward whoever can accumulate the most of it.
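In concrete terms, a browser boils every URL down to a (scheme, host, port) triple and walls off anything whose triple differs. Here’s a minimal Python sketch of that comparison rule; real browser implementations are far more involved, but the rule itself really is this simple:

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """Return the (scheme, host, port) triple a browser treats as an origin."""
    parts = urlsplit(url)
    # Fall back to the default port: 443 for https, 80 for http.
    port = parts.port or (443 if parts.scheme == "https" else 80)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    """Two URLs share an origin only if all three components match."""
    return origin(a) == origin(b)

same_origin("https://amazon.com/cart", "https://amazon.com/orders")   # True
same_origin("https://amazon.com/cart", "https://google.com/search")   # False
```

Every pair of pages that fails this check is, by default, forbidden from reading each other’s data — which is exactly why every service ends up hoarding its own copy.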

This is a much clearer explanation of a problem I identified almost two decades ago: the fundamental absurdity of having to keep uploading the same data to new services, rather than being able to tell a service to access our data at a specific location on the internet. Back then, I argued that the entire point of the open internet shouldn’t be locking up data in private silos, but enabling users to control their data and grant services access to it on their own terms, for their own benefit.

What Komoroske’s analysis reveals is the architectural root cause of why that vision failed. The “promise” of what we optimistically called “the cloud” was that you could more easily connect data and services. The reality became a land grab by internet giants to collect and hold all the data they could. Now we understand why: the same origin paradigm made the centralized approach the path of least resistance.

As Komoroske explains, this architectural choice creates an impossible constraint for system designers.


This creates what I call the iron triangle of modern software. It’s a constraint that binds the hands of system designers, the architects of the operating systems and browsers we all depend on. These designers face an impossible choice. They can build systems that support:


  1. Sensitive data (your emails, photos, documents)
  2. Network access (ability to communicate with servers)
  3. Untrusted code (software from developers you don’t know)


But they can only enable two at once, never all three. If untrusted code can both access your sensitive data and communicate over the network, it could steal everything and send it anywhere.


So system designers picked safety through isolation. Each app becomes a fortress: secure but solitary. Want to use a cool new photo organization tool? The browser or operating system forces a stark choice: Either trust it completely with your data (sacrificing the “untrusted” part), or keep your data out of it entirely (sacrificing functionality).


Even when you grant an app or website permission only to look at your photos, you’re not really saying, “You can use my photos for this specific purpose.” You’re saying, “I trust whoever controls this origin, now and forever, to do anything they want with my photos, including sending them anywhere.” It’s an all-or-nothing proposition.

This creates massive friction every time data needs to move between services. But that friction doesn’t just slow things down; it fundamentally reshapes where data accumulates. The service with the most data can provide the most value, which attracts more users, which generates more data. Each click of the ratchet makes it harder for new entrants to compete.


Consider how you might plan a trip: You’ve got flights in your email, hotel confirmations in another app, restaurant recommendations in a Google document, your calendar in yet another tool. Every time you need to connect these pieces you have to manually copy, paste, reformat, repeat. So you grant one service (like Google) access to all of this. Suddenly there’s no friction. Everything just works. Later, when it comes time to share your trip details with your fellow travelers, you follow the path of least resistance. It’s simply easier to use the service that already knows your preferences, history, and context.


The big get bigger not because they’re necessarily better, but because the physics of the system tilts the playing field in their favor.


This isn’t conspiracy or malice. It’s emergent behavior from architectural choices. Water flows downhill. Software under the same origin paradigm aggregates around a few dominant platforms.

Enter artificial intelligence. As Komoroske notes, AI represents something genuinely new: it makes software creation effectively free. We’re entering an era of “infinite software”: endless custom tools tailored to every conceivable need.


AI needs context to be useful. An AI that can see your calendar, email, and documents together might actually help you plan your day. One that only sees fragments is just another chatbot spouting generic advice. But our current security model, with policies attached at the app level, makes sharing context an all-or-nothing gamble.


So what happens? What always happens: The path of least resistance is to put all the data in one place.


Think about what we’re trading away: Instead of the malleable, personal tools that Litt envisions, we get one-size-fits-all assistants that require us to trust megacorporations with our most intimate data. The same physics that turned social media into a few giant platforms is about to do the same thing to AI.


We only accept this bad trade because it’s all we know. It’s an architectural choice made before many of us were born. But it doesn’t have to be this way, not anymore.

But here’s the hopeful part: the technical pieces for a fundamentally different approach are finally emerging. The hopes I had two decades ago, that the cloud could free us from having to let services collect and control all our data, may finally be realized.

Perhaps most interestingly, Komoroske argues that the technological element that makes this possible is the secure enclaves now found in chips. This is actually a tech that many of us were concerned would lead to the death of general purpose computers, and give more power to the large companies. Cory Doctorow has warned about how these systems can be abused (he calls them demon-haunted computers), but could we also use that same tech to regain control?

That’s part of Komoroske’s argument:


These secure enclaves can also do something called remote attestation. They can provide cryptographic proof (not just a promise, but mathematical proof) of exactly what software is running inside them. It’s like having a tamper-proof seal that proves the code handling your data is exactly what it claims to be, unmodified and uncompromised.
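The shape of that seal can be shown in a toy Python sketch. This is a deliberately simplified illustration, not how any real enclave works: an HMAC with a hard-coded constant stands in for the signing key that real hardware (Intel SGX, AMD SEV, and the like) keeps locked inside the chip, and the “measurement” is just a hash of the loaded code:

```python
import hashlib
import hmac

# In a real enclave this key is burned into the chip and never leaves it.
# Here it is a constant only so the sketch is runnable.
HARDWARE_KEY = b"burned-in-at-manufacture"

def measure(code: bytes) -> str:
    """The 'measurement': a hash of exactly the code loaded into the enclave."""
    return hashlib.sha256(code).hexdigest()

def attest(code: bytes):
    """The enclave's signed claim: (measurement, signature over it)."""
    m = measure(code)
    sig = hmac.new(HARDWARE_KEY, m.encode(), hashlib.sha256).hexdigest()
    return m, sig

def verify(measurement: str, sig: str, expected_code: bytes) -> bool:
    """A remote party checks the signature AND that the measured code
    matches the code they expected to be running."""
    good_sig = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good_sig) and measurement == measure(expected_code)
```

The point of the construction: if even one byte of the code changes, the measurement changes, the verification fails, and the remote party knows not to hand over any data.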


If you combine these ingredients in just the right way, what this enables, for the first time, are policies attached not to apps but to data itself. Every piece of data could carry its own rules about how it can be used. Your photos might say, “Analyze me locally but never transmit me.” Your calendar might allow, “Extract patterns but only share aggregated insights in a way that is provably anonymous.” Your emails could permit reading but forbid forwarding. This breaks the iron triangle: Untrusted code can now work with sensitive data and have network access, because the policies themselves, not the app’s origin, control what can be done with the data.
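To make the inversion concrete, here is a hypothetical Python sketch of what a policy-carrying datum could look like. The class and operation names are invented for illustration; in a real system the enforcement would live in attested enclave code rather than an honor-system wrapper:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyWrappedData:
    """Illustrative only: a piece of data that carries its own usage rules."""
    payload: bytes
    allowed: set = field(default_factory=set)  # operations the policy permits

    def request(self, operation: str) -> bytes:
        # The policy travels with the data, so the check happens here,
        # regardless of which app happens to be holding it.
        if operation not in self.allowed:
            raise PermissionError(f"policy forbids {operation!r}")
        return self.payload

# Photos that may be analyzed locally but never transmitted:
photos = PolicyWrappedData(b"<pixels>", allowed={"analyze_locally"})
photos.request("analyze_locally")  # permitted
# photos.request("transmit")       # raises PermissionError
```

Note where the gate sits: on the data, not on the app’s origin. That is the whole difference from the same origin model.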

Years of recognizing that Cory’s warnings are usually dead-on accurate has me approaching this embrace of secure enclaves with some amount of caution. The same underlying technologies that could liberate users from platform silos could also be used to create more sophisticated forms of control. But Komoroske’s vision represents a genuinely different deployment: using these tools to give users direct control over their own data and to cryptographically limit what systems can do with that data, rather than giving platforms more power to lock things down. The key difference is who controls the policies. (And I’m genuinely curious to hear what Cory thinks of this approach!)

The vision Komoroske paints is compelling: imagine tools that feel like extensions of your will, private by default, adapting to your every need; software that works for you, not on you. A personal research assistant that understands your note-taking system. A financial tracker designed around your specific approach to budgeting. A task manager that reshapes itself around your changing work style.

To the extent that any of this was possible before, it required simply handing over all your data to a big tech firm. The possibility of being able to separate those things… is exciting.

This isn’t just about better apps. It’s about a fundamental shift in the power dynamics of the internet. Instead of being forced to choose between security and functionality, between privacy and convenience, we could have systems where those aren’t trade-offs at all.

The same origin paradigm got us here, creating the conditions for data monopolies and restricting user agency. But as Komoroske argues in both the piece he wrote for us and this new piece, we built these systems; we can build better ones. We might finally deliver on the internet’s promise of user empowerment rather than further concentration.

As we’ve argued at Techdirt for years, the internet works best when it empowers users rather than platforms. The same-origin paradigm was an understandable choice given the constraints of the 1990s. But we’re no longer bound by those constraints. The tools now exist to put users back in control of their data and their digital experiences.

We can move past the learned helplessness that has characterized the last decade of internet discourse. We can reject the false choice that says the only way to access powerful new technologies is to surrender our freedoms to tech giants. We can actually build toward a world where end users themselves have both the power and the control.

We just need to embrace that opportunity, rather than assuming that the way the internet has worked for the past 30 years is the way it has to run going forward.


How One 1990s Browser Decision Created Big Tech’s Data Monopolies (And How We Might Finally Fix It)


More Law-Related Stories From Techdirt:

The IRS Is Building A Vast System To Share Millions Of Taxpayers’ Data With ICE

DHS Abandons Fighting Actual Crime To Focus All Of Its Attention On Undocumented Migrants

UnitedHealth’s Response To People Cheering Their CEO’s Murder: Silence The Critics