From Boilerplate To Architecture: How AI Broke The Monolithic IP Clause – Above the Law

For a long time, IP risk lived in one place.

One clause. One indemnity. One catch-all promise that everything would be fine if it wasn’t.

That approach worked reasonably well when software had clear authorship, clear inputs, and outputs that behaved as lawyers expected. AI ended that illusion. And 2025 was the year the market finally stopped pretending otherwise.

A wave of litigation didn’t resolve all the hard questions around AI and intellectual property. What it did do was force contract drafters to confront something they had been papering over for years: IP risk in AI systems isn’t singular. It’s layered. And it doesn’t fit inside a single indemnity anymore.


What The Litigation Actually Exposed

The cases themselves varied. The takeaway didn’t.

Training data became impossible to ignore. Derivative works stopped being a theoretical debate and started showing up in pleadings. Output ownership, attribution, and labeling all surfaced as real points of contention rather than academic hypotheticals.

None of this was entirely new. What changed was that courts and counterparties alike began asking the same uncomfortable question: what exactly is this indemnity supposed to cover?

The honest answer, increasingly, was “not all of this.”


Why The Traditional IP Indemnity Stopped Working

The classic IP indemnity assumed a few things that AI quietly breaks.

It assumed that infringement flows from a discrete act. It assumed inputs and outputs are cleanly separable. It assumed authorship is identifiable. And it assumed risk can be transferred wholesale from customer to vendor.

AI systems collapse those assumptions. Training happens continuously. Outputs are probabilistic. Models evolve. Risk emerges from combinations of data, architecture, and use context rather than a single act of copying.

Trying to force that reality into a single clause doesn’t simplify things. It obscures them.

By 2025, contracts started reflecting that reality. Not because lawyers suddenly became more creative, but because pretending otherwise became too risky.


The Shift From Boilerplate To Rights Architecture

What replaced the monolithic IP clause wasn’t chaos. It was structure.

Instead of one sweeping indemnity, contracts began separating rights and obligations into components that roughly track how AI systems actually work.

Input rights started to stand on their own. Training rights became explicit rather than implied. Output rights were carved out and qualified. Labeling and attribution obligations appeared where they hadn’t before.

This wasn’t about adding pages for the sake of complexity. It was about admitting that different parts of the AI lifecycle create different kinds of IP exposure.

IP didn’t get more complicated. It got more honest.


Why IP Risk Is Now Itemized, Not Abstract

The practical effect of this shift is that IP risk stopped being a vague background concern and became something parties negotiate line by line.

That’s why indemnities feel narrower even when contracts are longer. Risk hasn’t disappeared. It’s been disaggregated.

Training data risk might be excluded but addressed through representations and disclosures. Output risk might be capped or shared. Derivative works might trigger obligations that look more like governance than remediation.

For lawyers, this means the “real” IP risk often lives outside the indemnity section. It’s embedded in definitions, use restrictions, audit rights, and documentation requirements.

If you’re only reading the indemnity, you’re missing the architecture.


What This Means For Practitioners Right Now

This shift explains why IP negotiations around AI feel harder than they used to.

Clients expect the same comfort they got from legacy software deals. Vendors resist promises they can’t realistically keep. Everyone senses the risk, but it no longer has a single home.

The danger is treating this like a drafting problem instead of a structural one. Swapping language without understanding how the pieces fit together can create gaps that only show up when something goes wrong.

The more useful question isn’t “is the indemnity broad enough?” It’s “where is this risk actually being carried?”


Looking Ahead: There’s No Going Back To One Clause

There’s no path back to the single, catch-all IP indemnity for AI systems. The market has crossed that line.

What comes next isn’t uniformity. It’s modularity. Contracts will continue to experiment with different ways of allocating input, training, and output risk depending on use case, industry, and tolerance for uncertainty.

The work now is aligning legal structure with technical reality. That’s slower than boilerplate. It’s also more defensible.

These patterns show up repeatedly across 2025 commercial agreements and are explored in more detail in a recent Contract Trust Report examining how AI is reshaping IP risk in contracts.

In 2025, IP risk stopped being theoretical and started being drafted. The era of pretending otherwise is over.




Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty.



A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics.



She has authored several books on legal innovation and technology, delivered six TEDx talks, and her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.