
AI Governance Starts With Contract Governance, Says Christine Uri – Above the Law

Generative AI might be the shiny new object in legal departments right now, but Christine Uri wants in-house counsel to slow down. Not because she’s skeptical of the technology, but because she’s seen what happens when legal teams rush to implement tools before they’ve addressed the most foundational risk: governance.

And governance doesn’t start with AI. It starts with contracts.

In a recent episode of “Notes to My (Legal) Self,” Christine, former chief legal and sustainability officer at ENGIE Impact and a leading voice in ESG and AI strategy, laid out a clear message for legal leaders: AI will change everything, but only if Legal is prepared to guide that change responsibly.

Watch the episode here:


Contracts Are The First AI Risk You Already Signed For

Uri made one point that should hit close to home for in-house teams: AI is already in your systems. It is embedded in software tools, layered into services, and working behind the scenes in platforms your business uses every day. Most of that came through the door via contracts: vendor agreements, procurement deals, license terms, and partner relationships.

If you don’t know what you’ve agreed to, you can’t govern it.

“You have to watch the regulations,” Uri said. “There’s a lot of regulatory uncertainty. But you also have to understand your own internal use. What AI is already in your company? What have you already exposed yourself to?”

That’s not just a procurement problem. It’s a legal risk. Terms around data use, liability, privacy, and intellectual property are often buried deep in vendor contracts. The question isn’t whether you’re exposed. It’s whether you know where the exposure lives.


Good Governance Means Knowing What’s In Your Stack

Uri pointed out that the EU AI Act and other emerging regulations are pushing companies to be more transparent about how AI is used internally. That starts with mapping out your AI footprint. But mapping only works if you can see the landscape.

Many companies can’t.

“If you’ve said nothing about AI to your employees, they are already uploading your confidential information into ChatGPT,” Uri warned. “They’re not trying to cause harm. They just don’t know it’s dangerous.”

This lack of visibility is a contracting issue at its core. Who owns the data? Who can audit the outputs? Who is liable if the model makes a bad decision?

Legal teams need the ability to answer those questions without spending weeks parsing redlines. If your contracts aren’t searchable, comparable, and structured, you’re not ready for AI.


Risk Isn’t New. Business Models Are.

One of Uri’s most insightful observations came when she reframed AI risk not as a legal novelty, but as a business shift.

“The legal issues aren’t new,” she said. “What’s new are the business models, the liability theories, the risk patterns. We’ve seen privacy and IP disputes before. But now we have use cases we haven’t seen before.”

In other words, the playbook needs an update. And that update starts with contracting. AI-related risks are rarely addressed in standalone policies. They live in clauses, side letters, vendor terms, SLAs, and partnership agreements. Knowing what your company has agreed to is no longer optional. It is governance.


ESG And AI: A Common Language Of Responsibility

Uri, who has long worked at the intersection of ESG and legal strategy, sees AI governance as a natural evolution of the same principles: transparency, responsibility, and resilience.

She draws clear lines between environmental risk and digital risk. Both require long-term thinking. Both demand clear oversight. And both must be translated into actual business behavior, not just high-level policy.

That’s where contracts come in. They’re the most concrete way to operationalize governance. Whether it’s carbon reporting or data security, the commitments a company makes on paper define how seriously it takes its responsibilities.

“Governance is how we bring those risks under control and create human-centric systems,” Uri said. “It’s not just compliance. It’s about doing the right thing.”


Want To Govern AI Responsibly? Start With Your Contracts

Uri offered a practical checklist for general counsel:

  1. Join or establish an AI oversight council that includes legal, IT, product, and business stakeholders.
  2. Build rules of the road: internal policies that clarify what is allowed, what is not, and how employees should use AI safely.
  3. Audit existing contracts to identify how AI-related terms are handled today and where gaps exist.
  4. Train your workforce. Not just lawyers, but everyone who might interact with or be affected by AI tools.

That third step is where many legal teams falter. It is also where contract governance shows its true value.

If legal wants a seat at the table in AI strategy, and it should, the price of that seat is knowing what’s already in the drawer. Contracts reflect the reality of how AI is used, what’s been promised, and where the risks are.

Without that understanding, AI governance is just theory.

Watch the full interview with Christine Uri here.

Olga V. Mack is the CEO of TermScout, an AI-powered contract certification platform that accelerates revenue and eliminates friction by certifying contracts as fair, balanced, and market-ready. A serial CEO and legal tech executive, she previously led a company through a successful acquisition by LexisNexis. Olga is also a Fellow at CodeX, The Stanford Center for Legal Informatics, and the Generative AI Editor at law.MIT. She is a visionary executive reshaping how we law: how legal systems are built, experienced, and trusted. Olga teaches at Berkeley Law, lectures widely, and advises companies of all sizes, as well as boards and institutions. An award-winning general counsel turned builder, she also leads early-stage ventures including Virtual Gabby (Better Parenting Plan), Product Law Hub, ESI Flow, and Notes to My (Legal) Self, each rethinking the practice and business of law through technology, data, and human-centered design. She has authored The Rise of Product Lawyers, Legal Operations in the Age of AI and Data, Blockchain Value, and Get on Board, with Visual IQ for Lawyers (ABA) forthcoming. Olga is a 6x TEDx speaker and has been recognized as a Silicon Valley Woman of Influence and an ABA Woman in Legal Tech. Her work reimagines people’s relationship with law, making it more accessible, inclusive, data-driven, and aligned with how the world actually works. She is also the host of the Notes to My (Legal) Self podcast (streaming on Spotify, Apple Podcasts, and YouTube), and her insights regularly appear in Forbes, Bloomberg Law, Newsweek, VentureBeat, ACC Docket, and Above the Law. She earned her B.A. and J.D. from UC Berkeley. Follow her on LinkedIn and X @olgavmack.