
Most lawyers think the hard part of AI is the technology. It isn’t. The hard part is that the law is moving at a fraction of its speed. If you are in-house, you are already feeling the pressure. Your business wants to deploy a new AI capability, buyers are asking for commitments you’ve never seen before, and your executives want a straight answer about risk in a landscape where even regulators seem unsure.

In my conversation with John Pavolotsky, technology transactions attorney and co-head of the AI practice at Stoel Rives, he put it plainly: “You draft to the lay of the land right now, and to where things might go in the next six to twelve months.” For in-house teams, that window is already uncomfortably small.

This is the moment when legal teams either adapt or fall behind the speed of their own companies. Understanding this tension is the first step. Acting on it is the second.

The Regulatory Terrain Is Shifting Under Your Feet

John described the current patchwork of AI regulation as a moving target. California alone has dozens of bills labeled “AI-related.” The EU AI Act sorts systems into risk tiers whose effects many U.S. companies will feel, even if they are not directly subject to the Act.

For in-house teams, the problem isn’t tracking every bill. The problem is staying aligned with the small subset that actually intersects your business. That requires more than scanning headlines. It requires ongoing conversations inside the company about how the technology is designed, deployed, updated, and used.

John’s point here is useful: the states remain laboratories of governance, and they will continue experimenting ahead of federal frameworks. In-house lawyers should assume that a “stable” AI regulatory landscape is years away. The job is not to predict the outcome but to build contracting strategies that survive the volatility.

High-Risk Use Cases Are Already Defined. The Market Is Paying Attention.

One practical insight John shared is that the definition of “high-risk” is not as mysterious as people assume. The EU AI Act and the Colorado AI Act list the high-risk domains clearly: education, housing, financial services, government services, and any domain with a meaningful impact on a person’s livelihood.

Most in-house counsel already know whether their company’s products or internal use cases touch those areas. The gap is often operational, not conceptual.

Has the organization mapped its AI use cases? Do product managers know how the company defines “high-risk”? Are procurement workflows flagging these systems before a contract hits legal?

If the answer is no, the issue is not regulatory uncertainty. The issue is internal clarity. This is where legal can lead.

AI Is Software, But Contracting for AI Is Not SaaS 2.0

John made a point that sounds simple but has massive implications: AI is still software. Yet once AI becomes more agentic, “the entire risk model shifts.” If systems begin taking actions on a user’s behalf, making decisions without human sign-off, or interacting with other systems autonomously, the SaaS analogy breaks down.

In SaaS, we negotiate availability, uptime, data rights, SLAs, disaster recovery, and audits. With agentic systems, we shift toward questions about delegation, autonomy boundaries, and failure modes: What happens when the system does something unanticipated? What is the chain of accountability when a system acts on incomplete or misleading data? How do you evaluate risk when the system’s internal reasoning is not deterministic?

This is not theoretical. John gave the example of a future AI travel concierge. You tell it to plan your hiking trip in the Bavarian Alps. It books your flights, pays for your lodging, coordinates guides, and executes decisions across multiple vendors. Today, that would be a cute demo. In a few years, it may be real.

And once AI tools begin transacting, negotiating, and executing autonomously, contract clauses built for SaaS workflows will collapse under their own assumptions. In-house counsel should anticipate this shift, not react to it.

Experimentation Is Now A Professional Obligation

One of John’s most valuable pieces of advice is simple: legal teams can’t meaningfully advise on AI unless they are using it. He encourages lawyers to pick a couple of tools and get comfortable with them.

Feed them real prompts. Ask them to draft clauses. Pressure-test the outputs. Learn where the seams are. Learn where they hallucinate, misinterpret, or oversimplify. Learn where they shine.

This is not about becoming a prompt engineer. It is about understanding the mechanics of the tools shaping modern contracting. If the business is experimenting and legal is not, legal will not be ready when the real risk decisions show up.

Experimentation also forces clarity. It helps you define what “good enough” looks like for your organization. As John noted, humans still struggle to agree on shared language, and AI will inherit those struggles. Using the tools gives you a stronger foundation to establish drafting standards, review checklists, and guidance your teams can rely on.

The In-House Advantage: You Sit Closest To The Technology

John spent years at Intel and Roku before returning to private practice, and he emphasized something in-house counsel underestimate: proximity to the business is the superpower.

You see product roadmaps before outside counsel does. You see design discussions. You see experimentation. You see failures. That visibility is the raw material needed to draft contracts that reflect how the technology actually behaves, not how a product sheet describes it.

AI risk will always look different inside the company than from the outside. Your engineers know where the model is brittle. Your product teams know what happens in edge cases. Your security team knows the real data flows.

If legal isn’t in those conversations, your contracts will over-index on theoretical risk and under-index on the risks your company is actually exposed to. This is the moment to lean in.

Focus Your AI Contracting Strategy On Your Actual Sandbox

John ended with a point that deserves more attention: trying to track every bill, proposal, and headline is a waste of time. Your job is to understand your slice of the world and tailor your contracting playbook to it.

That starts with mapping: What AI are we building? What AI are we buying? What AI are we embedding in third-party platforms? Where are the autonomy boundaries? Where does data go? What decisions are being delegated?

Once you know this, you can structure contracts around the real risks, not speculative patterns.

The temptation right now is to boil the ocean. Resist it. Build targeted frameworks. Train your team on those frameworks. Revisit them quarterly. Align them with product reality, not headlines.

This is how you build a contracting function that stays ahead of regulatory changes without chasing every draft bill.

The Only Sustainable Strategy Is Continuous Dialogue

When I asked John for one takeaway, he said: “Have more conversations.” He’s right. None of us will get this right in isolation.

The technology is evolving quickly, and expertise will come from talking with each other, testing ideas, comparing notes, and refining our approaches over time.

In-house counsel do not need perfect foresight. They need adaptable frameworks, grounded risk assessment, and a willingness to revise their approach as the landscape shifts.

The companies that thrive will be the ones whose legal teams stay engaged, curious, and close to the technology, not the ones waiting for regulators to hand them the answers.

AI contracting is moving fast. Your organization needs you to move with it.

Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology and delivered six TEDx talks; her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
