
Most in-house lawyers talk about AI as if it is a future event that will arrive with a contract, a vendor, and a clean implementation plan. The truth is far less organized. AI is already inside your company. It arrived through your employees' browsers, their phones, their inbox extensions, their creativity, their exhaustion, and their desire to get more done in a day than the system allows. You are governing it whether you mean to or not. The only question is whether you understand what has already begun.

I recorded a "Notes to My (Legal) Self" conversation with Heath Morgan, an in-house attorney who spends his days thinking about AI governance and his nights writing speculative fiction. His book, "The Memory Project," explores a world in which digital personas from the past and future become part of daily life.

As we talked, it became clear that his fictional world is less of a leap and more of a mirror. Companies are already building their own "memory projects" without realizing it. Not curated. Not intentional. Just accumulating. Every prompt, every tool, every autopilot, every quiet workflow decision is creating a parallel record of your business.

Heath said something that stuck with me: "The question is not whether your employees are using AI. It is whether they are using it intentionally or unintentionally." That distinction is the heart of the problem for in-house counsel, because unintentional adoption is where risk concentrates. It is also where culture forms.
The New Latchkey Generation Is Already In Your Org Chart

Heath draws a comparison to what he calls the "social media latchkey kid generation." For 20 years, we gave an entire population powerful technology without meaningful guidance. We are living with the consequences.

In the workplace, something similar is happening with AI. Tools are being marketed directly to employees. They promise convenience. They promise saved hours. They rarely mention downstream risk. By the time legal is ready to publish its polished AI policy, the workforce is already three steps ahead, adopting tools informally.

This is how every major technology wave has entered the enterprise. BYOD. Cloud storage. Enterprise messaging. Shadow IT. AI is simply faster and more embedded than anything before it.

Heath's point is that the legal team's assumptions are already outdated. You cannot govern AI as if the organization started from zero. You have to govern the reality you inherited. That means mapping actual behavior instead of theoretical workflows.
Corporate Memory Is Being Built Bot By Bot

One of the most interesting ideas in Heath's book is "conversational time travel." He imagines a world where people talk to digital versions of themselves constructed from data and past interactions. While it sounds like science fiction, the corporate version is happening right now.

Every AI tool used across your company is learning your patterns, documents, tone, internal preferences, and workflows. Even if you never approved it.

If you do nothing, that becomes your corporate memory. Not the official retention schedule. Not the carefully governed document library. The machine memory is built from prompt histories, scraped emails, and user behavior. And once that memory exists in external systems, you cannot meaningfully retrieve it.

Most organizations are not prepared for that. It affects IP strategy. It affects confidentiality. It affects investigations and discovery. It affects employment. It affects vendor risk. And it affects culture, because an organization eventually becomes what it repeats.
The Ethical Frame: Legacy Is Being Written Without Consent

When Heath talks about legacy, he means something broader than posterity. He means the record of who we are that persists in data and models long after the moment passes. The same applies to organizations. Every decision to use or ignore AI tools becomes part of a legacy of accountability.

Ignoring unintentional adoption does not protect the organization. It cedes control. It also weakens your moral authority to govern intentional adoption later. If your teams have spent two years improvising with AI, they will not welcome restrictions that arrive late and without context. Governance fails when it does not reflect reality.

Heath puts it simply: "If we do not engage now, we are outsourcing our legacy to whoever builds these tools." For in-house counsel, that should feel familiar. It is the same lesson the profession learned with SaaS, cloud, and messaging platforms. Technology expands faster than policy. Culture stabilizes before legal notices. And by the time legal catches up, the risk surface is already shaped.
The Practical Question: How Should In-House Counsel Respond Now?

The first step is acknowledging that unintentional adoption is already happening. This is not a failing. It is a signal. Employees are trying to solve real workflow pain that the business has not solved for them. That makes AI governance a partnership project, not an audit.

The second step is to map reality. Not a theoretical inventory. A real one. Which teams are using AI? Which tools? For what purposes? With what data? If you run a risk program, treat it like a shadow supply chain mapping exercise. You cannot govern what you cannot see, and you cannot see what you do not ask.

Once you know what is actually happening, you can design something livable. Lightweight approvals. Clear no-go zones. A set of recommended tools that do not expose the company to unnecessary risk. A permissions framework that reflects the actual risk of the underlying work. And a governance model that prioritizes what matters rather than policing every experiment.

Lastly, you have to think several steps ahead, because the real risk is not the tools your employees are using today. That is the short-term headache. The long-term risk is the implicit "corporate memory" being built from those choices. You will inherit it if you do nothing. You can shape it if you intervene.
Why This Matters Right Now

Heath's fictional world includes a moment he calls the "gray data breach," a catastrophic exposure of intimate personal data that forces society to split into two markets: privacy by default and privacy as a luxury. It is fiction. It is also plausible. And in the corporate context, we are already watching that divide form. Some companies treat privacy as a fundamental value. Others treat it as a premium feature. Employees are making similar choices, sometimes unconsciously, every time they choose a tool.

Fiction is useful because it lets us ask future questions early. For in-house counsel, the core question is this: What version of your company do you want the future to inherit?

If you do nothing, the answer will be accidental. You will inherit a patchwork of AI tools, fragmented data trails, inconsistent decision logic, and models trained on content you never reviewed.

If you engage, you can make your organization intentional. You can define what is protected, what is shared, what is stored, and what is deleted. You can set the tone for identity, governance, and culture long before regulators decide what the floor looks like.

This is the work of in-house counsel in the age of AI. Identify what is already happening. Shape behavior. Protect the organization. And build a legacy that the future will not regret inheriting.
Olga V. Mack is the CEO of TermScout, where she builds legal systems that make contracts faster to understand, easier to operate, and more trustworthy in real business conditions. Her work focuses on how legal rules allocate power, manage risk, and shape decisions under uncertainty. A serial CEO and former General Counsel, Olga previously led a legal technology company through acquisition by LexisNexis. She teaches at Berkeley Law and is a Fellow at CodeX, the Stanford Center for Legal Informatics. She has authored several books on legal innovation and technology, delivered six TEDx talks, and her insights regularly appear in Forbes, Bloomberg Law, VentureBeat, TechCrunch, and Above the Law. Her work treats law as essential infrastructure, designed for how organizations actually operate.
