
When it comes to artificial intelligence, I have heard more than one lawyer say, “I don’t need to know how it works, I just need to know if it is legal.” That is like a pilot saying, “I don’t need to know how the engines work, I just need to know if we can take off.” You might get airborne, but I would not book a ticket.
AI products do not exist in a vacuum. They are the product of countless technical decisions, each with potential legal consequences. For in-house counsel, understanding the mechanics of AI is no longer optional. It is the foundation for giving advice that actually works in the real world.
A New Skillset For A New Era
The old model, where engineers build and lawyers approve, is breaking down. AI systems are not static products. They learn, adapt, and make decisions in ways that blur the line between design and deployment. Reviewing them only at the end of development is too late to catch many of the most serious risks.
Today’s in-house product counsel needs a dual fluency. You must be able to grasp how an AI model operates while mapping those details onto rapidly evolving legal frameworks. This combination allows you to enter product discussions not only as a risk manager but as a partner in shaping design choices.
Understanding The Technical Side
You do not need to be an engineer, but you should be able to follow a conversation about training datasets, model architecture, and performance testing. This means engaging with your product teams early and asking for explanations that are clear and concise. Understanding whether a model is generative or predictive, how it was trained, and how it will be tested for fairness and accuracy will tell you far more about potential legal exposure than a product launch deck ever could.
Seeing The Legal Risks Early
We have already seen examples of what happens when legal and technical teams work in isolation. An AI hiring tool that learned to prefer one gender over another. An art generator trained on copyrighted images without permission. These were not inevitable outcomes. They were the result of missed opportunities to ask the right questions before the product was locked in.
When counsel understands the technical architecture, potential problems can be spotted while they are still inexpensive and feasible to fix. By the time the product is live, those same issues can be costly, public, and far more difficult to resolve.
The Cost Of Staying In One Lane
If you stay solely in the legal lane, you may miss the subtle ways an AI’s design can introduce bias, create explainability gaps, or run afoul of privacy laws. If you focus only on the technical side, you might underestimate how a single compliance failure can escalate into a regulatory investigation or a reputational crisis. Either approach leaves important risks unaddressed and potential value untapped.
Building Your AI Fluency
For in-house counsel, building fluency starts with curiosity. Attend engineering demos, sit in on technical reviews, and ask your product teams to walk you through how their systems make decisions. Keep track of developments in AI regulation, not only in your home jurisdiction but in every market where your product might operate. Create ways to translate legal requirements into technical design choices and vice versa, so both teams are speaking the same language.
This is not about becoming a programmer. It is about understanding enough to connect the dots between technical realities and legal outcomes.
The Payoff
When in-house counsel can speak both AI and law, they move from being the final checkpoint before launch to being a trusted partner in innovation. They help design products that are more compliant, more transparent, and more resilient to both market and regulatory pressure.
In an AI-driven world, translation between code and case law is not a peripheral skill. It is a core leadership capability that separates the teams who simply launch products from those who launch products built to last.
Olga V. Mack is the CEO of TermScout, an AI-powered contract certification platform that accelerates revenue and eliminates friction by certifying contracts as fair, balanced, and market-ready. A serial CEO and legal tech executive, she previously led a company through a successful acquisition by LexisNexis. Olga is also a Fellow at CodeX, The Stanford Center for Legal Informatics, and the Generative AI Editor at law.MIT. She is a visionary executive reshaping how we law—how legal systems are built, experienced, and trusted.
Olga teaches at Berkeley Law, lectures widely, and advises companies of all sizes, as well as boards and institutions. An award-winning general counsel turned builder, she also leads early-stage ventures including Virtual Gabby (Better Parenting Plan), Product Law Hub, ESI Flow, and Notes to My (Legal) Self, each rethinking the practice and business of law through technology, data, and human-centered design. She has authored The Rise of Product Lawyers, Legal Operations in the Age of AI and Data, Blockchain Value, and Get on Board, with Visual IQ for Lawyers (ABA) forthcoming.
Olga is a 6x TEDx speaker and has been recognized as a Silicon Valley Woman of Influence and an ABA Woman in Legal Tech. Her work reimagines people’s relationship with law—making it more accessible, inclusive, data-driven, and aligned with how the world actually works. She is also the host of the Notes to My (Legal) Self podcast (streaming on Spotify, Apple Podcasts, and YouTube), and her insights regularly appear in Forbes, Bloomberg Law, Newsweek, VentureBeat, ACC Docket, and Above the Law. She earned her B.A. and J.D. from UC Berkeley. Follow her on LinkedIn and X @olgavmack.
