
The Race To Keep Law Firms From Completely Screwing Up Generative AI With Disastrous Results

Every year I show up at ILTACON expecting to sit in on at least some of the overwhelming volume of educational sessions and learn something about legal tech from the decision makers driving adoption. And every year I don’t manage to see any sessions. Because, frankly, there’s too much happening outside the sessions to take an hour off to strap into some learning.

Every provider is there showing off the latest updates, the industry consolidators are busy consolidating, and there are too many candid implementation war stories getting swapped in the halls (and bars) not to join in. Sadly, this all keeps me away from the sessions. Obviously, as a reporter, my priorities aren’t the same as most attendees’, but even those most committed to the educational programming will be found on the last day lamenting that they missed some intriguing panel because they were torn in a million different directions. But that’s what you get when you build one of the pillars of the legal technology calendar.

But it’s also why ILTA Evolve is such a smart addition to the conference calendar. Taking just two hot topics a year and never scheduling more than two dueling sessions at a time, it’s an opportunity to slow down and actually listen to some sessions.

This year’s event tackled privacy/security and generative AI, so the obvious kickoff session is the one focused on the nexus of the two. In “Privacy v. Security: How GenAI Creates Challenges to Both,” Reanna Martinez, Solutions Manager at Munger, Tolles & Olson LLP, and Kenny Leckie, Senior Technology & Change Management Consultant at Traveling Coaches, walk through the looming GenAI adoption moment(s) that firms will navigate.

By way of laying the foundation, Martinez broke down the various AI tools that partners are absolutely just going to call “ChatGPT” no matter what. But for the more tech-savvy, the universe breaks down into consumer-facing free products like the aforementioned ChatGPT, the enterprise-level versions of those technologies, and the legal-specific offerings like CoCounsel or Lexis+ AI. It probably goes without saying, but the risk profile of each category runs from the deepest red of red flags (Leckie cited a conversation where he was told to think of public GenAI as the “opposite” of data security) through cautiously medium amounts of worry.

That lawyers are going to consistently disregard the line between “ChatGPT” and “our enterprise ChatGPT” is inevitable. The next few years are going to be pure hell for IT.

While the lawyers don’t necessarily need to know the whole process that tech staff will deploy to keep the firm from becoming a cautionary tale, it might help at a 30,000-foot level to develop an appreciation of what goes into bringing new tech under the firm’s roof.


The evaluation process involves assessing a product’s Data Privacy and Confidentiality, Security of Model Training and Deployment, Data Handling and Retention Policies, Vendor Security and Reliability, Risk of Bias and Fairness, and Legal and Ethical Considerations. This isn’t necessarily AI-specific (most products touch on these concerns), but this process is going on before the lawyers ever see this stuff.
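To make that checklist a little more concrete, here’s a minimal sketch of how a tech team might turn those criteria into a weighted vendor scorecard. The category names come straight from the session; the weights, scores, and the scoring function are hypothetical illustrations, not anything the panel prescribed.

```python
# Hypothetical scorecard built from the evaluation criteria above.
# Weights and example scores are illustrative assumptions, not session guidance.
WEIGHTS = {
    "data_privacy_and_confidentiality": 0.25,
    "security_of_model_training_and_deployment": 0.20,
    "data_handling_and_retention_policies": 0.20,
    "vendor_security_and_reliability": 0.15,
    "risk_of_bias_and_fairness": 0.10,
    "legal_and_ethical_considerations": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-category scores (0-5) into one weighted number."""
    return sum(WEIGHTS[category] * value for category, value in scores.items())

# Example: a legal-specific tool with strong privacy terms but a short
# vendor track record.
candidate = {
    "data_privacy_and_confidentiality": 4.5,
    "security_of_model_training_and_deployment": 4.0,
    "data_handling_and_retention_policies": 4.0,
    "vendor_security_and_reliability": 3.0,
    "risk_of_bias_and_fairness": 3.5,
    "legal_and_ethical_considerations": 4.0,
}
print(f"Weighted score: {weighted_score(candidate):.2f} / 5.00")
```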

Preparing the internal environment involves building all the permissions, firewalls, encryption, monitoring systems, audit trails, and crisis response strategies. This is where some lucky pilot program users figure out exactly how broken the product will be before it has a chance to ruin everything.
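For the curious, here’s a minimal sketch of one piece of that environment: a permission-gated, audit-logged prompt gateway. Everything in it is an assumption for illustration — the allow-list, the matter-number pattern, and the function names are hypothetical, not how any particular firm (or the panelists) actually does it.

```python
import logging
import re
from datetime import datetime, timezone

# Hypothetical prompt gateway: every GenAI request passes a permission
# check, has obvious client identifiers redacted, and leaves an audit trail.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai.audit")

APPROVED_PILOT_USERS = {"associate1", "partner2"}  # illustrative allow-list
MATTER_NUMBER = re.compile(r"\b\d{4}-\d{6}\b")     # assumed client-matter format

def call_enterprise_model(prompt: str) -> str:
    # Stand-in for whatever vetted enterprise tool the firm licensed.
    return "model response"

def submit_prompt(user: str, prompt: str) -> str:
    """Gate, redact, log, and forward a prompt to the vetted endpoint."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if user not in APPROVED_PILOT_USERS:
        audit_log.warning("%s DENIED user=%s", timestamp, user)
        raise PermissionError(f"{user} is not approved for the GenAI pilot")
    redacted = MATTER_NUMBER.sub("[REDACTED]", prompt)
    audit_log.info("%s ALLOWED user=%s chars=%d", timestamp, user, len(redacted))
    return call_enterprise_model(redacted)
```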

The next stage is where the rest of you come in: what Martinez dubbed “the wild card.” This is where they train users/plead with them not to bypass all that work and just dump the client’s personal data into ChatGPT. But it’s also where they have to convince lawyers to actually use the product before it becomes a fancy digital doorstop. Understanding the work that got the product to this point should give the rest of the firm a sense of how confident the experts already are in it by the time you’re sitting in training.

You are not a unique and special snowflake brought in on day 1 to opine about the product. You have joined the game on third base. Act like it.

The next subject in the model, the internal GPT model, involves firms building their own LLMs from scratch. The general takeaway from this was… don’t. Very few firms have the resources to do it competently, and if a firm doesn’t already know whether or not it has those resources, then it does not, in fact, have those resources. So don’t tell your tech staff, “Why don’t we just build our own AI? I mean, how hard can it be?”

Finally, after everything is up and running, the tech side remains vigilant to stop Data Poisoning, Model Theft and Intellectual Property Theft, Privacy Breaches, Deployment Risks, and Misuse of Generated Content.

So it is not a case of “let’s go buy some AI.” This is a detailed process and it’s deliberate because the risks are higher than Snoop on April 20th. Understand your place in the machine that gets the firm into the 21st century.


Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.
