
Like Lawyers In Pompeii: Is Legal Ignoring The AI Definitional Crisis? (Part V) – Above the Law


“Thinking of AI as only GenAI leads to the adoption of solutions that don’t work when there are practical, non-GenAI ways to solve real problems.”

Over the past several parts of this series, we have discussed the problems and risks confronting AI, its use by legal, and how those problems may lead to the eruption of the GenAI volcano. The truth is that GenAI has been overhyped and oversold. As a result, there is a real risk of overreliance on GenAI by those who don’t understand it or what it does, which could lead to disaster.

There’s yet another danger contributing to the potential eruption we haven’t addressed, one that is more fundamental than all the others: a definitional confusion that’s helping drive the overreliance we’ve been worried about.


The Definitional Danger

The legal community has gone from carefully distinguishing GenAI as a category of AI to using the term “AI” as a reference to GenAI itself. As in: only GenAI is AI, and anything else isn’t. In fact, AI is a much broader concept and refers to a whole category of tools with different uses, benefits, and value entirely apart from GenAI.

This confusion, made worse by vendor marketing, fuels overreliance on GenAI tools on the one hand, and underreliance on solid, accurate, well-performing non-GenAI tools on the other.

In fact, real AI experts understand conceptually what AI is, what it can do, and the differences and drawbacks of confusing GenAI with AI generally.


AI Expert Insights

One such expert is Baron Reichart Von Wolfshield, who goes by the single name Ki. Ki has worked extensively on AI since the late ’70s. By the ’90s he was building and designing AI programs for the US military, Disney, the architectural community and, yes, for law firms. He routinely consults with some of the world’s largest companies and law firms on AI and AI development. In addition, Ki has a unique way of designing AI programs to solve human problems that involves observation, logic, and simplicity, not smoke and mirrors.

Like all true experts, he has a way of explaining complicated concepts simply and understandably. I know from years of experience as a trial lawyer how rare that is.


Ki’s Insights

Ki makes it simple: AI should be thought of as something that appears to act like an intelligent thing. He uses a mechanical spring to make this point: “The simplest artificial intelligence in the world is a spring. You set it up, push it down, and it’ll push back against you. That’s the core of AI: it is something you can ask to do something later, and it will. That’s AI. It acts like a human.”

Thinking of AI in this kind of broad way illustrates the point that the key is finding the right tool to solve the problem, not adopting tools just because they happen to be in vogue. It’s what he calls the Procrustean effect, aka trying to fit a square peg in a round hole.

Ki is also quick to rightfully point out that this doesn’t mean you can use AI tools without understanding what they are doing, how they work, and without proof they will do what is claimed. That’s Ki’s beef with LLMs and GenAI: the hype doesn’t match reality, and most users don’t bother to get it.

That’s why he calls LLMs a “parlor trick”: “Everything with LLMs right at this moment is on par with and a child of autocomplete.”

He also believes the hallucination problems can’t be fixed: “The reason AI lies is the same reason a human lies, because AI is modeling the same neural system of a human. You can’t get an LLM to stop lying any more than you can stop a human from lying.” It’s just part of what LLMs are, and that’s not going to change.

Because of all this, he concludes that the current proven usefulness of LLMs is little more than that of a glorified search engine. So, thinking of AI as only GenAI leads to the adoption of solutions that don’t work when there are practical, non-GenAI ways to solve real problems.


Practical Non-GenAI Examples

Ki gave a couple of examples. He actually sat with a lawyer for a day and watched what he was doing. What he found was that the lawyer spent a lot of time trying to figure out where and how to file the attachments to the multitude of emails.

Sounds kind of trivial, but I know this guy’s pain. You’re trying to work quickly and make filing decisions among a multitude of files, and a mistake could be costly in terms of lost materials and information. To top it all off, you can’t enter time for looking for a file and be paid for it.

Ki figured out a simple, non-GenAI solution: create a bot that could automatically file the attachment and then tell you where it put it. Simple, but it saves lawyers and legal professionals a hell of a lot of time and stress. For all the hype of GenAI, it’s not a tool that can do that simple task. Says Ki, his bot “is AI but it’s not an LLM.”
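The article doesn’t describe how Ki’s bot actually works. As a rough, hypothetical sketch of the kind of deterministic, rule-based automation he’s describing (the folder-naming rule and file names here are made up for illustration), a minimal version could parse a saved email, file each attachment under a folder derived from the sender’s domain, and report back where everything went:

```python
# A minimal sketch of a non-GenAI "filing bot": fixed, auditable rules,
# no model, no hallucination. Assumes emails are saved as .eml files;
# the folder-naming convention is hypothetical.
import email
from email import policy
from pathlib import Path

def file_attachments(eml_path: str, root: str = "filed") -> list[str]:
    """File every attachment in the message into a folder named after the
    sender's email domain, and return the list of saved paths."""
    msg = email.message_from_bytes(Path(eml_path).read_bytes(),
                                   policy=policy.default)
    sender = msg.get("From", "unknown")
    # Deterministic rule: folder name comes from the sender's domain.
    domain = sender.split("@")[-1].strip(">").strip() if "@" in sender else "unknown"
    folder = Path(root) / domain
    folder.mkdir(parents=True, exist_ok=True)

    saved = []
    for part in msg.iter_attachments():
        name = part.get_filename()
        if not name:
            continue
        dest = folder / Path(name).name  # strip any embedded path components
        dest.write_bytes(part.get_payload(decode=True))
        saved.append(str(dest))
    return saved  # the "tell you where it put it" step
```

The point of the sketch is the design, not the code: every step is a fixed rule you can inspect and audit, so the output never needs to be verified the way GenAI output does.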

Another example: Ki noticed that a lot of time was spent on calendaring significant events like hearings, depositions, court dates, and the like. Having humans do that was, at best, clumsy and error prone, since it required a number of steps to get the item accurately on multiple calendars, let everyone know, and then set up a process to deal with it. He ultimately set up a complete project management system that did all this and more. By recognizing patterns over multiple cases, it could even help predict what might happen and what an opponent might be doing.
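Again, the article doesn’t detail Ki’s system, but the core of why calendaring suits deterministic automation can be sketched in a few lines: a lookup table of rules that expands one docketed event into every derived deadline. All event types and day counts below are invented for illustration, not real jurisdictional deadlines:

```python
# A minimal sketch of rule-based calendaring: the same input always
# produces the same calendar. The rules and day counts are hypothetical.
from datetime import date, timedelta

# Hypothetical rule set: each event type expands into follow-up tasks
# due a fixed number of days after the event.
RULES = {
    "deposition": [("order transcript", 7), ("serve errata deadline", 30)],
    "hearing": [("file proposed order", 14)],
    "complaint_served": [("answer due", 21)],
}

def calendar_event(event_type: str, event_date: date) -> list[tuple[str, date]]:
    """Expand one docketed event into every derived deadline."""
    return [(task, event_date + timedelta(days=days))
            for task, days in RULES.get(event_type, [])]
```

Because the rules are explicit data rather than model output, adding a jurisdiction’s real deadlines is an edit to a table, and every generated date can be traced back to the rule that produced it.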

The important thing is that, in both situations, he first learned what lawyers and legal professionals really need to do their everyday jobs and what they care about. Then he developed simple, usable AI solutions.

This is not GenAI but is AI that works, doesn’t hallucinate, doesn’t make errors, and doesn’t need to be verified.


Implications for Legal

Of course, as we have discussed before, the hallucination problem has enormous implications for legal. In many areas of legal, inaccuracies and hallucinations can’t be tolerated. “That danger is missed,” says Ki, “by those who don’t understand the tool.”

“But there is a second, and perhaps more serious, risk here,” says Ki. And that is that by lumping all AI into the GenAI bucket, more valid and error-free AI and automation tools will be ignored. Tools that can make life simpler and better for lawyers. Tools that solve what Ki refers to as “boring” problems. Problems that are stress points for every attorney.

Instead of focusing on these solutions, GenAI providers often try to coat every solution in a GenAI wrapper without considering the real problem and a simple solution that works. By doing so, they suggest to legal customers that all AI is GenAI and only GenAI can solve their legal problems. The result is that customers often get something that’s expensive, doesn’t solve their real problem, and doesn’t work as they thought. At the end of the day, they discard the tools altogether.

There are in fact things that non-GenAI does quite well and quite accurately if you understand what it is doing and analyze the problem correctly on the front end. Often these problems result in work that lawyers are not trained for but have to do anyway. Ki wants to stamp all these out, leaving lawyers and legal professionals to do what they are good at.

By thinking that AI is GenAI only, the boring, repetitive tasks that Ki tackles would be left undone, perpetuating inefficiencies that could be eliminated, while adopting GenAI systems that instead create greater inefficiencies.


The Overreliance Problem

There’s also the danger that lawyers and legal professionals will come to believe all the GenAI hype and just rely on it. It’s the “if a GenAI tool says it, it must be true” syndrome. Here’s an example of how that could work. Admittedly, if you create the right prompt, a GenAI tool can give you a list of questions to ask in a deposition or even assist you in the deposition itself to spot inconsistencies or correct bad questions.

But the temptation for a busy lawyer, particularly a less experienced one, is to just take that list and doggedly ask every question on it. We have all seen lawyers who make that kind of list on their own and do just that. They end up asking questions that clearly are no longer relevant based on what the witness previously said. They miss nuance and body language that may lead to unexpected and unplanned questions that sometimes can break open a case. They fail to follow up.

I once took the deposition of a class rep. I made a list of questions in advance to ask. At one point in the deposition, I happened to ask what I thought was a throwaway question: what claims the witness had made or had been made against him. There was something in the way he looked when he answered. A certain hesitancy that made me dig in on what seemed to be a meaningless line of inquiry. Come to find out, he had filed bankruptcy a few months before. That fact ended the case. Blind adherence to a GenAI deposition list of questions would never have led me to that question.


Lessons For Law Firms

All of this poses particular problems for lawyers, legal professionals, and law firms. They aren’t Ki, and most don’t have a Ki working for them.

But there are some practical steps firms can take and some lessons for dealing with AI and GenAI. First and foremost, firms need to realize that there is a difference between AI and GenAI and that there are solutions to problems that don’t involve GenAI at all.

Firms should also understand that there are issues with GenAI that haven’t yet been solved. Issues with respect to things like accuracy and the costs of verification, the infrastructure, and the robustness of the investment and capital.

So before purchasing GenAI products out of FOMO or overrelying on their outputs, ask the hard questions. Identify the actual pain points you want to eliminate and then determine whether the tools can really solve your problem or whether simpler, non-GenAI tools would do a better job.

And for God’s sake, don’t just accept what vendors or others are telling you. Remember that, for a variety of reasons we have discussed, the GenAI volcano may be about to erupt as better and more accurate AI solutions surface and the hype is replaced by reality.

Next time we will look at how a non-GenAI solution may, in fact, even solve some of GenAI’s real problems.


Read our entire “Pompeii” Series:



Like Lawyers In Pompeii: Is Legal Ignoring The Coming AI Infrastructure Crisis? (Part I)



Like Lawyers In Pompeii: Is Legal Ignoring The Coming AI Cost Crisis? (Part II)



Like Lawyers In Pompeii: Is Legal Ignoring The Coming AI Trust Crisis? (Part III)



Like Lawyers In Pompeii: Is Legal Ignoring The Coming AI Financial Crisis? (Part IV)




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.



Melissa “Rogo” Rogozinski is an operations-driven executive with more than three decades of experience scaling high-growth legal-tech startups and B2B organizations. A trusted partner to CEOs and founders, Rogo aligns systems, product, marketing, sales, and client success into a unified, performance-focused engine that accelerates organizational maturity. Connect with Rogo on LinkedIn.