So What Is Actually Working With GenAI? – Above the Law

(Photo by Jason Kempin/FilmMagic)



Ahoy matey, any seadog worth their salt knows that Force Majeure is an unforeseeable event like the wind whippin’ up and rippin’ yer sails apart. Shiver me timbers, ye want to be sure ye have a clause in yer contracts to protect yerself. If not, ye may end up in the brig!

We hear the hype and fear around GenAI, and, by now, most people know the novelty of explaining something in a pirate’s voice. Most lawyers are also aware of common GenAI pitfalls:

  • Don’t put confidential firm or client information into ChatGPT
  • Beware of hallucinations
  • There are no links to authoritative sources for verification

Last month, I covered how GenAI Is Not Going To Replace Lawyers But It Will Change How They Work. So what is actually working with GenAI? What is it good for?

Before answering that question, let me address a common pitfall.


Confidential Information: Solved

Organizations can license a private instance of ChatGPT in Microsoft’s Azure cloud service. Doing this allows the application of settings to protect confidential information. Other GenAI vendors offer similar capabilities.

Here are a few examples of what is working with GenAI that address the other two pitfalls.


Large Language Models (LLMs) Are Good At Language

That pirate novelty? It illustrates how good GenAI, especially ChatGPT, is at language.

The ability to express a complex legal concept in the language of a 5-year-old, or to explain a legal issue to an executive: ChatGPT can do this.

In your secure instance, prompting ChatGPT to reword a document to communicate better is an example of how lawyers can use ChatGPT. Let’s say you want to explain the key legal risks of a contract you’ve drafted to a client. ChatGPT might be a great start. You are familiar with the contract, and you can compare ChatGPT’s output against the authoritative document (i.e., the contract) to ensure accuracy and no hallucinations.
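That comparison step can even be partially mechanized. Below is a minimal sketch of one such spot check: it flags quoted snippets in an AI summary that do not appear verbatim in the source contract. The `unsupported_quotes` helper and the sample contract text are invented for illustration; this is no substitute for a lawyer reading both documents.

```python
import re

def unsupported_quotes(source_text: str, ai_output: str) -> list[str]:
    """Return quoted snippets in the AI output that do not appear
    verbatim in the authoritative source document."""
    quotes = re.findall(r'"([^"]+)"', ai_output)
    return [q for q in quotes if q.lower() not in source_text.lower()]

contract = "The Supplier shall deliver the goods within 30 days of the order date."
summary = 'The contract requires delivery "within 30 days" and a "penalty of $500" for delay.'

# The $500 penalty never appears in the contract, so it gets flagged.
print(unsupported_quotes(contract, summary))
```

A check like this only catches direct quotes, but it illustrates the principle: the authoritative document, not the model, is the source of truth.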


Summarization, Manipulation, And Extraction

Because LLMs are good at language, they can be good at summarizing information from your trusted documents. The key here is to prompt GenAI to summarize. Do you want an executive summary? Key legal issues outlined in bullet format? A table of key facts and figures? The party names and terms extracted from the contract? GenAI can do those tasks well.
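The point about prompting for a specific output format can be made concrete. This sketch builds summarization prompts keyed to the formats mentioned above; the template names and wording are invented, not taken from any particular product.

```python
# Illustrative prompt templates, one per desired summary format.
TEMPLATES = {
    "executive": "Write a one-paragraph executive summary of the document below.",
    "bullets": "List the key legal issues in the document below as bullet points.",
    "table": "Produce a table of the key facts and figures in the document below.",
    "parties": "Extract the party names and key terms from the contract below.",
}

def build_summary_prompt(style: str, document: str) -> str:
    """Combine an explicit format instruction with the document text."""
    instruction = TEMPLATES[style]
    return f"{instruction}\n\n---\n{document}\n---"

prompt = build_summary_prompt("bullets", "This Agreement is made between Acme Corp and Beta LLC ...")
print(prompt.splitlines()[0])
```

The prompt string would then be sent to the model; stating the format up front is what steers the output toward an executive summary versus a bullet list versus a table.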

Manipulating a document or extracting data will require more scrutiny from a human. Non-GenAI solutions in the market use other techniques to analyze and extract data, so GenAI isn’t the only approach to the extraction problem.
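One such non-GenAI technique is plain pattern matching. The sketch below uses a regular expression to pull defined party names of the form `Acme Corp ("Seller")` from a contract preamble; the pattern and sample text are invented examples, and real extraction products use far more robust rules.

```python
import re

preamble = ('This Agreement is entered into between Acme Corp ("Seller") '
            'and Beta Holdings LLC ("Buyer").')

# Match a run of capitalized words immediately followed by a quoted
# defined term in parentheses, e.g. Acme Corp ("Seller").
party_pattern = re.compile(r'((?:[A-Z][\w&.]*\s?)+)\s*\("([^"]+)"\)')

parties = {role: name.strip() for name, role in party_pattern.findall(preamble)}
print(parties)  # {'Seller': 'Acme Corp', 'Buyer': 'Beta Holdings LLC'}
```

Rule-based extraction like this is deterministic and auditable, which is exactly why it remains competitive with GenAI for structured fields.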

When you prompt an LLM to operate on information within controlled documents, hallucinations can be minimized. David Colarusso, co-director of Suffolk Law School’s Legal Innovation & Technology Lab, developed a series of GenAI prompts he calls spells that can perform many tasks along these lines.


Large-Scale Summaries

Summarization is an advancing field. The state of the art is no longer simply putting everything into a context window. AI can perform summaries across bodies of information beyond the capacity of the human brain to process easily. This is where machines excel.

Imagine having to review 50 legal briefs to find the commonalities and the strengths and weaknesses of the arguments. Now imagine doing this in minutes; it’s impossible for a human. But GenAI can summarize each document individually and then perform a summary of summaries. Special-purpose applications can be developed to automate tasks like this. Several are on the market right now.
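The "summary of summaries" pattern can be sketched in a few lines. A real application would call an LLM wherever `summarize()` appears; here a stub that keeps the first sentence stands in so the control flow is runnable.

```python
def summarize(text: str) -> str:
    """Stub summarizer: keep the first sentence. A real system would
    send the text to an LLM with a summarization prompt."""
    return text.split(". ")[0].strip(".") + "."

def summary_of_summaries(documents: list[str]) -> str:
    per_doc = [summarize(d) for d in documents]  # map: one summary per brief
    return summarize(" ".join(per_doc))          # reduce: summarize the summaries

briefs = [
    "The motion should be denied. Venue is proper in this district.",
    "The motion should be granted. The claim is time-barred.",
]
print(summary_of_summaries(briefs))
```

The two-stage shape is what matters: each brief is condensed independently (which parallelizes well), and only the short per-document summaries need to fit into the final prompt.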

There is an emerging technique for accomplishing large-scale summaries called RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval). RAPTOR solutions address large-scale summarization by creating hierarchies of summaries that a system can search through and summarize across documents.
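A loose sketch of that hierarchy idea follows. The actual RAPTOR method clusters text by embedding similarity before summarizing each cluster; here, fixed-size groups and a truncating stub summarizer stand in, just to show how the levels stack until a single root summary remains, with every level kept for retrieval.

```python
def summarize(texts: list[str]) -> str:
    """Stub: truncate and join. A real system would prompt an LLM."""
    return " / ".join(t[:20] for t in texts)

def build_hierarchy(leaves: list[str], group_size: int = 2) -> list[list[str]]:
    """Repeatedly summarize groups of nodes until one root remains,
    returning every level so a search can target coarse or fine text."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        current = levels[-1]
        levels.append([summarize(current[i:i + group_size])
                       for i in range(0, len(current), group_size)])
    return levels

levels = build_hierarchy(["doc one text", "doc two text",
                          "doc three text", "doc four text"])
print(len(levels), len(levels[-1]))  # 3 levels, ending in a single root summary
```

A retrieval system can then answer broad questions from the upper levels and detailed ones from the leaves.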


Retrieval Augmented Generation (RAG)

A common misconception is to think about GenAI as a database. ChatGPT and other GenAI are not databases. They are LLMs that respond to prompts and predict the next word (really a token) in a sequence of output. GenAI creates output from training data through math and probabilities. Probabilities aren’t certainties, so when the LLM predicts the next word incorrectly, it can go down a path of words that makes probabilistic sense but is not true. This is a hallucination. Remember that training data can include information that is not authoritative and can even be outright false.
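A toy model makes the "predict the next word" mechanic tangible. The sketch below counts which word follows which in a tiny invented corpus and always emits the most probable successor. Real LLMs predict tokens with a neural network over a huge vocabulary, not a bigram table, but the "most likely next word" idea is the same.

```python
from collections import Counter, defaultdict

corpus = ("the court held that the contract was void . "
          "the court held that the claim was barred .").split()

# Count successors: how often word b follows word a in the corpus.
successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1

def most_likely_next(word: str) -> str:
    return successors[word].most_common(1)[0][0]

phrase = ["the"]
for _ in range(4):
    phrase.append(most_likely_next(phrase[-1]))
print(" ".join(phrase))  # "the court held that the"
```

Each step is only a probability judgment; string enough of them together and the output can sound fluent while drifting away from anything true, which is all a hallucination is.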

RAG, on the other hand, involves accessing or searching a controlled data set or a database. Internal law firm documents or material licensed from trusted sources are updated frequently to ensure accuracy. RAG solutions combine GenAI’s mastery of language and summarization with controlled data to get better results that can be more trusted. RAG also avoids retraining an LLM, which is costly. Organizations building GenAI applications to search their own data are using a RAG approach. The leading legal research vendors are using RAG approaches too.

Because RAG accesses a database, it can provide links to the authoritative documents being referenced. Hallucinations can be minimized as prompts are directed to summarize the contents of documents.
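A minimal RAG sketch shows how the pieces fit: retrieve the most relevant documents from a controlled set, then build a prompt that tells the model to answer only from those documents and cite them. The document IDs and texts are invented, and word overlap stands in for the embedding search real systems use.

```python
documents = {
    "firm/memo-12": "Force majeure clauses excuse performance after unforeseeable events.",
    "firm/memo-47": "The statute of limitations for contract claims is four years.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by shared words with the query (a stand-in for
    embedding similarity) and return the top k document IDs."""
    q = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: -len(q & set(documents[d].lower().split())))
    return ranked[:k]

def build_rag_prompt(query: str) -> str:
    hits = retrieve(query)
    context = "\n".join(f"[{d}] {documents[d]}" for d in hits)
    return (f"Answer using only the sources below, citing the [id] of each.\n"
            f"{context}\n\nQuestion: {query}")

print(build_rag_prompt("When does force majeure excuse performance?"))
```

Because the retrieved IDs travel with the prompt, the application can render them back to the user as links to the authoritative documents.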

RAG solutions can also be designed so you can “chat with your data” using an approach called multi-turn conversation.
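Mechanically, multi-turn chat usually means the application resends the accumulated message history with each request so the model sees the prior turns. A sketch, using the common system/user/assistant role convention; `answer_fn` is a placeholder for the model call.

```python
class Conversation:
    """Accumulate chat history so each new question carries prior context."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, question: str, answer_fn) -> str:
        self.messages.append({"role": "user", "content": question})
        answer = answer_fn(self.messages)  # a real app would call the model here
        self.messages.append({"role": "assistant", "content": answer})
        return answer

chat = Conversation("Answer only from the retrieved firm documents.")
chat.ask("What is force majeure?",
         lambda msgs: "An unforeseeable event excusing performance.")
chat.ask("Does our template cover it?",
         lambda msgs: f"(the model would see {len(msgs)} prior messages)")
print(len(chat.messages))  # 5: system prompt plus two user/assistant pairs
```

The growing history is what lets a follow-up like "Does our template cover it?" resolve "it" to the earlier topic.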


Temperature is a GenAI setting that focuses on the randomness and creativity of answers. A higher temperature setting gives more freedom to select lower-probability tokens to generate the next word. This increases creativity but also increases hallucinations. A temperature setting of zero requires an LLM to select the highest-probability token when developing a response. By using a low temperature setting, a RAG application with multi-turn chat capabilities is less likely to hallucinate.
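The temperature mechanic can be shown directly. In the usual formulation, token scores (logits) are divided by the temperature before the softmax, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it; temperature zero is conventionally treated as pure argmax. The logit values below are invented.

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float,
                 rng: random.Random) -> str:
    if temperature == 0:
        return max(logits, key=logits.get)  # always the most probable token
    # Temperature-scaled softmax over the logits.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    return rng.choices(list(weights), [w / total for w in weights.values()])[0]

logits = {"court": 2.0, "contract": 1.0, "kraken": 0.1}
print(sample_token(logits, 0, random.Random(42)))    # "court" every time
print(sample_token(logits, 2.0, random.Random(42)))  # may pick a lower-probability token
```

At temperature zero the output is deterministic; at higher temperatures even "kraken" gets a real chance, which is exactly the creativity-versus-hallucination trade the article describes.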

RAG applications have benefits for lawyers as they can operate from trusted authoritative data, and they can provide citations and references. Hallucinations are minimized, particularly if the applications limit temperature or limit multi-turn functionality.


Embrace Hallucinations As A Feature

The last example of what is actually working with GenAI involves hallucinations. AI hallucinations are viewed as a bug or limitation. The language of ChatGPT is so authoritative sounding that some lawyers have filed court documents referencing fictitious cases. Lawyers must be careful.

However, if a lawyer is stumped and looking for a fresh legal argument, brainstorming with ChatGPT may help. Prompting ChatGPT to develop a creative argument might result in some wild ideas from hallucinations. Even absurd arguments might spark a new thought process for a lawyer to come up with a legally sound new argument. As long as the lawyer understands what hallucinations are, new arguments might be surfaced.

We are still in the early days of GenAI applications. Lawyers are still learning new ways to use the technology and what is useful. Measuring return on investment and prioritizing efforts are still a work in progress for law firms and law departments alike. AI is going to change the way lawyers work, but that will happen over time. By leveraging the concepts outlined above, perhaps lawyers can begin to achieve some measure of success and evolve their work as AI creates new possibilities.



So speakin’ like a swashbuckler may seem like a novelty, but yo-ho-ho, ye may just find it useful, matey!




Ken Crutchfield is Vice President and General Manager of Legal Markets at Wolters Kluwer Legal & Regulatory U.S., a leading provider of information, business intelligence, regulatory and legal workflow solutions. Ken has more than three decades of experience as a leader in information and software solutions across industries. He can be reached at ken.crutchfield@wolterskluwer.com.
