The
International
Legal
Technology
Association
conference
is,
as
always,
the
big
one
on
the
legal
tech
calendar.
It’s
the
Davos
of
doc
review,
the
Super
Bowl
of
software
integrations,
and
the
Oscars
of
people
who
unironically
use
“lawyers”
and
“technology”
in
the
same
breath.
This
year,
4,600+
legal
tech
professionals
and
vendors
gathered
in
Maryland
for
a
weeklong
salute
to
all
technology…
but
mostly
AI.
National Harbor is not so much a town as a synthetic terrarium meticulously constructed to drag conference business out of DC.
A
simulacrum
of
a
city
center,
with
themed
restaurants
and
bars
ideally
suited
to
host
client
dinners
or
vendor
happy
hours.
As
a
conference
site,
it’s
a
perfect,
enclosed
economy
running
on
lanyard-strung
badges
and
drink
tickets.
There
are
people
who
complain
about
this
venue
and
I
will
fight
every
one
of
them.
Anyone
whining
about
National
Harbor
as
a
venue
should
be
sentenced to the ninth circle of hell
(just
past
the
Magnolia
Rooms
in
the
Gaylord
Nashville
—
which
is,
while
we’re
on
the
subject,
the
location
of
this
event
next
year).
This
trade
show
Narnia
served
as
an
eerie
counterpoint
to
the
tragi-buffoonery
unfolding
across
the
Potomac.
A
peaceful,
walkable
village
set against
a
city
under
siege.
Neither
reflects
reality
much.
National
Harbor
is
basically
a
soundstage
while
D.C.
is
the
safest
it’s
been
in
decades.
The Humvees cruising the drop-off lane of Union Station were D.C.’s answer to the piano bars of National Harbor, but both are just props backing up an illusion.
For National Harbor, it’s to provide a comfortable conference venue
that
doesn’t
feel
like
it’s
been
plopped
in
the
middle
of
nowhere.
For
D.C.
it’s
to
distract
everyone
from
how
many
times
Donald
Trump
might
show
up
in
the
Epstein
files.
One
of
these
projects
is
more
successful
than
the
other.
But
the
key
to
appreciating
National
Harbor
is
recognizing
it
as
a
custom-built
conference
center.
It’s
not
a
real
city,
it’s
a
simulation
designed
for
dispensing
knowledge
and
sterno-warmed
chicken
dumplings.
It’s
ruthlessly
efficient
at
delivering
both,
but
you
can’t
lose
sight
of
the
fact
that
it
exists
to
make
attendees
feel
good
about
the
experience.
Not
unlike
the
AI
products
dominating
the
conversation.
Since
arriving
on
the
scene,
artificial
intelligence
has
inspired
legal
tech
vendors
to
design
all
manner
of
products
delivering
on
the
promise
of
increased
productivity.
But
most
of
what
we
hear
about
are
the
lawyers
who
forgot
that
the
AI
experience
is
an
illusion
of
its
own.
“Hallucinations” aren’t the problem; lawyers failing to check their work before sending it out the door is the problem.
But
that’s
also
a
bit
reductionist.
There’s
a
psychological
dimension
to
the
chatbot
interface.
Lawyers
wouldn’t
trust
a
summer
associate’s
brief
on its face,
so
why
are
they
trusting
ChatGPT?
The
Christine
Lemmer-Webber
description
of
it
as
Mansplaining
as
a
Service
gets
a
lot
of
the
way
there
—
as
a
tool,
AI
delivers
results
with
supreme
confidence
no
matter
how
wrong
it
might
be.
Though
that’s
not
the
whole
story,
because
mansplaining
takes
a
condescending
tone
while
the
problem
with
these
bots
continues
to
be
their
overzealous
compulsion
to
give
the
user
the
results
they
want.
Standing
in
the
Gaylord’s
Belvedere
Lounge
last
week,
I
explained
that
it’s
more
like
the
guy
who’s
become
convinced
that
the
stripper
is
in
love
with
him.
The
medium
is
the
message,
as
McLuhan
would
say.
Legal
tech
vendors
expend
massive
resources
to
make
sure
AI
products
deliver
more
reliable
results,
but
they’re
fighting
a
constant
battle
against
a
public
AI
sales
pitch
telling
the
world
that
AI
isn’t
just
finding
evidence,
it’s
finding
answers.
Even
though
“the
answer”
is
often
what
the
lawyer
is
hired
to
get
around.
But
that’s
going
to
be
the
sales
pitch,
because
no
one
gets
megarich
promising
cautious improvement; they
have
to
drive
revolutionary
change…
whether
it’s
warranted
or
not.
Which
becomes
its
own
hallucination.
Do
we
have
a
word
for
industries
built
on
shared
illusions?
The
official
theme
at
ILTACON
was…
pirates.
Attendees
dressed
up
like
pirates,
trading
plastic
doubloons
for
free
drinks
and
snapping
pictures
around
the
impressive
Crow’s
Nest
at
the
end
of
the
exhibit
hall.
Nothing
says
“NOT
A
BUBBLE”
like
dressing
up
as
the
romantic
ideal
of
people
showering
themselves
in
stolen
wealth.
To
be
clear,
this
isn’t
to
say
the
legal
technology
sector
—
or
more
specifically
the
AI
component
of
it
—
is
a
bubble.
Vendors
outdid
themselves
this
year
in
developing
new
and
more
interesting
ways
to
deploy
AI
to
improve
the
legal
workflow.
Definely
launched
its
Cascade
product,
using
AI
technology
to
track
first,
second,
and
third-order
knock-on
effects
from
contract
changes
to
combat
negotiation
whack-a-mole.
Everlaw
showed
off
a
new
deep
dive
tool
to
allow
more
senior
attorneys
to
interrogate
their
document
sets
at
every
stage
of
the
litigation.
Both
NetDocuments
and
iManage
continue
finding
new
ways
to
automate
the
process
of
making
the
firm’s
own
data
more useful.
Legal
AI
providers
may
not
be
a
bubble,
but
they
could
well
be
one
of
those
rainbow
swirls
shimmering
beautifully
on
the
surface
of
an
underlying
AI
bubble.
Most
of
the
AI
on
display
at
the
show
is
still
“building
off”
other
products
—
the
“foundational models,” to use the parlance of the trade.
At
the
end
of
the
day,
a
lot
of
this
stuff
rests
on
the
energy
guzzling
backs
of
OpenAI,
Anthropic,
Gemini,
and
MechaHitler
(or
whatever
Grok
is
calling
itself
now).
Fawning
media
coverage
and
half-trillion-dollar
valuations
suggest
this
is
a
gravy
train
extending
decades
into
the
future,
but
can
this
really
hold
up?
To
quote
tech
industry
analyst
Ed
Zitron,
“by
the
end
of
2025,
Meta,
Amazon,
Microsoft,
Google,
and
Tesla
will
have
spent
over
$560
billion
in
capital
expenditures
on
AI
in
the
last
two
years,
all
to
make
around
$35
billion.”
What
happens
when
one
—
or
more
—
of
the
companies
behind
these
models
runs
out
of
cash
to
pay
the
bills?
The
fact
that
legal
tech
providers
are
building
stuff
on
top
that
can
actually
pay
the
bills
won’t
matter
much
if
the
foundational
bot
goes
dark.
“Pirates, Be Ye Warned”: we’re one lackluster NVIDIA earnings call away from a precipitous drop.
Seriously,
how
does
OpenAI
pay
off
a
$500+
billion
valuation?
How
is
this
revenue
supposed
to
arrive?
Tokens
are
the
coin
of
the
realm
in
AI,
and
unlike
groceries,
they’re
actually
getting
less
expensive.
A massive surge in users isn’t likely barring the imposition of year-round school — a recent study shows AI subscriptions plummet when students don’t need them to write papers over the summer — so it’s hard to imagine where the AI world expects to find enough volume to make back its money.
It
sounds
crazy
to
suggest
a
half-trillion
would
just
disappear,
but
it
sounded
crazy
to
suggest
Lehman
Brothers
would
disappear
until
it
did.
Few
tech
observers
are
ready
to
embrace
a
notion
this
grim.
But several at the show seemed willing to acknowledge the risk, even if most wouldn’t admit it out loud.
When
I’d
suggest
the
possibility
of
a
foundational
model
provider
going
belly
up,
one
industry
insider
said
directly,
“I
think
that’s
very
possible.”
The show unfolded against the backdrop of GPT-5 arriving to a general “meh,”
which
certainly
didn’t
help
promote
the
sense
that
we
were
all
riding
AI
to
the
moon.
The
newest
OpenAI
model
didn’t
come
up
much
throughout
the
week,
surprising
for
an
announcement
with
so
much
hype,
and
it’s
prompting
the
broader
computer
science
community
to
ask
“What
If
A.I.
Doesn’t
Get
Much
Better
Than
This?”
(which,
as
a
title,
certainly
sounds
familiar…).
While
the
most
audience-friendly
take
on
that
question
deals
with
the
much
ballyhooed
“displacement”
and
whether
or
not
it
will
actually
place
every
human
in
a
Matrix-style
incubator
by
2027,
it’s
worth
realizing
that
without
an
exponential
step
improvement
for
AI,
the
only
displaced
humans
will
be
the
ones
working
for
the
foundational
model
providers.
Though
most
kept
a
positive
outlook
while
signaling
caution.
Lexis, showing off their new AI-powered legal research tool, stressed
that
they’re
constantly
evaluating
the
foundational
models
and
suggested
they
could
quickly
swap
to
another
model
as
necessary.
In
fact,
one singularly practical feature
of
the
new
Protégé
offering
is
the
option
of
using
general
AI
models
of
the
lawyer’s
choice
(4o, o3,
5,
or
Claude
Sonnet)
from
within
the
Lexis
tool.
Questions
that
lawyers
might
otherwise
plug
into
these
consumer-facing
models
can
be
posed
within
the
secure
Lexis
environment.
The
purpose
was
empowering
choice,
but
it’s
also
useful
for
keeping
options
open.
Of
note,
GPT-5 failed to impress Lexis enough to update the mélange
of
algorithms
they
use
for
different
purposes
within
its
Legal
AI
tool
—
another
interesting
dig
at
the
newest
model
heard
at
the
conference.
Thomson
Reuters
CEO
Steve
Hasker
underscored
the
company’s
commitment
to
the
large
language
models
we
all
know
and
love/hate,
but
interestingly
took
the
opportunity
to
heavily
tout
the
company’s
acquisition
of
Safe
Sign
Technologies.
Hasker
said
he
believed
the
Safe
Sign
scientists
“were
in
the
process
of
building
the
best,
small
language
models
for
the
legal
profession,”
and
that
Thomson
Reuters
is
investing
significantly
in
their
work.
As
a
strategic
matter,
the
decision
to
devote
resources
to
building,
for
lack
of
a
better
analogy,
an
American
DeepSeek
seems
like
a powerful hedge.
Even
if
none of the big providers collapses under the weight of bloated valuations and paltry revenues — and I still think one will —
Thomson
Reuters
will
have
a
model
on-hand
that
can
be
thrown
at
problems
far
more
cheaply
and,
continuing
the
DeepSeek
analogy,
more
accurately
based
on
a
cleaner
training
regimen.
Some
firms,
he
said,
are
already
asking
about
bringing
instances
of
these models
behind
their
firewalls.
Maybe
the
lasting
accomplishment
of
large
language
models
will
be
the
small
language
models
we
make
along
the
way.
Hang
around
National
Harbor
long
enough
and
the
cracks
become
apparent.
Illusions
only
last
so
long,
after
all.
The
bar
immediately
across
from
the
Gaylord
sat
abandoned
this
time
around,
a
particularly
jarring
development
considering
it
had
a
built-in
customer
base
of
sloppy
conventioneers.
It
certainly
seemed
unthinkable
that
one
of
these
curated
entertainment
experiences
could
fail.
Is
it
so
absurd
to
imagine
an
artificial
intelligence
behemoth
will
go
under
like
that
bar?
National
Harbor
carried
on
without
this
watering
hole,
of
course,
but
the
absence
served
as
a
reminder:
all
gin
joints
are
fleeting.
AI
is
too
powerful
to
disappear
from
the
legal
workflow,
but
strolling
the
conference
this
year,
it
seemed
as
though
providers
would
do
well
to
consider
the
possibility
that
one
or
more
of
the
models
underlying
all
this
progress
might
implode.
For
that
matter,
customers
need
to
make
sure
they’ll
be
covered.
Until
then,
the
band
plays
on,
the
rum
flows,
and
everyone
convinces
themselves
they’ll
be
holding
the
doubloons
when
the
lights
come
on.
Joe Patrice is
a
senior
editor
at
Above
the
Law
and
co-host
of
Thinking
Like
A
Lawyer.
Feel free to email
any
tips,
questions,
or
comments.
Follow him on Twitter or Bluesky
if
you’re
interested
in
law,
politics,
and
a
healthy
dose
of
college
sports
news.
Joe
also
serves
as
a
Managing
Director
at
RPN
Executive
Search.
