
The buzz this year at CES isn’t just agentic AI. It’s also wearables and their growing power and uses. And as with agentic AI, these wearables could have significant implications and pose challenges for legal.

The Rise of Wearables

When we talk about wearables, we mean things like glasses, watches, and necklaces that are not just fashion pieces but can actually do things.

While the notion of wearing something like a smartwatch that can show your emails, control your smart devices, or track your heart rate has been around for a while, the difference now is that these wearables can combine with AI, agentic AI, virtual reality, and augmented reality to do much more.

A simple example is the Meta glasses. These glasses can read your text messages and let you take a video or picture with a touch of the temple. But you can also verbally ask the glasses questions like “what am I looking at?” or “tell me about this painting I am seeing in the museum.” By combining with AI, the glasses can whisper an answer in your ear that no one else can hear. And that’s just the beginning.

I attended a panel discussion in which Resh Sidhu, Senior Director of Innovation at Snap Inc., talked about what her company is developing. Snap is the company behind the social media tool Snapchat. Snap introduced its first glasses wearable back in 2016 and has been working on the concept ever since. Sidhu showed a short video of how future versions of Snap glasses could combine with AR, VR, and AI to do amazing things, like line her up for a perfect 3-pointer in a pickup basketball game. Or be her companion on a trip to Paris, like an experienced tour guide.

At the Lenovo keynote, the presenters talked about a wearable necklace that could do similar things. It’s still a proof of concept, but the direction is clear.

Several presenters in several contexts talked about AI wearables that “see what you see and hear what you hear” and can respond to your needs. The advantage, of course, is that these wearables allow the wearer to “do things in the moment without reaching out to a screen that pulls us away,” according to Sidhu.

These wearables have tremendous potential. They can increase safety. They can be training guides. They can provide useful information and understanding of complex issues. They can be always on, awaiting you to say “hey Meta,” or whatever the command may be (don’t worry, “hey Siri” still won’t get you very far).

Advantages for Lawyers

For lawyers, it’s easy to see some advantages. Think about taking a deposition where your glasses suggest questions and follow-ups while you watch the witness’s body language instead of your screen.

Or take one of the things that used to bedevil me as a young lawyer in a courtroom: your glasses could tell you, “Hey, object, hearsay.” And tell you why.

Or supply you with information to answer your client’s questions in an in-person meeting. Or combine with other tools to explain what your opponent is doing when he makes certain arguments to a judge or takes a position. Or help you understand and deal with what a mediator is doing in a mediation.

Lots of benefits. But also some real issues, and therein lie the challenges for legal.

Legal Issues

These high-powered wearables raise some interesting issues. I wrote recently about an AI proctor that detects whether a witness is using AI in a remote interview, in large part by determining whether the person is looking at a screen. Good idea. But what happens if the person doesn’t need a screen to get the AI answer because it’s provided through the witness’s glasses?

Suppose a witness takes the stand to testify wearing glasses. How do we know they aren’t being fed answers by a bot? Do we demand that the glasses be examined? I’m not sure our courts are ready for that.

The Bot is Lying

And what happens if the advice the bot gives is wrong and someone acts on it? Most of us know that LLMs make mistakes and hallucinate regularly. It’s one thing when the output appears on a screen; it’s another when it’s delivered in the moment, in your ear. We have enough problems with people acting on the spur of the moment based on screen output; the temptation to run with an answer whispered in your ear is far greater.

Privacy Issues

There are privacy issues as well. All these devices are creating data. Where does it go? Who has access? Will it be discoverable? Imagine your client getting a discovery demand for everything their glasses created and kept.

We have enough trouble with clients creating evidentiary trails when they type inputs; wearables will multiply those problems.

And it’s not just your privacy that’s at stake. I have a pair of first-generation Meta glasses. I can take a picture or video that those around me would scarcely detect, violating their privacy.

The Impact on Dispute Resolution

Certainly, that kind of world would eliminate a lot of “he said, she said” disputes if there is data somewhere that would clarify them, much like police body cams often tell the real story, provided they are turned on. These kinds of disputes are often difficult to litigate since they turn on whom the fact finder finds more credible, and that can hinge on a variety of unpredictable factors.

But even in those kinds of disputes, our litigation system is designed to determine who is telling the truth based on a totality of facts and testimony about interactions between people. AI wearables could easily turn that totality of facts into a sound bite.

And what kind of world would it be where you have to think about everything you say or do? Can you imagine the posturing and games that would be played?

Set-ups, where one party sends an orchestrated letter or statement designed to provoke a reaction, are already pretty common. It’s a gamesmanship tactic I’ve seen used over and over again by both lawyers and clients. Wearables increase the opportunity and temptation to do just that.

A Lack of Guardrails

Right now, there are few rules or guardrails in place except for those the vendors may provide out of the goodness of their hearts. About the only law on point is the requirement that there be consent for a conversation to be recorded.

While a few states require both parties’ consent, most states require only one party’s consent, rendering the rule largely toothless to begin with.

Do we need to require those with AI wearables to disclose that fact when interacting with others? Isn’t there an inherent disadvantage in substantive interactions where one person has access to AI and can create a record and the other doesn’t?

And don’t forget, there is still the issue of deepfakes. Outputs from AI wearables could easily be manipulated to make what happened look very different from what really did.

Our Responsibility

It’s often said that to whom much is given, much is expected. The concept applies here. Wearables offer tremendous potential benefits across a broad spectrum of life. But with those benefits comes our responsibility as lawyers and legal professionals to think hard about the issues and risks these wearables bring to the legal process and to dispute resolution.

We have already seen the result of a lack of planning and thinking about the risks of evidence manipulation that deepfakes have brought: courts and litigants unprepared to deal with those scenarios and questions. A lack of rules and guidance. A threat to our system.

Without planning and forethought, we could end up in the same place with wearables.

Legal has not only been slow to embrace technology, it’s also been slow to understand the risks technology brings to things like the rule of law and fundamental fairness.

It’s been said that insanity is doing the same thing over and over and expecting different results. The time is now to think about how to manage the risks to legal while appreciating the benefits and uses of these tools by society. Otherwise, we will face the same crisis with wearables that we face with deepfakes: scrambling to deal with technology we don’t understand.

Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to examining the tension between technology, the law, and the practice of law.
