An AI Proctor For Remote Depositions: Has Its Time Come?

Opening day at the Media Days at CES brought an unexpected discovery: an AI proctor designed to detect whether someone being remotely questioned might be using AI for answers. Has its time come? Or is it an unfair tool that creates more problems than it solves?

For several years, the Media Days have kicked off with a startup pitch competition hosted by the Japan External Trade Organization (JETRO) in partnership with Showstoppers, which also hosts numerous media events at CES.

As I mentioned in my CES kickoff piece, I attend the show to identify developments that could impact legal practice in general, and given my litigation background, litigation proceedings in particular. I wasn’t expecting to stumble across something relevant at the initial startup competition among primarily Japan-based entrepreneurs.

In truth, I was sort of half listening as a series of entrepreneurs took the stage to talk about things like AI-generated avatars, meeting anime characters, and autonomous microgravity devices. Then a young man took the stage to talk about a company that’s designed a tool to detect potential cheating in online tests and, more importantly, online interviews, by detecting when candidates use AI to provide the answers.


Qlay


Tom Nakata is the co-founder and CEO of Qlay, which has created AI Proctor, a tool that does just that. The tool listens in on remote interviews and detects whether the interviewee is using AI to generate an ideal answer to a question and then reading it off a teleprompter. It works by detecting eyeball movement and analyzing speech. It also has a feature where the interviewee can be required to log into the Qlay app and set up their mobile phone as a side camera as a second check.
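
To make that concrete, here is a minimal sketch, in Python, of how gaze and speech signals might be fused into time-stamped flags. Qlay hasn’t published its method; the Frame fields, thresholds, and flag_suspect_spans logic below are my own illustrative assumptions, not its implementation.

```python
# A minimal sketch of the kind of signal fusion an AI proctor might use.
# Qlay has not published its approach; the thresholds, feature names, and
# scoring here are illustrative assumptions, not its actual implementation.

from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float      # seconds into the interview
    gaze_offset: float    # gaze deviation from the camera, in degrees
    words_per_min: float  # local speaking rate
    pause_before: float   # silence preceding this utterance, in seconds

def flag_suspect_spans(frames, gaze_limit=15.0, pause_limit=4.0,
                       cadence_floor=150.0):
    """Flag moments where sustained off-camera gaze coincides with a long
    pre-answer pause or an unusually fast, smooth cadence -- the pattern of
    reading a generated answer off a second screen."""
    flags = []
    for f in frames:
        reading_posture = abs(f.gaze_offset) > gaze_limit
        long_think = f.pause_before > pause_limit
        scripted_cadence = f.words_per_min > cadence_floor
        if reading_posture and (long_think or scripted_cadence):
            flags.append(f.timestamp)
    return flags

if __name__ == "__main__":
    session = [
        Frame(12.0, 3.0, 110.0, 0.8),    # normal answer
        Frame(47.5, 22.0, 165.0, 5.2),   # looks away, pauses, reads fast
        Frame(90.0, 18.0, 160.0, 1.0),   # sustained off-camera reading
    ]
    print(flag_suspect_spans(session))   # -> [47.5, 90.0]
```

Note that even in this toy version, the output is a list of timestamps for a human to review, not a cheating verdict, which matches how Nakata described the product.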

It all sounded reasonable in a pitch environment, but of course, as we all know, the devil will be in the details. Still, I’m pretty sure using AI tools to cheat is a fact of life, and a tool that helps ferret it out is a logical and timely idea.


What Does This Have to Do with Legal?

There are a lot of parallels between remote depositions and remote interviews. A remote interview is a format where questions are asked by interviewers and answers given by interviewees. The answers are then evaluated to determine whether the interviewee is really qualified. In that regard, it’s pretty similar to a deposition, where the answers can be critical to seeking the truth, and therefore the credibility of the answer and the witness is important.

And while it hasn’t gotten much publicity, in the age of remote proceedings (depositions and court proceedings alike), cheating with AI tools has to be a real risk. Even in my day, a kind of cheating in depositions was not all that unusual: a lawyer tapping their witness under the table when the answer was important or the witness was droning on too long; a prearranged cough as a signal; a sudden need to use the restroom to keep the witness on track. I even had an opposing lawyer knock over a pitcher of water to disrupt the questioning.

But when you combine the fact of remote proceedings with the existence of AI tools that can suggest a “right” or “best” or even a more articulate answer, we now have a real problem. Testimony by an AI bot is akin to the deepfake problems I recently wrote about in that it poisons the validity of the answer and the proceeding.

And it’s a real risk. Nakata told us he used to run a recruiting service, and he estimated that some 40% of interviewees were using AI tools to cheat in remote interviews. He showed us a video of an interviewee cheating, and the cheating was undetectable until Nakata pointed it out. Early last year, a startup with an app that promised to help people “cheat on everything,” including interviews, reportedly raised $5.3 million. Moreover, the ease with which this can be accomplished makes it awfully tempting for a nervous witness to seek help from a smooth-talking bot.

So, it’s naïve to think that witnesses in remote depositions or other proceedings are not doing the same thing. The cheating may not even involve the lawyer; the witness could set up an AI tool unbeknownst to their lawyer. I can also see this kind of cheating being particularly tempting for expert witnesses, to help them give the correct technical answer, stretch their credentials, or even find support for their findings.


The Advantages

Nakata cited several advantages of the tool that should resonate with lawyers. For example, he told us that the Qlay tool is different from those of its competitors, which rely on humans to make the determination. Interviewers get tired, especially after going through multiple interviews in one day, and would be less likely to notice badges of cheating as the day went on. The same is true of lawyers taking depositions, especially after several hours of looking at a screen.

Nakata also noted how difficult it is for a human to determine whether cheating is occurring while also concentrating on the questioning. Lawyers have the same problem.

Using a tool like this would allow the lawyer to dig in on questions where the proctor noted evidence of this kind of cheating. Asking the witness whether they were using an AI tool for answers would force them to admit or deny it. It would give the examiner grounds to ask for a 180-degree camera view. It would give the examiner grounds to take a break and ask that a second camera, such as the one Qlay has developed, be put in place.

Ultimately, it would allow the lawyer to make credibility arguments to the judge or jury based on what the tool has revealed. It would allow folks like Nakata to testify as expert witnesses as to what the tool suggests.


It’s Not Foolproof

Nakata admitted that the tool is “not the judge of whether cheating has occurred.” It merely records the interview, surfaces evidence of possible cheating, and notes when it occurred. It’s up to the human to decide whether cheating has really happened.
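
Put differently, the deliverable is a time-stamped evidence log rather than a verdict. The sketch below imagines what such a log might look like; the field names and JSON format are my assumptions, since Qlay’s actual output format isn’t public.

```python
# A hypothetical shape for the proctor's output: not a verdict, just
# time-stamped evidence for a human to review. The field names are invented.

import json

flag_report = {
    "session_id": "depo-2025-001",   # illustrative identifier
    "verdict": None,                 # the tool never decides; a human does
    "flags": [
        {"time": "00:14:32", "signal": "gaze_offset",
         "note": "sustained look away from camera during answer"},
        {"time": "00:14:35", "signal": "speech_cadence",
         "note": "answer delivered at reading pace after long pause"},
    ],
}

print(json.dumps(flag_report, indent=2))
```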

And of course, it could be claimed that a proctor makes people nervous and affects their testimony. Or that it’s biased and finds possible cheating when it’s not there. That it’s somehow not fair.

But as long as we say it’s not determinative but rather something that a fact finder needs to know, it could on balance be an aid. Even if the AI testimony is not false or fabricated but just more articulate than it would otherwise be, isn’t that something a fact finder should know? An AI-generated answer is not the witness’s answer; it’s the bot’s answer. And if the answer is generated, isn’t that something that a lawyer should be able to inquire into?

Is it fair for a witness to secretly substitute what should be his testimony with that of a bot? The whole point of discovery and witness examination is to get the witness’s testimony, not that of someone else.


The AI Proctor: Its Time Has Come

Just because the potential for this kind of deposition cheating didn’t exist before AI doesn’t mean we should ignore it now.

With more and more depositions being taken remotely and more and more proceedings being conducted online, it stands to reason that more witnesses will cheat. If Nakata’s estimate that 40% of people use AI to cheat in interviews is even half right, the problem is significant, and a like percentage probably applies to depositions as well.

Like deepfakes, this kind of substitution of AI for what is real has the capacity to impinge on the validity and integrity of proceedings and, ultimately, our rule of law. It makes a joke of the notion of witness veracity.

Whether Nakata’s tool can do what he says remains to be seen. He candidly admitted that it was a challenge to create a tool that can live-stream an interview while its analytics simultaneously detect what the candidate is doing. “It’s hard to check if the interviewee is using a cheating device in real time,” he noted.

While Nakata was personable, articulate, and frankly seemed credible, I have no way of knowing how accurate what he said is or what his tools can actually do. But I do know that we need to face the fact that, like deepfakes, cheating in testimony is a real threat. It can’t be ignored if we want to protect the integrity of legal proceedings and the rule of law.




Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.