
Stat(s) Of The Week: Right Place, Wrong Time – Above the Law

As more people use AI-powered searches to get a quick handle on the latest news, it's worth considering how reliable the information they're getting is.

Not very, it turns out.

According to research recently published by the BBC and the European Broadcasting Union, AI assistants misrepresent news content 45% of the time.

The report, "News Integrity in AI Assistants," is based on a study involving 22 public service media organizations in 18 countries to assess how four common AI assistants (OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity) answer questions about news and current affairs.

Each organization asked a set of 30 news-related questions (e.g., "Who is the pope?" "Can Trump run for a third term?" "Did Elon Musk do a Nazi salute?"). More than 2,700 AI-generated responses were then assessed by journalists against five criteria: accuracy, sourcing, distinguishing opinion from fact, editorialization, and context.

Overall, 81% of responses were found to have issues, and 45% had at least one "significant" issue. Sourcing was the most pervasive problem, with 31% of responses providing misleading or incorrect attributions or omitting sources entirely. In addition, 20% of responses contained "major accuracy issues," such as factual errors, outdated information, or outright hallucinations.

Largest study of its kind shows AI assistants misrepresent news content 45% of the time, regardless of language or territory [BBC]


But see: Vals AI's Latest Benchmark Finds Legal and General AI Now Outperform Lawyers in Legal Research Accuracy [LawSites]