
Automation undeniably has some useful applications. But the folks hyping modern “AI” have not only dramatically overstated its capabilities; many of them also view these tools as a way to lazily cut corners or undermine labor. There’s also a weird innovation cult that has arisen around managers and LLM use, resulting in the mandatory adoption of tools that may not be helping anybody, just because. The result is often a hot mess, as we’ve seen in journalism.

The AI hype simply doesn’t match the reality, and a lot of the underlying financial numbers being tossed around aren’t based in reality either, which is very likely going to result in a massive bubble deflation as reality and the hype cycle collide (Gartner calls this the “trough of disillusionment,” and expects it to arrive next year).
One recent study out of MIT Media Lab found that 95% of organizations see no measurable return on their investment in AI (yet).
One of many reasons for this, as noted in a different recent Stanford survey (hat tip: 404 Media), is that the mass influx of AI “workslop” requires colleagues to spend additional time trying to decipher genuine meaning and intent buried in a sharp spike in lazy, automated garbage.
The survey defines workslop as “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” Somewhat reflective of America’s obsession with artifice.
And it found that as use of ChatGPT and other tools has risen in the workplace, it’s created a lot of garbage that requires time to decipher:

“When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues.”
Confusing or inaccurate emails that require time to decipher. Lazy or incorrect research that requires endless additional meetings to correct. Writing full of errors that requires supervisors to edit or correct it themselves:
“A director in retail said: ‘I had to waste more time following up on the information and checking it with my own research. I then had to waste even more time setting up meetings with other supervisors to address the issue. Then I continued to waste my own time having to redo the work myself.’”
In this way, a technology deemed a massive time saver winds up creating all manner of additional downstream productivity costs.
This is made worse by the fact that a lot of these technologies are being rushed into mass adoption in business and academia before they’re fully cooked. And by the fact that the real-world capabilities of the products are being wildly overstated by both the companies involved and a lazy media.
This isn’t inherently the fault of the AI; it’s the fault of the reckless, greedy, and often incompetent people high in the extraction class dictating the technology’s implementation. And of the people so desperate to be innovation-smacked that they’re simply not thinking things through. “AI” will get better, though any claim of HAL-9000 type sentience will remain mythology for the foreseeable future.
Obviously measuring the impact of this workplace workslop is an imprecise science, but the researchers at the Stanford Social Media Lab try:

“Each incidence of workslop carries real costs for companies. Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop. Based on participants’ estimates of time spent, as well as on their self-reported salary, we find that these workslop incidents carry an invisible tax of $186 per month. For an organization of 10,000 workers, given the estimated prevalence of workslop (41%), this yields over $9 million per year in lost productivity.”
The workplace isn’t the only place where the rushed application of a broadly misrepresented and painfully under-cooked technology is making unproductive waves. When media outlets rushed to adopt AI for journalism and headlines (like at CNET), they, too, found that the human editorial costs to correct and fix all the problems, plagiarism, false claims, and errors really didn’t make the value equation worth their time. Apple found that LLMs couldn’t even write basic headlines with any accuracy.
Elsewhere in media you have folks building giant (badly) automated aggregation and bullshit machines, devoid of any ethical guardrails, in a bid to hoover up ad engagement. That’s not only repurposing the work of real journalists; it’s redirecting an already dwindling pool of ad revenue away from their work. And it’s undermining any sort of ethical quest for real, informed consensus in the authoritarian age.
This is all before you even get to the environmental and energy costs of AI slop.
Some of this is the ordinary growing pains of new technology. But a ton of it is the direct result of poor management, bad institutional leadership, irresponsible tech journalism, and intentional product misrepresentation. And next year is likely going to be a major reckoning and inflection point as markets (and people in the real world) finally begin to separate fact from fiction.
Stanford Study: ‘AI’ Generated ‘Workslop’ Actually Making Productivity Worse
