Law School Dean Survives Two-Year Search, Falls To One-Week Culture War – Above the Law

After a two-year search, the University of Arkansas at Fayetteville found its new law school dean. Then, a week later, it unfound her.

Emily Suski, a professor and associate dean at the University of South Carolina’s law school, was announced as the new dean of Arkansas’s law school on January 9. By January 14, the university had “decided to go a different direction in filling the vacancy” based on “feedback from key external stakeholders about the fit between Professor Suski and the university’s vacancy.”

By “key external stakeholders,” the school means conservative politicians seeking cheap headlines. The professor signed onto an amici brief in the Idaho and West Virginia trans student sports ban cases heard by the Supreme Court this week, and with that story dominating the news, right-wing lawmakers saw an opportunity to score points by torpedoing the law school’s new dean.


The brief in question, prepared by Keker, Van Nest & Peters and Suzanne B. Goldberg, the director of Columbia Law’s Sexuality and Gender Law Clinic, isn’t particularly controversial. It doesn’t even wade into the Equal Protection issues in these cases, limiting its inquiry to the West Virginia half of the case and noting that Title IX, by its text and existing caselaw, should protect the student involved: the record is undisputed that they have not undergone male puberty and are already undergoing female hormonal puberty treatment, meaning any attempt to force them into male sports puts the student at a competitive disadvantage on the basis of sex.

This involved too much reading for the professional grievance industry.

Senate President Pro Tempore Bart Hester, a Republican, explained his objection to the Arkansas Advocate:

There’s no way the people of Arkansas want somebody running and educating our next generation of lawyers and judges [to be] someone that doesn’t understand the difference between a man and a woman.

This was, of course, not the argument in the brief. But it does play to the Republican fascination with kids’ genitals that continues to deliver them votes from the sort of people asking Grok to strip pictures of teen actresses. Alas, as the Trump administration has clarified, using AI to create child sexually explicit material is a national free speech concern, while a kid joining the “wrong” bowling team is a grave concern.

Hester also said he was “surprised that this person who has these beliefs made it through the initial scanning processes,” a telling confession that Republican lawmakers believe the hiring process should focus on rooting out theocratic wrongthink. The amicus brief isn’t about “beliefs”; it’s about the legal significance of puberty in competitive sports and the fact that, while male puberty is the inflection point that gives male athletes competitive advantages, all the parties in the case agree that the student involved has not and will never undergo male puberty.

Look, I’m not going to pretend law schools shouldn’t consider a candidate’s past work. If a candidate for the job has a long history of posting racial slurs or something like that, it matters. But it doesn’t matter because that’s the candidate’s beliefs; it matters because it suggests the candidate will act in a manner that brings illegal discrimination and a hostile environment into the institution. There’s nothing about a brief outlining Title IX law and puberty that’s going to impact the law school.

Hester insisted he didn’t threaten funding, but added that “there’s just a basic understanding that the legislature controls the purse strings.” Very cool. Very not extortion.

Governor Sarah Huckabee Sanders’s office took a break from assuring us that the children yearn for the mines to praise the university for “reaching the commonsense decision on this matter in the best interests of students.” Attorney General Tim Griffin, who definitely didn’t request she be fired (his office assures us), “applauds the decision nonetheless.” He just “expressed his dismay at the selection and his confidence that many more qualified candidates could have been identified.” More qualified candidates? They searched for TWO YEARS! Arkansas, this may be tough to hear, but… maybe the problem is you.

The ACLU condemned the decision to fire Suski:

If state officials can threaten to cut funding because they dislike a professor’s legal analysis, then no public employee in Arkansas is safe to speak freely. Under this logic, any public worker could be punished for expressing a belief unless it has first been approved by politicians. That is not governance; it is ideological control.

That is their goal.

Conservatives nab every opportunity to warn that the woke mob would end academic freedom. Then they ended academic freedom. Every accusation is a confession. All that whining whenever a law professor is chastised for using racial slurs or students peacefully protest a hate group is just to set the stage for their more robust assault. Refusing to tolerate illegal discriminatory behavior (at least illegal on paper until this Supreme Court says otherwise) is not the same as firing someone for offering straightforward legal analysis in a brief. To use a poster child of this right-wing whinging, Amy Wax wasn’t disciplined for making arguments about labor law; she was disciplined for bad-mouthing minority students. But these folks spent years blurring the distinctions so they could someday fire a professor just for recognizing that anti-discrimination laws are real.

As State Representative Nicole Clowney put it: “Veiled threats and comments behind closed doors about the political leanings of University of Arkansas faculty and staff are nothing new, sadly. But state elected officials threatening to withhold funding to the entire School based on the political beliefs of the newly hired Dean is a new, terrifying low.”

Tsk tsk. It’s the worst terrifying low… so far.


Culture warriors cancel new U of A law dean before she started [Arkansas Times]

Amid Criticism From Lawmakers, U of Arkansas Rescinds Dean Offer [Inside Higher Ed]

UPDATED: University of Arkansas withdraws incoming law dean’s offer in wake of Republican complaints [Arkansas Advocate]

In capitulating to political pressure to fire new dean, U of A violated Constitution, ACLU says [Arkansas Times]




Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.

Founder Of Prominent Midsize Firm Passes Away – Above the Law

(Image via Getty)

We have some unfortunate news to report out of Washington, D.C., where one of the founding partners of well-known midsize firm Beveridge & Diamond recently passed away.

More than 50 years ago, Albert Beveridge III, 90, founded the firm now known as Beveridge & Diamond with two of his childhood friends from Indiana, the late attorneys William Ruckelshaus and Richard Fairbanks. The trio later brought on Henry Diamond, and from then on, the firm flourished into one of the nation’s most prominent environmental law and litigation practices. Beveridge & Diamond now has more than 175 lawyers and seven offices across the country.

The firm offered the following statement on Beveridge’s passing to the National Law Journal:

“Albert fueled Beveridge & Diamond’s early success, counseling clients on high-stakes corporate transactions and strategic matters, and nurtured a firm culture grounded in intellectual curiosity, collegiality and public service.”

Beveridge, a Harvard Law graduate, continued to serve the firm as Senior Counsel until his death and was often called upon to give historical lessons about environmental law, as well as the firm’s origins.

We here at Above the Law would like to extend our condolences to Albert Beveridge’s family, friends, and colleagues during this difficult time.


Beveridge & Diamond Cofounder Albert Beveridge Dies at 90 [National Law Journal]

Staci Zaretsky is the managing editor of Above the Law, where she’s worked since 2011. She’d love to hear from you, so please feel free to email her with any tips, questions, comments, or critiques. You can follow her on Bluesky, X/Twitter, and Threads, or connect with her on LinkedIn.

The Inevitable Commoditization Of GenAI: What Will It Mean For Legal? – Above the Law

It’s early 2027. Most law firms and in-house legal departments are rapidly moving to OpenAI Legal, which launched in the third quarter of 2026. Most of them cite cost, since OpenAI Legal is still just $20 per month. They are also confident that OpenAI has addressed privacy and confidentiality concerns, and that its new automatic cite-checking ability can accurately verify all outputs. What’s remarkable about the shift is not so much that it happened but the speed at which the transition was made.


This Is Commoditization

The above hypothetical is what happens when a product becomes commoditized, which happens often. A commoditized product is one that has become so commonplace and interchangeable that it loses its uniqueness.

And when that happens, it also loses its pricing power.

Why? Once commoditization occurs, users see little meaningful difference between the various vendor options other than price. So sellers can’t charge a premium for what they provide, particularly when the lower-cost option provides roughly the same features, quality, and performance.

Some examples include things like economy seats on airlines: most customers shop based on price, and the difference in service is relatively small.

Another example is cloud storage, which is now an expected feature: providers are interchangeable, and the market is price-driven.

Commoditization shows up in legal tech when tools that once felt novel become expected infrastructure. At that point, lawyers stop asking “what does this do?” and start asking “why does this cost more than the other one?” If it happens to GenAI, it will have a direct impact on legal tech vendors whose products are based on GenAI. And on their customers.

Indeed, a pretty big tech player may be betting that this could soon happen with GenAI. As Chance Miller noted in a recent episode of 9to5 Daily, citing a report in The Information by Aaron Tilley, the potential for commoditization may be why Apple is proceeding cautiously with developing its own GenAI tools, and it could foreshadow what may happen in legal. Tilley says (emphasis added):

Apple still has a team working on its own internal models that it could take advantage of in the future. But some Apple leaders hold the view that large language models will become commodities in the years to come and that spending a fortune now on its own models doesn’t make sense.

Here’s what Miller concludes: “If Apple leadership truly does believe LLMs will become commodities, then the company’s AI success will depend less on bespoke new models, and more on its ability to control the hardware, software, and services that AI runs on.”


Commoditization of Legal GenAI

Thus far, legal GenAI vendors have faced little competition from outside the legal community. But what would happen if, say, OpenAI decided to target the legal market and release general tools offering the strong privacy protections, enhanced accuracy, and stronger security lawyers and legal professionals crave?

If this were to occur, other players like Google, Anthropic, and Perplexity might follow. The greater market power of these companies could shift the legal GenAI market toward commoditization, where price becomes the primary criterion.

It was just this kind of thing I mentioned in my post about a Business Insider interview of the founders of Harvey, Winston Weinberg and Gabe Pereyra, back in October.

At that point, I noted their statements to the effect that they were less concerned about legal tech vendors and more about competition from OpenAI itself. Somewhat candidly, they admitted that OpenAI could enter the legal tech space directly and cut out the middleman legal tech vendors.

These statements prompted me to observe: “[Weinberg and Pereyra] admit that OpenAI could enter the legal tech space directly and cut out the middleman legal tech vendors. Moreover, even if OpenAI never targets the legal field directly, it very well could release general tools offering the strong privacy protections, enhanced accuracy, and stronger security lawyers and legal professionals crave. In fact, OpenAI recently mentioned a contract review tool it developed and is using internally.”


Today’s Legal AI Marketplace

Today, there is a plethora of vendors offering all sorts of GenAI tools at a fairly high price. Their argument is that legal is a specialized market due to a) the ethical and privacy concerns and b) the need for accuracy. They go on to say that only they can offer the protections the specialized market requires and that open or public systems like ChatGPT, Gemini, Perplexity, or Claude simply can’t meet legal demands. Some even go so far as to say it’s malpractice to use the open systems.

And when it comes to legal research, vendors explain that only they have the data to make the systems work accurately and that this moat protects them. But the moat is not foolproof. The vendor argument ignores that moat-protected legal research is only part of overall legal needs. Moreover, much of the data also exists within client databases that are not protected. More importantly, big players like Google and OpenAI could simply license or acquire the data themselves, spreading those costs across far more customers while still undercutting specialized vendors on price.

Also, setting aside for the moment that vendors’ GenAI tools are also capable of making mistakes and making stuff up (a characteristic of LLMs that is intractable), legal tech vendors assume that just because the open systems haven’t yet made the case that their products can meet legal’s requirements, they never will. Indeed, many of the vendor products depend in part on those open systems’ platforms to function. And OpenAI, at least, is an investor in legal vendors like Harvey.

And as far as the hallucination and inaccuracy problem goes, we are already seeing vendors like Clearbrief offering solutions: tools that automatically verify LLM outputs, primarily with non-GenAI methods. That potentially solves the cost-of-verification problem. What if OpenAI decided to do the same?


A Reality for Legal

Could GenAI legal tools become commoditized? The short answer is yes. The open GenAI providers have vast resources and capabilities. There is little to stop them from offering the privacy and confidentiality protections that lawyers demand. There is little to prevent them from offering tools like those offered by Clearbrief. And if they put their minds to it, they could provide many of the same tools the legal tech vendors who trumpet their uniqueness do now.

And if that happens, the legal GenAI vendors could lose their uniqueness and pricing power. The big GenAI players would be forced to compete primarily on price. Legal tech vendors may not be able to legitimately compete on that basis: they have neither the financial staying power nor the resources. The bigger players can spread costs across many more customers, legal and non-legal, can bundle features into larger platforms, and can absorb margin pressure longer than the smaller legal vendors.

And let’s not forget that many lawyers already use the open tools to do all sorts of things, so transitioning to relying on them for everything would be neither difficult nor time-consuming.

Thus far, the open GenAI providers have been content, like Microsoft, to merely supply the underlying tools that legal tech vendors wrap their products around. But that doesn’t mean the open systems won’t decide to compete directly.


So, What’s Legal to Do?

It would be easy for law firms to just throw up their hands and ignore the commoditization potential. But that’s not necessarily the right call. In fact, law firms and in-house departments can do some things to better prepare for what may be the inevitable commoditization of GenAI tools.

But law firms typically ignore what’s developing in the tech market until it has already happened. By doing so, they risk waking up one morning locked into a bunch of overpriced technology when just-as-good, cheaper products are suddenly available.

Firms can avoid this by paying attention to what is going on in the marketplace and what vendors are doing. Indeed, the best strategy right now may be to keep their powder dry. To pay attention to the marketplace. To regularly review and monitor their tech stack and contractual commitments. To avoid long-term contracts that lock them in. To look hard at things like termination rights and obligations. And to make sure they have an exit strategy should things quickly change.






Stephen Embry is a lawyer, speaker, blogger, and writer. He publishes TechLaw Crossroads, a blog devoted to the examination of the tension between technology, the law, and the practice of law.
Law Firm Sent Out Fake Christmas Vouchers. Staff Want To Ram Coal Up Leadership’s Chimneys. – Above the Law

Phishing attacks represent an ever-increasing threat to law firms. A law firm can find itself staring down massive ransom payments to protect client data, just because someone clicked on a bogus file from an address that looked familiar.

But robust firm cybersecurity leans on two pillars: education to nurture careful and conscientious employees, and employees who wouldn’t crack a smile if the firm burned to the ground. Sometimes these phishing tests put those goals in conflict.

According to RollOnFriday, one firm decided to use the holiday season as a phishing test/disgruntled employee accelerator.

Browne Jacobson, a UK-based law firm with over 800 lawyers, had the bright idea, the week before Christmas, to email employees promising a £100 Christmas voucher to anyone who filled out their employee feedback survey. Clicking the link revealed (surprise!) a cybersecurity training exercise. Merry Christmas! Your reward is humiliation!

In the immortal words of Otter:

While getting hacked by teenagers sitting in a Russian government warehouse presents an exotic threat, disgruntled employees remain the more likely one. Good job pissing everyone off! Oh, and HR must be super excited to learn that no one will ever fill out an employee survey again because IT has conditioned them to auto-delete internal communications. Discretion is the better part of valor, folks. Not every potential threat should be the basis of a test.

If the firm’s position is “we will never offer you money via email,” then say that! Blast that message every quarter. “All compensation and bonus announcements will be delivered in person or through [specific verified channel]. If you receive an email promising money, it’s a scam.” That’s actually useful guidance and builds institutional trust.

There should be no guessing. Running “gotcha” tests just poisons the well.

A spokesperson for Browne Jacobson told ROF, “We recognise that our recent cybersecurity training exercise caused concern among some colleagues, and we understand why people drew a link with our prize draw initiative from earlier in the year”.

Drew a link? This fake offer was styled to echo a real one that the firm used before? That’s not a phishing test then! The only people who would know enough about the legitimate program to use it as a ploy would be people inside the firm anyway.

This isn’t even the first time that a firm got dragged for using false compensation promises as a phishing test. In another story that RollOnFriday broke last summer, Knights sent around an email purporting to inform staff of a salary increase and scolding anyone who opened it for falling for the test.

LOL, why would you think we’d pay your ass more money?!?

And Baker McKenzie actually ran almost this exact same scam before. Last Christmas, they gave staff a voucher promise, but the very same day, they took it away. In that case, though, it just promised a bonus; tying it to a feedback survey is the new twist.

You’d think firms would learn from these stories. Or at least follow the advice of their own national cybersecurity experts. The National Cyber Security Centre explicitly warns companies not to run simulated phishing attacks like these. According to the NCSC, phishing simulations both don’t work and erode institutional trust.

A source told ROF it “left staff absolutely livid”.

Well, yeah.

If you want staff to be vigilant about phishing, you need them to be on your team. You need them invested in the firm’s security because they feel like valued members of the organization. Phishing tests will always involve a little humiliation, but if a firm insists on running them, those tests have to be tempered by the need to keep folks happy. You especially cannot build a cooperative security environment while also playing Three-Card Monte with people’s livelihoods. Money around the holidays matters a lot. Yes, that’s what makes these promises a more dangerous phishing risk. But it’s also what makes punking people a more damning morale blow.


EXCLUSIVE: Lawyers livid over Browne Jacobson’s Xmas phishing trap [Roll on Friday]




Joe Patrice is a senior editor at Above the Law and co-host of Thinking Like A Lawyer. Feel free to email any tips, questions, or comments. Follow him on Twitter or Bluesky if you’re interested in law, politics, and a healthy dose of college sports news. Joe also serves as a Managing Director at RPN Executive Search.

How Appealing Weekly Roundup – Above the Law



Ed. Note: A weekly roundup of just a few items from Howard Bashman’s How Appealing blog, the Web’s first blog devoted to appellate litigation. Check out these stories and more at How Appealing.


“Did a Supreme Court Loss Embolden Trump on the Insurrection Act? In refusing to let the president deploy National Guard troops in Illinois under an obscure law, the justices may have made him more apt to invoke greater powers.” Adam Liptak of The New York Times has this news analysis.

“Conservatives On X Are Pretty Sure Amy Coney Barrett Is Woke Now; The author of perhaps the most aggressively anti-trans Supreme Court opinion in recent memory is getting branded as an ideological traitor who ignores ‘biological truth’”: Jay Willis has this post at his “Balls & Strikes” Substack site.


“7 Predictions For The Legal World In 2026: SCOTUS retirements, $10 million in profits per partner, Trump v. Biglaw, Kirkland v. Wachtell; whatever it ends up being, the year ahead won’t be boring.” David Lat has this post at his “Original Jurisdiction” Substack site.

“Supreme Court allows Illinois congressman to challenge mail-in balloting; The high court’s 7-2 ruling dealt with the narrow question of whether Republican congressman Michael Bost and others had standing to sue”: Justin Jouvenal and Patrick Marley of The Washington Post have this report.


“Newsom Says California Will Not Extradite Abortion Provider to Louisiana; The case, escalating the interstate battle over abortion, is the second time Louisiana has criminally charged out-of-state doctors with sending abortion pills to Louisiana residents”: Pam Belluck of The New York Times has this report.

“Renee Good’s Family Should Be Able to Sue the Officer Who Killed Her”: Law professors Erwin Chemerinsky and Burt Neuborne have this guest essay online at The New York Times.

State Department Threatens UK Over Grok Investigation, Because Only The US Is Allowed To Ban Foreign Apps – Above the Law

So let me get this straight. The United States government spent years championing a ban on TikTok, rushed it through the Supreme Court with claims of grave national security threats, and got a 9-0 ruling blessing government censorship of an entire platform used by 170 million Americans… and now the US State Department thinks it’s all cool to threaten the United Kingdom for considering similar action against X’s Grok chatbot over its generation of sexualized deepfake images, including those of children?

We all know that the US can be hypocritical, but this all seems a bit over the top.

Here’s what actually happened: the UK’s communications regulator Ofcom opened an investigation into whether X violated the country’s Online Safety Act by allowing Grok to create and distribute non-consensual intimate images (NCII). This isn’t some theoretical concern. As I detailed last week, Grok has been churning out sexualized images at an alarming rate, with users publicly generating “undressing” content and worse, in many cases targeting real women and girls. UK Technology Secretary Liz Kendall told Parliament that Ofcom could impose fines of up to £18 million or seek a court order to block X entirely if violations are found.

Enter Sarah B. Rogers, the Trump-appointed Under Secretary of State for Public Diplomacy, who decided this was the perfect moment to threaten a close US ally. In an interview with GB News, Rogers declared:


I would say from America’s perspective, nothing is off the table when it comes to free speech. Let’s wait and see what Ofcom does and we’ll see what America does in response.

She went further, accusing the British government of wanting “the ability to curate a public square, to suppress political viewpoints it dislikes” and claiming that X has “a political valence that the British government is antagonistic to.”

This is weapons-grade nonsense, and Rogers knows it.

The UK isn’t investigating X because they don’t like Elon Musk’s politics. They’re investigating because Grok is being used to create sexualized deepfakes of real people without consent, including minors. Unless Rogers is prepared to stand up and argue that generating non-consensual sexualized imagery of real people, including children, is somehow quintessential “conservative speech” that the US must defend, she’s deliberately mischaracterizing what’s happening here. Is that really the hill the State Department wants to die on? That deepfake NCII is conservative speech?

As UK Prime Minister Keir Starmer’s spokesperson put it:

“It’s about the generation of criminal imagery of children and women and girls that is not acceptable. We cannot stand by and let that continue. And that is why we’ve taken the action we have.”

But here’s where the hypocrisy becomes truly spectacular: just this week, the Republican-led Senate unanimously passed the DEFIANCE Act for the second time. This legislation would create a federal civil cause of action allowing victims of non-consensual deepfake intimate imagery to sue the producers of such content. No matter what you think of that particular bill (I have my concerns about the specifics of how it works), it’s quite something when the State Department issues a mafioso-like threat to the UK over taking any action in response to what’s happening on X at the same time the MAGA-led US Senate is voting unanimously to move forward on a bill that could have a similar impact.

So let’s review the US government’s position:

  • Banning an entire social media platform because China might access data (that they can already buy from data brokers anyway)? Perfectly fine, rush it through SCOTUS.
  • Allowing victims to sue over non-consensual sexualized deepfakes? Great idea, unanimous Senate support.
  • Another country investigating whether a platform violated laws against generating sexualized deepfakes of minors? UNACCEPTABLE CENSORSHIP, NOTHING IS OFF THE TABLE.

The MAGA mindset in a nutshell: performative nonsense when it fits within a certain bucket (in this case, “OMG Europeans censoring Elon”), no matter that it conflicts with stated beliefs elsewhere.

It’s important to consider all of this in light of the whole TikTok ban fiasco. When the Supreme Court blessed Congress’s decision to ban an app based on vague national security concerns (concerns so urgent that the Biden administration immediately decided not to enforce the ban after winning in court, and which Trump has continued to not enforce for an entire year), America effectively torched its moral authority to criticize other countries for restricting platforms.

As I wrote when that ruling came down, we essentially said it’s okay to create a Great Firewall of America. We told the world that if you claim “national security” loudly enough, with sufficient “bipartisan support,” you can ban whatever app you want, First Amendment concerns be damned. Chinese officials have pointed to the US’s TikTok ban to justify their own internet restrictions, and now we’re handing authoritarian regimes another gift: the US will threaten retaliation if you try to enforce laws against platforms generating sexualized imagery of children.

When you blow up the principle that countries shouldn’t ban apps based on content concerns, you don’t get to suddenly rediscover those principles when it’s your billionaire’s app on the chopping block.

And make no mistake about what Rogers is really defending here. Grok continues to generate sexualized content at scale. Elon Musk continues running X like an edgelord teenager who knows he’s rich enough to avoid consequences, and women, especially young women, continue facing harassment and abuse via these tools.

The State Department’s threats aren’t about defending free speech. They’re about protecting Musk’s business interests. It’s about maintaining the double standard that got us here: American companies can do whatever they want globally, but foreign companies operating in America face existential threats for far less.

The UK is investigating potential violations of laws against generating sexualized imagery of minors and non-consenting adults. If the State Department thinks that’s “censorship,” they should explain why the Senate just voted unanimously to let victims sue over exactly that conduct.

Look, the UK’s investigation may or may not lead anywhere. Ofcom may find violations, or it may not. It may impose fines, or it may not. It may seek to block X, or it may not. But the one thing the US government absolutely cannot do with a straight face is threaten the UK for even considering it.

You don’t get to ban TikTok and then act outraged when other countries contemplate similar actions against American companies. You don’t get to pass unanimous legislation allowing lawsuits over deepfake NCII while your State Department calls investigations into that same deepfake NCII “censorship.” You don’t get to spend years claiming that national security justifies any restriction on platforms and then suddenly discover that “free speech” means other countries can’t enforce their laws.

There are no principles here, only sheer abuse of power. And Sarah Rogers’s threat to the UK makes that abundantly clear: the rules we claimed justified banning TikTok apparently only apply when we’re the ones doing the banning.


State Department Threatens UK Over Grok Investigation, Because Only The US Is Allowed To Ban Foreign Apps

More Law-Related Stories From Techdirt:

Justice Gorsuch Reminds: The Fourth Amendment Isn’t Dead Yet

Trump, Ellison Wage War On ‘Woke Netflix’ In Effort To Scuttle Warner Brothers Deal, Dominate U.S. Media

Trump Tries To Disappear Impeachment References At Smithsonian

Morning Docket: 01.16.26 – Above the Law

* ICE detaining Native Americans and then telling their tribes that they will only release information about the people they’ve illegally detained if the tribes agree to sign over sovereignty to assist in immigration sweeps. [Washington Post]

* Massive college basketball point shaving scheme charged. I guess this is why you always take the under. [NBC News]

* Speaking of gambling, the Tom Goldstein trial began yesterday. [National Law Journal]

* Florida follows Texas in dropping ABA accreditation. Smart law students should follow their friends to out-of-state schools. [Inside Higher Ed]

* Judge suspended for giving defendant a dollar to cover her bond. [ABA Journal]

* Appeals court decides along party lines that federal judges can’t stop deportations, even unconstitutional ones, until the immigration adjudication process is complete. Anything to make constitutional rights more difficult to exercise! [ACLU]

Lawyer Of The Year Stays In Good Trouble – See Also – Above the Law

Rachel Cohen Keeps Pushing Against Authoritarianism: Somebody has to take these 10-year-old phone thieves to task.

Bonus Dollars And A Nonequity Partner Track: Check out Sullivan & Cromwell’s new bonus program!

Who Needs TV When You Can Read Complaints?: Kyrsten Sinema sued for alienation of affection.

Time To Make GenAI Competency Mandatory?: It’s a bold opinion for bold times.

UMaine Law School Preps Community Against ICE: Know your rights and stay safe!

No Longer The Baby Among Law Schools – Above the Law


Elon University plans to open a new law school in Charlotte, beginning in 2027. What is the most recent new law school to earn ABA accreditation, becoming the 197th approved law school program and opening its doors in 2023?


Hint: Unlike the new school bound for Charlotte, this one is not in North Carolina… though you might be forgiven for thinking it is.



See the answer on the next page.

AI Startup AlphaLit Raises $3.2M Seed Round To Screen And Score Smaller Cases And Route Them To Lawyers

Over $55 million worth of meritorious civil claims go unfiled annually, particularly in working-class communities, because over 64% of prospective plaintiffs’ calls to law firms are ignored, says legal AI startup AlphaLit.

The reason firms ignore those calls is that they cannot financially justify vetting all those small cases. “You might need to have 100 conversations to take on five or six cases,” says AlphaLit founder and CEO Anand Upadhye. “That doesn’t pencil.”

Aiming to use AI to solve this problem for smaller cases and smaller law firms, AlphaLit said today it has raised a $3.2 million seed round.

Participants in the round were venture capital firms Lux Capital, Slow Ventures, and Bright Ventures, alongside several angel investors, including Ken Cornick, the cofounder of CLEAR, and Jason Boehmig, executive chair and cofounder of Ironclad.

They join previous investors including Sequoia Scout Fund, Base Ventures, and Casetext co-founder Jake Heller.

Scoring Smaller Cases

AlphaLit tackles this problem through a combination of voice AI and algorithmic case scoring.

When prospective plaintiffs engage with AlphaLit’s voice AI platform, it interviews them to understand their issue, evaluates their evidence against legal frameworks, and then drafts a case memo.

Using its proprietary algorithms, the company assigns each case an AlphaLit score, which is based on liability, evidence quality, and potential damages. If the score reaches a certain threshold, AlphaLit engages with the plaintiff and sends the case to an attorney in its network.
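AlphaLit’s actual scoring model is proprietary, so as a purely illustrative sketch, the score-then-route step described above might look something like this. The field names, the 0–100 scale, the weights, and the threshold are all assumptions invented for this example, not AlphaLit’s real algorithm:

```python
from dataclasses import dataclass


@dataclass
class CaseIntake:
    """Facts gathered by the voice-AI interview (illustrative fields only)."""
    liability: float          # 0.0-1.0: how clearly the law supports the claim
    evidence_quality: float   # 0.0-1.0: strength of documents and witnesses
    potential_damages: float  # estimated recovery, in dollars


def score_case(intake: CaseIntake, damages_cap: float = 100_000.0) -> float:
    """Combine the three stated factors into a single 0-100 score.

    The weights below are made up for illustration; the real
    AlphaLit score is computed by proprietary algorithms.
    """
    damages_factor = min(intake.potential_damages / damages_cap, 1.0)
    raw = (0.40 * intake.liability
           + 0.35 * intake.evidence_quality
           + 0.25 * damages_factor)
    return round(raw * 100, 1)


def route(intake: CaseIntake, threshold: float = 60.0) -> str:
    """Refer qualifying cases to the attorney network; decline the rest."""
    return "refer_to_attorney" if score_case(intake) >= threshold else "decline"


# Strong liability and evidence with modest damages clears the bar;
# a weak case on all three factors does not.
strong = CaseIntake(liability=0.9, evidence_quality=0.8, potential_damages=40_000)
weak = CaseIntake(liability=0.1, evidence_quality=0.1, potential_damages=1_000)
print(score_case(strong), route(strong))  # 74.0 refer_to_attorney
print(score_case(weak), route(weak))      # 7.8 decline
```

The interesting design property is the hard threshold: attorneys never see the declined cases at all, which is what keeps their per-case vetting cost near zero.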

Already, the company has created some 80 cases through its platform. It is operating only in California for now, and only for employment-related cases, but it plans to expand both the types of cases it handles and the jurisdictions it covers.

“Unless your case is worth millions or you are well-connected, it’s almost impossible to get a lawyer on the phone,” said Upadhye. “By using AI to handle the heavy lifting of intake and fact-gathering, we are lowering the cost of pre-litigation and opening legal access for millions of Americans.”

Solving the Small Case Problem

AlphaLit helps attorneys in smaller law firms overcome three major obstacles that make smaller cases too expensive for them to accept, Upadhye told me in an interview:

  • Marketing and advertising. Marketing can be complicated and costly, especially for smaller firms that lack marketing staff. AlphaLit does the marketing for them.
  • Intake. Intake can be time-consuming and difficult to schedule, especially for plaintiffs who work during the day. The actual intake process often requires specialized staff and specialized expertise. AlphaLit’s voice AI platform handles all the intake and delivers a case memo.
  • Evaluation and underwriting. Even after the prior steps, an attorney needs to evaluate the case and decide whether to take it on. AlphaLit’s algorithm performs that evaluation, only referring cases that meet a threshold.

In a statement provided by AlphaLit, Peter Hebert, partner and co-founder at Lux Capital, said: “AlphaLit is attacking a massive, latent market. The legal industry has struggled with the economics of high-volume, lower-dollar claims. Anand and his team have built the technical infrastructure to turn these overlooked claims into a viable, scalable asset class.”

Before founding AlphaLit in 2024, Upadhye was director of investments at the litigation funding company Legalist. Earlier, he was vice president of business development at Casetext, before it was acquired by Thomson Reuters.

“We are a mission-driven company,” Upadhye told me, aiming to make “a meaningful impact on the number of people who are protected under the law.”