Commit aa68744f authored by Waiman Long, committed by Ingo Molnar

locking/qspinlock: Avoid redundant read of next pointer

With optimistic prefetch of the next node cacheline, the next pointer
may have been properly initialized. As a result, the read of
node->next in the contended path may be redundant. This patch
eliminates the redundant read if the next pointer value is not NULL.
Signed-off-by: Waiman Long <>
Signed-off-by: Peter Zijlstra (Intel) <>
Cc: Andrew Morton <>
Cc: Davidlohr Bueso <>
Cc: Douglas Hatch <>
Cc: H. Peter Anvin <>
Cc: Linus Torvalds <>
Cc: Paul E. McKenney <>
Cc: Peter Zijlstra <>
Cc: Scott J Norton <>
Cc: Thomas Gleixner <>

Signed-off-by: Ingo Molnar <>
parent 81b55986
kernel/locking/qspinlock.c
@@ -396,6 +396,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * p,*,* -> n,*,*
 	 */
 	old = xchg_tail(lock, tail);
+	next = NULL;
 
 	/*
 	 * if there was a previous node; link it and wait until reaching the
@@ -463,10 +464,12 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	/*
-	 * contended path; wait for next, release.
+	 * contended path; wait for next if not observed yet, release.
 	 */
-	while (!(next = READ_ONCE(node->next)))
-		cpu_relax();
+	if (!next) {
+		while (!(next = READ_ONCE(node->next)))
+			cpu_relax();
+	}
 
 	arch_mcs_spin_unlock_contended(&next->locked);
 	pv_kick_node(lock, next);