Commit 43b3f028 authored by Peter Zijlstra, committed by Ingo Molnar

locking/qspinlock/x86: Fix performance regression under unaccelerated VMs

Dave ran into horrible performance on a VM without PARAVIRT_SPINLOCKS
set and Linus noted that the test-and-set implementation was retarded.

One should spin on the variable with a load, not a RMW.

While there, remove 'queued' from the name, as the lock isn't queued
at all, but a simple test-and-set.

Suggested-by: Linus Torvalds <>
Reported-by: Dave Chinner <>
Tested-by: Dave Chinner <>
Signed-off-by: Peter Zijlstra (Intel) <>
Cc: Peter Zijlstra <>
Cc: Thomas Gleixner <>
Cc: Waiman Long <>
Cc: # v4.2+
Signed-off-by: Ingo Molnar <>
parent edcd591c
arch/x86/include/asm/qspinlock.h
@@ -39,15 +39,23 @@ static inline void queued_spin_unlock(struct qspinlock *lock)

-#define virt_queued_spin_lock virt_queued_spin_lock
+#define virt_spin_lock virt_spin_lock

-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
+static inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
 		return false;

-	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
-		cpu_relax();
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+
+	do {
+		while (atomic_read(&lock->val) != 0)
+			cpu_relax();
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

 	return true;
 }
kernel/locking/qspinlock.c
@@ -111,8 +111,8 @@ static inline void queued_spin_unlock_wait(struct qspinlock *lock)

-#ifndef virt_queued_spin_lock
-static __always_inline bool virt_queued_spin_lock(struct qspinlock *lock)
+#ifndef virt_spin_lock
+static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 {
 	return false;
 }
@@ -289,7 +289,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (pv_enabled())
 		goto queue;

-	if (virt_queued_spin_lock(lock))
+	if (virt_spin_lock(lock))
 		goto release;