locking/rwsem: Micro-optimize rwsem_try_write_lock_unqueued()

The atomic_long_cmpxchg_acquire() in rwsem_try_write_lock_unqueued() is
replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and
generate slightly better assembly code.

There is no functional change.

Signed-off-by: Waiman Long <longman@redhat.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Link: http://lkml.kernel.org/r/20190404174320.22416-5-longman@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Waiman Long 2019-04-04 13:43:13 -04:00 committed by Ingo Molnar
parent 12a30a7fc1
commit a338ecb07a


@@ -259,21 +259,16 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
  */
 static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 {
-	long old, count = atomic_long_read(&sem->count);
-
-	while (true) {
-		if (!(count == 0 || count == RWSEM_WAITING_BIAS))
-			return false;
+	long count = atomic_long_read(&sem->count);
 
-		old = atomic_long_cmpxchg_acquire(&sem->count, count,
-				      count + RWSEM_ACTIVE_WRITE_BIAS);
-		if (old == count) {
+	while (!count || count == RWSEM_WAITING_BIAS) {
+		if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
+					count + RWSEM_ACTIVE_WRITE_BIAS)) {
 			rwsem_set_owner(sem);
 			return true;
 		}
-
-		count = old;
 	}
+	return false;
 }
 
 static inline bool owner_on_cpu(struct task_struct *owner)