kernel_optimize_test/kernel/locking
Davidlohr Bueso 07879c6a37 sched/wake_q: Reduce reference counting for special users
Some users, specifically futexes and rwsems, required fixes
that allowed the callers to be safe when wakeups occur before
they are expected by wake_up_q(). Such scenarios also rely on
reference counting, and until now were pivoting on wake_q doing
it. With the wake_q_add() call being moved down, this can no
longer be the case. As such we end up with a double task
refcounting overhead; and these callers care about that
overhead, being rather core-ish.
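
To make the overhead concrete, here is a minimal sketch of the
caller pattern being described; the snippet is illustrative
(names and locking are placeholders), not code from the tree:

	get_task_struct(p);		/* caller pins p: the wakeup may
					 * fire before wake_up_q() runs */
	wake_q_add(&wake_q, p);		/* takes a *second* reference */
	raw_spin_unlock(&lock);
	wake_up_q(&wake_q);		/* drops wake_q's reference */
	put_task_struct(p);		/* caller drops its own pin */

Two get/put pairs per wakeup, where one would do.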

This patch introduces a wake_q_add_safe() call for callers
that have already done the refcounting and whose task is
therefore 'safe' from the wake_q point of view, in that a
reference is held throughout the entire queue/wakeup cycle.
Thus wake_q_add() takes an internal reference on the task,
while wake_q_add_safe() consumes the reference the caller
already holds.
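
A sketch of the resulting pairing, condensed from the patch's
description (queueing details simplified; see kernel/sched/core.c
for the real thing):

	static bool __wake_q_add(struct wake_q_head *head,
				 struct task_struct *task)
	{
		struct wake_q_node *node = &task->wake_q;

		/*
		 * Atomically mark the task queued; if ->wake_q is already
		 * non-NULL, the task is queued (by us or someone else)
		 * and will get its wakeup from that queueing.
		 */
		smp_mb__before_atomic();
		if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
			return false;

		/* The head is context local; there can be no concurrency. */
		*head->lastp = node;
		head->lastp = &node->next;
		return true;
	}

	void wake_q_add(struct wake_q_head *head, struct task_struct *task)
	{
		if (__wake_q_add(head, task))
			get_task_struct(task);	/* internal reference */
	}

	void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
	{
		if (!__wake_q_add(head, task))
			put_task_struct(task);	/* already queued: consume
						 * the caller's reference */
	}

With wake_q_add_safe() the reference the caller took to survive
an early wakeup is the same one wake_up_q() eventually drops, so
the double get/put pair above disappears.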

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Xie Yongji <xieyongji@baidu.com>
Cc: Yongji Xie <elohimes@gmail.com>
Cc: andrea.parri@amarulasolutions.com
Cc: lilin24@baidu.com
Cc: liuqi16@baidu.com
Cc: nixun@baidu.com
Cc: yuanlinsi01@baidu.com
Cc: zhangyu31@baidu.com
Link: https://lkml.kernel.org/r/20181218195352.7orq3upiwfdbrdne@linux-r8p5
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:03:28 +01:00
lockdep_internals.h locking/lockdep: Provide enum lock_usage_bit mask names 2019-01-21 11:18:56 +01:00
lockdep_proc.c locking/lockdep: Make class->ops a percpu counter and move it under CONFIG_DEBUG_LOCKDEP=y 2018-10-09 09:56:33 +02:00
lockdep_states.h
lockdep.c locking/lockdep: Add debug_locks check in __lock_downgrade() 2019-02-04 09:03:27 +01:00
locktorture.c drm pull for 4.19-rc1 2018-08-15 17:39:07 -07:00
Makefile
mcs_spinlock.h locking/mcs: Use smp_cond_load_acquire() in MCS spin loop 2018-04-27 09:48:49 +02:00
mutex-debug.c locking/mutex: Replace spin_is_locked() with lockdep 2018-11-12 09:06:22 -08:00
mutex-debug.h
mutex.c kernel/locking/mutex.c: remove caller signal_pending branch predictions 2019-01-04 13:13:48 -08:00
mutex.h
osq_lock.c
percpu-rwsem.c
qrwlock.c
qspinlock_paravirt.h mm: remove include/linux/bootmem.h 2018-10-31 08:54:16 -07:00
qspinlock_stat.h locking/qspinlock_stat: Count instances of nested lock slowpaths 2018-10-17 08:37:31 +02:00
qspinlock.c locking/pvqspinlock: Extend node size when pvqspinlock is configured 2018-10-17 08:37:32 +02:00
rtmutex_common.h locking/rtmutex: Handle non enqueued waiters gracefully in remove_waiter() 2018-03-28 23:01:30 +02:00
rtmutex-debug.c
rtmutex-debug.h
rtmutex.c locking/rtmutex: Fix the preprocessor logic with normal #ifdef #else #endif 2018-09-11 08:12:00 +02:00
rtmutex.h
rwsem-spinlock.c
rwsem-xadd.c sched/wake_q: Reduce reference counting for special users 2019-02-04 09:03:28 +01:00
rwsem.c locking/rwsem: Make owner store task pointer of last owning reader 2018-09-10 12:04:07 +02:00
rwsem.h locking/rwsem: Make owner store task pointer of last owning reader 2018-09-10 12:04:07 +02:00
semaphore.c
spinlock_debug.c
spinlock.c locking/core: Remove break_lock field when CONFIG_GENERIC_LOCKBREAK=y 2017-12-12 11:24:01 +01:00
test-ww_mutex.c locking/ww_mutex: Fix runtime warning in the WW mutex selftest 2018-10-03 08:56:31 +02:00