locking/spinlocks: Use CONFIG_PREEMPTION
CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT. Both PREEMPT and PREEMPT_RT require the same functionality which today depends on CONFIG_PREEMPT.

Adjust the comments in the locking code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/20190726212124.302995288@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 27972765bd
parent 01b1d88b09
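For context, a minimal sketch of the pattern this series moves the tree toward. CONFIG_PREEMPT, CONFIG_PREEMPT_RT and CONFIG_PREEMPTION are real kernel symbols; my_preempt_path() and do_preemptible_work() are hypothetical placeholders used only for illustration:

/*
 * Sketch: CONFIG_PREEMPTION is the umbrella symbol, selected by both
 * CONFIG_PREEMPT and CONFIG_PREEMPT_RT, so code shared by the two
 * preemption models tests it instead of CONFIG_PREEMPT.
 */
#ifdef CONFIG_PREEMPTION
# define my_preempt_path()	do_preemptible_work()	/* PREEMPT or PREEMPT_RT */
#else
# define my_preempt_path()	do { } while (0)	/* no forced preemption */
#endif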
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -214,7 +214,7 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 
 /*
  * Define the various spin_lock methods. Note we define these
- * regardless of whether CONFIG_SMP or CONFIG_PREEMPT are set. The
+ * regardless of whether CONFIG_SMP or CONFIG_PREEMPTION are set. The
  * various methods are defined as nops in the case they are not
  * required.
  */
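As a rough illustration of the "defined as nops" remark in the hunk above, here is a hedged sketch; sketch_spin_lock() and real_arch_spin_lock() are hypothetical stand-ins, and the real plumbing in include/linux/spinlock.h is more involved:

/*
 * Sketch: on a build with neither SMP nor PREEMPTION there is no
 * concurrent context to protect against, so the lock body can
 * compile down to (almost) nothing; a barrier keeps the compiler
 * from reordering across the critical section.
 */
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPTION)
# define sketch_spin_lock(l)	real_arch_spin_lock(l)
#else
# define sketch_spin_lock(l)	do { (void)(l); barrier(); } while (0)
#endif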
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -96,7 +96,7 @@ static inline int __raw_spin_trylock(raw_spinlock_t *lock)
 
 /*
  * If lockdep is enabled then we use the non-preemption spin-ops
- * even on CONFIG_PREEMPT, because lockdep assumes that interrupts are
+ * even on CONFIG_PREEMPTION, because lockdep assumes that interrupts are
  * not re-enabled during lock-acquire (which the preempt-spin-ops do):
  */
 #if !defined(CONFIG_GENERIC_LOCKBREAK) || defined(CONFIG_DEBUG_LOCK_ALLOC)
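To see why lockdep insists on the non-preemption variant, a hedged sketch of a lockbreak-style irq-disabling acquire loop, loosely modeled on the BUILD_LOCK_OPS loops in kernel/locking/spinlock.c; sketch_spin_lock_irq() is an illustrative name, not the kernel's function:

/*
 * Sketch: a lockbreak-friendly *_lock_irq() acquire. If the trylock
 * fails, interrupts are re-enabled while waiting, so irqs are NOT
 * continuously off across the acquire. That is exactly the behaviour
 * lockdep cannot model, hence the #if above selects the simple
 * spin-ops whenever DEBUG_LOCK_ALLOC is enabled.
 */
static inline void sketch_spin_lock_irq(raw_spinlock_t *lock)
{
	for (;;) {
		local_irq_disable();
		if (do_raw_spin_trylock(lock))
			return;			/* acquired with irqs off */
		local_irq_enable();		/* window lockdep can't track */
		while (raw_spin_is_locked(lock))
			cpu_relax();
	}
}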