perf/core: Explain perf_sched_mutex
To clarify why atomic_inc_return(&perf_sched_events) is not sufficient
and a mutex is needed to order static branch enabling vs the atomic
counter increment, this adds a comment with a short explanation.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170829140103.6563-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 4c4de7d3c8
commit 5bce9db189
kernel/events/core.c
@@ -9394,6 +9394,11 @@ static void account_event(struct perf_event *event)
 		inc = true;
 
 	if (inc) {
+		/*
+		 * We need the mutex here because static_branch_enable()
+		 * must complete *before* the perf_sched_count increment
+		 * becomes visible.
+		 */
 		if (atomic_inc_not_zero(&perf_sched_count))
 			goto enabled;
 
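For context, the hunk sits on the slow path of account_event(). Below is a
minimal userspace sketch of the pattern the new comment describes; it is an
illustration only, not kernel code. The names sched_mutex, sched_count,
sched_key_enabled, inc_not_zero() and account() are hypothetical stand-ins,
with C11 atomics and a pthread mutex in place of perf_sched_mutex,
perf_sched_count, atomic_inc_not_zero() and static_branch_enable():

	/*
	 * Userspace sketch of the enable-before-increment pattern.
	 * All identifiers here are hypothetical stand-ins for the
	 * kernel primitives named in the lead-in above.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static pthread_mutex_t sched_mutex = PTHREAD_MUTEX_INITIALIZER;
	static atomic_int sched_count;        /* stand-in for perf_sched_count */
	static atomic_bool sched_key_enabled; /* stand-in for the static key   */

	/* Stand-in for atomic_inc_not_zero(): increment only if already > 0. */
	static bool inc_not_zero(atomic_int *v)
	{
		int old = atomic_load(v);

		while (old != 0)
			if (atomic_compare_exchange_weak(v, &old, old + 1))
				return true;
		return false;
	}

	static void account(void)
	{
		/* Fast path: succeeds only after some earlier slow path
		 * published count > 0, i.e. after the "key" is enabled. */
		if (inc_not_zero(&sched_count))
			return;

		pthread_mutex_lock(&sched_mutex);
		if (atomic_load(&sched_count) == 0) {
			/* The "static_branch_enable()" step.  Only one
			 * thread performs it; racing first-callers wait on
			 * the mutex, so by the time anyone observes
			 * count > 0 the enable has fully completed. */
			atomic_store(&sched_key_enabled, true);
		}
		/* Publishing count > 0 lets later callers bypass the mutex. */
		atomic_fetch_add(&sched_count, 1);
		pthread_mutex_unlock(&sched_mutex);
	}

	int main(void)
	{
		account();
		account();
		printf("key=%d count=%d\n",
		       (int)atomic_load(&sched_key_enabled),
		       atomic_load(&sched_count));
		return 0;
	}

This also shows why a plain atomic_inc_return() would not be enough: a
second caller could observe a nonzero count, skip the enable step, and
proceed while the first caller is still in the middle of enabling the
static branch. Taking the mutex on the zero-to-nonzero transition
guarantees the enable completes before the count ever becomes visible
as nonzero.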