kernel_optimize_test/kernel/trace
Steven Rostedt (VMware) a33614d52e tracing: Restructure trace_clock_global() to never block
commit aafe104aa9096827a429bc1358f8260ee565b7cc upstream.

It was reported that a fix to the ring buffer recursion detection would
cause a hung machine when performing suspend / resume testing. The
following backtrace was extracted from debugging that case:

Call Trace:
 trace_clock_global+0x91/0xa0
 __rb_reserve_next+0x237/0x460
 ring_buffer_lock_reserve+0x12a/0x3f0
 trace_buffer_lock_reserve+0x10/0x50
 __trace_graph_return+0x1f/0x80
 trace_graph_return+0xb7/0xf0
 ? trace_clock_global+0x91/0xa0
 ftrace_return_to_handler+0x8b/0xf0
 ? pv_hash+0xa0/0xa0
 return_to_handler+0x15/0x30
 ? ftrace_graph_caller+0xa0/0xa0
 ? trace_clock_global+0x91/0xa0
 ? __rb_reserve_next+0x237/0x460
 ? ring_buffer_lock_reserve+0x12a/0x3f0
 ? trace_event_buffer_lock_reserve+0x3c/0x120
 ? trace_event_buffer_reserve+0x6b/0xc0
 ? trace_event_raw_event_device_pm_callback_start+0x125/0x2d0
 ? dpm_run_callback+0x3b/0xc0
 ? pm_ops_is_empty+0x50/0x50
 ? platform_get_irq_byname_optional+0x90/0x90
 ? trace_device_pm_callback_start+0x82/0xd0
 ? dpm_run_callback+0x49/0xc0

With the following RIP:

RIP: 0010:native_queued_spin_lock_slowpath+0x69/0x200

Since the fix to the recursion detection allows a single recursion to
happen while tracing, this led to trace_clock_global() taking a spin
lock and then trying to take it again:

ring_buffer_lock_reserve() {
  trace_clock_global() {
    arch_spin_lock() {
      queued_spin_lock_slowpath() {
        /* lock taken */
        (something else gets traced by function graph tracer)
          ring_buffer_lock_reserve() {
            trace_clock_global() {
              arch_spin_lock() {
                queued_spin_lock_slowpath() {
                /* DEAD LOCK! */
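
For context, the pre-fix logic in kernel/trace/trace_clock.c
unconditionally took an arch spinlock to keep the global clock
monotonic, which is the lock being re-taken above. A simplified sketch
(not the verbatim source; struct layout and helpers abbreviated):

  /* Shared state guarded by a raw arch spinlock (simplified) */
  static struct {
          u64             prev_time;
          arch_spinlock_t lock;
  } trace_clock_struct = {
          .lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED,
  };

  u64 notrace trace_clock_global(void)
  {
          unsigned long flags;
          int this_cpu;
          u64 now;

          raw_local_irq_save(flags);

          this_cpu = raw_smp_processor_id();
          now = sched_clock_cpu(this_cpu);

          /* NMIs can't risk the lock; use the local clock as-is */
          if (unlikely(in_nmi()))
                  goto out;

          /* Re-entry from a traced callee deadlocks right here */
          arch_spin_lock(&trace_clock_struct.lock);

          /* Never let the global clock go backwards */
          if ((s64)(now - trace_clock_struct.prev_time) < 0)
                  now = trace_clock_struct.prev_time + 1;

          trace_clock_struct.prev_time = now;

          arch_spin_unlock(&trace_clock_struct.lock);

   out:
          raw_local_irq_restore(flags);

          return now;
  }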

Tracing should *never* block, as it can lead to strange lockups like the
above.

Restructure the trace_clock_global() code so that, instead of taking a
lock in order to update the recorded "prev_time", it simply uses the
value; when two events on two different CPUs call this at the same
time, it really doesn't matter which one goes first. Use a trylock to
grab the lock for updating prev_time, and if it fails, simply try again
the next time. If the lock could not be taken, that means something
else is already updating it.
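
A minimal sketch of the restructured function described above, reusing
the trace_clock_struct from the earlier sketch (simplified; the exact
upstream code may differ in detail):

  u64 notrace trace_clock_global(void)
  {
          unsigned long flags;
          int this_cpu;
          u64 now, prev_time;

          raw_local_irq_save(flags);

          this_cpu = raw_smp_processor_id();
          now = sched_clock_cpu(this_cpu);

          /* NMIs still bypass the global clock state */
          if (unlikely(in_nmi()))
                  goto out;

          /* Read prev_time locklessly and only use it ... */
          prev_time = READ_ONCE(trace_clock_struct.prev_time);

          /* ... to keep the returned clock monotonic */
          if ((s64)(now - prev_time) < 0)
                  now = prev_time;

          /*
           * Updating prev_time is best effort: if the trylock
           * fails, another CPU is already updating it and this
           * event simply skips the update. Never block.
           */
          if (arch_spin_trylock(&trace_clock_struct.lock)) {
                  /* Re-read; it may have moved since the read above */
                  prev_time = READ_ONCE(trace_clock_struct.prev_time);
                  if ((s64)(now - prev_time) < 0)
                          now = prev_time;

                  trace_clock_struct.prev_time = now;

                  arch_spin_unlock(&trace_clock_struct.lock);
          }
   out:
          raw_local_irq_restore(flags);

          return now;
  }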

Link: https://lkml.kernel.org/r/20210430121758.650b6e8a@gandalf.local.home

Cc: stable@vger.kernel.org
Tested-by: Konstantin Kharlamov <hi-angel@yandex.ru>
Tested-by: Todd Brandt <todd.e.brandt@linux.intel.com>
Fixes: b02414c8f0 ("ring-buffer: Fix recursion protection transitions between interrupt context") # started showing the problem
Fixes: 14131f2f98 ("tracing: implement trace_clock_*() APIs") # where the bug happened
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=212761
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-11 14:47:40 +02:00
blktrace.c
bpf_trace.c bpf: Unbreak BPF_PROG_TYPE_KPROBE when kprobe is called via do_int3 2021-02-17 11:02:25 +01:00
bpf_trace.h
fgraph.c fgraph: Initialize tracing_graph_pause at task creation 2021-02-10 09:29:16 +01:00
ftrace_internal.h
ftrace.c ftrace: Handle commands when closing set_ftrace_filter file 2021-05-11 14:47:12 +02:00
Kconfig tracing/kprobes: Do the notrace functions check without kprobes on ftrace 2021-01-19 18:27:19 +01:00
kprobe_event_gen_test.c
Makefile
power-traces.c
preemptirq_delay_test.c
ring_buffer_benchmark.c
ring_buffer.c ring-buffer: Force before_stamp and write_stamp to be different on discard 2021-03-09 11:11:12 +01:00
rpm-traces.c
synth_event_gen_test.c
trace_benchmark.c
trace_benchmark.h
trace_boot.c tracing: Disable ftrace selftests when any tracer is running 2020-12-30 11:54:28 +01:00
trace_branch.c
trace_clock.c tracing: Restructure trace_clock_global() to never block 2021-05-11 14:47:40 +02:00
trace_dynevent.c
trace_dynevent.h
trace_entries.h
trace_event_perf.c
trace_events_filter_test.h
trace_events_filter.c
trace_events_hist.c tracing: Check return value of __create_val_fields() before using its result 2020-10-15 12:00:59 -04:00
trace_events_inject.c
trace_events_synth.c tracing: Make -ENOMEM the default error for parse_synth_field() 2020-11-02 15:58:32 -05:00
trace_events_trigger.c
trace_events.c tracing: Do not count ftrace events in top level enable output 2021-02-17 11:02:20 +01:00
trace_export.c treewide: Convert macro and uses of __section(foo) to __section("foo") 2020-10-25 14:51:49 -07:00
trace_functions_graph.c
trace_functions.c
trace_hwlat.c tracing: Remove WARN_ON in start_thread() 2020-11-30 21:43:07 -05:00
trace_irqsoff.c tracing: Use pause-on-trace with the latency tracers 2021-02-10 09:29:16 +01:00
trace_kdb.c
trace_kprobe_selftest.c
trace_kprobe_selftest.h
trace_kprobe.c tracing/kprobe: Fix to support kretprobe events on unloaded modules 2021-02-10 09:29:16 +01:00
trace_mmiotrace.c
trace_nop.c
trace_output.c
trace_output.h
trace_preemptirq.c
trace_printk.c Updates for tracing and bootconfig: 2020-10-15 15:51:28 -07:00
trace_probe_tmpl.h
trace_probe.c
trace_probe.h tracing: Move is_good_name() from trace_probe.h to trace.h 2020-10-15 12:01:13 -04:00
trace_sched_switch.c
trace_sched_wakeup.c
trace_selftest_dynamic.c
trace_selftest.c tracing: Disable ftrace selftests when any tracer is running 2020-12-30 11:54:28 +01:00
trace_seq.c
trace_stack.c
trace_stat.c
trace_stat.h
trace_synth.h
trace_syscalls.c
trace_uprobe.c
trace.c tracing: Map all PIDs to command lines 2021-05-11 14:47:40 +02:00
trace.h tracing: Disable ftrace selftests when any tracer is running 2020-12-30 11:54:28 +01:00
tracing_map.c
tracing_map.h