kernel_optimize_test/kernel/trace
Nicholas Piggin b23d7a5f4a ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU
On a 144 thread system, `perf ftrace` takes about 20 seconds to start
up, due to calling synchronize_rcu() for each CPU.

  cat /proc/108560/stack
    0xc0003e7eb336f470
    __switch_to+0x2e0/0x480
    __wait_rcu_gp+0x20c/0x220
    synchronize_rcu+0x9c/0xc0
    ring_buffer_reset_cpu+0x88/0x2e0
    tracing_reset_online_cpus+0x84/0xe0
    tracing_open+0x1d4/0x1f0
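
The trace above shows the pre-patch reset loop. A simplified sketch of
its shape (not the exact 5.8 source; names and types approximate the
code of that era):

  /* Reset every online CPU's buffer, one at a time. */
  void tracing_reset_online_cpus(struct array_buffer *buf)
  {
          int cpu;

          for_each_online_cpu(cpu) {
                  /*
                   * Each call disables the CPU buffer, waits out a full
                   * RCU grace period via synchronize_rcu(), then resets
                   * and re-enables it, so every CPU pays for its own
                   * grace period.
                   */
                  ring_buffer_reset_cpu(buf->buffer, cpu);
          }
  }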

On a system with 10x more threads, it starts to become an annoyance.

Batch these up so we disable all the per-cpu buffers first, then
synchronize_rcu() once, then reset each of the buffers. This brings
the time down to about 0.5s.
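
A minimal sketch of the batched scheme (helper names such as
for_each_online_buffer_cpu() and reset_disabled_cpu_buffer() follow the
upstream patch; treat the details as illustrative rather than the exact
diff):

  void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)
  {
          struct ring_buffer_per_cpu *cpu_buffer;
          int cpu;

          /* Step 1: stop recording on every online CPU buffer. */
          for_each_online_buffer_cpu(buffer, cpu) {
                  cpu_buffer = buffer->buffers[cpu];
                  atomic_inc(&cpu_buffer->resize_disabled);
                  atomic_inc(&cpu_buffer->record_disabled);
          }

          /* Step 2: one grace period lets all in-flight writers finish. */
          synchronize_rcu();

          /* Step 3: reset and re-enable each buffer; no further waiting. */
          for_each_online_buffer_cpu(buffer, cpu) {
                  cpu_buffer = buffer->buffers[cpu];
                  reset_disabled_cpu_buffer(cpu_buffer);
                  atomic_dec(&cpu_buffer->record_disabled);
                  atomic_dec(&cpu_buffer->resize_disabled);
          }
  }

At ~20s for 144 CPUs, each grace period costs on the order of 140ms, so
collapsing the per-CPU sequential waits into a single one accounts for
the drop from ~20s to ~0.5s.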

Link: https://lkml.kernel.org/r/20200625053403.2386972-1-npiggin@gmail.com

Tested-by: Anton Blanchard <anton@ozlabs.org>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2020-06-30 17:18:56 -04:00
blktrace.c blktrace: Avoid sparse warnings when assigning q->blk_trace 2020-06-17 09:07:11 -06:00
bpf_trace.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net 2020-06-25 18:27:40 -07:00
fgraph.c
ftrace_internal.h
ftrace.c x86/ftrace: Only have the builtin ftrace_regs_caller call direct hooks 2020-06-29 11:42:47 -04:00
Kconfig Tracing updates for 5.8: 2020-06-09 10:06:18 -07:00
kprobe_event_gen_test.c
Makefile Rebase locking/kcsan to locking/urgent 2020-06-11 20:02:46 +02:00
power-traces.c
preemptirq_delay_test.c
ring_buffer_benchmark.c
ring_buffer.c ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU 2020-06-30 17:18:56 -04:00
rpm-traces.c
synth_event_gen_test.c
trace_benchmark.c
trace_benchmark.h
trace_boot.c tracing/boottime: Fix kprobe multiple events 2020-06-23 21:51:50 -04:00
trace_branch.c
trace_clock.c
trace_dynevent.c
trace_dynevent.h
trace_entries.h tracing: Make ftrace packed events have align of 1 2020-06-16 21:21:02 -04:00
trace_event_perf.c
trace_events_filter_test.h
trace_events_filter.c
trace_events_hist.c
trace_events_inject.c
trace_events_synth.c
trace_events_trigger.c tracing: Fix event trigger to accept redundant spaces 2020-06-23 21:51:40 -04:00
trace_events.c
trace_export.c tracing: Make ftrace packed events have align of 1 2020-06-16 21:21:02 -04:00
trace_functions_graph.c
trace_functions.c trace: Fix typo in allocate_ftrace_ops()'s comment 2020-06-16 21:21:02 -04:00
trace_hwlat.c
trace_irqsoff.c
trace_kdb.c
trace_kprobe_selftest.c
trace_kprobe_selftest.h
trace_kprobe.c maccess: rename probe_user_{read,write} to copy_{from,to}_user_nofault 2020-06-17 10:57:41 -07:00
trace_mmiotrace.c
trace_nop.c
trace_output.c
trace_output.h
trace_preemptirq.c x86/entry: Rename trace_hardirqs_off_prepare() 2020-06-11 15:15:24 +02:00
trace_printk.c
trace_probe_tmpl.h
trace_probe.c tracing/probe: Fix memleak in fetch_op_data operations 2020-06-16 21:21:02 -04:00
trace_probe.h tracing/probe: Replace zero-length array with flexible-array 2020-06-15 23:08:32 -05:00
trace_sched_switch.c
trace_sched_wakeup.c
trace_selftest_dynamic.c
trace_selftest.c
trace_seq.c
trace_stack.c
trace_stat.c
trace_stat.h
trace_synth.h
trace_syscalls.c
trace_uprobe.c tracing/probe: Fix bpf_task_fd_query() for kprobes and uprobes 2020-06-09 11:10:12 -07:00
trace.c ring-buffer: speed up buffer resets by avoiding synchronize_rcu for each CPU 2020-06-30 17:18:56 -04:00
trace.h tracing: Move pipe reference to trace array instead of current_tracer 2020-06-30 14:29:33 -04:00
tracing_map.c
tracing_map.h