Merge tag 'trace-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "Updates for tracing and bootconfig:

  - Add support for "bool" type in synthetic events

  - Add per instance tracing for bootconfig

  - Support perf-style return probe ("SYMBOL%return") in kprobes and
    uprobes

  - Allow for kprobes to be enabled earlier in boot up

  - Added tracepoint helper function to allow testing if tracepoints
    are enabled in headers

  - Synthetic events can now have dynamic strings (variable length)

  - Various fixes and cleanups"

* tag 'trace-v5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (58 commits)
  tracing: support "bool" type in synthetic trace events
  selftests/ftrace: Add test case for synthetic event syntax errors
  tracing: Handle synthetic event array field type checking correctly
  selftests/ftrace: Change synthetic event name for inter-event-combined test
  tracing: Add synthetic event error logging
  tracing: Check that the synthetic event and field names are legal
  tracing: Move is_good_name() from trace_probe.h to trace.h
  tracing: Don't show dynamic string internals in synthetic event description
  tracing: Fix some typos in comments
  tracing/boot: Add ftrace.instance.*.alloc_snapshot option
  tracing: Fix race in trace_open and buffer resize call
  tracing: Check return value of __create_val_fields() before using its result
  tracing: Fix synthetic print fmt check for use of __get_str()
  tracing: Remove a pointless assignment
  ftrace: ftrace_global_list is renamed to ftrace_ops_list
  ftrace: Format variable declarations of ftrace_allocate_records
  ftrace: Simplify the calculation of page number for ftrace_page->records
  ftrace: Simplify the dyn_ftrace->flags macro
  ftrace: Simplify the hash calculation
  ftrace: Use fls() to get the bits for dup_hash()
  ...
This commit is contained in:
commit fefa636d81
@ -61,6 +61,10 @@ These options can be used for each instance including global ftrace node.

ftrace.[instance.INSTANCE.]options = OPT1[, OPT2[...]]
   Enable given ftrace options.

ftrace.[instance.INSTANCE.]tracing_on = 0|1
   Enable/Disable tracing on this instance when starting boot-time tracing.
   (you can enable it by the "traceon" event trigger action)

ftrace.[instance.INSTANCE.]trace_clock = CLOCK
   Set given CLOCK to ftrace's trace_clock.

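As an illustration (not part of the patch), a bootconfig fragment combining these per-instance options might look like this; the instance name `foo` and the option values are arbitrary examples:

```
ftrace.instance.foo {
	options = "sym-addr"
	tracing_on = 0
	trace_clock = global
}
```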

@ -116,6 +120,20 @@ instance node, but those are also visible from other instances. So please

take care for event name conflict.


When to Start
=============

All boot-time tracing options starting with ``ftrace`` will be enabled at the
end of core_initcall. This means you can trace the events from postcore_initcall.
Most of the subsystems and architecture dependent drivers will be initialized
after that (arch_initcall or subsys_initcall). Thus, you can trace those with
boot-time tracing.
If you want to trace events before core_initcall, you can use the options
starting with ``kernel``. Some of them will be enabled earlier than the initcall
processing (for example, ``kernel.ftrace=function`` and ``kernel.trace_event``
will start before the initcall.)


Examples
========

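For illustration only, such early tracing could be requested with a ``kernel`` bootconfig fragment along these lines (the event list shown is an arbitrary example):

```
kernel {
	ftrace = function
	trace_event = "initcall:*"
}
```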
@ -164,6 +182,26 @@ is for tracing functions starting with "user\_", and others tracing

The instance node also accepts event nodes so that each instance
can customize its event tracing.

With the trigger action and kprobes, you can trace function-graph while
a function is called. For example, this will trace all function calls in
pci_proc_init()::

  ftrace {
        tracing_on = 0
        tracer = function_graph
        event.kprobes {
                start_event {
                        probes = "pci_proc_init"
                        actions = "traceon"
                }
                end_event {
                        probes = "pci_proc_init%return"
                        actions = "traceoff"
                }
        }
  }


This boot-time tracing also supports ftrace kernel parameters via boot
config.
For example, the following kernel parameters::

@ -589,8 +589,19 @@ name::

	{ .type = "int", .name = "my_int_field" },
  };

See synth_field_size() for available types. If field_name contains [n]
the field is considered to be an array.
See synth_field_size() for available types.

If field_name contains [n], the field is considered to be a static array.

If field_name contains [] (no subscript), the field is considered to
be a dynamic array, which will only take as much space in the event as
is required to hold the array.

Because space for an event is reserved before assigning field values
to the event, using dynamic arrays implies that the piecewise
in-kernel API described below can't be used with dynamic arrays. The
other non-piecewise in-kernel APIs can, however, be used with dynamic
arrays.

If the event is created from within a module, a pointer to the module
must be passed to synth_event_create(). This will ensure that the

@ -1776,6 +1776,24 @@ consisting of the name of the new event along with one or more

variables and their types, which can be any valid field type,
separated by semicolons, to the tracing/synthetic_events file.

See synth_field_size() for available types.

If field_name contains [n], the field is considered to be a static array.

If field_name contains [] (no subscript), the field is considered to
be a dynamic array, which will only take as much space in the event as
is required to hold the array.

A string field can be specified using either the static notation:

  char name[32];

Or the dynamic:

  char name[];

The size limit for either is 256.

For instance, the following creates a new event named 'wakeup_latency'
with 3 fields: lat, pid, and prio. Each of those fields is simply a
variable reference to a variable on another event::

@ -30,6 +30,7 @@ Synopsis of kprobe_events

  p[:[GRP/]EVENT] [MOD:]SYM[+offs]|MEMADDR [FETCHARGS]	: Set a probe
  r[MAXACTIVE][:[GRP/]EVENT] [MOD:]SYM[+0] [FETCHARGS]	: Set a return probe
  p[:[GRP/]EVENT] [MOD:]SYM[+0]%return [FETCHARGS]	: Set a return probe
  -:[GRP/]EVENT						: Clear a probe

 GRP		: Group name. If omitted, use "kprobes" for it.

@ -37,6 +38,7 @@ Synopsis of kprobe_events

		  based on SYM+offs or MEMADDR.
 MOD		: Module name which has given SYM.
 SYM[+offs]	: Symbol+offset where the probe is inserted.
 SYM%return	: Return address of the symbol
 MEMADDR	: Address where the probe is inserted.
 MAXACTIVE	: Maximum number of instances of the specified function that
		  can be probed simultaneously, or 0 for the default value

@ -146,3 +146,30 @@ with jump labels and avoid conditional branches.

define tracepoints. Check http://lwn.net/Articles/379903,
http://lwn.net/Articles/381064 and http://lwn.net/Articles/383362
for a series of articles with more details.

If you require calling a tracepoint from a header file, it is not
recommended to call one directly or to use the trace_<tracepoint>_enabled()
function call, as tracepoints in header files can have side effects if a
header is included from a file that has CREATE_TRACE_POINTS set. Also,
trace_<tracepoint>() is not that small of an inline and can bloat the
kernel if used by other inlined functions. Instead,
include tracepoint-defs.h and use tracepoint_enabled().

In a C file::

	void do_trace_foo_bar_wrapper(args)
	{
		trace_foo_bar(args);
	}

In the header file::

	DECLARE_TRACEPOINT(foo_bar);

	static inline void some_inline_function()
	{
		[..]
		if (tracepoint_enabled(foo_bar))
			do_trace_foo_bar_wrapper(args);
		[..]
	}


@ -28,6 +28,7 @@ Synopsis of uprobe_tracer

  p[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS]	: Set a uprobe
  r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS]	: Set a return uprobe (uretprobe)
  p[:[GRP/]EVENT] PATH:OFFSET%return [FETCHARGS]	: Set a return uprobe (uretprobe)
  -:[GRP/]EVENT				: Clear uprobe or uretprobe event

 GRP		: Group name. If omitted, "uprobes" is the default value.

@ -35,6 +36,7 @@ Synopsis of uprobe_tracer

		  on PATH+OFFSET.
 PATH		: Path to an executable or a library.
 OFFSET		: Offset where the probe is inserted.
 OFFSET%return	: Offset where the return probe is inserted.

 FETCHARGS	: Arguments. Each probe can have up to 128 args.
  %REG		: Fetch register REG

@ -6626,6 +6626,7 @@ F:	fs/proc/bootconfig.c

F:	include/linux/bootconfig.h
F:	lib/bootconfig.c
F:	tools/bootconfig/*
F:	tools/bootconfig/scripts/*

EXYNOS DP DRIVER
M:	Jingoo Han <jingoohan1@gmail.com>

@ -60,22 +60,20 @@ struct saved_msrs {

#define EAX_EDX_RET(val, low, high)	"=A" (val)
#endif

#ifdef CONFIG_TRACEPOINTS
/*
 * Be very careful with includes. This header is prone to include loops.
 */
#include <asm/atomic.h>
#include <linux/tracepoint-defs.h>

extern struct tracepoint __tracepoint_read_msr;
extern struct tracepoint __tracepoint_write_msr;
extern struct tracepoint __tracepoint_rdpmc;
#define msr_tracepoint_active(t) static_key_false(&(t).key)
#ifdef CONFIG_TRACEPOINTS
DECLARE_TRACEPOINT(read_msr);
DECLARE_TRACEPOINT(write_msr);
DECLARE_TRACEPOINT(rdpmc);
extern void do_trace_write_msr(unsigned int msr, u64 val, int failed);
extern void do_trace_read_msr(unsigned int msr, u64 val, int failed);
extern void do_trace_rdpmc(unsigned int msr, u64 val, int failed);
#else
#define msr_tracepoint_active(t) false
static inline void do_trace_write_msr(unsigned int msr, u64 val, int failed) {}
static inline void do_trace_read_msr(unsigned int msr, u64 val, int failed) {}
static inline void do_trace_rdpmc(unsigned int msr, u64 val, int failed) {}

@ -128,7 +126,7 @@ static inline unsigned long long native_read_msr(unsigned int msr)

	val = __rdmsr(msr);

	if (msr_tracepoint_active(__tracepoint_read_msr))
	if (tracepoint_enabled(read_msr))
		do_trace_read_msr(msr, val, 0);

	return val;

@ -150,7 +148,7 @@ static inline unsigned long long native_read_msr_safe(unsigned int msr,

		     _ASM_EXTABLE(2b, 3b)
		     : [err] "=r" (*err), EAX_EDX_RET(val, low, high)
		     : "c" (msr), [fault] "i" (-EIO));
	if (msr_tracepoint_active(__tracepoint_read_msr))
	if (tracepoint_enabled(read_msr))
		do_trace_read_msr(msr, EAX_EDX_VAL(val, low, high), *err);
	return EAX_EDX_VAL(val, low, high);
}

@ -161,7 +159,7 @@ native_write_msr(unsigned int msr, u32 low, u32 high)

{
	__wrmsr(msr, low, high);

	if (msr_tracepoint_active(__tracepoint_write_msr))
	if (tracepoint_enabled(write_msr))
		do_trace_write_msr(msr, ((u64)high << 32 | low), 0);
}


@ -181,7 +179,7 @@ native_write_msr_safe(unsigned int msr, u32 low, u32 high)

		     : "c" (msr), "0" (low), "d" (high),
		       [fault] "i" (-EIO)
		     : "memory");
	if (msr_tracepoint_active(__tracepoint_write_msr))
	if (tracepoint_enabled(write_msr))
		do_trace_write_msr(msr, ((u64)high << 32 | low), err);
	return err;
}

@ -248,7 +246,7 @@ static inline unsigned long long native_read_pmc(int counter)

	DECLARE_ARGS(val, low, high);

	asm volatile("rdpmc" : EAX_EDX_RET(val, low, high) : "c" (counter));
	if (msr_tracepoint_active(__tracepoint_rdpmc))
	if (tracepoint_enabled(rdpmc))
		do_trace_rdpmc(counter, EAX_EDX_VAL(val, low, high), 0);
	return EAX_EDX_VAL(val, low, high);
}

@ -217,11 +217,11 @@ extern struct ftrace_ops __rcu *ftrace_ops_list;

extern struct ftrace_ops ftrace_list_end;

/*
 * Traverse the ftrace_global_list, invoking all entries. The reason that we
 * Traverse the ftrace_ops_list, invoking all entries. The reason that we
 * can use rcu_dereference_raw_check() is that elements removed from this list
 * are simply leaked, so there is no need to interact with a grace-period
 * mechanism. The rcu_dereference_raw_check() calls are needed to handle
 * concurrent insertions into the ftrace_global_list.
 * concurrent insertions into the ftrace_ops_list.
 *
 * Silly Alpha and silly pointer-speculation compiler optimizations!
 */

@ -432,7 +432,7 @@ bool is_ftrace_trampoline(unsigned long addr);

 * DIRECT - there is a direct function to call
 *
 * When a new ftrace_ops is registered and wants a function to save
 * pt_regs, the rec->flag REGS is set. When the function has been
 * pt_regs, the rec->flags REGS is set. When the function has been
 * set up to save regs, the REG_EN flag is set. Once a function
 * starts saving regs it will do so until all ftrace_ops are removed
 * from tracing that function.

@ -450,12 +450,9 @@ enum {

};

#define FTRACE_REF_MAX_SHIFT	23
#define FTRACE_FL_BITS		9
#define FTRACE_FL_MASKED_BITS	((1UL << FTRACE_FL_BITS) - 1)
#define FTRACE_FL_MASK		(FTRACE_FL_MASKED_BITS << FTRACE_REF_MAX_SHIFT)
#define FTRACE_REF_MAX		((1UL << FTRACE_REF_MAX_SHIFT) - 1)

#define ftrace_rec_count(rec)	((rec)->flags & ~FTRACE_FL_MASK)
#define ftrace_rec_count(rec)	((rec)->flags & FTRACE_REF_MAX)

struct dyn_ftrace {
	unsigned long		ip; /* address of mcount call-site */

@ -7,13 +7,13 @@

#include <linux/page-flags.h>
#include <linux/tracepoint-defs.h>

extern struct tracepoint __tracepoint_page_ref_set;
extern struct tracepoint __tracepoint_page_ref_mod;
extern struct tracepoint __tracepoint_page_ref_mod_and_test;
extern struct tracepoint __tracepoint_page_ref_mod_and_return;
extern struct tracepoint __tracepoint_page_ref_mod_unless;
extern struct tracepoint __tracepoint_page_ref_freeze;
extern struct tracepoint __tracepoint_page_ref_unfreeze;
DECLARE_TRACEPOINT(page_ref_set);
DECLARE_TRACEPOINT(page_ref_mod);
DECLARE_TRACEPOINT(page_ref_mod_and_test);
DECLARE_TRACEPOINT(page_ref_mod_and_return);
DECLARE_TRACEPOINT(page_ref_mod_unless);
DECLARE_TRACEPOINT(page_ref_freeze);
DECLARE_TRACEPOINT(page_ref_unfreeze);

#ifdef CONFIG_DEBUG_PAGE_REF


@ -24,7 +24,7 @@ extern struct tracepoint __tracepoint_page_ref_unfreeze;

 *
 * See trace_##name##_enabled(void) in include/linux/tracepoint.h
 */
#define page_ref_tracepoint_active(t) static_key_false(&(t).key)
#define page_ref_tracepoint_active(t) tracepoint_enabled(t)

extern void __page_ref_set(struct page *page, int v);
extern void __page_ref_mod(struct page *page, int v);

@ -75,7 +75,7 @@ static inline int page_count(struct page *page)

static inline void set_page_count(struct page *page, int v)
{
	atomic_set(&page->_refcount, v);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_set))
	if (page_ref_tracepoint_active(page_ref_set))
		__page_ref_set(page, v);
}


@ -91,14 +91,14 @@ static inline void init_page_count(struct page *page)

static inline void page_ref_add(struct page *page, int nr)
{
	atomic_add(nr, &page->_refcount);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, nr);
}

static inline void page_ref_sub(struct page *page, int nr)
{
	atomic_sub(nr, &page->_refcount);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, -nr);
}


@ -106,7 +106,7 @@ static inline int page_ref_sub_return(struct page *page, int nr)

{
	int ret = atomic_sub_return(nr, &page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_return))
	if (page_ref_tracepoint_active(page_ref_mod_and_return))
		__page_ref_mod_and_return(page, -nr, ret);
	return ret;
}

@ -114,14 +114,14 @@ static inline int page_ref_sub_return(struct page *page, int nr)

static inline void page_ref_inc(struct page *page)
{
	atomic_inc(&page->_refcount);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, 1);
}

static inline void page_ref_dec(struct page *page)
{
	atomic_dec(&page->_refcount);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod))
	if (page_ref_tracepoint_active(page_ref_mod))
		__page_ref_mod(page, -1);
}


@ -129,7 +129,7 @@ static inline int page_ref_sub_and_test(struct page *page, int nr)

{
	int ret = atomic_sub_and_test(nr, &page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_test))
	if (page_ref_tracepoint_active(page_ref_mod_and_test))
		__page_ref_mod_and_test(page, -nr, ret);
	return ret;
}

@ -138,7 +138,7 @@ static inline int page_ref_inc_return(struct page *page)

{
	int ret = atomic_inc_return(&page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_return))
	if (page_ref_tracepoint_active(page_ref_mod_and_return))
		__page_ref_mod_and_return(page, 1, ret);
	return ret;
}

@ -147,7 +147,7 @@ static inline int page_ref_dec_and_test(struct page *page)

{
	int ret = atomic_dec_and_test(&page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_test))
	if (page_ref_tracepoint_active(page_ref_mod_and_test))
		__page_ref_mod_and_test(page, -1, ret);
	return ret;
}

@ -156,7 +156,7 @@ static inline int page_ref_dec_return(struct page *page)

{
	int ret = atomic_dec_return(&page->_refcount);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_and_return))
	if (page_ref_tracepoint_active(page_ref_mod_and_return))
		__page_ref_mod_and_return(page, -1, ret);
	return ret;
}

@ -165,7 +165,7 @@ static inline int page_ref_add_unless(struct page *page, int nr, int u)

{
	int ret = atomic_add_unless(&page->_refcount, nr, u);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_mod_unless))
	if (page_ref_tracepoint_active(page_ref_mod_unless))
		__page_ref_mod_unless(page, nr, ret);
	return ret;
}

@ -174,7 +174,7 @@ static inline int page_ref_freeze(struct page *page, int count)

{
	int ret = likely(atomic_cmpxchg(&page->_refcount, count, 0) == count);

	if (page_ref_tracepoint_active(__tracepoint_page_ref_freeze))
	if (page_ref_tracepoint_active(page_ref_freeze))
		__page_ref_freeze(page, count, ret);
	return ret;
}

@ -185,7 +185,7 @@ static inline void page_ref_unfreeze(struct page *page, int count)

	VM_BUG_ON(count == 0);

	atomic_set_release(&page->_refcount, count);
	if (page_ref_tracepoint_active(__tracepoint_page_ref_unfreeze))
	if (page_ref_tracepoint_active(page_ref_unfreeze))
		__page_ref_unfreeze(page, count);
}

@ -53,4 +53,38 @@ struct bpf_raw_event_map {

	u32			writable_size;
} __aligned(32);

/*
 * If a tracepoint needs to be called from a header file, it is not
 * recommended to call it directly, as tracepoints in header files
 * may cause side-effects and bloat the kernel. Instead, use
 * tracepoint_enabled() to test if the tracepoint is enabled, then if
 * it is, call a wrapper function defined in a C file that will then
 * call the tracepoint.
 *
 * For "trace_foo_bar()", you would need to create a wrapper function
 * in a C file to call trace_foo_bar():
 *   void do_trace_foo_bar(args) { trace_foo_bar(args); }
 * Then in the header file, declare the tracepoint:
 *   DECLARE_TRACEPOINT(foo_bar);
 * And call your wrapper:
 *   static inline void some_inlined_function() {
 *           [..]
 *           if (tracepoint_enabled(foo_bar))
 *                   do_trace_foo_bar(args);
 *           [..]
 *   }
 *
 * Note: tracepoint_enabled(foo_bar) is equivalent to trace_foo_bar_enabled()
 * but is safe to have in headers, where trace_foo_bar_enabled() is not.
 */
#define DECLARE_TRACEPOINT(tp) \
	extern struct tracepoint __tracepoint_##tp

#ifdef CONFIG_TRACEPOINTS
# define tracepoint_enabled(tp) \
	static_key_false(&(__tracepoint_##tp).key)
#else
# define tracepoint_enabled(tracepoint) false
#endif

#endif

@ -2614,7 +2614,7 @@ static int __init init_kprobes(void)

	init_test_probes();
	return err;
}
subsys_initcall(init_kprobes);
early_initcall(init_kprobes);

#ifdef CONFIG_DEBUG_FS
static void report_probe(struct seq_file *pi, struct kprobe *p,

@ -387,8 +387,8 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)

		}
	}

	read_lock(&tasklist_lock);
	do_each_thread(g, t) {
	rcu_read_lock();
	for_each_process_thread(g, t) {
		if (start == end) {
			ret = -EAGAIN;
			goto unlock;

@ -403,10 +403,10 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)

		smp_wmb();
		t->ret_stack = ret_stack_list[start++];
	}
	} while_each_thread(g, t);
	}

unlock:
	read_unlock(&tasklist_lock);
	rcu_read_unlock();
free:
	for (i = start; i < end; i++)
		kfree(ret_stack_list[i]);

@ -230,7 +230,7 @@ static void update_ftrace_function(void)

	/*
	 * For static tracing, we need to be a bit more careful.
	 * The function change takes affect immediately. Thus,
	 * we need to coorditate the setting of the function_trace_ops
	 * we need to coordinate the setting of the function_trace_ops
	 * with the setting of the ftrace_trace_function.
	 *
	 * Set the function to the list ops, which will call the

@ -1368,10 +1368,10 @@ static struct ftrace_hash *dup_hash(struct ftrace_hash *src, int size)

	int i;

	/*
	 * Make the hash size about 1/2 the # found
	 * Use around half the size (max bit of it), but
	 * a minimum of 2 is fine (as size of 0 or 1 both give 1 for bits).
	 */
	for (size /= 2; size; size >>= 1)
		bits++;
	bits = fls(size / 2);

	/* Don't allocate too much */
	if (bits > FTRACE_HASH_MAX_BITS)

@ -1451,7 +1451,7 @@ static bool hash_contains_ip(unsigned long ip,

{
	/*
	 * The function record is a match if it exists in the filter
	 * hash and not in the notrace hash. Note, an emty hash is
	 * hash and not in the notrace hash. Note, an empty hash is
	 * considered a match for the filter hash, but an empty
	 * notrace hash is considered not in the notrace hash.
	 */

@ -2402,7 +2402,7 @@ struct ftrace_ops direct_ops = {

 *
 * If the record has the FTRACE_FL_REGS set, that means that it
 * wants to convert to a callback that saves all regs. If FTRACE_FL_REGS
 * is not not set, then it wants to convert to the normal callback.
 * is not set, then it wants to convert to the normal callback.
 *
 * Returns the address of the trampoline to set to
 */

@ -2976,7 +2976,7 @@ int ftrace_shutdown(struct ftrace_ops *ops, int command)

	synchronize_rcu_tasks_rude();

	/*
	 * When the kernel is preeptive, tasks can be preempted
	 * When the kernel is preemptive, tasks can be preempted
	 * while on a ftrace trampoline. Just scheduling a task on
	 * a CPU is not good enough to flush them. Calling
	 * synchornize_rcu_tasks() will wait for those tasks to

@ -3129,18 +3129,20 @@ static int ftrace_update_code(struct module *mod, struct ftrace_page *new_pgs)

static int ftrace_allocate_records(struct ftrace_page *pg, int count)
{
	int order;
	int pages;
	int cnt;

	if (WARN_ON(!count))
		return -EINVAL;

	order = get_count_order(DIV_ROUND_UP(count, ENTRIES_PER_PAGE));
	pages = DIV_ROUND_UP(count, ENTRIES_PER_PAGE);
	order = get_count_order(pages);

	/*
	 * We want to fill as much as possible. No more than a page
	 * may be empty.
	 */
	while ((PAGE_SIZE << order) / ENTRY_SIZE >= count + ENTRIES_PER_PAGE)
	if (!is_power_of_2(pages))
		order--;

again:

@ -4368,7 +4370,7 @@ void **ftrace_func_mapper_find_ip(struct ftrace_func_mapper *mapper,

 * @ip: The instruction pointer address to map @data to
 * @data: The data to map to @ip
 *
 * Returns 0 on succes otherwise an error.
 * Returns 0 on success otherwise an error.
 */
int ftrace_func_mapper_add_ip(struct ftrace_func_mapper *mapper,
			      unsigned long ip, void *data)

@ -4536,7 +4538,7 @@ register_ftrace_function_probe(char *glob, struct trace_array *tr,

	/*
	 * Note, there's a small window here that the func_hash->filter_hash
	 * may be NULL or empty. Need to be carefule when reading the loop.
	 * may be NULL or empty. Need to be careful when reading the loop.
	 */
	mutex_lock(&probe->ops.func_hash->regex_lock);

@ -4866,6 +4866,9 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)

	if (!cpumask_test_cpu(cpu, buffer->cpumask))
		return;

	/* prevent another thread from changing buffer sizes */
	mutex_lock(&buffer->mutex);

	atomic_inc(&cpu_buffer->resize_disabled);
	atomic_inc(&cpu_buffer->record_disabled);


@ -4876,6 +4879,8 @@ void ring_buffer_reset_cpu(struct trace_buffer *buffer, int cpu)

	atomic_dec(&cpu_buffer->record_disabled);
	atomic_dec(&cpu_buffer->resize_disabled);

	mutex_unlock(&buffer->mutex);
}
EXPORT_SYMBOL_GPL(ring_buffer_reset_cpu);


@ -4889,6 +4894,9 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)

	struct ring_buffer_per_cpu *cpu_buffer;
	int cpu;

	/* prevent another thread from changing buffer sizes */
	mutex_lock(&buffer->mutex);

	for_each_online_buffer_cpu(buffer, cpu) {
		cpu_buffer = buffer->buffers[cpu];


@ -4907,6 +4915,8 @@ void ring_buffer_reset_online_cpus(struct trace_buffer *buffer)

		atomic_dec(&cpu_buffer->record_disabled);
		atomic_dec(&cpu_buffer->resize_disabled);
	}

	mutex_unlock(&buffer->mutex);
}

/**

@ -242,9 +242,11 @@ static struct synth_field_desc create_synth_test_fields[] = {

	{ .type = "pid_t",		.name = "next_pid_field" },
	{ .type = "char[16]",		.name = "next_comm_field" },
	{ .type = "u64",		.name = "ts_ns" },
	{ .type = "char[]",		.name = "dynstring_field_1" },
	{ .type = "u64",		.name = "ts_ms" },
	{ .type = "unsigned int",	.name = "cpu" },
	{ .type = "char[64]",		.name = "my_string_field" },
	{ .type = "char[]",		.name = "dynstring_field_2" },
	{ .type = "int",		.name = "my_int_field" },
};


@ -254,7 +256,7 @@ static struct synth_field_desc create_synth_test_fields[] = {

 */
static int __init test_create_synth_event(void)
{
	u64 vals[7];
	u64 vals[9];
	int ret;

	/* Create the create_synth_test event with the fields above */

@ -292,10 +294,12 @@ static int __init test_create_synth_event(void)

	vals[0] = 777;				/* next_pid_field */
	vals[1] = (u64)(long)"tiddlywinks";	/* next_comm_field */
	vals[2] = 1000000;			/* ts_ns */
	vals[3] = 1000;				/* ts_ms */
	vals[4] = raw_smp_processor_id();	/* cpu */
	vals[5] = (u64)(long)"thneed";		/* my_string_field */
	vals[6] = 398;				/* my_int_field */
	vals[3] = (u64)(long)"xrayspecs";	/* dynstring_field_1 */
	vals[4] = 1000;				/* ts_ms */
	vals[5] = raw_smp_processor_id();	/* cpu */
	vals[6] = (u64)(long)"thneed";		/* my_string_field */
	vals[7] = (u64)(long)"kerplunk";	/* dynstring_field_2 */
	vals[8] = 398;				/* my_int_field */

	/* Now generate a create_synth_test event */
	ret = synth_event_trace_array(create_synth_test, vals, ARRAY_SIZE(vals));

@ -422,13 +426,15 @@ static int __init test_trace_synth_event(void)

	int ret;

	/* Trace some bogus values just for testing */
	ret = synth_event_trace(create_synth_test, 7,	/* number of values */
	ret = synth_event_trace(create_synth_test, 9,	/* number of values */
				(u64)444,		/* next_pid_field */
				(u64)(long)"clackers",	/* next_comm_field */
				(u64)1000000,		/* ts_ns */
				(u64)(long)"viewmaster",/* dynstring_field_1 */
				(u64)1000,		/* ts_ms */
				(u64)raw_smp_processor_id(), /* cpu */
				(u64)(long)"Thneed",	/* my_string_field */
				(u64)(long)"yoyos",	/* dynstring_field_2 */
				(u64)999);		/* my_int_field */
	return ret;
}

@@ -2650,7 +2650,7 @@ void trace_buffered_event_enable(void)
 
 		preempt_disable();
 		if (cpu == smp_processor_id() &&
-		    this_cpu_read(trace_buffered_event) !=
+		    __this_cpu_read(trace_buffered_event) !=
 		    per_cpu(trace_buffered_event, cpu))
 			WARN_ON_ONCE(1);
 		preempt_enable();

@@ -5142,10 +5142,10 @@ static const char readme_msg[] =
 	"\t           -:[<group>/]<event>\n"
 #ifdef CONFIG_KPROBE_EVENTS
 	"\t    place: [<module>:]<symbol>[+<offset>]|<memaddr>\n"
-	"place (kretprobe): [<module>:]<symbol>[+<offset>]|<memaddr>\n"
+	"place (kretprobe): [<module>:]<symbol>[+<offset>]%return|<memaddr>\n"
 #endif
 #ifdef CONFIG_UPROBE_EVENTS
-	"   place (uprobe): <path>:<offset>[(ref_ctr_offset)]\n"
+	"   place (uprobe): <path>:<offset>[%return][(ref_ctr_offset)]\n"
 #endif
 	"\t     args: <name>=fetcharg[:type]\n"
 	"\t fetcharg: %<register>, @<address>, @<symbol>[+|-<offset>],\n"

@@ -5269,7 +5269,12 @@ static const char readme_msg[] =
 	"\t    trace(<synthetic_event>,param list)  - generate synthetic event\n"
 	"\t    save(field,...)                      - save current event fields\n"
 #ifdef CONFIG_TRACER_SNAPSHOT
-	"\t    snapshot()                           - snapshot the trace buffer\n"
+	"\t    snapshot()                           - snapshot the trace buffer\n\n"
+#endif
+#ifdef CONFIG_SYNTH_EVENTS
+	"  events/synthetic_events\t- Create/append/remove/show synthetic events\n"
+	"\t  Write into this file to define/undefine new synthetic events.\n"
+	"\t     example: echo 'myevent u64 lat; char name[]' >> synthetic_events\n"
 #endif
 #endif
 ;

@@ -6682,7 +6687,6 @@ tracing_mark_write(struct file *filp, const char __user *ubuf,
 		written = -EFAULT;
 	} else
 		written = cnt;
-	len = cnt;
 
 	if (tr->trace_marker_file && !list_empty(&tr->trace_marker_file->triggers)) {
 		/* do not add \n before testing triggers, but add \0 */

@@ -8658,6 +8662,24 @@ struct trace_array *trace_array_find_get(const char *instance)
 	return tr;
 }
 
+static int trace_array_create_dir(struct trace_array *tr)
+{
+	int ret;
+
+	tr->dir = tracefs_create_dir(tr->name, trace_instance_dir);
+	if (!tr->dir)
+		return -EINVAL;
+
+	ret = event_trace_add_tracer(tr->dir, tr);
+	if (ret)
+		tracefs_remove(tr->dir);
+
+	init_tracer_tracefs(tr, tr->dir);
+	__update_tracer_options(tr);
+
+	return ret;
+}
+
 static struct trace_array *trace_array_create(const char *name)
 {
 	struct trace_array *tr;

@@ -8693,30 +8715,28 @@ static struct trace_array *trace_array_create(const char *name)
 	if (allocate_trace_buffers(tr, trace_buf_size) < 0)
 		goto out_free_tr;
 
-	tr->dir = tracefs_create_dir(name, trace_instance_dir);
-	if (!tr->dir)
+	if (ftrace_allocate_ftrace_ops(tr) < 0)
 		goto out_free_tr;
 
-	ret = event_trace_add_tracer(tr->dir, tr);
-	if (ret) {
-		tracefs_remove(tr->dir);
-		goto out_free_tr;
-	}
-
 	ftrace_init_trace_array(tr);
 
-	init_tracer_tracefs(tr, tr->dir);
 	init_trace_flags_index(tr);
-	__update_tracer_options(tr);
+
+	if (trace_instance_dir) {
+		ret = trace_array_create_dir(tr);
+		if (ret)
+			goto out_free_tr;
+	} else
+		__trace_early_add_events(tr);
 
 	list_add(&tr->list, &ftrace_trace_arrays);
 
 	tr->ref++;
 
 	return tr;
 
 out_free_tr:
+	ftrace_free_ftrace_ops(tr);
 	free_trace_buffers(tr);
 	free_cpumask_var(tr->tracing_cpumask);
 	kfree(tr->name);

@@ -8821,7 +8841,6 @@ static int __remove_instance(struct trace_array *tr)
 	free_cpumask_var(tr->tracing_cpumask);
 	kfree(tr->name);
 	kfree(tr);
-	tr = NULL;
 
 	return 0;
 }

@@ -8875,11 +8894,27 @@ static int instance_rmdir(const char *name)
 
 static __init void create_trace_instances(struct dentry *d_tracer)
 {
+	struct trace_array *tr;
+
 	trace_instance_dir = tracefs_create_instance_dir("instances", d_tracer,
							 instance_mkdir,
							 instance_rmdir);
 	if (MEM_FAIL(!trace_instance_dir, "Failed to create instances directory\n"))
 		return;
+
+	mutex_lock(&event_mutex);
+	mutex_lock(&trace_types_lock);
+
+	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+		if (!tr->name)
+			continue;
+		if (MEM_FAIL(trace_array_create_dir(tr) < 0,
+			     "Failed to create instance directory\n"))
+			break;
+	}
+
+	mutex_unlock(&trace_types_lock);
+	mutex_unlock(&event_mutex);
 }
 
 static void

@@ -8993,21 +9028,21 @@ static struct vfsmount *trace_automount(struct dentry *mntpt, void *ingore)
  * directory. It is called via fs_initcall() by any of the boot up code
  * and expects to return the dentry of the top level tracing directory.
  */
-struct dentry *tracing_init_dentry(void)
+int tracing_init_dentry(void)
 {
 	struct trace_array *tr = &global_trace;
 
 	if (security_locked_down(LOCKDOWN_TRACEFS)) {
 		pr_warn("Tracing disabled due to lockdown\n");
-		return ERR_PTR(-EPERM);
+		return -EPERM;
 	}
 
 	/* The top level trace array uses NULL as parent */
 	if (tr->dir)
-		return NULL;
+		return 0;
 
 	if (WARN_ON(!tracefs_initialized()))
-		return ERR_PTR(-ENODEV);
+		return -ENODEV;
 
 	/*
 	 * As there may still be users that expect the tracing

@@ -9018,7 +9053,7 @@ struct dentry *tracing_init_dentry(void)
 	tr->dir = debugfs_create_automount("tracing", NULL,
					   trace_automount, NULL);
 
-	return NULL;
+	return 0;
 }
 
 extern struct trace_eval_map *__start_ftrace_eval_maps[];

@@ -9105,48 +9140,48 @@ static struct notifier_block trace_module_nb = {
 
 static __init int tracer_init_tracefs(void)
 {
-	struct dentry *d_tracer;
+	int ret;
 
 	trace_access_lock_init();
 
-	d_tracer = tracing_init_dentry();
-	if (IS_ERR(d_tracer))
+	ret = tracing_init_dentry();
+	if (ret)
 		return 0;
 
 	event_trace_init();
 
-	init_tracer_tracefs(&global_trace, d_tracer);
-	ftrace_init_tracefs_toplevel(&global_trace, d_tracer);
+	init_tracer_tracefs(&global_trace, NULL);
+	ftrace_init_tracefs_toplevel(&global_trace, NULL);
 
-	trace_create_file("tracing_thresh", 0644, d_tracer,
+	trace_create_file("tracing_thresh", 0644, NULL,
			  &global_trace, &tracing_thresh_fops);
 
-	trace_create_file("README", 0444, d_tracer,
+	trace_create_file("README", 0444, NULL,
			  NULL, &tracing_readme_fops);
 
-	trace_create_file("saved_cmdlines", 0444, d_tracer,
+	trace_create_file("saved_cmdlines", 0444, NULL,
			  NULL, &tracing_saved_cmdlines_fops);
 
-	trace_create_file("saved_cmdlines_size", 0644, d_tracer,
+	trace_create_file("saved_cmdlines_size", 0644, NULL,
			  NULL, &tracing_saved_cmdlines_size_fops);
 
-	trace_create_file("saved_tgids", 0444, d_tracer,
+	trace_create_file("saved_tgids", 0444, NULL,
			  NULL, &tracing_saved_tgids_fops);
 
 	trace_eval_init();
 
-	trace_create_eval_file(d_tracer);
+	trace_create_eval_file(NULL);
 
 #ifdef CONFIG_MODULES
 	register_module_notifier(&trace_module_nb);
 #endif
 
 #ifdef CONFIG_DYNAMIC_FTRACE
-	trace_create_file("dyn_ftrace_total_info", 0444, d_tracer,
+	trace_create_file("dyn_ftrace_total_info", 0444, NULL,
			  NULL, &tracing_dyn_info_fops);
 #endif
 
-	create_trace_instances(d_tracer);
+	create_trace_instances(NULL);
 
 	update_tracer_options(&global_trace);
 

@@ -9309,7 +9344,7 @@ void ftrace_dump(enum ftrace_dump_mode oops_dump_mode)
 	}
 
 	/*
-	 * We need to stop all tracing on all CPUS to read the
+	 * We need to stop all tracing on all CPUS to read
 	 * the next buffer. This is a bit expensive, but is
 	 * not done often. We fill all what we can read,
 	 * and then release the locks again.

@@ -9452,7 +9487,7 @@ __init static int tracer_alloc_buffers(void)
 	}
 
 	/*
-	 * Make sure we don't accidently add more trace options
+	 * Make sure we don't accidentally add more trace options
	 * than we have bits for.
	 */
 	BUILD_BUG_ON(TRACE_ITER_LAST_BIT > TRACE_FLAGS_MAX_SIZE);

@@ -9481,7 +9516,7 @@ __init static int tracer_alloc_buffers(void)
 
	/*
	 * The prepare callbacks allocates some memory for the ring buffer. We
-	 * don't free the buffer if the if the CPU goes down. If we were to free
+	 * don't free the buffer if the CPU goes down. If we were to free
	 * the buffer, then the user would lose any trace that was in the
	 * buffer. The memory will be removed once the "instance" is removed.
	 */

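The README hunk above documents the new perf-style `%return` suffix for kprobe and uprobe definitions. A rough sketch of how the syntax is used from the shell (the event names and the uprobe offset here are made up for illustration; writing to tracefs requires root and a kernel built with these events):

```shell
# Perf-style return probes: append %return to the symbol instead of
# using the separate 'r' probe type.
KPROBE_CMD='p:myretprobe vfs_read%return'          # acts as a kretprobe
UPROBE_CMD='p:myuprobe /bin/bash:0x4245c0%return'  # acts as a uretprobe

# On a live system these would be appended to the tracefs control files:
#   echo "$KPROBE_CMD" >> /sys/kernel/tracing/kprobe_events
#   echo "$UPROBE_CMD" >> /sys/kernel/tracing/uprobe_events
printf '%s\n' "$KPROBE_CMD" "$UPROBE_CMD"
```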
@@ -19,6 +19,7 @@
 #include <linux/glob.h>
 #include <linux/irq_work.h>
 #include <linux/workqueue.h>
+#include <linux/ctype.h>
 
 #ifdef CONFIG_FTRACE_SYSCALLS
 #include <asm/unistd.h>		/* For NR_SYSCALLS	     */

@@ -246,7 +247,7 @@ typedef bool (*cond_update_fn_t)(struct trace_array *tr, void *cond_data);
  * tracing_snapshot_cond(tr, cond_data), the cond_data passed in is
  * passed in turn to the cond_snapshot.update() function.  That data
  * can be compared by the update() implementation with the cond_data
- * contained wihin the struct cond_snapshot instance associated with
+ * contained within the struct cond_snapshot instance associated with
  * the trace_array.  Because the tr->max_lock is held throughout the
  * update() call, the update() function can directly retrieve the
  * cond_snapshot and cond_data associated with the per-instance

@@ -271,7 +272,7 @@ typedef bool (*cond_update_fn_t)(struct trace_array *tr, void *cond_data);
  *	take the snapshot, by returning 'true' if so, 'false' if no
  *	snapshot should be taken.  Because the max_lock is held for
  *	the duration of update(), the implementation is safe to
- *	directly retrieven and save any implementation data it needs
+ *	directly retrieved and save any implementation data it needs
  *	to in association with the snapshot.
  */
 struct cond_snapshot {

@@ -573,7 +574,7 @@ struct tracer {
  *  The function callback, which can use the FTRACE bits to
  *  check for recursion.
  *
- * Now if the arch does not suppport a feature, and it calls
+ * Now if the arch does not support a feature, and it calls
  * the global list function which calls the ftrace callback
  * all three of these steps will do a recursion protection.
  * There's no reason to do one if the previous caller already

@@ -737,7 +738,7 @@ struct dentry *trace_create_file(const char *name,
				 void *data,
				 const struct file_operations *fops);
 
-struct dentry *tracing_init_dentry(void);
+int tracing_init_dentry(void);
 
 struct ring_buffer_event;
 

@@ -1125,6 +1126,8 @@ extern int ftrace_is_dead(void);
 int ftrace_create_function_files(struct trace_array *tr,
				 struct dentry *parent);
 void ftrace_destroy_function_files(struct trace_array *tr);
+int ftrace_allocate_ftrace_ops(struct trace_array *tr);
+void ftrace_free_ftrace_ops(struct trace_array *tr);
 void ftrace_init_global_array_ops(struct trace_array *tr);
 void ftrace_init_array_ops(struct trace_array *tr, ftrace_func_t func);
 void ftrace_reset_array_ops(struct trace_array *tr);

@@ -1146,6 +1149,11 @@ ftrace_create_function_files(struct trace_array *tr,
 {
	return 0;
 }
+static inline int ftrace_allocate_ftrace_ops(struct trace_array *tr)
+{
+	return 0;
+}
+static inline void ftrace_free_ftrace_ops(struct trace_array *tr) { }
 static inline void ftrace_destroy_function_files(struct trace_array *tr) { }
 static inline __init void
 ftrace_init_global_array_ops(struct trace_array *tr) { }

@@ -1472,7 +1480,7 @@ __trace_event_discard_commit(struct trace_buffer *buffer,
 /*
  * Helper function for event_trigger_unlock_commit{_regs}().
  * If there are event triggers attached to this event that requires
- * filtering against its fields, then they wil be called as the
+ * filtering against its fields, then they will be called as the
  * entry already holds the field information of the current event.
  *
  * It also checks if the event should be discarded or not.

@@ -1651,6 +1659,7 @@ extern void trace_event_enable_tgid_record(bool enable);
 extern int event_trace_init(void);
 extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
 extern int event_trace_del_tracer(struct trace_array *tr);
+extern void __trace_early_add_events(struct trace_array *tr);
 
 extern struct trace_event_file *__find_event_file(struct trace_array *tr,
						  const char *system,

@@ -2082,4 +2091,16 @@ static __always_inline void trace_iterator_reset(struct trace_iterator *iter)
 	iter->pos = -1;
 }
 
+/* Check the name is good for event/group/fields */
+static inline bool is_good_name(const char *name)
+{
+	if (!isalpha(*name) && *name != '_')
+		return false;
+	while (*++name != '\0') {
+		if (!isalpha(*name) && !isdigit(*name) && *name != '_')
+			return false;
+	}
+	return true;
+}
+
 #endif /* _LINUX_KERNEL_TRACE_H */

@@ -40,6 +40,16 @@ trace_boot_set_instance_options(struct trace_array *tr, struct xbc_node *node)
			pr_err("Failed to set option: %s\n", buf);
	}
 
+	p = xbc_node_find_value(node, "tracing_on", NULL);
+	if (p && *p != '\0') {
+		if (kstrtoul(p, 10, &v))
+			pr_err("Failed to set tracing on: %s\n", p);
+		if (v)
+			tracer_tracing_on(tr);
+		else
+			tracer_tracing_off(tr);
+	}
+
	p = xbc_node_find_value(node, "trace_clock", NULL);
	if (p && *p != '\0') {
		if (tracing_set_clock(tr, p) < 0)

@@ -274,6 +284,12 @@ trace_boot_enable_tracer(struct trace_array *tr, struct xbc_node *node)
		if (tracing_set_tracer(tr, p) < 0)
			pr_err("Failed to set given tracer: %s\n", p);
	}
+
+	/* Since tracer can free snapshot buffer, allocate snapshot here.*/
+	if (xbc_node_find_value(node, "alloc_snapshot", NULL)) {
+		if (tracing_alloc_snapshot_instance(tr) < 0)
+			pr_err("Failed to allocate snapshot buffer\n");
+	}
 }
 
 static void __init

@@ -330,5 +346,8 @@ static int __init trace_boot_init(void)
 
	return 0;
 }
-
-fs_initcall(trace_boot_init);
+/*
+ * Start tracing at the end of core-initcall, so that it starts tracing
+ * from the beginning of postcore_initcall.
+ */
+core_initcall_sync(trace_boot_init);

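The per-instance options handled above can be exercised from a bootconfig file. A minimal sketch (the instance name `foo` and the tracer choice are ours; `tracing_on`, `trace_clock`, `tracer`, and `alloc_snapshot` are the keys the code above looks up via xbc_node_find_value()):

```
ftrace.instance.foo {
	tracing_on = 1
	trace_clock = global
	tracer = function
	alloc_snapshot
}
```

Note `alloc_snapshot` is a bare key with no value: the code only checks for its presence.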
@@ -206,14 +206,14 @@ static const struct file_operations dynamic_events_ops = {
 /* Make a tracefs interface for controlling dynamic events */
 static __init int init_dynamic_event(void)
 {
-	struct dentry *d_tracer;
 	struct dentry *entry;
+	int ret;
 
-	d_tracer = tracing_init_dentry();
-	if (IS_ERR(d_tracer))
+	ret = tracing_init_dentry();
+	if (ret)
		return 0;
 
-	entry = tracefs_create_file("dynamic_events", 0644, d_tracer,
+	entry = tracefs_create_file("dynamic_events", 0644, NULL,
				    NULL, &dynamic_events_ops);
 
	/* Event list interface */

@@ -402,7 +402,7 @@ void dynevent_arg_init(struct dynevent_arg *arg,
  * whitespace, all followed by a separator, if applicable.  After the
  * first arg string is successfully appended to the command string,
  * the optional @operator is appended, followed by the second arg and
- * and optional @separator.  If no separator was specified when
+ * optional @separator.  If no separator was specified when
  * initializing the arg, a space will be appended.
  */
 void dynevent_arg_pair_init(struct dynevent_arg_pair *arg_pair,

@@ -38,6 +38,7 @@ DEFINE_MUTEX(event_mutex);
 LIST_HEAD(ftrace_events);
 static LIST_HEAD(ftrace_generic_fields);
 static LIST_HEAD(ftrace_common_fields);
+static bool eventdir_initialized;
 
 #define GFP_TRACE (GFP_KERNEL | __GFP_ZERO)
 

@@ -2123,12 +2124,48 @@ event_subsystem_dir(struct trace_array *tr, const char *name,
	return NULL;
 }
 
+static int
+event_define_fields(struct trace_event_call *call)
+{
+	struct list_head *head;
+	int ret = 0;
+
+	/*
+	 * Other events may have the same class. Only update
+	 * the fields if they are not already defined.
+	 */
+	head = trace_get_fields(call);
+	if (list_empty(head)) {
+		struct trace_event_fields *field = call->class->fields_array;
+		unsigned int offset = sizeof(struct trace_entry);
+
+		for (; field->type; field++) {
+			if (field->type == TRACE_FUNCTION_TYPE) {
+				field->define_fields(call);
+				break;
+			}
+
+			offset = ALIGN(offset, field->align);
+			ret = trace_define_field(call, field->type, field->name,
						 offset, field->size,
						 field->is_signed, field->filter_type);
+			if (WARN_ON_ONCE(ret)) {
+				pr_err("error code is %d\n", ret);
+				break;
+			}
+
+			offset += field->size;
+		}
+	}
+
+	return ret;
+}
+
 static int
 event_create_dir(struct dentry *parent, struct trace_event_file *file)
 {
	struct trace_event_call *call = file->event_call;
	struct trace_array *tr = file->tr;
-	struct list_head *head;
	struct dentry *d_events;
	const char *name;
	int ret;

@@ -2162,35 +2199,10 @@ event_create_dir(struct dentry *parent, struct trace_event_file *file)
			  &ftrace_event_id_fops);
 #endif
 
-	/*
-	 * Other events may have the same class. Only update
-	 * the fields if they are not already defined.
-	 */
-	head = trace_get_fields(call);
-	if (list_empty(head)) {
-		struct trace_event_fields *field = call->class->fields_array;
-		unsigned int offset = sizeof(struct trace_entry);
-
-		for (; field->type; field++) {
-			if (field->type == TRACE_FUNCTION_TYPE) {
-				ret = field->define_fields(call);
-				break;
-			}
-
-			offset = ALIGN(offset, field->align);
-			ret = trace_define_field(call, field->type, field->name,
						 offset, field->size,
						 field->is_signed, field->filter_type);
-			if (ret)
-				break;
-
-			offset += field->size;
-		}
-	}
-	if (ret < 0) {
-		pr_warn("Could not initialize trace point events/%s\n",
-			name);
-		return -1;
-	}
+	ret = event_define_fields(call);
+	if (ret < 0) {
+		pr_warn("Could not initialize trace point events/%s\n", name);
+		return ret;
+	}
 
	/*

@@ -2475,7 +2487,10 @@ __trace_add_new_event(struct trace_event_call *call, struct trace_array *tr)
	if (!file)
		return -ENOMEM;
 
-	return event_create_dir(tr->event_dir, file);
+	if (eventdir_initialized)
+		return event_create_dir(tr->event_dir, file);
+	else
+		return event_define_fields(call);
 }
 
 /*

@@ -2493,7 +2508,7 @@ __trace_early_add_new_event(struct trace_event_call *call,
	if (!file)
		return -ENOMEM;
 
-	return 0;
+	return event_define_fields(call);
 }
 
 struct ftrace_module_file_ops;

@@ -3116,14 +3131,13 @@ static inline int register_event_cmds(void) { return 0; }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 /*
- * The top level array has already had its trace_event_file
- * descriptors created in order to allow for early events to
- * be recorded. This function is called after the tracefs has been
- * initialized, and we now have to create the files associated
- * to the events.
+ * The top level array and trace arrays created by boot-time tracing
+ * have already had its trace_event_file descriptors created in order
+ * to allow for early events to be recorded.
+ * This function is called after the tracefs has been initialized,
+ * and we now have to create the files associated to the events.
 */
-static __init void
-__trace_early_add_event_dirs(struct trace_array *tr)
+static void __trace_early_add_event_dirs(struct trace_array *tr)
 {
	struct trace_event_file *file;
	int ret;

@@ -3138,13 +3152,12 @@ __trace_early_add_event_dirs(struct trace_array *tr)
 }
 
 /*
- * For early boot up, the top trace array requires to have
- * a list of events that can be enabled. This must be done before
- * the filesystem is set up in order to allow events to be traced
- * early.
+ * For early boot up, the top trace array and the trace arrays created
+ * by boot-time tracing require to have a list of events that can be
+ * enabled. This must be done before the filesystem is set up in order
+ * to allow events to be traced early.
 */
-static __init void
-__trace_early_add_events(struct trace_array *tr)
+void __trace_early_add_events(struct trace_array *tr)
 {
	struct trace_event_call *call;
	int ret;

@@ -3275,7 +3288,11 @@ int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr)
		goto out;
 
	down_write(&trace_event_sem);
-	__trace_add_event_dirs(tr);
+	/* If tr already has the event list, it is initialized in early boot. */
+	if (unlikely(!list_empty(&tr->events)))
+		__trace_early_add_event_dirs(tr);
+	else
+		__trace_add_event_dirs(tr);
	up_write(&trace_event_sem);
 
 out:

@@ -3431,10 +3448,21 @@ static __init int event_trace_enable_again(void)
 
 early_initcall(event_trace_enable_again);
 
+/* Init fields which doesn't related to the tracefs */
+static __init int event_trace_init_fields(void)
+{
+	if (trace_define_generic_fields())
+		pr_warn("tracing: Failed to allocated generic fields");
+
+	if (trace_define_common_fields())
+		pr_warn("tracing: Failed to allocate common fields");
+
+	return 0;
+}
+
 __init int event_trace_init(void)
 {
	struct trace_array *tr;
-	struct dentry *d_tracer;
	struct dentry *entry;
	int ret;
 

@@ -3442,22 +3470,12 @@ __init int event_trace_init(void)
	if (!tr)
		return -ENODEV;
 
-	d_tracer = tracing_init_dentry();
-	if (IS_ERR(d_tracer))
-		return 0;
-
-	entry = tracefs_create_file("available_events", 0444, d_tracer,
+	entry = tracefs_create_file("available_events", 0444, NULL,
				    tr, &ftrace_avail_fops);
	if (!entry)
		pr_warn("Could not create tracefs 'available_events' entry\n");
 
-	if (trace_define_generic_fields())
-		pr_warn("tracing: Failed to allocated generic fields");
-
-	if (trace_define_common_fields())
-		pr_warn("tracing: Failed to allocate common fields");
-
-	ret = early_event_add_tracer(d_tracer, tr);
+	ret = early_event_add_tracer(NULL, tr);
	if (ret)
		return ret;
 

@@ -3466,6 +3484,9 @@ __init int event_trace_init(void)
	if (ret)
		pr_warn("Failed to register trace events module notifier\n");
 #endif
+
+	eventdir_initialized = true;
+
	return 0;
 }
 

@@ -3474,6 +3495,7 @@ void __init trace_event_init(void)
	event_trace_memsetup();
	init_ftrace_syscalls();
	event_trace_enable();
+	event_trace_init_fields();
 }
 
 #ifdef CONFIG_EVENT_TRACE_STARTUP_TEST

@@ -147,6 +147,8 @@ struct hist_field {
	 */
	unsigned int			var_ref_idx;
	bool				read_once;
+
+	unsigned int			var_str_idx;
 };
 
 static u64 hist_field_none(struct hist_field *field,

@@ -349,6 +351,7 @@ struct hist_trigger_data {
	unsigned int			n_keys;
	unsigned int			n_fields;
	unsigned int			n_vars;
+	unsigned int			n_var_str;
	unsigned int			key_size;
	struct tracing_map_sort_key	sort_keys[TRACING_MAP_SORT_KEYS_MAX];
	unsigned int			n_sort_keys;

@@ -1396,7 +1399,14 @@ static int hist_trigger_elt_data_alloc(struct tracing_map_elt *elt)
		}
	}
 
-	n_str = hist_data->n_field_var_str + hist_data->n_save_var_str;
+	n_str = hist_data->n_field_var_str + hist_data->n_save_var_str +
+		hist_data->n_var_str;
	if (n_str > SYNTH_FIELDS_MAX) {
		hist_elt_data_free(elt_data);
		return -EINVAL;
	}
 
+	BUILD_BUG_ON(STR_VAR_LEN_MAX & (sizeof(u64) - 1));
+
	size = STR_VAR_LEN_MAX;
 

@@ -3279,6 +3289,15 @@ static int check_synth_field(struct synth_event *event,
 
	field = event->fields[field_pos];
 
+	/*
+	 * A dynamic string synth field can accept static or
+	 * dynamic. A static string synth field can only accept a
+	 * same-sized static string, which is checked for later.
+	 */
+	if (strstr(hist_field->type, "char[") && field->is_string
+	    && field->is_dynamic)
+		return 0;
+
	if (strcmp(field->type, hist_field->type) != 0) {
		if (field->size != hist_field->size ||
		    field->is_signed != hist_field->is_signed)

@@ -3651,6 +3670,7 @@ static int create_var_field(struct hist_trigger_data *hist_data,
 {
	struct trace_array *tr = hist_data->event_file->tr;
	unsigned long flags = 0;
+	int ret;
 
	if (WARN_ON(val_idx >= TRACING_MAP_VALS_MAX + TRACING_MAP_VARS_MAX))
		return -EINVAL;

@@ -3665,7 +3685,12 @@ static int create_var_field(struct hist_trigger_data *hist_data,
	if (WARN_ON(hist_data->n_vars > TRACING_MAP_VARS_MAX))
		return -EINVAL;
 
-	return __create_val_field(hist_data, val_idx, file, var_name, expr_str, flags);
+	ret = __create_val_field(hist_data, val_idx, file, var_name, expr_str, flags);
+
+	if (!ret && hist_data->fields[val_idx]->flags & HIST_FIELD_FL_STRING)
+		hist_data->fields[val_idx]->var_str_idx = hist_data->n_var_str++;
+
+	return ret;
 }
 
 static int create_val_fields(struct hist_trigger_data *hist_data,

@@ -4392,6 +4417,22 @@ static void hist_trigger_elt_update(struct hist_trigger_data *hist_data,
		hist_val = hist_field->fn(hist_field, elt, rbe, rec);
		if (hist_field->flags & HIST_FIELD_FL_VAR) {
			var_idx = hist_field->var.idx;
+
+			if (hist_field->flags & HIST_FIELD_FL_STRING) {
+				unsigned int str_start, var_str_idx, idx;
+				char *str, *val_str;
+
+				str_start = hist_data->n_field_var_str +
+					hist_data->n_save_var_str;
+				var_str_idx = hist_field->var_str_idx;
+				idx = str_start + var_str_idx;
+
+				str = elt_data->field_var_str[idx];
+				val_str = (char *)(uintptr_t)hist_val;
+				strscpy(str, val_str, STR_VAR_LEN_MAX);
+
+				hist_val = (u64)(uintptr_t)str;
+			}
			tracing_map_set_var(elt, var_idx, hist_val);
			continue;
		}

@ -20,6 +20,48 @@
|
|||
|
||||
#include "trace_synth.h"
|
||||
|
||||
#undef ERRORS
|
||||
#define ERRORS \
|
||||
C(BAD_NAME, "Illegal name"), \
|
||||
C(CMD_INCOMPLETE, "Incomplete command"), \
|
||||
C(EVENT_EXISTS, "Event already exists"), \
|
||||
C(TOO_MANY_FIELDS, "Too many fields"), \
|
||||
C(INCOMPLETE_TYPE, "Incomplete type"), \
|
||||
C(INVALID_TYPE, "Invalid type"), \
|
||||
C(INVALID_FIELD, "Invalid field"), \
|
||||
C(CMD_TOO_LONG, "Command too long"),
|
||||
|
||||
#undef C
|
||||
#define C(a, b) SYNTH_ERR_##a
|
||||
|
||||
enum { ERRORS };
|
||||
|
||||
#undef C
|
||||
#define C(a, b) b
|
||||
|
||||
static const char *err_text[] = { ERRORS };
|
||||
|
||||
static char last_cmd[MAX_FILTER_STR_VAL];
|
||||
|
||||
static int errpos(const char *str)
|
||||
{
|
||||
return err_pos(last_cmd, str);
|
||||
}
|
||||
|
||||
static void last_cmd_set(char *str)
|
||||
{
|
||||
if (!str)
|
||||
return;
|
||||
|
||||
strncpy(last_cmd, str, MAX_FILTER_STR_VAL - 1);
|
||||
}
|
||||
|
||||
static void synth_err(u8 err_type, u8 err_pos)
|
||||
{
|
||||
tracing_log_err(NULL, "synthetic_events", last_cmd, err_text,
|
||||
err_type, err_pos);
|
||||
}
|
||||
|
||||
static int create_synth_event(int argc, const char **argv);
|
||||
static int synth_event_show(struct seq_file *m, struct dyn_event *ev);
|
||||
static int synth_event_release(struct dyn_event *ev);
|
||||
|
@ -88,7 +130,7 @@ static int synth_event_define_fields(struct trace_event_call *call)
|
|||
|
||||
event->fields[i]->offset = n_u64;
|
||||
|
||||
if (event->fields[i]->is_string) {
|
||||
if (event->fields[i]->is_string && !event->fields[i]->is_dynamic) {
|
||||
offset += STR_VAR_LEN_MAX;
|
||||
n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
|
||||
} else {
|
||||
|
@ -132,13 +174,16 @@ static int synth_field_string_size(char *type)
|
|||
start += sizeof("char[") - 1;
|
||||
|
||||
end = strchr(type, ']');
|
||||
if (!end || end < start)
|
||||
if (!end || end < start || type + strlen(type) > end + 1)
|
||||
return -EINVAL;
|
||||
|
||||
len = end - start;
|
||||
if (len > 3)
|
||||
return -EINVAL;
|
||||
|
||||
if (len == 0)
|
||||
return 0; /* variable-length string */
|
||||
|
||||
strncpy(buf, start, len);
|
||||
buf[len] = '\0';
|
||||
|
||||
|
@ -184,6 +229,8 @@ static int synth_field_size(char *type)
|
|||
size = sizeof(long);
|
||||
else if (strcmp(type, "unsigned long") == 0)
|
||||
size = sizeof(unsigned long);
|
||||
else if (strcmp(type, "bool") == 0)
|
||||
size = sizeof(bool);
|
||||
else if (strcmp(type, "pid_t") == 0)
|
||||
size = sizeof(pid_t);
|
||||
else if (strcmp(type, "gfp_t") == 0)
|
||||
|
@ -226,12 +273,14 @@ static const char *synth_field_fmt(char *type)
|
|||
fmt = "%ld";
|
||||
else if (strcmp(type, "unsigned long") == 0)
|
||||
fmt = "%lu";
|
||||
else if (strcmp(type, "bool") == 0)
|
||||
fmt = "%d";
|
||||
else if (strcmp(type, "pid_t") == 0)
|
||||
fmt = "%d";
|
||||
else if (strcmp(type, "gfp_t") == 0)
|
||||
fmt = "%x";
|
||||
else if (synth_field_is_string(type))
|
||||
fmt = "%s";
|
||||
fmt = "%.*s";
|
||||
|
||||
return fmt;
|
||||
}
|
||||
|
@ -290,10 +339,27 @@ static enum print_line_t print_synth_event(struct trace_iterator *iter,
|
|||
|
||||
/* parameter values */
|
||||
if (se->fields[i]->is_string) {
|
||||
trace_seq_printf(s, print_fmt, se->fields[i]->name,
|
||||
(char *)&entry->fields[n_u64],
|
||||
i == se->n_fields - 1 ? "" : " ");
|
||||
n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
|
||||
if (se->fields[i]->is_dynamic) {
|
||||
u32 offset, data_offset;
|
||||
char *str_field;
|
||||
|
||||
offset = (u32)entry->fields[n_u64];
|
||||
data_offset = offset & 0xffff;
|
||||
|
||||
str_field = (char *)entry + data_offset;
|
||||
|
||||
trace_seq_printf(s, print_fmt, se->fields[i]->name,
|
||||
STR_VAR_LEN_MAX,
|
||||
str_field,
|
||||
i == se->n_fields - 1 ? "" : " ");
|
||||
n_u64++;
|
||||
} else {
|
||||
trace_seq_printf(s, print_fmt, se->fields[i]->name,
|
||||
STR_VAR_LEN_MAX,
|
||||
(char *)&entry->fields[n_u64],
|
||||
i == se->n_fields - 1 ? "" : " ");
|
||||
n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
|
||||
}
|
||||
} else {
|
||||
struct trace_print_flags __flags[] = {
|
||||
__def_gfpflag_names, {-1, NULL} };
@@ -325,16 +391,52 @@ static struct trace_event_functions synth_event_funcs = {
	.trace		= print_synth_event
};

static unsigned int trace_string(struct synth_trace_event *entry,
				 struct synth_event *event,
				 char *str_val,
				 bool is_dynamic,
				 unsigned int data_size,
				 unsigned int *n_u64)
{
	unsigned int len = 0;
	char *str_field;

	if (is_dynamic) {
		u32 data_offset;

		data_offset = offsetof(typeof(*entry), fields);
		data_offset += event->n_u64 * sizeof(u64);
		data_offset += data_size;

		str_field = (char *)entry + data_offset;

		len = strlen(str_val) + 1;
		strscpy(str_field, str_val, len);

		data_offset |= len << 16;
		*(u32 *)&entry->fields[*n_u64] = data_offset;

		(*n_u64)++;
	} else {
		str_field = (char *)&entry->fields[*n_u64];

		strscpy(str_field, str_val, STR_VAR_LEN_MAX);
		(*n_u64) += STR_VAR_LEN_MAX / sizeof(u64);
	}

	return len;
}
static notrace void trace_event_raw_event_synth(void *__data,
						u64 *var_ref_vals,
						unsigned int *var_ref_idx)
{
	unsigned int i, n_u64, val_idx, len, data_size = 0;
	struct trace_event_file *trace_file = __data;
	struct synth_trace_event *entry;
	struct trace_event_buffer fbuffer;
	struct trace_buffer *buffer;
	struct synth_event *event;
	unsigned int i, n_u64, val_idx;
	int fields_size = 0;

	event = trace_file->event_call->data;

@@ -344,6 +446,18 @@ static notrace void trace_event_raw_event_synth(void *__data,

	fields_size = event->n_u64 * sizeof(u64);

	for (i = 0; i < event->n_dynamic_fields; i++) {
		unsigned int field_pos = event->dynamic_fields[i]->field_pos;
		char *str_val;

		val_idx = var_ref_idx[field_pos];
		str_val = (char *)(long)var_ref_vals[val_idx];

		len = strlen(str_val) + 1;

		fields_size += len;
	}

	/*
	 * Avoid ring buffer recursion detection, as this event
	 * is being performed within another event.

@@ -360,10 +474,11 @@ static notrace void trace_event_raw_event_synth(void *__data,
		val_idx = var_ref_idx[i];
		if (event->fields[i]->is_string) {
			char *str_val = (char *)(long)var_ref_vals[val_idx];
			char *str_field = (char *)&entry->fields[n_u64];

			strscpy(str_field, str_val, STR_VAR_LEN_MAX);
			n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
			len = trace_string(entry, event, str_val,
					   event->fields[i]->is_dynamic,
					   data_size, &n_u64);
			data_size += len; /* only dynamic string increments */
		} else {
			struct synth_field *field = event->fields[i];
			u64 val = var_ref_vals[val_idx];
@@ -422,8 +537,13 @@ static int __set_synth_event_print_fmt(struct synth_event *event,
	pos += snprintf(buf + pos, LEN_OR_ZERO, "\"");

	for (i = 0; i < event->n_fields; i++) {
		pos += snprintf(buf + pos, LEN_OR_ZERO,
				", REC->%s", event->fields[i]->name);
		if (event->fields[i]->is_string &&
		    event->fields[i]->is_dynamic)
			pos += snprintf(buf + pos, LEN_OR_ZERO,
					", __get_str(%s)", event->fields[i]->name);
		else
			pos += snprintf(buf + pos, LEN_OR_ZERO,
					", REC->%s", event->fields[i]->name);
	}

#undef LEN_OR_ZERO

@@ -465,13 +585,16 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
	struct synth_field *field;
	const char *prefix = NULL, *field_type = argv[0], *field_name, *array;
	int len, ret = 0;
	ssize_t size;

	if (field_type[0] == ';')
		field_type++;

	if (!strcmp(field_type, "unsigned")) {
		if (argc < 3)
		if (argc < 3) {
			synth_err(SYNTH_ERR_INCOMPLETE_TYPE, errpos(field_type));
			return ERR_PTR(-EINVAL);
		}
		prefix = "unsigned ";
		field_type = argv[1];
		field_name = argv[2];

@@ -497,12 +620,23 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
		ret = -ENOMEM;
		goto free;
	}
	if (!is_good_name(field->name)) {
		synth_err(SYNTH_ERR_BAD_NAME, errpos(field_name));
		ret = -EINVAL;
		goto free;
	}
	if (field_type[0] == ';')
		field_type++;
	len = strlen(field_type) + 1;
	if (array)
		len += strlen(array);

	if (array) {
		int l = strlen(array);

		if (l && array[l - 1] == ';')
			l--;
		len += l;
	}
	if (prefix)
		len += strlen(prefix);

@@ -520,17 +654,40 @@ static struct synth_field *parse_synth_field(int argc, const char **argv,
		field->type[len - 1] = '\0';
	}

	field->size = synth_field_size(field->type);
	if (!field->size) {
	size = synth_field_size(field->type);
	if (size < 0) {
		synth_err(SYNTH_ERR_INVALID_TYPE, errpos(field_type));
		ret = -EINVAL;
		goto free;
	} else if (size == 0) {
		if (synth_field_is_string(field->type)) {
			char *type;

			type = kzalloc(sizeof("__data_loc ") + strlen(field->type) + 1, GFP_KERNEL);
			if (!type) {
				ret = -ENOMEM;
				goto free;
			}

			strcat(type, "__data_loc ");
			strcat(type, field->type);
			kfree(field->type);
			field->type = type;

			field->is_dynamic = true;
			size = sizeof(u64);
		} else {
			synth_err(SYNTH_ERR_INVALID_TYPE, errpos(field_type));
			ret = -EINVAL;
			goto free;
		}
	}
	field->size = size;

	if (synth_field_is_string(field->type))
		field->is_string = true;

	field->is_signed = synth_field_signed(field->type);

 out:
	return field;
 free:
@@ -661,6 +818,7 @@ static void free_synth_event(struct synth_event *event)
		free_synth_field(event->fields[i]);

	kfree(event->fields);
	kfree(event->dynamic_fields);
	kfree(event->name);
	kfree(event->class.system);
	free_synth_tracepoint(event->tp);

@@ -671,8 +829,8 @@ static void free_synth_event(struct synth_event *event)
static struct synth_event *alloc_synth_event(const char *name, int n_fields,
					     struct synth_field **fields)
{
	unsigned int i, j, n_dynamic_fields = 0;
	struct synth_event *event;
	unsigned int i;

	event = kzalloc(sizeof(*event), GFP_KERNEL);
	if (!event) {

@@ -694,11 +852,33 @@ static struct synth_event *alloc_synth_event(const char *name, int n_fields,
		goto out;
	}

	for (i = 0; i < n_fields; i++)
		if (fields[i]->is_dynamic)
			n_dynamic_fields++;

	if (n_dynamic_fields) {
		event->dynamic_fields = kcalloc(n_dynamic_fields,
						sizeof(*event->dynamic_fields),
						GFP_KERNEL);
		if (!event->dynamic_fields) {
			free_synth_event(event);
			event = ERR_PTR(-ENOMEM);
			goto out;
		}
	}

	dyn_event_init(&event->devent, &synth_event_ops);

	for (i = 0; i < n_fields; i++)
	for (i = 0, j = 0; i < n_fields; i++) {
		event->fields[i] = fields[i];

		if (fields[i]->is_dynamic) {
			event->dynamic_fields[j] = fields[i];
			event->dynamic_fields[j]->field_pos = i;
			event->dynamic_fields[j++] = fields[i];
			event->n_dynamic_fields++;
		}
	}
	event->n_fields = n_fields;
 out:
	return event;
@@ -710,6 +890,10 @@ static int synth_event_check_arg_fn(void *data)
	int size;

	size = synth_field_size((char *)arg_pair->lhs);
	if (size == 0) {
		if (strstr((char *)arg_pair->lhs, "["))
			return 0;
	}

	return size ? 0 : -EINVAL;
}

@@ -971,12 +1155,47 @@ int synth_event_gen_cmd_array_start(struct dynevent_cmd *cmd, const char *name,
}
EXPORT_SYMBOL_GPL(synth_event_gen_cmd_array_start);

static int save_cmdstr(int argc, const char *name, const char **argv)
{
	struct seq_buf s;
	char *buf;
	int i;

	buf = kzalloc(MAX_DYNEVENT_CMD_LEN, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	seq_buf_init(&s, buf, MAX_DYNEVENT_CMD_LEN);

	seq_buf_puts(&s, name);

	for (i = 0; i < argc; i++) {
		seq_buf_putc(&s, ' ');
		seq_buf_puts(&s, argv[i]);
	}

	if (!seq_buf_buffer_left(&s)) {
		synth_err(SYNTH_ERR_CMD_TOO_LONG, 0);
		kfree(buf);
		return -EINVAL;
	}
	buf[s.len] = 0;
	last_cmd_set(buf);

	kfree(buf);
	return 0;
}
static int __create_synth_event(int argc, const char *name, const char **argv)
{
	struct synth_field *field, *fields[SYNTH_FIELDS_MAX];
	struct synth_event *event = NULL;
	int i, consumed = 0, n_fields = 0, ret = 0;

	ret = save_cmdstr(argc, name, argv);
	if (ret)
		return ret;

	/*
	 * Argument syntax:
	 *  - Add synthetic event: <event_name> field[;field] ...

@@ -984,13 +1203,22 @@ static int __create_synth_event(int argc, const char *name, const char **argv)
	 *      where 'field' = type field_name
	 */

	if (name[0] == '\0' || argc < 1)
	if (name[0] == '\0' || argc < 1) {
		synth_err(SYNTH_ERR_CMD_INCOMPLETE, 0);
		return -EINVAL;
	}

	mutex_lock(&event_mutex);

	if (!is_good_name(name)) {
		synth_err(SYNTH_ERR_BAD_NAME, errpos(name));
		ret = -EINVAL;
		goto out;
	}

	event = find_synth_event(name);
	if (event) {
		synth_err(SYNTH_ERR_EVENT_EXISTS, errpos(name));
		ret = -EEXIST;
		goto out;
	}

@@ -999,6 +1227,7 @@ static int __create_synth_event(int argc, const char *name, const char **argv)
		if (strcmp(argv[i], ";") == 0)
			continue;
		if (n_fields == SYNTH_FIELDS_MAX) {
			synth_err(SYNTH_ERR_TOO_MANY_FIELDS, 0);
			ret = -EINVAL;
			goto err;
		}

@@ -1013,6 +1242,7 @@ static int __create_synth_event(int argc, const char *name, const char **argv)
	}

	if (i < argc && strcmp(argv[i], ";") != 0) {
		synth_err(SYNTH_ERR_INVALID_FIELD, errpos(argv[i]));
		ret = -EINVAL;
		goto err;
	}
@@ -1198,10 +1428,9 @@ void synth_event_cmd_init(struct dynevent_cmd *cmd, char *buf, int maxlen)
EXPORT_SYMBOL_GPL(synth_event_cmd_init);

static inline int
__synth_event_trace_start(struct trace_event_file *file,
			  struct synth_event_trace_state *trace_state)
__synth_event_trace_init(struct trace_event_file *file,
			 struct synth_event_trace_state *trace_state)
{
	int entry_size, fields_size = 0;
	int ret = 0;

	memset(trace_state, '\0', sizeof(*trace_state));

@@ -1211,7 +1440,7 @@ __synth_event_trace_start(struct trace_event_file *file,
	 * ENABLED bit is set (which attaches the probe thus allowing
	 * this code to be called, etc). Because this is called
	 * directly by the user, we don't have that but we still need
	 * to honor not logging when disabled. For the the iterated
	 * to honor not logging when disabled. For the iterated
	 * trace case, we save the enabled state upon start and just
	 * ignore the following data calls.
	 */

@@ -1223,8 +1452,20 @@ __synth_event_trace_start(struct trace_event_file *file,
	}

	trace_state->event = file->event_call->data;
 out:
	return ret;
}

static inline int
__synth_event_trace_start(struct trace_event_file *file,
			  struct synth_event_trace_state *trace_state,
			  int dynamic_fields_size)
{
	int entry_size, fields_size = 0;
	int ret = 0;

	fields_size = trace_state->event->n_u64 * sizeof(u64);
	fields_size += dynamic_fields_size;

	/*
	 * Avoid ring buffer recursion detection, as this event

@@ -1241,7 +1482,7 @@ __synth_event_trace_start(struct trace_event_file *file,
		ring_buffer_nest_end(trace_state->buffer);
		ret = -EINVAL;
	}
 out:

	return ret;
}
@@ -1274,23 +1515,46 @@ __synth_event_trace_end(struct synth_event_trace_state *trace_state)
 */
int synth_event_trace(struct trace_event_file *file, unsigned int n_vals, ...)
{
	unsigned int i, n_u64, len, data_size = 0;
	struct synth_event_trace_state state;
	unsigned int i, n_u64;
	va_list args;
	int ret;

	ret = __synth_event_trace_start(file, &state);
	ret = __synth_event_trace_init(file, &state);
	if (ret) {
		if (ret == -ENOENT)
			ret = 0; /* just disabled, not really an error */
		return ret;
	}

	if (state.event->n_dynamic_fields) {
		va_start(args, n_vals);

		for (i = 0; i < state.event->n_fields; i++) {
			u64 val = va_arg(args, u64);

			if (state.event->fields[i]->is_string &&
			    state.event->fields[i]->is_dynamic) {
				char *str_val = (char *)(long)val;

				data_size += strlen(str_val) + 1;
			}
		}

		va_end(args);
	}

	ret = __synth_event_trace_start(file, &state, data_size);
	if (ret)
		return ret;

	if (n_vals != state.event->n_fields) {
		ret = -EINVAL;
		goto out;
	}

	data_size = 0;

	va_start(args, n_vals);
	for (i = 0, n_u64 = 0; i < state.event->n_fields; i++) {
		u64 val;

@@ -1299,10 +1563,11 @@ int synth_event_trace(struct trace_event_file *file, unsigned int n_vals, ...)

		if (state.event->fields[i]->is_string) {
			char *str_val = (char *)(long)val;
			char *str_field = (char *)&state.entry->fields[n_u64];

			strscpy(str_field, str_val, STR_VAR_LEN_MAX);
			n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
			len = trace_string(state.entry, state.event, str_val,
					   state.event->fields[i]->is_dynamic,
					   data_size, &n_u64);
			data_size += len; /* only dynamic string increments */
		} else {
			struct synth_field *field = state.event->fields[i];
@@ -1355,29 +1620,46 @@ EXPORT_SYMBOL_GPL(synth_event_trace);
int synth_event_trace_array(struct trace_event_file *file, u64 *vals,
			    unsigned int n_vals)
{
	unsigned int i, n_u64, field_pos, len, data_size = 0;
	struct synth_event_trace_state state;
	unsigned int i, n_u64;
	char *str_val;
	int ret;

	ret = __synth_event_trace_start(file, &state);
	ret = __synth_event_trace_init(file, &state);
	if (ret) {
		if (ret == -ENOENT)
			ret = 0; /* just disabled, not really an error */
		return ret;
	}

	if (state.event->n_dynamic_fields) {
		for (i = 0; i < state.event->n_dynamic_fields; i++) {
			field_pos = state.event->dynamic_fields[i]->field_pos;
			str_val = (char *)(long)vals[field_pos];
			len = strlen(str_val) + 1;
			data_size += len;
		}
	}

	ret = __synth_event_trace_start(file, &state, data_size);
	if (ret)
		return ret;

	if (n_vals != state.event->n_fields) {
		ret = -EINVAL;
		goto out;
	}

	data_size = 0;

	for (i = 0, n_u64 = 0; i < state.event->n_fields; i++) {
		if (state.event->fields[i]->is_string) {
			char *str_val = (char *)(long)vals[i];
			char *str_field = (char *)&state.entry->fields[n_u64];

			strscpy(str_field, str_val, STR_VAR_LEN_MAX);
			n_u64 += STR_VAR_LEN_MAX / sizeof(u64);
			len = trace_string(state.entry, state.event, str_val,
					   state.event->fields[i]->is_dynamic,
					   data_size, &n_u64);
			data_size += len; /* only dynamic string increments */
		} else {
			struct synth_field *field = state.event->fields[i];
			u64 val = vals[i];
@@ -1445,9 +1727,17 @@ int synth_event_trace_start(struct trace_event_file *file,
	if (!trace_state)
		return -EINVAL;

	ret = __synth_event_trace_start(file, trace_state);
	if (ret == -ENOENT)
		ret = 0; /* just disabled, not really an error */
	ret = __synth_event_trace_init(file, trace_state);
	if (ret) {
		if (ret == -ENOENT)
			ret = 0; /* just disabled, not really an error */
		return ret;
	}

	if (trace_state->event->n_dynamic_fields)
		return -ENOTSUPP;

	ret = __synth_event_trace_start(file, trace_state, 0);

	return ret;
}

@@ -1508,6 +1798,11 @@ static int __synth_event_add_val(const char *field_name, u64 val,
		char *str_val = (char *)(long)val;
		char *str_field;

		if (field->is_dynamic) { /* add_val can't do dynamic strings */
			ret = -EINVAL;
			goto out;
		}

		if (!str_val) {
			ret = -EINVAL;
			goto out;
@@ -1679,14 +1974,22 @@ static int __synth_event_show(struct seq_file *m, struct synth_event *event)
{
	struct synth_field *field;
	unsigned int i;
	char *type, *t;

	seq_printf(m, "%s\t", event->name);

	for (i = 0; i < event->n_fields; i++) {
		field = event->fields[i];

		type = field->type;
		t = strstr(type, "__data_loc");
		if (t) { /* __data_loc belongs in format but not event desc */
			t += sizeof("__data_loc");
			type = t;
		}

		/* parameter values */
		seq_printf(m, "%s %s%s", field->type, field->name,
		seq_printf(m, "%s %s%s", type, field->name,
			   i == event->n_fields - 1 ? "" : "; ");
	}

@@ -1754,25 +2057,31 @@ static const struct file_operations synth_events_fops = {
	.release        = seq_release,
};

static __init int trace_events_synth_init(void)
/*
 * Register dynevent at core_initcall. This allows kernel to setup kprobe
 * events in postcore_initcall without tracefs.
 */
static __init int trace_events_synth_init_early(void)
{
	struct dentry *entry = NULL;
	struct dentry *d_tracer;
	int err = 0;

	err = dyn_event_register(&synth_event_ops);
	if (err) {
	if (err)
		pr_warn("Could not register synth_event_ops\n");
	return err;
	}

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer)) {
		err = PTR_ERR(d_tracer);
		return err;
	}
core_initcall(trace_events_synth_init_early);

static __init int trace_events_synth_init(void)
{
	struct dentry *entry = NULL;
	int err = 0;
	err = tracing_init_dentry();
	if (err)
		goto err;
	}

	entry = tracefs_create_file("synthetic_events", 0644, d_tracer,
	entry = tracefs_create_file("synthetic_events", 0644, NULL,
				    NULL, &synth_events_fops);
	if (!entry) {
		err = -ENODEV;
@@ -34,10 +34,14 @@ enum {
	TRACE_FUNC_OPT_STACK	= 0x1,
};

static int allocate_ftrace_ops(struct trace_array *tr)
int ftrace_allocate_ftrace_ops(struct trace_array *tr)
{
	struct ftrace_ops *ops;

	/* The top level array uses the "global_ops" */
	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
		return 0;

	ops = kzalloc(sizeof(*ops), GFP_KERNEL);
	if (!ops)
		return -ENOMEM;

@@ -48,15 +52,19 @@ static int allocate_ftrace_ops(struct trace_array *tr)

	tr->ops = ops;
	ops->private = tr;

	return 0;
}

void ftrace_free_ftrace_ops(struct trace_array *tr)
{
	kfree(tr->ops);
	tr->ops = NULL;
}

int ftrace_create_function_files(struct trace_array *tr,
				 struct dentry *parent)
{
	int ret;

	/*
	 * The top level array uses the "global_ops", and the files are
	 * created on boot up.

@@ -64,9 +72,8 @@ int ftrace_create_function_files(struct trace_array *tr,
	if (tr->flags & TRACE_ARRAY_FL_GLOBAL)
		return 0;

	ret = allocate_ftrace_ops(tr);
	if (ret)
		return ret;
	if (!tr->ops)
		return -EINVAL;

	ftrace_create_filter_files(tr->ops, parent);

@@ -76,8 +83,7 @@ int ftrace_create_function_files(struct trace_array *tr,
void ftrace_destroy_function_files(struct trace_array *tr)
{
	ftrace_destroy_filter_files(tr->ops);
	kfree(tr->ops);
	tr->ops = NULL;
	ftrace_free_ftrace_ops(tr);
}

static int function_trace_init(struct trace_array *tr)
@@ -1336,13 +1336,13 @@ static const struct file_operations graph_depth_fops = {

static __init int init_graph_tracefs(void)
{
	struct dentry *d_tracer;
	int ret;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return 0;

	trace_create_file("max_graph_depth", 0644, d_tracer,
	trace_create_file("max_graph_depth", 0644, NULL,
			  NULL, &graph_depth_fops);

	return 0;

@@ -538,14 +538,14 @@ static const struct file_operations window_fops = {
 */
static int init_tracefs(void)
{
	struct dentry *d_tracer;
	int ret;
	struct dentry *top_dir;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return -ENOMEM;

	top_dir = tracefs_create_dir("hwlat_detector", d_tracer);
	top_dir = tracefs_create_dir("hwlat_detector", NULL);
	if (!top_dir)
		return -ENOMEM;
@@ -718,6 +718,9 @@ static int trace_kprobe_create(int argc, const char *argv[])
 *  p[:[GRP/]EVENT] [MOD:]KSYM[+OFFS]|KADDR [FETCHARGS]
 *  - Add kretprobe:
 *      r[MAXACTIVE][:[GRP/]EVENT] [MOD:]KSYM[+0] [FETCHARGS]
 *    Or
 *      p[:[GRP/]EVENT] [MOD:]KSYM[+0]%return [FETCHARGS]
 *
 * Fetch args:
 *  $retval	: fetch return value
 *  $stack	: fetch stack address

@@ -747,7 +750,6 @@ static int trace_kprobe_create(int argc, const char *argv[])
	switch (argv[0][0]) {
	case 'r':
		is_return = true;
		flags |= TPARG_FL_RETURN;
		break;
	case 'p':
		break;

@@ -805,12 +807,26 @@ static int trace_kprobe_create(int argc, const char *argv[])
	symbol = kstrdup(argv[1], GFP_KERNEL);
	if (!symbol)
		return -ENOMEM;

	tmp = strchr(symbol, '%');
	if (tmp) {
		if (!strcmp(tmp, "%return")) {
			*tmp = '\0';
			is_return = true;
		} else {
			trace_probe_log_err(tmp - symbol, BAD_ADDR_SUFFIX);
			goto parse_error;
		}
	}
	/* TODO: support .init module functions */
	ret = traceprobe_split_symbol_offset(symbol, &offset);
	if (ret || offset < 0 || offset > UINT_MAX) {
		trace_probe_log_err(0, BAD_PROBE_ADDR);
		goto parse_error;
	}
	if (is_return)
		flags |= TPARG_FL_RETURN;
	if (kprobe_on_func_entry(NULL, symbol, offset))
		flags |= TPARG_FL_FENTRY;
	if (offset && is_return && !(flags & TPARG_FL_FENTRY)) {
@@ -1881,8 +1897,8 @@ static __init void setup_boot_kprobe_events(void)
}

/*
 * Register dynevent at subsys_initcall. This allows kernel to setup kprobe
 * events in fs_initcall without tracefs.
 * Register dynevent at core_initcall. This allows kernel to setup kprobe
 * events in postcore_initcall without tracefs.
 */
static __init int init_kprobe_trace_early(void)
{

@@ -1897,19 +1913,19 @@ static __init int init_kprobe_trace_early(void)

	return 0;
}
subsys_initcall(init_kprobe_trace_early);
core_initcall(init_kprobe_trace_early);

/* Make a tracefs interface for controlling probe points */
static __init int init_kprobe_trace(void)
{
	struct dentry *d_tracer;
	int ret;
	struct dentry *entry;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return 0;

	entry = tracefs_create_file("kprobe_events", 0644, d_tracer,
	entry = tracefs_create_file("kprobe_events", 0644, NULL,
				    NULL, &kprobe_events_ops);

	/* Event list interface */

@@ -1917,7 +1933,7 @@ static __init int init_kprobe_trace(void)
		pr_warn("Could not create tracefs 'kprobe_events' entry\n");

	/* Profile interface */
	entry = tracefs_create_file("kprobe_profile", 0444, d_tracer,
	entry = tracefs_create_file("kprobe_profile", 0444, NULL,
				    NULL, &kprobe_profile_ops);

	if (!entry)
@@ -367,13 +367,13 @@ static const struct file_operations ftrace_formats_fops = {

static __init int init_trace_printk_function_export(void)
{
	struct dentry *d_tracer;
	int ret;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return 0;

	trace_create_file("printk_formats", 0444, d_tracer,
	trace_create_file("printk_formats", 0444, NULL,
			  NULL, &ftrace_formats_fops);

	return 0;

@@ -16,7 +16,6 @@
#include <linux/tracefs.h>
#include <linux/types.h>
#include <linux/string.h>
#include <linux/ctype.h>
#include <linux/ptrace.h>
#include <linux/perf_event.h>
#include <linux/kprobes.h>

@@ -348,18 +347,6 @@ bool trace_probe_match_command_args(struct trace_probe *tp,
#define trace_probe_for_each_link_rcu(pos, tp)	\
	list_for_each_entry_rcu(pos, &(tp)->event->files, list)

/* Check the name is good for event/group/fields */
static inline bool is_good_name(const char *name)
{
	if (!isalpha(*name) && *name != '_')
		return false;
	while (*++name != '\0') {
		if (!isalpha(*name) && !isdigit(*name) && *name != '_')
			return false;
	}
	return true;
}
#define TPARG_FL_RETURN BIT(0)
#define TPARG_FL_KERNEL BIT(1)
#define TPARG_FL_FENTRY BIT(2)

@@ -404,6 +391,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
	C(MAXACT_TOO_BIG,	"Maxactive is too big"),		\
	C(BAD_PROBE_ADDR,	"Invalid probed address or symbol"),	\
	C(BAD_RETPROBE,		"Retprobe address must be an function entry"), \
	C(BAD_ADDR_SUFFIX,	"Invalid probed address suffix"), \
	C(NO_GROUP_NAME,	"Group name is not specified"),		\
	C(GROUP_TOO_LONG,	"Group name is too long"),		\
	C(BAD_GROUP_NAME,	"Group name must follow the same rules as C identifiers"), \

@@ -554,20 +554,20 @@ __setup("stacktrace", enable_stacktrace);

static __init int stack_trace_init(void)
{
	struct dentry *d_tracer;
	int ret;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return 0;

	trace_create_file("stack_max_size", 0644, d_tracer,
	trace_create_file("stack_max_size", 0644, NULL,
			  &stack_trace_max_size, &stack_max_size_fops);

	trace_create_file("stack_trace", 0444, d_tracer,
	trace_create_file("stack_trace", 0444, NULL,
			  NULL, &stack_trace_fops);

#ifdef CONFIG_DYNAMIC_FTRACE
	trace_create_file("stack_trace_filter", 0644, d_tracer,
	trace_create_file("stack_trace_filter", 0644, NULL,
			  &trace_ops, &stack_trace_filter_fops);
#endif

@@ -276,13 +276,13 @@ static const struct file_operations tracing_stat_fops = {

static int tracing_stat_init(void)
{
	struct dentry *d_tracing;
	int ret;

	d_tracing = tracing_init_dentry();
	if (IS_ERR(d_tracing))
	ret = tracing_init_dentry();
	if (ret)
		return -ENODEV;

	stat_dir = tracefs_create_dir("trace_stat", d_tracing);
	stat_dir = tracefs_create_dir("trace_stat", NULL);
	if (!stat_dir) {
		pr_warn("Could not create tracefs 'trace_stat' entry\n");
		return -ENOMEM;
@@ -7,7 +7,7 @@

#define SYNTH_SYSTEM		"synthetic"
#define SYNTH_FIELDS_MAX	32

#define STR_VAR_LEN_MAX		32 /* must be multiple of sizeof(u64) */
#define STR_VAR_LEN_MAX		MAX_FILTER_STR_VAL /* must be multiple of sizeof(u64) */

struct synth_field {
	char *type;

@@ -16,6 +16,8 @@ struct synth_field {
	unsigned int offset;
	bool is_signed;
	bool is_string;
	bool is_dynamic;
	unsigned int field_pos;

};

struct synth_event {

@@ -24,6 +26,8 @@ struct synth_event {
	char *name;
	struct synth_field **fields;
	unsigned int n_fields;
	struct synth_field **dynamic_fields;
	unsigned int n_dynamic_fields;
	unsigned int n_u64;
	struct trace_event_class class;
	struct trace_event_call call;
@@ -528,7 +528,7 @@ static int register_trace_uprobe(struct trace_uprobe *tu)

/*
 * Argument syntax:
 *  - Add uprobe: p|r[:[GRP/]EVENT] PATH:OFFSET [FETCHARGS]
 *  - Add uprobe: p|r[:[GRP/]EVENT] PATH:OFFSET[%return][(REF)] [FETCHARGS]
 */
static int trace_uprobe_create(int argc, const char **argv)
{

@@ -617,6 +617,19 @@ static int trace_uprobe_create(int argc, const char **argv)
		}
	}

	/* Check if there is %return suffix */
	tmp = strchr(arg, '%');
	if (tmp) {
		if (!strcmp(tmp, "%return")) {
			*tmp = '\0';
			is_return = true;
		} else {
			trace_probe_log_err(tmp - filename, BAD_ADDR_SUFFIX);
			ret = -EINVAL;
			goto fail_address_parse;
		}
	}

	/* Parse uprobe offset. */
	ret = kstrtoul(arg, 0, &offset);
	if (ret) {

@@ -1625,21 +1638,20 @@ void destroy_local_trace_uprobe(struct trace_event_call *event_call)
/* Make a trace interface for controlling probe points */
static __init int init_uprobe_trace(void)
{
	struct dentry *d_tracer;
	int ret;

	ret = dyn_event_register(&trace_uprobe_ops);
	if (ret)
		return ret;

	d_tracer = tracing_init_dentry();
	if (IS_ERR(d_tracer))
	ret = tracing_init_dentry();
	if (ret)
		return 0;

	trace_create_file("uprobe_events", 0644, d_tracer,
	trace_create_file("uprobe_events", 0644, NULL,
			  NULL, &uprobe_events_ops);
	/* Profile interface */
	trace_create_file("uprobe_profile", 0444, d_tracer,
	trace_create_file("uprobe_profile", 0444, NULL,
			  NULL, &uprobe_profile_ops);
	return 0;
}
@@ -260,7 +260,7 @@ int tracing_map_add_var(struct tracing_map *map)
 * to use cmp_fn.
 *
 * A key can be a subset of a compound key; for that purpose, the
 * offset param is used to describe where within the the compound key
 * offset param is used to describe where within the compound key
 * the key referenced by this key field resides.
 *
 * Return: The index identifying the field in the map and associated
@@ -14,18 +14,19 @@
#include <linux/kernel.h>
#include <linux/bootconfig.h>

static int xbc_show_value(struct xbc_node *node)
static int xbc_show_value(struct xbc_node *node, bool semicolon)
{
	const char *val;
	const char *val, *eol;
	char q;
	int i = 0;

	eol = semicolon ? ";\n" : "\n";
	xbc_array_for_each_value(node, val) {
		if (strchr(val, '"'))
			q = '\'';
		else
			q = '"';
		printf("%c%s%c%s", q, val, q, node->next ? ", " : ";\n");
		printf("%c%s%c%s", q, val, q, node->next ? ", " : eol);
		i++;
	}
	return i;
|
||||
|
@ -53,7 +54,7 @@ static void xbc_show_compact_tree(void)
|
|||
continue;
|
||||
} else if (cnode && xbc_node_is_value(cnode)) {
|
||||
printf("%s = ", xbc_node_get_data(node));
|
||||
xbc_show_value(cnode);
|
||||
xbc_show_value(cnode, true);
|
||||
} else {
|
||||
printf("%s;\n", xbc_node_get_data(node));
|
||||
}
|
||||
|
@ -77,8 +78,28 @@ static void xbc_show_compact_tree(void)
|
|||
}
|
||||
}
|
||||
|
||||
static void xbc_show_list(void)
|
||||
{
|
||||
char key[XBC_KEYLEN_MAX];
|
||||
struct xbc_node *leaf;
|
||||
const char *val;
|
||||
int ret = 0;
|
||||
|
||||
xbc_for_each_key_value(leaf, val) {
|
||||
ret = xbc_node_compose_key(leaf, key, XBC_KEYLEN_MAX);
|
||||
if (ret < 0)
|
||||
break;
|
||||
printf("%s = ", key);
|
||||
if (!val || val[0] == '\0') {
|
||||
printf("\"\"\n");
|
||||
continue;
|
||||
}
|
||||
xbc_show_value(xbc_node_get_child(leaf), false);
|
||||
}
|
||||
}
|
||||
|
||||
/* Simple real checksum */
|
||||
int checksum(unsigned char *buf, int len)
|
||||
static int checksum(unsigned char *buf, int len)
|
||||
{
|
||||
int i, sum = 0;
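The checksum above just accumulates every unsigned byte of the buffer into an int. As an illustration only (the function name and temp path below are mine, not from the tool), the same byte-sum can be sketched in shell:

```shell
#!/bin/sh
# Hypothetical sketch of bootconfig's "simple real checksum":
# sum every unsigned byte of a file into a single integer.
byte_checksum() { # file
	od -An -v -tu1 "$1" | tr -s ' ' '\n' | awk 'NF { s += $1 } END { print s + 0 }'
}

printf 'AB' > /tmp/xbc_csum_demo
byte_checksum /tmp/xbc_csum_demo   # 'A'(65) + 'B'(66) = 131
```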
@@ -90,7 +111,7 @@ int checksum(unsigned char *buf, int len)
 
 #define PAGE_SIZE 4096
 
-int load_xbc_fd(int fd, char **buf, int size)
+static int load_xbc_fd(int fd, char **buf, int size)
 {
 	int ret;
 
@@ -107,7 +128,7 @@ int load_xbc_fd(int fd, char **buf, int size)
 }
 
 /* Return the read size or -errno */
-int load_xbc_file(const char *path, char **buf)
+static int load_xbc_file(const char *path, char **buf)
 {
 	struct stat stat;
 	int fd, ret;
 
@@ -126,7 +147,7 @@ int load_xbc_file(const char *path, char **buf)
 	return ret;
 }
 
-int load_xbc_from_initrd(int fd, char **buf)
+static int load_xbc_from_initrd(int fd, char **buf)
 {
 	struct stat stat;
 	int ret;
 
@@ -195,10 +216,55 @@ int load_xbc_from_initrd(int fd, char **buf)
 	return size;
 }
 
-int show_xbc(const char *path)
+static void show_xbc_error(const char *data, const char *msg, int pos)
 {
+	int lin = 1, col, i;
+
+	if (pos < 0) {
+		pr_err("Error: %s.\n", msg);
+		return;
+	}
+
+	/* Note that pos starts from 0 but lin and col should start from 1. */
+	col = pos + 1;
+	for (i = 0; i < pos; i++) {
+		if (data[i] == '\n') {
+			lin++;
+			col = pos - i;
+		}
+	}
+	pr_err("Parse Error: %s at %d:%d\n", msg, lin, col);
+
+}
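show_xbc_error converts a 0-based byte offset into a 1-based line:column pair by counting newlines up to the error position. A standalone sketch of the same arithmetic (the function name and demo file are illustrative, not part of the tool):

```shell
#!/bin/sh
# Illustrative sketch of show_xbc_error's position math: pos is a
# 0-based byte offset; line and column are reported 1-based.
pos_to_linecol() { # file pos
	awk -v pos="$2" '
	BEGIN { lin = 1; col = pos + 1; off = 0 }
	{
		len = length($0) + 1          # +1 for the newline awk strips
		if (off + len <= pos) {       # the error lies beyond this line
			off += len
			lin++
			col = pos - off + 1
		}
	}
	END { printf "%d:%d\n", lin, col }' "$1"
}

printf 'key = 1\nbadline\n' > /tmp/xbc_err_demo
pos_to_linecol /tmp/xbc_err_demo 8    # offset 8 is the "b" on line 2
```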
+static int init_xbc_with_error(char *buf, int len)
+{
+	char *copy = strdup(buf);
+	const char *msg;
+	int ret, pos;
+
+	if (!copy)
+		return -ENOMEM;
+
+	ret = xbc_init(buf, &msg, &pos);
+	if (ret < 0)
+		show_xbc_error(copy, msg, pos);
+	free(copy);
+
+	return ret;
+}
+
+static int show_xbc(const char *path, bool list)
+{
 	int ret, fd;
 	char *buf = NULL;
+	struct stat st;
+
+	ret = stat(path, &st);
+	if (ret < 0) {
+		pr_err("Failed to stat %s: %d\n", path, -errno);
+		return -errno;
+	}
 
 	fd = open(path, O_RDONLY);
 	if (fd < 0) {

@@ -207,20 +273,33 @@ int show_xbc(const char *path)
 	}
 
 	ret = load_xbc_from_initrd(fd, &buf);
+	close(fd);
 	if (ret < 0) {
 		pr_err("Failed to load a boot config from initrd: %d\n", ret);
 		goto out;
 	}
-	xbc_show_compact_tree();
+	/* Assume a bootconfig file if it is enough small */
+	if (ret == 0 && st.st_size <= XBC_DATA_MAX) {
+		ret = load_xbc_file(path, &buf);
+		if (ret < 0) {
+			pr_err("Failed to load a boot config: %d\n", ret);
+			goto out;
+		}
+		if (init_xbc_with_error(buf, ret) < 0)
+			goto out;
+	}
+	if (list)
+		xbc_show_list();
+	else
+		xbc_show_compact_tree();
 	ret = 0;
 out:
-	close(fd);
 	free(buf);
 
 	return ret;
 }
 
-int delete_xbc(const char *path)
+static int delete_xbc(const char *path)
 {
 	struct stat stat;
 	int ret = 0, fd, size;

@@ -251,28 +330,7 @@ int delete_xbc(const char *path)
 	return ret;
 }
 
-static void show_xbc_error(const char *data, const char *msg, int pos)
-{
-	int lin = 1, col, i;
-
-	if (pos < 0) {
-		pr_err("Error: %s.\n", msg);
-		return;
-	}
-
-	/* Note that pos starts from 0 but lin and col should start from 1. */
-	col = pos + 1;
-	for (i = 0; i < pos; i++) {
-		if (data[i] == '\n') {
-			lin++;
-			col = pos - i;
-		}
-	}
-	pr_err("Parse Error: %s at %d:%d\n", msg, lin, col);
-
-}
-
-int apply_xbc(const char *path, const char *xbc_path)
+static int apply_xbc(const char *path, const char *xbc_path)
 {
 	u32 size, csum;
 	char *buf, *data;

@@ -349,14 +407,16 @@ int apply_xbc(const char *path, const char *xbc_path)
 	return ret;
 }
 
-int usage(void)
+static int usage(void)
 {
 	printf("Usage: bootconfig [OPTIONS] <INITRD>\n"
 		"Or bootconfig <CONFIG>\n"
 		" Apply, delete or show boot config to initrd.\n"
 		" Options:\n"
 		"  -a <config>: Apply boot config to initrd\n"
-		"  -d : Delete boot config file from initrd\n\n"
-		" If no option is given, show current applied boot config.\n");
+		"  -d : Delete boot config file from initrd\n"
+		"  -l : list boot config in initrd or file\n\n"
+		" If no option is given, show the bootconfig in the given file.\n");
 	return -1;
 }

@@ -364,10 +424,10 @@ int main(int argc, char **argv)
 {
 	char *path = NULL;
 	char *apply = NULL;
-	bool delete = false;
+	bool delete = false, list = false;
 	int opt;
 
-	while ((opt = getopt(argc, argv, "hda:")) != -1) {
+	while ((opt = getopt(argc, argv, "hda:l")) != -1) {
 		switch (opt) {
 		case 'd':
 			delete = true;

@@ -375,14 +435,17 @@ int main(int argc, char **argv)
 		case 'a':
 			apply = optarg;
 			break;
+		case 'l':
+			list = true;
+			break;
 		case 'h':
 		default:
 			return usage();
 		}
 	}
 
-	if (apply && delete) {
-		pr_err("Error: You can not specify both -a and -d at once.\n");
+	if ((apply && delete) || (delete && list) || (apply && list)) {
+		pr_err("Error: You can give one of -a, -d or -l at once.\n");
 		return usage();
 	}

@@ -398,5 +461,5 @@ int main(int argc, char **argv)
 	else if (delete)
 		return delete_xbc(path);
 
-	return show_xbc(path);
+	return show_xbc(path, list);
 }
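The new -l option is mutually exclusive with -a and -d, which is what the widened `if` condition enforces. A toy shell re-creation of that argument check (the function name is hypothetical, not part of the bootconfig tool):

```shell
#!/bin/sh
# Toy sketch of bootconfig's option exclusivity: at most one of
# apply (-a), delete (-d) and list (-l) may be chosen at once.
check_modes() { # apply delete list  (each "1" or empty)
	n=0
	for m in "$1" "$2" "$3"; do
		[ -n "$m" ] && n=$((n + 1))
	done
	if [ $n -gt 1 ]; then
		echo "error: give only one of -a, -d or -l"
		return 1
	fi
	echo ok
}

check_modes ""  ""  "1"    # list only: accepted
check_modes "1" ""  "1"    # apply + list: rejected
```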

tools/bootconfig/scripts/bconf2ftrace.sh (new executable file)
@@ -0,0 +1,199 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0-only

usage() {
	echo "Ftrace boottime trace test tool"
	echo "Usage: $0 [--apply|--init] [--debug] BOOTCONFIG-FILE"
	echo "  --apply: Test actual apply to tracefs (need sudo)"
	echo "  --init: Initialize ftrace before applying (imply --apply)"
	exit 1
}

[ $# -eq 0 ] && usage

BCONF=
DEBUG=
APPLY=
INIT=
while [ x"$1" != x ]; do
	case "$1" in
	"--debug")
		DEBUG=$1;;
	"--apply")
		APPLY=$1;;
	"--init")
		APPLY=$1
		INIT=$1;;
	*)
		[ ! -f $1 ] && usage
		BCONF=$1;;
	esac
	shift 1
done

if [ x"$APPLY" != x ]; then
	if [ `id -u` -ne 0 ]; then
		echo "This must be run by root user. Try sudo." 1>&2
		exec sudo $0 $DEBUG $APPLY $BCONF
	fi
fi

run_cmd() { # command
	echo "$*"
	if [ x"$APPLY" != x ]; then # apply command
		eval $*
	fi
}

if [ x"$DEBUG" != x ]; then
	set -x
fi

TRACEFS=`grep -m 1 -w tracefs /proc/mounts | cut -f 2 -d " "`
if [ -z "$TRACEFS" ]; then
	if ! grep -wq debugfs /proc/mounts; then
		echo "Error: No tracefs/debugfs was mounted." 1>&2
		exit 1
	fi
	TRACEFS=`grep -m 1 -w debugfs /proc/mounts | cut -f 2 -d " "`/tracing
	if [ ! -d $TRACEFS ]; then
		echo "Error: ftrace is not enabled on this kernel." 1>&2
		exit 1
	fi
fi

if [ x"$INIT" != x ]; then
	. `dirname $0`/ftrace.sh
	(cd $TRACEFS; initialize_ftrace)
fi

. `dirname $0`/xbc.sh

######## main #########
set -e

xbc_init $BCONF

set_value_of() { # key file
	if xbc_has_key $1; then
		val=`xbc_get_val $1 1`
		run_cmd "echo '$val' >> $2"
	fi
}

set_array_of() { # key file
	if xbc_has_key $1; then
		xbc_get_val $1 | while read line; do
			run_cmd "echo '$line' >> $2"
		done
	fi
}

compose_synth() { # event_name branch
	echo -n "$1 "
	xbc_get_val $2 | while read field; do echo -n "$field; "; done
}

setup_event() { # prefix group event [instance]
	branch=$1.$2.$3
	if [ "$4" ]; then
		eventdir="$TRACEFS/instances/$4/events/$2/$3"
	else
		eventdir="$TRACEFS/events/$2/$3"
	fi
	case $2 in
	kprobes)
		xbc_get_val ${branch}.probes | while read line; do
			run_cmd "echo 'p:kprobes/$3 $line' >> $TRACEFS/kprobe_events"
		done
		;;
	synthetic)
		run_cmd "echo '`compose_synth $3 ${branch}.fields`' >> $TRACEFS/synthetic_events"
		;;
	esac

	set_value_of ${branch}.filter ${eventdir}/filter
	set_array_of ${branch}.actions ${eventdir}/trigger

	if xbc_has_key ${branch}.enable; then
		run_cmd "echo 1 > ${eventdir}/enable"
	fi
}

setup_events() { # prefix("ftrace" or "ftrace.instance.INSTANCE") [instance]
	prefix="${1}.event"
	if xbc_has_branch ${1}.event; then
		for grpev in `xbc_subkeys ${1}.event 2`; do
			setup_event $prefix ${grpev%.*} ${grpev#*.} $2
		done
	fi
}

size2kb() { # size[KB|MB]
	case $1 in
	*KB)
		echo ${1%KB};;
	*MB)
		expr ${1%MB} \* 1024;;
	*)
		expr $1 / 1024 ;;
	esac
}

setup_instance() { # [instance]
	if [ "$1" ]; then
		instance="ftrace.instance.${1}"
		instancedir=$TRACEFS/instances/$1
	else
		instance="ftrace"
		instancedir=$TRACEFS
	fi

	set_array_of ${instance}.options ${instancedir}/trace_options
	set_value_of ${instance}.trace_clock ${instancedir}/trace_clock
	set_value_of ${instance}.cpumask ${instancedir}/tracing_cpumask
	set_value_of ${instance}.tracer ${instancedir}/current_tracer
	set_array_of ${instance}.ftrace.filters \
		${instancedir}/set_ftrace_filter
	set_array_of ${instance}.ftrace.notrace \
		${instancedir}/set_ftrace_notrace

	if xbc_has_key ${instance}.alloc_snapshot; then
		run_cmd "echo 1 > ${instancedir}/snapshot"
	fi

	if xbc_has_key ${instance}.buffer_size; then
		size=`xbc_get_val ${instance}.buffer_size 1`
		size=`eval size2kb $size`
		run_cmd "echo $size >> ${instancedir}/buffer_size_kb"
	fi

	setup_events ${instance} $1
	set_array_of ${instance}.events ${instancedir}/set_event
}

# ftrace global configs (kernel.*)
if xbc_has_key "kernel.dump_on_oops"; then
	dump_mode=`xbc_get_val "kernel.dump_on_oops" 1`
	[ "$dump_mode" ] && dump_mode=`eval echo $dump_mode` || dump_mode=1
	run_cmd "echo \"$dump_mode\" > /proc/sys/kernel/ftrace_dump_on_oops"
fi

set_value_of kernel.fgraph_max_depth $TRACEFS/max_graph_depth
set_array_of kernel.fgraph_filters $TRACEFS/set_graph_function
set_array_of kernel.fgraph_notraces $TRACEFS/set_graph_notrace

# Per-instance/per-event configs
if ! xbc_has_branch "ftrace" ; then
	exit 0
fi

setup_instance # root instance

if xbc_has_branch "ftrace.instance"; then
	for i in `xbc_subkeys "ftrace.instance" 1`; do
		run_cmd "mkdir -p $TRACEFS/instances/$i"
		setup_instance $i
	done
fi
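size2kb in the script above normalizes buffer_size values to kilobytes before writing them to buffer_size_kb: a KB suffix passes through, MB is scaled up, and a bare number is treated as bytes. Here is the same function extracted so it can be exercised standalone:

```shell
#!/bin/sh
# size2kb as used by bconf2ftrace.sh: KB passes through, MB is
# multiplied up, and a suffix-less number is treated as bytes.
size2kb() { # size[KB|MB]
	case $1 in
	*KB)
		echo ${1%KB};;
	*MB)
		expr ${1%MB} \* 1024;;
	*)
		expr $1 / 1024 ;;
	esac
}

size2kb 512KB   # 512
size2kb 2MB     # 2048
size2kb 8192    # 8
```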
tools/bootconfig/scripts/ftrace.sh (new file)
@@ -0,0 +1,109 @@
# SPDX-License-Identifier: GPL-2.0-only

clear_trace() { # reset trace output
	echo > trace
}

disable_tracing() { # stop trace recording
	echo 0 > tracing_on
}

enable_tracing() { # start trace recording
	echo 1 > tracing_on
}

reset_tracer() { # reset the current tracer
	echo nop > current_tracer
}

reset_trigger_file() {
	# remove action triggers first
	grep -H ':on[^:]*(' $@ |
	while read line; do
		cmd=`echo $line | cut -f2- -d: | cut -f1 -d"["`
		file=`echo $line | cut -f1 -d:`
		echo "!$cmd" >> $file
	done
	grep -Hv ^# $@ |
	while read line; do
		cmd=`echo $line | cut -f2- -d: | cut -f1 -d"["`
		file=`echo $line | cut -f1 -d:`
		echo "!$cmd" > $file
	done
}

reset_trigger() { # reset all current setting triggers
	if [ -d events/synthetic ]; then
		reset_trigger_file events/synthetic/*/trigger
	fi
	reset_trigger_file events/*/*/trigger
}

reset_events_filter() { # reset all current setting filters
	grep -v ^none events/*/*/filter |
	while read line; do
		echo 0 > `echo $line | cut -f1 -d:`
	done
}

reset_ftrace_filter() { # reset all triggers in set_ftrace_filter
	if [ ! -f set_ftrace_filter ]; then
		return 0
	fi
	echo > set_ftrace_filter
	grep -v '^#' set_ftrace_filter | while read t; do
		tr=`echo $t | cut -d: -f2`
		if [ "$tr" = "" ]; then
			continue
		fi
		if ! grep -q "$t" set_ftrace_filter; then
			continue;
		fi
		name=`echo $t | cut -d: -f1 | cut -d' ' -f1`
		if [ $tr = "enable_event" -o $tr = "disable_event" ]; then
			tr=`echo $t | cut -d: -f2-4`
			limit=`echo $t | cut -d: -f5`
		else
			tr=`echo $t | cut -d: -f2`
			limit=`echo $t | cut -d: -f3`
		fi
		if [ "$limit" != "unlimited" ]; then
			tr="$tr:$limit"
		fi
		echo "!$name:$tr" > set_ftrace_filter
	done
}

disable_events() {
	echo 0 > events/enable
}

clear_synthetic_events() { # reset all current synthetic events
	grep -v ^# synthetic_events |
	while read line; do
		echo "!$line" >> synthetic_events
	done
}

initialize_ftrace() { # Reset ftrace to initial-state
	# As the initial state, ftrace will be set to nop tracer,
	# no events, no triggers, no filters, no function filters,
	# no probes, and tracing on.
	disable_tracing
	reset_tracer
	reset_trigger
	reset_events_filter
	reset_ftrace_filter
	disable_events
	[ -f set_event_pid ] && echo > set_event_pid
	[ -f set_ftrace_pid ] && echo > set_ftrace_pid
	[ -f set_ftrace_notrace ] && echo > set_ftrace_notrace
	[ -f set_graph_function ] && echo | tee set_graph_*
	[ -f stack_trace_filter ] && echo > stack_trace_filter
	[ -f kprobe_events ] && echo > kprobe_events
	[ -f uprobe_events ] && echo > uprobe_events
	[ -f synthetic_events ] && echo > synthetic_events
	[ -f snapshot ] && echo 0 > snapshot
	clear_trace
	enable_tracing
}
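The reset_trigger_file helper above parses each `grep -H` hit of the form "file:trigger [state]" into the trigger file and the command, stripping the trailing bracketed state, then writes "!command" back to disable it. A minimal demonstration of that parsing on a fabricated line (the path and trigger text are made up for the demo):

```shell
#!/bin/sh
# How reset_trigger_file derives the "!command" it writes back:
# grep -H output is "file:trigger", and any trailing " [state]"
# annotation is cut away at the first "[".
line='events/sched/sched_switch/trigger:traceon:count=5 [active]'
cmd=`echo "$line" | cut -f2- -d: | cut -f1 -d"["`
file=`echo "$line" | cut -f1 -d:`
echo "target file: $file"
echo "undo command: !$cmd"
```

Note the command keeps a trailing space after "[" is cut; the kernel's trigger parser tolerates that when the "!cmd" is written back.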
tools/bootconfig/scripts/ftrace2bconf.sh (new executable file)
@@ -0,0 +1,244 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0-only

usage() {
	echo "Dump boot-time tracing bootconfig from ftrace"
	echo "Usage: $0 [--debug] [ > BOOTCONFIG-FILE]"
	exit 1
}

DEBUG=
while [ x"$1" != x ]; do
	case "$1" in
	"--debug")
		DEBUG=$1;;
	-*)
		usage
		;;
	esac
	shift 1
done

if [ x"$DEBUG" != x ]; then
	set -x
fi

TRACEFS=`grep -m 1 -w tracefs /proc/mounts | cut -f 2 -d " "`
if [ -z "$TRACEFS" ]; then
	if ! grep -wq debugfs /proc/mounts; then
		echo "Error: No tracefs/debugfs was mounted."
		exit 1
	fi
	TRACEFS=`grep -m 1 -w debugfs /proc/mounts | cut -f 2 -d " "`/tracing
	if [ ! -d $TRACEFS ]; then
		echo "Error: ftrace is not enabled on this kernel." 1>&2
		exit 1
	fi
fi

######## main #########

set -e

emit_kv() { # key =|+= value
	echo "$@"
}

global_options() {
	val=`cat $TRACEFS/max_graph_depth`
	[ $val != 0 ] && emit_kv kernel.fgraph_max_depth = $val
	if grep -qv "^#" $TRACEFS/set_graph_function $TRACEFS/set_graph_notrace ; then
		cat 1>&2 << EOF
# WARN: kernel.fgraph_filters and kernel.fgraph_notrace are not supported, since the wild card expression was expanded and lost from memory.
EOF
	fi
}

kprobe_event_options() {
	cat $TRACEFS/kprobe_events | while read p args; do
		case $p in
		r*)
		cat 1>&2 << EOF
# WARN: A return probe found but it is not supported by bootconfig. Skip it.
EOF
			continue;;
		esac
		p=${p#*:}
		event=${p#*/}
		group=${p%/*}
		if [ $group != "kprobes" ]; then
			cat 1>&2 << EOF
# WARN: kprobes group name $group is changed to "kprobes" for bootconfig.
EOF
		fi
		emit_kv $PREFIX.event.kprobes.$event.probes += $args
	done
}

synth_event_options() {
	cat $TRACEFS/synthetic_events | while read event fields; do
		emit_kv $PREFIX.event.synthetic.$event.fields = `echo $fields | sed "s/;/,/g"`
	done
}

# Variables resolver
DEFINED_VARS=
UNRESOLVED_EVENTS=

defined_vars() { # event-dir
	grep "^hist" $1/trigger | grep -o ':[a-zA-Z0-9]*='
}
referred_vars() {
	grep "^hist" $1/trigger | grep -o '$[a-zA-Z0-9]*'
}

per_event_options() { # event-dir
	evdir=$1
	# Check the special event which has no filter and no trigger
	[ ! -f $evdir/filter ] && return

	if grep -q "^hist:" $evdir/trigger; then
		# hist action can refer the undefined variables
		__vars=`defined_vars $evdir`
		for v in `referred_vars $evdir`; do
			if echo $DEFINED_VARS $__vars | grep -vqw ${v#$}; then
				# $v is not defined yet, defer it
				UNRESOLVED_EVENTS="$UNRESOLVED_EVENTS $evdir"
				return;
			fi
		done
		DEFINED_VARS="$DEFINED_VARS "`defined_vars $evdir`
	fi
	grep -v "^#" $evdir/trigger | while read action active; do
		emit_kv $PREFIX.event.$group.$event.actions += \'$action\'
	done

	# enable is not checked; this is done by set_event in the instance.
	val=`cat $evdir/filter`
	if [ "$val" != "none" ]; then
		emit_kv $PREFIX.event.$group.$event.filter = "$val"
	fi
}

retry_unresolved() {
	unresolved=$UNRESOLVED_EVENTS
	UNRESOLVED_EVENTS=
	for evdir in $unresolved; do
		event=${evdir##*/}
		group=${evdir%/*}; group=${group##*/}
		per_event_options $evdir
	done
}

event_options() {
	# PREFIX and INSTANCE must be set
	if [ $PREFIX = "ftrace" ]; then
		# define the dynamic events
		kprobe_event_options
		synth_event_options
	fi
	for group in `ls $INSTANCE/events/` ; do
		[ ! -d $INSTANCE/events/$group ] && continue
		for event in `ls $INSTANCE/events/$group/` ;do
			[ ! -d $INSTANCE/events/$group/$event ] && continue
			per_event_options $INSTANCE/events/$group/$event
		done
	done
	retry=0
	while [ $retry -lt 3 ]; do
		retry_unresolved
		retry=$((retry + 1))
	done
	if [ "$UNRESOLVED_EVENTS" ]; then
		cat 1>&2 << EOF
! ERROR: hist triggers in $UNRESOLVED_EVENTS use some undefined variables.
EOF
	fi
}

is_default_trace_option() { # option
grep -qw $1 << EOF
print-parent
nosym-offset
nosym-addr
noverbose
noraw
nohex
nobin
noblock
trace_printk
annotate
nouserstacktrace
nosym-userobj
noprintk-msg-only
context-info
nolatency-format
record-cmd
norecord-tgid
overwrite
nodisable_on_free
irq-info
markers
noevent-fork
nopause-on-trace
function-trace
nofunction-fork
nodisplay-graph
nostacktrace
notest_nop_accept
notest_nop_refuse
EOF
}

instance_options() { # [instance-name]
	if [ $# -eq 0 ]; then
		PREFIX="ftrace"
		INSTANCE=$TRACEFS
	else
		PREFIX="ftrace.instance.$1"
		INSTANCE=$TRACEFS/instances/$1
	fi
	val=
	for i in `cat $INSTANCE/trace_options`; do
		is_default_trace_option $i && continue
		val="$val, $i"
	done
	[ "$val" ] && emit_kv $PREFIX.options = "${val#,}"
	val="local"
	for i in `cat $INSTANCE/trace_clock` ; do
		[ "${i#*]}" ] && continue
		i=${i%]}; val=${i#[}
	done
	[ $val != "local" ] && emit_kv $PREFIX.trace_clock = $val
	val=`cat $INSTANCE/buffer_size_kb`
	if echo $val | grep -vq "expanded" ; then
		emit_kv $PREFIX.buffer_size = $val"KB"
	fi
	if grep -q "is allocated" $INSTANCE/snapshot ; then
		emit_kv $PREFIX.alloc_snapshot
	fi
	val=`cat $INSTANCE/tracing_cpumask`
	if [ `echo $val | sed -e s/f//g`x != x ]; then
		emit_kv $PREFIX.cpumask = $val
	fi

	val=
	for i in `cat $INSTANCE/set_event`; do
		val="$val, $i"
	done
	[ "$val" ] && emit_kv $PREFIX.events = "${val#,}"
	val=`cat $INSTANCE/current_tracer`
	[ $val != nop ] && emit_kv $PREFIX.tracer = $val
	if grep -qv "^#" $INSTANCE/set_ftrace_filter $INSTANCE/set_ftrace_notrace; then
		cat 1>&2 << EOF
# WARN: kernel.ftrace.filters and kernel.ftrace.notrace are not supported, since the wild card expression was expanded and lost from memory.
EOF
	fi
	event_options
}

global_options
instance_options
for i in `ls $TRACEFS/instances` ; do
	instance_options $i
done
tools/bootconfig/scripts/xbc.sh (new file)
@@ -0,0 +1,56 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0-only

# bootconfig utility functions

XBC_TMPFILE=
XBC_BASEDIR=`dirname $0`
BOOTCONFIG=${BOOTCONFIG:=$XBC_BASEDIR/../bootconfig}
if [ ! -x "$BOOTCONFIG" ]; then
	BOOTCONFIG=`which bootconfig`
	if [ -z "$BOOTCONFIG" ]; then
		echo "Erorr: bootconfig command is not found" 1>&2
		exit 1
	fi
fi

xbc_cleanup() {
	if [ "$XBC_TMPFILE" ]; then
		rm -f "$XBC_TMPFILE"
	fi
}

xbc_init() { # bootconfig-file
	xbc_cleanup
	XBC_TMPFILE=`mktemp bconf-XXXX`
	trap xbc_cleanup EXIT TERM

	$BOOTCONFIG -l $1 > $XBC_TMPFILE || exit 1
}

nr_args() { # args
	echo $#
}

xbc_get_val() { # key [maxnum]
	if [ "$2" ]; then
		MAXOPT="-L $2"
	fi
	grep "^$1 =" $XBC_TMPFILE | cut -d= -f2- | \
		sed -e 's/", /" /g' -e "s/',/' /g" | \
		xargs $MAXOPT -n 1 echo
}

xbc_has_key() { # key
	grep -q "^$1 =" $XBC_TMPFILE
}

xbc_has_branch() { # prefix-key
	grep -q "^$1" $XBC_TMPFILE
}

xbc_subkeys() { # prefix-key depth
	__keys=`echo $1 | sed "s/\./ /g"`
	__s=`nr_args $__keys`
	grep "^$1" $XBC_TMPFILE | cut -d= -f1 | cut -d. -f$((__s + 1))-$((__s + $2)) | uniq
}
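All the xbc.sh helpers work on the flat `bootconfig -l` listing, which is one "dotted.key = value" pair per line, so the lookups reduce to grep/cut over a text file. A sketch of that idea against a fabricated listing (the file contents and simplified helper names are invented for the demo; the real helpers also split multi-value arrays):

```shell
#!/bin/sh
# The -l output is one "dotted.key = value" pair per line; key lookup
# and value extraction are plain grep/cut over that listing.
TMPF=/tmp/xbc_demo_list
cat > $TMPF << 'EOF'
ftrace.event.kprobes.myevent.probes = "vfs_read $arg1"
ftrace.instance.foo.tracer = "function"
EOF

has_key() { grep -q "^$1 =" $TMPF; }
get_val() { grep "^$1 =" $TMPF | cut -d= -f2- | sed 's/^ *//'; }

has_key ftrace.instance.foo.tracer && echo found
get_val ftrace.instance.foo.tracer
```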
@@ -97,4 +97,10 @@ check_error 'p:kprobes/testevent kernel_clone ^abcd=\"foo"' # DIFF_ARG_TYPE
 check_error '^p:kprobes/testevent kernel_clone abcd=\1' # SAME_PROBE
 fi
 
+# %return suffix errors
+if grep -q "place (kretprobe): .*%return.*" README; then
+check_error 'p vfs_read^%hoge' # BAD_ADDR_SUFFIX
+check_error 'p ^vfs_read+10%return' # BAD_RETPROBE
+fi
+
 exit 0

(new test file)
@@ -0,0 +1,21 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Kretprobe %%return suffix test
# requires: kprobe_events '<symbol>[+<offset>]%return':README

# Test for kretprobe by "r"
echo 'r:myprobeaccept vfs_read' > kprobe_events
RESULT1=`cat kprobe_events`

# Test for kretprobe by "%return"
echo 'p:myprobeaccept vfs_read%return' > kprobe_events
RESULT2=`cat kprobe_events`

if [ "$RESULT1" != "$RESULT2" ]; then
	echo "Error: %return suffix didn't make a return probe."
	echo "r-command: $RESULT1"
	echo "%return: $RESULT2"
	exit_fail
fi

echo > kprobe_events
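The test above asserts that "p SYMBOL%return" and the classic "r SYMBOL" form define the same return probe. A toy string-level illustration of that equivalence (the function below is hypothetical demo code, not part of the kernel or the selftests):

```shell
#!/bin/sh
# Toy illustration: a "%return" suffix on a p-command names the same
# probe as the classic r-command form, so normalizing one spec to the
# other yields identical strings.
normalize_probe() { # "p SYMBOL%return" -> "r SYMBOL"
	set -- $1
	suffix='%return'
	case $2 in
	*%return)
		echo "r ${2%$suffix}";;
	*)
		echo "$1 $2";;
	esac
}

normalize_probe 'p vfs_read%return'   # r vfs_read
normalize_probe 'r vfs_read'          # r vfs_read
```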
@@ -17,4 +17,10 @@ check_error 'p /bin/sh:10(10)^a' # BAD_REFCNT_SUFFIX
 check_error 'p /bin/sh:10 ^@+ab' # BAD_FILE_OFFS
 check_error 'p /bin/sh:10 ^@symbol' # SYM_ON_UPROBE
 
+# %return suffix error
+if grep -q "place (uprobe): .*%return.*" README; then
+check_error 'p /bin/sh:10^%hoge' # BAD_ADDR_SUFFIX
+check_error 'p /bin/sh:10(10)^%return' # BAD_REFCNT_SUFFIX
+fi
+
 exit 0

@@ -25,12 +25,12 @@ echo 'wakeup_latency u64 lat pid_t pid' >> synthetic_events
 echo 'hist:keys=pid:ts1=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger
 echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts1:onmatch(sched.sched_wakeup).wakeup_latency($wakeup_lat,next_pid) if next_comm=="ping"' > events/sched/sched_switch/trigger
 
-echo 'waking+wakeup_latency u64 lat; pid_t pid' >> synthetic_events
-echo 'hist:keys=pid,lat:sort=pid,lat:ww_lat=$waking_lat+$wakeup_lat:onmatch(synthetic.wakeup_latency).waking+wakeup_latency($ww_lat,pid)' >> events/synthetic/wakeup_latency/trigger
-echo 'hist:keys=pid,lat:sort=pid,lat' >> events/synthetic/waking+wakeup_latency/trigger
+echo 'waking_plus_wakeup_latency u64 lat; pid_t pid' >> synthetic_events
+echo 'hist:keys=pid,lat:sort=pid,lat:ww_lat=$waking_lat+$wakeup_lat:onmatch(synthetic.wakeup_latency).waking_plus_wakeup_latency($ww_lat,pid)' >> events/synthetic/wakeup_latency/trigger
+echo 'hist:keys=pid,lat:sort=pid,lat' >> events/synthetic/waking_plus_wakeup_latency/trigger
 
 ping $LOCALHOST -c 3
-if ! grep -q "pid:" events/synthetic/waking+wakeup_latency/hist; then
+if ! grep -q "pid:" events/synthetic/waking_plus_wakeup_latency/hist; then
 	fail "Failed to create combined histogram"
 fi

(new test file)
@@ -0,0 +1,31 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test inter-event histogram trigger trace action with dynamic string param
# requires: set_event synthetic_events events/sched/sched_process_exec/hist "char name[]' >> synthetic_events":README

fail() { #msg
	echo $1
	exit_fail
}

echo "Test create synthetic event"

echo 'ping_test_latency u64 lat; char filename[]' > synthetic_events
if [ ! -d events/synthetic/ping_test_latency ]; then
	fail "Failed to create ping_test_latency synthetic event"
fi

echo "Test create histogram for synthetic event using trace action and dynamic strings"
echo "Test histogram dynamic string variables,simple expression support and trace action"

echo 'hist:key=pid:filenamevar=filename:ts0=common_timestamp.usecs' > events/sched/sched_process_exec/trigger
echo 'hist:key=pid:lat=common_timestamp.usecs-$ts0:onmatch(sched.sched_process_exec).ping_test_latency($lat,$filenamevar) if comm == "ping"' > events/sched/sched_process_exit/trigger
echo 'hist:keys=filename,lat:sort=filename,lat' > events/synthetic/ping_test_latency/trigger

ping $LOCALHOST -c 5

if ! grep -q "ping" events/synthetic/ping_test_latency/hist; then
	fail "Failed to create dynamic string trace action inter-event histogram"
fi

exit 0
(new test file)
@@ -0,0 +1,19 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test synthetic_events syntax parser errors
# requires: synthetic_events error_log

check_error() { # command-with-error-pos-by-^
	ftrace_errlog_check 'synthetic_events' "$1" 'synthetic_events'
}

check_error 'myevent ^chr arg' # INVALID_TYPE
check_error 'myevent ^char str[];; int v' # INVALID_TYPE
check_error 'myevent char ^str]; int v' # INVALID_NAME
check_error 'myevent char ^str;[]' # INVALID_NAME
check_error 'myevent ^char str[; int v' # INVALID_TYPE
check_error '^mye;vent char str[]' # BAD_NAME
check_error 'myevent char str[]; ^int' # INVALID_FIELD
check_error '^myevent' # INCOMPLETE_CMD

exit 0