/* SPDX-License-Identifier: GPL-2.0 */

/*
 * The Kernel Concurrency Sanitizer (KCSAN) infrastructure. For more info please
 * see Documentation/dev-tools/kcsan.rst.
 */

#ifndef _KERNEL_KCSAN_KCSAN_H
#define _KERNEL_KCSAN_KCSAN_H

#include <linux/kcsan.h>

/* The number of adjacent watchpoints to check. */
#define KCSAN_CHECK_ADJACENT 1
#define NUM_SLOTS (1 + 2*KCSAN_CHECK_ADJACENT)
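
/*
 * Example (illustrative only; check_slot() is a hypothetical helper): with
 * KCSAN_CHECK_ADJACENT == 1, a lookup covers the slot an address maps to plus
 * one slot on either side, i.e. NUM_SLOTS == 1 + 2*1 == 3 candidates:
 *
 *	for (i = -KCSAN_CHECK_ADJACENT; i <= KCSAN_CHECK_ADJACENT; ++i)
 *		check_slot(slot + i);
 */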

/*
 * Microsecond delays to stall execution after a watchpoint has been set up;
 * one value for task context and one for interrupt context.
 */
extern unsigned int kcsan_udelay_task;
extern unsigned int kcsan_udelay_interrupt;

/*
 * Globally enable and disable KCSAN.
 */
extern bool kcsan_enabled;

/*
 * Initialize debugfs file.
 */
void kcsan_debugfs_init(void);

enum kcsan_counter_id {
	/*
	 * Number of watchpoints currently in use.
	 */
	KCSAN_COUNTER_USED_WATCHPOINTS,

	/*
	 * Total number of watchpoints set up.
	 */
	KCSAN_COUNTER_SETUP_WATCHPOINTS,

	/*
	 * Total number of data races.
	 */
	KCSAN_COUNTER_DATA_RACES,

	/*
	 * Total number of ASSERT failures due to races. If the observed race is
	 * due to two conflicting ASSERT type accesses, then both will be
	 * counted.
	 */
	KCSAN_COUNTER_ASSERT_FAILURES,

	/*
	 * Number of times no watchpoints were available.
	 */
	KCSAN_COUNTER_NO_CAPACITY,

	/*
	 * A thread checking a watchpoint raced with another checking thread;
	 * only one will be reported.
	 */
	KCSAN_COUNTER_REPORT_RACES,

	/*
	 * Observed data value change, but writer thread unknown.
	 */
	KCSAN_COUNTER_RACES_UNKNOWN_ORIGIN,

	/*
	 * The access cannot be encoded to a valid watchpoint.
	 */
	KCSAN_COUNTER_UNENCODABLE_ACCESSES,

	/*
	 * Watchpoint encoding caused a watchpoint to fire on mismatching
	 * accesses.
	 */
	KCSAN_COUNTER_ENCODING_FALSE_POSITIVES,

	KCSAN_COUNTER_COUNT, /* number of counters */
};

/*
 * Increment/decrement counter with given id; avoid calling these in fast-path.
 */
extern void kcsan_counter_inc(enum kcsan_counter_id id);
extern void kcsan_counter_dec(enum kcsan_counter_id id);
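
/*
 * Example (illustrative only): slow-path code might update the counters around
 * a watchpoint's lifetime, e.g.:
 *
 *	kcsan_counter_inc(KCSAN_COUNTER_SETUP_WATCHPOINTS);
 *	kcsan_counter_inc(KCSAN_COUNTER_USED_WATCHPOINTS);
 *	...
 *	kcsan_counter_dec(KCSAN_COUNTER_USED_WATCHPOINTS);
 */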

/*
 * Returns true if data races in the function symbol that maps to func_addr
 * (offsets are ignored) should *not* be reported.
 */
extern bool kcsan_skip_report_debugfs(unsigned long func_addr);
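
/*
 * Example (illustrative only; report_ip is a hypothetical variable holding the
 * instruction pointer of the racing access): reporting code might bail out
 * early for filtered functions, e.g.:
 *
 *	if (kcsan_skip_report_debugfs(report_ip))
 *		return;
 */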

/*
 * Value-change states.
 */
enum kcsan_value_change {
	/*
	 * Did not observe a value-change, however, it is valid to report the
	 * race, depending on preferences.
	 */
	KCSAN_VALUE_CHANGE_MAYBE,

	/*
	 * Did not observe a value-change, and it is invalid to report the race.
	 */
	KCSAN_VALUE_CHANGE_FALSE,

	/*
	 * The value was observed to change, and the race should be reported.
	 */
	KCSAN_VALUE_CHANGE_TRUE,
};

enum kcsan_report_type {
	/*
	 * The thread that set up the watchpoint and briefly stalled was
	 * signalled that another thread triggered the watchpoint.
	 */
	KCSAN_REPORT_RACE_SIGNAL,

	/*
	 * A thread found and consumed a matching watchpoint.
	 */
	KCSAN_REPORT_CONSUMED_WATCHPOINT,

	/*
	 * No other thread was observed to race with the access, but the data
	 * value before and after the stall differs.
	 */
	KCSAN_REPORT_RACE_UNKNOWN_ORIGIN,
};

/*
 * Print a race report from thread that encountered the race.
 */
extern void kcsan_report(const volatile void *ptr, size_t size, int access_type,
			 enum kcsan_value_change value_change,
			 enum kcsan_report_type type, int watchpoint_idx);
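
/*
 * Example (illustrative only; actual arguments depend on the call site in
 * core.c): a thread that consumed a matching watchpoint could report the race
 * with the index of that watchpoint, e.g.:
 *
 *	kcsan_report(ptr, size, access_type, KCSAN_VALUE_CHANGE_MAYBE,
 *		     KCSAN_REPORT_CONSUMED_WATCHPOINT, watchpoint_idx);
 */
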
#endif /* _KERNEL_KCSAN_KCSAN_H */