kernel_optimize_test/kernel/kcsan
Marco Elver 703b321501 kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask)
This introduces ASSERT_EXCLUSIVE_BITS(var, mask), which will cause
KCSAN to assume that the following access is safe w.r.t. data races
(however, please see the docbook comment for the disclaimer).

For more context on why this was considered necessary, please see:

  http://lkml.kernel.org/r/1580995070-25139-1-git-send-email-cai@lca.pw

In particular, before this patch, data races between reads (that use
@mask bits of an access that should not be modified concurrently) and
writes (that change ~@mask bits not used by the readers) would have been
annotated with "data_race()" (or "READ_ONCE()"). However, doing so would
hide real problems: we would no longer be able to detect harmful races
between reads of @mask bits and writes to @mask bits.
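
For illustration, here is a minimal sketch of that old pattern; the
flags word, mask, and function name are hypothetical (not taken from
the patch), and data_race() comes from <linux/compiler.h>:

  #include <linux/compiler.h>	/* data_race(), READ_ONCE() */

  /*
   * Hypothetical flags word: the bits in LOW_BITS_MASK must not change
   * concurrently, but other bits may be updated by other threads.
   */
  static unsigned long flags;
  #define LOW_BITS_MASK 0xffUL

  static unsigned long get_low_bits(void)
  {
          /*
           * data_race() makes KCSAN ignore *all* concurrent writes to
           * flags -- including harmful ones that change LOW_BITS_MASK
           * bits.
           */
          return data_race(flags) & LOW_BITS_MASK;
  }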

Therefore, by using ASSERT_EXCLUSIVE_BITS(var, mask), we accomplish:

  1. Avoid proliferation of specific macros at the call sites: by
     including a single mask in the argument list, we can use the same
     macro in a wide variety of call sites, regardless of which bits in
     a field each call site actually accesses, and how.

  2. The existing code does not need to be modified (although READ_ONCE()
     may still be advisable if we cannot prove that the data race is
     always safe).

  3. We catch bugs where the exclusive bits are modified concurrently
     (illustrated by the second sketch, after this list).

  4. We document properties of the current code.
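
As a hedged sketch of the new annotation (same hypothetical names as
in the sketch above; ASSERT_EXCLUSIVE_BITS() is declared in
<linux/kcsan-checks.h>), the reader instead asserts exclusivity of the
bits it depends on:

  #include <linux/kcsan-checks.h>	/* ASSERT_EXCLUSIVE_BITS() */

  static unsigned long get_low_bits(void)
  {
          /*
           * Assert that no concurrent writer modifies the
           * LOW_BITS_MASK bits: KCSAN stops reporting writers that
           * only change the ~LOW_BITS_MASK bits, but still flags a
           * concurrent write to the asserted bits (point 3).
           */
          ASSERT_EXCLUSIVE_BITS(flags, LOW_BITS_MASK);

          /*
           * READ_ONCE() kept, per point 2, since the ~LOW_BITS_MASK
           * bits may still change concurrently.
           */
          return READ_ONCE(flags) & LOW_BITS_MASK;
  }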

Acked-by: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Qian Cai <cai@lca.pw>
2020-03-21 09:44:14 +01:00
atomic.h kcsan: Prefer __always_inline for fast-path 2020-03-21 09:40:19 +01:00
core.c kcsan: Add kcsan_set_access_mask() support 2020-03-21 09:44:08 +01:00
debugfs.c kcsan: Introduce ASSERT_EXCLUSIVE_BITS(var, mask) 2020-03-21 09:44:14 +01:00
encoding.h kcsan: Prefer __always_inline for fast-path 2020-03-21 09:40:19 +01:00
kcsan.h kcsan: Add kcsan_set_access_mask() support 2020-03-21 09:44:08 +01:00
Makefile kcsan, ubsan: Make KCSAN+UBSAN work together 2020-01-07 07:47:23 -08:00
report.c kcsan: Add kcsan_set_access_mask() support 2020-03-21 09:44:08 +01:00
test.c kcsan: Fix 0-sized checks 2020-03-21 09:42:42 +01:00