forked from luck/tmp_suning_uos_patched
5c36142574
Prefer __always_inline for fast-path functions that are called outside of
user_access_save, to avoid generating UACCESS warnings when optimizing for
size (CC_OPTIMIZE_FOR_SIZE). It will also avoid future surprises with
compiler versions that change the inlining heuristic even when optimizing
for performance.

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org> # build-tested
Signed-off-by: Marco Elver <elver@google.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/58708908-84a0-0a81-a836-ad97e33dbb62@infradead.org
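For background, here is a minimal sketch of the situation the message describes, assuming a caller that runs with user accesses enabled (bracketed here by user_access_save()/user_access_restore() from <linux/uaccess.h>); the example_* names are hypothetical and not part of this patch:

#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical stand-in for a watched volatile global (cf. jiffies below). */
static volatile int example_watched_var;

/*
 * With plain "static inline", CC_OPTIMIZE_FOR_SIZE may keep this helper out
 * of line; the resulting call instruction is what objtool flags as a UACCESS
 * warning when the caller runs with user accesses enabled.  "__always_inline"
 * guarantees the body is folded into the caller instead.
 */
static __always_inline bool example_is_watched(const volatile void *ptr)
{
	return ptr == &example_watched_var;
}

static bool example_check(const volatile void *ptr)
{
	unsigned long ua_flags = user_access_save();
	bool watched;

	/* No out-of-line call is emitted here thanks to __always_inline. */
	watched = example_is_watched(ptr);

	user_access_restore(ua_flags);
	return watched;
}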
28 lines
897 B
C
/* SPDX-License-Identifier: GPL-2.0 */

#ifndef _KERNEL_KCSAN_ATOMIC_H
#define _KERNEL_KCSAN_ATOMIC_H

#include <linux/jiffies.h>

/*
 * Helper that returns true if access to @ptr should be considered an atomic
 * access, even though it is not explicitly atomic.
 *
 * List all volatile globals that have been observed in races, to suppress
 * data race reports between accesses to these variables.
 *
 * For now, we assume that volatile accesses of globals are as strong as atomic
 * accesses (READ_ONCE, WRITE_ONCE cast to volatile). The situation is still not
 * entirely clear, as on some architectures (Alpha) READ_ONCE/WRITE_ONCE do more
 * than cast to volatile. Eventually, we hope to be able to remove this
 * function.
 */
static __always_inline bool kcsan_is_atomic(const volatile void *ptr)
{
	/* only jiffies for now */
	return ptr == &jiffies;
}

#endif /* _KERNEL_KCSAN_ATOMIC_H */
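For context, a small hedged sketch of how a KCSAN-style runtime check might consult kcsan_is_atomic() before deciding whether to report a race; is_atomic_example() is a hypothetical stand-in, not the actual decision logic in kernel/kcsan/core.c:

#include "atomic.h"

/*
 * Hypothetical caller: treat the access as atomic if the instrumentation
 * already marked it as such, or if it targets one of the volatile globals
 * listed in kcsan_is_atomic() (currently only jiffies).  Races involving
 * such accesses are then not reported.
 */
static __always_inline bool is_atomic_example(const volatile void *ptr,
					      bool marked_atomic)
{
	return marked_atomic || kcsan_is_atomic(ptr);
}

Keeping such helpers __always_inline follows the commit's reasoning: they are folded into their callers, so no out-of-line call can appear in a region objtool checks for UACCESS violations.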