locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()
[ Note, this is a Git cherry-pick of the following commit:
76ebbe78f7
("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()")
... for easier x86 PTI code testing and back-porting. ]
In preparation for the removal of lockless_dereference(), which is the
same as READ_ONCE() on all architectures other than Alpha, add an
implicit smp_read_barrier_depends() to READ_ONCE() so that it can be
used to head dependency chains on all architectures.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1508840570-22169-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit c2bc66082e
parent ab95477e7c
@@ -341,6 +341,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int size)
 		__read_once_size(&(x), __u.__c, sizeof(x));		\
 	else								\
 		__read_once_size_nocheck(&(x), __u.__c, sizeof(x));	\
+	smp_read_barrier_depends(); /* Enforce dependency ordering from x */ \
 	__u.__val;							\
 })
 #define READ_ONCE(x) __READ_ONCE(x, 1)
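For context on what "head dependency chains" means in practice, below is a minimal kernel-style sketch of the publish/consume pattern this change targets. The struct, the global pointer, and the function names are illustrative only and are not taken from this patch. Before this change, the consumer side needed lockless_dereference() (or an explicit smp_read_barrier_depends()) to order the dependent load on Alpha; with the barrier folded into READ_ONCE(), a plain READ_ONCE() is sufficient.

#include <linux/compiler.h>	/* READ_ONCE() */
#include <asm/barrier.h>	/* smp_store_release() */

/* Hypothetical types and names, for illustration only. */
struct foo {
	int a;
};

static struct foo *global_foo;

/* Publisher: initialise the object, then publish the pointer with
 * release semantics so the initialising store is visible before the
 * pointer becomes visible. */
void publish(struct foo *p)
{
	p->a = 42;
	smp_store_release(&global_foo, p);
}

/* Consumer: READ_ONCE() now implies smp_read_barrier_depends(), so the
 * data-dependent load of p->a is ordered after the pointer load even on
 * Alpha. Previously this would have used lockless_dereference(&global_foo). */
int consume(void)
{
	struct foo *p = READ_ONCE(global_foo);

	return p ? p->a : -1;
}

With READ_ONCE() heading the dependency chain itself, lockless_dereference() becomes redundant, which is the removal the commit message prepares for.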