MIPS: barrier: Remove loongson_llsc_mb()
The loongson_llsc_mb() macro is no longer used - instead barriers are
emitted as part of inline asm using the __SYNC() macro. Remove the
now-defunct loongson_llsc_mb() macro.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org
parent e84957e6ae
commit 7f56b12354
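For context, the commit message says barriers are now emitted as part of the inline asm itself via the __SYNC() macro from asm/sync.h. Below is a minimal sketch of that pattern, assuming the __SYNC(full, loongson3_war) form; the function name and operands are illustrative, not lines taken from this diff.

#include <asm/sync.h>

/*
 * Illustrative sketch only: the barrier string is generated inside the asm
 * by __SYNC(), so no separate loongson_llsc_mb() call is needed. The
 * function name and the __SYNC() arguments are assumptions.
 */
static inline void sketch_atomic_add(int i, int *v)
{
	int temp;

	__asm__ __volatile__(
	"	" __SYNC(full, loongson3_war) "	\n"	/* SYNC emitted inline, only where
							   the Loongson 3 workaround applies */
	"1:	ll	%0, %1			\n"	/* load-linked */
	"	addu	%0, %2			\n"
	"	sc	%0, %1			\n"	/* store-conditional */
	"	beqz	%0, 1b			\n"	/* retry if SC failed */
	: "=&r" (temp), "+m" (*v)
	: "Ir" (i));
}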
@@ -122,46 +122,6 @@ static inline void wmb(void)
 #define __smp_mb__before_atomic()	__smp_mb__before_llsc()
 #define __smp_mb__after_atomic()	smp_llsc_mb()
 
-/*
- * Some Loongson 3 CPUs have a bug wherein execution of a memory access (load,
- * store or prefetch) in between an LL & SC can cause the SC instruction to
- * erroneously succeed, breaking atomicity. Whilst it's unusual to write code
- * containing such sequences, this bug bites harder than we might otherwise
- * expect due to reordering & speculation:
- *
- * 1) A memory access appearing prior to the LL in program order may actually
- *    be executed after the LL - this is the reordering case.
- *
- *    In order to avoid this we need to place a memory barrier (ie. a SYNC
- *    instruction) prior to every LL instruction, in between it and any earlier
- *    memory access instructions.
- *
- *    This reordering case is fixed by 3A R2 CPUs, ie. 3A2000 models and later.
- *
- * 2) If a conditional branch exists between an LL & SC with a target outside
- *    of the LL-SC loop, for example an exit upon value mismatch in cmpxchg()
- *    or similar, then misprediction of the branch may allow speculative
- *    execution of memory accesses from outside of the LL-SC loop.
- *
- *    In order to avoid this we need a memory barrier (ie. a SYNC instruction)
- *    at each affected branch target, for which we also use loongson_llsc_mb()
- *    defined below.
- *
- *    This case affects all current Loongson 3 CPUs.
- *
- * The above described cases cause an error in the cache coherence protocol;
- * such that the Invalidate of a competing LL-SC goes 'missing' and SC
- * erroneously observes its core still has Exclusive state and lets the SC
- * proceed.
- *
- * Therefore the error only occurs on SMP systems.
- */
-#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
-#define loongson_llsc_mb()	__asm__ __volatile__("sync" : : :"memory")
-#else
-#define loongson_llsc_mb()	do { } while (0)
-#endif
-
 static inline void sync_ginv(void)
 {
 	asm volatile(__SYNC(ginv, always));
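The comment removed above describes the two situations that needed a barrier under the old scheme: a SYNC between any earlier memory access and the LL, and a SYNC at each branch target outside the LL/SC loop. A rough sketch of how loongson_llsc_mb() was typically placed follows; it is illustrative only, and the function name, operands and constraints are hypothetical rather than copied from the kernel.

/* Illustrative sketch of the old-style barrier placement; names are hypothetical. */
static inline int sketch_cmpxchg(volatile int *p, int old, int new)
{
	int prev, tmp;

	loongson_llsc_mb();		/* case 1: SYNC between earlier accesses and the LL */
	__asm__ __volatile__(
	"1:	ll	%0, %2			\n"	/* load-linked */
	"	bne	%0, %z3, 2f		\n"	/* mismatch: branch out of the loop */
	"	move	%1, %z4			\n"
	"	sc	%1, %2			\n"	/* store-conditional */
	"	beqz	%1, 1b			\n"	/* retry if SC failed */
	"2:					\n"
	: "=&r" (prev), "=&r" (tmp), "+m" (*p)
	: "Jr" (old), "Jr" (new)
	: "memory");
	loongson_llsc_mb();		/* case 2: SYNC at the branch target outside the loop */

	return prev;
}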
@@ -27,7 +27,7 @@ cflags-$(CONFIG_CPU_LOONGSON3) += -Wa,--trap
 #
 # Some versions of binutils, not currently mainline as of 2019/02/04, support
 # an -mfix-loongson3-llsc flag which emits a sync prior to each ll instruction
-# to work around a CPU bug (see loongson_llsc_mb() in asm/barrier.h for a
+# to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h for a
 # description).
 #
 # We disable this in order to prevent the assembler meddling with the