commit 8f32543b61

This commit adds comments to the litmus tests summarizing what these
tests are intended to demonstrate.

[ paulmck: Apply Andrea's and Alan's feedback. ]

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: nborisov@suse.com
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Cc: will.deacon@arm.com
Link: http://lkml.kernel.org/r/1519169112-20593-4-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
36 lines · 762 B · Plaintext
C MP+polocks

(*
 * Result: Never
 *
 * This litmus test demonstrates how lock acquisitions and releases can
 * stand in for smp_load_acquire() and smp_store_release(), respectively.
 * In other words, when holding a given lock (or indeed after releasing a
 * given lock), a CPU is not only guaranteed to see the accesses that other
 * CPUs made while previously holding that lock, it is also guaranteed
 * to see all prior accesses by those other CPUs.
 *)

{}

P0(int *x, int *y, spinlock_t *mylock)
{
	WRITE_ONCE(*x, 1);
	spin_lock(mylock);
	WRITE_ONCE(*y, 1);
	spin_unlock(mylock);
}

P1(int *x, int *y, spinlock_t *mylock)
{
	int r0;
	int r1;

	spin_lock(mylock);
	r0 = READ_ONCE(*y);
	spin_unlock(mylock);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
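For comparison, the substitution that the header comment describes can be made
explicit: the same message-passing pattern written with
smp_store_release()/smp_load_acquire() in place of the lock operations. Below
is a minimal sketch in the same litmus syntax (the test name MP+porelacq is
illustrative, not a file in the tree); smp_store_release() plays the role of
spin_unlock() on the writer side and smp_load_acquire() the role of
spin_lock() on the reader side, so the exists clause is again never satisfied
under the Linux-kernel memory model.

C MP+porelacq

(*
 * Result: Never  (illustrative analogue of MP+polocks)
 *
 * smp_store_release() stands in for spin_unlock() and
 * smp_load_acquire() stands in for spin_lock().
 *)

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_store_release(y, 1);
}

P1(int *x, int *y)
{
	int r0;
	int r1;

	r0 = smp_load_acquire(y);
	r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)

Either test can be checked with herd7 against the model shipped in
tools/memory-model (per that directory's README, something along the lines of
"herd7 -conf linux-kernel.cfg MP+polocks.litmus"); the expected verdict is
that the exists clause has no witnesses, i.e. the r0=1 /\ r1=0 outcome
never occurs.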