f6bb2a2c0b
This results in no change in structure size on 64-bit machines as it fits in the padding between the gfp_t and the void *.  32-bit machines will grow the structure from 8 to 12 bytes.  Almost all radix trees are protected with (at least) a spinlock, so as they are converted from radix trees to xarrays, the data structures will shrink again.

Initialising the spinlock requires a name for the benefit of lockdep, so RADIX_TREE_INIT() now needs to know the name of the radix tree it's initialising, and so do IDR_INIT() and IDA_INIT().  Also add the xa_lock() and xa_unlock() family of wrappers to make it easier to use the lock.  If we could rely on -fplan9-extensions in the compiler, we could avoid all of this syntactic sugar, but that wasn't added until gcc 4.6.

Link: http://lkml.kernel.org/r/20180313132639.17387-8-willy@infradead.org
Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
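A sketch of what the xa_lock()/xa_unlock() wrappers described above look like from the caller's side: the spinlock is embedded in the structure, and the wrapper hides the member access.  The struct layout, initialiser, and demo() helper below are illustrative only (using the same pthread_mutex_t stand-in as the test shim in this patch), not the kernel's actual definitions.

```c
#include <pthread.h>

/* Illustrative stand-in for a structure with an embedded lock, in the
 * style of struct xarray.  pthread_mutex_t plays the role of spinlock_t,
 * as in the userspace test shim. */
struct xarray {
	pthread_mutex_t xa_lock;
	void *xa_head;
};

/* The wrappers just take the address of the embedded lock member;
 * callers never have to spell out &xa->xa_lock themselves. */
#define xa_lock(xa)	pthread_mutex_lock(&(xa)->xa_lock)
#define xa_unlock(xa)	pthread_mutex_unlock(&(xa)->xa_lock)

/* Store an entry under the lock and read it back. */
static int demo(void)
{
	static struct xarray xa = { PTHREAD_MUTEX_INITIALIZER, NULL };
	static int value = 42;

	xa_lock(&xa);
	xa.xa_head = &value;
	xa_unlock(&xa);
	return *(int *)xa.xa_head;
}
```

Without -fplan9-extensions (anonymous struct members usable through the outer type), wrappers like these are the cheapest way to let callers ignore where the lock lives inside the structure.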
34 lines
836 B
C
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __LINUX_SPINLOCK_H_
#define __LINUX_SPINLOCK_H_

#include <pthread.h>
#include <stdbool.h>

#define spinlock_t		pthread_mutex_t
#define DEFINE_SPINLOCK(x)	pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
#define __SPIN_LOCK_UNLOCKED(x)	(pthread_mutex_t)PTHREAD_MUTEX_INITIALIZER

/* Userspace has no interrupts to disable; evaluate the flags argument
 * to avoid "unused variable" warnings and just take the mutex. */
#define spin_lock_irqsave(x, f)		(void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f)	(void)f, pthread_mutex_unlock(x)

#define arch_spinlock_t pthread_mutex_t
#define __ARCH_SPIN_LOCK_UNLOCKED PTHREAD_MUTEX_INITIALIZER

static inline void arch_spin_lock(arch_spinlock_t *mutex)
{
	pthread_mutex_lock(mutex);
}

static inline void arch_spin_unlock(arch_spinlock_t *mutex)
{
	pthread_mutex_unlock(mutex);
}

static inline bool arch_spin_is_locked(arch_spinlock_t *mutex)
{
	/* Always report "locked": callers only use this in assertions,
	 * and pthreads has no portable way to query a mutex's state. */
	return true;
}

#endif
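A minimal sketch of how userspace test code would use this shim.  The macro definitions are repeated from the header above so the example compiles standalone (in the tools build they would come from #include <linux/spinlock.h>); counter_lock, counter, and bump() are invented names for illustration.

```c
#include <pthread.h>

/* Repeated from the shim so this example is self-contained. */
#define spinlock_t		pthread_mutex_t
#define DEFINE_SPINLOCK(x)	pthread_mutex_t x = PTHREAD_MUTEX_INITIALIZER;
#define spin_lock_irqsave(x, f)		(void)f, pthread_mutex_lock(x)
#define spin_unlock_irqrestore(x, f)	(void)f, pthread_mutex_unlock(x)

/* Kernel-style usage: a lock protecting a shared counter. */
static DEFINE_SPINLOCK(counter_lock)
static int counter;

static int bump(void)
{
	unsigned long flags;

	/* flags is unused in userspace; the shim casts it to void. */
	spin_lock_irqsave(&counter_lock, flags);
	counter++;
	spin_unlock_irqrestore(&counter_lock, flags);
	return counter;
}
```

Because the shim maps spinlock_t straight onto pthread_mutex_t, kernel code written against the spinlock API compiles and runs unchanged in the userspace radix-tree test harness.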