slab: fix generic PAGE_POISONING conflict with SLAB_RED_ZONE
A generic page poisoning mechanism was added with commit 6a11f75b6a, which destructively poisons full pages with a bit pattern.
On arches where PAGE_POISONING is used, this conflicts with the slab
redzone checking enabled by DEBUG_SLAB, scribbling bits all over its
magic words and making it complain about that quite emphatically.
On x86 (and I presume at present all the other arches which set
ARCH_SUPPORTS_DEBUG_PAGEALLOC too), the kernel_map_pages() operation
is non-destructive, so it can coexist with the other DEBUG_SLAB
mechanisms just fine.
This patch favours the expensive full-page destruction test in the cases
where the two collide and it is explicitly selected: if a cache's objects
are page-sized and SLAB_POISON is set, the SLAB_RED_ZONE and
SLAB_STORE_USER flags are switched off, since the generic poisoning would
clobber them anyway.
Signed-off-by: Ron Lee <ron@debian.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
commit 6746136520 (parent 45d447406a)
mm/slab.c
@@ -2353,6 +2353,15 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 		/* really off slab. No need for manual alignment */
 		slab_size =
 			cachep->num * sizeof(kmem_bufctl_t) + sizeof(struct slab);
+
+#ifdef CONFIG_PAGE_POISONING
+		/* If we're going to use the generic kernel_map_pages()
+		 * poisoning, then it's going to smash the contents of
+		 * the redzone and userword anyhow, so switch them off.
+		 */
+		if (size % PAGE_SIZE == 0 && flags & SLAB_POISON)
+			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
+#endif
 	}
 
 	cachep->colour_off = cache_line_size();
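For illustration, here is a minimal userspace sketch of the flag-clearing rule the hunk above introduces. The PAGE_SIZE and flag bit values are placeholders picked for the example rather than the kernel's definitions, and adjust_debug_flags() is a hypothetical helper standing in for the relevant part of kmem_cache_create().

#include <stddef.h>
#include <stdio.h>

#define PAGE_SIZE       4096UL          /* placeholder, not the kernel's */
#define SLAB_POISON     0x00000800UL    /* placeholder flag bit */
#define SLAB_RED_ZONE   0x00000400UL    /* placeholder flag bit */
#define SLAB_STORE_USER 0x00010000UL    /* placeholder flag bit */

/*
 * Hypothetical helper mirroring the rule added to kmem_cache_create():
 * if the object size is a whole number of pages and poisoning is
 * requested, the generic page poisoner will overwrite the entire page,
 * so the redzone and user-tracking words cannot survive and are
 * switched off.
 */
static unsigned long adjust_debug_flags(size_t size, unsigned long flags)
{
	if (size % PAGE_SIZE == 0 && (flags & SLAB_POISON))
		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	return flags;
}

int main(void)
{
	unsigned long flags = SLAB_POISON | SLAB_RED_ZONE | SLAB_STORE_USER;

	printf("page-sized object: %#lx -> %#lx\n",
	       flags, adjust_debug_flags(PAGE_SIZE, flags));
	printf("small object:      %#lx -> %#lx\n",
	       flags, adjust_debug_flags(192, flags));
	return 0;
}

Built with any C compiler, the first line shows the debug flags being masked off for a page-sized object, while the second leaves them untouched, mirroring the condition inside the #ifdef CONFIG_PAGE_POISONING block above.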