kernel_optimize_test/mm
Hugh Dickins 31c4a3d3a0 mm: further fix swapin race condition
Commit 4969c1192d ("mm: fix swapin race condition") is now agreed to
be incomplete.  There's a race, not much less likely than the one
originally envisaged, which makes it further necessary to check that
the swapcache page's swap has not changed.

Here's the reasoning, cast in terms of reuse_swap_page(); it could
probably be reformulated to rely on try_to_free_swap() instead, or on
swapoff+swapon.

A faults into do_swap_page(): it does page1 = lookup_swap_cache(swap1)
and gets through the lock_page(page1).

B, a racing thread of the same process, faults on the same address: it
does page1 = lookup_swap_cache(swap1) and now waits in lock_page(page1),
but for whatever reason is unlucky enough not to get the lock any time soon.

A carries on through do_swap_page(); it is a write fault, but A cannot
reuse page1 (there is another reference to swap1).  It unlocks page1
(but B doesn't get it yet) and does COW in do_wp_page(), so page2 is now
in that pte.

C, perhaps the parent of A+B, comes in and write faults the same swap
page1 into its mm; this time reuse_swap_page() succeeds, and swap1 is freed.

kswapd comes in after some time (B still unlucky) and swaps out some
pages from A+B and C: it allocates the original swap1 to page2 in A+B,
and some other swap2 to the original page1 now in C.  But it does not
immediately free page1 (in fact it couldn't: B holds a reference),
leaving it in the swap cache for now.

B at last gets the lock on page1, hooray! Is PageSwapCache(page1)? Yes.
Is pte_same(*page_table, orig_pte)? Yes, because page2 has now been
given the swap1 which page1 used to have.  So B proceeds to insert page1
into A+B's page_table, though its content now belongs to C, quite
different from what A wrote there.
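
For reference, the re-checks B relies on after finally getting the lock
look roughly like this: a simplified paraphrase of the do_swap_page()
path described above, written with the scenario's names (the elisions
and the out_page/out_nomap error labels are illustrative, not the exact
upstream code).

        page1 = lookup_swap_cache(swap1);
        ...
        lock_page(page1);                       /* B waits here a long time */
        ...
        if (unlikely(!PageSwapCache(page1)))    /* still swapcache?  Yes */
                goto out_page;
        ...
        page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
        if (unlikely(!pte_same(*page_table, orig_pte)))
                goto out_nomap;                 /* pte unchanged?  Yes */
        /* ...so B goes on to map page1, though its swap is no longer swap1 */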

B ought to have checked that page1's swap was still swap1.
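
One way to express that missing check, sketched in the scenario's terms:
a swapcache page records its swp_entry_t value in page_private(), so
after lock_page() B can compare it against the entry it faulted on.
This is a sketch of the idea, reusing the names and label from the
snippet above, not the literal upstream patch.

        /*
         * The page pin and the later pte_same() test are not enough:
         * even if page1 is still PageSwapCache, its swap may have been
         * changed underneath us, so compare the entry itself too.
         */
        if (unlikely(!PageSwapCache(page1) ||
                     page_private(page1) != swap1.val))
                goto out_page;          /* back out; the fault is retried */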

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-09-20 10:44:37 -07:00
backing-dev.c
bootmem.c
bounce.c bounce: call flush_dcache_page() after bounce_copy_vec() 2010-09-09 18:57:25 -07:00
compaction.c mm: compaction: handle active and inactive fairly in too_many_isolated 2010-09-09 18:57:24 -07:00
debug-pagealloc.c
dmapool.c
fadvise.c
failslab.c
filemap_xip.c
filemap.c
fremap.c
highmem.c
hugetlb.c
hwpoison-inject.c
init-mm.c
internal.h
Kconfig mm: avoid warning when COMPACTION is selected 2010-09-09 18:57:24 -07:00
Kconfig.debug
kmemcheck.c
kmemleak-test.c
kmemleak.c
ksm.c mm: fix swapin race condition 2010-09-09 18:57:24 -07:00
maccess.c
madvise.c
Makefile
memblock.c
memcontrol.c
memory_hotplug.c memory hotplug: fix next block calculation in is_removable 2010-09-09 18:57:24 -07:00
memory-failure.c
memory.c mm: further fix swapin race condition 2010-09-20 10:44:37 -07:00
mempolicy.c
mempool.c
migrate.c
mincore.c
mlock.c mm: Move vma_stack_continue into mm.h 2010-09-09 09:05:06 -07:00
mm_init.c
mmap.c
mmu_context.c
mmu_notifier.c
mmzone.c mm: page allocator: calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake 2010-09-09 18:57:25 -07:00
mprotect.c
mremap.c
msync.c
nommu.c
oom_kill.c
page_alloc.c mm: page allocator: drain per-cpu lists after direct reclaim allocation fails 2010-09-09 18:57:25 -07:00
page_cgroup.c
page_io.c
page_isolation.c
page-writeback.c
pagewalk.c
percpu_up.c
percpu-km.c
percpu-vm.c
percpu.c
prio_tree.c
quicklist.c
readahead.c
rmap.c
shmem.c
slab.c
slob.c
slub.c
sparse-vmemmap.c
sparse.c
swap_state.c
swap.c
swapfile.c swap: discard while swapping only if SWAP_FLAG_DISCARD 2010-09-09 18:57:25 -07:00
thrash.c
truncate.c
util.c
vmalloc.c
vmscan.c
vmstat.c mm: page allocator: calculate a better estimate of NR_FREE_PAGES when memory is low and kswapd is awake 2010-09-09 18:57:25 -07:00