mm: proc: smaps_rollup: do not stall write attempts on mmap_lock
smaps_rollup will try to grab mmap_lock and go through the whole vma list until it finishes iterating. When encountering large processes, the mmap_lock will be held for a longer time, which may block other write requests like mmap and munmap from progressing smoothly.

There are upcoming mmap_lock optimizations like range-based locks, but the lock applied to smaps_rollup would be the coarse type, which doesn't avoid the occurrence of unpleasant contention.

To solve the aforementioned issue, we add a check which detects whether anyone wants to grab mmap_lock for a write attempt. If so, the read lock is dropped and re-taken, and the iteration resumes from the last position recorded.

Signed-off-by: Chinwen Chang <chinwen.chang@mediatek.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Steven Price <steven.price@arm.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: Chinwen Chang <chinwen.chang@mediatek.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Song Liu <songliubraving@fb.com>
Cc: Jimmy Assarsson <jimmyassarsson@gmail.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Link: http://lkml.kernel.org/r/1597715898-3854-4-git-send-email-chinwen.chang@mediatek.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 03b4b11493
commit ff9f47f6f0
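For context before the hunk: the patch keys off mmap_lock contention, and the wrappers it calls are thin veneers over the rwsem that backs mmap_lock. Below is a simplified sketch of those helpers, condensed from include/linux/mmap_lock.h of this era; the function names, the mm->mmap_lock field, and the rwsem primitives are real kernel interfaces, but the bodies here are paraphrased for illustration, not quoted verbatim.

#include <linux/mm_types.h>
#include <linux/rwsem.h>

/* Sketch: the mmap_lock wrappers used by the hunk below. */
static inline bool mmap_lock_is_contended(struct mm_struct *mm)
{
	/* True when another task sits on the rwsem's wait list, e.g. a
	 * writer blocked in mmap()/munmap() behind our read lock. */
	return rwsem_is_contended(&mm->mmap_lock);
}

static inline void mmap_read_unlock(struct mm_struct *mm)
{
	up_read(&mm->mmap_lock);
}

static inline int mmap_read_lock_killable(struct mm_struct *mm)
{
	/* 0 on success, -EINTR if a fatal signal arrives while sleeping */
	return down_read_killable(&mm->mmap_lock);
}

Dropping and re-taking the read lock whenever the check fires lets queued writers, which would otherwise wait out the entire VMA walk, make progress between iterations.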
@@ -865,9 +865,73 @@ static int show_smaps_rollup(struct seq_file *m, void *v)
 
 	hold_task_mempolicy(priv);
 
-	for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
+	for (vma = priv->mm->mmap; vma;) {
 		smap_gather_stats(vma, &mss, 0);
 		last_vma_end = vma->vm_end;
+
+		/*
+		 * Release mmap_lock temporarily if someone wants to
+		 * access it for write request.
+		 */
+		if (mmap_lock_is_contended(mm)) {
+			mmap_read_unlock(mm);
+			ret = mmap_read_lock_killable(mm);
+			if (ret) {
+				release_task_mempolicy(priv);
+				goto out_put_mm;
+			}
+
+			/*
+			 * After dropping the lock, there are four cases to
+			 * consider. See the following example for explanation.
+			 *
+			 *   +------+------+-----------+
+			 *   | VMA1 | VMA2 | VMA3      |
+			 *   +------+------+-----------+
+			 *   |      |      |           |
+			 *  4k     8k     16k         400k
+			 *
+			 * Suppose we drop the lock after reading VMA2 due to
+			 * contention, then we get:
+			 *
+			 *	last_vma_end = 16k
+			 *
+			 * 1) VMA2 is freed, but VMA3 exists:
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA3.
+			 *    In this case, just continue from VMA3.
+			 *
+			 * 2) VMA2 still exists:
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA2.
+			 *    Iterate the loop like the original one.
+			 *
+			 * 3) No more VMAs can be found:
+			 *
+			 *    find_vma(mm, 16k - 1) will return NULL.
+			 *    No more things to do, just break.
+			 *
+			 * 4) (last_vma_end - 1) is the middle of a vma (VMA'):
+			 *
+			 *    find_vma(mm, 16k - 1) will return VMA' whose range
+			 *    contains last_vma_end.
+			 *    Iterate VMA' from last_vma_end.
+			 */
+			vma = find_vma(mm, last_vma_end - 1);
+			/* Case 3 above */
+			if (!vma)
+				break;
+
+			/* Case 1 above */
+			if (vma->vm_start >= last_vma_end)
+				continue;
+
+			/* Case 4 above */
+			if (vma->vm_end > last_vma_end)
+				smap_gather_stats(vma, &mss, last_vma_end);
+		}
+		/* Case 2 above */
+		vma = vma->vm_next;
 	}
 
 	show_vma_header_prefix(m, priv->mm->mmap->vm_start,
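The case analysis in the big comment hinges on find_vma() semantics: find_vma(mm, addr) returns the first VMA whose vm_end is greater than addr, or NULL if none exists, which is why probing with last_vma_end - 1 cleanly distinguishes the four situations.

To exercise this path from userspace, it is enough to read /proc/<pid>/smaps_rollup while the target process is mmap()/munmap()-heavy. A minimal reader as a sketch (the procfs file and its read semantics are real; the program itself is only an illustration, not part of the patch):

/* rollup.c - dump /proc/<pid>/smaps_rollup once.
 * Each read walks the target's whole VMA list under mmap_lock; with
 * this patch the walk yields to contending writers, so reading the
 * rollup of a huge process no longer stalls its concurrent
 * mmap()/munmap() calls.
 *
 * Build: cc -o rollup rollup.c    Run: ./rollup [pid]
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char path[64], buf[4096];
	ssize_t n;
	int fd;

	/* Default to our own process when no pid is given. */
	snprintf(path, sizeof(path), "/proc/%s/smaps_rollup",
		 argc > 1 ? argv[1] : "self");

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}

	/* The kernel renders the rollup on read: one header line plus
	 * Rss/Pss/etc. totals summed over every VMA. */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		fwrite(buf, 1, (size_t)n, stdout);

	close(fd);
	return EXIT_SUCCESS;
}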