perf/core: Make the mlock accounting simple again
Commit d44248a413 ("perf/core: Rework memory accounting in perf_mmap()")
does a lot of things to the mlock accounting arithmetic, while the only
thing that actually needed to happen is subtracting the part that is
charged to the mm from the part that is charged to the user, so that the
former isn't charged twice.
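
For illustration only (not part of the patch), below is a minimal userspace
sketch of the accounting rule the fix restores: new pages are charged to the
user's locked_vm until it reaches user_lock_limit, and only the overflow is
charged to the mm's pinned_vm, so nothing is counted twice. The helper
charge_split() and the example numbers are hypothetical, not kernel code.

    #include <stdio.h>

    /*
     * Sketch of the split performed in perf_mmap(): user_extra new pages
     * are divided between the user's locked_vm (up to user_lock_limit)
     * and the mm's pinned_vm (the remainder).
     */
    static void charge_split(long locked_vm, long user_lock_limit,
                             long user_extra, long *to_locked, long *to_pinned)
    {
            long user_locked = locked_vm + user_extra;
            long extra = 0;

            if (user_locked > user_lock_limit) {
                    /*
                     * charge locked_vm until it hits user_lock_limit;
                     * charge the rest to pinned_vm
                     */
                    extra = user_locked - user_lock_limit;
                    user_extra -= extra;
            }

            *to_locked = user_extra;
            *to_pinned = extra;
    }

    int main(void)
    {
            long to_locked, to_pinned;

            /* 100 pages already locked, limit of 128, mapping 50 more pages */
            charge_split(100, 128, 50, &to_locked, &to_pinned);
            printf("locked_vm += %ld, pinned_vm += %ld\n", to_locked, to_pinned);
            return 0;
    }

With 100 pages already in locked_vm, a limit of 128 and 50 new pages, this
splits the charge as 28 pages to locked_vm and 22 pages to pinned_vm.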
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Cc: songliubraving@fb.com
Link: https://lkml.kernel.org/r/20191120170640.54123-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 36b3db03b4
commit c4b7547974
@@ -5825,13 +5825,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 
 	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
 
-	if (user_locked <= user_lock_limit) {
-		/* charge all to locked_vm */
-	} else if (atomic_long_read(&user->locked_vm) >= user_lock_limit) {
-		/* charge all to pinned_vm */
-		extra = user_extra;
-		user_extra = 0;
-	} else {
+	if (user_locked > user_lock_limit) {
 		/*
 		 * charge locked_vm until it hits user_lock_limit;
 		 * charge the rest from pinned_vm