perf/core: Make the mlock accounting simple again
author    Alexander Shishkin <alexander.shishkin@linux.intel.com>
          Wed, 20 Nov 2019 17:06:40 +0000 (19:06 +0200)
committer Ingo Molnar <mingo@kernel.org>
          Thu, 21 Nov 2019 06:37:50 +0000 (07:37 +0100)
Commit:

  d44248a41337 ("perf/core: Rework memory accounting in perf_mmap()")

does a lot of things to the mlock accounting arithmetic, while the only
thing that actually needed to happen is subtracting the part that is
charged to the mm from the part that is charged to the user, so that the
former isn't charged twice.

Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Acked-by: Song Liu <songliubraving@fb.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wanpeng Li <wanpengli@tencent.com>
Cc: Yauheni Kaliuta <yauheni.kaliuta@redhat.com>
Cc: songliubraving@fb.com
Link: https://lkml.kernel.org/r/20191120170640.54123-1-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/events/core.c

index 8f66a4833dedd62d6e645cb9e273a89af7fbaf93..7e8980d0b99755b09a9b5b6c1e4bff5965c6172c 100644 (file)
@@ -5825,13 +5825,7 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
 
        user_locked = atomic_long_read(&user->locked_vm) + user_extra;
 
-       if (user_locked <= user_lock_limit) {
-               /* charge all to locked_vm */
-       } else if (atomic_long_read(&user->locked_vm) >= user_lock_limit) {
-               /* charge all to pinned_vm */
-               extra = user_extra;
-               user_extra = 0;
-       } else {
+       if (user_locked > user_lock_limit) {
                /*
                 * charge locked_vm until it hits user_lock_limit;
                 * charge the rest from pinned_vm