perf/core: Fix mlock accounting in perf_mmap()
author    Song Liu <songliubraving@fb.com>
          Thu, 23 Jan 2020 18:11:46 +0000 (10:11 -0800)
committer Ingo Molnar <mingo@kernel.org>
          Tue, 28 Jan 2020 20:20:18 +0000 (21:20 +0100)
Decreasing sysctl_perf_event_mlock between two consecutive perf_mmap()s of
a perf ring buffer may lead to an integer underflow in locked memory
accounting. This may lead to undesired behaviors, such as failures in
BPF map creation.

Address this by adjusting the accounting logic to take into account the
possibility that the amount of already locked memory may exceed the
current limit.

Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Acked-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lkml.kernel.org/r/20200123181146.2238074-1-songliubraving@fb.com
kernel/events/core.c

index 2173c23c25b4fcef30b27059a1c799a697d217b1..2d9aeba1f3e206754b732d93ce9b4dc4ae6408ae 100644 (file)
@@ -5916,7 +5916,15 @@ static int perf_mmap(struct file *file, struct vm_area_struct *vma)
         */
        user_lock_limit *= num_online_cpus();
 
-       user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+       user_locked = atomic_long_read(&user->locked_vm);
+
+       /*
+        * sysctl_perf_event_mlock may have changed, so that
+        *     user->locked_vm > user_lock_limit
+        */
+       if (user_locked > user_lock_limit)
+               user_locked = user_lock_limit;
+       user_locked += user_extra;
 
        if (user_locked > user_lock_limit) {
                /*