sched/fair: Skip calculating @contrib without load
author     Peng Wang <rocking@linux.alibaba.com>
           Fri, 13 Dec 2019 03:45:40 +0000 (11:45 +0800)
committer  Peter Zijlstra <peterz@infradead.org>
           Tue, 17 Dec 2019 12:32:51 +0000 (13:32 +0100)
Because of the:

	if (!load)
		runnable = running = 0;

clause in ___update_load_sum(), all the actual users of @contrib in
accumulate_sum():

	if (load)
		sa->load_sum += load * contrib;
	if (runnable)
		sa->runnable_load_sum += runnable * contrib;
	if (running)
		sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;

don't happen, and therefore we don't care what @contrib actually is and
calculating it is pointless.
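
A minimal user-space sketch of that control flow may make the argument easier to follow. The names here (sched_avg_model, update_model(), accumulate_segments_model()) are illustrative stand-ins, not the kernel's actual implementation:

	/* Standalone model: only illustrates why @contrib is dead when
	 * @load is zero; not the real PELT code. */
	#include <stdio.h>

	struct sched_avg_model {
		unsigned long load_sum;
		unsigned long runnable_load_sum;
		unsigned long util_sum;
	};

	/* Stand-in for __accumulate_pelt_segments(); the real helper sums
	 * geometrically decayed segments, a constant is enough here. */
	static unsigned long accumulate_segments_model(void)
	{
		return 1024;
	}

	static void update_model(struct sched_avg_model *sa,
				 unsigned long load, int runnable, int running)
	{
		unsigned long contrib = 0;

		/* The clause from ___update_load_sum(): no load also means
		 * no runnable and no running contribution. */
		if (!load)
			runnable = running = 0;

		/* With the patch, the (expensive) segment accumulation is
		 * only done when @load is non-zero ... */
		if (load)
			contrib = accumulate_segments_model();

		/* ... because these are the only readers of @contrib, and
		 * all of them are dead once the inputs were zeroed above. */
		if (load)
			sa->load_sum += load * contrib;
		if (runnable)
			sa->runnable_load_sum += runnable * contrib;
		if (running)
			sa->util_sum += contrib << 10; /* SCHED_CAPACITY_SHIFT */
	}

	int main(void)
	{
		struct sched_avg_model sa = { 0, 0, 0 };

		update_model(&sa, 0, 1, 1);	/* no load: nothing accumulates  */
		update_model(&sa, 2, 1, 1);	/* load: all three sums advance  */

		printf("load_sum=%lu runnable_load_sum=%lu util_sum=%lu\n",
		       sa.load_sum, sa.runnable_load_sum, sa.util_sum);
		return 0;
	}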

To check how often this happens, we can count the times when @load equals zero and when it does not, as below:

	if (load) {
		load_is_not_zero_count++;
		contrib = __accumulate_pelt_segments(periods,
				1024 - sa->period_contrib, delta);
	} else
		load_is_zero_count++;

As we can see, load_is_zero_count is much bigger than
load_is_not_zero_count, and the gap is gradually widening:

load_is_zero_count:            6016044 times
load_is_not_zero_count:         244316 times
19:50:43 up 1 min,  1 user,  load average: 0.09, 0.06, 0.02

load_is_zero_count:            7956168 times
load_is_not_zero_count:         261472 times
19:51:42 up 2 min,  1 user,  load average: 0.03, 0.05, 0.01

load_is_zero_count:           10199896 times
load_is_not_zero_count:         278364 times
19:52:51 up 3 min,  1 user,  load average: 0.06, 0.05, 0.01

load_is_zero_count:           14333700 times
load_is_not_zero_count:         318424 times
19:54:53 up 5 min,  1 user,  load average: 0.01, 0.03, 0.00
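
In the last sample above, the zero-load case thus accounts for roughly
14333700 / (14333700 + 318424) ~= 98% of the counted updates.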

Perhaps we can gain some performance advantage by skipping these
unnecessary calculations.

Signed-off-by: Peng Wang <rocking@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/1576208740-35609-1-git-send-email-rocking@linux.alibaba.com
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index a96db50d40e06339644b53d1314f6a899b9096ec..bd006b79b3608b9d5d371996dfe523b7f5bbfacd 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -129,8 +129,20 @@ accumulate_sum(u64 delta, struct sched_avg *sa,
                 * Step 2
                 */
                delta %= 1024;
-               contrib = __accumulate_pelt_segments(periods,
-                               1024 - sa->period_contrib, delta);
+               if (load) {
+                       /*
+                        * This relies on the:
+                        *
+                        * if (!load)
+                        *      runnable = running = 0;
+                        *
+                        * clause from ___update_load_sum(); this results in
+                        * the below usage of @contrib disappearing entirely,
+                        * so no point in calculating it.
+                        */
+                       contrib = __accumulate_pelt_segments(periods,
+                                       1024 - sa->period_contrib, delta);
+               }
        }
        sa->period_contrib = delta;
 
@@ -205,7 +217,9 @@ ___update_load_sum(u64 now, struct sched_avg *sa,
         * This means that weight will be 0 but not running for a sched_entity
         * but also for a cfs_rq if the latter becomes idle. As an example,
         * this happens during idle_balance() which calls
-        * update_blocked_averages()
+        * update_blocked_averages().
+        *
+        * Also see the comment in accumulate_sum().
         */
        if (!load)
                runnable = running = 0;