sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases
Author:     Patrick Bellasi <patrick.bellasi@matbug.net>
AuthorDate: Wed, 23 Oct 2019 20:56:30 +0000 (21:56 +0100)
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 29 Oct 2019 09:01:07 +0000 (10:01 +0100)

The estimated utilization for a task:

   util_est = max(util_avg, est.enqueued, est.ewma)

is defined based on:

 - util_avg: the PELT defined utilization
 - est.enqueued: the util_avg at the end of the last activation
 - est.ewma:     an exponential moving average of the est.enqueued samples
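
In code terms, this combination amounts to the following (a minimal
standalone sketch, not the kernel implementation; the helper name is made
up):

   /* Sketch: util_est = max(util_avg, est.enqueued, est.ewma) */
   static unsigned long task_util_est_sketch(unsigned long util_avg,
                                             unsigned long est_enqueued,
                                             unsigned long est_ewma)
   {
           unsigned long est = est_enqueued > est_ewma ? est_enqueued
                                                       : est_ewma;

           return util_avg > est ? util_avg : est;
   }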

According to this definition, when a task suddenly changes its bandwidth
requirements from small to big, the EWMA needs to collect multiple samples
before it converges up to the new, bigger utilization, as the example below
illustrates.
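
As a rough illustration, the fixed-point EWMA update used further below can
be modeled in user space (assuming UTIL_EST_WEIGHT_SHIFT == 2, i.e. a 1/4
weight for new samples; the utilization values are made up):

   #include <stdio.h>

   #define UTIL_EST_WEIGHT_SHIFT 2

   int main(void)
   {
           unsigned long ewma   = 100;     /* old, small utilization */
           unsigned long sample = 800;     /* new, big est.enqueued sample */
           int i;

           for (i = 1; i <= 8; i++) {
                   unsigned long diff = sample - ewma;

                   /* same fixed-point step as the kernel's EWMA update */
                   ewma <<= UTIL_EST_WEIGHT_SHIFT;
                   ewma  += diff;
                   ewma >>= UTIL_EST_WEIGHT_SHIFT;
                   printf("activation %d: est.ewma = %lu\n", i, ewma);
           }
           return 0;
   }

With these made-up values the EWMA has only reached ~728 after eight
activations, still roughly 10% below the new sample: that is the slow
ramp-up this patch removes.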

This slow convergence towards bigger utilization values is not aligned
with the default scheduler behavior, which is to optimize for
performance. Moreover, the est.ewma component fails to compensate for
temporary utilization drops which span just a few est.enqueued samples.

To let util_est do a better job in the scenario depicted above, change
its definition so that util_est directly follows upward motion and
est.ewma is only decayed on downward motion.

Signed-off-by: Patrick Bellasi <patrick.bellasi@matbug.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Douglas Raillard <douglas.raillard@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Quentin Perret <qperret@google.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191023205630.14469-1-patrick.bellasi@matbug.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c
kernel/sched/features.h

kernel/sched/fair.c
index a81c364728224d6df0b6c41950987b629046bd77..a14487462b6cff94a5f7b51b4f1a1661cbe2ff3c 100644
@@ -3768,11 +3768,22 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
        if (ue.enqueued & UTIL_AVG_UNCHANGED)
                return;
 
+       /*
+        * Reset EWMA on utilization increases, the moving average is used only
+        * to smooth utilization decreases.
+        */
+       ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
+       if (sched_feat(UTIL_EST_FASTUP)) {
+               if (ue.ewma < ue.enqueued) {
+                       ue.ewma = ue.enqueued;
+                       goto done;
+               }
+       }
+
        /*
         * Skip update of task's estimated utilization when its EWMA is
         * already ~1% close to its last activation value.
         */
-       ue.enqueued = (task_util(p) | UTIL_AVG_UNCHANGED);
        last_ewma_diff = ue.enqueued - ue.ewma;
        if (within_margin(last_ewma_diff, (SCHED_CAPACITY_SCALE / 100)))
                return;
@@ -3805,6 +3816,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep)
        ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
        ue.ewma  += last_ewma_diff;
        ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
+done:
        WRITE_ONCE(p->se.avg.util_est, ue);
 }
 
kernel/sched/features.h
index 2410db5e9a35302856df5db8ce1c65e8634fef80..7481cd96f3915536937fbcfa87343d4f89ba1629 100644
@@ -89,3 +89,4 @@ SCHED_FEAT(WA_BIAS, true)
  * UtilEstimation. Use estimated CPU utilization.
  */
 SCHED_FEAT(UTIL_EST, true)
+SCHED_FEAT(UTIL_EST_FASTUP, true)