sched: Make resched_cpu() unconditional
author Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Mon, 18 Sep 2017 15:54:40 +0000 (08:54 -0700)
committer Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Mon, 9 Oct 2017 21:24:14 +0000 (14:24 -0700)
The current implementation of synchronize_sched_expedited() incorrectly
assumes that resched_cpu() is unconditional, which it is not.  This means
that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
fails as follows (analysis by Neeraj Upadhyay):

o CPU1 is waiting for expedited wait to complete:

sync_rcu_exp_select_cpus
     rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
     IPI sent to CPU5

synchronize_sched_expedited_wait
 ret = swait_event_timeout(rsp->expedited_wq,
   sync_rcu_preempt_exp_done(rnp_root),
   jiffies_stall);

expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())

o CPU5 handles IPI and fails to acquire rq lock.

Handles IPI
     sync_sched_exp_handler
 resched_cpu
     returns after failing to acquire rq->lock via trylock
 need_resched is not set

o CPU5 calls rcu_idle_enter() and, as need_resched is not set, goes to
idle (schedule() is not called).

o CPU1 reports an RCU stall.
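
For reference, the pre-patch resched_cpu() (reconstructed from the diff
below) bails out silently when the trylock fails, which is exactly the
step the trace above trips over:

	void resched_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		unsigned long flags;

		/* If rq->lock is contended, give up silently... */
		if (!raw_spin_trylock_irqsave(&rq->lock, flags))
			return;	/* ...so need_resched is never set. */
		resched_curr(rq);
		raw_spin_unlock_irqrestore(&rq->lock, flags);
	}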

Given that resched_cpu() is now used only by RCU, this commit fixes the
assumption by making resched_cpu() unconditional.
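
As a rough illustration only (a hypothetical handler, not the actual
sync_sched_exp_handler()), a caller that sends an IPI and relies on
resched_cpu() to force a trip through schedule() on interrupt return
needs that call to be unconditional:

	/* Hypothetical IPI handler, sketched for illustration. */
	static void example_exp_handler(void *unused)
	{
		/*
		 * With the unconditional version, this always sets
		 * need_resched on the current CPU, so an idle CPU calls
		 * schedule() instead of returning to the idle loop.
		 * With the old trylock version, it could silently do
		 * nothing, reproducing the hang described above.
		 */
		resched_cpu(smp_processor_id());
	}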

Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: stable@vger.kernel.org
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d17c5da523a0bc817a6b32413ee7e0ba3e7a6b2e..8fa7b6f9e19b869cb1f9b5aeff32c75fb9b2e629 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -505,8 +505,7 @@ void resched_cpu(int cpu)
        struct rq *rq = cpu_rq(cpu);
        unsigned long flags;
 
-       if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-               return;
+       raw_spin_lock_irqsave(&rq->lock, flags);
        resched_curr(rq);
        raw_spin_unlock_irqrestore(&rq->lock, flags);
 }