author		Paul E. McKenney <paulmck@linux.vnet.ibm.com>	Wed, 11 Dec 2013 21:59:09 +0000 (13:59 -0800)
committer	Ingo Molnar <mingo@kernel.org>	Mon, 16 Dec 2013 10:36:15 +0000 (11:36 +0100)
commit		17eb88e068430014deb709e5af34197cdf2390c9
tree		869d7c1e27ff7eeb2b0b846b8f844d32ac375222
parent		01352fb81658cbf78c55844de8e3d1d606bbf3f8
Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

Historically, an UNLOCK+LOCK pair executed by one CPU, by one
task, or on a given lock variable has implied a full memory
barrier.  In a recent LKML thread, the wisdom of this historical
approach was called into question
(http://www.spinics.net/lists/linux-mm/msg65653.html), in part
due to the memory-ordering complexities of low-handoff-overhead
queued locks on x86 systems.
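
For concreteness, the pattern in question might look as follows;
the lock and variable names here are illustrative only and are
not taken from the patch.  Under the historical interpretation,
the UNLOCK of M followed by the LOCK of N on the same CPU was
assumed to order the surrounding accesses as a full memory
barrier would:

	/* Illustrative only: locks M, N and variables a, b are hypothetical. */
	ACCESS_ONCE(a) = 1;
	spin_unlock(&M);
	spin_lock(&N);		/* historically assumed: UNLOCK+LOCK acts as a full barrier */
	r1 = ACCESS_ONCE(b);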

This patch therefore removes this guarantee from the
documentation, and further documents how to restore it via a new
smp_mb__after_unlock_lock() primitive.
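
A minimal sketch of the documented remedy, again with
hypothetical lock and variable names: code that still needs the
full-barrier semantics places smp_mb__after_unlock_lock()
immediately after the LOCK, which upgrades the UNLOCK+LOCK pair
back to a full memory barrier:

	/* Illustrative only: the names M, N, a, b are hypothetical. */
	ACCESS_ONCE(a) = 1;
	spin_unlock(&M);
	spin_lock(&N);
	smp_mb__after_unlock_lock();	/* restores full-barrier semantics for UNLOCK M + LOCK N */
	r1 = ACCESS_ONCE(b);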

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: <linux-arch@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1386799151-2219-6-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Documentation/memory-barriers.txt