1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
10 (*) Abstract memory access model.
15 (*) What are memory barriers?
17 - Varieties of memory barrier.
18 - What may not be assumed about memory barriers?
19 - Data dependency barriers.
20 - Control dependencies.
21 - SMP barrier pairing.
22 - Examples of memory barrier sequences.
23 - Read memory barriers vs load speculation.
26 (*) Explicit kernel barriers.
29 - CPU memory barriers.
32 (*) Implicit kernel memory barriers.
35 - Interrupt disabling functions.
36 - Sleep and wake-up functions.
37 - Miscellaneous functions.
39 (*) Inter-CPU locking barrier effects.
41 - Locks vs memory accesses.
42 - Locks vs I/O accesses.
44 (*) Where are memory barriers needed?
46 - Interprocessor interaction.
51 (*) Kernel I/O barrier effects.
53 (*) Assumed minimum execution ordering model.
55 (*) The effects of the cpu cache.
58 - Cache coherency vs DMA.
59 - Cache coherency vs MMIO.
61 (*) The things CPUs get up to.
63 - And then there's the Alpha.
72 ============================
73 ABSTRACT MEMORY ACCESS MODEL
74 ============================
76 Consider the following abstract model of the system:
81 +-------+ : +--------+ : +-------+
84 | CPU 1 |<----->| Memory |<----->| CPU 2 |
87 +-------+ : +--------+ : +-------+
95 +---------->| Device |<----------+
101 Each CPU executes a program that generates memory access operations. In the
102 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103 perform the memory operations in any order it likes, provided program causality
104 appears to be maintained. Similarly, the compiler may also arrange the
105 instructions it emits in any order it likes, provided it doesn't affect the
106 apparent operation of the program.
108 So in the above diagram, the effects of the memory operations performed by a
109 CPU are perceived by the rest of the system as the operations cross the
110 interface between the CPU and rest of the system (the dotted lines).
113 For example, consider the following sequence of events:
	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;
121 The set of accesses as seen by the memory system in the middle can be arranged
122 in 24 different combinations:
124 STORE A=3, STORE B=4, x=LOAD A->3, y=LOAD B->4
125 STORE A=3, STORE B=4, y=LOAD B->4, x=LOAD A->3
126 STORE A=3, x=LOAD A->3, STORE B=4, y=LOAD B->4
127 STORE A=3, x=LOAD A->3, y=LOAD B->2, STORE B=4
128 STORE A=3, y=LOAD B->2, STORE B=4, x=LOAD A->3
129 STORE A=3, y=LOAD B->2, x=LOAD A->3, STORE B=4
130 STORE B=4, STORE A=3, x=LOAD A->3, y=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 1, y == 2
	x == 1, y == 4
	x == 3, y == 2
	x == 3, y == 4
142 Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
147 As a further example, consider this sequence of events:
	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
			D = *Q;
155 There is an obvious data dependency here, as the value loaded into D depends on
156 the address retrieved from P by CPU 2. At the end of the sequence, any of the
157 following results are possible:
159 (Q == &A) and (D == 1)
160 (Q == &B) and (D == 2)
161 (Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
164 into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
171 locations, but the order in which the control registers are accessed is very
172 important. For instance, imagine an ethernet card with a set of internal
173 registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;
180 but this might show up as either of the following two sequences:
182 STORE *A = 5, x = LOAD *D
183 x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
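A minimal sketch of a defensive version of this access, assuming A and D are
the address and data ports described above and that a mandatory barrier is
appropriate for the I/O window in use (mb() is described under "Explicit
kernel barriers" below):

	*A = 5;		/* select internal register 5 */
	mb();		/* commit the address before touching the data port */
	x = *D;		/* read out the selected register */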
==========
GUARANTEES
==========

There are some minimal guarantees that may be expected of a CPU:
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195 respect to itself. This means that for:
197 ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
199 the CPU will issue the following memory operations:
201 Q = LOAD P, D = LOAD *Q
203 and always in that order. On most systems, smp_read_barrier_depends()
204 does nothing, but it is required for DEC Alpha. The ACCESS_ONCE()
is required to prevent compiler mischief. Please note that you
should normally use something like rcu_dereference() instead of
open-coding smp_read_barrier_depends(); a sketch of this pattern
appears after this list.
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210 ordered within that CPU. This means that for:
212 a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
214 the CPU will only issue the following sequence of memory operations:
216 a = LOAD *X, STORE *X = b
And for:

	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
222 the CPU will only issue:
224 STORE *X = c, d = LOAD *X
(Loads and stores overlap if they are targeted at overlapping pieces of
memory).
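As promised above, here is a minimal sketch of the dependent-load pattern from
the first guarantee, written with rcu_dereference() rather than an open-coded
smp_read_barrier_depends(); struct foo and gptr are hypothetical, and the
reader is assumed to be in a context where rcu_read_lock() may be used:

	struct foo {
		int a;
	};
	struct foo __rcu *gptr;

	int read_foo_a(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();
		p = rcu_dereference(gptr);	/* Q = LOAD P, dependency barrier implied */
		if (p)
			ret = p->a;		/* D = LOAD *Q, ordered after the pointer load */
		rcu_read_unlock();
		return ret;
	}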
229 And there are a number of things that _must_ or _must_not_ be assumed:
231 (*) It _must_not_ be assumed that the compiler will do what you want with
232 memory references that are not protected by ACCESS_ONCE(). Without
233 ACCESS_ONCE(), the compiler is within its rights to do all sorts
234 of "creative" transformations, which are covered in the Compiler
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
238 in the order given. This means that for:
240 X = *A; Y = *B; *D = Z;
242 we may get any of the following sequences:
244 X = LOAD *A, Y = LOAD *B, STORE *D = Z
245 X = LOAD *A, STORE *D = Z, Y = LOAD *B
246 Y = LOAD *B, X = LOAD *A, STORE *D = Z
247 Y = LOAD *B, STORE *D = Z, X = LOAD *A
248 STORE *D = Z, X = LOAD *A, Y = LOAD *B
249 STORE *D = Z, Y = LOAD *B, X = LOAD *A
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
252 discarded. This means that for:
254 X = *A; Y = *(A + 4);
256 we may get any one of the following sequences:
258 X = LOAD *A; Y = LOAD *(A + 4);
259 Y = LOAD *(A + 4); X = LOAD *A;
260 {X, Y} = LOAD {*A, *(A + 4) };
And for:

	*A = X; *(A + 4) = Y;
we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
269 STORE *(A + 4) = Y; STORE *A = X;
270 STORE {*A, *(A + 4) } = {X, Y};
273 =========================
274 WHAT ARE MEMORY BARRIERS?
275 =========================
277 As can be seen above, independent memory operations are effectively performed
278 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
279 What is required is some way of intervening to instruct the compiler and the
280 CPU to restrict the order.
282 Memory barriers are such interventions. They impose a perceived partial
283 ordering over the memory operations on either side of the barrier.
285 Such enforcement is important because the CPUs and other devices in a system
286 can use a variety of tricks to improve performance, including reordering,
287 deferral and combination of memory operations; speculative loads; speculative
288 branch prediction and various types of caching. Memory barriers are used to
289 override or suppress these tricks, allowing the code to sanely control the
290 interaction of multiple CPUs and/or devices.
293 VARIETIES OF MEMORY BARRIER
294 ---------------------------
296 Memory barriers come in four basic varieties:
298 (1) Write (or store) memory barriers.
300 A write memory barrier gives a guarantee that all the STORE operations
301 specified before the barrier will appear to happen before all the STORE
302 operations specified after the barrier with respect to the other
303 components of the system.
305 A write barrier is a partial ordering on stores only; it is not required
306 to have any effect on loads.
308 A CPU can be viewed as committing a sequence of store operations to the
309 memory system as time progresses. All stores before a write barrier will
310 occur in the sequence _before_ all the stores after the write barrier.
312 [!] Note that write barriers should normally be paired with read or data
313 dependency barriers; see the "SMP barrier pairing" subsection.
316 (2) Data dependency barriers.
318 A data dependency barrier is a weaker form of read barrier. In the case
319 where two loads are performed such that the second depends on the result
320 of the first (eg: the first load retrieves the address to which the second
321 load will be directed), a data dependency barrier would be required to
322 make sure that the target of the second load is updated before the address
323 obtained by the first load is accessed.
325 A data dependency barrier is a partial ordering on interdependent loads
326 only; it is not required to have any effect on stores, independent loads
327 or overlapping loads.
329 As mentioned in (1), the other CPUs in the system can be viewed as
330 committing sequences of stores to the memory system that the CPU being
331 considered can then perceive. A data dependency barrier issued by the CPU
332 under consideration guarantees that for any load preceding it, if that
333 load touches one of a sequence of stores from another CPU, then by the
334 time the barrier completes, the effects of all the stores prior to that
touched by the load will be perceptible to any loads issued after the data
dependency barrier.
338 See the "Examples of memory barrier sequences" subsection for diagrams
339 showing the ordering constraints.
341 [!] Note that the first load really has to have a _data_ dependency and
342 not a control dependency. If the address for the second load is dependent
343 on the first load, but the dependency is through a conditional rather than
344 actually loading the address itself, then it's a _control_ dependency and
345 a full read barrier or better is required. See the "Control dependencies"
346 subsection for more information.
348 [!] Note that data dependency barriers should normally be paired with
349 write barriers; see the "SMP barrier pairing" subsection.
352 (3) Read (or load) memory barriers.
354 A read barrier is a data dependency barrier plus a guarantee that all the
355 LOAD operations specified before the barrier will appear to happen before
356 all the LOAD operations specified after the barrier with respect to the
357 other components of the system.
359 A read barrier is a partial ordering on loads only; it is not required to
360 have any effect on stores.
Read memory barriers imply data dependency barriers, and so can substitute
for them.
365 [!] Note that read barriers should normally be paired with write barriers;
366 see the "SMP barrier pairing" subsection.
369 (4) General memory barriers.
371 A general memory barrier gives a guarantee that all the LOAD and STORE
372 operations specified before the barrier will appear to happen before all
373 the LOAD and STORE operations specified after the barrier with respect to
374 the other components of the system.
376 A general memory barrier is a partial ordering over both loads and stores.
General memory barriers imply both read and write memory barriers, and so
can substitute for either. A sketch of their use appears after this list.
And a couple of implicit varieties:

 (5) LOCK operations.
386 This acts as a one-way permeable barrier. It guarantees that all memory
387 operations after the LOCK operation will appear to happen after the LOCK
388 operation with respect to the other components of the system.
Memory operations that occur before a LOCK operation may appear to happen
after it completes.
393 A LOCK operation should almost always be paired with an UNLOCK operation.
396 (6) UNLOCK operations.
398 This also acts as a one-way permeable barrier. It guarantees that all
399 memory operations before the UNLOCK operation will appear to happen before
400 the UNLOCK operation with respect to the other components of the system.
402 Memory operations that occur after an UNLOCK operation may appear to
403 happen before it completes.
405 LOCK and UNLOCK operations are guaranteed to appear with respect to each
406 other strictly in the order specified.
408 The use of LOCK and UNLOCK operations generally precludes the need for
409 other sorts of memory barrier (but note the exceptions mentioned in the
410 subsection "MMIO write barrier").
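As a sketch of general barriers in use (item (4) above), consider the classic
store-buffering pattern; x and y are hypothetical shared variables and r1 and
r2 are per-CPU results, all initially zero:

	CPU 1			CPU 2
	===============		===============
	ACCESS_ONCE(x) = 1;	ACCESS_ONCE(y) = 1;
	smp_mb();		smp_mb();
	r1 = ACCESS_ONCE(y);	r2 = ACCESS_ONCE(x);

With both general barriers in place, the outcome (r1 == 0 && r2 == 0) is
forbidden; weaken either barrier to a read or write barrier and it becomes
possible.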
413 Memory barriers are only required where there's a possibility of interaction
414 between two CPUs or between a CPU and a device. If it can be guaranteed that
415 there won't be any such interaction in any particular piece of code, then
416 memory barriers are unnecessary in that piece of code.
419 Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.
424 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
425 ----------------------------------------------
427 There are certain things that the Linux kernel memory barriers do not guarantee:
429 (*) There is no guarantee that any of the memory accesses specified before a
430 memory barrier will be _complete_ by the completion of a memory barrier
431 instruction; the barrier can be considered to draw a line in that CPU's
432 access queue that accesses of the appropriate type may not cross.
434 (*) There is no guarantee that issuing a memory barrier on one CPU will have
435 any direct effect on another CPU or any other hardware in the system. The
436 indirect effect will be the order in which the second CPU sees the effects
437 of the first CPU's accesses occur, but see the next point:
439 (*) There is no guarantee that a CPU will see the correct order of effects
440 from a second CPU's accesses, even _if_ the second CPU uses a memory
441 barrier, unless the first CPU _also_ uses a matching memory barrier (see
442 the subsection on "SMP Barrier Pairing").
444 (*) There is no guarantee that some intervening piece of off-the-CPU
445 hardware[*] will not reorder the memory accesses. CPU cache coherency
446 mechanisms should propagate the indirect effects of a memory barrier
447 between CPUs, but might not do so in order.
449 [*] For information on bus mastering DMA and coherency please read:
451 Documentation/PCI/pci.txt
452 Documentation/DMA-API-HOWTO.txt
453 Documentation/DMA-API.txt
456 DATA DEPENDENCY BARRIERS
457 ------------------------
459 The usage requirements of data dependency barriers are a little subtle, and
460 it's not always obvious that they're needed. To illustrate, consider the
461 following sequence of events:
	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B
			Q = ACCESS_ONCE(P);
			D = *Q;
472 There's a clear data dependency here, and it would seem that by the end of the
473 sequence, Q must be either &A or &B, and that:
475 (Q == &A) implies (D == 1)
476 (Q == &B) implies (D == 4)
478 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
479 leading to the following situation:
481 (Q == &B) and (D == 2) ????
483 Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).
487 To deal with this, a data dependency barrier or better must be inserted
488 between the address load and the data load:
	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B
			Q = ACCESS_ONCE(P);
			<data dependency barrier>
			D = *Q;
500 This enforces the occurrence of one of the two implications, and prevents the
501 third possibility from arising.
503 [!] Note that this extremely counterintuitive situation arises most easily on
504 machines with split caches, so that, for example, one cache bank processes
505 even-numbered cache lines and the other bank processes odd-numbered cache
506 lines. The pointer P might be stored in an odd-numbered cache line, and the
507 variable B might be stored in an even-numbered cache line. Then, if the
508 even-numbered bank of the reading CPU's cache is extremely busy while the
509 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
510 but the old value of the variable B (2).
513 Another example of where data dependency barriers might be required is where a
514 number is read from memory and then used to calculate the index for an array
	CPU 1		CPU 2
	===============	===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1
			Q = ACCESS_ONCE(P);
			<data dependency barrier>
			D = M[Q];
528 The data dependency barrier is very important to the RCU system,
529 for example. See rcu_assign_pointer() and rcu_dereference() in
530 include/linux/rcupdate.h. This permits the current target of an RCU'd
531 pointer to be replaced with a new modified target, without the replacement
532 target appearing to be incompletely initialised.
534 See also the subsection on "Cache Coherency" for a more thorough example.
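As an illustrative sketch, the update side of such an RCU'd pointer might look
like the following, reusing the hypothetical struct foo and gptr from the
earlier sketch (disposal of any previous target is omitted for brevity):

	void publish_foo(int new_a)
	{
		struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return;
		p->a = new_a;			/* initialise before publication */
		rcu_assign_pointer(gptr, p);	/* implies a write barrier */
	}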
CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly. Consider the following bit of
code:

	q = ACCESS_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		ACCESS_ONCE(b) = p;
	}
550 This will not have the desired effect because there is no actual data
551 dependency, but rather a control dependency that the CPU may short-circuit
552 by attempting to predict the outcome in advance, so that other CPUs see
553 the load from b as having happened before the load from a. In such a
case what's actually required is:

	q = ACCESS_ONCE(a);
	if (q) {
		<read barrier>
		ACCESS_ONCE(b) = p;
	}
562 However, stores are not speculated. This means that ordering -is- provided
563 in the following example:
	q = ACCESS_ONCE(a);
	if (ACCESS_ONCE(q)) {
		ACCESS_ONCE(b) = p;
	}
570 Please note that ACCESS_ONCE() is not optional! Without the ACCESS_ONCE(),
571 the compiler is within its rights to transform this example:
	q = a;
	if (q) {
		b = p;  /* BUG: Compiler can reorder!!! */
		do_something();
	} else {
		b = p;  /* BUG: Compiler can reorder!!! */
		do_something_else();
	}
into this, which of course defeats the ordering:

	b = p;
	q = a;
	if (q)
		do_something();
	else
		do_something_else();
591 Worse yet, if the compiler is able to prove (say) that the value of
592 variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler can reorder!!! */
	do_something();
600 The solution is again ACCESS_ONCE(), which preserves the ordering between
the load from variable 'a' and the store to variable 'b':

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = p;
		do_something_else();
	}
612 You could also use barrier() to prevent the compiler from moving
613 the stores to variable 'b', but barrier() would not prevent the
compiler from proving to itself that a==1 always, so ACCESS_ONCE()
is also needed.
It is important to note that control dependencies absolutely require
a conditional. For example, the following "optimized" version of
619 the above example breaks ordering:
	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something();
	} else {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something_else();
	}
631 It is of course legal for the prior load to be part of the conditional,
632 for example, as follows:
	if (ACCESS_ONCE(a) > 0) {
		ACCESS_ONCE(b) = q / 2;
		do_something();
	} else {
		ACCESS_ONCE(b) = q / 3;
		do_something_else();
	}
642 This will again ensure that the load from variable 'a' is ordered before the
643 stores to variable 'b'.
645 In addition, you need to be careful what you do with the local variable 'q',
646 otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

	q = ACCESS_ONCE(a);
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = p;
		do_something_else();
	}
658 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
659 equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;
	do_something_else();
666 This transformation loses the ordering between the load from variable 'a'
667 and the store to variable 'b'. If you are relying on this ordering, you
668 should do something like the following:
	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = p;
		do_something_else();
	}
680 Finally, control dependencies do -not- provide transitivity. This is
681 demonstrated by two related examples:
	CPU 0			CPU 1
	=====================	=====================
685 r1 = ACCESS_ONCE(x); r2 = ACCESS_ONCE(y);
686 if (r1 >= 0) if (r2 >= 0)
687 ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1;
689 assert(!(r1 == 1 && r2 == 1));
691 The above two-CPU example will never trigger the assert(). However,
692 if control dependencies guaranteed transitivity (which they do not),
693 then adding the following two CPUs would guarantee a related assertion:
	CPU 2			CPU 3
	=====================	=====================
697 ACCESS_ONCE(x) = 2; ACCESS_ONCE(y) = 2;
699 assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */
701 But because control dependencies do -not- provide transitivity, the
702 above assertion can fail after the combined four-CPU example completes.
703 If you need the four-CPU example to provide ordering, you will need
704 smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.
In summary:

(*) Control dependencies can order prior loads against later stores.
709 However, they do -not- guarantee any other sort of ordering:
710 Not prior loads against later loads, nor prior stores against
711 later anything. If you need these other forms of ordering,
use smp_rmb(), smp_wmb(), or, in the case of prior stores and
713 later loads, smp_mb().
715 (*) Control dependencies require at least one run-time conditional
716 between the prior load and the subsequent store. If the compiler
717 is able to optimize the conditional away, it will have also
718 optimized away the ordering. Careful use of ACCESS_ONCE() can
719 help to preserve the needed conditional.
721 (*) Control dependencies require that the compiler avoid reordering the
722 dependency into nonexistence. Careful use of ACCESS_ONCE() or
723 barrier() can help to preserve your control dependency. Please
724 see the Compiler Barrier section for more information.
726 (*) Control dependencies do -not- provide transitivity. If you
727 need transitivity, use smp_mb().
SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.
736 A write barrier should always be paired with a data dependency barrier or read
737 barrier, though a general barrier would also be viable. Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:
	CPU 1		CPU 2
	===============	===============
	ACCESS_ONCE(a) = 1;
	<write barrier>
	ACCESS_ONCE(b) = 2;	x = ACCESS_ONCE(b);
				<read barrier>
				y = ACCESS_ONCE(a);
Or:

	CPU 1		CPU 2
	===============	===============================
	a = 1;
	<write barrier>
	ACCESS_ONCE(b) = &a;	x = ACCESS_ONCE(b);
				<data dependency barrier>
				y = *x;
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
762 [!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:
	CPU 1                               CPU 2
	===================                 ===================
768 ACCESS_ONCE(a) = 1; }---- --->{ v = ACCESS_ONCE(c);
769 ACCESS_ONCE(b) = 2; } \ / { w = ACCESS_ONCE(d);
770 <write barrier> \ <read barrier>
771 ACCESS_ONCE(c) = 3; } / \ { x = ACCESS_ONCE(a);
772 ACCESS_ONCE(d) = 4; }---- --->{ y = ACCESS_ONCE(b);
775 EXAMPLES OF MEMORY BARRIER SEQUENCES
776 ------------------------------------
778 Firstly, write barriers act as partial orderings on store operations.
779 Consider the following sequence of events:
	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5
790 This sequence of events is committed to the memory coherence system in an order
791 that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}.
797 | |------>| C=3 | } /\
798 | | : +------+ }----- \ -----> Events perceptible to
799 | | : | A=1 | } \/ the rest of the system
801 | CPU 1 | : | B=2 | }
803 | | wwwwwwwwwwwwwwww } <--- At this point the write barrier
804 | | +------+ } requires all stores prior to the
805 | | : | E=5 | } barrier to be committed before
806 | | : +------+ } further stores may take place
811 | Sequence in which stores are committed to the
812 | memory system by CPU 1
816 Secondly, data dependency barriers act as partial orderings on data-dependent
817 loads. Consider the following sequence of events:
	CPU 1			CPU 2
	=======================	=======================
	{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)
829 Without intervention, CPU 2 may perceive the events on CPU 1 in some
830 effectively random order, despite the write barrier issued by CPU 1:
833 | | +------+ +-------+ | Sequence of update
834 | |------>| B=2 |----- --->| Y->8 | | of perception on
835 | | : +------+ \ +-------+ | CPU 2
836 | CPU 1 | : | A=1 | \ --->| C->&Y | V
837 | | +------+ | +-------+
838 | | wwwwwwwwwwwwwwww | : :
840 | | : | C=&B |--- | : : +-------+
841 | | : +------+ \ | +-------+ | |
842 | |------>| D=4 | ----------->| C->&B |------>| |
843 | | +------+ | +-------+ | |
844 +-------+ : : | : : | |
848 Apparently incorrect ---> | | B->7 |------>| |
849 perception of B (!) | +-------+ | |
852 The load of X holds ---> \ | X->9 |------>| |
853 up the maintenance \ +-------+ | |
854 of coherence of B ----->| B->2 | +-------+
859 In the above example, CPU 2 perceives that B is 7, despite the load of *C
860 (which would be B) coming after the LOAD of C.
862 If, however, a data dependency barrier were to be placed between the load of C
863 and the load of *C (ie: B) on CPU 2:
	CPU 1			CPU 2
	=======================	=======================
	{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)
876 then the following will occur:
879 | | +------+ +-------+
880 | |------>| B=2 |----- --->| Y->8 |
881 | | : +------+ \ +-------+
882 | CPU 1 | : | A=1 | \ --->| C->&Y |
883 | | +------+ | +-------+
884 | | wwwwwwwwwwwwwwww | : :
886 | | : | C=&B |--- | : : +-------+
887 | | : +------+ \ | +-------+ | |
888 | |------>| D=4 | ----------->| C->&B |------>| |
889 | | +------+ | +-------+ | |
890 +-------+ : : | : : | |
896 Makes sure all effects ---> \ ddddddddddddddddd | |
897 prior to the store of C \ +-------+ | |
898 are perceptible to ----->| B->2 |------>| |
899 subsequent loads +-------+ | |
903 And thirdly, a read barrier acts as a partial order on loads. Consider the
904 following sequence of events:
	CPU 1			CPU 2
	=======================	=======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A
915 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
916 some effectively random order, despite the write barrier issued by CPU 1:
919 | | +------+ +-------+
920 | |------>| A=1 |------ --->| A->0 |
921 | | +------+ \ +-------+
922 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
923 | | +------+ | +-------+
924 | |------>| B=2 |--- | : :
925 | | +------+ \ | : : +-------+
926 +-------+ : : \ | +-------+ | |
927 ---------->| B->2 |------>| |
928 | +-------+ | CPU 2 |
If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A
952 then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
956 | | +------+ +-------+
957 | |------>| A=1 |------ --->| A->0 |
958 | | +------+ \ +-------+
959 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
960 | | +------+ | +-------+
961 | |------>| B=2 |--- | : :
962 | | +------+ \ | : : +-------+
963 +-------+ : : \ | +-------+ | |
964 ---------->| B->2 |------>| |
965 | +-------+ | CPU 2 |
968 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
969 barrier causes all effects \ +-------+ | |
970 prior to the storage of B ---->| A->1 |------>| |
971 to be perceptible to CPU 2 +-------+ | |
975 To illustrate this more completely, consider what could happen if the code
976 contained a load of A either side of the read barrier:
	CPU 1			CPU 2
	=======================	=======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]
989 Even though the two loads of A both occur after the load of B, they may both
990 come up with different values:
993 | | +------+ +-------+
994 | |------>| A=1 |------ --->| A->0 |
995 | | +------+ \ +-------+
996 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
997 | | +------+ | +-------+
998 | |------>| B=2 |--- | : :
999 | | +------+ \ | : : +-------+
1000 +-------+ : : \ | +-------+ | |
1001 ---------->| B->2 |------>| |
1002 | +-------+ | CPU 2 |
1006 | | A->0 |------>| 1st |
1008 At this point the read ----> \ rrrrrrrrrrrrrrrrr | |
1009 barrier causes all effects \ +-------+ | |
1010 prior to the storage of B ---->| A->1 |------>| 2nd |
1011 to be perceptible to CPU 2 +-------+ | |
1015 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1016 before the read barrier completes anyway:
1019 | | +------+ +-------+
1020 | |------>| A=1 |------ --->| A->0 |
1021 | | +------+ \ +-------+
1022 | CPU 1 | wwwwwwwwwwwwwwww \ --->| B->9 |
1023 | | +------+ | +-------+
1024 | |------>| B=2 |--- | : :
1025 | | +------+ \ | : : +-------+
1026 +-------+ : : \ | +-------+ | |
1027 ---------->| B->2 |------>| |
1028 | +-------+ | CPU 2 |
1032 ---->| A->1 |------>| 1st |
1034 rrrrrrrrrrrrrrrrr | |
1036 | A->1 |------>| 2nd |
1041 The guarantee is that the second load will always come up with A == 1 if the
1042 load of B came up with B == 2. No such guarantee exists for the first load of
1043 A; that may come up with either A == 0 or A == 1.
1046 READ MEMORY BARRIERS VS LOAD SPECULATION
1047 ----------------------------------------
Many CPUs speculate with loads: that is, they see that they will need to load an
1050 item from memory, and they find a time where they're not using the bus for any
1051 other loads, and so do the load in advance - even though they haven't actually
1052 got to that point in the instruction execution flow yet. This permits the
1053 actual load instruction to potentially complete immediately because the CPU
1054 already has the value to hand.
1056 It may turn out that the CPU didn't actually need the value - perhaps because a
1057 branch circumvented the load - in which case it can discard the value or just
1058 cache it for later use.
Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A
1069 Which might appear as this:
1073 --->| B->2 |------>| |
1077 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1078 division speculates on the +-------+ ~ | |
1082 Once the divisions are complete --> : : ~-->| |
1083 the CPU can then perform the : : | |
1084 LOAD with immediate effect : : +-------+
Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A
1098 will force any value speculatively obtained to be reconsidered to an extent
1099 dependent on the type of barrier used. If there was no change made to the
1100 speculated memory location, then the speculated value will just be used:
1104 --->| B->2 |------>| |
1108 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1109 division speculates on the +-------+ ~ | |
1114 rrrrrrrrrrrrrrrr~ | |
1121 but if there was an update or an invalidation from another CPU pending, then
1122 the speculation will be cancelled and the value reloaded:
1126 --->| B->2 |------>| |
1130 The CPU being busy doing a ---> --->| A->0 |~~~~ | |
1131 division speculates on the +-------+ ~ | |
1136 rrrrrrrrrrrrrrrrr | |
1138 The speculation is discarded ---> --->| A->1 |------>| |
1139 and an updated value is +-------+ | |
1140 retrieved : : +-------+
TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
1147 always provided by real computer systems. The following example
1148 demonstrates transitivity (also called "cumulativity"):
	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X
1157 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1158 This indicates that CPU 2's load from X in some sense follows CPU 1's
1159 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1160 store to Y. The question is then "Can CPU 3's load from X return 0?"
1162 Because CPU 2's load from X in some sense came after CPU 1's store, it
1163 is natural to expect that CPU 3's load from X must therefore return 1.
1164 This expectation is an example of transitivity: if a load executing on
1165 CPU A follows a load from the same variable executing on CPU B, then
1166 CPU A's load must either return the same value that CPU B's load did,
1167 or must return some later value.
1169 In the Linux kernel, use of general memory barriers guarantees
1170 transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.
1174 However, transitivity is -not- guaranteed for read or write barriers.
1175 For example, suppose that CPU 2's general barrier in the above example
1176 is changed to a read barrier as shown below:
	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<read barrier>		<general barrier>
				LOAD Y			LOAD X
1185 This substitution destroys transitivity: in this example, it is perfectly
1186 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1187 and CPU 3's load from X to return 0.
1189 The key point is that although CPU 2's read barrier orders its pair
1190 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1191 this example runs on a system where CPUs 1 and 2 share a store buffer
1192 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1193 General barriers are therefore required to ensure that all CPUs agree
1194 on the combined order of CPU 1's and CPU 2's accesses.
To reiterate, if your code requires transitivity, use general barriers
throughout.
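For illustration, the three-CPU example above might be sketched in C as
follows, with x, y, r1, r2 and r3 being hypothetical shared variables,
all initially zero:

	void cpu1(void)
	{
		ACCESS_ONCE(x) = 1;
	}

	void cpu2(void)
	{
		r1 = ACCESS_ONCE(x);
		smp_mb();	/* general barrier: provides transitivity */
		r2 = ACCESS_ONCE(y);
	}

	void cpu3(void)
	{
		ACCESS_ONCE(y) = 1;
		smp_mb();
		r3 = ACCESS_ONCE(x);
	}

Given the general barriers, the outcome (r1 == 1 && r2 == 0 && r3 == 0) is
forbidden; replace cpu2()'s smp_mb() with smp_rmb() and it becomes possible.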
1200 ========================
1201 EXPLICIT KERNEL BARRIERS
1202 ========================
The Linux kernel has a variety of different barriers that act at different
levels:
1207 (*) Compiler barrier.
1209 (*) CPU memory barriers.
1211 (*) MMIO write barrier.
COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();
1222 This is a general barrier -- there are no read-read or write-write variants
of barrier(). However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().
1227 The barrier() function has the following effects:
1229 (*) Prevents the compiler from reordering accesses following the
1230 barrier() to precede any accesses preceding the barrier().
1231 One example use for this property is to ease communication between
1232 interrupt-handler code and the code that was interrupted.
1234 (*) Within a loop, forces the compiler to load the variables used
1235 in that loop's conditional on each pass through that loop.
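For example, a minimal sketch of a busy-wait loop that relies on the second
property, with flag being a hypothetical shared variable:

	while (!flag)
		barrier();	/* force flag to be re-loaded on each pass */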
1237 The ACCESS_ONCE() function can prevent any number of optimizations that,
1238 while perfectly safe in single-threaded code, can be fatal in concurrent
1239 code. Here are some examples of these sorts of optimizations:
1241 (*) The compiler is within its rights to merge successive loads from
the same variable. Such merging can cause the compiler to "optimize"
the following code:

	while (tmp = a)
		do_something_with(tmp);
1248 into the following code, which, although in some sense legitimate
for single-threaded code, is almost certainly not what the developer
intended:

	tmp = a;
	if (tmp != 0)
		do_something_with(tmp);
1256 Use ACCESS_ONCE() to prevent the compiler from doing this to you:
1258 while (tmp = ACCESS_ONCE(a))
1259 do_something_with(tmp);
1261 (*) The compiler is within its rights to reload a variable, for example,
1262 in cases where high register pressure prevents the compiler from
1263 keeping all data of interest in registers. The compiler might
1264 therefore optimize the variable 'tmp' out of our previous example:
	while (tmp = a)
		do_something_with(tmp);
1269 This could result in the following code, which is perfectly safe in
1270 single-threaded code, but can be fatal in concurrent code:
	while (a)
		do_something_with(a);
1275 For example, the optimized version of this code could result in
1276 passing a zero to do_something_with() in the case where the variable
1277 a was modified by some other CPU between the "while" statement and
1278 the call to do_something_with().
1280 Again, use ACCESS_ONCE() to prevent the compiler from doing this:
1282 while (tmp = ACCESS_ONCE(a))
1283 do_something_with(tmp);
1285 Note that if the compiler runs short of registers, it might save
1286 tmp onto the stack. The overhead of this saving and later restoring
1287 is why compilers reload variables. Doing so is perfectly safe for
1288 single-threaded code, so you need to tell the compiler about cases
1289 where it is not safe.
1291 (*) The compiler is within its rights to omit a load entirely if it knows
1292 what the value will be. For example, if the compiler can prove that
1293 the value of variable 'a' is always zero, it can optimize this code:
	while (tmp = a)
		do_something_with(tmp);

into this:

	do { } while (0);
1302 This transformation is a win for single-threaded code because it gets
1303 rid of a load and a branch. The problem is that the compiler will
1304 carry out its proof assuming that the current CPU is the only one
1305 updating variable 'a'. If variable 'a' is shared, then the compiler's
1306 proof will be erroneous. Use ACCESS_ONCE() to tell the compiler
1307 that it doesn't know as much as it thinks it does:
1309 while (tmp = ACCESS_ONCE(a))
1310 do_something_with(tmp);
1312 But please note that the compiler is also closely watching what you
1313 do with the value after the ACCESS_ONCE(). For example, suppose you
1314 do the following and MAX is a preprocessor macro with the value 1:
1316 while ((tmp = ACCESS_ONCE(a)) % MAX)
1317 do_something_with(tmp);
1319 Then the compiler knows that the result of the "%" operator applied
1320 to MAX will always be zero, again allowing the compiler to optimize
the code into near-nonexistence. (It will still load from the
variable 'a'.)
1324 (*) Similarly, the compiler is within its rights to omit a store entirely
1325 if it knows that the variable already has the value being stored.
1326 Again, the compiler assumes that the current CPU is the only one
1327 storing into the variable, which can cause the compiler to do the
wrong thing for shared variables. For example, suppose you have
the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;
1335 The compiler sees that the value of variable 'a' is already zero, so
1336 it might well omit the second store. This would come as a fatal
surprise if some other CPU might have stored to variable 'a' in the
meantime.
Use ACCESS_ONCE() to prevent the compiler from making this sort of
omission:

	ACCESS_ONCE(a) = 0;
	/* Code that does not store to variable a. */
	ACCESS_ONCE(a) = 0;
1347 (*) The compiler is within its rights to reorder memory accesses unless
1348 you tell it not to. For example, consider the following interaction
1349 between process-level code and an interrupt handler:
	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}
	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}
There is nothing to prevent the compiler from transforming
process_level() to the following; in fact, this might well be a
win for single-threaded code:
	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}
If the interrupt occurs between these two statements, then
1374 interrupt_handler() might be passed a garbled msg. Use ACCESS_ONCE()
1375 to prevent this as follows:
	void process_level(void)
	{
		ACCESS_ONCE(msg) = get_message();
		ACCESS_ONCE(flag) = true;
	}
	void interrupt_handler(void)
	{
		if (ACCESS_ONCE(flag))
			process_message(ACCESS_ONCE(msg));
	}
1389 Note that the ACCESS_ONCE() wrappers in interrupt_handler()
1390 are needed if this interrupt handler can itself be interrupted
1391 by something that also accesses 'flag' and 'msg', for example,
1392 a nested interrupt or an NMI. Otherwise, ACCESS_ONCE() is not
1393 needed in interrupt_handler() other than for documentation purposes.
(Note also that nested interrupts do not typically occur in modern
Linux kernels; in fact, if an interrupt handler returns with
interrupts enabled, you will get a WARN_ONCE() splat.)
1398 You should assume that the compiler can move ACCESS_ONCE() past
1399 code not containing ACCESS_ONCE(), barrier(), or similar primitives.
1401 This effect could also be achieved using barrier(), but ACCESS_ONCE()
1402 is more selective: With ACCESS_ONCE(), the compiler need only forget
1403 the contents of the indicated memory locations, while with barrier()
1404 the compiler must discard the value of all memory locations that
it has currently cached in any machine registers. Of course,
1406 the compiler must also respect the order in which the ACCESS_ONCE()s
1407 occur, though the CPU of course need not do so.
1409 (*) The compiler is within its rights to invent stores to a variable,
as in the following example:

	if (a)
		b = a;
	else
		b = 42;
The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;
1423 In single-threaded code, this is not only safe, but also saves
1424 a branch. Unfortunately, in concurrent code, this optimization
1425 could cause some other CPU to see a spurious value of 42 -- even
1426 if variable 'a' was never zero -- when loading variable 'b'.
1427 Use ACCESS_ONCE() to prevent this as follows:
	if (a)
		ACCESS_ONCE(b) = a;
	else
		ACCESS_ONCE(b) = 42;
1434 The compiler can also invent loads. These are usually less
1435 damaging, but they can result in cache-line bouncing and thus in
poor performance and scalability. Use ACCESS_ONCE() to prevent
invented loads.
1439 (*) For aligned memory locations whose size allows them to be accessed
with a single memory-reference instruction, ACCESS_ONCE() prevents "load tearing"
1441 and "store tearing," in which a single large access is replaced by
1442 multiple smaller accesses. For example, given an architecture having
1443 16-bit store instructions with 7-bit immediate fields, the compiler
1444 might be tempted to use two 16-bit store-immediate instructions to
implement the following 32-bit store:

	p = 0x00010002;
1449 Please note that GCC really does use this sort of optimization,
1450 which is not surprising given that it would likely take more
1451 than two instructions to build the constant and then store it.
1452 This optimization can therefore be a win in single-threaded code.
1453 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1454 this optimization in a volatile store. In the absence of such bugs,
1455 use of ACCESS_ONCE() prevents store tearing in the following example:
1457 ACCESS_ONCE(p) = 0x00010002;
Use of packed structures can also result in load and store tearing,
as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;
1474 Because there are no ACCESS_ONCE() wrappers and no volatile markings,
1475 the compiler would be well within its rights to implement these three
1476 assignment statements as a pair of 32-bit loads followed by a pair
1477 of 32-bit stores. This would result in load tearing on 'foo1.b'
and store tearing on 'foo2.b'. ACCESS_ONCE() again prevents tearing
in this example:

	foo2.a = foo1.a;
	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
	foo2.c = foo1.c;
1485 All that aside, it is never necessary to use ACCESS_ONCE() on a variable
1486 that has been marked volatile. For example, because 'jiffies' is marked
1487 volatile, it is never necessary to say ACCESS_ONCE(jiffies). The reason
1488 for this is that ACCESS_ONCE() is implemented as a volatile cast, which
1489 has no effect when its argument is already marked volatile.
1491 Please note that these compiler barriers have no direct effect on the CPU,
1492 which may then reorder things however it wishes.
CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:
1500 TYPE MANDATORY SMP CONDITIONAL
1501 =============== ======================= ===========================
1502 GENERAL mb() smp_mb()
1503 WRITE wmb() smp_wmb()
1504 READ rmb() smp_rmb()
1505 DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends()
1508 All memory barriers except the data dependency barriers imply a compiler
1509 barrier. Data dependencies do not impose any additional compiler ordering.
1511 Aside: In the case of data dependencies, the compiler would be expected to
1512 issue the loads in the correct order (eg. `a[b]` would have to load the value
1513 of b before loading a[b]), however there is no guarantee in the C specification
1514 that the compiler may not speculate the value of b (eg. is equal to 1) and load
1515 a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ). There is also the
1516 problem of a compiler reloading b after having loaded a[b], thus having a newer
1517 copy of b than a[b]. A consensus has not yet been reached about these problems,
1518 however the ACCESS_ONCE macro is a good place to start looking.
1520 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1521 systems because it is assumed that a CPU will appear to be self-consistent,
1522 and will order overlapping accesses correctly with respect to itself.
1524 [!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
1528 Mandatory barriers should not be used to control SMP effects, since mandatory
1529 barriers unnecessarily impose overhead on UP systems. They may, however, be
1530 used to control MMIO effects on accesses through relaxed memory I/O windows.
1531 These are required even on non-SMP systems as they affect the order in which
1532 memory operations appear to a device by prohibiting both the compiler and the
1533 CPU from reordering them.
1536 There are some more advanced barrier functions:
1538 (*) set_mb(var, value)
This assigns the value to the variable and then inserts a full memory
barrier after it. It isn't guaranteed to insert anything more than a
compiler barrier in a UP compilation.
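In other words, set_mb() can be thought of as roughly equivalent to the
following sketch (the real implementation is architecture-specific):

	var = value;
	smp_mb();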
1545 (*) smp_mb__before_atomic_dec();
1546 (*) smp_mb__after_atomic_dec();
1547 (*) smp_mb__before_atomic_inc();
1548 (*) smp_mb__after_atomic_inc();
1550 These are for use with atomic add, subtract, increment and decrement
1551 functions that don't return a value, especially when used for reference
1552 counting. These functions do not imply memory barriers.
1554 As an example, consider a piece of code that marks an object as being dead
1555 and then decrements the object's reference count:
	obj->dead = 1;
	smp_mb__before_atomic_dec();
1559 atomic_dec(&obj->ref_count);
1561 This makes sure that the death mark on the object is perceived to be set
1562 *before* the reference counter is decremented.
1564 See Documentation/atomic_ops.txt for more information. See the "Atomic
1565 operations" subsection for information on where to use these.
1568 (*) smp_mb__before_clear_bit(void);
1569 (*) smp_mb__after_clear_bit(void);
1571 These are for use similar to the atomic inc/dec barriers. These are
1572 typically used for bitwise unlocking operations, so care must be taken as
1573 there are no implicit memory barriers here either.
1575 Consider implementing an unlock operation of some nature by clearing a
1576 locking bit. The clear_bit() would then need to be barriered like this:
	smp_mb__before_clear_bit();
	clear_bit( ... );
1581 This prevents memory operations before the clear leaking to after it. See
the subsection on "Locking Functions" with reference to UNLOCK operation
implications.
1585 See Documentation/atomic_ops.txt for more information. See the "Atomic
1586 operations" subsection for information on where to use these.
MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();
1597 This is a variation on the mandatory write barrier that causes writes to weakly
1598 ordered I/O regions to be partially ordered. Its effects may go beyond the
1599 CPU->Hardware interface and actually affect the hardware at some level.
1601 See the subsection "Locks vs I/O accesses" for more information.
1604 ===============================
1605 IMPLICIT KERNEL MEMORY BARRIERS
1606 ===============================
Some of the other functions in the Linux kernel imply memory barriers, amongst
1609 which are locking and scheduling functions.
1611 This specification is a _minimum_ guarantee; any particular architecture may
1612 provide more substantial guarantees, but these may not be relied upon outside
1613 of arch specific code.
LOCKING FUNCTIONS
-----------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU
1628 In all cases there are variants on "LOCK" operations and "UNLOCK" operations
1629 for each construct. These operations all imply certain barriers:
1631 (1) LOCK operation implication:
1633 Memory operations issued after the LOCK will be completed after the LOCK
1634 operation has completed.
1636 Memory operations issued before the LOCK may be completed after the LOCK
1637 operation has completed.
1639 (2) UNLOCK operation implication:
1641 Memory operations issued before the UNLOCK will be completed before the
1642 UNLOCK operation has completed.
1644 Memory operations issued after the UNLOCK may be completed before the
1645 UNLOCK operation has completed.
1647 (3) LOCK vs LOCK implication:
1649 All LOCK operations issued before another LOCK operation will be completed
1650 before that LOCK operation.
1652 (4) LOCK vs UNLOCK implication:
1654 All LOCK operations issued before an UNLOCK operation will be completed
1655 before the UNLOCK operation.
1657 All UNLOCK operations issued before a LOCK operation will be completed
1658 before the LOCK operation.
1660 (5) Failed conditional LOCK implication:
1662 Certain variants of the LOCK operation may fail, either due to being
1663 unable to get the lock immediately, or due to receiving an unblocked
1664 signal whilst asleep waiting for the lock to become available. Failed
1665 locks do not imply any sort of barrier.
1667 Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
1668 equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
1670 [!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
1671 barriers is that the effects of instructions outside of a critical section
1672 may seep into the inside of the critical section.
A LOCK followed by an UNLOCK may not be assumed to be a full memory barrier
1675 because it is possible for an access preceding the LOCK to happen after the
1676 LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
1677 two accesses can themselves then cross:
	*A = a;
	LOCK
	UNLOCK
	*B = b;

may occur as:

	LOCK, STORE *B, STORE *A, UNLOCK
1688 Locks and semaphores may not provide any guarantee of ordering on UP compiled
1689 systems, and so cannot be counted on in such a situation to actually achieve
1690 anything at all - especially with respect to I/O accesses - unless combined
1691 with interrupt disabling operations.
1693 See also the section on "Inter-CPU locking barrier effects".
As an example, consider the following:

	*A = a;
	*B = b;
	LOCK
	*C = c;
	*D = d;
	UNLOCK
	*E = e;
	*F = f;
1707 The following sequence of events is acceptable:
1709 LOCK, {*F,*A}, *E, {*C,*D}, *B, UNLOCK
1711 [+] Note that {*F,*A} indicates a combined access.
1713 But none of the following are:
1715 {*F,*A}, *B, LOCK, *C, *D, UNLOCK, *E
1716 *A, *B, *C, LOCK, *D, UNLOCK, *E, *F
1717 *A, *B, LOCK, *C, UNLOCK, *D, *E, *F
1718 *B, LOCK, *C, *D, UNLOCK, {*F,*A}, *E
1722 INTERRUPT DISABLING FUNCTIONS
1723 -----------------------------
1725 Functions that disable interrupts (LOCK equivalent) and enable interrupts
(UNLOCK equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
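For instance, in a minimal sketch like the following, with shared_data and
ready being hypothetical shared variables, an explicit barrier is still
needed to order the stores as seen by other CPUs:

	unsigned long flags;

	local_irq_save(flags);		/* LOCK equivalent: compiler barrier only */
	shared_data = 1;
	smp_wmb();			/* ordering vs. other CPUs needs a real barrier */
	ACCESS_ONCE(ready) = 1;
	local_irq_restore(flags);	/* UNLOCK equivalent: compiler barrier only */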
1731 SLEEP AND WAKE-UP FUNCTIONS
1732 ---------------------------
1734 Sleeping and waking on an event flagged in global data can be viewed as an
1735 interaction between two pieces of data: the task state of the task waiting for
1736 the event and the global data used to indicate the event. To make sure that
1737 these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.
1741 Firstly, the sleeper normally follows something like this sequence of events:
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
1750 A general memory barrier is interpolated automatically by set_current_state()
1751 after it has altered the task state:
	CPU 1
	===============================
	set_current_state();
	  set_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated
1761 set_current_state() may be wrapped by:
	prepare_to_wait();
	prepare_to_wait_exclusive();
1766 which therefore also imply a general memory barrier after setting the state.
1767 The whole sequence above is available in various canned forms, all of which
1768 interpolate the memory barrier in the right place:
	wait_event();
	wait_event_interruptible();
1772 wait_event_interruptible_exclusive();
1773 wait_event_interruptible_timeout();
1774 wait_event_killable();
1775 wait_event_timeout();
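For example, a minimal usage sketch, reusing event_wait_queue and
event_indicated from the surrounding examples:

	wait_event(event_wait_queue, event_indicated);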
1780 Secondly, code that performs a wake up normally follows something like this:
1782 event_indicated = 1;
1783 wake_up(&event_wait_queue);
1787 event_indicated = 1;
1788 wake_up_process(event_daemon);
1790 A write memory barrier is implied by wake_up() and co. if and only if they wake
1791 something up. The barrier occurs before the task state is cleared, and so sits
1792 between the STORE to indicate the event and the STORE to set TASK_RUNNING:
	CPU 1				CPU 2
	===============================	===============================
1796 set_current_state(); STORE event_indicated
1797 set_mb(); wake_up();
1798 STORE current->state <write barrier>
1799 <general barrier> STORE current->state
1800 LOAD event_indicated
1802 The available waker functions include:
1808 wake_up_interruptible();
1809 wake_up_interruptible_all();
1810 wake_up_interruptible_nr();
1811 wake_up_interruptible_poll();
1812 wake_up_interruptible_sync();
1813 wake_up_interruptible_sync_poll();
1815 wake_up_locked_poll();
1821 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
1822 order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state(). For instance, if the
sleeper does:
1826 set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
1829 __set_current_state(TASK_RUNNING);
1830 do_something(my_data);
and the waker does:

	my_data = new_value;
	event_indicated = 1;
1836 wake_up(&event_wait_queue);
1838 there's no guarantee that the change to event_indicated will be perceived by
1839 the sleeper as coming after the change to my_data. In such a circumstance, the
1840 code on both sides must interpolate its own memory barriers between the
1841 separate data accesses. Thus the above sleeper ought to do:
1843 set_current_state(TASK_INTERRUPTIBLE);
1844 if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}
1849 and the waker should do:
	my_data = new_value;
	smp_wmb();
	event_indicated = 1;
1854 wake_up(&event_wait_queue);
1857 MISCELLANEOUS FUNCTIONS
1858 -----------------------
1860 Other functions that imply barriers:
1862 (*) schedule() and similar imply full memory barriers.
1865 =================================
1866 INTER-CPU LOCKING BARRIER EFFECTS
1867 =================================
1869 On SMP systems locking primitives give a more substantial form of barrier: one
1870 that does affect memory access ordering on other CPUs, within the context of
1871 conflict on any particular lock.
1874 LOCKS VS MEMORY ACCESSES
1875 ------------------------
1877 Consider the following: the system has a pair of spinlocks (M) and (Q), and
1878 three CPUs; then should the following sequence of events occur:
	CPU 1				CPU 2
	===============================	===============================
	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
	LOCK M				LOCK Q
	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
	UNLOCK M			UNLOCK Q
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;
1889 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
1890 through *H occur in, other than the constraints imposed by the separate locks
1891 on the separate CPUs. It might, for example, see:
1893 *E, LOCK M, LOCK Q, *G, *C, *F, *A, *B, UNLOCK Q, *D, *H, UNLOCK M
1895 But it won't see any of:
1897 *B, *C or *D preceding LOCK M
1898 *A, *B or *C following UNLOCK M
1899 *F, *G or *H preceding LOCK Q
1900 *E, *F or *G following UNLOCK Q
1903 However, if the following occurs:
	CPU 1				CPU 2
	===============================	===============================
	ACCESS_ONCE(*A) = a;
	LOCK M		     [1]
	ACCESS_ONCE(*B) = b;
	ACCESS_ONCE(*C) = c;
	UNLOCK M	     [1]
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
					LOCK M		     [2]
					ACCESS_ONCE(*F) = f;
					ACCESS_ONCE(*G) = g;
					UNLOCK M	     [2]
					ACCESS_ONCE(*H) = h;
CPU 3 might see:

	*E, LOCK M [1], *C, *B, *A, UNLOCK M [1],
1922 LOCK M [2], *H, *F, *G, UNLOCK M [2], *D
1924 But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
1926 *B, *C, *D, *F, *G or *H preceding LOCK M [1]
1927 *A, *B or *C following UNLOCK M [1]
1928 *F, *G or *H preceding LOCK M [2]
1929 *A, *B, *C, *E, *F or *G following UNLOCK M [2]
LOCKS VS I/O ACCESSES
---------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.
What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.
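In driver code this is simply a matter of making mmiowb() the last MMIO
operation inside the critical section.  A minimal sketch, assuming a
hypothetical device with ADDR and DATA register offsets reachable through
dev->regs (the names are illustrative, not a real driver):

	spin_lock(&dev->lock);
	writel(0, dev->regs + ADDR);	/* hypothetical register offsets */
	writel(1, dev->regs + DATA);
	mmiowb();			/* order the MMIO stores before the unlock */
	spin_unlock(&dev->lock);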
Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);

See Documentation/DocBook/deviceiobook.tmpl for more information.
=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.
INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.
Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};
To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.
In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.
Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.
Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
					--- OOPS ---
This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task
In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.
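Expressed as C, the barriered wakeup path might look like the following
sketch.  This is a simplification for illustration only - not the actual
rwsem implementation - and the function name is made up:

	/* Illustrative sketch only - not the real rwsem code. */
	static void wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct task_struct *tsk = waiter->task;	/* LOAD waiter->task */

		smp_mb();		/* the reads of *waiter must complete
					 * before the store below lets the
					 * waiter's stack frame be reused */
		waiter->task = NULL;	/* STORE waiter->task */
		wake_up_process(tsk);	/* CALL wakeup */
		put_task_struct(tsk);	/* RELEASE task */
	}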
ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.
Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds (returns 1) */
	atomic_add_unless();		atomic_long_add_unless();
These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.
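For example, object destruction via a reference count relies on those implied
barriers.  A minimal sketch (the structure and function names here are
hypothetical, assuming the usual kernel headers):

	struct foo {
		atomic_t refcount;
		/* ... object payload ... */
	};

	static void foo_put(struct foo *f)
	{
		/*
		 * The smp_mb() implied on each side of atomic_dec_and_test()
		 * ensures that all of this CPU's prior accesses to *f happen
		 * before the count can reach zero, so the object cannot be
		 * freed whilst such accesses are still outstanding.
		 */
		if (atomic_dec_and_test(&f->refcount))
			kfree(f);
	}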
The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as UNLOCK-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_clear_bit() for instance).
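For instance, releasing a hand-rolled bit lock with clear_bit() needs the
explicit barrier so the critical section cannot leak past the releasing store.
A sketch (MY_LOCK_BIT and the flags word are hypothetical):

	/* Hypothetical: bit MY_LOCK_BIT of 'flags' is being used as a lock. */
	smp_mb__before_clear_bit();	/* order critical-section accesses... */
	clear_bit(MY_LOCK_BIT, &flags);	/* ...before the releasing store */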
The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic_dec() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.
If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.
The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement LOCK-class and UNLOCK-class operations.  These should be used
in preference to other operations when implementing locking primitives, because
their implementations can be optimised on many architectures.
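A minimal bit lock built on these primitives might look like the sketch below.
This is illustrative only; real code should normally use the kernel's existing
bit_spin_lock() helpers rather than rolling its own:

	static inline void my_bit_lock(int bit, unsigned long *word)
	{
		/* test_and_set_bit_lock() implies LOCK-class semantics */
		while (test_and_set_bit_lock(bit, word))
			cpu_relax();	/* spin until the bit is clear */
	}

	static inline void my_bit_unlock(int bit, unsigned long *word)
	{
		/* clear_bit_unlock() implies UNLOCK-class semantics */
		clear_bit_unlock(bit, word);
	}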
[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.
ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.
Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.
INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.
However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.
A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.
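A sketch of such an interrupt-disabling lock shared between the driver core
and its interrupt handler follows; the device structure, its fields and the
register offsets are all hypothetical:

	static void dev_write_reg(struct my_dev *dev, u16 addr, u16 data)
	{
		unsigned long flags;

		/* excludes both local interrupts and, via the lock, the
		 * interrupt handler running on any other CPU */
		spin_lock_irqsave(&dev->lock, flags);
		writew(addr, dev->regs + MY_ADDR_REG);	/* hypothetical offsets */
		writew(data, dev->regs + MY_DATA_REG);
		spin_unlock_irqrestore(&dev->lock, flags);
	}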
==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.
     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.
 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.
     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.
 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.

 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.
This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.
A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.

Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.
============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):
	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :
Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.
What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.
CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  :  +--->| Cache A |<------->|        |
	|        |  :  |    +---------+         |        |
	|  CPU 1 |<---+                         |        |
	|        |  :  |    +---------+         |        |
	+--------+  :  +--->| Cache B |<------->|        |
	            :       +---------+         |        |
	            :                           | Memory |
	            :       +---------+         | System |
	+--------+  :  +--->| Cache C |<------->|        |
	|        |  :  |    +---------+         |        |
	|  CPU 2 |<---+                         |        |
	|        |  :  |    +---------+         |        |
	+--------+  :  +--->| Cache D |<------->|        |
	            :       +---------+         |        |
	            :                           +--------+
	            :
Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.
Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively
The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.
To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache
This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for coordination in the absence of memory barriers.
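In code, the consuming side of such a pointer publication therefore needs the
barrier between the pointer load and the dereference.  A minimal sketch (p, q
and the structure are illustrative; in real kernel code rcu_dereference() is
the usual way to obtain this guarantee):

	struct thing *q;
	int d;

	q = ACCESS_ONCE(p);		/* load the published pointer */
	smp_read_barrier_depends();	/* needed on Alpha; a no-op elsewhere */
	d = q->value;			/* dependent load now sees the new data */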
CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).
In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until at such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.
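Drivers normally get these flushes and invalidations for free by using the
streaming DMA mapping API rather than managing the cache directly.  A minimal
sketch of handing a buffer to a device for reading (error handling omitted;
dev, buf and len are assumed to exist):

	dma_addr_t handle;

	/* flushes any dirty cachelines covering buf on non-coherent systems */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... program the device to DMA from 'handle' and wait for it ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);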
CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = ACCESS_ONCE(*A);
	ACCESS_ONCE(*B) = b;
	c = ACCESS_ONCE(*C);
	d = ACCESS_ONCE(*D);
	ACCESS_ONCE(*E) = e;

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)
However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;
	X = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y
The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
in the above example, as there are architectures where a given CPU might
interchange successive loads to the same location.  On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.
The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.  For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appear outside of the CPU.
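A common place this bites is a busy-wait loop: without ACCESS_ONCE() the
compiler is entitled to hoist the load out of the loop and spin forever on a
stale value.  A sketch ('flag' is a shared variable assumed for illustration):

	/* without ACCESS_ONCE() the compiler may load 'flag' only once */
	while (!ACCESS_ONCE(flag))
		cpu_relax();	/* a fresh load is forced on each iteration */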
AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.
============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
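The core of that technique is pairing a write barrier in the producer with a
read barrier in the consumer, so that an item's data is always visible before
the index that publishes it.  A much-reduced sketch of the pattern (buffer,
head, tail and SIZE are illustrative; the file above covers the additional
ordering the full version needs):

	/* producer: store the item, then publish the new head index */
	buffer[head & (SIZE - 1)] = item;
	smp_wmb();			/* commit the data before the index */
	ACCESS_ONCE(head) = head + 1;

	/* consumer: read the head index, then the item it covers */
	while (ACCESS_ONCE(head) == tail)
		cpu_relax();		/* nothing to consume yet */
	smp_rmb();			/* read the index before the data */
	item = buffer[tail & (SIZE - 1)];
	ACCESS_ONCE(tail) = tail + 1;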
==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering
AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access