============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
This document describes the DMA API.  For a more gentle introduction
of the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.
Part I - dma_addr_t
-------------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.
Part Ia - Using large DMA-coherent buffers
------------------------------------------
::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)
Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.
It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.
Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).
The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).
::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)
Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
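As a worked example, a driver might allocate a descriptor ring at probe
time and release it at remove time.  This is a minimal sketch, not taken
from a real driver: the ioaddr pointer and the RING_BASE_LO/HI register
offsets are hypothetical::

    dma_addr_t ring_dma;
    void *ring;

    /* GFP_KERNEL is fine here because probe may sleep. */
    ring = dma_alloc_coherent(dev, PAGE_SIZE, &ring_dma, GFP_KERNEL);
    if (!ring)
            return -ENOMEM;

    /* Tell the (hypothetical) device where the ring lives. */
    writel(lower_32_bits(ring_dma), ioaddr + RING_BASE_LO);
    writel(upper_32_bits(ring_dma), ioaddr + RING_BASE_HI);

    /* ... later, at remove time, with IRQs enabled: */
    dma_free_coherent(dev, PAGE_SIZE, ring, ring_dma);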
Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>
Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages().  Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.
::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);
dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.
::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.
::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);
This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.
::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.
::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
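Taken together, a typical pool lifecycle looks like the following sketch.
The 32-byte descriptor size, 16-byte alignment and 4096-byte boundary are
hypothetical hardware requirements::

    struct dma_pool *pool;
    dma_addr_t desc_dma;
    void *desc;

    pool = dma_pool_create("mydev_desc", dev, 32, 16, 4096);
    if (!pool)
            return -ENOMEM;

    desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
    if (!desc) {
            dma_pool_destroy(pool);
            return -ENOMEM;
    }

    /* ... hand desc_dma to the device, touch desc from the CPU ... */

    dma_pool_free(pool, desc, desc_dma);
    dma_pool_destroy(pool);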
Part Ic - DMA addressing limitations
------------------------------------
::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.
::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.
::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.
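For example, a driver for hardware that can generate 64-bit addresses
would typically try the full mask at probe time and fall back to 32 bits
(a minimal sketch)::

    /* Prefer 64-bit DMA, fall back to 32-bit if the platform
     * cannot provide it. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
            if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
                    return -EIO;    /* no usable DMA addressing */
    }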
::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
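A driver that supports both a compact 32-bit and a wider 64-bit
descriptor format might use it as sketched below; the priv structure
and its use_64bit_desc flag are hypothetical::

    if (dma_get_required_mask(dev) > DMA_BIT_MASK(32) &&
        !dma_set_mask(dev, DMA_BIT_MASK(64)))
            priv->use_64bit_desc = true;    /* hypothetical flag */
    else if (dma_set_mask(dev, DMA_BIT_MASK(32)))
            return -EIO;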
Part Id - Streaming DMA mappings
--------------------------------
::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.
The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
exact definition of direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================
Not all memory regions in a machine can be mapped by this API.
Further, contiguous kernel virtual space may not be contiguous in
physical memory.  Since this API does not provide any scatter/gather
capability, it will fail if the user tries to map a non-physically
contiguous piece of memory.  For this reason, memory to be mapped by
this API should be obtained from sources which guarantee it to be
physically contiguous (like kmalloc).
Further, the DMA address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the DMA address of
the memory ANDed with the dma_mask is still equal to the DMA
address, then the device can perform DMA to the memory).  To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the DMA address range of the allocation (e.g., on x86, GFP_DMA
guarantees that the allocation is within the first 16MB of available
DMA addresses, as required by ISA devices).
Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O DMA address to a physical memory address).  However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.
Warning: Memory coherency operates at a granularity called the cache
line width.  In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line).  Since the cache line size
may not be known at compile time, the API will not enforce this
requirement.  Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).
DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device.  Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device.  If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device.  This memory should
be treated as read-only by the driver.  If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it.  Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).
::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.
::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.
::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources, mapping of RAM is not permitted.
::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
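Putting these pieces together, a typical transmit path maps a buffer,
checks the result, and unmaps once the device is done with it.  This is
a minimal sketch in the style of a network driver; the skb and the
completion point are assumptions of the example::

    dma_addr_t dma;

    dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma)) {
            /* Could not map: drop or defer this buffer. */
            return -ENOMEM;
    }

    /* ... hand dma to the device and start the transfer ... */

    /* Once the device signals completion, unmap with the same
     * device, size and direction used for the mapping. */
    dma_unmap_single(dev, dma, skb->len, DMA_TO_DEVICE);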
::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.
As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.
With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
            hw_address[i] = sg_dma_address(sg);
            hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.
The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.
::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
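Continuing the scatterlist example above: even if dma_map_sg() merged
entries and returned a smaller count, the unmap still takes the original
nents::

    /* nents as passed to dma_map_sg(), not the returned count */
    dma_unmap_sg(dev, sglist, nents, direction);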
::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.
Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
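As an illustration, a receive buffer that stays mapped while the device
reuses it must be synced around each CPU access.  This is a sketch; buf,
buf_dma, size and the process_rx_data() helper are hypothetical::

    /* Buffer was mapped once with
     * dma_map_single(dev, buf, size, DMA_FROM_DEVICE)
     * and remains mapped across transfers. */
    dma_sync_single_for_cpu(dev, buf_dma, size, DMA_FROM_DEVICE);

    /* The CPU may now safely read what the device wrote. */
    process_rx_data(buf, size);

    /* Hand the buffer back to the device for the next transfer. */
    dma_sync_single_for_device(dev, buf_dma, size, DMA_FROM_DEVICE);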
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)
The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.
As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
from/to a device::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

            unsigned long attr = 0;
            attr |= DMA_ATTR_FOO;
            ...
            n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
            ...
Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                                 size_t size, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
            ...
            if (attrs & DMA_ATTR_FOO)
                    /* twizzle the frobnozzle */
            ...
    }
Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.
::

    void *
    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    gfp_t flag, unsigned long attrs)

Identical to dma_alloc_coherent() except that when the
DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
platform will choose to return either consistent or non-consistent memory
as it sees fit.  By using this API, you are guaranteeing to the platform
that you have all the correct and necessary sync points for this memory
in the driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.
::

    void
    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
                   dma_addr_t dma_handle, unsigned long attrs)

Free memory allocated by dma_alloc_attrs().  All common parameters
must be identical to those otherwise passed to dma_free_coherent(),
and the attrs argument must be identical to the attrs passed to
dma_alloc_attrs().
::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Note: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call.  It will also always be a power
of two for easy alignment.
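For instance, when carving a larger buffer into pieces that will be
mapped or synced independently, pad each piece to this width (a minimal
sketch; elem_size is hypothetical)::

    /* Round up so independently-synced elements never share
     * a cache line. */
    size_t stride = ALIGN(elem_size, dma_get_cache_alignment());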
::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by dma_alloc_attrs() with
the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
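A driver using possibly non-consistent memory would bracket each
hand-off to the device with a sync.  A sketch, assuming the buffer is
written by the CPU and then read by the device::

    dma_addr_t dma;
    void *buf;

    buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL,
                          DMA_ATTR_NON_CONSISTENT);
    if (!buf)
            return -ENOMEM;

    /* CPU fills the buffer, then pushes it towards the device.
     * If the platform returned consistent memory, this sync is a nop. */
    memset(buf, 0, size);
    dma_cache_sync(dev, buf, size, DMA_TO_DEVICE);

    /* ... device reads the buffer ... */

    dma_free_attrs(dev, size, buf, dma, DMA_ATTR_NON_CONSISTENT);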
::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size,
                                int flags)

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.
phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).
flags can be ORed together and are:

- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
  Do not allow dma_alloc_coherent() to fall back to system memory when
  it's out of memory in the declared region.
As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
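For example, a device with 64KB of dedicated SRAM might declare it at
probe time.  The addresses below are hypothetical, and the sketch
assumes the variant of this call that returns 0 on success::

    /* Hypothetical: CPU sees the SRAM at physical 0x88000000;
     * the device addresses it starting at 0. */
    int ret;

    ret = dma_declare_coherent_memory(dev, 0x88000000, 0,
                                      SZ_64K, DMA_MEMORY_EXCLUSIVE);
    if (ret)
            return ret;

    /* dma_alloc_coherent() for this device now hands out SRAM. */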
::

    void
    dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.
::

    void *
    dma_mark_declared_memory_occupied(struct device *dev,
                                      dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
Part III - Debug drivers use of the DMA-API
-------------------------------------------
The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size, for example.  With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints.  In the worst case such a violation can
result in data corruption up to destroyed filesystems.
To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling this
option has a performance impact.  Do not enable it in production kernels.
If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.  An example warning message may look like this::
    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
            check_unmap+0x203/0x490()
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
            function [device address=0x00000000640444be] [size=66 bytes] [mapped as
            single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---
The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.
By default only the first error will result in a warning message.  All other
errors will only be counted silently.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver,
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.
The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:
=============================== ===============================================
dma-api/all_errors              This file contains a numeric value.  If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log.  Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled.  This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops.  This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen.  If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this file
                                to limit the debug output to requests from that
                                particular driver.  Write an empty string to
                                that file to disable the filter and see
                                all errors again.
=============================== ===============================================
If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.
If you want to see debug messages only for a specific device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using debugfs.
When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand.  65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to override the default.  Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested.  The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated.  This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.
::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

This dma-debug interface is used to debug drivers that fail to check for
DMA mapping errors on addresses returned by the dma_map_single() and
dma_map_page() interfaces.  It clears a flag set by debug_dma_map_page()
to indicate that dma_mapping_error() has been called by the driver.  When
the driver does the unmap, debug_dma_unmap() checks the flag and if it is
still set, prints a warning message that includes the call trace that
leads up to the unmap.  This interface can be called from
dma_mapping_error() routines to enable DMA mapping error check debugging.