============================
Transparent Hugepage Support
============================

Objective
=========

Performance critical computing applications dealing with large memory
working sets are already running on top of libhugetlbfs and in turn
hugetlbfs. Transparent Hugepage Support is an alternative way of backing
virtual memory with huge pages. It supports the automatic promotion and
demotion of page sizes and works without the shortcomings of hugetlbfs.

Currently it only works for anonymous memory mappings and tmpfs/shmem,
but in the future it can be extended to other filesystems.

The reason applications run faster is because of two factors. The first
factor is almost completely irrelevant and not of significant interest
because it also has the downside of requiring larger clear-page and
copy-page operations in page faults, which is a potentially negative
effect. The first factor consists of taking a single page fault for each
2M virtual region touched by userland (so reducing the enter/exit kernel
frequency by a 512 times factor). This only matters the first time the
memory is accessed for the lifetime of a memory mapping. The second, long
lasting and much more important factor affects all subsequent accesses to
the memory for the whole runtime of the application. The second factor
consists of two components: 1) the TLB miss will run faster (especially
with virtualization using nested pagetables but almost always also on
bare metal without virtualization) and 2) a single TLB entry will be
mapping a much larger amount of virtual memory, in turn reducing the
number of TLB misses. With virtualization and nested pagetables the TLB
can map entries of a larger size only if both KVM and the Linux guest are
using hugepages, but a significant speedup already happens if only one of
the two is using hugepages, just because the TLB miss is going to run
faster.
44 - "graceful fallback": mm components which don't have transparent hugepage
45 knowledge fall back to breaking huge pmd mapping into table of ptes and,
46 if necessary, split a transparent hugepage. Therefore these components
47 can continue working on the regular pages or regular pte mappings.

- if a hugepage allocation fails because of memory fragmentation,
  regular pages should be gracefully allocated instead and mixed in
  the same vma without any failure or significant delay and without
  userland noticing

- if some task quits and more hugepages become available (either
  immediately in the buddy or through the VM), guest physical memory
  backed by regular pages should be relocated on hugepages
  automatically (with khugepaged)
59 - it doesn't require memory reservation and in turn it uses hugepages
60 whenever possible (the only possible reservation here is kernelcore=
61 to avoid unmovable pages to fragment all the memory but such a tweak
62 is not specific to transparent hugepage support and it's a generic
63 feature that applies to all dynamic high order allocations in the

Transparent Hugepage Support maximizes the usefulness of free memory
if compared to the reservation approach of hugetlbfs by allowing all
unused memory to be used as cache or other movable (or even unmovable)
entities. It doesn't require reservation to prevent hugepage
allocation failures from being noticeable from userland. It allows
paging and all other advanced VM features to be available on the
hugepages. It requires no modifications for applications to take
advantage of it.

Applications however can be further optimized to take advantage of
this feature, as they have been optimized before, for example, to avoid
a flood of mmap system calls for every malloc(4k). Optimizing userland
is by no means mandatory and khugepaged can already take care of long
lived page allocations even for hugepage unaware applications that
deal with large amounts of memory.

In certain cases when hugepages are enabled system wide, applications
may end up allocating more memory resources. An application may mmap a
large region but only touch 1 byte of it; in that case a 2M page might
be allocated instead of a 4k page for no good reason. This is why it's
possible to disable hugepages system-wide and to only have them inside
MADV_HUGEPAGE madvise regions.

Embedded systems should enable hugepages only inside madvise regions
to eliminate any risk of wasting any precious byte of memory and to
only run faster.

Applications that get a lot of benefit from hugepages and that don't
risk losing memory by using hugepages should use
madvise(MADV_HUGEPAGE) on their critical mmapped regions.
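
As an illustration only, a minimal sketch of such an application-side
hint (the region size and the 2M constant below are assumptions, not
requirements)::

        #include <sys/mman.h>

        #define HUGE_2M         (1UL << 21)     /* assumed THP size */
        #define REGION_SIZE     (64 * HUGE_2M)  /* one hot 128M region */

        int main(void)
        {
                void *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (buf == MAP_FAILED)
                        return 1;
                /* Hint that this range should be backed by huge pages even
                 * when the system-wide policy is "madvise".  A failure here
                 * (e.g. THP compiled out) is not fatal: the mapping still
                 * works with regular pages. */
                madvise(buf, REGION_SIZE, MADV_HUGEPAGE);
                /* ... touch and use buf as usual ... */
                return 0;
        }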

sysfs
=====

Transparent Hugepage Support for anonymous memory can be entirely disabled
(mostly for debugging purposes) or only enabled inside MADV_HUGEPAGE
regions (to avoid the risk of consuming more memory resources) or enabled
system wide. This can be achieved with one of::

        echo always >/sys/kernel/mm/transparent_hugepage/enabled
        echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
        echo never >/sys/kernel/mm/transparent_hugepage/enabled
109 It's also possible to limit defrag efforts in the VM to generate
110 anonymous hugepages in case they're not immediately free to madvise
111 regions or to never try to defrag memory and simply fallback to regular
112 pages unless hugepages are immediately available. Clearly if we spend CPU
113 time to defrag memory, we would expect to gain even more by the fact we
114 use hugepages later instead of regular pages. This isn't always
115 guaranteed, but it may be more likely in case the allocation is for a
116 MADV_HUGEPAGE region.
120 echo always >/sys/kernel/mm/transparent_hugepage/defrag
121 echo defer >/sys/kernel/mm/transparent_hugepage/defrag
122 echo defer+madvise >/sys/kernel/mm/transparent_hugepage/defrag
123 echo madvise >/sys/kernel/mm/transparent_hugepage/defrag
124 echo never >/sys/kernel/mm/transparent_hugepage/defrag

always
        means that an application requesting THP will stall on
        allocation failure and directly reclaim pages and compact
        memory in an effort to allocate a THP immediately. This may be
        desirable for virtual machines that benefit heavily from THP
        use and are willing to delay the VM start to utilise them.

defer
        means that an application will wake kswapd in the background
        to reclaim pages and wake kcompactd to compact memory so that
        THP is available in the near future. It's the responsibility
        of khugepaged to then install the THP pages later.

defer+madvise
        will enter direct reclaim and compaction like ``always``, but
        only for regions that have used madvise(MADV_HUGEPAGE); all
        other regions will wake kswapd in the background to reclaim
        pages and wake kcompactd to compact memory so that THP is
        available in the near future.

madvise
        will enter direct reclaim like ``always`` but only for regions
        that have used madvise(MADV_HUGEPAGE). This is the default
        behaviour.

never
        should be self-explanatory.

By default the kernel tries to use the huge zero page on read page faults
to anonymous mappings. It's possible to disable the huge zero page by
writing 0 or re-enable it by writing 1::

        echo 0 >/sys/kernel/mm/transparent_hugepage/use_zero_page
        echo 1 >/sys/kernel/mm/transparent_hugepage/use_zero_page

Some userspace (such as a test program, or an optimized memory allocation
library) may want to know the size (in bytes) of a transparent hugepage::

        cat /sys/kernel/mm/transparent_hugepage/hpage_pmd_size
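
For illustration, a hypothetical helper that reads this file instead of
hard-coding 2M::

        #include <stdio.h>

        /* Returns the THP size in bytes, or 0 if THP is not available. */
        static unsigned long read_thp_size(void)
        {
                unsigned long size = 0;
                FILE *f = fopen("/sys/kernel/mm/transparent_hugepage"
                                "/hpage_pmd_size", "r");

                if (!f)
                        return 0;
                if (fscanf(f, "%lu", &size) != 1)
                        size = 0;
                fclose(f);
                return size;
        }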

khugepaged will be automatically started when
transparent_hugepage/enabled is set to "always" or "madvise", and it'll
be automatically shut down if it's set to "never".

khugepaged usually runs at low frequency so while one may not want to
invoke defrag algorithms synchronously during the page faults, it
should still be worth invoking defrag at least in khugepaged. However it's
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1::

        echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
        echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag

You can also control how many pages khugepaged should scan at each
pass::

        /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan

and how many milliseconds to wait in khugepaged between each pass (you
can set this to 0 to run khugepaged at 100% utilization of one core)::

        /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

and how many milliseconds to wait in khugepaged if there's a hugepage
allocation failure to throttle the next allocation attempt::

        /sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs

The khugepaged progress can be seen in the number of pages collapsed::

        /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed

for each pass::

        /sys/kernel/mm/transparent_hugepage/khugepaged/full_scans

``max_ptes_none`` specifies how many extra small pages (that are
not already mapped) can be allocated when collapsing a group
of small pages into one large page::

        /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none

A higher value leads to programs using additional memory, while a lower
value reduces the THP performance gain. The effect of max_ptes_none on
cpu time is negligible and can be ignored.

``max_ptes_swap`` specifies how many pages can be brought in from
swap when collapsing a group of pages into a transparent huge page::

        /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_swap

A higher value can cause excessive swap IO and waste
memory. A lower value can prevent THPs from being
collapsed, resulting in fewer pages being collapsed into
THPs, and lower memory access performance.

Boot parameter
==============

You can change the sysfs boot time defaults of Transparent Hugepage
Support by passing the parameter ``transparent_hugepage=always`` or
``transparent_hugepage=madvise`` or ``transparent_hugepage=never``
to the kernel command line.

Hugepages in tmpfs/shmem
========================

You can control hugepage allocation policy in tmpfs with mount option
``huge=``. It can have the following values:

always
    Attempt to allocate huge pages every time we need a new page;

never
    Do not allocate huge pages;

within_size
    Only allocate huge page if it will be fully within i_size.
    Also respect fadvise()/madvise() hints;

advise
    Only allocate huge pages if requested with fadvise()/madvise();

The default policy is ``never``.

``mount -o remount,huge= /mountpoint`` works fine after mount: remounting
``huge=never`` will not attempt to break up huge pages at all, just stop more
from being allocated.
256 There's also sysfs knob to control hugepage allocation policy for internal
257 shmem mount: /sys/kernel/mm/transparent_hugepage/shmem_enabled. The mount
258 is used for SysV SHM, memfds, shared anonymous mmaps (of /dev/zero or
259 MAP_ANONYMOUS), GPU drivers' DRM objects, Ashmem.
261 In addition to policies listed above, shmem_enabled allows two further
265 For use in emergencies, to force the huge option off from
268 Force the huge option on for all - very useful for testing;

Need of application restart
===========================

The transparent_hugepage/enabled values and tmpfs mount option only affect
future behavior. So to make them effective you need to restart any
application that could have been using hugepages. This also applies to the
regions registered in khugepaged.

Monitoring usage
================

The number of anonymous transparent huge pages currently used by the
system is available by reading the AnonHugePages field in ``/proc/meminfo``.
To identify what applications are using anonymous transparent huge pages,
it is necessary to read ``/proc/PID/smaps`` and count the AnonHugePages
fields for each mapping.

The number of file transparent huge pages mapped to userspace is available
by reading ShmemPmdMapped and ShmemHugePages fields in ``/proc/meminfo``.
To identify what applications are mapping file transparent huge pages, it
is necessary to read ``/proc/PID/smaps`` and count the FileHugeMapped fields
for each mapping.

Note that reading the smaps file is expensive and reading it
frequently will incur overhead.
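
For example, a small, hypothetical helper that sums the AnonHugePages
fields across one process's mappings (error handling kept minimal)::

        #include <stdio.h>

        /* Sum of AnonHugePages (in kB) across all mappings of a pid,
         * or -1 if the smaps file cannot be opened. */
        static long anon_huge_pages_kb(long pid)
        {
                char path[64], line[256];
                long total = 0, kb;
                FILE *f;

                snprintf(path, sizeof(path), "/proc/%ld/smaps", pid);
                f = fopen(path, "r");
                if (!f)
                        return -1;
                while (fgets(line, sizeof(line), f))
                        if (sscanf(line, "AnonHugePages: %ld kB", &kb) == 1)
                                total += kb;
                fclose(f);
                return total;
        }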

There are a number of counters in ``/proc/vmstat`` that may be used to
monitor how successfully the system is providing huge pages for use.

thp_fault_alloc
        is incremented every time a huge page is successfully
        allocated to handle a page fault. This applies to both the
        first time a page is faulted and for COW faults.

thp_collapse_alloc
        is incremented by khugepaged when it has found
        a range of pages to collapse into one huge page and has
        successfully allocated a new huge page to store the data.

thp_fault_fallback
        is incremented if a page fault fails to allocate
        a huge page and instead falls back to using small pages.

thp_collapse_alloc_failed
        is incremented if khugepaged found a range
        of pages that should be collapsed into one huge page but failed
        the allocation.

thp_file_alloc
        is incremented every time a file huge page is successfully
        allocated.

thp_file_mapped
        is incremented every time a file huge page is mapped into
        user address space.

thp_split_page
        is incremented every time a huge page is split into base
        pages. This can happen for a variety of reasons but a common
        reason is that a huge page is old and is being reclaimed.
        This action implies splitting all PMDs the page was mapped with.

thp_split_page_failed
        is incremented if the kernel fails to split a huge
        page. This can happen if the page was pinned by somebody.

thp_deferred_split_page
        is incremented when a huge page is put onto the split
        queue. This happens when a huge page is partially unmapped and
        splitting it would free up some memory. Pages on the split queue
        are going to be split under memory pressure.

thp_split_pmd
        is incremented every time a PMD is split into a table of PTEs.
        This can happen, for instance, when an application calls mprotect()
        or munmap() on part of a huge page. It doesn't split the huge page,
        only the page table entry.

thp_zero_page_alloc
        is incremented every time a huge zero page is
        successfully allocated. It includes allocations which were
        dropped due to a race with another allocation. Note, it doesn't
        count every map of the huge zero page, only its allocation.

thp_zero_page_alloc_failed
        is incremented if the kernel fails to allocate a
        huge zero page and falls back to using small pages.

thp_swpout
        is incremented every time a huge page is swapped out in one
        piece without splitting.

thp_swpout_fallback
        is incremented if a huge page has to be split before swapout.
        Usually because the kernel failed to allocate some contiguous
        swap space for the huge page.

As the system ages, allocating huge pages may be expensive as the
system uses memory compaction to copy data around memory to free a
huge page for use. There are some counters in ``/proc/vmstat`` to help
monitor this overhead.

compact_stall
        is incremented every time a process stalls to run
        memory compaction so that a huge page is free for use.

compact_success
        is incremented if the system compacted memory and
        freed a huge page for use.

compact_fail
        is incremented if the system tries to compact memory
        but failed.

compact_pages_moved
        is incremented each time a page is moved. If
        this value is increasing rapidly, it implies that the system
        is copying a lot of data to satisfy the huge page allocation.
        It is possible that the cost of copying exceeds any savings
        from reduced TLB misses.

compact_pagemigrate_failed
        is incremented when the underlying mechanism
        for moving a page failed.

compact_blocks_moved
        is incremented each time memory compaction examines
        a huge page aligned range of pages.

It is possible to establish how long the stalls were using the function
tracer to record how long was spent in __alloc_pages_nodemask and
using the mm_page_alloc tracepoint to identify which allocations were
for huge pages.

get_user_pages and follow_page
==============================

get_user_pages and follow_page, if run on a hugepage, will return the
head or tail pages as usual (exactly as they would do on
hugetlbfs). Most GUP users will only care about the actual physical
address of the page and its temporary pinning to release after the I/O
is complete, so they won't ever notice the fact the page is huge. But
if any driver is going to mangle the page structure of the tail
page (like checking page->mapping or other bits that are relevant
for the head page and not the tail page), it should be updated to
check the head page instead. Taking a reference on any head/tail page
would prevent the page from being split by anyone.
419 these aren't new constraints to the GUP API, and they match the
420 same constrains that applies to hugetlbfs too, so any driver capable
421 of handling GUP on hugetlbfs will also work fine on transparent
422 hugepage backed mappings.
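
For illustration, a hypothetical driver helper that only ever looks at
head-page metadata; compound_head() maps a tail page back to its head::

        #include <linux/mm.h>

        /* Given a page returned by get_user_pages(), look at metadata
         * that is only meaningful on the head page. */
        static struct address_space *gup_page_mapping(struct page *page)
        {
                /* compound_head() returns the page itself when it is not
                 * a tail page, so this is safe for regular pages too. */
                struct page *head = compound_head(page);

                return head->mapping;
        }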

In case you can't handle compound pages if they're returned by
follow_page, the FOLL_SPLIT bit can be specified as a parameter to
follow_page, so that it will split the hugepages before returning
them. Migration for example passes FOLL_SPLIT as a parameter to
follow_page because it's not hugepage aware and in fact it can't work
at all on hugetlbfs (but it instead works fine on transparent
hugepages thanks to FOLL_SPLIT). Migration simply can't deal with
hugepages being returned (as it's not only checking the pfn of the
page and pinning it during the copy but it pretends to migrate the
memory in regular page sizes and with regular pte/pmd mappings).

Optimizing the applications
===========================

To be guaranteed that the kernel will map a 2M page immediately in any
memory region, the mmap region has to be hugepage naturally
aligned. posix_memalign() can provide that guarantee.
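
A possible sketch, assuming a 2M PMD size (a robust allocator would read
hpage_pmd_size instead of hard-coding it)::

        #include <stdlib.h>
        #include <sys/mman.h>

        /* Allocate a buffer whose start is 2M aligned so that the very
         * first fault in it can be served with a huge page. */
        static void *alloc_hugepage_aligned(size_t len)
        {
                void *buf = NULL;

                if (posix_memalign(&buf, 2UL << 20, len))
                        return NULL;
                madvise(buf, len, MADV_HUGEPAGE);       /* optional hint */
                return buf;
        }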

Hugetlbfs
=========

You can use hugetlbfs on a kernel that has transparent hugepage
support enabled just fine as always. No difference can be noted in
hugetlbfs other than there will be less overall fragmentation. All
usual features belonging to hugetlbfs are preserved and
unaffected. libhugetlbfs will also work fine as usual.

Graceful fallback
=================

Code walking pagetables but unaware about huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
missing after pmd_offset returns the pmd. Thanks to the graceful
fallback design, with a one liner change, you can avoid writing
hundreds if not thousands of lines of complex code to make your code
hugepage aware.
463 If you're not walking pagetables but you run into a physical hugepage
464 but you can't handle it natively in your code, you can split it by
465 calling split_huge_page(page). This is what the Linux VM does before
466 it tries to swapout the hugepage for example. split_huge_page() can fail
467 if the page is pinned and you must handle this correctly.
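
A rough sketch of that fallback (hypothetical helper; split_huge_page()
expects the page to be locked by the caller)::

        #include <linux/mm.h>
        #include <linux/huge_mm.h>
        #include <linux/pagemap.h>

        /* Fall back to base pages for code that cannot handle a THP.
         * Returns 0 on success or if the page was not huge to begin
         * with, and nonzero if the split failed (e.g. extra pins). */
        static int make_page_small(struct page *page)
        {
                int ret;

                if (!PageTransHuge(page))
                        return 0;
                lock_page(page);
                ret = split_huge_page(page);
                unlock_page(page);
                return ret;
        }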

Example to make mremap.c transparent hugepage aware with a one liner
change::

        diff --git a/mm/mremap.c b/mm/mremap.c
        --- a/mm/mremap.c
        +++ b/mm/mremap.c
        @@ -41,6 +41,7 @@ static pmd_t *get_old_pmd(struct mm_stru
                        return NULL;

                pmd = pmd_offset(pud, addr);
        +       split_huge_pmd(vma, pmd, addr);
                if (pmd_none_or_clear_bad(pmd))
                        return NULL;

Locking in hugepage aware code
==============================

We want as much code as possible hugepage aware, as calling
split_huge_page() or split_huge_pmd() has a cost.

To make pagetable walks huge pmd aware, all you need to do is to call
pmd_trans_huge() on the pmd returned by pmd_offset. You must hold the
mmap_sem in read (or write) mode to be sure a huge pmd cannot be
created from under you by khugepaged (khugepaged collapse_huge_page
takes the mmap_sem in write mode in addition to the anon_vma lock). If
pmd_trans_huge returns false, you just fall back to the old code
paths. If instead pmd_trans_huge returns true, you have to take the
page table lock (pmd_lock()) and re-run pmd_trans_huge. Taking the
page table lock will prevent the huge pmd from being converted into a
regular pmd from under you (split_huge_pmd can run in parallel to the
pagetable walk). If the second pmd_trans_huge returns false, you
should just drop the page table lock and fall back to the old code as
before. Otherwise you can proceed to process the huge pmd and the
hugepage natively. Once finished you can drop the page table lock.
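
The same sequence as a compact sketch; the walker function and the work
done under the lock are hypothetical::

        #include <linux/mm.h>
        #include <linux/huge_mm.h>

        /* Caller holds mmap_sem (read or write) and passes a pmd
         * obtained from pmd_offset(). */
        static void walk_one_pmd(struct vm_area_struct *vma, pmd_t *pmd,
                                 unsigned long addr)
        {
                if (pmd_trans_huge(*pmd)) {
                        spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);

                        if (pmd_trans_huge(*pmd)) {
                                /* ... process the huge pmd natively ... */
                                spin_unlock(ptl);
                                return;
                        }
                        /* Lost the race with split_huge_pmd(): the pmd is
                         * regular again, fall through to the pte path. */
                        spin_unlock(ptl);
                }
                /* ... old code path working on regular ptes ... */
        }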

Refcounts and transparent huge pages
====================================

Refcounting on THP is mostly consistent with refcounting on other compound
pages:

- get_page()/put_page() and GUP operate on the head page's ->_refcount.

- ->_refcount in tail pages is always zero: get_page_unless_zero() never
  succeeds on tail pages.

- map/unmap of the pages with a PTE entry increments/decrements ->_mapcount
  on the relevant sub-page of the compound page.

- map/unmap of the whole compound page is accounted in compound_mapcount
  (stored in the first tail page). For file huge pages, we also increment
  ->_mapcount of all sub-pages in order to have race-free detection of
  the last unmap of subpages.

PageDoubleMap() indicates that the page is *possibly* mapped with PTEs.

For anonymous pages PageDoubleMap() also indicates ->_mapcount in all
subpages is offset up by one. This additional reference is required to
get race-free detection of unmap of subpages when we have them mapped with
both PMDs and PTEs.

This optimization is required to lower the overhead of per-subpage mapcount
tracking. The alternative is to alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.

For anonymous pages, we set PG_double_map when a PMD of the page got split
for the first time, but the page still has a PMD mapping. The additional
references go away with the last compound_mapcount.

File pages get PG_double_map set on the first map of the page with a PTE;
the flag goes away when the page gets evicted from the page cache.

split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
structures. It can be done easily for refcounts taken by page table
entries. But we don't have enough information on how to distribute any
additional pins (i.e. from get_user_pages). split_huge_page() fails any
requests to split a pinned huge page: it expects the page count to be
equal to the sum of the mapcount of all sub-pages plus one (the
split_huge_page caller must have a reference for the head page).

split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount of anonymous pages. File pages just get unmapped.

We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().

All tail pages have zero ->_refcount until atomic_add(). This prevents the
scanner from getting a reference to the tail page up to that point. After
the atomic_add() we don't care about the ->_refcount value. We already know
how many references should be uncharged from the head page.

For the head page get_page_unless_zero() will succeed and we don't mind. It's
clear where the reference should go after the split: it will stay on the
head page.

Note that split_huge_pmd() doesn't have any limitations on refcounting:
the pmd can be split at any point and it never fails.

Partial unmap and deferred_split_huge_page()
============================================

Unmapping part of a THP (with munmap() or other means) is not going to free
memory immediately. Instead, we detect that a subpage of the THP is not in
use in page_remove_rmap() and queue the THP for splitting if memory pressure
comes. Splitting will free up unused subpages.

Splitting the page right away is not an option due to locking context in
the place where we can detect partial unmap. It also might be
counterproductive since in many cases partial unmap happens during exit(2)
if a THP crosses a VMA boundary.

The function deferred_split_huge_page() is used to queue a page for
splitting. The splitting itself will happen when we get memory pressure
via the shrinker interface.