From: Nicholas Piggin
Date: Fri, 1 Jun 2018 10:01:18 +0000 (+1000)
Subject: powerpc/64s/radix: prefetch user address in update_mmu_cache
X-Git-Tag: v4.18-rc1~108^2~81
X-Git-Url: https://asedeno.scripts.mit.edu/gitweb/?a=commitdiff_plain;h=68662f85f3c3ea8a6fb61586a05955ab51bbb9e6;p=linux.git

powerpc/64s/radix: prefetch user address in update_mmu_cache

Prefetch the faulting address in update_mmu_cache to give the page
table walker perhaps 100 cycles head start as locks are dropped and
the interrupt completed.

Signed-off-by: Nicholas Piggin
Signed-off-by: Michael Ellerman
---

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index c3c39b02b2ba..8cecda4bd66a 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -509,8 +509,10 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
 	 */
 	unsigned long access, trap;
 
-	if (radix_enabled())
+	if (radix_enabled()) {
+		prefetch((void *)address);
 		return;
+	}
 
 	/* We only want HPTEs for linux PTEs that have _PAGE_ACCESSED set */
 	if (!pte_young(*ptep) || address >= TASK_SIZE)
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 82fed87289de..c1f4ca45c93a 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -152,7 +152,8 @@ pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 			  pmd_t *pmd)
 {
-	return;
+	if (radix_enabled())
+		prefetch((void *)addr);
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
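
For context, prefetch() on powerpc is roughly a data-cache-block-touch
hint, so issuing it on the faulting user address lets the MMU begin
translating and caching that address while the page-table locks are
dropped and the interrupt returns. A minimal sketch of the helper,
based on arch/powerpc/include/asm/processor.h (exact guards and inline
asm constraints may differ between kernel versions):

	/* Sketch of the powerpc prefetch() helper; the in-tree definition may differ. */
	static inline void prefetch(const void *x)
	{
		if (unlikely(!x))	/* nothing useful to touch for a NULL hint */
			return;

		/* dcbt: touch the cache block at x so the translation/fetch starts early */
		__asm__ __volatile__ ("dcbt 0,%0" : : "r" (x));
	}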