x86/mm: Avoid redundant interrupt disable in load_mm_cr4()
author	Jan Kiszka <jan.kiszka@siemens.com>
	Tue, 18 Jun 2019 07:32:11 +0000 (09:32 +0200)
committer	Thomas Gleixner <tglx@linutronix.de>
	Wed, 24 Jul 2019 12:43:37 +0000 (14:43 +0200)
commit	21e450d21ccad4cb7c7984c29ff145012b47736d
tree	17a6ad3e3620cedd781fa9096abaa4d3b14d0a30
parent	5f9e832c137075045d15cd6899ab0505cfb2ca4b
x86/mm: Avoid redundant interrupt disable in load_mm_cr4()

load_mm_cr4() is always called with interrupts disabled from:

 - switch_mm_irqs_off()
 - refresh_pce(), which is an on_each_cpu() callback

Thus, disabling interrupts in cr4_set/clear_bits() is redundant.

Implement cr4_set/clear_bits_irqsoff() helpers, rename load_mm_cr4() to
load_mm_cr4_irqsoff() and use the new helpers. The new helpers do not need
a lockdep assert as __cr4_set() has one already.

The renaming in combination with the checks in __cr4_set() ensures that any
changes in the boundary conditions at the call sites will be detected.
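The split can be sketched as follows in self-contained user-space C. The names mirror the kernel's, but the bodies are simplified stand-ins: `cr4` here is a plain variable rather than the per-CPU `cpu_tlbstate.cr4`, `local_irq_save()/local_irq_restore()` are modeled with a boolean flag, and the `assert()` in `__cr4_set()` stands in for its lockdep assertion. This is an illustration of the calling convention, not the actual kernel implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: the _irqsoff variants require that the caller has already
 * disabled interrupts; the plain variants pay for save/restore themselves. */

static unsigned long cr4;           /* stand-in for this-CPU cpu_tlbstate.cr4 */
static bool irqs_enabled = true;    /* stand-in for the CPU interrupt flag */

static void local_irq_save(bool *flags)   { *flags = irqs_enabled; irqs_enabled = false; }
static void local_irq_restore(bool flags) { irqs_enabled = flags; }

static void __cr4_set(unsigned long val)
{
	/* Models lockdep_assert_irqs_disabled(): every write path is checked
	 * here, so the _irqsoff helpers need no assert of their own. */
	assert(!irqs_enabled);
	cr4 = val;
}

/* Caller guarantees interrupts are off, e.g. switch_mm_irqs_off(). */
static void cr4_set_bits_irqsoff(unsigned long mask)
{
	if ((cr4 | mask) != cr4)
		__cr4_set(cr4 | mask);
}

static void cr4_clear_bits_irqsoff(unsigned long mask)
{
	if ((cr4 & ~mask) != cr4)
		__cr4_set(cr4 & ~mask);
}

/* General-purpose variant: disables interrupts around the update. */
static void cr4_set_bits(unsigned long mask)
{
	bool flags;

	local_irq_save(&flags);
	cr4_set_bits_irqsoff(mask);
	local_irq_restore(flags);
}
```

A caller that already runs with interrupts disabled uses the `_irqsoff` form directly and skips the redundant save/restore; a caller in unknown context uses `cr4_set_bits()`. Either way, a misuse trips the assertion in `__cr4_set()`.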

[ tglx: Massaged change log ]

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/0fbbcb64-5f26-4ffb-1bb9-4f5f48426893@siemens.com
arch/x86/events/core.c
arch/x86/include/asm/mmu_context.h
arch/x86/include/asm/tlbflush.h
arch/x86/mm/tlb.c