x86/mce: Improve memcpy_mcsafe()
author    Tony Luck <tony.luck@intel.com>
          Thu, 1 Sep 2016 18:39:33 +0000 (11:39 -0700)
committer Thomas Gleixner <tglx@linutronix.de>
          Mon, 5 Sep 2016 09:47:31 +0000 (11:47 +0200)
commit    9a6fb28a355d2609ace4dab4e6425442c647894d
tree      7c66135ce9e31daa18d5486e17c2f44ab3cfefd5
parent    3637efb00864f465baebd49464e58319fd295b65
x86/mce: Improve memcpy_mcsafe()

Use the mcsafe_key defined in the previous patch to make decisions on which
copy function to use. We can't use the FEATURE bit any more because PCI
quirks run too late to affect the patching of code. So we use a static key.
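
As a rough sketch of why a static key works here (the quirk name and body
below are illustrative assumptions, not the patch's own code; only
mcsafe_key comes from the previous patch): a static key can still be
flipped from a PCI quirk long after alternatives patching has run, whereas
a FEATURE-bit-based alternative cannot.

    #include <linux/jump_label.h>
    #include <linux/pci.h>

    DECLARE_STATIC_KEY_FALSE(mcsafe_key);  /* defined in the previous patch */

    /* Illustrative quirk: runs after alternatives have been applied,
     * yet can still switch memcpy_mcsafe() to the recovering copy. */
    static void quirk_enable_mcsafe(struct pci_dev *pdev)
    {
            static_branch_inc(&mcsafe_key);
    }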

Turn memcpy_mcsafe() into an inline function to make life easier for
callers. The assembly code that actually does the copy is now named
memcpy_mcsafe_unrolled().
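
A minimal sketch of the resulting inline wrapper, assuming the standard
static-branch API and a fallback to plain memcpy() when the key is off
(exact guards and annotations in the real header may differ):

    #include <linux/jump_label.h>
    #include <linux/string.h>

    __must_check int memcpy_mcsafe_unrolled(void *dst, const void *src,
                                            size_t cnt);

    static __always_inline __must_check int
    memcpy_mcsafe(void *dst, const void *src, size_t cnt)
    {
            /* Take the recovering, unrolled copy only when the PCI quirk
             * enabled the key on a machine-check-recovery-capable CPU. */
            if (static_branch_unlikely(&mcsafe_key))
                    return memcpy_mcsafe_unrolled(dst, src, cnt);

            memcpy(dst, src, cnt);
            return 0;       /* plain copy: nothing to report */
    }

Callers simply check the return value instead of caring which copy routine
ran underneath.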

Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Boris Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/bfde2fc774e94f53d91b70a4321c85a0d33e7118.1472754712.git.tony.luck@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
arch/x86/include/asm/pmem.h
arch/x86/include/asm/string_64.h
arch/x86/kernel/x8664_ksyms_64.c
arch/x86/lib/memcpy_64.S