a048a07d7f
On some CPUs we can prevent a vulnerability related to store-to-load forwarding by preventing store forwarding between privilege domains, by inserting a barrier in kernel entry and exit paths.

This is known to be the case on at least Power7, Power8 and Power9 powerpc CPUs.

Barriers must be inserted generally before the first load after moving to a higher privilege, and after the last store before moving to a lower privilege. Both HV and PR privilege transitions must be protected.

Barriers are added as patch sections, with all kernel/hypervisor entry points patched, and the exit points to lower privilege levels patched similarly to the RFI flush patching.

Firmware advertisement is not implemented yet, so CPU flush types are hard coded.

Thanks to Michal Suchánek for bug fixes and review.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michal Suchánek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
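To make the patch-section idea concrete, below is a minimal, self-contained userspace sketch of the technique, not the kernel's actual implementation: entry/exit sites start out as nops recorded in a fixup table, and at boot the chosen barrier instruction is patched over them, with the barrier type hard coded in place of a firmware advertisement. All names here (stf_barrier_type, stf_fixup_sites, apply_stf_fixups) are illustrative assumptions; only the powerpc opcodes for nop, eieio and sync are real.

/*
 * Simplified model of patch-section style barrier fixups.
 * Illustrative only; names are assumptions, not kernel code.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PPC_INST_NOP   0x60000000u  /* ori r0,r0,0 */
#define PPC_INST_EIEIO 0x7c0006acu  /* lighter-weight ordering barrier */
#define PPC_INST_SYNC  0x7c0004acu  /* heavier-weight alternative */

/* Barrier flavour; hard coded here, as firmware advertisement is not implemented. */
enum stf_barrier_type {
	STF_BARRIER_NONE,
	STF_BARRIER_EIEIO,
	STF_BARRIER_SYNC,
};

/* "Kernel text": each entry/exit site starts out as a nop. */
static uint32_t text[] = {
	PPC_INST_NOP,	/* site 0: before the first load after entering the kernel */
	PPC_INST_NOP,	/* site 1: after the last store before returning to lower privilege */
};

/* Fixup table: analogous to a linker section collecting every patch site. */
static const size_t stf_fixup_sites[] = { 0, 1 };

/* Patch every recorded site with the instruction for the chosen barrier type. */
static void apply_stf_fixups(enum stf_barrier_type type)
{
	uint32_t insn;

	switch (type) {
	case STF_BARRIER_EIEIO:
		insn = PPC_INST_EIEIO;
		break;
	case STF_BARRIER_SYNC:
		insn = PPC_INST_SYNC;
		break;
	default:
		insn = PPC_INST_NOP;	/* leave the sites as nops */
		break;
	}

	for (size_t i = 0; i < sizeof(stf_fixup_sites) / sizeof(stf_fixup_sites[0]); i++)
		text[stf_fixup_sites[i]] = insn;
}

int main(void)
{
	/* Hard-coded choice standing in for a firmware/hypervisor advertisement. */
	apply_stf_fixups(STF_BARRIER_EIEIO);

	for (size_t i = 0; i < sizeof(text) / sizeof(text[0]); i++)
		printf("site %zu: 0x%08x\n", i, text[i]);
	return 0;
}

The point of the patch-section scheme, as with the RFI flush patching, is that the cost is a single nop per site when the mitigation is disabled, and the barrier is written into the instruction stream only on CPUs that need it.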