kernel_optimize_test/kernel/dma
Robin Murphy 73bc8a5e8e dma-direct: don't over-decrypt memory
commit 4a37f3dd9a83186cb88d44808ab35b78375082c9 upstream.

The original x86 sev_alloc() only called set_memory_decrypted() on
memory returned by alloc_pages_node(), so the page order calculation
fell out of that logic. However, the common dma-direct code has several
potential allocators, not all of which are guaranteed to round up the
underlying allocation to a power-of-two size, so carrying over that
calculation for the encryption/decryption size was a mistake. Fix it by
rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP
where we could have ended up decrypting part of the next adjacent
vmalloc area, only averted by no architecture actually supporting both
configs at once. Don't ask how I found that one out...

Fixes: c10f07aa27 ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Rientjes <rientjes@google.com>
[ backport the functional change without all the prior refactoring ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2022-06-22 14:13:20 +02:00
coherent.c
contiguous.c
debug.c dma-debug: make things less spammy under memory pressure 2022-06-22 14:13:13 +02:00
debug.h
direct.c dma-direct: don't over-decrypt memory 2022-06-22 14:13:20 +02:00
direct.h dma-direct: avoid redundant memory sync for swiotlb 2022-04-20 09:23:30 +02:00
dummy.c
Kconfig
Makefile
mapping.c
ops_helpers.c
pool.c dma/pool: create dma atomic pool only if dma zone has managed pages 2022-01-27 10:53:44 +01:00
remap.c
swiotlb.c Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE"" 2022-05-25 09:17:55 +02:00
virt.c