forked from luck/tmp_suning_uos_patched
162b96e63e
All the necessary fields for handling an ioat2,3 ring entry can fit into one cacheline. Move ->len prior to ->txd in struct ioat_ring_ent, and move allocation of these entries to a hw-cache-aligned kmem cache to reduce the number of cachelines dirtied for descriptor management.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Files changed:

- dca.c
- dma_v2.c
- dma_v2.h
- dma.c
- dma.h
- hw.h
- Makefile
- pci.c
- registers.h