kernel_optimize_test/drivers/dma/ioat
Dan Williams 162b96e63e ioat2,3: cacheline align software descriptor allocations
All the necessary fields for handling an ioat2,3 ring entry can fit into
one cacheline.  Move ->len prior to ->txd in struct ioat_ring_ent, and
move allocation of these entries to a hw-cache-aligned kmem cache to
reduce the number of cachelines dirtied for descriptor management.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2009-09-08 17:53:04 -07:00
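
The change described above boils down to two things: ordering the hot software-descriptor fields (->len ahead of ->txd) so they land in a single cacheline, and allocating the entries from a hardware-cacheline-aligned slab cache. Below is a minimal sketch of that pattern using the standard kmem_cache_create()/SLAB_HWCACHE_ALIGN API; the struct name ioat_sw_desc, the cache name, and the hw pointer field are illustrative placeholders, not the driver's actual struct ioat_ring_ent definition.

/*
 * Sketch of the allocation pattern described in the commit message.
 * Names are illustrative, not copied from the ioat driver.
 */
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/dmaengine.h>

struct ioat_sw_desc {
	/* ->len sits before ->txd so the fields touched for routine
	 * descriptor management share one cacheline */
	size_t len;
	struct dma_async_tx_descriptor txd;
	void *hw;	/* hardware descriptor pointer (illustrative) */
};

static struct kmem_cache *ioat_sw_desc_cache;

static int __init ioat_sw_desc_cache_init(void)
{
	/* SLAB_HWCACHE_ALIGN starts every object on a hardware cacheline
	 * boundary, so bookkeeping writes dirty a single line per entry */
	ioat_sw_desc_cache = kmem_cache_create("ioat_sw_desc",
					       sizeof(struct ioat_sw_desc),
					       0, SLAB_HWCACHE_ALIGN, NULL);
	return ioat_sw_desc_cache ? 0 : -ENOMEM;
}

static struct ioat_sw_desc *ioat_alloc_sw_desc(gfp_t flags)
{
	return kmem_cache_zalloc(ioat_sw_desc_cache, flags);
}

static void ioat_free_sw_desc(struct ioat_sw_desc *desc)
{
	kmem_cache_free(ioat_sw_desc_cache, desc);
}
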
dca.c ioat: __devinit annotate the initialization paths 2009-09-08 17:30:24 -07:00
dma_v2.c ioat2,3: cacheline align software descriptor allocations 2009-09-08 17:53:04 -07:00
dma_v2.h ioat2,3: cacheline align software descriptor allocations 2009-09-08 17:53:04 -07:00
dma.c ioat: implement a private tx_list 2009-09-08 17:53:02 -07:00
dma.h ioat: implement a private tx_list 2009-09-08 17:53:02 -07:00
hw.h ioat1: trim ioat_dma_desc_sw 2009-09-08 17:30:24 -07:00
Makefile ioat2,3: convert to a true ring buffer 2009-09-08 17:29:55 -07:00
pci.c ioat2,3: cacheline align software descriptor allocations 2009-09-08 17:53:04 -07:00
registers.h ioat: switch watchdog and reset handler from workqueue to timer 2009-09-08 17:30:24 -07:00