[ARM] xsc3: fix xsc3_l2_inv_range
When 'start' and 'end' are less than a cacheline apart and 'start' is
unaligned we are done after cleaning and invalidating the first
cacheline.  So check for (start < end) which will not walk off into
invalid address ranges when (start > end).

This issue was caught by drivers/dma/dmatest.

2.6.27 is susceptible.

Cc: <stable@kernel.org>
Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
Cc: Lothar Waßmann <LW@KARO-electronics.de>
Cc: Lennert Buytenhek <buytenh@marvell.com>
Cc: Eric Miao <eric.miao@marvell.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
commit c7cf72dcad
parent 45beca08dd
@@ -98,7 +98,7 @@ static void xsc3_l2_inv_range(unsigned long start, unsigned long end)
 	/*
 	 * Clean and invalidate partial last cache line.
 	 */
-	if (end & (CACHE_LINE_SIZE - 1)) {
+	if (start < end && (end & (CACHE_LINE_SIZE - 1))) {
 		xsc3_l2_clean_pa(end & ~(CACHE_LINE_SIZE - 1));
 		xsc3_l2_inv_pa(end & ~(CACHE_LINE_SIZE - 1));
 		end &= ~(CACHE_LINE_SIZE - 1);
@@ -107,7 +107,7 @@ static void xsc3_l2_inv_range(unsigned long start, unsigned long end)
 	/*
 	 * Invalidate all full cache lines between 'start' and 'end'.
 	 */
-	while (start != end) {
+	while (start < end) {
 		xsc3_l2_inv_pa(start);
 		start += CACHE_LINE_SIZE;
 	}