forked from luck/tmp_suning_uos_patched
tg3: Enforce DMA mapping / skb assignment ordering
Michael Chan noted that there is nothing in the code that would prevent
the compiler from delaying the access of the "mapping" member of the
newly arrived packet until much later.  If this happened after the
skb = NULL assignment, it is possible for the driver to pass a bad
dma_addr value to pci_unmap_single().  To enforce this ordering, we
need a write memory barrier.  The pairing read memory barrier already
exists in tg3_rx_prodring_xfer() under the comments starting with
"Ensure that updates to the...".

Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 9940516259
commit 61e800cf94
@@ -4659,11 +4659,16 @@ static int tg3_rx(struct tg3_napi *tnapi, int budget)
 			if (skb_size < 0)
 				goto drop_it;
 
-			ri->skb = NULL;
-
 			pci_unmap_single(tp->pdev, dma_addr, skb_size,
 					 PCI_DMA_FROMDEVICE);
 
+			/* Ensure that the update to the skb happens
+			 * after the usage of the old DMA mapping.
+			 */
+			smp_wmb();
+
+			ri->skb = NULL;
+
 			skb_put(skb, len);
 		} else {
 			struct sk_buff *copy_skb;