rps: avoid one atomic in enqueue_to_backlog
If CONFIG_SMP=y, then we own a queue spinlock, so we can avoid the atomic test_and_set_bit() performed by napi_schedule_prep(). We now have the same number of atomic ops per netif_rx() call as with a pre-RPS kernel.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This commit is contained in:
parent 3f78d1f210
commit ebda37c27d
@@ -2432,8 +2432,10 @@ static int enqueue_to_backlog(struct sk_buff *skb, int cpu,
 			return NET_RX_SUCCESS;
 		}
 
-		/* Schedule NAPI for backlog device */
-		if (napi_schedule_prep(&sd->backlog)) {
+		/* Schedule NAPI for backlog device
+		 * We can use non atomic operation since we own the queue lock
+		 */
+		if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state)) {
 			if (!rps_ipi_queued(sd))
 				____napi_schedule(sd, &sd->backlog);
 		}
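The trick generalizes: a flag that is only ever modified while a particular lock is held can be updated with a plain read-modify-write instead of an atomic one. Below is a minimal userspace C sketch of that pattern; it is not the kernel code, and all names in it (struct backlog_sd, SCHED_BIT, the two helper functions) are hypothetical, with a pthread mutex standing in for the per-queue spinlock.

/* Minimal sketch, not kernel code: a flag protected by a lock can use
 * a plain (non-atomic) read-modify-write, mirroring the
 * __test_and_set_bit() change above.  All names are hypothetical. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define SCHED_BIT 0x1UL

struct backlog_sd {
	pthread_mutex_t lock;	/* stands in for the queue spinlock */
	unsigned long state;	/* stands in for napi->state */
};

/* Atomic variant: required when callers may not hold the lock.
 * This is the cost test_and_set_bit()/napi_schedule_prep() pays. */
static bool test_and_set_sched_atomic(struct backlog_sd *sd)
{
	return __atomic_fetch_or(&sd->state, SCHED_BIT,
				 __ATOMIC_SEQ_CST) & SCHED_BIT;
}

/* Non-atomic variant: safe only while sd->lock is held, because the
 * lock already serializes every writer of sd->state. */
static bool test_and_set_sched_locked(struct backlog_sd *sd)
{
	bool was_set = sd->state & SCHED_BIT;

	sd->state |= SCHED_BIT;
	return was_set;
}

int main(void)
{
	struct backlog_sd sd = { PTHREAD_MUTEX_INITIALIZER, 0 };

	pthread_mutex_lock(&sd.lock);
	if (!test_and_set_sched_locked(&sd))
		printf("bit was clear: schedule the backlog NAPI\n");
	pthread_mutex_unlock(&sd.lock);

	/* Without the lock, the atomic variant would be required: */
	if (test_and_set_sched_atomic(&sd))
		printf("bit already set: nothing to schedule\n");
	return 0;
}

In the patched kernel, enqueue_to_backlog() already holds the backlog queue lock at this point, so it can take the cheap non-atomic path unconditionally.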