.. SPDX-License-Identifier: GPL-2.0

=====================================
Network Devices, the Kernel, and You!
=====================================

Introduction
============
The following is a random collection of documentation regarding
network devices.

struct net_device allocation rules
==================================
Network device structures need to persist even after the module is unloaded and
must be allocated with alloc_netdev_mqs() and friends.
If the device has registered successfully, it will be freed on the last use
by free_netdev(). This is required to handle the pathological case cleanly
(example: rmmod mydriver </sys/class/net/myeth/mtu )

alloc_netdev_mqs()/alloc_netdev() reserve extra space for driver
private data which gets freed when the network device is freed. If
separately allocated data is attached to the network device
(netdev_priv(dev)) then it is up to the module exit handler to free that.
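
For illustration only, a minimal sketch of how a hypothetical "foo" driver
might follow these rules; the foo_* names and the simplified probe/remove
signatures are invented for this example and are not a real API::

  #include <linux/netdevice.h>
  #include <linux/etherdevice.h>
  #include <linux/slab.h>

  struct foo_priv {
      void *rx_area;              /* separately allocated, driver-owned */
  };

  static int foo_probe(void)
  {
      struct net_device *dev;
      struct foo_priv *priv;
      int err;

      /* priv lives in the extra space reserved behind the netdev */
      dev = alloc_netdev_mqs(sizeof(*priv), "foo%d", NET_NAME_UNKNOWN,
                             ether_setup, 1, 1);
      if (!dev)
          return -ENOMEM;

      priv = netdev_priv(dev);
      priv->rx_area = kzalloc(4096, GFP_KERNEL);
      if (!priv->rx_area) {
          free_netdev(dev);       /* never registered: free directly */
          return -ENOMEM;
      }

      err = register_netdev(dev);
      if (err) {
          kfree(priv->rx_area);
          free_netdev(dev);
          return err;
      }
      return 0;
  }

  static void foo_remove(struct net_device *dev)
  {
      struct foo_priv *priv = netdev_priv(dev);

      unregister_netdev(dev);
      kfree(priv->rx_area);       /* separately allocated data: module's job */
      free_netdev(dev);           /* netdev and priv are freed on last use */
  }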

MTU
===
Each network device has a Maximum Transfer Unit. The MTU does not
include any link layer protocol overhead. Upper layer protocols must
not pass a socket buffer (skb) to a device to transmit with more data
than the MTU. The MTU does not include link layer header overhead, so
for example on Ethernet, if the standard MTU of 1500 bytes is used, the
actual skb will contain up to 1514 bytes because of the Ethernet
header. Devices should allow for the 4 byte VLAN header as well.

Segmentation Offload (GSO, TSO) is an exception to this rule. The
upper layer protocol may pass a large socket buffer to the device
transmit routine, and the device will break that up into separate
packets based on the current MTU.
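
As a rough, hypothetical illustration of the TSO case (the foo_tx_desc layout
and foo_fill_tso() are invented; only skb_is_gso() and gso_size are real
kernel interfaces), a driver whose hardware segments the buffer itself might
pass the segment size along like this::

  #include <linux/types.h>
  #include <linux/skbuff.h>

  struct foo_tx_desc {
      u16 mss;                    /* hypothetical hardware field */
  };

  static void foo_fill_tso(struct foo_tx_desc *desc, struct sk_buff *skb)
  {
      if (skb_is_gso(skb))
          /* hardware cuts the buffer into gso_size-byte segments,
           * each of which then fits within the device MTU */
          desc->mss = skb_shinfo(skb)->gso_size;
  }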

MTU is symmetrical and applies both to receive and transmit. A device
must be able to receive at least the maximum size packet allowed by
the MTU. A network device may use the MTU as a mechanism to size receive
buffers, but the device should allow packets with a VLAN header. With the
standard Ethernet MTU of 1500 bytes, the device should allow up to
1518 byte packets (1500 + 14 header + 4 tag). The device may
drop, truncate, or pass up oversize packets, but dropping oversize
packets is preferred.
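
For example, a receive buffer sized from the MTU as described above could be
computed as follows (a sketch only; whether a driver sizes its buffers this
way is its own choice, and foo_rx_buf_len() is an invented name)::

  #include <linux/if_ether.h>
  #include <linux/if_vlan.h>
  #include <linux/netdevice.h>

  /* With the standard Ethernet MTU of 1500 this gives 1518 bytes:
   * 1500 + ETH_HLEN (14) + VLAN_HLEN (4). */
  static unsigned int foo_rx_buf_len(const struct net_device *dev)
  {
      return dev->mtu + ETH_HLEN + VLAN_HLEN;
  }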

struct net_device synchronization rules
=======================================
ndo_open:
  Synchronization: rtnl_lock() semaphore.
  Context: process

ndo_stop:
  Synchronization: rtnl_lock() semaphore.
  Context: process
  Note: netif_running() is guaranteed false

ndo_do_ioctl:
  Synchronization: rtnl_lock() semaphore.
  Context: process

ndo_get_stats:
  Synchronization: dev_base_lock rwlock.
  Context: nominally process, but don't sleep inside an rwlock

ndo_start_xmit:
  Synchronization: __netif_tx_lock spinlock.

  When the driver sets NETIF_F_LLTX in dev->features this will be
  called without holding netif_tx_lock. In this case the driver
  has to lock by itself when needed.
  The locking there should also properly protect against
  set_rx_mode. WARNING: use of NETIF_F_LLTX is deprecated.
  Don't use it for new drivers.

  Context: Process with BHs disabled or BH (timer),
           will be called with interrupts disabled by netconsole.

  Return codes:

  * NETDEV_TX_OK everything ok.
  * NETDEV_TX_BUSY Cannot transmit packet, try later.
    Usually a bug, means queue start/stop flow control is broken in
    the driver. Note: the driver must NOT put the skb in its DMA ring.
    A minimal stop/wake sketch follows below.
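
  A minimal sketch of the stop/wake flow control referred to above; the
  foo_* ring helpers and struct foo_priv (from the earlier allocation
  sketch) are hypothetical, while netif_stop_queue(), netif_wake_queue()
  and netif_queue_stopped() are the real interface::

    #include <linux/netdevice.h>

    static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
                                      struct net_device *dev)
    {
        struct foo_priv *priv = netdev_priv(dev);

        /* Post the skb, then stop the queue while the ring is full so
         * the stack never hands us a packet we cannot place. */
        foo_post_to_ring(priv, skb);

        if (!foo_ring_has_room(priv))
            netif_stop_queue(dev);

        return NETDEV_TX_OK;
    }

    /* TX completion path (interrupt or NAPI poll) */
    static void foo_tx_complete(struct net_device *dev)
    {
        struct foo_priv *priv = netdev_priv(dev);

        foo_reclaim_sent_descriptors(priv);
        if (netif_queue_stopped(dev) && foo_ring_has_room(priv))
            netif_wake_queue(dev);
    }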

ndo_tx_timeout:
  Synchronization: netif_tx_lock spinlock; all TX queues frozen.
  Context: BHs disabled
  Notes: netif_queue_stopped() is guaranteed true

ndo_set_rx_mode:
  Synchronization: netif_addr_lock spinlock.
  Context: BHs disabled
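
Putting the callbacks above together, a driver typically fills a
struct net_device_ops and attaches it before registration. A sketch with
hypothetical foo_* handlers (declared but not implemented here)::

  #include <linux/netdevice.h>

  static int foo_open(struct net_device *dev);
  static int foo_stop(struct net_device *dev);
  static netdev_tx_t foo_start_xmit(struct sk_buff *skb,
                                    struct net_device *dev);
  static void foo_set_rx_mode(struct net_device *dev);
  static void foo_tx_timeout(struct net_device *dev, unsigned int txqueue);

  static const struct net_device_ops foo_netdev_ops = {
      .ndo_open        = foo_open,        /* rtnl held, process context */
      .ndo_stop        = foo_stop,        /* rtnl held, netif_running() false */
      .ndo_start_xmit  = foo_start_xmit,  /* __netif_tx_lock, BHs disabled */
      .ndo_set_rx_mode = foo_set_rx_mode, /* netif_addr_lock, BHs disabled */
      .ndo_tx_timeout  = foo_tx_timeout,  /* netif_tx_lock, queues frozen */
  };

  static void foo_setup_ops(struct net_device *dev)
  {
      dev->netdev_ops = &foo_netdev_ops;  /* attach before register_netdev() */
  }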

struct napi_struct synchronization rules
========================================
napi->poll:
  Synchronization:
    NAPI_STATE_SCHED bit in napi->state. Device
    driver's ndo_stop method will invoke napi_disable() on
    all NAPI instances which will do a sleeping poll on the
    NAPI_STATE_SCHED napi->state bit, waiting for all pending
    NAPI activity to cease.

  Context:
    softirq
    will be called with interrupts disabled by netconsole.
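
  For illustration, a minimal sketch of a poll routine and of ndo_stop
  disabling it. The foo_* helpers and the foo_priv layout (with the
  napi_struct embedded in the driver private data) are hypothetical;
  napi_complete_done() and napi_disable() are the real API::

    #include <linux/netdevice.h>

    struct foo_priv {
        struct napi_struct napi;    /* embedded in driver private data */
        struct net_device *dev;
    };

    static int foo_poll(struct napi_struct *napi, int budget)
    {
        struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
        int work_done;

        /* softirq context; process at most 'budget' RX packets */
        work_done = foo_process_rx(priv, budget);

        if (work_done < budget) {
            /* no more work: clear NAPI_STATE_SCHED, re-enable the IRQ */
            if (napi_complete_done(napi, work_done))
                foo_enable_rx_irq(priv);
        }
        return work_done;
    }

    static int foo_stop(struct net_device *dev)
    {
        struct foo_priv *priv = netdev_priv(dev);

        /* sleeps until any in-flight foo_poll() has finished */
        napi_disable(&priv->napi);
        return 0;
    }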