The 5719 has a bug where read DMAs larger than 4k can cause problems. This
patch works around the problem by dividing larger DMA requests into
something the hardware can handle.
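To illustrate the idea only (this is not the actual tg3 code; DMA_LIMIT and submit_bd_fn are made up for the sketch), a fragment larger than 4k is carved into 4k-or-smaller pieces, each submitted as its own BD:
#include <stddef.h>
#include <stdint.h>

#define DMA_LIMIT 4096  /* assumed per-BD limit for the affected chip */

/* Illustrative callback: submit one BD covering [addr, addr + len). */
typedef void (*submit_bd_fn)(uint64_t addr, size_t len, int is_last);

/* Split one large mapping into chunks the hardware can handle. */
static void submit_frag(uint64_t dma_addr, size_t len, submit_bd_fn submit)
{
    while (len > 0) {
        size_t chunk = len > DMA_LIMIT ? DMA_LIMIT : len;

        submit(dma_addr, chunk, len == chunk);
        dma_addr += chunk;
        len -= chunk;
    }
}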
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
As the driver breaks large skb fragments into smaller submissions to the
hardware, there is a new danger that BDs might get exhausted before all
fragments have been mapped. This patch adds code to make sure tx BDs
aren't oversubscribed and flag the condition if it happens.
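A hypothetical sketch of the budgeting idea only (bds_for_frag, tx_budget_ok and the 4k limit are illustrative names, not tg3 code): count the worst-case BDs the split fragments might need and compare against what is free before mapping anything.
#include <stdbool.h>
#include <stddef.h>

#define DMA_LIMIT 4096  /* assumed per-BD limit, as in the previous patch */

/* Worst-case number of BDs one fragment needs once split into <= 4k pieces. */
static size_t bds_for_frag(size_t len)
{
    return (len + DMA_LIMIT - 1) / DMA_LIMIT;
}

/* Return true if all fragments fit within the BDs still free. */
static bool tx_budget_ok(size_t free_bds, const size_t *frag_len, size_t nfrags)
{
    size_t need = 0;
    size_t i;

    for (i = 0; i < nfrags; i++)
        need += bds_for_frag(frag_len[i]);

    return need <= free_bds;
}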
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This patch consolidates all code that populates tx BDs into a single
routine. Populating tx BDs needs to be controlled more carefully so the driver
can determine whether workarounds need to be applied.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following patches are going to break skb fragments into smaller
sizes. This patch attempts to make the change easier to digest by only
addressing the skb teardown portion.
The patch modifies the driver to skip over any BDs that have a flag set
that indicates the BD isn't the beginning of an skb fragment. Such BDs
were a result of segmentation and do not need a pci_unmap_page() call.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the following patches, unmapping skb fragments will get just as
complicated as mapping them. This patch generalizes
tg3_skb_error_unmap() and makes it the one-stop-shop for skb unmapping.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The first fragment of an skb should always be greater than 8 bytes.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
In the following patches, the process the driver will use to assign skb
fragments to transmit BDs will get more complicated. To prepare for
that new code, this patch seeks to simplify how transmit BDs are
populated. It does this by separating the code that assigns the BD
members from the logic that controls how the fields are set.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The following patches will require the use of an additional flag in the
ring_info structure. The use of this flag is tx path specific, so this
patch defines a specialized ring_info structure.
Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
Reviewed-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The AX88772B uses only 11 bits of the header for the actual size; the
other bits are used for something else. This fills dmesg with messages like:
asix_rx_fixup() Bad Header Length
This patch trims the check to the low 11 bits. I believe the top 5 bits
are unused on older chips.
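In other words, take the length from the low 11 bits and ignore the rest. A rough sketch of the check, assuming the usual size/inverted-size header layout (asix_header_ok and ASIX_SIZE_MASK are illustrative names, not the driver's):
#include <stdbool.h>
#include <stdint.h>

#define ASIX_SIZE_MASK 0x07ffu  /* the low 11 bits carry the packet size */

/* The lower half of the RX header holds the size and the upper half its
 * complement; compare only the 11 size bits so the extra AX88772B status
 * bits don't trip the check.
 */
static bool asix_header_ok(uint32_t header)
{
    uint16_t size = header & ASIX_SIZE_MASK;

    return ((~header >> 16) & ASIX_SIZE_MASK) == size;
}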
Signed-off-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Try to send to the correct address this time!
---------- Forwarded Message ----------
Subject: [PATCH] Fix cdc-phonet build
Date: Saturday 23 Jul 2011
From: Chris Clayton <chris2553@googlemail.com>
To: linux-net@vger.kernel.org
cdc-phonet does not presently build on linux-3.0 because there is no entry for it in
drivers/net/Makefile. This patch adds that entry.
Signed-off-by: Chris Clayton <chris2553@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
On Tue, Jul 26, 2011 at 05:40:27PM -0700, Joe Perches wrote:
> On Tue, 2011-07-26 at 17:37 -0700, Jay Vosburgh wrote:
> > Joe Perches <joe@perches.com> wrote:
> > >I'd prefer you don't separate the format string
> > >into multiple pieces.
> > Why not? To me, it looks easier to read split into sections
> > that don't wrap lines.
>
> Harder to grep for a dmesg and the
> defect rate of these split formats is
> typically higher than single strings
> because of bad spacing between string
> segments.
>
I noticed that you took some time back in late 2009 to 'consolidate' the
split format-strings present in the bonding driver at the time and I've
decided I'm fine to leave them the way they are. The main point of my
patch was to change the output and I would like to get that included.
Here is my updated patch...
Subject: [PATCH net-next-2.6 v2] bonding: reduce noise during init
Many are using sysfs to configure bonding rather than module options, so
there is no need for bonding to throw this warning in normal cases.
Keep the message around when debugging is enabled as it might be useful
for someone desperate enough to enable debugging, but eliminate it
otherwise.
Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
When a bond contains a device where one name is the subset of another
(eth1 and eth10, for example), one cannot properly set the primary
device or the currently active device.
This was reported by Takuma Umeya and is based on his work. I also
verified the problem and tested that this fix resolves it.
V2: A few did not like the current code or my changes, so I
refactored bonding_store_primary and bonding_store_active_slave to be a
bit cleaner, dropped the use of strnicmp since we did not really need
the comparison to be case-insensitive, and formatted the input string
from sysfs so a comparison over the full IFNAMSIZ length could be used.
I also discovered an error in bonding_store_active_slave that would
modify bond->primary_slave rather than bond->curr_active_slave before
forcing the bonding driver to choose a new active slave.
V3: Actually sending the proper patch....
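For illustration, the substance of the fix is to trim the sysfs input to a proper interface name and compare the full string instead of a length-limited prefix (which lets "eth1" match "eth10"). A simplified sketch, not the actual bonding code:
#include <stdbool.h>
#include <string.h>

#define IFNAMSIZ 16

/* Copy the sysfs input, dropping a trailing newline, so a full-string
 * comparison can be used instead of a prefix match.
 */
static void format_ifname(char *dst, const char *src)
{
    size_t len;

    strncpy(dst, src, IFNAMSIZ - 1);
    dst[IFNAMSIZ - 1] = '\0';
    len = strlen(dst);
    if (len && dst[len - 1] == '\n')
        dst[len - 1] = '\0';
}

static bool slave_name_matches(const char *slave, const char *input)
{
    char ifname[IFNAMSIZ];

    format_ifname(ifname, input);
    return strncmp(slave, ifname, IFNAMSIZ) == 0;   /* "eth1" != "eth10" */
}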
Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Reported-by: Takuma Umeya <tumeya@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After the last patch, we are left in a state in which only drivers calling
ether_setup have IFF_TX_SKB_SHARING set (we assume that drivers touching real
hardware call ether_setup for their net_devices and don't hold any state in
their skbs). There are a handful of drivers that violate this assumption, of
course, and they need to be fixed up. This patch identifies those drivers and
marks them as not being able to support the safe transmission of shared skbs
by clearing the IFF_TX_SKB_SHARING flag in priv_flags.
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: Karsten Keil <isdn@linux-pingi.de>
CC: "David S. Miller" <davem@davemloft.net>
CC: Jay Vosburgh <fubar@us.ibm.com>
CC: Andy Gospodarek <andy@greyhouse.net>
CC: Patrick McHardy <kaber@trash.net>
CC: Krzysztof Halasa <khc@pm.waw.pl>
CC: "John W. Linville" <linville@tuxdriver.com>
CC: Greg Kroah-Hartman <gregkh@suse.de>
CC: Marcel Holtmann <marcel@holtmann.org>
CC: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Pktgen attempts to transmit shared skbs to net devices, which can't be used by
some drivers as they keep state information in skbs. This patch adds a flag
marking drivers as being able to handle shared skbs in their tx path. Drivers
default to being unable to do so, but calling ether_setup enables this flag,
as 90% of the drivers calling ether_setup touch real hardware and can handle
shared skbs. A subsequent patch will audit drivers to ensure that the flag is
set properly.
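A rough sketch of how such a flag is produced and consumed (illustrative types and bit value, not the exact ether_setup/pktgen diff):
#include <stdbool.h>

#define IFF_TX_SKB_SHARING (1u << 0)    /* illustrative bit position */

struct fake_netdev {
    unsigned int priv_flags;
};

/* ether_setup()-style init: Ethernet-like devices default to allowing
 * shared skbs in their tx path.
 */
static void ether_like_setup(struct fake_netdev *dev)
{
    dev->priv_flags |= IFF_TX_SKB_SHARING;
}

/* pktgen-style check: refuse devices that do not advertise the flag,
 * since they may keep state in their skbs.
 */
static bool can_use_shared_skbs(const struct fake_netdev *dev)
{
    return dev->priv_flags & IFF_TX_SKB_SHARING;
}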
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reported-by: Jiri Pirko <jpirko@redhat.com>
CC: Robert Olsson <robert.olsson@its.uu.se>
CC: Eric Dumazet <eric.dumazet@gmail.com>
CC: Alexey Dobriyan <adobriyan@gmail.com>
CC: David S. Miller <davem@davemloft.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
For some reason, when rxaccel is disabled, NV_RX3_VLAN_TAG_PRESENT is
still set and some pseudorandom vids appear. So check for
NETIF_F_HW_VLAN_RX as well. Also set hw_features correctly and set the vlan
mode on probe.
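That is, the tag should only be honoured when both the descriptor bit and the configured feature agree. A rough sketch with illustrative names (FEAT_HW_VLAN_RX stands in for NETIF_F_HW_VLAN_RX, and the bit values are made up):
#include <stdbool.h>
#include <stdint.h>

#define NV_RX3_VLAN_TAG_PRESENT (1u << 16)  /* illustrative bit */
#define FEAT_HW_VLAN_RX         (1u << 0)   /* stands in for NETIF_F_HW_VLAN_RX */

/* Only trust the descriptor's VLAN bit when rx acceleration is enabled. */
static bool rx_vlan_tag_valid(uint32_t desc_flags, unsigned int features)
{
    return (features & FEAT_HW_VLAN_RX) &&
           (desc_flags & NV_RX3_VLAN_TAG_PRESENT);
}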
Signed-off-by: Jiri Pirko <jpirko@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit 87c288c6e9 ("gianfar: do vlan cleanup") has two issues:
# rx and tx flags are permuted
# vlan tag insertion is enabled by default (this leads to unusable connections in some configurations)
If VLAN insertion is requested (via ethtool) it will be set at another point.
Signed-off-by: Sebastian Poehn <sebastian.poehn@belden.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The cpu compatible string we look for is "SPARC-T3".
As far as memset/memcpy optimizations go, we treat this chip the same
as Niagara-T2/T2+. Use cache initializing stores for memset, and use
prefetch, FPU block loads, cache initializing stores, and block stores
for copies.
We use the Niagara-T2 perf support, since T3 is a close relative in
this regard. Later we'll add support for the new events T3 can
report, plus enable T3's new "sample" mode.
For now I haven't added any new ELF hwcap flags. We probably need
to add a couple, for example:
T2 and T3 both support the population count instruction in hardware.
T3 supports VIS3 instructions, including support (finally) for
partitioned shift. One can also now move directly between float
and integer registers.
T3 supports instructions meant to help with Galois Field and other HPC
calculations, such as XOR multiply. Also there are "OP and negate"
instructions, for example "fnmul" which is multiply-and-negate.
T3 recognizes the transactional memory opcodes; however, since
transactional memory isn't supported: 1) 'commit' behaves as a NOP,
2) 'chkpt' always branches, 3) 'rdcps' returns all zeros, and 4) 'wrcps'
behaves as a NOP.
So we'll need about 3 new elf capability flags in the end to represent
all of these things.
Signed-off-by: David S. Miller <davem@davemloft.net>
The hypervisor call is only necessary if hypervisor events are
being requested.
So if we're not tracking hypervisor events, simply do a direct
register write.
Signed-off-by: David S. Miller <davem@davemloft.net>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/security-testing-2.6: (54 commits)
tpm_nsc: Fix bug when loading multiple TPM drivers
tpm: Move tpm_tis_reenable_interrupts out of CONFIG_PNP block
tpm: Fix compilation warning when CONFIG_PNP is not defined
TOMOYO: Update kernel-doc.
tpm: Fix a typo
tpm_tis: Probing function for Intel iTPM bug
tpm_tis: Fix the probing for interrupts
tpm_tis: Delay ACPI S3 suspend while the TPM is busy
tpm_tis: Re-enable interrupts upon (S3) resume
tpm: Fix display of data in pubek sysfs entry
tpm_tis: Add timeouts sysfs entry
tpm: Adjust interface timeouts if they are too small
tpm: Use interface timeouts returned from the TPM
tpm_tis: Introduce durations sysfs entry
tpm: Adjust the durations if they are too small
tpm: Use durations returned from TPM
TOMOYO: Enable conditional ACL.
TOMOYO: Allow using argv[]/envp[] of execve() as conditions.
TOMOYO: Allow using executable's realpath and symlink's target as conditions.
TOMOYO: Allow using owner/group etc. of file objects as conditions.
...
Fix up trivial conflict in security/tomoyo/realpath.c
Currently when we get a read error during recovery, we simply abort
the recovery.
Instead, repeat the read in page-sized blocks.
On successful reads, write to the target.
On read errors, record a bad block on the destination,
and only if that fails do we abort the recovery.
As we now retry reads we need to know where we read from. This was in
bi_sector but that can be changed during a read attempt.
So store the correct from_addr and to_addr in the r10_bio for later
access.
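A hedged sketch of the retry loop described above; the recovery_ops hooks are hypothetical stand-ins for the real md/raid10 machinery:
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;
#define PAGE_SECTORS 8  /* one 4k page expressed in 512-byte sectors */

/* Hypothetical hooks standing in for the real recovery machinery. */
struct recovery_ops {
    bool (*read_from_source)(sector_t from_addr, int sectors);
    void (*write_to_target)(sector_t to_addr, int sectors);
    bool (*record_badblock)(sector_t to_addr, int sectors);
};

/* Retry a failed recovery read in page-sized pieces instead of aborting;
 * abort (return false) only if even recording a bad block fails.
 */
static bool recover_region(const struct recovery_ops *ops,
                           sector_t from_addr, sector_t to_addr, int sectors)
{
    while (sectors > 0) {
        int s = sectors > PAGE_SECTORS ? PAGE_SECTORS : sectors;

        if (ops->read_from_source(from_addr, s))
            ops->write_to_target(to_addr, s);
        else if (!ops->record_badblock(to_addr, s))
            return false;   /* abort recovery only as a last resort */

        from_addr += s;
        to_addr += s;
        sectors -= s;
    }
    return true;
}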
Signed-off-by: NeilBrown <neilb@suse.de>
If a read error is detected during recovery the code currently
fails the read device.
This isn't really necessary: recovery_request_write will signal
a write error to end_sync_write, which will record a write
error on the destination device, and that will record a bad block
there or kick it from the array.
So just remove this call to md_error.
Signed-off-by: NeilBrown <neilb@suse.de>
If we get a write error during resync/recovery don't fail the device
but instead record a bad block. If that fails we can then fail the
device.
Signed-off-by: NeilBrown <neilb@suse.de>
We already attempt to fix read errors found during normal IO
and a 'repair' process.
It is best to try to repair them at any time they are found,
so move a test so that during sync and check a read error will
be corrected by over-writing with good data.
If both (all) devices have known bad blocks in the sync section we
won't try to fix even though the bad blocks might not overlap. That
should be considered later.
Also if we hit a read error during recovery we don't try to fix it.
It would only be possible to fix if there were at least three copies
of data, which is not very common with RAID10. But it should still
be considered later.
Signed-off-by: NeilBrown <neilb@suse.de>
When we get a write error (in the data area, not in metadata),
update the badblock log rather than failing the whole device.
As the write may well span many blocks, we try writing each
block individually and only log the ones which fail.
Signed-off-by: NeilBrown <neilb@suse.de>
If we succeed in writing to a block that was recorded as
being bad, we clear the bad-block record.
This requires some delayed handling as the bad-block-list update has
to happen in process-context.
Signed-off-by: NeilBrown <neilb@suse.de>
Writing to known bad blocks on drives that have seen a write error
is asking for trouble. So try to avoid these blocks.
Signed-off-by: NeilBrown <neilb@suse.de>
When recovering one or more devices, if all the good devices have
bad blocks we should record a bad block on the device being rebuilt.
If this fails, we need to abort the recovery.
To ensure we don't think that we aborted later than we actually did,
we need to move the check for MD_RECOVERY_INTR earlier in md_do_sync,
in particular before mddev->curr_resync is updated.
Signed-off-by: NeilBrown <neilb@suse.de>
During resync/recovery limit the size of the request to avoid
reading into a bad block that does not start at-or-before the current
read address.
Similarly if there is a bad block at this address, don't allow the
current request to extend beyond the end of that bad block.
Now that we don't ever read from known bad blocks, it is safe to allow
devices with those blocks into the array.
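The clamping rule can be stated simply: if a bad block starts at or before the read address, read no further than its end; if it starts later, stop just before it. A simplified sketch (struct badblock and clamp_resync_read are illustrative, not the md code):
#include <stdint.h>

typedef uint64_t sector_t;

/* A known bad range on the device (hypothetical, simplified). */
struct badblock {
    sector_t first_bad; /* first bad sector */
    int bad_sectors;    /* length of the bad range */
};

/* Clamp a resync/recovery read starting at 'sector'.  'bb' is the first bad
 * block overlapping the window, or NULL if the window is clean.
 */
static int clamp_resync_read(sector_t sector, int max_sectors,
                             const struct badblock *bb)
{
    int good;

    if (!bb)
        return max_sectors;     /* clean: read the full window */

    if (bb->first_bad <= sector) {
        /* Bad block covers the start: don't read past its end. */
        good = bb->first_bad + bb->bad_sectors - sector;
        return good < max_sectors ? good : max_sectors;
    }

    /* Bad block starts later: stop just before it. */
    good = bb->first_bad - sector;
    return good < max_sectors ? good : max_sectors;
}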
Signed-off-by: NeilBrown <neilb@suse.de>
When attempting to repair a read error, don't read from
devices with a known bad block.
As we are only reading PAGE_SIZE blocks, we don't try to
narrow down to smaller regions in the hope that only part of this
page is bad - it isn't worth the effort.
Signed-off-by: NeilBrown <neilb@suse.de>
When redirecting a read error to a different device, we must
again avoid bad blocks and possibly split the request.
Spin_lock typo fixed thanks to Dan Carpenter <error27@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
This patch just covers the basic read path:
1/ read_balance needs to check for badblocks, and return not only
the chosen slot, but also how many good blocks are available
there.
2/ read submission must be ready to issue multiple reads to
different devices as different bad blocks on different devices
could mean that a single large read cannot be served by any one
device, but can still be served by the array.
This requires keeping count of the number of outstanding requests
per bio. This count is stored in 'bi_phys_segments'.
On read error we currently just fail the request if another target
cannot handle the whole request. The next patch refines that a bit.
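Conceptually, read_balance now reports both a device and how many good sectors it can serve from the start of the request; anything left over is issued as further sub-reads, and the count of sub-reads is what goes into 'bi_phys_segments'. A hedged outline with hypothetical hooks, not the raid1 code itself:
#include <stdint.h>

typedef uint64_t sector_t;

/* Illustrative result of a bad-block-aware read_balance(). */
struct read_choice {
    int disk;           /* chosen device, or -1 if none can serve */
    int good_sectors;   /* contiguous good sectors from 'sector' */
};

/* Hypothetical hooks standing in for the real raid1 machinery. */
struct read_ops {
    struct read_choice (*read_balance)(sector_t sector, int sectors);
    void (*submit_read)(int disk, sector_t sector, int sectors);
};

/* Issue a (possibly split) read; the return value is the number of sub-reads
 * issued, which the caller keeps in the bio's bi_phys_segments-style counter.
 */
static int issue_read(const struct read_ops *ops, sector_t sector, int sectors)
{
    int pending = 0;

    while (sectors > 0) {
        struct read_choice rc = ops->read_balance(sector, sectors);
        int n;

        if (rc.disk < 0)
            break;      /* no device can serve this range */

        n = rc.good_sectors < sectors ? rc.good_sectors : sectors;
        ops->submit_read(rc.disk, sector, n);
        pending++;
        sector += n;
        sectors -= n;
    }
    return pending;
}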
Signed-off-by: NeilBrown <neilb@suse.de>
When a loop ends with a large if, it can be neater to change the
if to invert the condition and just 'continue'.
Then the body of the if can be indented to a lower level.
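For illustration only (a generic example of the pattern, not the md code):
#include <stdbool.h>
#include <stddef.h>

struct item { bool interesting; int value; };

/* Before: the whole body sits inside a large if, one level deeper. */
static int sum_before(const struct item *items, size_t n)
{
    int sum = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (items[i].interesting) {
            /* imagine many more lines here */
            sum += items[i].value;
        }
    }
    return sum;
}

/* After: invert the condition and 'continue'; the body stays flat. */
static int sum_after(const struct item *items, size_t n)
{
    int sum = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (!items[i].interesting)
            continue;
        /* imagine many more lines here */
        sum += items[i].value;
    }
    return sum;
}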
Signed-off-by: NeilBrown <neilb@suse.de>
On a successful write to a known bad block, flag the stripe_head (sh)
so that raid5d can remove the known bad block from the list.
Signed-off-by: NeilBrown <neilb@suse.de>
When a write error is detected, don't mark the device as failed
immediately but rather record the fact for handle_stripe to deal with.
Handle_stripe then attempts to record a bad block. Only if that fails
does the device get marked as faulty.
Signed-off-by: NeilBrown <neilb@suse.de>
If we get an uncorrectable read error - record a bad block rather than
failing the device.
And if these errors (which may be due to known bad blocks) cause
recovery to be impossible, record a bad block on the recovering
devices, or abort the recovery.
As we might abort a recovery without failing a device we need to teach
RAID5 about recovery_disabled handling.
Signed-off-by: NeilBrown <neilb@suse.de>
There are two times that we might read in raid5:
1/ when a read request fits within a chunk on a single
working device.
In this case, if there is any bad block in the range of
the read, we simply fail the cache-bypass read and
perform the read through the stripe cache.
2/ when reading into the stripe cache. In this case we
mark as failed any device which has a bad block in that
strip (1 page wide).
Note that we will both avoid reading and avoid writing.
This is correct (as we will never read from the block, there
is no point writing), but not optimal (as writing could 'fix'
the error) - that will be addressed later.
If we have not seen any write errors on the device yet, we treat a bad
block like a recent read error. This will encourage an attempt to fix
the read error which will either generate a write error, or will
ensure good data is stored there. We don't yet forget the bad block
in that case. That comes later.
Now that we honour bad blocks when reading we can allow devices with
bad blocks into the array.
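A hedged sketch of the two decisions above; badblock_check_fn is a hypothetical stand-in for the real bad-block lookup, and the function names are illustrative:
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* Hypothetical predicate: true if any known bad block overlaps
 * [sector, sector + sectors) on the given device.
 */
typedef bool (*badblock_check_fn)(int disk, sector_t sector, int sectors);

/* Case 1: a read that would bypass the stripe cache must fall back to the
 * cache path if the range contains any bad block.
 */
static bool can_bypass_stripe_cache(badblock_check_fn has_bad, int disk,
                                    sector_t sector, int sectors)
{
    return !has_bad(disk, sector, sectors);
}

/* Case 2: when filling the stripe cache, a device with a bad block in the
 * one-page strip is treated as failed for that stripe - neither read from
 * nor written to.
 */
static bool device_usable_for_strip(badblock_check_fn has_bad, int disk,
                                    sector_t strip_sector, int strip_sectors)
{
    return !has_bad(disk, strip_sector, strip_sectors);
}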
Signed-off-by: NeilBrown <neilb@suse.de>
raid1d is too big with several deep branches.
So separate them out into their own functions.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
If we cannot read a block from anywhere during recovery, there is
now a better approach than just giving up.
We can record a bad block on each device and keep going - being
careful not to clear the bad block when a write succeeds, as it might,
because that would be a write of incorrect data.
We have now reached the state where - for raid1 - we only call
md_error if md_set_badblocks has failed.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
If we find a bad block while writing as part of resync/recovery we
need to report that back to raid1d which must record the bad block,
or fail the device.
Similarly when fixing a read error, a further error should just
record a bad block if possible rather than failing the device.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
When we get a write error (in the data area, not in metadata),
update the badblock log rather than failing the whole device.
As the write may well span many blocks, we try writing each
block individually and only log the ones which fail.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
When performing write-behind we allocate pages to store the data
during write.
Previously we just kept a list of pages. Now we keep a list of
bio_vecs, which include offset and size.
This means that the r1bio has complete information to create a new
bio which will be needed for retrying after write errors.
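The practical difference: each write-behind buffer now remembers its offset and length as well as its page, which is exactly what is needed to rebuild a bio for a retry. A simplified sketch of the bookkeeping (not the actual r1bio layout):
#include <stddef.h>

/* Mirrors the information a struct bio_vec carries. */
struct behind_vec {
    void *page;             /* stands in for struct page * */
    unsigned int offset;    /* offset of the data within the page */
    unsigned int len;       /* number of bytes of data */
};

/* Per-request write-behind state: enough to reconstruct a bio later. */
struct behind_state {
    struct behind_vec *vecs;
    size_t nr_vecs;
};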
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>
If we succeed in writing to a block that was recorded as
being bad, we clear the bad-block record.
This requires some delayed handling as the bad-block-list update has
to happen in process-context.
Signed-off-by: NeilBrown <neilb@suse.de>
Reviewed-by: Namhyung Kim <namhyung@gmail.com>