This erratum can occur if a single-precision floating-point,
double-precision floating-point or vector floating-point instruction on a
mispredicted branch path signals one of the floating-point data interrupts
which are enabled by the SPEFSCR (FINVE, FDBZE, FUNFE or FOVFE bits). This
interrupt must be recorded in a one-cycle window when the misprediction is
resolved. If this extremely rare event should occur, the result could be:
The SPE Data Exception from the mispredicted path may be reported
erroneously if a single-precision floating-point, double-precision
floating-point or vector floating-point instruction is the second
instruction on the correct branch path.
According to the erratum description, some EFP instructions which are not
supposed to trigger SPE exceptions can trigger them in this case.
However, since we haven't emulated these instructions here, a signal will be
sent to userspace, and the userspace application would exit.
This patch re-issues the EFP instruction that we haven't emulated,
so that the hardware can properly execute it again if this case happens.
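A minimal sketch of the idea, assuming kernel context; the handler name and the
spe_insn_is_emulated()/emulate_spe_insn() helpers are hypothetical, not the
actual powerpc interfaces:

    /* Hypothetical sketch only -- not the real exception handler. */
    int handle_spe_data_exception(struct pt_regs *regs, u32 insn)
    {
            if (!spe_insn_is_emulated(insn)) {
                    /*
                     * Spurious exception from the erratum: leave regs->nip
                     * pointing at the instruction so it is re-issued and
                     * executed by the hardware on return, instead of
                     * sending SIGFPE to the process.
                     */
                    return 0;
            }

            emulate_spe_insn(regs, insn);   /* normal emulation path */
            regs->nip += 4;                 /* step past the instruction */
            return 0;
    }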
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
FSL PCIe controller v2.1:
- New MSI inbound window
- Same inbound window addresses as PCIe controller v1.x
Added a new pit_t member (pmit) to struct ccsr_pci for the MSI inbound window.
FSL PCIe controller v2.2 and v2.3:
- Different addresses for PCIe inbound windows 3, 2 and 1
- Exposed PCIe inbound window 0
- New PCIe interrupt status register
Added the new config and interrupt status registers to struct ccsr_pci and
updated the pit_t array size to reflect the 4 inbound windows.
The device tree is used to maintain backward compatibility, i.e. the inbound
window 1 index is updated depending upon the "compatible" field within the
PCIe node.
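A rough sketch of the register-block layout being described; the offsets,
reserved padding and the status-register name are assumptions, not the exact
struct ccsr_pci from fsl_pci.h:

    /* Illustrative layout only. */
    typedef struct pci_inbound_window {
            __be32  pitar;          /* inbound translation address */
            u8      res1[4];
            __be32  piwbar;         /* inbound window base address */
            __be32  piwbear;        /* extended base address       */
            __be32  piwar;          /* inbound window attributes   */
            u8      res2[12];
    } pit_t;

    struct ccsr_pci {
            /* outbound windows and config registers precede these */
            pit_t   pmit;           /* MSI inbound window (v2.1 and later) */
            pit_t   pit[4];         /* PCIe inbound windows 3..0           */
            __be32  pex_int_status; /* new interrupt status register (assumed name) */
    };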
Signed-off-by: Prabhakar Kushwaha <prabhakar@freescale.com>
Acked-by: Roy Zang <tie-fei.zang@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
If the spin table is located in the linear mapping (which can happen if
we have 4G or more of memory), we need to access the spin table via a
cacheable, coherent mapping, as we do on ppc32, and do an explicit cache
flush.
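A hedged sketch of the approach; the variable names and the spin-table
structure are assumptions:

    /* Sketch only: if the spin table falls inside the kernel linear
     * mapping, use the cacheable view and flush it explicitly;
     * otherwise fall back to an uncached ioremap(). */
    if (spin_table_addr < memblock_end_of_DRAM()) {
            spin_table = phys_to_virt(spin_table_addr);
            spin_table->addr_l = start_addr;
            flush_dcache_range((unsigned long)spin_table,
                               (unsigned long)spin_table + sizeof(*spin_table));
    } else {
            spin_table = ioremap(spin_table_addr, sizeof(*spin_table));
            spin_table->addr_l = start_addr;
    }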
See the following commit for the ppc32 version of this issue:
commit d1d47ec6e6
Author: Peter Tyser <ptyser@xes-inc.com>
Date: Fri Dec 18 16:50:37 2009 -0600
powerpc/85xx: Fix SMP when "cpu-release-addr" is in lowmem
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
On upcoming hardware, we have a PCI adapter with two functions, one of
which uses MSI and the other uses MSI-X. This adapter, when MSI is
disabled using the "old" firmware interface (RTAS_CHANGE_FN), still
signals an MSI-X interrupt and triggers an EEH. We are working with the
vendor to ensure that the hardware is not at fault, but if we use the
"new" interface (RTAS_CHANGE_MSI_FN) to disable MSI, we also
automatically disable MSI-X and the adapter does not appear to signal
any stray MSI-X interrupt.
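A rough sketch of the resulting teardown logic; the function, the fallback and
the return-value convention are assumptions, though RTAS_CHANGE_FN and
RTAS_CHANGE_MSI_FN are the tokens named above:

    /* Hypothetical sketch of the teardown path. */
    static void rtas_disable_msi(struct pci_dn *pdn)
    {
            /*
             * RTAS_CHANGE_MSI_FN disables both MSI and MSI-X; the older
             * RTAS_CHANGE_FN leaves this adapter's MSI-X function armed.
             */
            if (rtas_change_msi(pdn, RTAS_CHANGE_MSI_FN, 0) != 0)
                    rtas_change_msi(pdn, RTAS_CHANGE_FN, 0);
    }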
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Acked-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
If firmware allows us to map all of a partition's memory for DMA on a
particular bridge, create a 1:1 mapping of that memory. Add hooks for
dealing with hotplug events. Dynamic DMA windows can use a page size larger
than the default, and we use the largest one possible.
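A hedged sketch of the page-size selection, assuming the firmware query
returns a bitmask with one bit per supported IOMMU page size (the bit
assignments here are assumptions):

    /* Hypothetical: pick the largest IOMMU page size the window supports. */
    static int ddw_largest_page_shift(u32 page_size_mask)
    {
            if (page_size_mask & 0x4)
                    return 24;      /* 16MB pages */
            if (page_size_mask & 0x2)
                    return 16;      /* 64KB pages */
            return 12;              /* default 4KB pages */
    }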
Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Move SPRN_PID declarations in various locations into one place.
Signed-off-by: Tseng-Hui (Frank) Lin <thlin@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Create the lnx,oops-log NVRAM partition, and capture the end of the printk
buffer in it when there's an oops or panic. If we can't create the
lnx,oops-log partition, capture the oops/panic report in ibm,rtas-log.
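A minimal sketch of the capture path, assuming a kmsg dumper is used;
nvram_write_os_partition() stands in for whatever actually writes the
partition, and the callback follows one version of the kmsg_dump API:

    static char oops_buf[4096];

    /* Hypothetical sketch: copy the tail of the printk buffer into the
     * lnx,oops-log partition on an oops or panic. */
    static void oops_to_nvram(struct kmsg_dumper *dumper,
                              enum kmsg_dump_reason reason)
    {
            size_t len;

            if (reason != KMSG_DUMP_OOPS && reason != KMSG_DUMP_PANIC)
                    return;

            kmsg_dump_get_buffer(dumper, false, oops_buf, sizeof(oops_buf), &len);
            nvram_write_os_partition("lnx,oops-log", oops_buf, len);
    }

    static struct kmsg_dumper nvram_kmsg_dumper = {
            .dump = oops_to_nvram,
    };
    /* registered at init time with kmsg_dump_register(&nvram_kmsg_dumper) */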
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Adapt the functions used to create and write to the RTAS-log partition
to work with any OS-type partition.
Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
memblock_enforce_memory_limit() takes the desired maximum quantity of memory
to end up with, not an address above which memory will not be used.
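For illustration (not the patched code), capping usable memory at 512 MB
therefore looks like this; the argument is a quantity, not an end address:

    /* Keep at most 512 MB of memory in use. */
    memblock_enforce_memory_limit(512UL << 20);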
Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The rtas_event_scan() function uses smp_processor_id() to select a
starting point in cpu_online_mask, and does so under the protection
of get_online_cpus(). This might not select the current processor
in any case, so switch to raw_smp_processor_id().
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The patch below removes an extra "l" in the word.
Signed-off-by: Justin P. Mattock <justinmattock@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
A number of drivers are using pgprot_writecombine() to enable write
combining on userspace mappings. Implement it on powerpc.
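A minimal sketch of what such an implementation can look like, assuming the
existing non-cached write-combining helper is reused:

    /* Write-combining userspace mappings become cache-inhibited,
     * non-guarded mappings on powerpc. */
    #define pgprot_writecombine(prot)       pgprot_noncached_wc(prot)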
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Use the new functions and free the descriptor when the virq is
destroyed.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Define the ARCH_IRQ_INIT_FLAGS instead of fixing it up in a loop.
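A hedged sketch of the idea; which flag value powerpc actually needs is an
assumption here:

    /* Declaring the default flags once lets the generic irq code apply
     * them when descriptors are allocated, instead of the arch fixing
     * them up in a loop at init time. */
    #define ARCH_IRQ_INIT_FLAGS     (IRQ_NOREQUEST)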
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Minor cleanup of notifier_from_errno() usage in powerpc.
notifier_from_errno() now contains the if (ret)/else conditional, so there is
no need to do it in the powerpc code.
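For illustration (a hypothetical call site), the notifier callback can now
simply return the helper's result:

    static int cpu_callback(struct notifier_block *nb, unsigned long action,
                            void *hcpu)
    {
            int ret = do_work(hcpu);        /* hypothetical work */

            /* notifier_from_errno(0) is NOTIFY_OK, so no if/else needed */
            return notifier_from_errno(ret);
    }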
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Untested, but looks like an obvious typo to me.
[BenH: No feedback, but it's obviously wrong]
Signed-off-by: Nicolas Kaiser <nikai@nikai.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Spinlocks on shared processor partitions use H_YIELD to notify the
hypervisor that we are waiting on another virtual CPU. Unfortunately this
means the hcall tracepoints can recurse.
The patch below adds a per-cpu depth counter and checks it on both the entry
and exit hcall tracepoints.
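A minimal sketch of the entry-side guard, with assumed names; interrupt
disabling and the exit-side counterpart are omitted:

    static DEFINE_PER_CPU(unsigned int, hcall_trace_depth);

    void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
    {
            unsigned int *depth = this_cpu_ptr(&hcall_trace_depth);

            if (*depth)             /* re-entered via H_YIELD: bail out */
                    return;

            (*depth)++;
            trace_hcall_entry(opcode, args);        /* may take a spinlock */
            (*depth)--;
    }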
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: stable@kernel.org
When converting to the new cpumask code I screwed up:
- if (cpu_isset(cpu, numa_cpumask_lookup_table[node])) {
- cpu_clear(cpu, numa_cpumask_lookup_table[node]);
+ if (cpumask_test_cpu(cpu, node_to_cpumask_map[node])) {
+ cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
This was introduced in commit 25863de07a (powerpc/cpumask: Convert NUMA code
to new cpumask API)
cpu_clear() should have been converted to cpumask_clear_cpu(), not
cpumask_set_cpu(). Fix it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: <stable@kernel.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
There is no need to start up the timer and monitor topology changes on a
dedicated processor partition, so disable it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The rest of the NUMA code expects an OF associativity property with
the first cell containing the length. Without this fix all topology changes
cause us to misparse the property and put the cpu into node 0.
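A hedged sketch of the expected layout, with hypothetical names:

    /* The NUMA parsing code expects cell 0 to hold the number of
     * associativity entries that follow, like the ibm,associativity
     * device tree property. */
    assoc[0] = nr_entries;
    for (i = 0; i < nr_entries; i++)
            assoc[i + 1] = vphn_data[i];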
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
The hypervisor uses unsigned 1-byte counters to signal topology changes to
the OS. Since they can wrap, we need to check for any difference, not just
whether the hypervisor count is greater than the previous count.
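A small sketch of the comparison, with assumed names:

    u8 new_count = hyp_counts[i];

    /* The 8-bit counter can wrap, so compare for inequality rather
     * than using "new_count > last_counts[i]". */
    if (new_count != last_counts[i]) {
            last_counts[i] = new_count;
            changed = 1;
    }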
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
VPHN supports up to 8 distance fields, but the number of entries in
ibm,associativity-reference-points signifies how many are in use.
Don't look at all the VPHN counts, only at distance_ref_points_depth worth
of them.
Since we already cap our distance metrics at MAX_DISTANCE_REF_POINTS,
use that to size the VPHN arrays and add a BUILD_BUG_ON to avoid it growing
larger than the VPHN maximum of 8.
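A hedged sketch of the sizing and the guard; the identifiers besides
MAX_DISTANCE_REF_POINTS are assumptions:

    #define VPHN_MAX_FIELDS         8       /* architectural maximum */

    /* Per-cpu copy of the hypervisor change counters, sized by the
     * number of reference points the platform actually uses. */
    static u8 vphn_cpu_change_counts[NR_CPUS][MAX_DISTANCE_REF_POINTS];

    static int __init vphn_check_limits(void)
    {
            /* Catch MAX_DISTANCE_REF_POINTS growing past what VPHN reports. */
            BUILD_BUG_ON(MAX_DISTANCE_REF_POINTS > VPHN_MAX_FIELDS);
            return 0;
    }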
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>