Commit Graph

725 Commits

Author SHA1 Message Date
Paul Mackerras
3cecdda3f1 Merge branch 'for-2.6.25' of master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6 into merge 2008-03-03 21:31:09 +11:00
Michael Ellerman
da40451bba [POWERPC] Convert the cell IOMMU fixed mapping to 16M IOMMU pages
The only tricky part is we need to adjust the PTE insertion loop to
cater for holes in the page table. The PTEs for each segment start on
a 4K boundary, so with 16M pages we have 16 PTEs per segment and then
a gap to the next 4K page boundary.
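
As a rough illustration of that arithmetic, here is a standalone sketch (not the kernel code; the 256MB segment size and 8-byte PTE size are assumptions inferred from the 16-PTEs-per-segment figure above):

#include <stdio.h>

int main(void)
{
	unsigned long segment_size  = 1UL << 28;  /* 256MB per IOMMU segment (assumed) */
	unsigned long iommu_page_sz = 1UL << 24;  /* 16MB IOMMU pages                  */
	unsigned long pte_size      = 8;          /* bytes per PTE (assumed)           */

	unsigned long ptes_per_seg = segment_size / iommu_page_sz;  /* 16              */
	unsigned long bytes_used   = ptes_per_seg * pte_size;       /* 128             */
	unsigned long gap          = 4096 - bytes_used;             /* pad to next 4K  */

	printf("%lu PTEs per segment, %lu bytes used, %lu byte gap to the 4K boundary\n",
	       ptes_per_seg, bytes_used, gap);
	return 0;
}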

It might be possible to allocate the PTEs for each segment separately,
saving the memory currently filling the gaps. However we'd need to
check that's OK with the hardware, and that it actually saves memory.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
225d49050f [POWERPC] Allow for different IOMMU page sizes in cell IOMMU code
Make some preliminary changes to cell_iommu_alloc_ptab() to allow it to
take the page size as a parameter rather than assuming IOMMU_PAGE_SIZE.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
3d3e6da17d [POWERPC] Cell IOMMU: n_pte_pages is in 4K page units, not IOMMU_PAGE_SIZE
We use n_pte_pages to calculate the stride through the page tables, but
we also use it to set the NPPT value in the segment table entry. That is
defined as the number of 4K pages per segment, so we should calculate
it as such regardless of the IOMMU page size.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
7d432ff1b7 [POWERPC] Split setup of IOMMU stab and ptab, allocate dynamic/fixed ptabs separately
Currently the cell IOMMU code allocates the entire IOMMU page table in a
contiguous chunk. This is nice and tidy, but for machines with larger
amounts of RAM the page table allocation can fail due to it simply being
too large.

So split the segment table and page table setup routine, and arrange to
have the dynamic and fixed page tables allocated separately.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
edf441fb80 [POWERPC] Move allocation of cell IOMMU pad page
There's no need to allocate the pad page unless we're going to actually
use it - so move the allocation to where we know we're going to use it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
08e024272e [POWERPC] Remove unused pte_offset variable
The cell IOMMU code no longer needs to save the pte_offset variable
separately; it is incorporated into tbl->it_offset.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
0d7386ebff [POWERPC] Use it_offset not pte_offset in cell IOMMU code
The cell IOMMU tce build and free routines use pte_offset to convert
the index passed from the generic IOMMU code into a page table offset.

This takes into account the SPIDER_DMA_OFFSET which sets the top bit
of every DMA address.

However it doesn't cater for the IOMMU window starting at a non-zero
address, as the base of the window is not incorporated into pte_offset
at all.

As it turns out, tbl->it_offset already contains the value we need: it
takes into account both the base of the window and pte_offset. So use
it instead!

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:15 +01:00
Michael Ellerman
f9660e8a6c [POWERPC] Clean up cell IOMMU fixed mapping terminology
It's called the fixed mapping, not the static mapping.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:14 +01:00
Jens Osterkamp
f3c1ed9720 [POWERPC] enable hardware watchpoints on cell blades
Ulrich Weigand found that the hardware watchpoints on cell were not
working, back in November:

http://ozlabs.org/pipermail/linuxppc-dev/2007-November/046135.html

This patch sets them during initialization.

Signed-off-by: Jens Osterkamp <jens@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2008-03-03 08:03:14 +01:00
Andre Detsch
2a58aa33da [POWERPC] spufs: fix use time accounting on SPE-overcommit
The spu_runcntl_RW register is restored within spu_restore function.
So, at the end of spu_bind_context, the SPU context is not just loaded,
but running.

This change corrects the state switch to account the time as USER.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:48:55 +11:00
Arnd Bergmann
c92a1acb67 [POWERPC] spufs: serialize SLB invalidation against SLB loading
There is a potential race between flushes of the entire SLB in the MFC
and the point where new entries are being established. The problem is
that we might put a ESID entry into the MFC SLB when the VSID entry has
just been cleared by the global flush.

This can be circumvented by holding the register_lock throughout both
the flushing and the creation of SLB entries.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:19:52 +11:00
Arnd Bergmann
cc4b7c1814 [POWERPC] spufs: invalidate SLB translation before adding a new entry
When we replace an SLB entry in the MFC after using up all the available
entries, there is a short window in which an incorrect entry is marked
as valid.

The problem is that the 'valid' bit is stored in the ESID, which is
always written after the VSID. Overwriting the VSID first will make the
original ESID entry point to the new VSID, which means that any
concurrent DMA accessing the old ESID ends up being redirected to the
new virtual address.  A few cycles later, we write the new ESID and
everything is fine again.

That race can be closed by writing a zero entry to the ESID first, which
makes sure that the VSID is not accessed until we write the new ESID.

Note that we don't actually need to invalidate the SLB entry using the
invalidation register, which would also flush any ERAT entries for that
segment, because the segment translation does not become invalid but is
only removed from the SLB cache.
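
A minimal standalone sketch of that write ordering, assuming 64-bit ESID/VSID register slots; the pointers and the compiler barrier are placeholders for the real MFC accessors and synchronization, not the kernel's code:

#include <stdint.h>

static void mfc_slb_replace(volatile uint64_t *esid_reg,
			    volatile uint64_t *vsid_reg,
			    uint64_t new_esid, uint64_t new_vsid)
{
	*esid_reg = 0;                         /* 1. clear the valid ESID first, so the
						*    old entry can't reach any VSID   */
	__asm__ __volatile__("" ::: "memory"); /* stand-in for the required ordering  */
	*vsid_reg = new_vsid;                  /* 2. now safe to write the new VSID   */
	__asm__ __volatile__("" ::: "memory");
	*esid_reg = new_esid;                  /* 3. publish the new, valid ESID last */
}

int main(void)
{
	volatile uint64_t esid = 0x1000, vsid = 0x2000;

	mfc_slb_replace(&esid, &vsid, 0x1001, 0x2002);
	return 0;
}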

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:17:49 +11:00
Arnd Bergmann
fae9ca7915 [POWERPC] spufs: synchronize IRQ when disabling
There is a small race between the context save procedure
and the SPU interrupt handling, where we expect all interrupt
processing to have finished after disabling them, while
an interrupt is still being processed on another CPU.

The obvious fix is to call synchronize_irq() after disabling
the interrupts at the start of the context save procedure
to make sure we never access the SPU any more during an
ongoing save or even after that.

Thanks to Benjamin Herrenschmidt for pointing this out.

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:16:48 +11:00
Jeremy Kerr
71791bee90 [POWERPC] spufs: fix order of sputrace thread IDs
Currently, we get the following output from sputrace:

[5.097935954] 1606: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
[5.097958164] 1606: spufs_ps_nopfn__insert (thread = 1605, spu = 15)
[5.097973529] 1607: spufs_ps_nopfn__enter (thread = 1605, spu = -1)
[5.097989174] 1607: spufs_ps_nopfn__insert (thread = 1605, spu = 14)

This leads me to believe that 160[67] is the current thread ID, and
1605 is the context backing the psmap.

However, the 'current' and 'owner' tids are reversed - the 'current'
tid is on the right. This change puts the current thread ID in the
left-hand column instead, and renames the right-hand column to
'ctxthread'.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-29 15:00:08 +11:00
Jeremy Kerr
0111a70186 [POWERPC] spufs: fix invalid scheduling of forgotten contexts
At present, we have a situation where a context with no owner is
re-scheduled by spu_forget:

	Thread 1: reading regs file	Thread 2: context owner

					spu_forget()
						- ctx->owner = NULL
						- set SPU_SCHED_WAS_ACTIVE

	spu_acquire_saved()
	- context is in saved state

	spu_release_saved()
	- SPU_SCHED_WAS_ACTIVE is set,
	  so spu_activate() the context,
	  which now has no owner

In spu_forget(), we shouldn't be requesting a re-schedule by setting
SPU_SCHED_WAS_ACTIVE. This change removes the set_bit in spu_forget(),
so that spu_release_saved() doesn't reinsert this destroyed context on
to the run queue.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-28 09:56:28 +11:00
Jeremy Kerr
d58831375d [POWERPC] spufs: fix context destruction during psmap fault
We have a small window where a spu context may be destroyed while
we're servicing a page fault (from another thread) to the context's
problem state mapping.

After we up_read() the mmap_sem, it's possible that the context is
destroyed by its owning thread, and so the later references to ctx
are invalid. This can manifest as a deadlock on the (now free()-ed)
context state mutex.

This change adds a reference to the context before we release the
mmap_sem, so that the context cannot be destroyed.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-27 18:47:53 +11:00
Paul Mackerras
f8303dd3db Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/lmb-2.6 2008-02-26 21:08:45 +11:00
Andre Detsch
61b36fc1f7 [POWERPC] cell: fix spurious false return from spu_trap_data_{map,seg}
At present, the __spu_trap_data_map and __spu_trap_data_seg functions
exit if spu->flags has the SPU_CONTEXT_SWITCH_ACTIVE bit set. This was
resulting in spurious returns from these functions, as they may be
legitimately called when we have this bit set.

We only use it in these two sanity checks, so this change removes the
flag completely. This fixes hangs in the page-fault path of SPE apps.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-20 14:57:36 +11:00
Jeremy Kerr
4ef110141b [POWERPC] spufs: fix scheduler starvation by idle contexts
2.6.25 has a regression where we can starve the scheduler by creating
(N_SPES+1) contexts, then running them one at a time.

The final context will never be run, as the other contexts are loaded on
the SPEs, none of which are reported as free (ie, spu->alloc_state !=
SPU_FREE), so spu_get_idle() doesn't give us a spu to run on. Because
all of the contexts are stopped, none are descheduled by the scheduler
tick, as spusched_tick returns if spu_stopped(ctx).

This change replaces the spu_stopped() check with checking for SCHED_IDLE
in ctx->policy. We set a context's policy to SCHED_IDLE when we're not
in spu_run(). We also favour SCHED_IDLE contexts when looking for contexts
to unbind, but leave their timeslice intact for later resumption.

This patch fixes the following test in the spufs-testsuite:
  tests/20-scheduler/02-yield-starvation

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
2008-02-19 10:12:02 +11:00
Linus Torvalds
b9e222904c Merge branch 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'merge' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  [POWERPC] Remove unused CONFIG_WANT_DEVICE_TREE
  [POWERPC] Cell RAS: Remove DEBUG, and add license and copyright
  [POWERPC] hvc_rtas_init() must be __init
  [POWERPC] free_property() must not be __init
  [POWERPC] vdso_do_func_patch{32,64}() must be __init
  [POWERPC] Remove generated files on make clean
  [POWERPC] Fix arch/ppc compilation - add typedef for pgtable_t
  [POWERPC] Wire up new timerfd syscalls
  [POWERPC] PS3: Update sys-manager button events
  [POWERPC] PS3: Sys-manager code cleanup
  [POWERPC] PS3: Use system reboot on restart
  [POWERPC] PS3: Fix bootwrapper hang bug
  [POWERPC] PS3: Fix reading pm interval in logical performance monitor
  [POWERPC] PS3: Fix setting bookmark in logical performance monitor
  [POWERPC] Fix DEBUG_PREEMPT warning when warning
2008-02-14 21:22:33 -08:00
Jan Blunck
1d957f9bf8 Introduce path_put()
* Add path_put() functions for releasing a reference to the dentry and
  vfsmount of a struct path in the right order

* Switch from path_release(nd) to path_put(&nd->path)

* Rename dput_path() to path_put_conditional()

[akpm@linux-foundation.org: fix cifs]
Signed-off-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
Acked-by: Christoph Hellwig <hch@lst.de>
Cc: <linux-fsdevel@vger.kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Steven French <sfrench@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-14 21:13:33 -08:00
Jan Blunck
4ac9137858 Embed a struct path into struct nameidata instead of nd->{dentry,mnt}
This is the central patch of a cleanup series. In most cases there is no good
reason why someone would want to use a dentry for itself. This series reflects
that fact and embeds a struct path into nameidata.

Together with the other patches of this series
- it enforced the correct order of getting/releasing the reference count on
  <dentry,vfsmount> pairs
- it prepares the VFS for stacking support since it is essential to have a
  struct path in every place where the stack can be traversed
- it reduces the overall code size:

without patch series:
   text    data     bss     dec     hex filename
5321639  858418  715768 6895825  6938d1 vmlinux

with patch series:
   text    data     bss     dec     hex filename
5320026  858418  715768 6894212  693284 vmlinux

This patch:

Switch from nd->{dentry,mnt} to nd->path.{dentry,mnt} everywhere.

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix cifs]
[akpm@linux-foundation.org: fix smack]
Signed-off-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Andreas Gruenbacher <agruen@suse.de>
Acked-by: Christoph Hellwig <hch@lst.de>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Casey Schaufler <casey@schaufler-ca.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-14 21:13:33 -08:00
Michael Ellerman
bdb226bac1 [POWERPC] Cell RAS: Remove DEBUG, and add license and copyright
arch/powerpc/platforms/cell/ras.c still has DEBUG #defined, which is no
longer necessary.  Disable it - this disables two pr_debugs().

While we're there this file should have a copyright notice and license,
so add both.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-14 22:11:02 +11:00
David S. Miller
d9b2b2a277 [LIB]: Make PowerPC LMB code generic so sparc64 can use it too.
Signed-off-by: David S. Miller <davem@davemloft.net>
2008-02-13 16:56:49 -08:00
Mathieu Desnoyers
fb40bd78b0 Linux Kernel Markers: support multiple probes
RCU-style multiple-probe support for the Linux Kernel Markers.  The common
case (one probe) is still fast and does not require dynamic allocation or a
supplementary pointer dereference on the fast path.

- Move preempt disable from the marker site to the callback.

Since we now have an internal callback, move the preempt disable/enable to the
callback instead of the marker site.

Since the callback change is done asynchronously (passing from a handler
that supports arguments to a handler that does not set up the arguments
when no arguments are passed), we can safely update it even if it is
outside the preempt disable section.

- Move probe arm to probe connection. Now, a connected probe is automatically
  armed.

Remove MARK_MAX_FORMAT_LEN, unused.

This patch modifies the Linux Kernel Markers API: it removes the probe
"arm/disarm" and changes the probe function prototype: it now expects a
va_list * instead of a "...".

If we want to have more than one probe connected to a marker at a given
time (LTTng, blktrace, SystemTap) then we need this patch. Without it,
connecting a second probe handler to a marker will fail.

It allows us, for instance, to do interesting combinations:

We can do standard tracing with LTTng and, at the same time, compute
statistics with SystemTap, or have a special trigger on an event that
calls a SystemTap script which stops flight-recorder tracing.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Mike Mason <mmlnx@us.ibm.com>
Cc: Dipankar Sarma <dipankar@in.ibm.com>
Cc: David Smith <dsmith@redhat.com>
Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
Cc: "Frank Ch. Eigler" <fche@redhat.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-13 16:21:20 -08:00
Linus Torvalds
dde0013782 Merge branch 'for-2.6.25' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'for-2.6.25' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  [POWERPC] Add arch-specific walk_memory_remove() for 64-bit powerpc
  [POWERPC] Enable hotplug memory remove for 64-bit powerpc
  [POWERPC] Add remove_memory() for 64-bit powerpc
  [POWERPC] Make cell IOMMU fixed mapping printk more useful
  [POWERPC] Fix potential cell IOMMU bug when switching back to default DMA ops
  [POWERPC] Don't enable cell IOMMU fixed mapping if there are no dma-ranges
  [POWERPC] Fix cell IOMMU null pointer explosion on old firmwares
  [POWERPC] spufs: Fix timing dependent false return from spufs_run_spu
  [POWERPC] spufs: No need to have a runnable SPU for libassist update
  [POWERPC] spufs: Update SPU_Status[CISHP] in backing runcntl write
  [POWERPC] spufs: Fix state_mutex leaks
  [POWERPC] Disable G5 NAP mode during SMU commands on U3
2008-02-08 09:31:42 -08:00
Miklos Szeredi
90d09e141b mount options: fix spufs
Add a .show_options super operation to spufs.

Use generic_show_options() and save the complete option string in
spufs_fill_super().

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-08 09:22:40 -08:00
Christoph Hellwig
74bedc4d56 libfs: rename simple_attr_close to simple_attr_release
simple_attr_close implements ->release so it should be named accordingly.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <stefano.brivio@polimi.it>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg KH <greg@kroah.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-08 09:22:34 -08:00
Christoph Hellwig
8b88b0998e libfs: allow error return from simple attributes
Sometimes simple attributes might need to return an error, e.g. when
acquiring a mutex interruptibly.  In fact we have that situation in
spufs already, which is the original user of the simple attributes.  This
patch merges the temporarily forked attributes in spufs back into the
main ones and allows them to return errors.
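
A hedged illustration of what the merged interface permits after this change: get/set handlers that take a mutex interruptibly and propagate the error. The names here (example_get, example_set, example_fops) are invented for the sketch; only DEFINE_SIMPLE_ATTRIBUTE and mutex_lock_interruptible are real kernel APIs:

#include <linux/fs.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(example_lock);
static u64 example_value;

static int example_get(void *data, u64 *val)
{
	int ret = mutex_lock_interruptible(&example_lock);
	if (ret)
		return ret;             /* e.g. -EINTR, now passed back to userspace */
	*val = example_value;
	mutex_unlock(&example_lock);
	return 0;
}

static int example_set(void *data, u64 val)
{
	int ret = mutex_lock_interruptible(&example_lock);
	if (ret)
		return ret;
	example_value = val;
	mutex_unlock(&example_lock);
	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(example_fops, example_get, example_set, "%llu\n");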

[akpm@linux-foundation.org: build fix]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: <stefano.brivio@polimi.it>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg KH <greg@kroah.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-02-08 09:22:34 -08:00
Michael Ellerman
44621be4b5 [POWERPC] Make cell IOMMU fixed mapping printk more useful
Currently the cell IOMMU fixed mapping just printks that it's been set up,
which is not particularly useful.  Much more interesting are the address
ranges for the different windows.  This adds one line to dmesg on a blade.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:40 +11:00
Michael Ellerman
4a8df1507e [POWERPC] Fix potential cell IOMMU bug when switching back to default DMA ops
If we get a 64-bit dma mask we switch to the fixed ops and call
cell_dma_dev_setup().  If the driver then switches back to a 32-bit dma
mask for any reason we don't call cell_dma_dev_setup() again, which
has the potential to leave bogus data in dev->archdata.dma_data.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:40 +11:00
Michael Ellerman
0e0b47abb7 [POWERPC] Don't enable cell IOMMU fixed mapping if there are no dma-ranges
In order for the cell IOMMU fixed mapping to work we need "dma-ranges"
properties in the device tree. If there are none then there's no point
enabling the fixed mapping support.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:40 +11:00
Michael Ellerman
ccd05d086f [POWERPC] Fix cell IOMMU null pointer explosion on old firmwares
The cell IOMMU fixed mapping support has a null pointer bug if you run
it on older firmwares that don't contain the "dma-ranges" properties.
Fix it and convert to using of_get_next_parent() while we're there.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:39 +11:00
Luke Browning
85687ff2b4 [POWERPC] spufs: Fix timing dependent false return from spufs_run_spu
Stop bits are only valid when the running bit is not set.  Status bits
carry over from one invocation of spufs_run_spu() to another, so the
RUNNING bit gets added to the previous state of the register which may
have been a remote library call.  In this case, it looks like another
library routine should be invoked, but the SPE is actually running.

This fixes a problem with a testcase that exercises the scheduler.

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:36 +11:00
Luke Browning
e66686b414 [POWERPC] spufs: No need to have a runnable SPU for libassist update
We don't need to update the libassist statistic with the context in a
runnable state, so do it after spu_disable_spu().

Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:36 +11:00
Masato Noguchi
732377c5f5 [POWERPC] spufs: Update SPU_Status[CISHP] in backing runcntl write
Currently, the kernel may fail to restart a SPE context which
has stopped and been swapped out.

This changes spu_backing_runcntl_write to emulate the real
SPU_Status register exactly.  When the SPU Run Control register
is written with SPU_RunCntl[Run] set to '1', the physical SPU
automatically sets SPU_Status[R] and clears SPU_Status[CISHP].

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:35 +11:00
Christoph Hellwig
eebead5b8f [POWERPC] spufs: Fix state_mutex leaks
Fix various state_mutex leaks.  The worst one was introduced by the
interruptible state_mutex conversion, but there have been a few before
too.  Notably, spufs_wait now returns without the state_mutex held when
returning an error, which actually cleans up some code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Luke Browning <lukebrowning@us.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-08 19:52:35 +11:00
Stephen Rothwell
c6d01179bf [POWERPC] Avoid possible extra of_node_put in axon_msi.c
I got this warning from gcc:
arch/powerpc/platforms/cell/axon_msi.c:118: warning: 'tmp' may be used uninitialized in this function

This turns out to be a false positive, but it pointed out that it was
possible for the error path in find_msi_translator() to do an extra
of_node_put on a node.  This fixes it by localising the ref counting
a bit.  As a side effect, the warning goes away.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:30:00 +11:00
Michael Ellerman
de4c928b84 [POWERPC] Avoid DMA exception when using axon_msi with IOMMU
There's a brown-paper-bag bug in axon_msi: we pass the address of our
FIFO directly to the hardware, without DMA mapping it.  This leads to
DMA exceptions if you enable MSI and the IOMMU.

The fix is to DMA map the FIFO correctly; dma_alloc_coherent() does
what we want, and we need to track both the virtual and physical addresses.
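
A minimal sketch of that kind of fix, assuming a hypothetical example_msi_fifo structure and size; only dma_alloc_coherent() itself is the real kernel API:

#include <linux/dma-mapping.h>

#define EXAMPLE_FIFO_SIZE 4096          /* illustrative size */

struct example_msi_fifo {
	void *virt;                     /* CPU reads/writes through this address */
	dma_addr_t phys;                /* the hardware is programmed with this  */
};

static int example_fifo_alloc(struct device *dev, struct example_msi_fifo *fifo)
{
	fifo->virt = dma_alloc_coherent(dev, EXAMPLE_FIFO_SIZE,
					&fifo->phys, GFP_KERNEL);
	if (!fifo->virt)
		return -ENOMEM;
	return 0;
}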

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:30:00 +11:00
Michael Ellerman
e4347dfb58 [POWERPC] Convert axon_msi to an of_platform driver
Now that we create of_platform devices earlier on cell, we can make the
axon_msi driver an of_platform driver.  This makes the code cleaner in
several ways, and most importantly means we have a struct device.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:30:00 +11:00
Michael Ellerman
bb125fb0e0 [POWERPC] Search for and publish cell OF platform devices earlier
Currently cell publishes OF devices at device_initcall() time, which
means the earliest a driver can bind to a device is also device_initcall()
time.  We have a driver we want to register before other devices, so
publish the devices at subsys_initcall() time.

This should not cause any behaviour change for existing drivers, as they
are still bound at device_initcall() time.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:29:59 +11:00
Andre Detsch
58119068cb [POWERPC] spufs: Fix memory leak on SPU affinity
The reference count for the "neighbor" spu context was not being
correctly decremented after use.  As a result, contexts used as
references during SPU affinity setup were not being deallocated,
leading to a memory leak.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:26:59 +11:00
Jeremy Kerr
60cf54db47 [POWERPC] spufs: Fix SPE single-step mode
Currently we only catch debug events through the 0x3fff status;
spufs_run_spu doesn't handle single-step SPE events.

This change adds a handler for conditions where the SPE is stopped due
to single-step-mode.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:26:59 +11:00
Christoph Hellwig
038200cfdc [POWERPC] spufs: Add marker-based tracing facility
This adds markers at two important points in the spufs code and a new
module (sputrace.ko) that allows reading these out through a proc file.

Long-term I'd rather see something like lttng extended to use the spufs
instrumentation, but for now I think this is a good enough quick
solution.  We'll probably want to add various additional events in
addition to the ones I have already.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-02-06 16:26:59 +11:00
Michael Ellerman
99e139126a [POWERPC] Cell IOMMU fixed mapping support
This patch adds support for setting up a fixed IOMMU mapping on certain
cell machines.  For 64-bit devices this avoids the performance overhead of
mapping and unmapping pages at runtime.  32-bit devices are unable to use
the fixed mapping.

The fixed mapping is established at boot, and maps all of physical memory
1:1 into device space at some offset.  On machines with < 30 GB of memory
we set up the fixed mapping immediately above the normal IOMMU window.

For example a machine with 4GB of memory would end up with the normal
IOMMU window from 0-2GB and the fixed mapping window from 2GB to 6GB. In
this case a 64-bit device wishing to DMA to 1GB would be told to DMA to
3GB, plus any offset required by firmware.  The firmware offset is encoded
in the "dma-ranges" property.

On machines with 30GB or more of memory, we are unable to place the fixed
mapping above the normal IOMMU window as we would run out of address space.
Instead we move the normal IOMMU window to coincide with the hash page
table; this region does not need to be part of the fixed mapping, as no
device should ever be DMA'ing to it.  We then set up the fixed mapping
from 0 to 32GB.
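
A toy calculation of the 4GB example above (illustrative only; the firmware offset from "dma-ranges" is left at zero here):

#include <stdio.h>

int main(void)
{
	unsigned long long GB = 1ULL << 30;
	unsigned long long dynamic_window = 2 * GB;     /* normal IOMMU window: 0..2GB  */
	unsigned long long fixed_base = dynamic_window; /* fixed window: 2GB..6GB       */
	unsigned long long firmware_offset = 0;         /* from "dma-ranges", assumed 0 */

	unsigned long long phys = 1 * GB;               /* DMA target in RAM            */
	unsigned long long dev_addr = fixed_base + phys + firmware_offset;

	printf("physical 0x%llx maps to device address 0x%llx\n", phys, dev_addr);
	return 0;
}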

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:11 +11:00
Michael Ellerman
c96b51265a [POWERPC] Split out the ioid fetching/checking logic
Split out the ioid fetching and checking logic so we can use it elsewhere
in a subsequent patch.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:11 +11:00
Michael Ellerman
4134791728 [POWERPC] Add support to cell_iommu_setup_page_tables() for multiple windows
Add support to cell_iommu_setup_page_tables() for handling two windows,
the dynamic window and the fixed window.  A fixed window size of 0
indicates that there is no fixed window at all.

Currently there are no callers who pass a non-zero fixed window, but the
upcoming fixed IOMMU mapping patch will change that.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:11 +11:00
Michael Ellerman
86865771ea [POWERPC] Split out the IOMMU logic from cell_dma_dev_setup()
Split the IOMMU logic out from cell_dma_dev_setup() into a separate
function.  If we're not using dma_direct_ops or dma_iommu_ops we don't
know what the hell's going on, so BUG.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:10 +11:00
Michael Ellerman
7fc67afc43 [POWERPC] Split cell_iommu_setup_hardware() into two parts
Split cell_iommu_setup_hardware() into two parts.  Split the page table
setup into cell_iommu_setup_page_tables() and the bits that kick the
hardware into cell_iommu_enable_hardware().

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:10 +11:00
Michael Ellerman
209bfbb478 [POWERPC] Split out the logic that allocates struct iommus
Split out the logic that allocates a struct iommu into a separate
function.  This can fail, but the calling code has never cared, so just
return if we can't allocate an iommu.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-31 12:11:10 +11:00
Paul Mackerras
bd45ac0c5d Merge branch 'linux-2.6' 2008-01-31 11:25:51 +11:00
Michael Ellerman
3ca6644e5c [POWERPC] Make IOMMU code safe for > 132 GB of memory
Currently the IOMMU code allocates one page for the segment table, which
isn't safe if we have more than 132 GB of RAM.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-25 22:52:54 +11:00
Michael Ellerman
f5d67bd5ec [POWERPC] Have cell use its own dma_direct_offset variable
Rather than using the global variable, have cell use its own variable
to store the direct DMA offset.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-25 22:52:54 +11:00
Michael Ellerman
110f95c9f0 [POWERPC] Set archdata.dma_data for direct DMA in cell_dma_dev_setup()
Store the direct_dma_offset in each device's dma_data in the case
where we're using the direct DMA ops.

We need to make sure we setup the ppc_md.pci_dma_dev_setup() callback
if we're using a non-zero offset.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-25 22:52:53 +11:00
Kay Sievers
af5ca3f4ec Driver core: change sysdev classes to use dynamic kobject names
All kobjects require a dynamically allocated name now. We no longer
need to keep track if the name is statically assigned, we can just
unconditionally free() all kobject names on cleanup.

Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2008-01-24 20:40:40 -08:00
Paul Mackerras
9156ad4833 Merge branch 'linux-2.6' 2008-01-24 10:07:21 +11:00
Grant Likely
e25c47ffa9 [POWERPC] cell: Use machine_*_initcall() hooks in platform code
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-01-17 14:57:15 +11:00
Paul Mackerras
a5a971129c [POWERPC] Fix build failure on Cell when CONFIG_SPU_FS=y
Commit aed3a8c9bb introduced a
definition of notify_spus_active in .../cell/spu_syscalls.c, and
another definition under #ifndef MODULE in .../cell/spufs/sched.c.
The latter is not necessary and causes the build to fail when
CONFIG_SPU_FS=y, so this removes it.  It also removes the export
of do_notify_spus_active, which is unnecessary.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
2008-01-02 15:56:30 +11:00
Bob Nelson
aed3a8c9bb [POWERPC] Oprofile: Remove dependency on spufs module
This removes an OProfile dependency on the spufs module.  This
dependency was causing a problem for multiplatform systems that are
built with support for Oprofile on Cell but try to load the oprofile
module on a non-Cell system.

Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-28 15:07:52 +11:00
Jeremy Kerr
cbea92383d [POWERPC] spufs: Don't leak kernel stack through an empty {i,m}box_info read
Based on an original patch from Arnd Bergmann
<arnd.bergmann@de.ibm.com>

If there's no entry in the mailbox, then a read on the _info file will
return data from an uninitialised variable.

This change returns EOF if there's no mailbox info available instead.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:22 +11:00
Andre Detsch
18789fb1c3 [POWERPC] spufs: DMA Restart after SIGSEGV
This fixes the behavior of spufs when a spu tries a DMA operation
based on a wrong / unavailable address.

Instead of just generating a SIGBUS signal, spufs now
generates a SIGSEGV signal and restarts the problematic DMA operation
after the execution of the application's signal handler.  This allows
applications to employ user-level paging systems.

Although the restart_dma function is called before the application's
signal handler, the operation is not actually performed at this time,
since the spu context is already stopped.  The operation only takes
place when spu_run is restarted (which happens automatically).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Aegis Lin
90608a2928 [POWERPC] spufs: Use separate timer for /proc/spu_loadavg calculation
The original spusched_timer was designed to take effect only when
a context is waiting in the runqueue.

This change adds an additional lower-frequency timer to handle the
spu_load updates only. The new timer is triggered every LOAD_FREQ ticks.

Signed-off-by: Aegis Lin <aegislin@gmail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Christoph Hellwig
c9101bdb1b [POWERPC] spufs: make state_mutex interruptible
Make most places that use spu_acquire/spu_acquire_saved interruptible;
this allows getting out of the spufs code when e.g. pressing ctrl+c.
There are a few places where we get called e.g. from spufs teardown
routines where we can't simply err out, so these are left with a comment.
For now I've also not touched the poll routines, because it's an open
question what libspe would expect in terms of interrupted system calls.

Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Christoph Hellwig
197b1a8263 [POWERPC] spufs: add enhanced simple attr macros
The simple attr macros currently used by spufs can't deal with the
handlers returning errors, which is required to make the state_mutex
interruptible.  This adds a local copy that allows for an error
return from the get/set handlers.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Luke Browning
e65c2f6fce [POWERPC] spufs: decouple spu scheduler from spufs_spu_run (asynchronous scheduling)
Change spufs_spu_run so that the context is queued directly to the
scheduler and the controlling thread advances directly to spufs_wait()
for spe errors and exceptions.

nosched contexts are treated the same as before.

Fixes from Christoph Hellwig <hch@lst.de>

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:21 +11:00
Masato Noguchi
9476141c18 [POWERPC] spufs: don't set reserved bits in spu interrupt status
This changes the spu context switch code to not write to the reserved bits
of the SPU interrupt status register.
The architecture book says the reserved fields should be set to zero.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Luke Browning
b192541b39 [POWERPC] spufs: spu_find_victim may choose wrong victim
Need to re-check priority after dropping lock.  Otherwise, a
more favored context may be preempted.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Luke Browning
91569531d1 [POWERPC] spufs: reorganize spu_run_init
This cleans up spu_run_init so that it does all of the spu
initialization for spufs_run_spu.  It initializes the spu context as
much as possible before it activates the spu and writes the runcntl
register.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Jeremy Kerr
d6ad39bc53 [POWERPC] spufs: rework class 0 and 1 interrupt handling
Based on original patches from
 Arnd Bergmann <arnd.bergman@de.ibm.com>; and
 Luke Browning <lukebr@linux.vnet.ibm.com>

Currently, spu contexts need to be loaded to the SPU in order to take
class 0 and class 1 exceptions.

This change makes the actual interrupt-handlers much simpler (ie,
set the exception information in the context save area), and defers the
handling code to the spufs_handle_class[01] functions, called from
spufs_run_spu.

This should improve the concurrency of the spu scheduling, leading to
greater SPU utilization when SPUs are overcommitted.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Jeremy Kerr
8af30675c3 [POWERPC] spufs: use #defines for SPU class [012] exception status
Add a few #defines for the class 0, 1 and 2 interrupt status bits, and
use them instead of magic numbers when we're setting or checking for
these interrupts.

Also, add a #define for the class 2 mailbox threshold interrupt mask.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:20 +11:00
Jeremy Kerr
c40aa47104 [POWERPC] spufs: fix incorrect interrupt status clearing in backing mbox stat poll
When doing a poll on the mbox stat file of a swapped-out context, we
clear the class 0 interrupt status, rather than the class 2 interrupt
status.

This change corrects the poll operation to clear the correct interrupt.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Luke Browning
cc210b3ec5 [POWERPC] spufs: add backing ops for privcntl register
This change encapsulates the spu_privcntl_RW register so that it can
be written through backing ops.  This is necessary so that spu contexts
can be initialized and queued to the scheduler in spufs_run_spu.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Arnd Bergmann
33bfd7a738 [POWERPC] spufs: block fault handlers in spu_acquire_runnable
This change disables the logic that faults-in spu contexts under the
covers from the page fault handler.  When a fault requires a runnable
context, the handler will block until the context is scheduled by
other means.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Jeremy Kerr
7cd58e4381 [POWERPC] spufs: move fault, lscsa_alloc and switch code to spufs module
Currently, part of the spufs code (switch.o, lscsa_alloc.o and fault.o)
is compiled directly into the kernel.

This change moves these components into the spufs module.

The lscsa and switch objects are fairly straightforward to move in.

For the fault.o module, we split the fault-handling code into two
parts: a/p/p/c/spu_fault.c and a/p/p/c/spufs/fault.c. The former is for
the in-kernel spu_handle_mm_fault function, and we move the rest of the
fault-handling code into spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:19 +11:00
Julio M. Merino Vidal
9b1d21f858 [POWERPC] spufs: fix typos in sched.c comments
Fix a few typos in the spufs scheduler comments

Signed-off-by: Julio M. Merino Vidal <jmerino@ac.upc.edu>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:46:18 +11:00
Masato Noguchi
c25620d766 [POWERPC] cell: wrap master run control bit
Add platform specific SPU run control routines to the spufs.  The current
spufs implementation uses the SPU master run control bit (MFC_SR1[S]) to
control SPE execution, but the PS3 hypervisor does not support the use of
this feature.

This change adds the run control wrapper routines spu_enable_spu() and
spu_disable_spu().  The bare metal routines use the master run control
bit, and the PS3 specific routines use the priv2 run control register.

An outstanding enhancement for the PS3 would be to add a guard to check
for incorrect access to the spu problem state when the spu context is
disabled.  This check could be implemented with a flag added to the spu
context that would inhibit mapping problem state pages, and a routine
to unmap spu problem state pages.  When the spu is enabled with
ps3_enable_spu() the flag would be set allowing pages to be mapped,
and when the spu is disabled with ps3_disable_spu() the flag would be
cleared and mapped problem state pages would be unmapped.
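
A rough sketch of the wrapper shape this describes, using an ops table; the structure, names and signatures are guesses for illustration rather than the actual spu management API:

struct spu;                                /* opaque for this sketch */

struct spu_run_ops {
	void (*enable_spu)(struct spu *spu);   /* bare metal: MFC_SR1[S]          */
	void (*disable_spu)(struct spu *spu);  /* PS3: priv2 run control register */
};

static const struct spu_run_ops *run_ops;  /* chosen at platform setup time */

void example_enable_spu(struct spu *spu)
{
	run_ops->enable_spu(spu);
}

void example_disable_spu(struct spu *spu)
{
	run_ops->disable_spu(spu);
}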

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-21 19:45:05 +11:00
Julia Lawall
1fe58a875e [POWERPC] cell/cbe_regs.c: Add missing of_node_put
There should be an of_node_put when breaking out of a loop that iterates
using for_each_node_by_type.

This was detected and fixed using the following semantic patch.
(http://www.emn.fr/x-info/coccinelle/)

// <smpl>
@@
identifier d;
type T;
expression e;
iterator for_each_node_by_type;
@@

T *d;
...
for_each_node_by_type(d,...)
  {... when != of_node_put(d)
       when != e = d
(
   return d;
|
+  of_node_put(d);
?  return ...;
)
...}
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Cc: Christian Krafft <krafft@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Erb <djerb@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-20 17:13:51 +11:00
Ishizaki Kou
7e1961ff49 [POWERPC] celleb: Split machine definition
This splits the machine definition for celleb into two definitions,
one for celleb_beat, and the other for celleb_native.  Though this
looks complex because some functions have been reordered, there are no
semantic changes beyond the split itself.

Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-20 16:15:30 +11:00
Ishizaki Kou
4751505cf7 [POWERPC] Cleanup calling mmio_nvram_init
This makes mmio_nvram_init() callable unconditionally by providing
a dummy definition when CONFIG_MMIO_NVRAM is not defined.

Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-20 16:15:27 +11:00
Jeremy Kerr
1e7710390f [POWERPC] cell: catch errors from sysfs_create_group()
We're currently getting a warning from not checking the result of
sysfs_create_group(), which is declared as __must_check.

This change introduces appropriate error handling for
spu_add_sysdev_attr_group().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:06 +01:00
Jeremy Kerr
684bd61401 [POWERPC] cell: handle SPE kernel mappings that cross segment boundaries
Currently, we have a possibility that the SLBs set up during context
switch don't cover the entirety of the necessary lscsa and code
regions, if these regions cross a segment boundary.

This change checks the start and end of each region, and inserts an SLB
entry for each, if unique. We also remove the assumption that the
spu_save_code and spu_restore_code reside in the same segment, by using
the specific code array for save and restore.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:05 +01:00
Jeremy Kerr
f6eb7d7ffe [POWERPC] cell: add spu_64k_pages_available() check
Add a function spu_64k_pages_available(), so that we can abstract the
explicit use of mmu_psize_defs() in lscsa_alloc.c.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:05 +01:00
Jeremy Kerr
4d43466d56 [POWERPC] cell: use spu_load_slb for SLB setup
Now that we have a helper function to set up an SPU SLB, use it for
__spu_trap_data_seg.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:04 +01:00
Jeremy Kerr
58bd403c3c [POWERPC] cell: handle kernel SLB setup in spu_base.c
Currently, the SPU context switch code (spufs/switch.c) sets up the
SPU's SLBs directly, which requires some low-level mm stuff.

This change moves the kernel SLB setup to spu_base.c, by exposing
a function spu_setup_kernel_slbs() to do this setup. This allows us
to remove the low-level mm code from switch.c, making it possible
to later move switch.c to the spufs module.

Also, add a struct spu_slb for the cases where we need to deal with
SLB entries.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:04 +01:00
Andre Detsch
a0a7ae8939 [POWERPC] cell: safer of_has_vicinity routine
This patch changes the way we check for the existence of the vicinity
property in SPE device nodes.

The new implementation does not depend on having an initialized
cbe_spu_info[0].spus, and checks for the presence of vicinity in all
nodes, not only in the first one.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:03 +01:00
Jeremy Kerr
3ce2f62b05 [POWERPC] cell: export force_sig_info()
Export force_sig_info to allow signals to be sent from a modular spufs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:03 +01:00
Jon Loeliger
d8caf74f1b [POWERPC] cell: Convert #include of asm/of_{platform, device}.h into linux/of_{platform, device}.h.
Signed-off-by: Jon Loeliger <jdl@freescale.com>
Acked-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:02 +01:00
Ishizaki Kou
23666ebc15 [POWERPC] cell: add missing '\n'
Two printk() calls were missing the terminating '\n'.

Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:02 +01:00
Kevin Corry
29641ce165 [POWERPC] perfmon2: make pm_interval register read/write
The pm_interval register in the Cell PMU is read/write, but was implemented in
the kernel as write-only. Previously, the written value was saved in a "shadow"
copy so calls to cbe_read_pm() could return the value.

Perfmon2 needs to be able to read the current values of pm_interval, so change
cbe_read_pm() to read the actual register instead of the "shadow" copy. There
is currently no code in the kernel that tries to read the pm_interval register
with cbe_read_pm() (expecting to receive the "shadow" value), so this should
not break any existing code.

Signed-off-by: Kevin Corry <kevcorry@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2007-12-19 01:00:01 +01:00
Stephen Rothwell
44ef339073 [POWERPC] pci_controller->arch_data really is a struct device_node *
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-11 13:42:37 +11:00
Ishizaki Kou
9858ee8ac5 [POWERPC] celleb: Add support for native CBE
This adds support for native CBE on Celleb, that is, without the BEAT
hypervisor.  Much of the code in platforms/cell/ is used in the native
CBE environment.

Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-11 13:34:40 +11:00
Ishizaki Kou
c7a3f93d00 [POWERPC] cell: Fix undefined reference to mmio_nvram_init
This fixes the following link error with CONFIG_PPC_CELL_NATIVE=y and
CONFIG_PPC_CELL_BLADE=n:

arch/powerpc/platforms/built-in.o: In function `.cell_setup_arch':
setup.c:(.init.text+0xe80): undefined reference to `.mmio_nvram_init'

Signed-off-by: Kou Ishizaki <Kou.Ishizaki@toshiba.co.jp>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-11 13:34:39 +11:00
Benjamin Herrenschmidt
8d089085a4 [POWERPC] Cleanup SMT thread handling
This cleans up the SMT thread handling, removing some hard coded
assumptions and providing a set of helpers to convert between linux
cpu numbers, thread numbers and cores.

This implementation requires the number of threads per core to be a
power of 2 and identical on all cores in the system, but it's an
implementation detail, not an API requirement and so this limitation
can be lifted in the future if anybody ever needs it.
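
A rough sketch of the kind of power-of-2 conversion helpers described above; the names and formulas are illustrative, not the actual kernel helpers:

#include <stdio.h>

static unsigned int threads_per_core = 2;       /* must be a power of 2 */

static unsigned int cpu_to_core(unsigned int cpu)
{
	return cpu / threads_per_core;              /* which physical core      */
}

static unsigned int cpu_thread_in_core(unsigned int cpu)
{
	return cpu & (threads_per_core - 1);        /* thread index within core */
}

static unsigned int core_first_cpu(unsigned int core)
{
	return core * threads_per_core;             /* linux cpu of thread 0    */
}

int main(void)
{
	unsigned int cpu = 5;

	printf("cpu %u: core %u, thread %u, first sibling %u\n", cpu,
	       cpu_to_core(cpu), cpu_thread_in_core(cpu),
	       core_first_cpu(cpu_to_core(cpu)));
	return 0;
}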

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-12-03 13:56:25 +11:00
Jeremy Kerr
c443acab2e [POWERPC] spufs: Fix context destroy vs /spu readdir race
We can currently cause an oops by repeatedly creating and destroying
contexts, while doing getdents() calls on the "/spu" directory.

This is due to the context's top-level dentry remaining hashed while
the context is being destroyed.

Fix this by unhashing the context's dentry with the
dentry->d_inode->i_mutex held. This way, we'll hit the check for
d_unhashed in dentry_readdir, and won't be included in the
list of subdirs for /spu.

test: spufs-testsuite:tests/01-spu_create/07-destroy-vs-readdir-race

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-20 16:10:20 +11:00
will schmidt
aa39be09df [POWERPC] Include udbg.h when using udbg_printf
This fixes the error
	error: implicit declaration of function "udbg_printf"

We have a few spots where we reference udbg_printf() without #including
udbg.h.  These are within #ifdef DEBUG blocks, so they go unnoticed
until we do a #define DEBUG or #define DEBUG_LOW nearby.

Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-08 14:15:31 +11:00
Olof Johansson
4bfac36891 [POWERPC] Fix CONFIG_SMP=n build break
Fix two build errors on powerpc allyesconfig + CONFIG_SMP=n:

arch/powerpc/platforms/built-in.o: In function `cpu_affinity_set':
arch/powerpc/platforms/cell/spu_priv1_mmio.c:78: undefined reference to `.iic_get_target_id'
arch/powerpc/platforms/built-in.o: In function `iic_init_IRQ':
arch/powerpc/platforms/cell/interrupt.c:397: undefined reference to `.iic_setup_cpu'

Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-11-08 14:15:30 +11:00
Jan Engelhardt
96de0e252c Convert files to UTF-8 and some cleanups
* Convert files to UTF-8.

  * Also correct some people's names
    (one example is Eißfeldt, which was found in a source file.
    Given that the author used an ß at all in a source file
    indicates that the real name has in fact a 'ß' and not an 'ss',
    which is commonly used as a substitute for 'ß' when limited to
    7bit.)

  * Correct town names (Goettingen -> Göttingen)

  * Update Eberhard Mönkeberg's address (http://lkml.org/lkml/2007/1/8/313)

Signed-off-by: Jan Engelhardt <jengelh@gmx.de>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
2007-10-19 23:21:04 +02:00
Christoph Lameter
4ba9b9d0ba Slab API: remove useless ctor parameter and reorder parameters
Slab constructors currently have a flags parameter that is never used.  And
the order of the arguments is opposite to other slab functions.  The object
pointer is placed before the kmem_cache pointer.

Convert

        ctor(void *object, struct kmem_cache *s, unsigned long flags)

to

        ctor(struct kmem_cache *s, void *object)

throughout the kernel

[akpm@linux-foundation.org: coupla fixes]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-17 08:42:45 -07:00
Mike Travis
d5a7430ddc Convert cpu_sibling_map to be a per cpu variable
Convert cpu_sibling_map from a static array sized by NR_CPUS to a per_cpu
variable.  This saves sizeof(cpumask_t) * NR unused cpus.  Access is mostly
from startup and CPU HOTPLUG functions.

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-10-16 09:42:50 -07:00
Michael Ellerman
2843e7f7d6 Remove msic_dcr_read() in axon_msi.c
msic_dcr_read() doesn't really do anything useful, just replace it with
direct calls to dcr_read().

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-10-15 14:29:49 -04:00
Michael Ellerman
83f34df4e7 Add dcr_host_t.base in dcr_read()/dcr_write()
Now that all users of dcr_read()/dcr_write() add the dcr_host_t.base, we
can save them the trouble and do it in dcr_read()/dcr_write().

As some background to why we just went through all this jiggery-pokery,
benh sayeth:

 Initially the goal of the dcr_read/dcr_write routines was to operate like
 mfdcr/mtdcr which take absolute DCR numbers. The reason is that on 4xx
 hardware, indirect DCR access is a pain (goes through a table of
 instructions) and it's useful to have the compiler resolve an absolute DCR
 inline.

 We decided that wasn't worth the API bastardisation since most places
 where absolute DCR values are used are low level 4xx-only code which may
 as well continue using mfdcr/mtdcr, while the new API is designed for
 device "instances" that can exist on 4xx and Axon type platforms and may
 be located at variable DCR offsets.
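
Concretely, a caller like axon_msi goes from adding the base by hand to
passing the plain offset; a sketch (variable names are illustrative):

	/* before: every caller had to add dcr_host.base itself */
	val = dcr_read(msic->dcr_host, msic->dcr_host.base + dcr_n);

	/* after: dcr_read()/dcr_write() add dcr_host.base internally */
	val = dcr_read(msic->dcr_host, dcr_n);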

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-10-15 14:29:49 -04:00
Linus Torvalds
4d5709a7b7 Merge master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq
* master.kernel.org:/pub/scm/linux/kernel/git/davej/cpufreq:
  [CPUFREQ] Don't take semaphore in cpufreq_quick_get()
  [CPUFREQ] Support different families in fid/did to frequency conversion
  [CPUFREQ] cpufreq_stats: misc cpuinit section annotations
  [CPUFREQ] implement !CONFIG_CPU_FREQ stub for  cpufreq_unregister_notifier()
  [CPUFREQ] mark hotplug notifier callback as __cpuinit
  [CPUFREQ] Only check for transition latency on problematic governors (kconfig fix)
  [CPUFREQ] allow ondemand and conservative cpufreq governors to be used as default
  [CPUFREQ] move policy's governor initialisation out of low-level drivers into cpufreq core
  [CPUFREQ] Longhaul - Add support for PM133 northbridge
  [CPUFREQ] x86: use num_online_nodes to get physical cpus numbers for
2007-10-12 15:42:01 -07:00
Linus Torvalds
e86908614f Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (408 commits)
  [POWERPC] Add memchr() to the bootwrapper
  [POWERPC] Implement logging of unhandled signals
  [POWERPC] Add legacy serial support for OPB with flattened device tree
  [POWERPC] Use 1TB segments
  [POWERPC] XilinxFB: Allow fixed framebuffer base address
  [POWERPC] XilinxFB: Add support for custom screen resolution
  [POWERPC] XilinxFB: Use pdata to pass around framebuffer parameters
  [POWERPC] PCI: Add 64-bit physical address support to setup_indirect_pci
  [POWERPC] 4xx: Kilauea defconfig file
  [POWERPC] 4xx: Kilauea DTS
  [POWERPC] 4xx: Add AMCC Kilauea eval board support to platforms/40x
  [POWERPC] 4xx: Add AMCC 405EX support to cputable.c
  [POWERPC] Adjust TASK_SIZE on ppc32 systems to 3GB that are capable
  [POWERPC] Use PAGE_OFFSET to tell if an address is user/kernel in SW TLB handlers
  [POWERPC] 85xx: Enable FP emulation in MPC8560 ADS defconfig
  [POWERPC] 85xx: Killed <asm/mpc85xx.h>
  [POWERPC] 85xx: Add cpm nodes for 8541/8555 CDS
  [POWERPC] 85xx: Convert mpc8560ads to the new CPM binding.
  [POWERPC] mpc8272ads: Remove muram from the CPM reg property.
  [POWERPC] Make clockevents work on PPC601 processors
  ...

Fixed up conflict in Documentation/powerpc/booting-without-of.txt manually.
2007-10-11 21:55:47 -07:00
Paul Mackerras
1189be6508 [POWERPC] Use 1TB segments
This makes the kernel use 1TB segments for all kernel mappings and for
user addresses of 1TB and above, on machines which support them
(currently POWER5+, POWER6 and PA6T).

We detect that the machine supports 1TB segments by looking at the
ibm,processor-segment-sizes property in the device tree.

We don't currently use 1TB segments for user addresses < 1T, since
that would effectively prevent 32-bit processes from using huge pages
unless we also had a way to revert to using 256MB segments.  That
would be possible but would involve extra complications (such as
keeping track of which segment size was used when HPTEs were inserted)
and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-12 14:05:17 +10:00
Grant Likely
745e102775 [POWERPC] Platforms shouldn't mess with ROOT_DEV
There is no good reason for board platform code to mess with the
ROOT_DEV.  Remove it from all in-tree platforms except powermac.

Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-11 20:40:43 +10:00
David Gibson
1d3bb99648 Device tree aware EMAC driver
Based on BenH's earlier work, this is a new version of the EMAC driver
for the built-in ethernet found on PowerPC 4xx embedded CPUs.  The
same ASIC is also found in the Axon bridge chip.  This new version is
designed to work in the arch/powerpc tree, using the device tree to
probe the device, rather than the old and ugly arch/ppc OCP layer.

This driver is designed to sit alongside the old driver (that lies in
drivers/net/ibm_emac and this one in drivers/net/ibm_newemac).  The
old driver is left in place to support arch/ppc until arch/ppc itself
reaches its final demise (not too long now, with luck).

This driver still has a number of things that could do with cleaning
up, but I think they can be fixed up after merging.  Specifically:
	- Should be adjusted to properly use the dma mapping API.
Axon needs this.
	- Probe logic needs reworking, in conjunction with the general
probing code for of_platform devices.  The dependencies here between
EMAC, MAL, ZMII etc. make this complicated.  At present, it usually
works, because we initialize and register the sub-drivers before the
EMAC driver itself, and (being in driver code) runs after the devices
themselves have been instantiated from the device tree.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Jeff Garzik <jeff@garzik.org>
2007-10-10 16:51:52 -07:00
Benjamin Herrenschmidt
d767efe30f [POWERPC] cell: Add Cell memory controller register defs and expose it
This adds definitions for the Cell memory controller registers (at
least some of them) for use by the EDAC driver for ECC error reporting.

It also exposes the said MIC as a platform device that can be used
by the EDAC driver to match on.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-09 21:01:56 +10:00
Benjamin Herrenschmidt
eef686a009 [POWERPC] cell: Move cbe_regs.h to include/asm-powerpc/cell-regs.h
The new Cell EDAC driver needs that file, and oprofile also does ugly
path tricks to get to it, so it's time to move it to asm-powerpc. While
at it, rename it to be consistent with cell-pmu.h (and dashes look
nicer than underscores anyway).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-09 21:01:56 +10:00
Thomas Renninger
8122c6cea0 [CPUFREQ] move policy's governor initialisation out of low-level drivers into cpufreq core
Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Bryan Wu <bryan.wu@analog.com>
Cc: Andi Kleen <ak@suse.de>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Dave Jones <davej@redhat.com>
2007-10-04 18:40:57 -04:00
Paul Mackerras
70f227d884 Merge branch 'linux-2.6' into for-2.6.24 2007-10-03 15:33:17 +10:00
Michael Ellerman
4acb889627 [POWERPC] Update axon_msi to use dcr_host_t.base
Now that dcr_host_t contains the base address, we can use that in the
axon_msi code, rather than storing it separately.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-03 13:25:28 +10:00
Michael Ellerman
db220b234d [POWERPC] Make sure to of_node_get() the result of pci_device_to_OF_node()
pci_device_to_OF_node() returns the device node attached to a PCI device,
but doesn't actually grab a reference - we need to do it ourselves.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-10-03 09:11:29 +10:00
Jeremy Kerr
603c461250 [POWERPC] spufs: fix mismerge, making context signal{1,2} files readable again
The commit 8b6f50ef1d seems to have
been affected by a mismerge of a duplicate patch
(d054b36ffd) - both the
spufs_dir_contents and spufs_dir_nosched_contents have been given
write-only signal notification files.

This change reverts the spufs_dir_contents array to use the
readable signal notification file implementation.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-26 19:47:07 +10:00
Jeremy Kerr
fb8299ed31 [POWERPC] cell: Don't cast the result of of_get_property()
The cast to u32 * isn't required, of_get_property returns a void *.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-22 14:49:22 +10:00
Paul Mackerras
0ce49a3945 Merge branch 'linux-2.6' 2007-09-20 10:09:27 +10:00
Christoph Hellwig
c0e7b4aa1c [POWERPC] spusched: Fix null pointer dereference in find_victim
find_victim can dereference a NULL pointer when iterating over the list
of victim spus because list_mutex only guarantees spu->ctx to be stable,
but of course not to be non-NULL.

Also fix find_victim to not call spu_unbind_context without list_mutex
because that violates the above guarantee.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:26:29 +10:00
Michael Ellerman
104f0cc2dc [POWERPC] spufs: Add DEFINE_SPUFS_ATTRIBUTE()
This patch adds DEFINE_SPUFS_ATTRIBUTE(), a wrapper around
DEFINE_SIMPLE_ATTRIBUTE which does the specified locking for the get
routine for us.

Unfortunately we need two get routines (a locked and unlocked version) to
support the coredump code.  This hides one of those (the locked version)
inside the macro foo.
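
Conceptually, the locked get routine that the macro generates for an
attribute looks something like this sketch (names and exact prototypes are
assumptions, not the real macro body):

	/* sketch: wrap the unlocked helper with the specified locking */
	static u64 spufs_foo_get(void *data)
	{
		struct spu_context *ctx = data;
		u64 ret;

		spu_acquire_saved(ctx);		/* the specified locking */
		ret = __spufs_foo_get(ctx);	/* unlocked version, also
						 * usable by the coredump code */
		spu_release_saved(ctx);
		return ret;
	}
	DEFINE_SIMPLE_ATTRIBUTE(spufs_foo_ops, spufs_foo_get,
				spufs_foo_set, "0x%llx\n");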

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
9e25ae6d91 [POWERPC] spufs: Respect RLIMIT_CORE in spu coredump code
Currently the spu coredump code doesn't respect the ulimit; it should.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
7af1443a9d [POWERPC] spufs: Handle errors in SPU coredump code, and support coredump to a pipe
Rework spufs_coredump_extra_notes_write() to check for and return errors.

If we're coredumping to a pipe we can't trust file->f_pos, we need to
maintain the foffset value passed to us. The cleanest way to do this is
to have the low level write routine increment foffset when we've
successfully written.
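
A sketch of what such a low-level write helper looks like (the helper names
and error handling here are assumptions):

	/* sketch: write the data and advance our own offset, rather than
	 * trusting file->f_pos, which is meaningless when dumping to a pipe */
	static int spufs_dump_write(struct file *file, const void *addr,
				    size_t nr, loff_t *foffset)
	{
		if (!spufs_dump_emit(file, addr, nr))	/* hypothetical helper */
			return -EIO;

		*foffset += nr;		/* caller-maintained offset */
		return 0;
	}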

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
e55014923e [POWERPC] spufs: Cleanup ELF coredump extra notes logic
To start with, arch_notes_size() etc. is a little too ambiguous a name for
my liking, so change the function names to be more explicit.

Calling through macros is ugly, especially with hidden parameters, so don't
do that, call the routines directly.

Use ARCH_HAVE_EXTRA_ELF_NOTES as the only flag, and based on it decide
whether we want the extern declarations or the empty versions.

Since we have empty routines, actually use them in the coredump code to
save a few #ifdefs.

We want to change the handling of foffset so that the write routine updates
foffset as it goes, instead of using file->f_pos (so that writing to a pipe
works).  So pass foffset to the write routine, and for now just set it to
file->f_pos at the end of writing.

It should also be possible for the write routine to fail, so change it to
return int and treat a non-zero return as failure.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
48cad41f7e [POWERPC] spufs: Combine spufs_coredump_calls with spufs_calls
Because spufs might be built as a module, we can't have other parts of the
kernel calling directly into it; we need stub routines that first check
whether the module is loaded.

Currently we have two structures which hold callbacks for these stubs, the
syscalls are in spufs_calls and the coredump calls are in spufs_coredump_calls.
In both cases the logic for registering/unregistering is essentially the same,
so we can simplify things by combining the two.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
78810ff672 [POWERPC] spufs: Add contents of npc file to SPU coredumps
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:19 +10:00
Michael Ellerman
74de08bc10 [POWERPC] spufs: Internal __spufs_get_foo() routines should take a spu_context *
The SPUFS attribute get routines take a void * because the generic attribute
code doesn't know what sort of data it's passing around.

However our internal __spufs_get_foo() routines can take a spu_context *
directly, which saves plonking it in and out of a void * again.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:18 +10:00
Michael Ellerman
936d5bf1d7 [POWERPC] spufs: Get rid of spufs_coredump_num_notes, it's not needed if we NULL terminate
The spufs_coredump_read array is NULL terminated, and we also store the size.
We only need one or the other, and the other arrays in file.c are NULL
terminated, so do that.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:18 +10:00
Michael Ellerman
c1a72173ab [POWERPC] spufs: Don't return -ENOSYS as extra notes size if spufs is not loaded
Because the SPU coredump code might be built as part of a module (spufs),
we have a stub which is called by the coredump code, this routine then calls
into spufs if it's loaded.

Unfortunately the stub returns -ENOSYS if spufs is not loaded, which is
interpreted by the coredump code as an extra note size of -38 bytes. This
leads to a corrupt core dump.

If spufs is not loaded there will be no SPU ELF notes to write, and so the
extra notes size will be == 0.
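
A sketch of the intended stub behaviour (the stub and accessor names are
assumptions based on the surrounding commits):

	/* sketch: no spufs loaded means no SPU notes, so report size 0,
	 * never -ENOSYS (which the coredump code would read as -38 bytes) */
	int elf_coredump_extra_notes_size(void)
	{
		struct spufs_calls *calls = spufs_calls_get();
		int ret = 0;

		if (calls) {
			ret = calls->coredump_extra_notes_size();
			spufs_calls_put(calls);
		}
		return ret;
	}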

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:18 +10:00
Michael Ellerman
59000b53c7 [POWERPC] spufs: Correctly calculate the size of the local-store to dump
The routine to dump the local store, __spufs_mem_read(), does not take the
spu_lslr_RW value into account - so we shouldn't check it when we're
calculating the size either.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:18 +10:00
Michael Ellerman
d464fb4410 [POWERPC] spufs: Write some SPU coredump values as ASCII
Unfortunately GDB expects some of the SPU coredump values to be identical
in format to what is found in spufs. This means we need to dump some of
the values as ASCII strings, not the actual values.

Because we don't know what the values will be, we always print the values
with the format "0x%.16lx"; that way we know the result will be 19 bytes.

do_coredump_read() doesn't take a __user buffer, so remove the annotation;
since the buffer is kernel memory, it's safe to just snprintf() directly
into it.
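
In other words, the formatting amounts to roughly this (buffer and variable
names are illustrative):

	char buf[20];

	/* "0x" plus 16 hex digits is 18 characters; together with the
	 * terminating NUL that gives the fixed 19 bytes mentioned above */
	snprintf(buf, sizeof(buf), "0x%.16lx", (unsigned long)value);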

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:18 +10:00
Michael Ellerman
4fca9c4250 [POWERPC] spufs: Use computed sizes/#defines rather than literals in SPU coredump code
The spufs_coredump_reader array contains the size of the data that will be
returned by the read routine.  Currently these are specified as literals,
and though some are obvious, sizeof(u32) == 4, others are not, 69 * 8 ==  ???

Instead, use sizeof() whatever type is returned by each routine, or in
the case of spufs_mem_read() the #define LS_SIZE.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Michael Ellerman
9a5080f11d [POWERPC] spufs: Call spu_acquire_saved() before calculating the SPU note sizes
It makes sense to stop the SPU processes as soon as possible.  Also if we
don't acquire_saved(), I think there's a possibility that the value in
csa.priv2.spu_lslr_RW won't be accurate.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Michael Ellerman
f9b7bbe7a8 [POWERPC] spufs: Remove ctx_info and ctx_info_list
Remove the ctx_info struct entirely, and also the ctx_info_list.  This
fixes a race where two processes can clobber each other's ctx_info structs.

Instead of using the list, we just repeat the search through the file
descriptor table.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Michael Ellerman
a595ed662c [POWERPC] spufs: Extract the file descriptor search logic in SPU coredump code
Extract the logic for searching through the file descriptors for spu contexts
into a separate routine, coredump_next_context(), so we can use it elsewhere
in future.  In the process we flatten the for loop, and move the NOSCHED test
into coredump_next_context().

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Jeremy Kerr
c70d4ca52b [POWERPC] cell: Remove DEBUG for SPU callbacks
We don't want SPE programs to be able to flood the kernel log by
invoking the SPE callback handler, so don't enable DEBUG for
spu_callbacks.c by default.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Jeremy Kerr
05a059f329 [POWERPC] spufs: Fix restore_decr_wrapped() to match CBE Handbook
Based on an original patch from Masato Noguchi
<Masato.Noguchi@jp.sony.com>.

We're currently not restoring the SPE decrementer as specified by the
CBE handbook. This change fixes our implementation to match, and makes
the function read more like the docs.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:17 +10:00
Jeremy Kerr
98f06978ff [POWERPC] cell: Unify spufs syscall path
At present, a built-in spufs will not use the spufs_calls callbacks, but
directly call sys_spu_create.  This saves us an indirect branch, but
means we have duplicated functions - one for CONFIG_SPU_FS=y and one for
=m.

This change unifies the spufs syscall path, and provides access to the
spufs_calls structure through a get/put pair.  At present, the only user
of the spufs_calls structure is spu_syscalls.c, but this will facilitate
adding the coredump calls later.
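
The resulting access pattern is roughly the following sketch (helper and
member names are assumptions based on the description above):

	/* sketch: built-in and modular spufs both go through the stub */
	struct spufs_calls *calls;
	long ret;

	calls = spufs_calls_get();	/* NULL if spufs isn't available */
	if (!calls)
		return -ENOSYS;

	ret = calls->create_thread(name, flags, mode, neighbor);
	spufs_calls_put(calls);
	return ret;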

Everyone likes numbers, right?  Here's a before/after comparison with
CONFIG_SPU_FS=y, doing spu_create(); close(); 64k times.

Before:
	[jk@cell ~]$ time ./spu_create
	performing 65536 spu_create calls

	real    0m24.075s
	user    0m0.146s
	sys     0m23.925s

After:
	[jk@cell ~]$ time ./spu_create
	performing 65536 spu_create calls

	real    0m24.777s
	user    0m0.141s
	sys     0m24.631s

So, we're adding around 11us per syscall, in exchange for having
only one syscall path.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Andre Detsch
36ddbb1380 [POWERPC] spufs: Fix race condition on gang->aff_ref_spu
Affinity reference point location (gang->aff_ref_spu) is reset
when the whole gang is descheduled. However, the last member of
a gang can be descheduled while we are trying to schedule another
member of the gang. This was leading to a race condition, and
the code was using gang->aff_ref_spu in an unsafe manner.

By holding the gang->aff_mutex a little bit longer, and incrementing
gang->aff_sched_count (which controls when gang->aff_ref_spu
should be reset) a little bit earlier, the problem is fixed.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Sebastian Siewior
8b0d3121a0 [POWERPC] spufs: Make isolated loader properly aligned
According to the comment in spufs_init_isolated_loader(), the isolated
loader should be aligned on a 16 byte boundary.
ARCH_{KMALLOC,SLAB}_MINALIGN is not defined so only 8 byte alignment is
guaranteed.

This enforces alignment via __get_free_pages.
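
That boils down to a sketch like this (variable names are illustrative):

	/* pages from __get_free_pages() are page aligned, which more than
	 * satisfies the 16 byte requirement */
	loader = (void *)__get_free_pages(GFP_KERNEL, get_order(size));
	if (!loader)
		return -ENOMEM;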

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Jeremy Kerr
6232a74f25 [POWERPC] spufs: Remove spu_harvest
Based on an initial patch from Sebastian Siewior
<sebastian@breakpoint.cc>

spu_harvest isn't used; remove it.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Jeremy Kerr
1e8b0f6d1b [POWERPC] spufs: Remove asmlinkage from do_spu_create
do_spu_create doesn't need the asmlinkage qualifier; remove it.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Sebastian Siewior
1238819a41 [POWERPC] spufs: Make file-internal functions & variables static
There are a few symbols used only in one file within spufs; this change
makes them static where suitable.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-19 15:12:16 +10:00
Michael Ellerman
6815800601 [POWERPC] Provide a default irq_host match, which matches on an exact of_node
The most common match semantic is an exact match based on the device node.
So provide a default implementation that does this, and hook it up if no
match routine is specified.
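
Such a default amounts to a simple exact-node comparison, along the lines of
this sketch (function name is an assumption):

	static int default_irq_host_match(struct irq_host *h,
					  struct device_node *np)
	{
		/* exact match on the node stashed in the host */
		return h->of_node != NULL && h->of_node == np;
	}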

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-14 01:33:20 +10:00
Michael Ellerman
52964f87c6 [POWERPC] Add an optional device_node pointer to the irq_host
The majority of irq_host implementations (3 out of 4) are associated
with a device_node, and need to stash it somewhere. Rather than having
it somewhere different for each host, add an optional device_node pointer
to the irq_host structure.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-14 01:33:20 +10:00
Masato Noguchi
b7f90a406f [POWERPC] cell/PS3: Fix a bug that causes the PS3 to hang on the SPU Class 0 interrupt.
The Cell BE Architecture spec states that the SPU MFC Class 0 interrupt
is edge-triggered.  The current spu interrupt handler assumes this
behavior and does not clear the interrupt status.

The PS3 hypervisor virtualizes all SPU interrupts as level-triggered, and on
return from the interrupt handler the hypervisor will deliver a new virtual
interrupt for any unmasked interrupt for which the status has not
been cleared.  This fix clears the interrupt status in the interrupt
handler.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-09-11 04:30:36 +10:00
Andre Detsch
ada83daab3 [POWERPC] spufs: Don't call spu_run_init from spu_reacquire_runnable
This fixes a major bug which happened when an SPU thread advanced
its execution right after being restored to an SPU.  A potentially
outdated NPC value was being (re)written to the SPU.

So, spu_run_init, in this case, was either not doing anything relevant,
or breaking the execution of the SPU thread.

This fixes a common problem of losing a mailbox write when it was done
to a saved context.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-30 16:27:18 +10:00
Arnd Bergmann
62ee68e3bc [POWERPC] spufs: Fix update of mailbox status register during backed wbox write
When a process writes into the inbound spu mailbox (wbox) while the
context is saved, we accidentally break the contents of the mb_stat_R
register by clearing other entries of the mailbox status register. This
can cause the user side to hang.

This change fixes the problem by only altering the appropriate bits
of the mailbox status register during a backing-store write.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-30 16:27:18 +10:00
Christian Krafft
aac2e68481 [POWERPC] spu_manage: fix spu_unit_number for celleb device tree
This fixes a regression introduced with 2.6.23-rc4 after some
confusion about the device tree interfaces.

IBM QS21 device trees provide "physical-id", so we changed the code to
run on that and remain compatible with all IBM machines.

However, the Toshiba Celleb device tree provides the "unit-id" property,
which was in the Linux code, but never used in this way on IBM hardware.

Legacy device tree used the reg property for the physical id of an spe.
This patch fixes find_spu_unit_number to look for the spu id in that order.
The length is checked to avoid misinterpretation in case the attributes
unit-id or reg do not contain the id.
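
A sketch of that lookup order, with the length check (the function shape and
return convention are assumptions):

	static int find_spu_unit_number(struct device_node *spe)
	{
		const unsigned int *prop;
		int proplen;

		/* new IBM device trees */
		prop = of_get_property(spe, "physical-id", &proplen);
		if (prop && proplen == 4)
			return *prop;

		/* Toshiba Celleb device trees */
		prop = of_get_property(spe, "unit-id", &proplen);
		if (prop && proplen == 4)
			return *prop;

		/* legacy device trees: fall back to the reg property */
		prop = of_get_property(spe, "reg", &proplen);
		if (prop && proplen == 4)
			return *prop;

		return 0;
	}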

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Jeremy Kerr <jk@ozlabs.org>
2007-08-30 01:35:05 +02:00
Arnd Bergmann
3addf55c94 [POWERPC] cell: Support pinhole-reset on IBM cell blades
The Cell Broadband Engine has a method of injecting a
system-reset-exception from an external source into the
operating system, which should trigger the regular behaviour
of entering xmon or kdump.

Unfortunately, the exception handler cannot distinguish it from
other interrupt causes by the SRR1 register, which gets used
for this on Power 6 and others.

IBM Blade servers that want to support triggering the
system reset exception using a pinhole button in the front
panel therefore use an extra register to determine the
reset cause.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>

--
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-25 16:58:26 +10:00
Christian Krafft
fa7f374bbf [POWERPC] spu_manage: Use newer physical-id attribute
Legacy device trees used the reg property for the physical id of an
SPE.  On newer device tree layouts the reg property contains the
"correct" register value, so the "physical-id" property has been
introduced on those layouts to carry the physical id.  The id is stored
by spu_manage into the spu struct as spe_id, and cbe_thermal has been
changed to use spu->spe_id.  There's no need for the thermal code
to check device tree attributes for itself.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Cc: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-25 16:58:26 +10:00
Jeremy Kerr
ad941fe4b6 [POWERPC] cell: Fix errno for modular spufs_create with invalid neighbour
At present, spu_create with an invalid neighbo(u)r will return -ENOSYS,
not -EBADF, but only when spufs.o is built as a module.

This change adds the appropriate errno, making the behaviour the same
as the built-in case.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-15 15:12:44 +10:00
Andre Detsch
f5996449e3 [POWERPC] cell: Move SPU affinity init to spu_management_of_ops
This patch moves affinity initialization code from spu_base.c to a
new spu_management_of_ops function (init_affinity), which is empty
in the case of PS3. This fixes a linking problem that was happening
when compiling for PS3.
Also, some small code style changes were made.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-10 21:04:21 +10:00
Andre Detsch
683e3ab2b5 [POWERPC] spufs: Fix affinity after introduction of node_allowed() calls
This patch fixes affinity reference point placement, which was not being
done in some situations, after the introduction of node_allowed() calls.

The previously used parameter, 'ctx', is just the iterator of the
previous list_for_each_entry_reverse loop, and its value might be
invalid at the end of the loop. Also, the right context to seek
for information when defining the reference ctx location
_is_ the reference ctx.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-08-03 19:18:10 +10:00
Christoph Hellwig
9d78592ed7 [POWERPC] spusched: Fix initial timeslice calculation
Currently we calculate the first timeslice for every context
incorrectly - alloc_spu_context calls spu_set_timeslice before we set
ctx->prio so we always calculate the longest possible timeslice for the
lowest possible priority.

This patch makes sure to update the schedule-related fields before
calculating the timeslice and also makes sure we update the timeslice for
a non-running context when entering spu_run so a priority change affects
the context as soon as possible.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26 16:17:56 +10:00
Masato Noguchi
6f6a6dc0c8 [POWERPC] spufs: Fix incorrect initialization of cbe_spu_info.spus
We currently initialize cbe_spu_info[].spus in both init_spu_base and
spu_sched_init. The initialisation in spu_sched_init clears the SPU list,
so we end up with no physical SPUs. Because of this, the spu_run
syscall will block forever.

This change removes the unnecessary initialization in spu_sched_init.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26 16:17:55 +10:00
Christoph Hellwig
b2c863bd2d spusched: fix mismerge in spufs.h
spufs.h now has two enums for the sched_flags leading to identical
values for SPU_SCHED_WAS_ACTIVE and SPU_SCHED_NOTIFY_ACTIVE.  Merge
them into a single enum as they were in the IBM development tree.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-24 12:24:58 -07:00
Al Viro
5fa63fccc5 Fix ppc64 mismerge
Fix a mismerge in commit 8b6f50ef1d:
"spufs: make signal-notification files readonly for NOSCHED contexts",
where structs got duplicated.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-22 10:41:27 -07:00
Jeremy Kerr
8b6f50ef1d spufs: make signal-notification files readonly for NOSCHED contexts
Reading from the signal{1,2} files requires a spu_acquire_saved, so make these
files write-only for contexts created with SPU_CREATE_NOSCHED.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-21 17:49:16 -07:00
Christoph Hellwig
486acd4850 [CELL] spufs: rework list management and associated locking
This sorts out the various lists and related locks in the spu code.

In detail:

 - the per-node free_spus and active_list are gone.  Instead struct spu
   gained an alloc_state member telling whether the spu is free or not
 - the per-node spus array is now locked by a per-node mutex, which
   takes over from the global spu_lock and the per-node active_mutex
 - the spu_alloc* and spu_free function are gone as the state change is
   now done inline in the spufs code.  This allows some more sharing of
   code for the affinity vs normal case and more efficient locking
 - some little refactoring in the affinity code for this locking scheme

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:28 +02:00
Bob Nelson
1474855d08 [CELL] oprofile: add support to OProfile for profiling CELL BE SPUs
From: Maynard Johnson <mpjohn@us.ibm.com>

This patch updates the existing arch/powerpc/oprofile/op_model_cell.c
to add in the SPU profiling capabilities.  In addition, a 'cell' subdirectory
was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling code.
Exports spu_set_profile_private_kref and spu_get_profile_private_kref which
are used by OProfile to store private profile information in spufs data
structures.

Also incorporated several fixes from other patches (rrn).  Check pointer
returned from kzalloc.  Eliminated unnecessary cast.  Better error
handling and cleanup in the related area.  64-bit unsigned long parameter
was being demoted to 32-bit unsigned int and eventually promoted back to
unsigned long.

Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20 21:42:24 +02:00
Bob Nelson
36aaccc1e9 [CELL] oprofile: enable SPU switch notification to detect currently active SPU tasks
From: Maynard Johnson <mpjohn@us.ibm.com>

This patch adds to the capability of spu_switch_event_register so that
the caller is also notified of currently active SPU tasks.
Exports spu_switch_event_register and spu_switch_event_unregister so
that OProfile can get access to the notifications provided.

Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com>
Signed-off-by: Carl Love <carll@us.ibm.com>
Signed-off-by: Bob Nelson <rrnelson@us.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20 21:42:20 +02:00
Christoph Hellwig
2414059420 [CELL] spu_base: locking cleanup
Sort out the locking mess in spu_base and document the current rules.
As an added benefit spu_alloc* and spu_free don't block anymore.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:19 +02:00
Arnd Bergmann
9e7cbcbb6e [CELL] cell: indexing of SPUs based on firmware vicinity properties
This patch links spus according to their physical position using
information provided by the firmware through a special vicinity
device-tree property. This property is present in the current version
of the Malta firmware.

Example of vicinity properties for a node in Malta:

Node:        Vicinity property contains phandles of:
spe@0        [ spe@100000 , mic-tm@50a000 ]
spe@100000   [ spe@0      , spe@200000    ]
spe@200000   [ spe@100000 , spe@300000    ]
spe@300000   [ spe@200000 , bif0@512000   ]
spe@80000    [ spe@180000 , mic-tm@50a000 ]
spe@180000   [ spe@80000  , spe@280000    ]
spe@280000   [ spe@180000 , spe@380000    ]
spe@380000   [ spe@280000 , bif0@512000   ]

Only spe@* have a vicinity property (e.g., bif0@512000 and
mic-tm@50a000 do not have it).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:18 +02:00
Arnd Bergmann
cbc23d3e7c [CELL] spufs: integration of SPE affinity with the scheduller
This patch makes the scheduler honor affinity information for each
context being scheduled. If the context has no affinity information,
behaviour is unchanged. If there is affinity information, the context is
scheduled to run on the exact spu recommended by the affinity
placement algorithm.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:18 +02:00
Arnd Bergmann
c5fc8d2a92 [CELL] cell: add placement computation for scheduling of affinity contexts
This patch provides the spu affinity placement logic for the spufs scheduler.
Each time a gang is going to be scheduled, the placement of a reference
context is defined. The placement of all other contexts with affinity from
the gang is defined based on this reference context location and on a
precomputed displacement offset.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:17 +02:00
Arnd Bergmann
8e68e2f248 [CELL] spufs: extension of spu_create to support affinity definition
This patch adds support for additional flags at spu_create, which relate
to the establishment of affinity between contexts and contexts to memory.
A fourth, optional parameter is supported. This parameter represents
an affinity neighbor of the context being created, and is used when defining
SPU-SPU affinity.
Affinity is represented as a doubly linked list of spu_contexts.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:15 +02:00
Arnd Bergmann
3ad216cae8 [CELL] cell: add hardcoded spu vicinity information for QS20
This patch allows the use of spu affinity on QS20, whose
original FW does not provide affinity information.
This is done through two hardcoded arrays, and by reading the reg
property from each spu.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:13 +02:00
Arnd Bergmann
9d92af621f [CELL] cell: add vicinity information on spus
This patch adds affinity data to each spu instance.
A doubly linked list is created, meant to connect the spus
in the physical order they are placed in the BE. SPUs
near memory should be marked as having memory affinity.
Adjustments of the fields according to FW properties are done
in separate patches, one for CPBW and one for Malta (the Malta
patch is under testing).

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:12 +02:00
Arnd Bergmann
aa6d5b2025 [CELL] cell: add per BE structure with info about its SPUs
Addition of a spufs-global "cbe_info" array. Each entry contains information
about one Cell/B.E. node, namely:
* list of spus (both free and busy spus are in this list);
* list of free spus (replacing the static spu_list from spu_base.c)
* number of spus;
* number of reserved (non-schedulable) spus.

The SPE affinity implementation actually requires access to only one spu per
BE node (since it implements its own pointer to walk through the other spus
of the ring) and the number of schedulable spus (n_spus - non_sched_spus).
However, having this more general structure can be useful for other
functionality, concentrating per-cbe statistics / data.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:11 +02:00
Masato Noguchi
7e90b74967 [CELL] spufs: use find_first_bit() instead of sched_find_first_bit()
spu_sched->bitmap is MAX_PRIO (= 140) bits wide. However, since
ff80a77f20, sched_find_first_bit()
only supports 100-bit bitmaps.

Thus, spu_sched->bitmap should be searched with the generic find_first_bit().
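
In other words (a sketch; the surrounding field names follow the description
above):

	/* the bitmap is MAX_PRIO bits wide, so use the generic helper */
	int best = find_first_bit(spu_sched->bitmap, MAX_PRIO);

	if (best < MAX_PRIO) {
		/* a context is waiting at priority 'best' */
	}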

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:07 +02:00
Jeremy Kerr
50af32a94b [CELL] spufs: remove unused file argument from spufs_run_spu()
From: Sebastian Siewior <cbe-oss-dev@ml.breakpoint.cc>

The 'file' argument is unused in spufs_run_spu(). This change removes
it.

Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:03 +02:00
Masato Noguchi
ca53da3abb [CELL] spufs: change decrementer restore timing
The SPU decrementer should be restored after the LSCSA DMA has
completed.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:42:03 +02:00
Masato Noguchi
cf17df223c [CELL] spufs: dont halt decrementer at restore step 47
No need to halt the SPE decrementer at context restore step 47, it will
be done in step 7.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:58 +02:00
Masato Noguchi
a103f347a5 [CELL] spufs: limit saving MFC_CNTL bits
At save step 8, the mfc control register in the CSA should be written
_only_ with Sc and Sm bits (at least MFC_CNTL[Dh] should be set to 0)

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:57 +02:00
Masato Noguchi
d40a01d4f4 [CELL] spufs: fix read and write for decr_status file
The decr_status in the LSCSA is valid only in the sequence of context
restore. Thus, it's nonsense to read and/or write it through spufs.

This patch changes decr_status node to access MFC_CNTL[Ds] in the CSA.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:56 +02:00
Masato Noguchi
1cfc0f86eb [CELL] spufs: fix decr_status meanings
The decr_status in the LSCSA is confusingly used with two meanings:
 * SPU decrementer was running
 * SPU decrementer was wrapped as a result of adjust
and the code to set decr_status is missing.

This patch fixes these problems by using the decr_status argument as a
set of flags. This requires a rebuild of the shipped spu_restore code.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:55 +02:00
Masato Noguchi
cfd529b25d [CELL] spufs: remove needless context save/restore code
The following steps are not needed in the SPE context save/restore
paths:

Save Step 12: save_mfc_decr()
  save suspend_time to CSA (It will be done by step 14)
  save ch 7 (decrementer value will be saved in LSCSA by spe-side step 10)

Restore Step 59: restore_ch_part1()
  restore ch 1 (it will be done by spe-side step 15)

This change removes the unnecessary steps.

Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:54 +02:00
Jeremy Kerr
daced0f718 [CELL] spufs: fix array size of channel index
Based on a fix from Masato Noguchi <Masato.Noguchi@jp.sony.com>.

Remove the (incorrect) array size declarations in the spufs channel
arrays, and use ARRAY_SIZE rather than hardcoded values.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:53 +02:00
Christoph Hellwig
27b1ea091f [CELL] spufs: make sure context are scheduled again after spu_acquire_saved
Currently a process is removed from the physical spu when spu_acquire_saved
is called but never put back.  This patch adds a new spu_release_saved
that is to be paired with spu_acquire_saved and puts the process back if
it was in RUNNABLE state before.

Neither Jeremy nor I am entirely happy about this exact patch because
it adds another spu_activate call outside of the owner thread, but I
feel this is the best short-term fix we can come up with.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:52 +02:00
Andre Detsch
27ec41d3a1 [CELL] spufs: add spu stats in sysfs and ctx stat file in spufs
This patch exports per-context statistics in spufs as well as spu
statistics in sysfs.

It was formed by merging:
"spufs: add spu stats in sysfs"   From: Christoph Hellwig
"spufs: add stat file to spufs"   From: Christoph Hellwig
"spufs: fix libassist accounting" From: Jeremy Kerr
"spusched: fix spu utilization statistics" From: Luke Browning
And some adjustments by myself, after suggestions on cbe-oss-dev.

Having separate patches was making the review process harder
than it should be, as we ended up integrating spu and ctx statistics
accounting much more tightly than in the first implementation.

Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:50 +02:00
Jeremy Kerr
e840cfe681 [CELL] spufs: Remove spurious WARN_ON for spu_deactivate for NOSCHED contexts
In 6cbf93960e64f313f6e247cbca7afaa50e3ee2c we added a WARN_ON for
calling spu_deactivate on contexts created with the SPU_CREATE_NOSCHED
flag. However, all NOSCHED contexts will need to be deactivated when
the context is destroyed, so this gives a spurious warning when any
NOSCHED context is closed.

This change removes the WARN_ON.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:49 +02:00
Jeremy Kerr
d054b36ffd [CELL] spufs: Make signal-notification files readonly for NOSCHED contexts
Reading from the signal{1,2} files requires a spu_acquire_saved, so
make these files write-only for contexts created with
SPU_CREATE_NOSCHED.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:48 +02:00
Kazunori Asayama
49776d30ae [CELL] spufs: Avoid unexpectedly restaring MFC during context save
The current SPU context saving procedure in SPUFS unexpectedly
restarts the MFC when halting the decrementer, because MFC_CNTL[Dh] is set
without MFC_CNTL[Sm]. This bug can cause, for example, broken DMA
queues to be saved. Here is a patch to fix the problem.

Signed-off-by: Kazunori Asayama <asayama@sm.sony.co.jp>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:47 +02:00
Sebastian Siewior
d145031755 [CELL] spufs: remove section mismatch warning
WARNING: arch/powerpc/platforms/cell/spufs/spufs.o(.init.text+0x158): Section
mismatch: reference to .exit.text:.spu_sched_exit (between '.init_module' and
'.spu_sched_init')

was introduced by c99c1994a2
This patch removes the warning.

Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:46 +02:00
Michael Ellerman
ce21b3c964 [CELL] add support for MSI on Axon-based Cell systems
This patch adds support for the setup and decoding of MSIs
on Axon-based Cell systems, using the MSIC mechanism.

This involves setting up an area of BE memory which the Axon
then uses as a FIFO for MSI messages. When one or more MSIs
are decoded by the MSIC we receive an interrupt on the MPIC,
and the MSI messages are written into the FIFO. At the moment
we use a 64KB FIFO, one per MSIC/BE.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:45 +02:00
Andre Detsch
8d2655e621 [CELL] saving spus information for kexec crash
This patch adds support for investigating spu information after a
kernel crash event, through the kdump vmcore file.
Implementation is based on xmon code, but the new functionality was
kept independent from xmon.

Signed-off-by: Lucio Jose Herculano Correia <luciojhc@br.ibm.com>
Signed-off-by: Andre Detsch <adetsch@br.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:43 +02:00
Jean-Christophe DUBOIS
b86ce01c77 [CELL] allow linux to map Cell regs on legacy SLOF tree.
The platforms missing the "cpus" property in the "be" node are mono-Cell
platforms such as CAB or Getaway.

Therefore it is possible to assume that if there is no "cpus" property
under the "be" node then we can safely return the "device node" without
more checking. This is a bit hacky but ... it allows it to work on
these platforms.

Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
Acked-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:41 +02:00
Jean-Christophe DUBOIS
827e3648dc [CELL] fix cbe_thermal for legacy SLOF tree.
This is the previous patch, changed based on Christian Krafft's comment.

On some legacy SLOF trees the generic code is unable to ioremap some Cell BE
registers. The "generic" functions therefore return a NULL pointer,
triggering a crash on such platforms.

Let's handle this more gracefully.

Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
Acked-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:40 +02:00
Jean-Christophe DUBOIS
64bafa9db7 [CELL] fix cbe_cpufreq for legacy SLOF tree.
This is the previous patch, changed based on Christian Krafft's comment.

On some legacy SLOF trees the generic code is unable to ioremap some Cell BE
registers. The "generic" functions therefore return a NULL pointer,
triggering a crash on such platforms.

Let's handle this more gracefully.

Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
Acked-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:39 +02:00
Christian Krafft
74889e41d9 [CELL] cbe_cpufreq: reorganize code
This patch reorganizes the code of the driver into three files.
Two of them, cbe_cpufreq_pmi.c and cbe_cpufreq_pervasive.c, deal with the
hardware, while cbe_cpufreq.c contains the logic.
There is no change in behaviour, except that the PMI related functions
are now located in a separate module, cbe_cpufreq_pmi.  This module
will be required by cbe_cpufreq if CONFIG_CBE_CPUFREQ_PMI has been set.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:38 +02:00
Christian Krafft
1e21fd5af3 [CELL] cbe_cpufreq: fix minor issues
Minor issues have been fixed:
* added a missing call to of_node_put()
* signedness of a function parameter
* added some line breaks
* changed global pmi_frequency_limit to a
  per node pmi_slow_mode_limit array

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:37 +02:00
Christian Krafft
e5ecc87192 [CELL] cbe_cpufreq: fix initialization
This patch fixes the initialization of the cbe_cpufreq driver.
The code that initializes the PMI related functions was called per cpu:
* registering cpufreq notifier block
* registering a pmi handler

This resulted in a bug where the notifier block gets called in an endless
loop. This patch moves the initialization code to the module init path,
so it only gets called once.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:36 +02:00
Christian Krafft
a964b9be3e [CELL] cbe_cpufreq: fix latency measurement
This patch fixes the debug code that calculates the transition time when
changing the slow modes on a Cell BE cpu.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:35 +02:00
Christian Krafft
813f90728e [CELL] pmi: remove support for mutiple devices.
The pmi driver got simplified by removing support for multiple devices.
As there is no more than one pmi device per machine, there is no need to
specify the device for listening and sending messages.

This way the caller (cbe_cpufreq) doesn't need to scan the device tree.
When registering the handler on a board without a pmi
interface, pmi.c will just return -ENODEV.

The patch that fixed the breakage of cell_defconfig has been
broken out of the earlier version of this patch. So this is
the version that applies cleanly on top of it.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20 21:41:34 +02:00
Paul Mundt
20c2df83d2 mm: Remove slab destructors from kmem_cache_create().
Slab destructors were no longer supported after Christoph's
c59def9f22 change. They've been
BUGs for both slab and slub, and slob never supported them
either.

This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
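
A typical callsite after the change looks like this sketch (the cache name,
object type and constructor are made up):

	struct kmem_cache *cachep;

	/* the trailing dtor argument is simply gone */
	cachep = kmem_cache_create("my_cache", sizeof(struct my_obj),
				   0, SLAB_HWCACHE_ALIGN, my_ctor);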

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-07-20 10:11:58 +09:00
Christoph Hellwig
8042297747 fix spufs build after ->fault changes
83c54070ee broke spufs by incorrectly
updating the code; this patch gets it to compile again.

It's probably still broken due to the scheduler changes, but this
at least makes sure cell kernels can still be built.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Geoff Levand <geoffrey.levand@am.sony.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-19 14:30:14 -07:00
Nick Piggin
83c54070ee mm: fault feedback #2
This patch completes Linus's wish that the fault return codes be made into
bit flags, which I agree makes everything nicer.  This requires requires
all handle_mm_fault callers to be modified (possibly the modifications
should go further and do things like fault accounting in handle_mm_fault --
however that would be for another patch).
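
For an arch fault handler, checking the return value then looks roughly like
this sketch (the labels and fault-counter updates are illustrative):

	int fault = handle_mm_fault(mm, vma, address, is_write);

	if (fault & VM_FAULT_OOM)
		goto out_of_memory;
	if (fault & VM_FAULT_SIGBUS)
		goto do_sigbus;

	if (fault & VM_FAULT_MAJOR)
		current->maj_flt++;
	else
		current->min_flt++;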

[akpm@linux-foundation.org: fix alpha build]
[akpm@linux-foundation.org: fix s390 build]
[akpm@linux-foundation.org: fix sparc build]
[akpm@linux-foundation.org: fix sparc64 build]
[akpm@linux-foundation.org: fix ia64 build]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ian Molton <spyro@f2s.com>
Cc: Bryan Wu <bryan.wu@analog.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Greg Ungerer <gerg@uclinux.org>
Cc: Matthew Wilcox <willy@debian.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Richard Curnow <rc@rc0.org.uk>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
Cc: Miles Bader <uclinux-v850@lsi.nec.co.jp>
Cc: Chris Zankel <chris@zankel.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
Acked-by: Ralf Baechle <ralf@linux-mips.org>
Acked-by: Andi Kleen <ak@muc.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Still apparently needs some ARM and PPC loving - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-19 10:04:41 -07:00
Geert Uytterhoeven
bce9451310 Cell: Draw SPE helper penguin logos
Let spu_management_ops.enumerate_spus() return the number of found SPEs
and use that information to draw some little helper penguin logos.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-By: James Simmons <jsimmons@infradead.org>
Cc: "Antonino A. Daplas" <adaplas@pol.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-07-17 10:23:13 -07:00
Paul Mackerras
bf22f6fe2d Merge branch 'for-2.6.23' into merge 2007-07-11 13:28:26 +10:00
Kazunori Asayama
8d038e0433 [POWERPC] spufs: Save dma_tagstatus_R in CSA
The function backing_ops->read_mfc_tagstatus() doesn't return a
correct value because the dma_tagstatus_R register isn't saved in
CSA.  This fixes the problem.

Signed-off-by: Kazunori Asayama <asayama@sm.sony.co.jp>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:47 +10:00
Kazunori Asayama
933b0e3524 [POWERPC] spufs: Fix lost events in poll/epoll on mfc
When waiting for I/O events on mfc in an SPU context by using
poll/epoll syscalls, some of the events can be lost because of the
wrong order of poll_wait and MFC status checks in the spufs_mfc_poll
function, and because of a non-atomic update of tagwait.  This fixes
the problem.
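
A minimal sketch of the intended ordering (the wait queue and ops
names are assumptions, not necessarily the exact spufs code):

    static unsigned int spufs_mfc_poll(struct file *file, poll_table *wait)
    {
            struct spu_context *ctx = file->private_data;
            unsigned int mask = 0;
            u32 tagstatus;

            /* register on the wait queue first ... */
            poll_wait(file, &ctx->mfc_wq, wait);

            /* ... then sample the MFC state, so a wakeup arriving in
             * between the check and the sleep can no longer be lost */
            spu_acquire(ctx);
            tagstatus = ctx->ops->read_mfc_tagstatus(ctx);
            spu_release(ctx);

            if (tagstatus & ctx->tagwait)
                    mask |= POLLIN | POLLRDNORM;
            return mask;
    }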

Signed-off-by: Kazunori Asayama <asayama@sm.sony.co.jp>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
fe2f896d67 [POWERPC] spufs: Add spu stats in sysfs
Export spu statistics in sysfs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
27449971e6 [POWERPC] spusched: Fix runqueue corruption
spu_activate can be called from multiple threads at the same time on
behalf of the same spu context.  We need to make sure to only add it
once to avoid runqueue corruption.
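
Schematically, the fix boils down to an add-once helper along these
lines (the runqueue structures are assumptions):

    /* caller holds spu_prio->runq_lock */
    static void __spu_add_to_rq(struct spu_context *ctx)
    {
            /* another thread may already have queued this context */
            if (list_empty(&ctx->rq))
                    list_add_tail(&ctx->rq, &spu_prio->runq[ctx->prio]);
    }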

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
c77239b8be [POWERPC] spusched: Disable tick when not needed
Only enable the scheduler tick if we have any context waiting to be
scheduled.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Jeremy Kerr
08c9692b16 [POWERPC] spufs: Fix libassist accounting
We're currently too permissive with counting libassist calls - fix the
check on the SPE stop-and-signal status.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
e9f8a0b65a [POWERPC] spufs: Add stat file to spufs
Export per-context statistics in spufs.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
65de66f0b8 [POWERPC] spufs: Implement /proc/spu_loadavg
Provide load average information for spu contexts.  The format
is identical to /proc/loadavg, which is also where a lot of the code
and concepts are borrowed from.
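
As a sketch, the averages can be maintained with the kernel's usual
fixed-point load macros; the counter and array names here are
assumptions:

    static unsigned long spu_avenrun[3];

    static void spu_calc_load(void)
    {
            unsigned long active = nr_runnable_spu_ctxs * FIXED_1;

            CALC_LOAD(spu_avenrun[0], EXP_1, active);
            CALC_LOAD(spu_avenrun[1], EXP_5, active);
            CALC_LOAD(spu_avenrun[2], EXP_15, active);
    }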

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:46 +10:00
Christoph Hellwig
476273adc7 [POWERPC] spufs: Add tid file
The new tid file contains the ID of the thread currently running the
context, if any.  This is used so that the new spu-top and spu-ps
tools can find the thread in /proc.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Jeremy Kerr
7022543ee4 [POWERPC] spufs: Trivial whitespace fixes
Remove redundant whitespace in arch/powerpc/platforms/cell/spufs/

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Jeremy Kerr
b8c295f908 [POWERPC] spufs: Remove spufs_dir_inode_operations
spufs_dir_inode_operations is exactly the same as
simple_dir_inode_operations.  Use that instead.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Christoph Hellwig
df09cf3e2c [POWERPC] spusched: No preemption for nosched contexts
And last but not least we need to make sure the scheduler tick never
preempts a nosched context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Christoph Hellwig
46cbf93960 [POWERPC] spusched: Catch nosched contexts in spu_deactivate
spu_deactivate should never be called for nosched contexts.  Put in
a check so we can print a stacktrace and exit early in case it
happens erroneously.
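
Roughly, the check amounts to something like this sketch (the helper
signature is an assumption):

    void spu_deactivate(struct spu_context *ctx)
    {
            /* nosched contexts must never be deactivated; warn loudly */
            if (WARN_ON(ctx->flags & SPU_CREATE_NOSCHED))
                    return;

            __spu_deactivate(ctx, 1, MAX_PRIO);
    }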

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Christoph Hellwig
ea1ae5949d [POWERPC] spusched: fix cpu/node binding
Add a cpus_allowed field to struct spu_context so that we always
use the cpu mask of the owning thread instead of that of whichever
thread happens to call into the scheduler.  Also use this information
in grab_runnable_context to avoid spurious wakeups.
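
For illustration, the affinity can be captured from the owning thread
and consulted later; the field and helper names are assumptions:

    static void spu_set_cpus_allowed(struct spu_context *ctx)
    {
            /* remember the owner's affinity, not the caller's */
            ctx->cpus_allowed = current->cpus_allowed;
    }

    static int node_allowed(struct spu_context *ctx, int node)
    {
            cpumask_t mask = node_to_cpumask(node);

            return cpus_intersects(mask, ctx->cpus_allowed);
    }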

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Christoph Hellwig
2cf2b3b49f [POWERPC] spusched: Update scheduling parameters on every spu_run
Update scheduling information on every spu_run to allow for setting
threads to realtime priority just before running them.  This requires
some slightly ugly code in spufs_run_spu because we can just update
the information unlocked if the spu is not runnable, but we need to
acquire the active_mutex when it is runnable to protect against
find_victim.  This locking scheme requires opencoding
spu_acquire_runnable in spufs_run_spu which actually is a nice cleanup
all by itself.
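
A sketch of the unlocked part of the update (the context fields are
assumptions):

    static void spu_set_sched_info(struct spu_context *ctx)
    {
            /* snapshot the calling thread's scheduling parameters so a
             * sched_setscheduler() call takes effect on the next run */
            ctx->policy = current->policy;
            ctx->prio = current->prio;
    }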

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Jeremy Kerr
f3f59bec0c [POWERPC] spusched: Print out scheduling tunables with DEBUG
Print out a few scheduler tuning parameters when we've compiled
with DEBUG defined.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:45 +10:00
Jeremy Kerr
60e2423933 [POWERPC] spusched: Fix timeslice calculations
The current timeslice code mixes 'jiffies' up with 'spesched ticks'. This
change correctly defines the number of time slices each SPE context is
given, and clarifies the comment.

This brings the default timeslice for SPE contexts into a reasonable
range.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:44 +10:00
Christoph Hellwig
fe443ef2ac [POWERPC] spusched: Dynamic timeslicing for SCHED_OTHER
Enable preemptive scheduling for non-RT contexts.

We use the same algorithms as the CPU scheduler to calculate the time
slice length, and for now we also use the same timeslice length as the
CPU scheduler. This might be not enough for good performance and can be
changed after some benchmarking.

Note that currently we do not boost the priority for contexts waiting
on the runqueue for a long time, so contexts with a higher nice value
could starve ones with less priority.  This could easily be fixed once
the rework of the spu lists that Luke and I discussed is done.
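
Conceptually the timeslice scaling looks something like the sketch
below; the constants and field names are placeholders rather than the
values actually used:

    #define DEF_SPU_TIMESLICE   (100 * HZ / 1000)   /* placeholder */
    #define MIN_SPU_TIMESLICE   (1)

    void spu_set_timeslice(struct spu_context *ctx)
    {
            /* lower prio value == higher priority == longer timeslice */
            unsigned int slice = DEF_SPU_TIMESLICE *
                                 (MAX_PRIO - ctx->prio) / MAX_PRIO;

            ctx->time_slice = max_t(unsigned int, slice, MIN_SPU_TIMESLICE);
    }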

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:44 +10:00
Christoph Hellwig
3790180220 [POWERPC] spusched: Switch from workqueues to kthread + timer tick
Get rid of the scheduler workqueues, which complicated things a lot,
in favour of a dedicated spu scheduler thread that gets woken by a
traditional scheduler tick.  By default this scheduler tick runs at
HZ / 10, i.e. one spu scheduler tick for every 10 cpu ticks.

Currently the tick is not disabled when we have fewer contexts than
available spus, but I will implement this later.
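
The core of such a scheduler thread can be pictured like this;
SPUSCHED_TICK and the per-tick walk are assumptions:

    static int spusched_thread(void *unused)
    {
            while (!kthread_should_stop()) {
                    set_current_state(TASK_INTERRUPTIBLE);
                    schedule_timeout(SPUSCHED_TICK);

                    /* charge one tick to every running context and
                     * preempt those whose timeslice has expired */
                    spusched_tick_all_contexts();
            }
            return 0;
    }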

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:44 +10:00
Sebastian Siewior
be7031773e [POWERPC] spufs: Add bit definition
Add a bit definition from the book, and replace one hex number with a
symbol, for clarity.

Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:44 +10:00
Sebastian Siewior
7a896dc5f4 [POWERPC] spufs: fix building spufs/spu_save_dump.h
Currently it fails with gcc from sdk 2.1 because of a spec change [1].
Maybe we should start using the definitions from spu_mfcio.h.

[1] http://gcc.gnu.org/ml/gcc-patches/2006-11/msg01598.html

Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03 15:24:44 +10:00
Christian Krafft
ee5d1b7f2a [POWERPC] Fix PMI breakage in cbe_cpufreq driver
The recent change to cell_defconfig to enable cpufreq on Cell exposed
the fact that the cbe_cpufreq driver currently needs the PMI interface
code to compile, but Kconfig doesn't make sure that the PMI interface
code gets built if cbe_cpufreq is enabled.

In fact cbe_cpufreq can work without PMI, so this ifdefs out the code
that deals with PMI.  This is a minimal solution for 2.6.22; a more
comprehensive solution will be merged for 2.6.23.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-02 10:35:58 +10:00
Geoff Levand
6deac06612 [POWERPC] cell: Add spu shutdown method
Add a shutdown method to spu_sysdev_class to allow proper spu resource
cleanup on system shutdown.  This is needed to support kexec on the PS3
platform.

Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-28 19:16:32 +10:00
Benjamin Herrenschmidt
cbe709c168 [POWERPC] spufs: Add a "capabilities" file to spu contexts
This adds a "capabilities" file to spu contexts consisting of a
list of linefeed separated capability names. The current exposed
capabilities are "sched" (the context is scheduleable) and
"step" (the context supports single stepping).

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14 22:29:56 +10:00
Benjamin Herrenschmidt
05169237b5 [POWERPC] spufs: Add support for SPU single stepping
This patch adds support for SPU single stepping. The single
step bit is set in the SPU when the current process is
being single-stepped via ptrace. The spu then stops and
returns with a specific flag set and the syscall exit code
will generate the SIGTRAP.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14 22:29:56 +10:00
Benjamin Herrenschmidt
3d5134ee83 [POWERPC] Rewrite IO allocation & mapping on powerpc64
This rewrites pretty much from scratch the handling of MMIO and PIO
space allocations on powerpc64.  The main goals are:

 - Get rid of imalloc and use more common code where possible
 - Simplify the current mess so that PIO space is allocated and
   mapped in a single place for PCI bridges
 - Handle allocation constraints of PIO for all bridges including
   hot plugged ones within the 2GB space reserved for IO ports,
   so that devices on hotplugged busses will now work with drivers
   that assume IO ports fit in an int.
 - Cleanup and separate tracking of the ISA space in the reserved
   low 64K of IO space. No ISA -> Nothing mapped there.

I booted a cell blade with IDE on PIO and MMIO and a dual G5 so
far, that's it :-)

With this patch, all allocations are done using the code in
mm/vmalloc.c, though we use the low level __get_vm_area with
explicit start/stop constraints in order to manage separate
areas for vmalloc/vmap, ioremap, and PCI IOs.
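
For example, carving a bridge's IO window out of the dedicated PCI IO
region rather than the general vmalloc arena looks roughly like this
(everything except __get_vm_area itself is an assumed name):

    static int map_phb_io_space(struct pci_controller *hose,
                                unsigned long size)
    {
            struct vm_struct *area;

            /* constrain the allocation to the region reserved for PCI IO */
            area = __get_vm_area(size, VM_IOREMAP, PHB_IO_BASE, PHB_IO_END);
            if (area == NULL)
                    return -ENOMEM;

            hose->io_base_alloc = area->addr;
            return 0;
    }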

This greatly simplifies a lot of things, as you can see in the
diffstat of that patch :-)

A new pair of functions pcibios_map/unmap_io_space() now replace
all of the previous code that used to manipulate PCI IOs space.
The allocation is done at mapping time, which is now called from
scan_phb's, just before the devices are probed (instead of after,
which is by itself a bug fix). The only other caller is the PCI
hotplug code for hot adding PCI-PCI bridges (slots).

imalloc is gone, as is the "sub-allocation" thing, but I do believe
that hotplug should still work in the sense that the space allocation
is always done by the PHB, but if you unmap a child bus of this PHB
(which seems to be possible), then the code should properly tear
down all the HPTE mappings for that area of the PHB allocated IO space.

I now always reserve the first 64K of IO space for the bridge with
the ISA bus on it. I have moved the code for tracking ISA into a separate
file which should also make it smarter if we ever are capable of
hot unplugging or re-plugging an ISA bridge.

This should have a side effect on platforms like powermac where VGA IOs
will no longer work. This is done on purpose though as they would have
worked semi-randomly before. The idea at this point is to isolate drivers
that might need to access those and fix them by providing a proper
function to obtain an offset to the legacy IOs of a given bus.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-14 22:29:56 +10:00
Sebastian Siewior
87873c8680 [POWERPC] spufs: Fix error handling in spufs_fill_dir()
The error path in spufs_fill_dir() is broken. If d_alloc_name() or
spufs_new_file() fails, spufs_prune_dir() is getting called. At this time
dir->inode is not set and a NULL pointer is dereferenced by mutex_lock().
This bugfix replaces spufs_prune_dir() with a shorter version that does
not touch dir->inode but simply removes all children.

Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:40 +10:00
Christoph Hellwig
e5c0b9ec53 [POWERPC] spufs: Don't yield nosched context
Nosched contexts should never be scheduled out, thus we must never
deactivate them in spu_yield.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:40 +10:00
Thomas Renninger
1552cb923e [POWERPC] cbe_cpufreq: Limit frequency via cpufreq notifier chain
... and get rid of the cpufreq_set_policy call that caused a build
failure due to interfering commits.

Signed-off-by: Thomas Renninger <trenn@suse.de>
Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:40 +10:00
Christoph Hellwig
bb5db29aa0 [POWERPC] spufs scheduler: Fix wakeup races
Fix the race between checking for contexts on the runqueue and
actually waking them in spu_deactivate and spu_yield.

The guts of spu_reschedule are split into a new helper called
grab_runnable_context which checks whether there is a runnable thread
below a specified priority and, if so, removes it from the runqueue
and uses it.  This function is used by the new __spu_deactivate helper
shared by preemption and spu_yield to grab a new context before
deactivating the old one.
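
Schematically (the runqueue layout and helper names here are
assumptions, not the actual code):

    static struct spu_context *grab_runnable_context(int prio)
    {
            struct spu_context *ctx = NULL;
            int best;

            spin_lock(&spu_prio->runq_lock);
            best = sched_find_first_bit(spu_prio->bitmap);
            if (best < prio) {
                    ctx = list_entry(spu_prio->runq[best].next,
                                     struct spu_context, rq);
                    __spu_del_from_rq(ctx);
            }
            spin_unlock(&spu_prio->runq_lock);
            return ctx;
    }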

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Christoph Hellwig
47d3a5faa3 [POWERPC] spufs: Synchronize pte invalidation vs ps close
Make sure the mapping_lock also protects access to the various address_space
pointers used for tearing down the ptes on a spu context switch.

Because unmap_mapping_range can sleep we need to turn mapping_lock from
a spinlock into a sleeping mutex.
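
A sketch of the protected teardown path (the context fields are
assumptions):

    static void spu_unmap_mappings(struct spu_context *ctx)
    {
            /* must be a mutex: unmap_mapping_range() can sleep */
            mutex_lock(&ctx->mapping_lock);
            if (ctx->local_store)
                    unmap_mapping_range(ctx->local_store, 0, LS_SIZE, 1);
            mutex_unlock(&ctx->mapping_lock);
    }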

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Sebastian Siewior
89df00855b [POWERPC] spufs: Free mm if spufs_fill_dir() failed
In case spufs_fill_dir() fails only put_spu_context()
gets called for cleanup and the acquired mm_struct never gets freed.

Signed-off-by: Sebastian Siewior <bigeasy@linux.vnet.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Jeremy Kerr
877907d37d [POWERPC] spufs: Fix gang destroy leaks
Previously, closing a SPE gang that still has contexts would trigger
a WARN_ON, and leak the allocated gang.

This change fixes the problem by using the gang's reference counts to
destroy the gang instead. The gangs will persist until their last
reference (be it context or open file handle) is gone.

Also, avoid using statements with side-effects in a WARN_ON().

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Christoph Hellwig
ce92987bab [POWERPC] spufs: Hook up spufs_release_mem
Currently spufs_mem_release isn't hooked up as the mem file's release
method, leading to leaks every time the file is used.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Arnd Bergmann
8f18a15819 [POWERPC] spufs: Refuse to load the module when not running on cell
As noticed by David Woodhouse, it's currently possible to mount
spufs on any machine, which means that it actually will get
mounted by Fedora.
This refuses to load the module on platforms that have no
support for SPUs.
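
In effect the module init now bails out early when the platform never
set up SPU management, along the lines of this sketch (spufs_type is
an assumed name for the filesystem type object):

    static int __init spufs_init(void)
    {
            /* no spu_management_ops means no SPEs on this platform */
            if (!spu_management_ops)
                    return -ENODEV;

            return register_filesystem(&spufs_type);
    }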

Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07 11:44:39 +10:00
Christoph Lameter
a35afb830f Remove SLAB_CTOR_CONSTRUCTOR
SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Steven French <sfrench@us.ibm.com>
Cc: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@ucw.cz>
Cc: David Chinner <dgc@sgi.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-17 05:23:04 -07:00
Benjamin Herrenschmidt
e1fa2e136f powerpc: fixup hard_irq_disable semantics
This patch renames the raw hard_irq_{enable,disable} to
__hard_irq_{enable,disable} and introduces a higher level
hard_irq_disable() function that can be used by any code to enforce
that IRQs are fully disabled, not only lazily disabled.

The difference from the __ versions is that it also updates some
per-processor fields so that the kernel keeps track and properly
re-enables interrupts on the next local_irq_enable().

This prepares powerpc for my next patch that introduces hard_irq_disable()
generically.
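
A sketch of the new helper; the paca field names are recalled from
memory and should be treated as assumptions:

    static inline void hard_irq_disable(void)
    {
            __hard_irq_disable();
            /* tell the lazy-disable bookkeeping that IRQs are now both
             * soft- and hard-disabled, so the next local_irq_enable()
             * knows to restore them properly */
            get_paca()->soft_enabled = 0;
            get_paca()->hard_enabled = 0;
    }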

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-11 08:29:34 -07:00
Linus Torvalds
aabded9c3a Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc:
  [POWERPC] Further fixes for the removal of 4level-fixup hack from ppc32
  [POWERPC] EEH: log all PCI-X and PCI-E AER registers
  [POWERPC] EEH: capture and log pci state on error
  [POWERPC] EEH: Split up long error msg
  [POWERPC] EEH: log error only after driver notification.
  [POWERPC] fsl_soc: Make mac_addr const in fs_enet_of_init().
  [POWERPC] Don't use SLAB/SLUB for PTE pages
  [POWERPC] Spufs support for 64K LS mappings on 4K kernels
  [POWERPC] Add ability to 4K kernel to hash in 64K pages
  [POWERPC] Introduce address space "slices"
  [POWERPC] Small fixes & cleanups in segment page size demotion
  [POWERPC] iSeries: Make HVC_ISERIES the default
  [POWERPC] iSeries: suppress build warning in lparmap.c
  [POWERPC] Mark pages that don't exist as nosave
  [POWERPC] swsusp: Introduce register_nosave_region_late
2007-05-09 12:56:01 -07:00
Michael Opdenacker
59c51591a0 Fix occurrences of "the the "
Signed-off-by: Michael Opdenacker <michael@free-electrons.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
2007-05-09 08:57:56 +02:00
Benjamin Herrenschmidt
f1fa74f4af [POWERPC] Spufs support for 64K LS mappings on 4K kernels
This adds an option to spufs, when the kernel is configured for
4K pages, to give it the ability to use 64K pages for SPE local store
mappings.

Currently, we are optimistic and try order 4 allocations when creating
contexts. If that fails, the code will fall back to 4K automatically.
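
The optimistic allocation with automatic fallback can be pictured as
follows (the state structure and flag are placeholders):

    static struct page *spu_alloc_ls_chunk(struct spu_ls_alloc *info)
    {
            struct page *page;

            /* try one 64K-aligned chunk first (order 4 == 16 * 4K) */
            page = alloc_pages(GFP_KERNEL, 4);
            if (page) {
                    info->use_big_pages = 1;
                    return page;
            }

            /* otherwise fall back to ordinary 4K pages */
            info->use_big_pages = 0;
            return alloc_pages(GFP_KERNEL, 0);
    }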

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-09 16:35:00 +10:00
Benjamin Herrenschmidt
d0f13e3c20 [POWERPC] Introduce address space "slices"
The basic issue is to be able to do what hugetlbfs does but with
different page sizes for some other special filesystems; more
specifically, my need is:

 - Huge pages

 - SPE local store mappings using 64K pages on a 4K base page size
kernel on Cell

 - Some special 4K segments in 64K-page kernels for mapping a dodgy
type of powerpc-specific infiniband hardware that requires 4K MMU
mappings for various reasons I won't explain here.

The main issues are:

 - To maintain/keep track of the page size per "segment" (as we can
only have one page size per segment on powerpc, which are 256MB
divisions of the address space).

 - To make sure special mappings stay within their allotted
"segments" (including MAP_FIXED crap)

 - To make sure everybody else doesn't mmap/brk/grow_stack into a
"segment" that is used for a special mapping

Some of the necessary mechanisms to handle that were present in the
hugetlbfs code, but mostly in ways not suitable for anything else.

The patch relies on some changes to the generic get_unmapped_area()
that just got merged.  It still hijacks hugetlb callbacks here or
there as the generic code hasn't been entirely cleaned up yet but
that shouldn't be a problem.

So what is a slice ?  Well, I re-used the mechanism used formerly by our
hugetlbfs implementation which divides the address space in
"meta-segments" which I called "slices".  The division is done using
256MB slices below 4G, and 1T slices above.  Thus the address space is
divided currently into 16 "low" slices and 16 "high" slices.  (Special
case: high slice 0 is the area between 4G and 1T).

Doing so significantly simplifies the tracking of segments and avoids
having to keep track of all the 256MB segments in the address space.

While I used the "concepts" of hugetlbfs, I mostly re-implemented
everything in a more generic way and "ported" hugetlbfs to it.

Slices can have an associated page size, which is encoded in the mmu
context and used by the SLB miss handler to set the segment sizes.  The
hash code currently doesn't care, it has a specific check for hugepages,
though I might add a mechanism to provide per-slice hash mapping
functions in the future.

The slice code provides a pair of "generic" get_unmapped_area()
(bottomup and topdown) functions that should work with any slice size.
There is some trickiness here, so I would appreciate people having a
look at the implementation of these and letting me know if I got
something wrong.
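
For intuition, the address-to-slice mapping described above boils down
to something like this (shift values illustrate the 256MB/1TB split):

    #define SLICE_LOW_SHIFT    28    /* 256MB slices below 4G */
    #define SLICE_HIGH_SHIFT   40    /* 1TB slices at and above 4G */

    static inline int get_slice_index(unsigned long addr)
    {
            if (addr < (1UL << 32))
                    return addr >> SLICE_LOW_SHIFT;   /* low slice 0..15 */
            return addr >> SLICE_HIGH_SHIFT;          /* high slice 0..15 */
    }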

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-09 16:35:00 +10:00
Linus Torvalds
df6d3916f3 Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (77 commits)
  [POWERPC] Abolish powerpc_flash_init()
  [POWERPC] Early serial debug support for PPC44x
  [POWERPC] Support for the Ebony 440GP reference board in arch/powerpc
  [POWERPC] Add device tree for Ebony
  [POWERPC] Add powerpc/platforms/44x, disable platforms/4xx for now
  [POWERPC] MPIC U3/U4 MSI backend
  [POWERPC] MPIC MSI allocator
  [POWERPC] Enable MSI mappings for MPIC
  [POWERPC] Tell Phyp we support MSI
  [POWERPC] RTAS MSI implementation
  [POWERPC] PowerPC MSI infrastructure
  [POWERPC] Rip out the existing powerpc msi stubs
  [POWERPC] Remove use of 4level-fixup.h for ppc32
  [POWERPC] Add powerpc PCI-E reset API implementation
  [POWERPC] Holly bootwrapper
  [POWERPC] Holly DTS
  [POWERPC] Holly defconfig
  [POWERPC] Add support for 750CL Holly board
  [POWERPC] Generalize tsi108 PCI setup
  [POWERPC] Generalize tsi108 PHY types
  ...

Fixed conflict in include/asm-powerpc/kdebug.h manually

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-08 11:50:19 -07:00
Randy Dunlap
e63340ae6b header cleaning: don't include smp_lock.h when not used
Remove includes of <linux/smp_lock.h> where it is not used/needed.
Suggested by Al Viro.

Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc,
sparc64, and arm (all 59 defconfigs).

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-08 11:15:07 -07:00
Paul Mackerras
02bbc0f09c Merge branch 'linux-2.6' 2007-05-08 13:37:51 +10:00
Christoph Lameter
50953fe9e0 slab allocators: Remove SLAB_DEBUG_INITIAL flag
I have never seen a use of SLAB_DEBUG_INITIAL.  It is only supported by
SLAB.

I think its purpose was to have a callback after an object has been freed
to verify that the state is the constructor state again?  The callback is
performed before each freeing of an object.

I would think that it is much easier to check the object state manually
before the free.  That also places the check near the code that
manipulates the object.

Also the SLAB_DEBUG_INITIAL callback is only performed if the kernel was
compiled with SLAB debugging on.  If there were code in a constructor
handling SLAB_DEBUG_INITIAL then it would have to be conditional on
SLAB_DEBUG, otherwise it would just be dead code.  But there is no such code
in the kernel.  I think SLAB_DEBUG_INITIAL is too problematic to make real
use of, difficult to understand and there are easier ways to accomplish the
same effect (i.e.  add debug code before kfree).

There is a related flag SLAB_CTOR_VERIFY that is frequently checked to be
clear in fs inode caches.  Remove the pointless checks (they would even be
pointless without removal of SLAB_DEBUG_INITIAL) from the fs constructors.

This is the last slab flag that SLUB did not support.  Remove the check for
unimplemented flags from SLUB.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-07 12:12:57 -07:00
Stephen Rothwell
55b61fec22 [POWERPC] Rename device_is_compatible to of_device_is_compatible
for consistency with other Open Firmware interfaces (and Sparc).

This is just a straight replacement.

This leaves the compatibility define in place.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-05-07 20:31:14 +10:00
Thomas Gleixner
057b184a00 [POWERPC] Spinlock initializer cleanup
Use DEFINE_SPINLOCK instead of initializing spinlocks to
SPIN_LOCK_UNLOCKED, since DEFINE_SPINLOCK is better for lockdep.
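
The conversion pattern, with a made-up lock name:

    /* before */
    static spinlock_t example_lock = SPIN_LOCK_UNLOCKED;

    /* after: gives lockdep a proper static key and class for the lock */
    static DEFINE_SPINLOCK(example_lock);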

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-30 11:02:06 +10:00
Stephen Rothwell
12d371a69e [POWERPC] get_property cleanups
Just another pass through arch/powerpc for old usages.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-30 11:02:05 +10:00
Olof Johansson
4bd4aa1967 [POWERPC] cell: cbe_cpufreq cleanup and crash fix
cbe_cpufreq cleanups:

* comment format
* whitespace
* don't init on non-cell platforms

Signed-off-by: Olof Johansson <olof@lixom.net>
Acked-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-30 11:02:05 +10:00
Olaf Hering
8d8a0241eb [POWERPC] Generic check_legacy_ioport
check_legacy_ioport only makes sense on PREP, CHRP and pSeries.
They may have an isa node with PS/2, parport, floppy and serial ports.

Remove the check_legacy_ioport call from ppc_md; it's not needed
anymore.  Hardware capabilities come from the device-tree.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-04-27 21:14:30 +10:00
Paul Mackerras
b142eb3a5a Merge branch 'for-2.6.22' of master.kernel.org:/pub/scm/linux/kernel/git/arnd/cell-2.6 into for-2.6.22 2007-04-24 11:46:09 +10:00
Jeremy Kerr
150f7e3cfe [POWERPC] cell: enable RTAS-based PTCAL for Cell XDR memory
Enable Periodic Recalibration (PTCAL) support for Cell XDR memory,
using the new ibm,cbe-start-ptcal and ibm,cbe-stop-ptcal RTAS calls.

Tested on QS20 and QS21 (by Thomas Huth). It seems that SLOF has
problems disabling it, at least on QS20; this patch should only be
used once these problems have been addressed.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:41 +02:00
Christian Krafft
9dd855a729 [POWERPC] cell: add support for proper device-tree
This patch adds support for a proper device-tree.
A proper device-tree on cell contains "be" nodes
for each CBE, containing nodes for the SPEs and all the
other special devices on it.
Of course the old-style device-tree is still supported.

Signed-off-by: Christian Krafft <krafft@de.ibm.com>
Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23 21:44:40 +02:00