After KYCR2 is set, a udelay may become necessary when only a small
number of keys are attached. This patch introduces an optional delay
through the platform data to address this.
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This tidies up the printks when running on SMP, and aids in debugging
when certain cores cannot be scaled.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up broken clock re-parenting undertaken by the SH7709 clock
framework code, which is currently in conflict with the legacy CPG
framework. With this change in place, the legacy CPG ancestry is used,
and we manage to avoid contending on the clock_list_sem mutex, which is
already held under the legacy registration path, resulting in livelock.
In order for SH7709 to fully support the varying clock modes, it needs to
implement a more complete clock framework. After this change it is in
sync with legacy CPG mode, which ends up being the default configuration
for this CPU anyway.
Signed-off-by: Rafael Ignacio Zurita <rizurita@yahoo.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
When romImage is used on EcoVec24, the first Linux kernel enables the USB
device, but nothing ever disables it again. A subsequently restarted
kernel will therefore receive an interrupt before the USB driver has been
attached. This patch disables the USB device up front.
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The romimage macros used by kfr2r09 are also useful for other boards.
This patch splits kfr2r09's romimage.h into
romimage-macros and partner-jet-setup.
Signed-off-by: Kuninori Morimoto <morimoto.kuninori@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Quite a lot of tab-to-space damage was done here by an earlier patch;
clean it up once and for all.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Now that the cache purging is handled manually by all copy_page()
callers, we can kill off copy_page()'s own writeback. This optimizes the
non-aliasing case.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up a number of outstanding issues observed with old mappings
on the same colour hanging around. This will want more optimal handling
eventually, but it is a safe fallback until all of the corner cases have
been dealt with.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This fixes up the kmap_coherent/kunmap_coherent() interface for recent
changes both in the page fault path and the shared cache flushers, as
well as adding in some optimizations.
One of the key things to note here is that the TLB flush itself is
deferred until the unmap, and the call in to update_mmu_cache() itself
goes away, relying on the regular page fault path to handle the lazy
dcache writeback if necessary.
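A rough sketch of the unmap side after this change; kmap_coherent_pte_for()
is a made-up helper standing in for the fixmap PTE lookup:

    void kunmap_coherent(void *kvaddr)
    {
            if (kvaddr >= (void *)FIXADDR_START) {
                    unsigned long vaddr = (unsigned long)kvaddr & PAGE_MASK;
                    pte_t *ptep = kmap_coherent_pte_for(vaddr); /* assumed helper */

                    /* tear down the mapping and do the TLB flush here, at
                     * unmap time, rather than in update_mmu_cache() */
                    pte_clear(&init_mm, vaddr, ptep);
                    local_flush_tlb_one(get_asid(), vaddr);
            }

            pagefault_enable();
    }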
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This board doesn't use trapped I/O for anything, so just kill off the
select. This was causing problems in the unhandled page fault die path.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This builds on top of the previous reversion and implements a special
on_each_cpu() variant that simply disables preemption across the call
while leaving the interrupt state to the function itself. There were some
unintended consequences with IRQ disabling in some of these paths on UP
that ran into a deadlock scenario with IRQs being missed.
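The variant amounts to something along these lines (helper name assumed):

    /* Like on_each_cpu(), but only disable preemption locally; whether to
     * disable IRQs is left entirely to the cache op itself. */
    static void cacheop_on_each_cpu(void (*func)(void *info), void *info,
                                    int wait)
    {
            preempt_disable();
            smp_call_function(func, info, wait);    /* remote CPUs */
            func(info);                 /* local CPU, IRQ state untouched */
            preempt_enable();
    }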
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This reverts commit 64a6d72213.
Unfortunately we can't use on_each_cpu() for all of the cache ops, as
some of them only require preempt disabling. This seems to be the same
issue that impacts the MIPS r4k caches, which this code was based on.
This fixes up a deadlock that showed up in some IRQ context cases.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
The kgdb stub has traditionally tied in to the NMI slot, and manually
handled debounce. Now that we have a generic way to do this instead, all
of the stub-specific debounce silliness can be killed off.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This implements support for NMI debugging that was shamelessly copied
from the avr32 port. A bit of special magic is needed in the interrupt
exception path given that the NMI exception handler is stubbed into the
regular exception handling table despite being reported in INTEVT. So we
mangle the lookup and kick off an EXPEVT-style exception dispatch from
the INTEVT path for exceptions that do_IRQ() has no chance of handling.
As a result, we also drop the evt2irq() conversion from the do_IRQ() path
and just do it in assembly.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This adapts the special-cased 2-way write-through dcache flusher to
N ways and moves it into the generic path. Assignment is done at runtime
via a check for the CCR_CACHE_WT bit, in the same path as the per-way
writeback flushers.
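The runtime selection looks roughly like this (symbol names as recalled
from the SH-4 cache code; treat them as illustrative):

    /* pick the dcache segment flusher once at boot */
    if (__raw_readl(CCR) & CCR_CACHE_WT)
            __flush_dcache_segment_fn = __flush_dcache_segment_writethrough;
    else
            switch (boot_cpu_data.dcache.ways) {
            case 1:
                    __flush_dcache_segment_fn = __flush_dcache_segment_1way;
                    break;
            case 2:
                    __flush_dcache_segment_fn = __flush_dcache_segment_2way;
                    break;
            case 4:
                    __flush_dcache_segment_fn = __flush_dcache_segment_4way;
                    break;
            }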
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Some unaligned accesses are completely expected. For example, the
trapped_io code uses the unaligned access fixup code path, so there is no
need to warn about having to fix up the access.
Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is supposed to be the equivalent of __NR_syscalls, not
__NR_syscalls - 1. The x86 code this was based on had simply fallen
out of sync at the time this was implemented. Fix it up now.
As a result, tracing of __NR_perf_counter_open works as advertised.
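In other words, the upper bound used by the syscall tracer should read
something like the following (the macro name is a guess, not taken from
the patch):

    /* illustrative: the bound must equal __NR_syscalls so that the last
     * entry, __NR_perf_counter_open at the time, is included */
    #define FTRACE_SYSCALL_MAX      __NR_syscalls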
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This patch changes the way in which "multi-evt" interrupts are handled.
The intc_evt2irq_table and the related intc_evt2irq() have been removed,
and a "redirecting" handler is installed for the coupled interrupts.
Thanks to that, do_IRQ() no longer has to go through another level of
indirection for every interrupt.
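The redirecting handler boils down to something like this (a sketch of
the approach, not necessarily the exact code):

    /* secondary vectors of a multi-evt group just redirect to the primary */
    static void intc_redirect_irq(unsigned int irq, struct irq_desc *desc)
    {
            generic_handle_irq((unsigned int)get_irq_data(irq));
    }

    /* at registration time, for each coupled vector irq2 of primary irq: */
    set_irq_data(irq2, (void *)(unsigned long)irq);
    set_irq_chained_handler(irq2, intc_redirect_irq);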
Signed-off-by: Pawel Moll <pawel.moll@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
sys_cacheflush should return -EINVAL if the cache parameter is not
one of ICACHE, DCACHE or BCACHE, so 0 needs to be included in the first
range check. This patch also adds the three definitions above as wrappers
around the existing macros.
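Roughly, the new wrappers and the tightened check look like this (the
mapping onto the existing CACHEFLUSH_* macros is from memory):

    /* asm/cachectl.h: user-visible names on top of the existing macros */
    #define ICACHE  CACHEFLUSH_I            /* flush instruction cache */
    #define DCACHE  CACHEFLUSH_D_PURGE      /* writeback + invalidate dcache */
    #define BCACHE  (ICACHE | DCACHE)       /* flush both */

    /* sys_cacheflush(): 0 is now rejected along with out-of-range values */
    if ((op <= 0) || (op > (CACHEFLUSH_D_PURGE | CACHEFLUSH_I)))
            return -EINVAL;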
PS: the LTP cacheflush01 test now passes.
Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Change the method used to flush the cache in write-through mode to
avoid corrupted data being written back to memory.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Allow peripherals before the start of RAM to be remapped.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
It is possible for the CPU to re-enable its interrupt block bit
before the write to the interrupt controller has actually masked out
the external interrupt at the controller. We get around this by
reading back from the interrupt controller, which ensures the write
has completed.
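The pattern is simply a posted-write flush (register name purely
illustrative):

    /* mask the source at the controller ... */
    __raw_writeb(mask, INTC_MSK_REG);       /* hypothetical mask register */
    /* ... and read it back so the write is known to have taken effect
     * before the CPU's interrupt block bit is re-enabled */
    (void)__raw_readb(INTC_MSK_REG);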
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Adds a system call to allow user code to flush code from the cache.
The data side can be handled with ordinary instructions, but the
instruction side can only be flushed via a flush ROM, which really only
works with a direct-mapped cache. Later SH4s have a 2-way instruction
cache, so this call provides a portable way to flush it.
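Hypothetical userspace usage (syscall number and header availability
assumed):

    #include <sys/syscall.h>
    #include <unistd.h>
    #include <asm/cachectl.h>       /* ICACHE, DCACHE, BCACHE */

    /* flush freshly generated code before jumping to it */
    static void flush_generated_code(void *buf, size_t len)
    {
            syscall(__NR_cacheflush, buf, len, BCACHE);
    }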
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
This is pure documentation, attempting to explain why the cache flushing
code for the SH4 is implemented the way it is.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Optimise memcpy_toio/memcpy_fromio. These are used extensively by MTD, so
this is a worthwhile performance gain. The main savings come from not
repeatedly calling readl/writel, and from doing word-at-a-time instead of
byte-at-a-time transfers. Using "movca.l" on SH4 also gives a small
performance win.
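The gist of the word-at-a-time approach, rendered as a simplified C
sketch (the in-tree routine is more involved and also handles alignment
and the movca.l path):

    static void word_copy_fromio(void *to, const volatile void __iomem *from,
                                 unsigned long count)
    {
            u32 *dst = to;
            u8 *tail;

            /* one bus access per 4 bytes instead of per byte */
            while (count >= 4) {
                    *dst++ = __raw_readl(from);
                    from += 4;
                    count -= 4;
            }

            /* trailing bytes */
            tail = (u8 *)dst;
            while (count--)
                    *tail++ = __raw_readb(from++);
    }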
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
After performing the port2addr conversion, and checking that the data is
correctly aligned, simply call __raw_readsX/writesX. These have already been
optimised.
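One of the resulting helpers then reduces to roughly the following (names
per the generic I/O path; exact signatures may differ):

    void generic_insl(unsigned long port, void *dst, unsigned long count)
    {
            /* port2addr conversion ... */
            void __iomem *addr = __ioport_map(port, 4 * count);

            /* ... then hand the whole transfer to the optimised accessor */
            __raw_readsl(addr, dst, count);
    }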
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Reading from the ROM is not a good idea, as it could disturb a flash
operation that is in progress.
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>