d01447b319
This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing.

With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far-off locations in the setup code, the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
19 lines
463 B
C
#ifndef __ASM_SH_UNCACHED_H
#define __ASM_SH_UNCACHED_H

#include <linux/bug.h>

#ifdef CONFIG_UNCACHED_MAPPING
extern unsigned long uncached_start, uncached_end;

extern int virt_addr_uncached(unsigned long kaddr);
extern void uncached_init(void);
extern void uncached_resize(unsigned long size);
#else
#define virt_addr_uncached(kaddr)	(0)
#define uncached_init()			do { } while (0)
#define uncached_resize(size)		BUG()
#endif

#endif /* __ASM_SH_UNCACHED_H */