// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * This file contains the routines for initializing the MMU
 * on the 8xx series of chips.
 *  -- christophe
 *
 * Derived from arch/powerpc/mm/40x_mmu.c:
 */

#include <linux/memblock.h>
#include <linux/mmu_context.h>
#include <linux/hugetlb.h>
#include <asm/fixmap.h>
#include <asm/code-patching.h>
#include <asm/inst.h>
#include <asm/pgalloc.h>

#include <mm/mmu_decl.h>

#define IMMR_SIZE (FIX_IMMR_SIZE << PAGE_SHIFT)

extern int __map_without_ltlbs;

static unsigned long block_mapped_ram;

/*
 * Return PA for this VA if it is in an area mapped with LTLBs or fixmap.
 * Otherwise, returns 0
 */
phys_addr_t v_block_mapped(unsigned long va)
{
	unsigned long p = PHYS_IMMR_BASE;

	if (va >= VIRT_IMMR_BASE && va < VIRT_IMMR_BASE + IMMR_SIZE)
		return p + va - VIRT_IMMR_BASE;
	if (__map_without_ltlbs)
		return 0;
	if (va >= PAGE_OFFSET && va < PAGE_OFFSET + block_mapped_ram)
		return __pa(va);
	return 0;
}

/*
 * Return VA for a given PA mapped with LTLBs or fixmap
 * Return 0 if not mapped
 */
unsigned long p_block_mapped(phys_addr_t pa)
{
	unsigned long p = PHYS_IMMR_BASE;

	if (pa >= p && pa < p + IMMR_SIZE)
		return VIRT_IMMR_BASE + pa - p;
	if (__map_without_ltlbs)
		return 0;
	if (pa < block_mapped_ram)
		return (unsigned long)__va(pa);
	return 0;
}

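/*
 * Allocate the hugepte directory backing an 8M kernel page at boot time.
 * With 4k base pages a PGD entry only covers 4M, so an 8M page spans two
 * consecutive entries: both are populated to point at the same hugepte.
 */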
static pte_t __init *early_hugepd_alloc_kernel(hugepd_t *pmdp, unsigned long va)
{
	if (hpd_val(*pmdp) == 0) {
		pte_t *ptep = memblock_alloc(sizeof(pte_basic_t), SZ_4K);

		if (!ptep)
			return NULL;

		hugepd_populate_kernel((hugepd_t *)pmdp, ptep, PAGE_SHIFT_8M);
		hugepd_populate_kernel((hugepd_t *)pmdp + 1, ptep, PAGE_SHIFT_8M);
	}
	return hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
}

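/*
 * Install a single 512k or 8M kernel mapping of va at pa with protection
 * prot. 'new' selects between creating the page table early at boot
 * (before slab is available) and updating an already existing entry.
 */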
static int __ref __early_map_kernel_hugepage(unsigned long va, phys_addr_t pa,
					     pgprot_t prot, int psize, bool new)
{
	pmd_t *pmdp = pmd_off_k(va);
	pte_t *ptep;

	if (WARN_ON(psize != MMU_PAGE_512K && psize != MMU_PAGE_8M))
		return -EINVAL;

	if (new) {
		if (WARN_ON(slab_is_available()))
			return -EINVAL;

		if (psize == MMU_PAGE_512K)
			ptep = early_pte_alloc_kernel(pmdp, va);
		else
			ptep = early_hugepd_alloc_kernel((hugepd_t *)pmdp, va);
	} else {
		if (psize == MMU_PAGE_512K)
			ptep = pte_offset_kernel(pmdp, va);
		else
			ptep = hugepte_offset(*(hugepd_t *)pmdp, va, PGDIR_SHIFT);
	}

	if (WARN_ON(!ptep))
		return -ENOMEM;

	/* The PTE should never be already present */
	if (new && WARN_ON(pte_present(*ptep) && pgprot_val(prot)))
		return -EINVAL;

	set_huge_pte_at(&init_mm, va, ptep, pte_mkhuge(pfn_pte(pa >> PAGE_SHIFT, prot)));

	return 0;
}

/*
 * MMU_init_hw does the chip-specific initialization of the MMU hardware.
 */
void __init MMU_init_hw(void)
{
}

static bool immr_is_mapped __initdata;

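/* Map the IMMR area with a single 512k uncached, guarded page. */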
void __init mmu_mapin_immr(void)
{
	if (immr_is_mapped)
		return;

	immr_is_mapped = true;

	__early_map_kernel_hugepage(VIRT_IMMR_BASE, PHYS_IMMR_BASE,
				    PAGE_KERNEL_NCG, MMU_PAGE_512K, true);
}

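/*
 * Map the chunk [offset, top) of the linear mapping: 512k pages up to
 * the first 8M boundary, 8M pages for the bulk, then 512k pages again
 * for the unaligned tail. On a remap (!new), flush the old translations.
 */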
static void mmu_mapin_ram_chunk(unsigned long offset, unsigned long top,
				pgprot_t prot, bool new)
{
	unsigned long v = PAGE_OFFSET + offset;
	unsigned long p = offset;

	WARN_ON(!IS_ALIGNED(offset, SZ_512K) || !IS_ALIGNED(top, SZ_512K));

	for (; p < ALIGN(p, SZ_8M) && p < top; p += SZ_512K, v += SZ_512K)
		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);
	for (; p < ALIGN_DOWN(top, SZ_8M) && p < top; p += SZ_8M, v += SZ_8M)
		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_8M, new);
	for (; p < ALIGN_DOWN(top, SZ_512K) && p < top; p += SZ_512K, v += SZ_512K)
		__early_map_kernel_hugepage(v, p, prot, MMU_PAGE_512K, new);

	/*
	 * Flush the stale translations for the chunk just remapped. Note
	 * that v already includes PAGE_OFFSET, so the range is rebuilt
	 * from offset rather than from v.
	 */
	if (!new)
		flush_tlb_kernel_range(PAGE_OFFSET + offset, PAGE_OFFSET + top);
}

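/*
 * Block-map the kernel linear RAM. Text (and initially init text) is
 * mapped executable; the executable mapping must stop exactly at
 * _sinittext when STRICT_KERNEL_RWX or DEBUG_PAGEALLOC is enabled,
 * otherwise it simply extends to _etext aligned up to 8M.
 */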
unsigned long __init mmu_mapin_ram(unsigned long base, unsigned long top)
{
	unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M);
	unsigned long sinittext = __pa(_sinittext);
	bool strict_boundary = strict_kernel_rwx_enabled() || debug_pagealloc_enabled();
	unsigned long boundary = strict_boundary ? sinittext : etext8;
	unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);

	WARN_ON(top < einittext8);

	mmu_mapin_immr();

	if (__map_without_ltlbs)
		return 0;

	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, true);
	if (debug_pagealloc_enabled()) {
		top = boundary;
	} else {
		mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL_TEXT, true);
		mmu_mapin_ram_chunk(einittext8, top, PAGE_KERNEL, true);
	}

	if (top > SZ_32M)
		memblock_set_current_limit(top);

	block_mapped_ram = top;

	return top;
}

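/*
 * Once init is over, remap the init text with PAGE_KERNEL to strip its
 * execute permission, and pin the text TLB entries if configured.
 */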
void mmu_mark_initmem_nx(void)
{
	unsigned long etext8 = ALIGN(__pa(_etext), SZ_8M);
	unsigned long sinittext = __pa(_sinittext);
	unsigned long boundary = strict_kernel_rwx_enabled() ? sinittext : etext8;
	unsigned long einittext8 = ALIGN(__pa(_einittext), SZ_8M);

	mmu_mapin_ram_chunk(0, boundary, PAGE_KERNEL_TEXT, false);
	mmu_mapin_ram_chunk(boundary, einittext8, PAGE_KERNEL, false);

	if (IS_ENABLED(CONFIG_PIN_TLB_TEXT))
		mmu_pin_tlb(block_mapped_ram, false);
}

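/* Remap everything below _sinittext read-only once rodata is sealed. */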
#ifdef CONFIG_STRICT_KERNEL_RWX
void mmu_mark_rodata_ro(void)
{
	unsigned long sinittext = __pa(_sinittext);

	mmu_mapin_ram_chunk(0, sinittext, PAGE_KERNEL_ROX, false);
	if (IS_ENABLED(CONFIG_PIN_TLB_DATA))
		mmu_pin_tlb(block_mapped_ram, true);
}
#endif

void __init setup_initial_memory_limit(phys_addr_t first_memblock_base,
				       phys_addr_t first_memblock_size)
{
	/* We don't currently support the first MEMBLOCK not mapping 0
	 * physical on those processors
	 */
	BUG_ON(first_memblock_base != 0);

	/* 8xx can only access 32MB at the moment */
	memblock_set_current_limit(min_t(u64, first_memblock_size, SZ_32M));
}

/*
 * Set up to use a given MMU context.
 * id is context number, pgd is PGD pointer.
 *
 * We place the physical address of the new task page directory loaded
 * into the MMU base register, and set the ASID compare register with
 * the new "context."
 */
void set_context(unsigned long id, pgd_t *pgd)
{
	s16 offset = (s16)(__pa(swapper_pg_dir));

	/* Context switch the PTE pointer for the Abatron BDI2000.
	 * The PGDIR is passed as second argument.
	 */
	if (IS_ENABLED(CONFIG_BDI_SWITCH))
		abatron_pteptrs[1] = pgd;

	/* Register M_TWB will contain base address of level 1 table minus the
	 * lower part of the kernel PGDIR base address, so that all accesses to
	 * level 1 table are done relative to lower part of kernel PGDIR base
	 * address.
	 */
	mtspr(SPRN_M_TWB, __pa(pgd) - offset);

	/* Update context. The hardware CASID holds id - 1: context ids are
	 * allocated in the range [1:16] so that id == 0 can identify a not
	 * yet initialised context.
	 */
	mtspr(SPRN_M_CASID, id - 1);
	/* sync */
	mb();
}

void flush_instruction_cache(void)
{
	isync();
	mtspr(SPRN_IC_CST, IDC_INVALL);
	isync();
}

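/*
 * Kernel Userspace Execution Prevention: program the instruction MMU
 * access protection groups so that user pages cannot be executed while
 * the core runs in kernel mode.
 */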
#ifdef CONFIG_PPC_KUEP
void __init setup_kuep(bool disabled)
{
	if (disabled)
		return;

	pr_info("Activating Kernel Userspace Execution Prevention\n");

	mtspr(SPRN_MI_AP, MI_APG_KUEP);
}
#endif

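/*
 * Kernel Userspace Access Protection: program the data MMU access
 * protection groups so that stray kernel accesses to user memory fault
 * unless access has been explicitly opened.
 */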
#ifdef CONFIG_PPC_KUAP
void __init setup_kuap(bool disabled)
{
	pr_info("Activating Kernel Userspace Access Protection\n");

	if (disabled)
		pr_warn("KUAP cannot be disabled yet on 8xx when compiled in\n");

	mtspr(SPRN_MD_AP, MD_APG_KUAP);
}
#endif