bpf: Use VM_MAP instead of VM_ALLOC for ringbuf
commit b293dcc473d22a62dc6d78de2b15e4f49515db56 upstream.
After commit 2fd3fb0be1d1 ("kasan, vmalloc: unpoison VM_ALLOC pages
after mapping"), non-VM_ALLOC mappings are marked as accessible in
__get_vm_area_node() when KASAN is enabled. But the flag used for the
ringbuf area is currently VM_ALLOC, so KASAN complains about
out-of-bounds accesses after vmap() returns. Because the ringbuf area
is created by mapping pages that were already allocated, use VM_MAP
instead.
After the change, info in /proc/vmallocinfo also changes from
[start]-[end] 24576 ringbuf_map_alloc+0x171/0x290 vmalloc user
to
[start]-[end] 24576 ringbuf_map_alloc+0x171/0x290 vmap user
Fixes: 457f44363a ("bpf: Implement BPF ring buffer and verifier support for it")
Reported-by: syzbot+5ad567a418794b9b5983@syzkaller.appspotmail.com
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220202060158.6260-1-houtao1@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
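
For context, a minimal sketch (not part of the patch, and not the kernel's
actual bpf_ringbuf_area_alloc()) of the pattern the commit message describes:
the ringbuf pages are allocated by the caller and only stitched into a
contiguous kernel virtual range with vmap(), which is why VM_MAP rather than
VM_ALLOC is the appropriate area flag. The helper name
map_preallocated_pages() is hypothetical.

/* Illustrative sketch only -- not the code changed by this commit. */
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical helper: map caller-allocated pages into one virtual range. */
static void *map_preallocated_pages(struct page **pages, unsigned int nr_pages)
{
	/*
	 * The pages already exist (e.g. from alloc_pages_node()), so this is
	 * a plain mapping, not a vmalloc()-style allocation: tag it VM_MAP.
	 * VM_USERMAP is kept because the area is later mmap()ed to user
	 * space. With VM_ALLOC, KASAN (after commit 2fd3fb0be1d1) would not
	 * unpoison the area in __get_vm_area_node(), so legitimate accesses
	 * through the returned pointer would be reported as out-of-bounds.
	 */
	return vmap(pages, nr_pages, VM_MAP | VM_USERMAP, PAGE_KERNEL);
}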
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -108,7 +108,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
 	}
 
 	rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
-		  VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
+		  VM_MAP | VM_USERMAP, PAGE_KERNEL);
 	if (rb) {
 		kmemleak_not_leak(pages);
 		rb->pages = pages;