kernel_optimize_test/kernel/bpf
Tatsuhiko Yasumatsu e95620c3bd bpf: Fix integer overflow involving bucket_size
[ Upstream commit c4eb1f403243fc7bbb7de644db8587c03de36da6 ]

In __htab_map_lookup_and_delete_batch(), hash buckets are iterated
over to count the number of elements in each bucket (bucket_size).
If bucket_size is large enough, the multiplication used to calculate
the kvmalloc() size can overflow, resulting in an out-of-bounds write
as reported by KASAN:

  [...]
  [  104.986052] BUG: KASAN: vmalloc-out-of-bounds in __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.986489] Write of size 4194224 at addr ffffc9010503be70 by task crash/112
  [  104.986889]
  [  104.987193] CPU: 0 PID: 112 Comm: crash Not tainted 5.14.0-rc4 #13
  [  104.987552] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1.1 04/01/2014
  [  104.988104] Call Trace:
  [  104.988410]  dump_stack_lvl+0x34/0x44
  [  104.988706]  print_address_description.constprop.0+0x21/0x140
  [  104.988991]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.989327]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.989622]  kasan_report.cold+0x7f/0x11b
  [  104.989881]  ? __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.990239]  kasan_check_range+0x17c/0x1e0
  [  104.990467]  memcpy+0x39/0x60
  [  104.990670]  __htab_map_lookup_and_delete_batch+0x5ce/0xb60
  [  104.990982]  ? __wake_up_common+0x4d/0x230
  [  104.991256]  ? htab_of_map_free+0x130/0x130
  [  104.991541]  bpf_map_do_batch+0x1fb/0x220
  [...]

In the hash table, elements whose keys have the same jhash() value
are placed into the same bucket. By putting a large number of
elements into a single bucket, bucket_size can be made big enough to
trigger the integer overflow.

Triggering the overflow is possible for callers both with and without
CAP_SYS_ADMIN.

A caller with CAP_SYS_ADMIN can trivially reach this overflow by
enabling BPF_F_ZERO_SEED. Since this flag sets the random seed passed
to jhash() to 0, the caller can easily prepare keys that all hash to
the same value and thus put every element into the same bucket.

If the caller does not have CAP_SYS_ADMIN, BPF_F_ZERO_SEED cannot be
used. However, it is still technically possible to trigger the
overflow by guessing the random 32-bit seed passed to jhash() and
repeating the attempt. In this case, the probability of triggering
the overflow is low and the attack would take a very long time.
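To put a rough number on it: each guess matches the real seed with
probability 2^-32, so on the order of 2^32 (about 4.3 billion)
attempts would be expected before all elements land in one bucket.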

Fix the integer overflow by calling kvmalloc_array() instead of
kvmalloc() to allocate memory.
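
The shape of the fix is sketched below (variable names follow the
batch-lookup code, but this is not a verbatim quote of the patch):

  /* Before: key_size/value_size and bucket_size are u32, so the
   * product wraps in 32-bit arithmetic before kvmalloc() sees it.
   */
  keys   = kvmalloc(key_size * bucket_size, GFP_USER | __GFP_NOWARN);
  values = kvmalloc(value_size * bucket_size, GFP_USER | __GFP_NOWARN);

  /* After: kvmalloc_array() widens both factors to size_t and checks
   * the multiplication, returning NULL instead of a too-small buffer
   * when it would overflow, so the batch operation fails cleanly
   * rather than overrunning the allocation.
   */
  keys   = kvmalloc_array(key_size, bucket_size, GFP_USER | __GFP_NOWARN);
  values = kvmalloc_array(value_size, bucket_size, GFP_USER | __GFP_NOWARN);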

Fixes: 057996380a ("bpf: Add batch ops to all htab bpf map")
Signed-off-by: Tatsuhiko Yasumatsu <th.yasumatsu@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20210806150419.109658-1-th.yasumatsu@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
2021-08-18 08:59:10 +02:00
preload bpf: Fix umd memory leak in copy_process() 2021-03-30 14:32:03 +02:00
arraymap.c bpf: Allow for map-in-map with dynamic inner array map entries 2020-10-11 10:21:04 -07:00
bpf_inode_storage.c bpf: Change inode_storage's lookup_elem return value from NULL to -EBADF 2021-03-30 14:31:56 +02:00
bpf_iter.c bpf: Fix an unitialized value in bpf_iter 2021-03-04 11:37:33 +01:00
bpf_local_storage.c bpf: Use hlist_add_head_rcu when linking to local_storage 2020-09-19 01:12:35 +02:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-03-04 11:37:29 +01:00
bpf_lru_list.h
bpf_lsm.c bpf: Update verification logic for LSM programs 2020-11-06 13:15:21 -08:00
bpf_struct_ops_types.h
bpf_struct_ops.c bpf: Fix fexit trampoline. 2021-04-07 15:00:03 +02:00
btf.c bpf: Forbid trampoline attach for functions with variable arguments 2021-06-16 12:01:35 +02:00
cgroup.c bpf, cgroup: Fix problematic bounds check 2021-02-10 09:29:12 +01:00
core.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-08-04 12:46:44 +02:00
cpumap.c bpf, cpumap: Remove rcpu pointer from cpu_map_build_skb signature 2020-09-28 23:30:42 +02:00
devmap.c bpf, devmap: Use GFP_KERNEL for xdp bulk queue allocation 2021-03-04 11:37:33 +01:00
disasm.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-08-04 12:46:44 +02:00
disasm.h
dispatcher.c
hashtab.c bpf: Fix integer overflow involving bucket_size 2021-08-18 08:59:10 +02:00
helpers.c bpf, lockdown, audit: Fix buggy SELinux lockdown permission checks 2021-06-10 13:39:19 +02:00
inode.c bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET 2021-04-14 08:42:00 +02:00
local_storage.c
lpm_trie.c
Makefile bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE 2020-10-29 20:01:46 -07:00
map_in_map.c
map_in_map.h
map_iter.c
net_namespace.c
offload.c
percpu_freelist.c bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c
queue_stack_maps.c
reuseport_array.c bpf, net: Rework cookie generator as per-cpu one 2020-09-30 11:50:35 -07:00
ringbuf.c bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc() 2021-07-19 09:44:54 +02:00
stackmap.c bpf: Refcount task stack in bpf_get_task_stack 2021-04-14 08:42:01 +02:00
syscall.c bpf: Prevent double bpf_prog_put call from bpf_tracing_prog_attach 2021-01-27 11:55:07 +01:00
sysfs_btf.c bpf: Fix sysfs export of empty BTF section 2020-09-21 21:50:24 +02:00
task_iter.c bpf: Save correct stopping point in file seq iteration 2021-01-19 18:27:28 +01:00
tnum.c
trampoline.c bpf: Fix fexit trampoline. 2021-04-07 15:00:03 +02:00
verifier.c bpf: Fix pointer arithmetic mask tightening under state pruning 2021-08-04 12:46:45 +02:00