ext4_writepages() on an encrypted file has to encrypt the data, but it
can't modify the pagecache pages in-place, so it encrypts the data into
bounce pages and writes those instead. All bounce pages are allocated
from a mempool using GFP_NOFS.
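
As context, the bounce pages come from fscrypt's bounce-page mempool. A minimal sketch of that allocation path, assuming the helper and pool names from fs/crypto/crypto.c and the default of 32 preallocated pages:

    /*
     * Sketch of fscrypt's bounce-page allocation (cf. fs/crypto/crypto.c).
     * fscrypt_bounce_page_pool is preallocated with a fixed number of pages
     * (32 by default, assumed here); gfp_flags comes from the filesystem,
     * and ext4 passed GFP_NOFS for every page.
     */
    #include <linux/gfp.h>
    #include <linux/mempool.h>

    static mempool_t *fscrypt_bounce_page_pool;

    struct page *fscrypt_alloc_bounce_page(gfp_t gfp_flags)
    {
            return mempool_alloc(fscrypt_bounce_page_pool, gfp_flags);
    }
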
This is not correct use of a mempool, and it can deadlock. This is
because GFP_NOFS includes __GFP_DIRECT_RECLAIM, which enables the "never
fail" mode for mempool_alloc() where a failed allocation will fall back
to waiting for one of the preallocated elements in the pool.
But since this mode is used for all of a bio's pages and not just the
first, it can deadlock waiting for pages already in the bio to be freed.
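
To make the "never fail" mode concrete, here is a condensed sketch of
mempool_alloc()'s fallback logic; locking is omitted and the two helpers
named below are hypothetical stand-ins, not the exact mm/mempool.c code:

    /*
     * Condensed sketch of mempool_alloc()'s fallback behaviour.  Locking is
     * omitted and take_preallocated_element()/wait_for_mempool_free() are
     * hypothetical stand-ins for the real mm/mempool.c internals.
     */
    #include <linux/mempool.h>

    void *take_preallocated_element(mempool_t *pool);
    void wait_for_mempool_free(mempool_t *pool);

    void *mempool_alloc_sketch(mempool_t *pool, gfp_t gfp_mask)
    {
            void *element;

            for (;;) {
                    /* Try the underlying allocator first, without direct reclaim. */
                    element = pool->alloc(gfp_mask & ~__GFP_DIRECT_RECLAIM,
                                          pool->pool_data);
                    if (element)
                            return element;

                    /* Fall back to a preallocated element if any are left. */
                    if (pool->curr_nr > 0)
                            return take_preallocated_element(pool);

                    /* Callers without __GFP_DIRECT_RECLAIM accept failure. */
                    if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
                            return NULL;

                    /*
                     * "Never fail" mode: sleep until mempool_free() returns an
                     * element to the pool.  If the only elements that will ever
                     * come back are the bounce pages already attached to the bio
                     * being built, this wait never ends -- the deadlock above.
                     */
                    wait_for_mempool_free(pool);
            }
    }
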
This deadlock can be reproduced by patching mempool_alloc() to pretend
that pool->alloc() always fails (so that it always falls back to the
preallocations), and then creating an encrypted file of size > 128 KiB.
Fix it by only using GFP_NOFS for the first page in the bio. For
subsequent pages just use GFP_NOWAIT, and if any of those fail, just
submit the bio and start a new one.
This will need to be fixed in f2fs too, but that's less straightforward.
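
For ext4's writeback path, the fix amounts to something like the following
sketch, modeled on ext4_bio_write_page() in fs/ext4/page-io.c; the
surrounding function is condensed into a hypothetical helper, and the
WB_SYNC_ALL / redirty-on-failure handling is left out:

    /*
     * Sketch of the fixed allocation strategy, modeled on
     * ext4_bio_write_page() in fs/ext4/page-io.c.  get_bounce_page() is a
     * hypothetical helper wrapping the relevant lines; the real code also
     * handles WB_SYNC_ALL writeback and redirties the page on failure.
     */
    #include <linux/fscrypt.h>
    #include "ext4.h"

    static struct page *get_bounce_page(struct ext4_io_submit *io,
                                        struct page *page,
                                        unsigned int enc_bytes)
    {
            /* Only the first page of the bio may use the "never fail" mode. */
            gfp_t gfp_flags = io->io_bio ? GFP_NOWAIT | __GFP_NOWARN : GFP_NOFS;
            struct page *bounce_page;

    retry:
            bounce_page = fscrypt_encrypt_pagecache_blocks(page, enc_bytes, 0,
                                                           gfp_flags);
            if (IS_ERR(bounce_page) && PTR_ERR(bounce_page) == -ENOMEM &&
                io->io_bio) {
                    /*
                     * Submit the bio built so far so that its bounce pages go
                     * back to the mempool, then retry this page with GFP_NOFS,
                     * which may now wait safely since we hold no pool elements.
                     */
                    ext4_io_submit(io);
                    gfp_flags = GFP_NOFS;
                    goto retry;
            }
            return bounce_page;     /* may be an ERR_PTR the caller handles */
    }

Submitting the partially built bio before retrying is the key step: the
submitted bio's bounce pages are eventually freed back to the pool, which is
what guarantees forward progress for the waiting GFP_NOFS retry.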
Fixes: