commit 0749708352
This makes the newly optimized x86 strncpy_from_user clear the final
bytes in the word past the final NUL character, rather than copy them as
the word they were in the source.
NOTE! Unlike the silly semantics of the libc 'strncpy()' function, the
kernel strncpy_from_user() has never cleared all of the end of the
destination buffer. And neither does it do so now: it only clears the
bytes at the end of the last word it copied.
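As a rough illustration of what clearing "the bytes at the end of the last word" means, here is a minimal sketch. It is not the kernel's actual word-at-a-time code; it assumes a little-endian machine, an 8-byte word, and a hypothetical helper name, with `nul_byte` being the index of the NUL inside that final word:

```c
/*
 * Illustrative sketch only -- not the real strncpy_from_user()
 * implementation.  Assumes little-endian byte order and an 8-byte
 * word; 'nul_byte' is the index (0..7) of the NUL within the last
 * word that was copied.
 */
static unsigned long clear_past_nul(unsigned long word, unsigned int nul_byte)
{
	/* Low-order bits to keep: the NUL and everything before it. */
	unsigned int keep_bits = (nul_byte + 1) * 8;

	if (keep_bits >= 8 * sizeof(word))
		return word;	/* NUL is the last byte: nothing to clear */

	/* Zero every byte past the NUL before the word is stored. */
	return word & ((1UL << keep_bits) - 1);
}
```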
So why make this change at all? It doesn't really cost us anything extra
(we have to calculate the mask to get the length anyway), and it means
that *if* any user actually cares about zeroing the whole buffer, they
can do a "memset()" before the strncpy_from_user(), and we will no
longer write random bytes after the NUL character.
In particular, the buffer contents will now at no point contain random
source data from beyond the end of the string.
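As a hedged example of the calling pattern this enables, a hypothetical caller (not code from the tree) that wants strncpy()-style zero padding of the whole destination could do the memset() itself:

```c
#include <linux/string.h>
#include <linux/uaccess.h>

/*
 * Hypothetical caller, shown only to illustrate the pattern described
 * above: zero the whole buffer first, then copy.  With this change the
 * bytes past the NUL are no longer overwritten with source data, so
 * the padding stays zero.
 */
static long copy_name_zero_padded(char *dst, size_t size,
				  const char __user *src)
{
	memset(dst, 0, size);
	return strncpy_from_user(dst, src, size);	/* length, or -EFAULT */
}
```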
In other words, it makes behavior a bit more repeatable at no new cost,
so it's a small cleanup. I've been carrying this as a patch for the
last few weeks or so in my tree (done at the same time the sign error
was fixed in commit