kernel_optimize_test/arch/x86/lib
Linus Torvalds 0749708352 x86: make word-at-a-time strncpy_from_user clear bytes at the end
This makes the newly optimized x86 strncpy_from_user clear the final
bytes in the word past the final NUL character, rather than copy them as
the word they were in the source.

NOTE! Unlike the silly semantics of the libc 'strncpy()' function, the
kernel strncpy_from_user() has never cleared all of the end of the
destination buffer.  And neither does it do so now: it only clears the
bytes at the end of the last word it copied.

So why make this change at all? It doesn't really cost us anything extra
(we have to calculate the mask to get the length anyway), and it means
that *if* any user actually cares about zeroing the whole buffer, they
can do a "memset()" before the strncpy_from_user(), and we will no
longer write random bytes after the NUL character.

In particular, the buffer contents will now at no point contain random
source data from beyond the end of the string.

In other words, it makes behavior a bit more repeatable at no new cost,
so it's a small cleanup.  I've been carrying this as a patch in my tree
for the last few weeks or so (done at the same time the sign error was
fixed in commit 12e993b894), so I might as well commit it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-04-28 14:27:38 -07:00
.gitignore
atomic64_32.c x86: Adjust asm constraints in atomic64 wrappers 2012-01-20 17:29:31 -08:00
atomic64_386_32.S x86: atomic64 assembly improvements 2012-01-20 17:29:49 -08:00
atomic64_cx8_32.S x86: atomic64 assembly improvements 2012-01-20 17:29:49 -08:00
cache-smp.c
checksum_32.S
clear_page_64.S x86, mem: clear_page_64.S: Support clear_page() with enhanced REP MOVSB/STOSB 2011-05-17 15:40:27 -07:00
cmpxchg.c
cmpxchg8b_emu.S
cmpxchg16b_emu.S
copy_page_64.S x86-64: Slightly shorten copy_page() 2012-01-06 12:25:37 +01:00
copy_user_64.S x86, 64-bit: Fix copy_[to/from]_user() checks for the userspace address limit 2011-05-18 12:49:00 +02:00
copy_user_nocache_64.S
csum-copy_64.S
csum-partial_64.c
csum-wrappers_64.c
delay.c x86: Derandom delay_tsc for 64 bit 2012-03-09 12:43:27 -08:00
getuser.S
inat.c x86: Fix to decode grouped AVX with VEX pp bits 2012-02-11 15:11:35 +01:00
insn.c x86: Handle failures of parsing immediate operands in the instruction decoder 2012-04-16 08:56:11 +02:00
iomap_copy_64.S
Makefile Merge branch 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip 2011-07-22 17:02:24 -07:00
memcpy_32.c
memcpy_64.S x86-64: Handle byte-wise tail copying in memcpy() without a loop 2012-01-26 21:19:20 +01:00
memmove_64.S x86: Make alternative instruction pointers relative 2011-07-13 11:22:56 -07:00
memset_64.S x86-64: Fix memset() to support sizes of 4Gb and above 2012-01-26 11:50:04 +01:00
mmx_32.c
msr-reg-export.c
msr-reg.S
msr-smp.c
msr.c
putuser.S
rwlock.S x86: Fix write lock scalability 64-bit issue 2011-07-21 09:03:36 +02:00
rwsem.S x86: Unify rwsem assembly implementation 2011-07-21 09:03:32 +02:00
string_32.c x86/i386: Use less assembly in strlen(), speed things up a bit 2011-12-12 18:33:42 +01:00
strstr_32.c
thunk_32.S
thunk_64.S x86: Fix write lock scalability 64-bit issue 2011-07-21 09:03:36 +02:00
usercopy_32.c x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up 2012-04-11 09:41:28 -07:00
usercopy_64.c x86: merge 32/64-bit versions of 'strncpy_from_user()' and speed it up 2012-04-11 09:41:28 -07:00
usercopy.c x86: make word-at-a-time strncpy_from_user clear bytes at the end 2012-04-28 14:27:38 -07:00
x86-opcode-map.txt Merge branches 'sched-urgent-for-linus', 'perf-urgent-for-linus' and 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2012-01-19 14:53:06 -08:00