commit 39c0cbe215
Entering nohz code on every micro-idle is costing ~10% throughput for
netperf TCP_RR when scheduling cross-cpu. Rate limiting entry fixes this,
but raises ticks a bit. On my Q6600, an idle box goes from ~85
interrupts/sec to 128.

The higher the context switch rate, the more nohz entry costs. With this
patch and some cycle recovery patches in my tree, max cross-cpu context
switch rate is improved by ~16%, a large portion of which is this
ratelimiting.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1268301003.6785.28.camel@marge.simson.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
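The core idea is simple: if a CPU tries to enter nohz idle again less than one tick period after its previous attempt, skip stopping the tick and keep the periodic tick instead, so frequent micro-idles do not pay the nohz enter/exit cost each time. Below is a minimal, self-contained sketch of that ratelimit check in plain C. It is an illustration of the technique, not the kernel patch itself; the names cpu_state and nohz_ratelimited, the HZ=250 tick rate, and the sample timestamps are assumptions made for the example.

```c
/*
 * Illustrative sketch (not the actual kernel code): rate-limit how often a
 * CPU may enter nohz idle.  Entry is refused if the previous attempt was
 * less than one tick period ago.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL
#define HZ           250ULL                  /* assumed tick rate */
#define TICK_NSEC    (NSEC_PER_SEC / HZ)     /* one tick period in ns */

struct cpu_state {
	uint64_t nohz_stamp;                 /* time of last nohz entry attempt */
};

/* Return true if nohz entry should be skipped (ratelimited) this time. */
static bool nohz_ratelimited(struct cpu_state *cpu, uint64_t now_ns)
{
	uint64_t delta = now_ns - cpu->nohz_stamp;

	cpu->nohz_stamp = now_ns;
	return delta < TICK_NSEC;
}

int main(void)
{
	struct cpu_state cpu = { 0 };
	/* Two idles 1 ms apart, then one 10 ms later (a tick is 4 ms at HZ=250). */
	uint64_t samples[] = { 100000000ULL, 101000000ULL, 111000000ULL };

	for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("t=%llu ns: %s\n",
		       (unsigned long long)samples[i],
		       nohz_ratelimited(&cpu, samples[i]) ?
		       "keep tick (ratelimited)" : "enter nohz");
	return 0;
}
```

In this sketch the second idle, arriving only 1 ms after the first, keeps the periodic tick, while the later idle is far enough from the previous attempt to enter nohz as usual.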
Files:

clockevents.c
clocksource.c
jiffies.c
Kconfig
Makefile
ntp.c
tick-broadcast.c
tick-common.c
tick-internal.h
tick-oneshot.c
tick-sched.c
timecompare.c
timeconv.c
timekeeping.c
timer_list.c
timer_stats.c