kernel_optimize_test/kernel/sched
Vincent Guittot b4c9c9f156 sched/fair: Prefer prev cpu in asymmetric wakeup path
During the fast wakeup path, the scheduler always checks whether the local
or prev CPUs are good candidates for the task before looking at other CPUs
in the domain. With commit b7a331615d ("sched/fair: Add asymmetric CPU
capacity wakeup scan"), heterogeneous systems gained a dedicated path, but
it doesn't try to reuse the prev CPU whenever possible. If the previous CPU
is idle and belongs to the LLC domain, we should check it first, before
looking for another CPU, because it remains one of the best candidates and
this also stabilizes task placement on the system.

This change aligns the asymmetric path's behavior with the symmetric one
and reduces the cases where the task migrates across all CPUs of the
sd_asym_cpucapacity domains at wakeup.

This change does not impact the normal EAS mode; it only affects the
overloaded case, or systems where EAS is not used.
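
For illustration, a minimal sketch of the shape of this check (helper names
such as asym_fits_capacity() are patterned on fair.c; treat this as a
sketch rather than the literal diff):

  /* Does the (uclamped) task utilization fit this CPU's capacity? */
  static inline bool asym_fits_capacity(unsigned long task_util, int cpu)
  {
          return fits_capacity(task_util, capacity_of(cpu));
  }

  /* In select_idle_sibling(): read the uclamped utilization up front. */
  if (static_branch_unlikely(&sched_asym_cpucapacity)) {
          sync_entity_load_avg(&p->se);
          task_util = uclamp_task_util(p);
  }

  /*
   * If prev shares the LLC with target, is idle, and its capacity fits
   * the task, return it instead of scanning the whole domain.
   */
  if (prev != target && cpus_share_cache(prev, target) &&
      (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
      asym_fits_capacity(task_util, prev))
          return prev;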

- On hikey960 with performance governor (EAS disabled)

./perf bench sched pipe -T -l 50000
             mainline           w/ patch
# migrations   999364                  0
ops/sec        149313(+/-0.28%)   182587(+/-0.40%)  +22%

- On hikey with performance governor

./perf bench sched pipe -T -l 50000
             mainline           w/ patch
# migrations        0                  0
ops/sec         47721(+/-0.76%)    47899(+/-0.56%)  +0.4%

According to the tests on hikey, the patch doesn't impact symmetric systems
compared to the current implementation (only tested on arm64).

Also, read the uclamped value of the task's utilization at most twice,
instead of each time we compare the task's utilization with a CPU's
capacity.
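
Illustratively (again a sketch, not the literal diff), the idle-candidate
scan then compares against a cached task_util instead of re-reading the
uclamped value on every iteration:

  /* Before: the uclamped value is re-read for every comparison. */
  if (fits_capacity(uclamp_task_util(p), capacity_of(cpu)))
          return cpu;

  /* After: read it once per scan and reuse the cached value. */
  task_util = uclamp_task_util(p);
  for_each_cpu_wrap(cpu, cpus, target) {
          if ((available_idle_cpu(cpu) || sched_idle_cpu(cpu)) &&
              asym_fits_capacity(task_util, cpu))
                  return cpu;
  }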

Fixes: b7a331615d ("sched/fair: Add asymmetric CPU capacity wakeup scan")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Link: https://lkml.kernel.org/r/20201029161824.26389-1-vincent.guittot@linaro.org
2020-11-10 18:38:48 +01:00
autogroup.c
autogroup.h
clock.c
completion.c
core.c sched/features: Fix !CONFIG_JUMP_LABEL case 2020-10-14 19:55:46 +02:00
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c Power management updates for 5.9-rc1 2020-08-03 20:28:08 -07:00
cpufreq.c
cpupri.c
cpupri.h
cputime.c
deadline.c sched/deadline: Unthrottle PI boosted threads while enqueuing 2020-10-03 16:30:53 +02:00
debug.c sched/topology: Move sd_flag_debug out of #ifdef CONFIG_SYSCTL 2020-09-09 10:09:03 +02:00
fair.c sched/fair: Prefer prev cpu in asymmetric wakeup path 2020-11-10 18:38:48 +01:00
features.h sched/rt: Disable RT_RUNTIME_SHARE by default 2020-09-25 14:23:24 +02:00
idle.c cpuidle: Move trace_cpu_idle() into generic code 2020-08-26 12:41:54 +02:00
isolation.c
loadavg.c sched: nohz: stop passing around unused "ticks" parameter. 2020-07-22 10:22:04 +02:00
Makefile
membarrier.c rseq/membarrier: Add MEMBARRIER_CMD_PRIVATE_EXPEDITED_RSEQ 2020-09-25 14:23:27 +02:00
pelt.c sched: Add a tracepoint to track rq->nr_running 2020-07-08 11:39:02 +02:00
pelt.h
psi.c
rt.c sched: Remove struct sched_class::next field 2020-06-25 13:45:44 +02:00
sched-pelt.h
sched.h sched/features: Fix !CONFIG_JUMP_LABEL case 2020-10-14 19:55:46 +02:00
smp.h
stats.c
stats.h
stop_task.c sched: Remove struct sched_class::next field 2020-06-25 13:45:44 +02:00
swait.c
topology.c Scheduler changes for v5.10: 2020-10-12 12:56:01 -07:00
wait_bit.c
wait.c list: add "list_del_init_careful()" to go with "list_empty_careful()" 2020-08-02 20:39:44 -07:00