kernel_optimize_test/kernel/sched
Viresh Kumar 323af6deaf sched/fair: Load balance aggressively for SCHED_IDLE CPUs
The fair scheduler performs periodic load balancing on every CPU to check
whether it can pull some tasks from other busy CPUs. The interval of this
periodic load balance is set to sd->balance_interval for idle CPUs, and
for busy CPUs it is calculated by multiplying sd->balance_interval with
sd->busy_factor (set to 32 by default). The multiplication is done for
busy CPUs to avoid balancing too often and to instead spend more time
executing actual tasks. While that is the right thing to do for CPUs busy
with SCHED_OTHER or SCHED_BATCH tasks, it may not be optimal for CPUs
running only SCHED_IDLE tasks.
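
A rough sketch of that interval computation, modeled on the
get_sd_balance_interval() helper in kernel/sched/fair.c around v5.5
(exact clamping details may vary between kernel versions):

    static inline unsigned long
    get_sd_balance_interval(struct sched_domain *sd, int cpu_busy)
    {
        unsigned long interval = sd->balance_interval;

        /* Busy CPUs balance less often: stretch the interval. */
        if (cpu_busy)
            interval *= sd->busy_factor;

        /* Scale milliseconds to jiffies and keep the result bounded. */
        interval = msecs_to_jiffies(interval);
        interval = clamp(interval, 1UL, max_load_balance_interval);

        return interval;
    }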

With the recent enhancements in the fair scheduler around SCHED_IDLE
CPUs, we now prefer to enqueue a newly-woken task to a SCHED_IDLE
CPU instead of other busy or idle CPUs. The same reasoning should be
applied to the load balancer as well to make it migrate tasks more
aggressively to a SCHED_IDLE CPU, as that will reduce the scheduling
latency of the migrated (SCHED_OTHER) tasks.
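
For reference, a "SCHED_IDLE CPU" here means a CPU whose runqueue is
non-empty but contains only SCHED_IDLE tasks. A minimal sketch of that
check, modeled on the sched_idle_cpu() helper in kernel/sched/fair.c
(illustrative, not a verbatim copy):

    /*
     * A CPU counts as SCHED_IDLE when it has runnable tasks but all of
     * them use the SCHED_IDLE policy, so a migrated SCHED_OTHER task
     * would get the CPU almost immediately.
     */
    static int sched_idle_cpu(int cpu)
    {
        struct rq *rq = cpu_rq(cpu);

        return unlikely(rq->nr_running == rq->cfs.idle_h_nr_running &&
                        rq->nr_running);
    }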

This patch makes minimal changes to the fair scheduler to do the next
load balance soon after the last non-SCHED_IDLE task is dequeued from a
runqueue, i.e. as soon as the CPU becomes SCHED_IDLE. Also, sd->busy_factor
is ignored while calculating the balance_interval for such CPUs, to avoid
delaying their periodic load balance by a few hundred milliseconds.
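
A minimal sketch of the dequeue-path change described above, based on the
hunk this commit adds to dequeue_task_fair(); sched_idle_rq() is the
runqueue-wide analogue of the per-CPU check shown earlier, and the
surrounding code is elided:

    static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
    {
        /* Snapshot whether the rq already held only SCHED_IDLE tasks. */
        bool was_sched_idle = sched_idle_rq(rq);

        /* ... existing dequeue of p from the cfs hierarchy ... */

        /*
         * If dequeuing p left only SCHED_IDLE tasks behind, request the
         * next periodic load balance right away instead of waiting for a
         * full (possibly busy_factor-scaled) balance interval.
         */
        if (unlikely(!was_sched_idle && sched_idle_rq(rq)))
            rq->next_balance = jiffies;
    }

The commit also makes the periodic balancer treat a SCHED_IDLE-only CPU as
not busy, so the busy_factor multiplication shown earlier is skipped for it.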

This was tested on an ARM64 HiKey620 platform (octa-core) with the help of
rt-app, and it was verified using kernel traces that the newly SCHED_IDLE
CPU performs load balancing shortly after it becomes SCHED_IDLE and pulls
tasks from other busy CPUs.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/e485827eb8fe7db0943d6f3f6e0f5a4a70272781.1578471925.git.viresh.kumar@linaro.org
2020-01-17 10:19:20 +01:00
autogroup.c
autogroup.h
clock.c sched/clock: Use static_branch_likely() with sched_clock_running 2019-11-29 08:10:54 +01:00
completion.c
core.c sched/uclamp: Make uclamp util helpers use and return UL values 2019-12-25 10:42:08 +01:00
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
cpufreq.c cpufreq: Avoid leaving stale IRQ work items during CPU offline 2019-12-12 17:59:43 +01:00
cpupri.c sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cpupri.h sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cputime.c sched/vtime: Bring up complete kcpustat accessor 2019-11-21 07:33:24 +01:00
deadline.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
debug.c
fair.c sched/fair: Load balance aggressively for SCHED_IDLE CPUs 2020-01-17 10:19:20 +01:00
features.h sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases 2019-10-29 10:01:07 +01:00
idle.c Power management updates for 5.5-rc1 2019-11-26 19:06:44 -08:00
isolation.c
loadavg.c
Makefile
membarrier.c membarrier: Fix RCU locking bug caused by faulty merge 2019-10-01 21:27:50 +02:00
pelt.c sched/fair: Skip calculating @contrib without load 2019-12-17 13:32:51 +01:00
pelt.h
psi.c psi: Fix a division error in psi poll() 2019-12-17 13:32:48 +01:00
rt.c sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
sched-pelt.h
sched.h sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
stats.c
stats.h
stop_task.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
swait.c
topology.c Linux 5.4-rc7 2019-11-11 08:34:59 +01:00
wait_bit.c sched/wait: fix ___wait_var_event(exclusive) 2019-12-17 13:32:50 +01:00
wait.c Add wake_up_interruptible_sync_poll_locked() 2019-10-31 15:12:23 +00:00