kernel_optimize_test/kernel/sched
Vincent Guittot 9f68395333 sched/pelt: Add a new runnable average signal
Now that runnable_load_avg has been removed, we can replace it with a new
signal that will highlight the runnable pressure on a cfs_rq. This signal
tracks the waiting time of tasks on the rq and can help to better define
the state of rqs.

Currently, only util_avg is used to define the state of an rq:
  An rq with utilization above roughly 80% and more than one task is
  considered overloaded.

But the util_avg signal of an rq can become temporarily low after a task
has migrated to another rq, which can bias the classification of the rq.
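
As a rough illustration of that classification (a sketch only; the names
rq_stats and rq_overloaded below are invented for the example and are not
the helpers used in fair.c):

  /*
   * Minimal sketch, not the kernel's actual code: it shows the kind of
   * check described above, where an rq is classified from util_avg and
   * its task count alone.
   */
  #include <stdbool.h>

  struct rq_stats {
          unsigned long util_avg;   /* PELT utilization of the rq */
          unsigned long capacity;   /* CPU capacity               */
          unsigned int  nr_running; /* tasks queued on the rq     */
  };

  static bool rq_overloaded(const struct rq_stats *rq)
  {
          /* more than one task and utilization above ~80% of capacity */
          return rq->nr_running > 1 &&
                 rq->util_avg * 100 > rq->capacity * 80;
  }

A check like this loses information as soon as util_avg dips, even though
tasks may still be queueing up on the rq.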

When tasks compete for the same rq, their runnable average signal will be
higher than util_avg because it includes the waiting time, so we can use
this signal to better classify cfs_rqs.

The new runnable_avg will track the runnable time of a task, which is
simply the waiting time added to the running time. The runnable_avg of a
cfs_rq will be the sum of its entities' runnable_avg, and the runnable_avg
of a group entity will follow that of its rq, similarly to util_avg.
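
A minimal user-space sketch of the idea (not the in-kernel PELT code; the
struct, helper and decay value below are invented for illustration): two
tasks sharing one CPU each settle at a util_avg around 50%, while their
runnable_avg stays near 100%, and the rq-level value is the sum over its
entities:

  /*
   * Minimal sketch, not the in-kernel PELT implementation: the decay
   * factor and window loop are illustrative only.  util_avg grows only
   * while an entity runs, while runnable_avg also grows while the
   * entity waits on the rq.
   */
  #include <stdio.h>

  struct avg {
          double util_avg;      /* grows only while running       */
          double runnable_avg;  /* grows while running or waiting */
  };

  /* decay both signals and account one window of state */
  static void update_avg(struct avg *a, int running, int runnable)
  {
          const double y = 0.9785;  /* illustrative per-window decay */

          a->util_avg     = a->util_avg     * y + (running  ? 1.0 - y : 0.0);
          a->runnable_avg = a->runnable_avg * y + (runnable ? 1.0 - y : 0.0);
  }

  int main(void)
  {
          /* two tasks competing for one CPU: each runs half the time,
           * but both are runnable (running or waiting) all the time */
          struct avg a = { 0 }, b = { 0 };

          for (int w = 0; w < 500; w++) {
                  int a_runs = w & 1;  /* alternate which task runs */

                  update_avg(&a, a_runs, 1);
                  update_avg(&b, !a_runs, 1);
          }

          printf("task A: util=%.2f runnable=%.2f\n", a.util_avg, a.runnable_avg);
          printf("task B: util=%.2f runnable=%.2f\n", b.util_avg, b.runnable_avg);
          /* a cfs_rq-level runnable_avg would be the sum over its entities */
          printf("rq runnable (sum): %.2f\n", a.runnable_avg + b.runnable_avg);
          return 0;
  }

Under this kind of contention, util_avg alone suggests the rq is only half
busy, while runnable_avg keeps reflecting the pressure, which is the
property the classification relies on.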

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Valentin Schneider <valentin.schneider@arm.com>
Cc: Phil Auld <pauld@redhat.com>
Cc: Hillf Danton <hdanton@sina.com>
Link: https://lore.kernel.org/r/20200224095223.13361-9-mgorman@techsingularity.net
2020-02-24 11:36:36 +01:00
autogroup.c
autogroup.h
clock.c sched/clock: Use static_branch_likely() with sched_clock_running 2019-11-29 08:10:54 +01:00
completion.c
core.c sched/pelt: Remove unused runnable load average 2020-02-24 11:36:36 +01:00
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
cpufreq.c cpufreq: Avoid leaving stale IRQ work items during CPU offline 2019-12-12 17:59:43 +01:00
cpupri.c sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cpupri.h sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cputime.c sched/cputime: move rq parameter in irqtime_account_process_tick 2020-01-17 10:19:21 +01:00
deadline.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
debug.c sched/pelt: Add a new runnable average signal 2020-02-24 11:36:36 +01:00
fair.c sched/pelt: Add a new runnable average signal 2020-02-24 11:36:36 +01:00
features.h sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases 2019-10-29 10:01:07 +01:00
idle.c idle: fix spelling mistake "iterrupts" -> "interrupts" 2020-01-17 10:19:22 +01:00
isolation.c genirq, sched/isolation: Isolate from handling managed interrupts 2020-01-22 16:29:49 +01:00
loadavg.c timers/nohz: Update NOHZ load in remote tick 2020-01-28 21:36:44 +01:00
Makefile
membarrier.c
pelt.c sched/pelt: Add a new runnable average signal 2020-02-24 11:36:36 +01:00
pelt.h
psi.c Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2020-02-15 12:51:22 -08:00
rt.c sched/rt: Optimize checking group RT scheduler constraints 2020-01-28 21:37:09 +01:00
sched-pelt.h
sched.h sched/pelt: Add a new runnable average signal 2020-02-24 11:36:36 +01:00
stats.c
stats.h
stop_task.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
swait.c
topology.c sched/topology: Remove SD_BALANCE_WAKE on asymmetric capacity systems 2020-02-20 21:03:14 +01:00
wait_bit.c sched/wait: fix ___wait_var_event(exclusive) 2019-12-17 13:32:50 +01:00
wait.c Add wake_up_interruptible_sync_poll_locked() 2019-10-31 15:12:23 +00:00