kernel_optimize_test/kernel/sched
Vincent Guittot a4f9a0e51b sched/fair: Remove redundant call to cpufreq_update_util()
With commit

  bef69dd878 ("sched/cpufreq: Move the cfs_rq_util_change() call to cpufreq_update_util()")

update_load_avg() has become the central point for calling cpufreq
(not including the update of blocked load). This change further
reduces the number of calls to cpufreq_update_util() and removes the
last redundant ones. With update_load_avg(), we are now sure that
cpufreq_update_util() is called after every task attachment to a
cfs_rq, and in particular after this event has been propagated down
to the util_avg of the root cfs_rq, which is the level that cpufreq
governors such as schedutil use to set the frequency of a CPU.
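
As an illustration only (not the actual fair.c code), a minimal,
self-contained sketch of this call shape, with toy types and a printf
standing in for the real governor hook, could look like:

  /* Stand-in types; the real struct rq/struct cfs_rq live in kernel/sched/sched.h. */
  #include <stdio.h>

  struct cfs_rq { unsigned long util_avg; };
  struct rq { struct cfs_rq cfs; /* root cfs_rq of this CPU */ };

  /* Toy stand-in for cpufreq_update_util(): poke the governor (e.g. schedutil). */
  static void cpufreq_update_util(struct rq *rq, unsigned int flags)
  {
          printf("governor sees root util_avg=%lu (flags=%u)\n",
                 rq->cfs.util_avg, flags);
  }

  /*
   * Only the root cfs_rq reflects the CPU-wide utilization the governor
   * uses, so only it triggers a cpufreq update. (The real helper takes
   * just (cfs_rq, flags) and derives rq via rq_of(); the extra rq
   * argument here is an artifact of the toy model.)
   */
  static void cfs_rq_util_change(struct rq *rq, struct cfs_rq *cfs_rq,
                                 unsigned int flags)
  {
          if (cfs_rq == &rq->cfs)
                  cpufreq_update_util(rq, flags);
  }

  int main(void)
  {
          struct rq rq = { .cfs = { .util_avg = 0 } };
          struct cfs_rq group_cfs_rq = { .util_avg = 256 };

          /* Attachment lands in a child cgroup first: no cpufreq call yet. */
          cfs_rq_util_change(&rq, &group_cfs_rq, 0);

          /* Change propagated to the root cfs_rq: exactly one cpufreq call. */
          rq.cfs.util_avg += group_cfs_rq.util_avg;
          cfs_rq_util_change(&rq, &rq.cfs, 0);
          return 0;
  }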

The SCHED_CPUFREQ_MIGRATION flag forces an early call to cpufreq when
a migration happens inside a cgroup, while the util_avg of the root
cfs_rq has not yet been updated; that call is then duplicated by the
one made immediately afterwards, when the migration event reaches the
root cfs_rq. The dedicated SCHED_CPUFREQ_MIGRATION flag is therefore
useless and can be removed, and the interface of
attach_entity_load_avg() can be simplified accordingly.
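
Continuing the toy sketch above (signatures follow the toy helpers,
not the kernel's; the real attach_entity_load_avg() also updates
load_avg, runnable and propagation state), the attach path no longer
forwards a migration flag, and a single unflagged update fires only
once the change reaches the root cfs_rq:

  struct sched_entity { unsigned long util_avg; };

  /* New shape: no 'int flags' argument, no SCHED_CPUFREQ_MIGRATION forwarding. */
  static void attach_entity_load_avg(struct rq *rq, struct cfs_rq *cfs_rq,
                                     struct sched_entity *se)
  {
          cfs_rq->util_avg += se->util_avg;  /* fold the entity into this cfs_rq */
          cfs_rq_util_change(rq, cfs_rq, 0); /* plain update; fires only at the root */
  }

  /* Usage, e.g. from main() in the sketch above:
   *   struct sched_entity se = { .util_avg = 128 };
   *   attach_entity_load_avg(&rq, &rq.cfs, &se);
   */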

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lkml.kernel.org/r/1579083620-24943-1-git-send-email-vincent.guittot@linaro.org
2020-01-17 10:19:22 +01:00
autogroup.c
autogroup.h
clock.c sched/clock: Use static_branch_likely() with sched_clock_running 2019-11-29 08:10:54 +01:00
completion.c
core.c sched/core: Fix size of rq::uclamp initialization 2020-01-17 10:19:20 +01:00
cpuacct.c
cpudeadline.c
cpudeadline.h
cpufreq_schedutil.c sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
cpufreq.c cpufreq: Avoid leaving stale IRQ work items during CPU offline 2019-12-12 17:59:43 +01:00
cpupri.c sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cpupri.h sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
cputime.c sched/cputime: move rq parameter in irqtime_account_process_tick 2020-01-17 10:19:21 +01:00
deadline.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
debug.c sched/debug: Reset watchdog on all CPUs while processing sysrq-t 2020-01-17 10:19:20 +01:00
fair.c sched/fair: Remove redundant call to cpufreq_update_util() 2020-01-17 10:19:22 +01:00
features.h sched/fair/util_est: Implement faster ramp-up EWMA on utilization increases 2019-10-29 10:01:07 +01:00
idle.c Power management updates for 5.5-rc1 2019-11-26 19:06:44 -08:00
isolation.c sched/isolation: Prefer housekeeping CPU in local node 2019-07-25 15:51:55 +02:00
loadavg.c
Makefile
membarrier.c membarrier: Fix RCU locking bug caused by faulty merge 2019-10-01 21:27:50 +02:00
pelt.c sched/fair: Skip calculating @contrib without load 2019-12-17 13:32:51 +01:00
pelt.h
psi.c sched/psi: create /proc/pressure and /proc/pressure/{io|memory|cpu} only when psi enabled 2020-01-17 10:19:22 +01:00
rt.c sched/rt: Make RT capacity-aware 2019-12-25 10:42:10 +01:00
sched-pelt.h
sched.h sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with() 2019-12-25 10:42:08 +01:00
stats.c
stats.h sched/stats: Fix unlikely() use of sched_info_on() 2019-07-25 15:51:55 +02:00
stop_task.c sched/core: Further clarify sched_class::set_next_task() 2019-11-11 08:35:21 +01:00
swait.c
topology.c Linux 5.4-rc7 2019-11-11 08:34:59 +01:00
wait_bit.c sched/wait: fix ___wait_var_event(exclusive) 2019-12-17 13:32:50 +01:00
wait.c Add wake_up_interruptible_sync_poll_locked() 2019-10-31 15:12:23 +00:00