804d402fb6
Capacity Awareness refers to the fact that on heterogeneous systems (like Arm big.LITTLE), the capacity of the CPUs is not uniform, hence when placing tasks we need to be aware of this difference in CPU capacities.

In such scenarios we want to ensure that the selected CPU has enough capacity to meet the requirement of the running task. Enough capacity means here that capacity_orig_of(cpu) >= task.requirement.

The definition of task.requirement is dependent on the scheduling class.

For CFS, utilization is used to select a CPU that has a capacity value >= cfs_task.util:

	capacity_orig_of(cpu) >= cfs_task.util

DL isn't capacity aware at the moment, but it can make use of its bandwidth reservation to implement that in a similar manner to how CFS uses utilization. The following patchset implements that:

	https://lore.kernel.org/lkml/20190506044836.2914-1-luca.abeni@santannapisa.it/

	capacity_orig_of(cpu)/SCHED_CAPACITY >= dl_deadline/dl_runtime

For RT we don't have a per-task utilization signal, and we lack any information in general about what performance requirement the RT task needs. But with the introduction of uclamp, RT tasks can now control that by setting uclamp_min to guarantee a minimum performance point.

At the moment the uclamp values are only used for frequency selection; but on heterogeneous systems this is not enough, and we need to ensure that the capacity of the CPU is >= uclamp_min, which is what is implemented here:

	capacity_orig_of(cpu) >= rt_task.uclamp_min

Note that by default uclamp.min is 1024, which means that RT tasks will always be biased towards the big CPUs, which makes for a better, more predictable behavior for the default case.

It must be stressed that the bias acts as a hint rather than a definite placement strategy. For example, if all big cores are busy executing other RT tasks, we can't guarantee that a new RT task will be placed there.

On non-heterogeneous systems the original behavior of RT should be retained. Similarly if uclamp is not selected in the config.

[ mingo: Minor edits to comments. ]

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20191009104611.15363-1-qais.yousef@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
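The RT part of this change boils down to a per-CPU fitness test built from the criterion above. The following is a minimal sketch of such a test, assuming the helpers named in the message (capacity_orig_of(), uclamp_eff_value()) and the sched_asym_cpucapacity static key; the in-tree version in kernel/sched/rt.c may differ in detail:

/*
 * Sketch: a CPU fits an RT task if its original capacity covers the
 * task's effective uclamp_min (never demanding more than uclamp_max).
 */
static inline bool rt_task_fits_capacity(struct task_struct *p, int cpu)
{
	unsigned int min_cap, max_cap, cpu_cap;

	/* Uniform-capacity systems: every CPU fits, keep the old behavior. */
	if (!static_branch_unlikely(&sched_asym_cpucapacity))
		return true;

	min_cap = uclamp_eff_value(p, UCLAMP_MIN);
	max_cap = uclamp_eff_value(p, UCLAMP_MAX);
	cpu_cap = capacity_orig_of(cpu);

	/* Don't require more capacity than uclamp_max allows the task to use. */
	return cpu_cap >= min(min_cap, max_cap);
}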
kernel/sched/cpupri.h (28 lines, 668 B, C)
/* SPDX-License-Identifier: GPL-2.0 */

#define CPUPRI_NR_PRIORITIES	(MAX_RT_PRIO + 2)

#define CPUPRI_INVALID		-1
#define CPUPRI_IDLE		 0
#define CPUPRI_NORMAL		 1
/* values 2-101 are RT priorities 0-99 */

struct cpupri_vec {
	atomic_t		count;	/* number of CPUs in @mask */
	cpumask_var_t		mask;	/* CPUs currently at this priority */
};

struct cpupri {
	struct cpupri_vec	pri_to_cpu[CPUPRI_NR_PRIORITIES];
	int			*cpu_to_pri;	/* per-CPU current priority */
};

#ifdef CONFIG_SMP
int  cpupri_find(struct cpupri *cp, struct task_struct *p,
		 struct cpumask *lowest_mask,
		 bool (*fitness_fn)(struct task_struct *p, int cpu));
void cpupri_set(struct cpupri *cp, int cpu, int pri);
int  cpupri_init(struct cpupri *cp);
void cpupri_cleanup(struct cpupri *cp);
#endif
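The fitness_fn argument added to cpupri_find() above is the hook through which the capacity check is applied while searching for lower-priority CPUs. As a usage sketch only (the helper name find_fitting_lowest_cpu() is hypothetical; the real call site is the lowest-runqueue search in kernel/sched/rt.c, which also applies locality preferences):

/*
 * Usage sketch (hypothetical helper): pick a CPU that is running
 * lower-priority work *and* has enough capacity for @p.
 */
static int find_fitting_lowest_cpu(struct task_struct *p,
				   struct cpumask *lowest_mask)
{
	struct cpupri *cp = &task_rq(p)->rd->cpupri;

	/* Fill lowest_mask with lower-priority CPUs that also fit @p. */
	if (!cpupri_find(cp, p, lowest_mask, rt_task_fits_capacity))
		return -1;

	/* A real caller would prefer the task's previous or a nearby CPU here. */
	return cpumask_first(lowest_mask);
}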