sched/fair: Make select_idle_cpu() more aggressive
Kitsunyan reported desktop latency issues on his Celeron 887 because
of commit:
  1b568f0aab ("sched/core: Optimize SCHED_SMT")
... even though his CPU doesn't do SMT.
The effect of running the SMT code on a !SMT part is basically a more
aggressive select_idle_cpu(). Removing the avg condition fixed things
for him.
I also know FB likes this test gone, even though other workloads like
having it.
For now, take it out by default, until we get a better idea.
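
(For illustration only: a minimal user-space sketch of the condition the patch puts behind the new feature bit. The flag variable and the sample numbers below are made up, and this is not the kernel code itself. With the bit clear, the new default, select_idle_cpu() always pays for the LLC scan; with it set, a CPU whose average idle time, scaled down by the 512 fuzz factor, is still shorter than the average scan cost bails out early.)

  /*
   * User-space sketch, not kernel code: names and numbers are illustrative.
   * skip_idle_scan() mirrors the gated bail-out in select_idle_cpu().
   */
  #include <stdbool.h>
  #include <stdio.h>

  static bool sis_avg_cpu = false;    /* stand-in for sched_feat(SIS_AVG_CPU) */

  /*
   * Skip the LLC scan only when the feature is on and the CPU's average
   * idle time, divided by the 512 fuzz factor, is still shorter than the
   * average cost of a scan.
   */
  static bool skip_idle_scan(unsigned long long avg_idle,
                             unsigned long long avg_cost)
  {
          return sis_avg_cpu && (avg_idle / 512) < avg_cost;
  }

  int main(void)
  {
          unsigned long long avg_idle = 200000;   /* 200us average idle (ns)   */
          unsigned long long avg_cost = 1500;     /* 1.5us average scan cost   */

          printf("feature off (new default): skip scan? %d\n",
                 skip_idle_scan(avg_idle, avg_cost));

          sis_avg_cpu = true;                     /* old, pre-patch behaviour  */
          printf("feature on  (old behaviour): skip scan? %d\n",
                 skip_idle_scan(avg_idle, avg_cost));
          return 0;
  }
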
Reported-by: kitsunyan <kitsunyan@inbox.ru>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chris Mason <clm@fb.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
commit 4c77b18cf8
parent 4977ab6e92
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5797,7 +5797,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 	 * Due to large variance we need a large fuzz factor; hackbench in
 	 * particularly is sensitive here.
 	 */
-	if ((avg_idle / 512) < avg_cost)
+	if (sched_feat(SIS_AVG_CPU) && (avg_idle / 512) < avg_cost)
 		return -1;
 
 	time = local_clock();
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -51,6 +51,11 @@ SCHED_FEAT(NONTASK_CAPACITY, true)
  */
 SCHED_FEAT(TTWU_QUEUE, true)
 
+/*
+ * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
+ */
+SCHED_FEAT(SIS_AVG_CPU, false)
+
 #ifdef HAVE_RT_PUSH_IPI
 /*
  * In order to avoid a thundering herd attack of CPUs that are
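
For context, a SCHED_FEAT() entry like the one added above is consumed via an x-macro pattern: the same feature list is expanded once into bit indices and once into a default bitmask, and sched_feat(SIS_AVG_CPU) simply tests the resulting bit. The stand-alone user-space sketch below illustrates that pattern with a shortened, made-up feature list; the real kernel additionally wires the bits into static keys and a debugfs toggle.

/*
 * Stand-alone sketch of the x-macro pattern behind SCHED_FEAT(); the real
 * kernel's feature list is much longer and also feeds static keys and a
 * debugfs interface. Compile as ordinary user-space C.
 */
#include <stdio.h>

/* Stand-in for kernel/sched/features.h, shortened to two entries. */
#define FEATURE_LIST			\
	SCHED_FEAT(TTWU_QUEUE, 1)	\
	SCHED_FEAT(SIS_AVG_CPU, 0)

/* First expansion: one enum constant (a bit index) per feature. */
enum {
#define SCHED_FEAT(name, enabled) __SCHED_FEAT_##name,
	FEATURE_LIST
#undef SCHED_FEAT
	__SCHED_FEAT_NR,
};

/* Second expansion: fold the per-feature defaults into one bitmask. */
#define SCHED_FEAT(name, enabled) (1U * (enabled) << __SCHED_FEAT_##name) |
static unsigned int sysctl_sched_features = FEATURE_LIST 0;
#undef SCHED_FEAT

/* The test used by the patched condition in select_idle_cpu(). */
#define sched_feat(x) (sysctl_sched_features & (1U << __SCHED_FEAT_##x))

int main(void)
{
	printf("TTWU_QUEUE:  %s\n", sched_feat(TTWU_QUEUE) ? "on" : "off");
	printf("SIS_AVG_CPU: %s\n", sched_feat(SIS_AVG_CPU) ? "on" : "off");
	return 0;
}

On kernels built with CONFIG_SCHED_DEBUG the default can still be overridden at runtime, typically by writing SIS_AVG_CPU (or NO_SIS_AVG_CPU) to the sched_features file under debugfs, so workloads that do like the cutoff can get it back without a rebuild.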