d153b15344
Eric reported a sysbench regression against commit:

  3fed382b46 ("sched/numa: Implement NUMA node level wake_affine()")

Similarly, Rik was looking at the NAS-lu.C benchmark, which regressed
against his v3.10 enterprise kernel.

PRE (current tip/master):

 ivb-ep sysbench:

   2: [30 secs]  transactions: 64110  (2136.94 per sec.)
   5: [30 secs]  transactions: 143644 (4787.99 per sec.)
  10: [30 secs]  transactions: 274298 (9142.93 per sec.)
  20: [30 secs]  transactions: 418683 (13955.45 per sec.)
  40: [30 secs]  transactions: 320731 (10690.15 per sec.)
  80: [30 secs]  transactions: 355096 (11834.28 per sec.)

 hsw-ex NAS:

 OMP_PROC_BIND/lu.C.x_threads_144_run_1.log: Time in seconds = 18.01
 OMP_PROC_BIND/lu.C.x_threads_144_run_2.log: Time in seconds = 17.89
 OMP_PROC_BIND/lu.C.x_threads_144_run_3.log: Time in seconds = 17.93
 lu.C.x_threads_144_run_1.log: Time in seconds = 434.68
 lu.C.x_threads_144_run_2.log: Time in seconds = 405.36
 lu.C.x_threads_144_run_3.log: Time in seconds = 433.83

POST (+patch):

 ivb-ep sysbench:

   2: [30 secs]  transactions: 64494  (2149.75 per sec.)
   5: [30 secs]  transactions: 145114 (4836.99 per sec.)
  10: [30 secs]  transactions: 278311 (9276.69 per sec.)
  20: [30 secs]  transactions: 437169 (14571.60 per sec.)
  40: [30 secs]  transactions: 669837 (22326.73 per sec.)
  80: [30 secs]  transactions: 631739 (21055.88 per sec.)

 hsw-ex NAS:

 lu.C.x_threads_144_run_1.log: Time in seconds = 23.36
 lu.C.x_threads_144_run_2.log: Time in seconds = 22.96
 lu.C.x_threads_144_run_3.log: Time in seconds = 22.52

This patch takes out all the shiny wake_affine() stuff and goes back to
utter basics. Between the two CPUs involved with the wakeup (the CPU
doing the wakeup and the CPU we ran on previously) pick the CPU we can
run on _now_.

This restores much of the regressions against the older kernels, but
leaves some ground in the overloaded case. The default-enabled WA_WEIGHT
(which will be introduced in the next patch) is an attempt to address
the overloaded situation.

Reported-by: Eric Farman <farman@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: jinpuwang@gmail.com
Cc: vcaputo@pengaru.com
Fixes: 3fed382b46 ("sched/numa: Implement NUMA node level wake_affine()")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
85 lines | 2.2 KiB | C
/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, true)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks
 */
SCHED_FEAT(START_DEBIT, true)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since its likely going to consume data we
 * touched, increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, false)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt) as that likely will touch the same data, increases
 * cache locality.
 */
SCHED_FEAT(LAST_BUDDY, true)

/*
 * Consider buddies to be cache hot, decreases the likelyness of a
 * cache buddy being migrated away, increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, true)

/*
 * Allow wakeup-time preemption of the current task:
 */
SCHED_FEAT(WAKEUP_PREEMPTION, true)

SCHED_FEAT(HRTICK, false)
SCHED_FEAT(DOUBLE_TICK, false)
SCHED_FEAT(LB_BIAS, true)

/*
 * Decrement CPU capacity based on time not spent running tasks
 */
SCHED_FEAT(NONTASK_CAPACITY, true)

/*
 * Queue remote wakeups on the target CPU and process them
 * using the scheduler IPI. Reduces rq->lock contention/bounces.
 */
SCHED_FEAT(TTWU_QUEUE, true)

/*
 * When doing wakeups, attempt to limit superfluous scans of the LLC domain.
 */
SCHED_FEAT(SIS_AVG_CPU, false)
SCHED_FEAT(SIS_PROP, true)

/*
 * Issue a WARN when we do multiple update_rq_clock() calls
 * in a single rq->lock section. Default disabled because the
 * annotations are not complete.
 */
SCHED_FEAT(WARN_DOUBLE_CLOCK, false)

#ifdef HAVE_RT_PUSH_IPI
/*
 * In order to avoid a thundering herd attack of CPUs that are
 * lowering their priorities at the same time, and there being
 * a single CPU that has an RT task that can migrate and is waiting
 * to run, where the other CPUs will try to take that CPUs
 * rq lock and possibly create a large contention, sending an
 * IPI to that CPU and let that CPU push the RT task to where
 * it should go may be a better scenario.
 */
SCHED_FEAT(RT_PUSH_IPI, true)
#endif

SCHED_FEAT(RT_RUNTIME_SHARE, true)
SCHED_FEAT(LB_MIN, false)
SCHED_FEAT(ATTACH_AGE_LOAD, true)

SCHED_FEAT(WA_IDLE, true)
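Each SCHED_FEAT(name, enabled) entry above is consumed as an X-macro: the scheduler includes this header more than once with different definitions of SCHED_FEAT, once to give every feature a bit index and once to build the default feature mask, and code then tests toggles such as WA_IDLE or TTWU_QUEUE at run time. The standalone sketch below shows the same two-pass pattern under simplified assumptions: FEATURE_TABLE and feat_enabled are hypothetical names, and in the kernel the list is pulled in with #include rather than a wrapper macro.

#include <stdio.h>

/* Hypothetical stand-in for the feature list that features.h provides. */
#define FEATURE_TABLE \
	SCHED_FEAT(GENTLE_FAIR_SLEEPERS, 1) \
	SCHED_FEAT(START_DEBIT, 1) \
	SCHED_FEAT(NEXT_BUDDY, 0)

/* Pass 1: turn every entry into an enum constant, i.e. a bit index. */
#define SCHED_FEAT(name, enabled) __FEAT_##name,
enum { FEATURE_TABLE __FEAT_NR };
#undef SCHED_FEAT

/* Pass 2: fold the default-enabled entries into a bitmask. */
#define SCHED_FEAT(name, enabled) ((unsigned long)(enabled) << __FEAT_##name) |
static const unsigned long default_features = FEATURE_TABLE 0UL;
#undef SCHED_FEAT

/* Runtime toggle check, in the spirit of the kernel's sched_feat(x). */
#define feat_enabled(x) ((default_features & (1UL << __FEAT_##x)) != 0)

int main(void)
{
	printf("GENTLE_FAIR_SLEEPERS enabled: %d\n", (int)feat_enabled(GENTLE_FAIR_SLEEPERS));
	printf("NEXT_BUDDY enabled:           %d\n", (int)feat_enabled(NEXT_BUDDY));
	return 0;
}

Because the bit indices and the default mask are generated from the same list, adding a feature means adding exactly one SCHED_FEAT line to the header, which is why this commit and the follow-up (WA_WEIGHT) each show up here as a single new entry.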