forked from luck/tmp_suning_uos_patched
sched/fair: Reset nr_balance_failed after active balancing
To force a task migration during active balancing, nr_balance_failed is set to cache_nice_tries + 1. However, nr_balance_failed is not reset afterwards. As a side effect, on the next regular load balance under the same sd, a cache-hot task might be migrated just because the nr_balance_failed count is high.

Resetting nr_balance_failed after a successful active balance ensures that a hot task is not unreasonably migrated. This can be verified by looking at the number of hot task migrations reported by /proc/schedstat.

Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1458735884-30105-1-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
This commit is contained in:
parent d740037fac
commit d02c071183
@@ -7399,10 +7399,7 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 					&busiest->active_balance_work);
 			}
 
-			/*
-			 * We've kicked active balancing, reset the failure
-			 * counter.
-			 */
+			/* We've kicked active balancing, force task migration. */
 			sd->nr_balance_failed = sd->cache_nice_tries+1;
 		}
 	} else
@@ -7637,10 +7634,13 @@ static int active_load_balance_cpu_stop(void *data)
 		schedstat_inc(sd, alb_count);
 
 		p = detach_one_task(&env);
-		if (p)
+		if (p) {
 			schedstat_inc(sd, alb_pushed);
-		else
+			/* Active balancing done, reset the failure counter. */
+			sd->nr_balance_failed = 0;
+		} else {
 			schedstat_inc(sd, alb_failed);
+		}
 	}
 	rcu_read_unlock();
 out_unlock: