sched: fix niced_granularity() shift

Fix the shift in niced_granularity(). The bug resulted in under-scheduling of CPU-bound tasks at negative nice levels (and this in turn caused higher-than-necessary latencies in nice-0 tasks).

Signed-off-by: Ingo Molnar <mingo@elte.hu>
This commit is contained in:
parent 7fd0d2dde9
commit a0dc72601d
@@ -291,7 +291,7 @@ niced_granularity(struct sched_entity *curr, unsigned long granularity)
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> WMULT_SHIFT);
+	return (long) (tmp >> (WMULT_SHIFT-NICE_0_SHIFT));
 }
 
 static inline void