author    Peter Zijlstra <a.p.zijlstra@chello.nl>  2009-09-07 18:28:05 +0200
committer Ingo Molnar <mingo@elte.hu>              2009-09-07 20:39:06 +0200
commit    71a29aa7b600595d0ef373ea605ac656876d1f2f (patch)
tree      1288e5bd2bd4b5c599c2b8c7649c30987101a8ec /kernel
parent    cdd2ab3de4301728b20efd6225681d3ff591a938 (diff)
sched: Deal with low-load in wake_affine()
wake_affine() would always fail under low-load situations where both
prev and this were idle, because adding a single task will always be a
significant imbalance, even if there's nothing around that could
balance it.

Deal with this by allowing imbalance when there's nothing you can do
about it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
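For a rough sense of what the new test does, here is a minimal, self-contained
C sketch of the balance condition after this patch. It is not the kernel code:
balanced_sketch() is a made-up name, effective_load() is approximated by the
raw task weight (which is what it reduces to without group scheduling), and the
numbers in main() are illustrative only (1024 for a nice-0 task weight, 112 for
a typical allowed-imbalance percentage).

    /*
     * Minimal sketch (not the kernel function) of the wake_affine()
     * balance test after this patch, with effective_load() replaced
     * by the raw task weight.
     */
    #include <stdio.h>

    static int balanced_sketch(unsigned long tl, unsigned long load,
                               unsigned long imbalance, unsigned long weight)
    {
            /*
             * tl        - load on this_cpu; the sync handling earlier in
             *             wake_affine() may already have subtracted the
             *             waker's weight, leaving 0
             * load      - load on prev_cpu
             * imbalance - allowed imbalance as a percentage (> 100)
             * weight    - load weight of the waking task p
             *
             * The added "!tl" term accepts the wakeup when this_cpu carries
             * no load at all: placing the task there is formally an
             * imbalance, but no other placement could do better.
             */
            return !tl || 100 * (tl + weight) <= imbalance * load;
    }

    int main(void)
    {
            /* Both CPUs idle: the old test always failed here, the new one passes. */
            printf("low-load: %d\n", balanced_sketch(0, 0, 112, 1024));

            /* this_cpu already loaded, prev_cpu idle: still rejected. */
            printf("loaded:   %d\n", balanced_sketch(2048, 0, 112, 1024));

            return 0;
    }

With tl == 0 the weighted comparison is skipped entirely, which is exactly the
low-load case the changelog describes: waking the task on this_cpu creates an
"imbalance" that nothing else could resolve anyway.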
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched_fair.c  12
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index d7fda41ddaf0..cc97ea498f24 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1262,7 +1262,17 @@ wake_affine(struct sched_domain *this_sd, struct rq *this_rq,
 	tg = task_group(p);
 	weight = p->se.load.weight;

-	balanced = 100*(tl + effective_load(tg, this_cpu, weight, weight)) <=
+	/*
+	 * In low-load situations, where prev_cpu is idle and this_cpu is idle
+	 * due to the sync cause above having dropped tl to 0, we'll always have
+	 * an imbalance, but there's really nothing you can do about that, so
+	 * that's good too.
+	 *
+	 * Otherwise check if either cpus are near enough in load to allow this
+	 * task to be woken on this_cpu.
+	 */
+	balanced = !tl ||
+		100*(tl + effective_load(tg, this_cpu, weight, weight)) <=
 		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));

 	/*