| author | Peter Zijlstra <peterz@infradead.org> | 2014-02-27 10:40:35 +0100 |
|---|---|---|
| committer | Jiri Slaby <jslaby@suse.cz> | 2014-06-27 10:25:15 +0200 |
| commit | 5ff029e2b396ac09fc52addd73bb0f4003c70ef2 (patch) | |
| tree | 9b4f0b606f139249fa09c487d72a06798fac5d84 /kernel | |
| parent | dcc23f13ff973c49651f0f020495a26baec32343 (diff) | |
sched: Make scale_rt_power() deal with backward clocks
commit cadefd3d6cc914d95163ba1eda766bfe7ce1e5b7 upstream.
Mike reported that, while unlikely, it's entirely possible for
scale_rt_power() to see the time go backwards. This yields rather
'interesting' results.
So, like all other sites that deal with clocks, make this one ignore
backward clock movement too.
Reported-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140227094035.GZ9987@twins.programming.kicks-ass.net
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/sched/fair.c | 7 |
1 file changed, 6 insertions, 1 deletion
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 25658d2c68d0..898622244bdf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4404,6 +4404,7 @@ static unsigned long scale_rt_power(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	u64 total, available, age_stamp, avg;
+	s64 delta;
 
 	/*
 	 * Since we're reading these variables without serialization make sure
@@ -4412,7 +4413,11 @@ static unsigned long scale_rt_power(int cpu)
 	age_stamp = ACCESS_ONCE(rq->age_stamp);
 	avg = ACCESS_ONCE(rq->rt_avg);
 
-	total = sched_avg_period() + (rq_clock(rq) - age_stamp);
+	delta = rq_clock(rq) - age_stamp;
+	if (unlikely(delta < 0))
+		delta = 0;
+
+	total = sched_avg_period() + delta;
 
 	if (unlikely(total < avg)) {
 		/* Ensures that power won't end up being negative */
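The pattern the patch applies can be shown with a minimal userspace C sketch, assuming illustrative stand-in names (clock_now and age_stamp for rq_clock(rq) and rq->age_stamp); this is not the kernel code itself:

/*
 * Sketch: why an unsigned clock subtraction misbehaves when the clock
 * moves backwards, and how clamping a signed delta at zero avoids it.
 * clock_now and age_stamp are illustrative stand-ins, not kernel symbols.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
	/* The clock appears to have moved 100 ns backwards. */
	uint64_t age_stamp = 5000000100ULL;
	uint64_t clock_now = 5000000000ULL;

	/* Unsigned subtraction: the backward step wraps to a huge value. */
	uint64_t raw_delta = clock_now - age_stamp;

	/*
	 * Signed delta, clamped at zero, as in the patched scale_rt_power().
	 * The cast relies on the usual two's-complement wrap, as the kernel
	 * code does when assigning a u64 difference to an s64.
	 */
	int64_t delta = (int64_t)(clock_now - age_stamp);
	if (delta < 0)
		delta = 0;

	printf("unclamped delta: %" PRIu64 "\n", raw_delta); /* 18446744073709551516 */
	printf("clamped delta:   %" PRId64 "\n", delta);     /* 0 */
	return 0;
}

With these inputs the unsigned subtraction wraps to 18446744073709551516, while the clamped delta is 0, which is the behaviour the patch gives the scheduler: a backward clock step is treated as no elapsed time.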