<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/kernel, branch Colibri_T30_LinuxImageV2.3Beta1_20140804</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>cgroup: remove synchronize_rcu() from cgroup_attach_{task|proc}()</title>
<updated>2014-07-25T20:31:06+00:00</updated>
<author>
<name>Li Zefan</name>
<email>lizefan@huawei.com</email>
</author>
<published>2013-01-14T09:23:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=bfcf5eebc04870bac5dbeaa923e6cce14266a1de'/>
<id>bfcf5eebc04870bac5dbeaa923e6cce14266a1de</id>
<content type='text'>
These two synchronize_rcu() calls make attaching a task to a cgroup
quite slow, and the cost can't be ignored in some situations.

A real case from Colin Cross: Android uses cgroups heavily to
manage thread priorities, putting threads in a background group
with reduced cpu.shares when they are not visible to the user,
and in a foreground group when they are. Some RPCs from foreground
threads to background threads will temporarily move the background
thread into the foreground group for the duration of the RPC.
This results in many calls to cgroup_attach_task.

In cgroup_attach_task() it's task-&gt;cgroups that is protected by RCU,
and put_css_set() calls kfree_rcu() to free it.

If we remove this synchronize_rcu(), threads in RCU read-side
sections may access their old cgroup via current-&gt;cgroups
concurrently with an rmdir operation, but this is safe.

Without this patch:

 # time for ((i=0; i&lt;50; i++)) { echo $$ &gt; /mnt/sub/tasks; echo $$ &gt; /mnt/tasks; }

real    0m2.524s
user    0m0.008s
sys     0m0.004s

With this patch:

real    0m0.004s
user    0m0.004s
sys     0m0.000s

tj: These synchronize_rcu()s are utterly confused.  synchronize_rcu()
    necessarily has to come between two operations to guarantee that
    the changes made by the former operation are visible to all RCU
    readers before proceeding to the latter operation.  Here, the
    synchronize_rcu()s are at the end of the attach operations with
    nothing beyond them.  Their only effect would be delaying
    completion of write(2) to the sysfs tasks/procs files until all
    RCU readers see the change, which doesn't mean anything.

Signed-off-by: Li Zefan &lt;lizefan@huawei.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Colin Cross &lt;ccross@google.com&gt;

Conflicts:
	kernel/cgroup.c
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
These two synchronize_rcu() calls make attaching a task to a cgroup
quite slow, and the cost can't be ignored in some situations.

A real case from Colin Cross: Android uses cgroups heavily to
manage thread priorities, putting threads in a background group
with reduced cpu.shares when they are not visible to the user,
and in a foreground group when they are. Some RPCs from foreground
threads to background threads will temporarily move the background
thread into the foreground group for the duration of the RPC.
This results in many calls to cgroup_attach_task.

In cgroup_attach_task() it's task-&gt;cgroups that is protected by RCU,
and put_css_set() calls kfree_rcu() to free it.

If we remove this synchronize_rcu(), threads in RCU read-side
sections may access their old cgroup via current-&gt;cgroups
concurrently with an rmdir operation, but this is safe.

Without this patch:

 # time for ((i=0; i&lt;50; i++)) { echo $$ &gt; /mnt/sub/tasks; echo $$ &gt; /mnt/tasks; }

real    0m2.524s
user    0m0.008s
sys     0m0.004s

With this patch:

real    0m0.004s
user    0m0.004s
sys     0m0.000s

tj: These synchronize_rcu()s are utterly confused.  synchronize_rcu()
    necessarily has to come between two operations to guarantee that
    the changes made by the former operation are visible to all RCU
    readers before proceeding to the latter operation.  Here, the
    synchronize_rcu()s are at the end of the attach operations with
    nothing beyond them.  Their only effect would be delaying
    completion of write(2) to the sysfs tasks/procs files until all
    RCU readers see the change, which doesn't mean anything.

Signed-off-by: Li Zefan &lt;lizefan@huawei.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Colin Cross &lt;ccross@google.com&gt;

Conflicts:
	kernel/cgroup.c
</pre>
</div>
</content>
</entry>
<entry>
<title>Merge branch 'l4t/l4t-r16-r2' into colibri</title>
<updated>2014-03-12T15:06:34+00:00</updated>
<author>
<name>Marcel Ziswiler</name>
<email>marcel.ziswiler@toradex.com</email>
</author>
<published>2014-03-12T15:06:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d25f27034e1e3ca1f86c3e748ce0f565f13bff7f'/>
<id>d25f27034e1e3ca1f86c3e748ce0f565f13bff7f</id>
<content type='text'>
Conflicts:
	drivers/media/video/tegra_v4l2_camera.c
reverted to current driver supporting ACM rather than CSI2
	drivers/media/video/videobuf2-dma-nvmap.c
	drivers/video/tegra/host/Makefile
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Conflicts:
	drivers/media/video/tegra_v4l2_camera.c
reverted to current driver supporting ACM rather than CSI2
	drivers/media/video/videobuf2-dma-nvmap.c
	drivers/video/tegra/host/Makefile
</pre>
</div>
</content>
</entry>
<entry>
<title>perf: Treat attr.config as u64 in perf_swevent_init()</title>
<updated>2013-05-22T16:18:11+00:00</updated>
<author>
<name>Preetham Chandru R</name>
<email>pchandru@nvidia.com</email>
</author>
<published>2013-05-15T11:31:46+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c75951a17ef41e0cc108f1ced0213bc2620f6534'/>
<id>c75951a17ef41e0cc108f1ced0213bc2620f6534</id>
<content type='text'>
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting in an out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().

Introduced in commit b0a873ebb ("perf: Register PMU
implementations").

Bug 1289245
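
The check this patch describes can be sketched roughly as follows.
This is a hypothetical Python illustration of the bounds test, not
the kernel code; the function name and PERF_COUNT_SW_MAX value are
assumptions standing in for the real perf_swevent_enabled table:

```python
# Hypothetical sketch: validate the full 64-bit attr.config before
# using it as an array index, instead of truncating it to a 32-bit
# int first, which is what made the out-of-bounds access possible.
PERF_COUNT_SW_MAX = 11  # assumed table size

def swevent_init(config):
    # config arrives from user space as an unsigned 64-bit value;
    # reject anything outside 0 .. PERF_COUNT_SW_MAX - 1 up front.
    # (range membership is an O(1) test for Python ints.)
    if config not in range(PERF_COUNT_SW_MAX):
        return "ENOENT"
    return "ok"
```

A value such as 2**32 + 3, which a 32-bit truncation would wrongly
accept as 3, is rejected here because the full value is compared.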

Signed-off-by: Tommi Rantala &lt;tt.rantala@gmail.com&gt;
Signed-off-by: Preetham Chandru R &lt;pchandru@nvidia.com&gt;
(cherry picked from commit 8176cced706b5e5d15887584150764894e94e02f)
Change-Id: Idde0330d7430f2ba1645f4dfed063c5df9bbb44a
Reviewed-on: http://git-master/r/228851
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Kiran Adduri &lt;kadduri@nvidia.com&gt;
Reviewed-by: Bo Yan &lt;byan@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Trinity discovered that we fail to check all 64 bits of
attr.config passed by user space, resulting in an out-of-bounds
access of the perf_swevent_enabled array in
sw_perf_event_destroy().

Introduced in commit b0a873ebb ("perf: Register PMU
implementations").

Bug 1289245

Signed-off-by: Tommi Rantala &lt;tt.rantala@gmail.com&gt;
Signed-off-by: Preetham Chandru R &lt;pchandru@nvidia.com&gt;
(cherry picked from commit 8176cced706b5e5d15887584150764894e94e02f)
Change-Id: Idde0330d7430f2ba1645f4dfed063c5df9bbb44a
Reviewed-on: http://git-master/r/228851
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Kiran Adduri &lt;kadduri@nvidia.com&gt;
Reviewed-by: Bo Yan &lt;byan@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>arm: workqueue: fix get_pool_nr_running()</title>
<updated>2012-11-26T14:07:11+00:00</updated>
<author>
<name>Marcel Ziswiler</name>
<email>marcel.ziswiler@toradex.com</email>
</author>
<published>2012-11-26T14:07:11+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ab906de0601813dffce00213c6e7a1c7b5fad5fd'/>
<id>ab906de0601813dffce00213c6e7a1c7b5fad5fd</id>
<content type='text'>
Unfortunately, when merging 15 patches NVIDIA missed 3 resp. 4 lines,
which broke the UP case (e.g. on PXA). The offending commit in question is

9dfdd9ac17ac9955b431cb962df3d0492384ba0e

The list of commits the above should have included is as follows:

974271c485a4d8bb801decc616748f90aafb07ec
bd7bdd43dcb81bb08240b9401b36a104f77dc135: just some comments missing
63d95a9150ee3bbd4117fcd609dee40313b454d9
11ebea50dbc1ade5994b2c838a096078d4c02399
4ce62e9e30cacc26885cab133ad1de358dd79f21: 2 lines missing
3270476a6c0ce322354df8679652f060d66526dc: one line missing
6575820221f7a4dd6eadecf7bf83cdd154335eda
f2d5a0ee06c1813f985bb9386f3ccc0d0315720f
403c821d452c03be4ced571ac91339a9d3631b17
6037315269d62bf967286ae2670fdd6b6acedab9
bc2ae0f5bb2f39e6db06a62f9d353e4601a332a1
25511a477657884d2164f338341fa89652610507
3ce63377305b694f53e7dd0c72907591c5344224
628c78e7ea19d5b70d2b6a59030362168cdbe1ad: just some comments missing
8db25e7891a47e03db6f04344a9c92be16e391bb
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Unfortunately, when merging 15 patches NVIDIA missed 3 resp. 4 lines,
which broke the UP case (e.g. on PXA). The offending commit in question is

9dfdd9ac17ac9955b431cb962df3d0492384ba0e

The list of commits the above should have included is as follows:

974271c485a4d8bb801decc616748f90aafb07ec
bd7bdd43dcb81bb08240b9401b36a104f77dc135: just some comments missing
63d95a9150ee3bbd4117fcd609dee40313b454d9
11ebea50dbc1ade5994b2c838a096078d4c02399
4ce62e9e30cacc26885cab133ad1de358dd79f21: 2 lines missing
3270476a6c0ce322354df8679652f060d66526dc: one line missing
6575820221f7a4dd6eadecf7bf83cdd154335eda
f2d5a0ee06c1813f985bb9386f3ccc0d0315720f
403c821d452c03be4ced571ac91339a9d3631b17
6037315269d62bf967286ae2670fdd6b6acedab9
bc2ae0f5bb2f39e6db06a62f9d353e4601a332a1
25511a477657884d2164f338341fa89652610507
3ce63377305b694f53e7dd0c72907591c5344224
628c78e7ea19d5b70d2b6a59030362168cdbe1ad: just some comments missing
8db25e7891a47e03db6f04344a9c92be16e391bb
</pre>
</div>
</content>
</entry>
<entry>
<title>PM QoS: Add disable parameter</title>
<updated>2012-08-22T19:04:18+00:00</updated>
<author>
<name>Antti P Miettinen</name>
<email>amiettinen@nvidia.com</email>
</author>
<published>2012-08-20T16:36:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=064618cb59271bf674f489219028807ff712b241'/>
<id>064618cb59271bf674f489219028807ff712b241</id>
<content type='text'>
For testing purposes it is useful to be able to disable
PM QoS.

Bug 1020898
Bug 917572

Change-Id: I266f5b5730cfe4705197d8b09db7f9eda6766c7c
Signed-off-by: Antti P Miettinen &lt;amiettinen@nvidia.com&gt;
Reviewed-on: http://git-master/r/124667
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Juha Tukkinen &lt;jtukkinen@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
For testing purposes it is useful to be able to disable
PM QoS.

Bug 1020898
Bug 917572

Change-Id: I266f5b5730cfe4705197d8b09db7f9eda6766c7c
Signed-off-by: Antti P Miettinen &lt;amiettinen@nvidia.com&gt;
Reviewed-on: http://git-master/r/124667
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Juha Tukkinen &lt;jtukkinen@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>workqueue: CPU hotplug keep idle workers</title>
<updated>2012-08-13T21:49:54+00:00</updated>
<author>
<name>Mitch Luban</name>
<email>mluban@nvidia.com</email>
</author>
<published>2012-07-25T19:59:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=9dfdd9ac17ac9955b431cb962df3d0492384ba0e'/>
<id>9dfdd9ac17ac9955b431cb962df3d0492384ba0e</id>
<content type='text'>
This change merges two patchsets. The first set,
containing 6 patches, reimplements WQ_HIGHPRI
to use a separate worker_pool. gcwq-&gt;pools[0]
is used for normal priority work and pools[1]
for high priority.

The second patchset contains 9 patches and
reimplements CPU hotplug to keep idle workers.
Updates workqueue CPU hotplug path to use a
disassociated global_cwq, which runs as an
unbound one (WQ_UNBOUND). While this requires
rebinding idle workers, overall hotplug path
is much simpler.

Original patchset:
http://thread.gmane.org/gmane.linux.kernel/1329164

Bug 978010

Change-Id: Ic66ec8848a8d111b5278e63ef6a410846dfd8fcc
Signed-off-by: Mitch Luban &lt;mluban@nvidia.com&gt;
Reviewed-on: http://git-master/r/118387
Reviewed-by: Diwakar Tundlam &lt;dtundlam@nvidia.com&gt;
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani &lt;bnihalani@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
This change merges two patchsets. The first set,
containing 6 patches, reimplements WQ_HIGHPRI
to use a separate worker_pool. gcwq-&gt;pools[0]
is used for normal priority work and pools[1]
for high priority.

The second patchset contains 9 patches and
reimplements CPU hotplug to keep idle workers.
Updates workqueue CPU hotplug path to use a
disassociated global_cwq, which runs as an
unbound one (WQ_UNBOUND). While this requires
rebinding idle workers, overall hotplug path
is much simpler.

Original patchset:
http://thread.gmane.org/gmane.linux.kernel/1329164

Bug 978010

Change-Id: Ic66ec8848a8d111b5278e63ef6a410846dfd8fcc
Signed-off-by: Mitch Luban &lt;mluban@nvidia.com&gt;
Reviewed-on: http://git-master/r/118387
Reviewed-by: Diwakar Tundlam &lt;dtundlam@nvidia.com&gt;
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
GVS: Gerrit_Virtual_Submit
Reviewed-by: Bharat Nihalani &lt;bnihalani@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>kthread: disable preemption during complete()</title>
<updated>2012-07-31T21:58:16+00:00</updated>
<author>
<name>Peter Boonstoppel</name>
<email>pboonstoppel@nvidia.com</email>
</author>
<published>2012-07-19T21:58:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=3e34666ada220c17362c25e0c19d2319db449662'/>
<id>3e34666ada220c17362c25e0c19d2319db449662</id>
<content type='text'>
After a kthread is created it signals the requester using complete()
and enters TASK_UNINTERRUPTIBLE. However, since complete() wakes up
the requesting thread this can cause a preemption. The preemption will
not remove the task from the runqueue (for that schedule() has to be
invoked directly).

This is a problem if directly after kthread creation you try to do a
kthread_bind(), which will block in HZ steps until the thread is off
the runqueue.

This patch disables preemption during complete(), since we call
schedule() directly afterwards, so it will correctly enter
TASK_UNINTERRUPTIBLE. This speeds up kthread creation/binding during
CPU hotplug significantly.

Change-Id: I856ddd4e01ebdb198ba90f343b4a0c5933fd2b23
Signed-off-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
After a kthread is created it signals the requester using complete()
and enters TASK_UNINTERRUPTIBLE. However, since complete() wakes up
the requesting thread this can cause a preemption. The preemption will
not remove the task from the runqueue (for that schedule() has to be
invoked directly).

This is a problem if directly after kthread creation you try to do a
kthread_bind(), which will block in HZ steps until the thread is off
the runqueue.

This patch disables preemption during complete(), since we call
schedule() directly afterwards, so it will correctly enter
TASK_UNINTERRUPTIBLE. This speeds up kthread creation/binding during
CPU hotplug significantly.

Change-Id: I856ddd4e01ebdb198ba90f343b4a0c5933fd2b23
Signed-off-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>sched: unthrottle rt runqueues in __disable_runtime()</title>
<updated>2012-07-11T13:35:11+00:00</updated>
<author>
<name>Peter Boonstoppel</name>
<email>pboonstoppel@nvidia.com</email>
</author>
<published>2012-05-17T22:15:43+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=55b6cb3c764e578ce4141d13fc42b79e2091ce8a'/>
<id>55b6cb3c764e578ce4141d13fc42b79e2091ce8a</id>
<content type='text'>
migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When rt_rq is throttled
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.

Instead unthrottle rt runqueues before migrating tasks.

Bug 976709

Change-Id: Ie3696702abc560fe8ffa7d2fb5dc5d54d532cc0d
Signed-off-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
(cherry picked from commit 4d18ba5765c206bf9f37634f532d97dabd507a58)
Reviewed-on: http://git-master/r/103417
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Aleksandr Frid &lt;afrid@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
migrate_tasks() uses _pick_next_task_rt() to get tasks from the
real-time runqueues to be migrated. When rt_rq is throttled
_pick_next_task_rt() won't return anything, in which case
migrate_tasks() can't move all threads over and gets stuck in an
infinite loop.

Instead unthrottle rt runqueues before migrating tasks.

Bug 976709

Change-Id: Ie3696702abc560fe8ffa7d2fb5dc5d54d532cc0d
Signed-off-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
(cherry picked from commit 4d18ba5765c206bf9f37634f532d97dabd507a58)
Reviewed-on: http://git-master/r/103417
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Aleksandr Frid &lt;afrid@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>scheduler: Re-compute time-average nr_running on read</title>
<updated>2012-07-01T16:15:19+00:00</updated>
<author>
<name>Alex Frid</name>
<email>afrid@nvidia.com</email>
</author>
<published>2012-05-18T19:18:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1802afb2ad9ee1b6c1e11207f3fcdd3a56a1e0f0'/>
<id>1802afb2ad9ee1b6c1e11207f3fcdd3a56a1e0f0</id>
<content type='text'>
Re-compute the time-average nr_running when it is read. This
prevents reading a stale average value when there have been no
run-queue changes for a long time. The new average value is returned
to the reader but not stored, to avoid concurrent writes. Light-weight
sequence counter synchronization is used to assure data consistency
when re-computing the average.
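
The counter-based read synchronization described above can be
sketched roughly as follows. This is a hypothetical Python
illustration of the read-retry pattern, not the kernel's seqcount
API; all names are assumptions:

```python
# Hypothetical sketch: a writer bumps the counter to an odd value
# while updating and back to even when done; a reader retries until
# it observes an even, unchanged counter around its computation.
class SeqCount:
    def __init__(self):
        self.seq = 0

    def write_begin(self):
        self.seq += 1  # odd: update in progress

    def write_end(self):
        self.seq += 1  # even: update complete

def read_consistent(sc, compute):
    while True:
        start = sc.seq
        if start % 2 == 0:        # no writer mid-update
            value = compute()
            if sc.seq == start:   # no writer intervened
                return value
        # writer was active or intervened; retry
```

This lets the reader re-compute the average without taking a lock,
at the cost of an occasional retry.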

Change-Id: I8e4ea1b28ea00b3ddaf6ef7cdcd27866f87d360b
Signed-off-by: Alex Frid &lt;afrid@nvidia.com&gt;
(cherry picked from commit 527a759d9b40bf57958eb002edd2bb82014dab99)
Reviewed-on: http://git-master/r/111637
Reviewed-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Tested-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Re-compute the time-average nr_running when it is read. This
prevents reading a stale average value when there have been no
run-queue changes for a long time. The new average value is returned
to the reader but not stored, to avoid concurrent writes. Light-weight
sequence counter synchronization is used to assure data consistency
when re-computing the average.

Change-Id: I8e4ea1b28ea00b3ddaf6ef7cdcd27866f87d360b
Signed-off-by: Alex Frid &lt;afrid@nvidia.com&gt;
(cherry picked from commit 527a759d9b40bf57958eb002edd2bb82014dab99)
Reviewed-on: http://git-master/r/111637
Reviewed-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Tested-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>scheduler: compute time-average nr_running per run-queue</title>
<updated>2012-07-01T16:15:12+00:00</updated>
<author>
<name>Diwakar Tundlam</name>
<email>dtundlam@nvidia.com</email>
</author>
<published>2012-05-07T22:12:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0b5a8a6f30fe0eb7919294c58ddedaeab069ce2a'/>
<id>0b5a8a6f30fe0eb7919294c58ddedaeab069ce2a</id>
<content type='text'>
Compute the time-average number of running tasks per run-queue for a
trailing window of a fixed time period. The delta add/sub to the
average value is weighted by the amount of time per nr_running value
relative to the total measurement period.
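
The weighting scheme described above can be sketched roughly as
follows. This is a hypothetical Python illustration; the names and
the window length are assumptions, not the kernel's:

```python
# Hypothetical sketch: each nr_running value contributes to the
# average weighted by how long it was in effect, relative to the
# measurement period.
AVE_PERIOD_NS = 250_000_000  # assumed trailing-window length

class RunQueue:
    def __init__(self):
        self.nr_running = 0
        self.ave_nr_running = 0.0
        self.last_update_ns = 0

    def set_nr_running(self, nr, now_ns):
        # Fold the time spent at the old nr_running into the average,
        # weighted by its share of the measurement period, then
        # switch to the new value.
        dt = min(now_ns - self.last_update_ns, AVE_PERIOD_NS)
        weight = dt / AVE_PERIOD_NS
        self.ave_nr_running += (self.nr_running - self.ave_nr_running) * weight
        self.last_update_ns = now_ns
        self.nr_running = nr
```

A value that has been in effect for a full period dominates the
average; a briefly held value contributes proportionally less.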

Change-Id: I076e24ff4ed65bed3b8dd8d2b279a503318071ff
Signed-off-by: Diwakar Tundlam &lt;dtundlam@nvidia.com&gt;
(cherry picked from commit 3a12d7499cee352e8a46eaf700259ba3c733f0e3)
Reviewed-on: http://git-master/r/111635
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Tested-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Compute the time-average number of running tasks per run-queue for a
trailing window of a fixed time period. The delta add/sub to the
average value is weighted by the amount of time per nr_running value
relative to the total measurement period.

Change-Id: I076e24ff4ed65bed3b8dd8d2b279a503318071ff
Signed-off-by: Diwakar Tundlam &lt;dtundlam@nvidia.com&gt;
(cherry picked from commit 3a12d7499cee352e8a46eaf700259ba3c733f0e3)
Reviewed-on: http://git-master/r/111635
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Tested-by: Sai Gurrappadi &lt;sgurrappadi@nvidia.com&gt;
Reviewed-by: Peter Boonstoppel &lt;pboonstoppel@nvidia.com&gt;
Reviewed-by: Yu-Huan Hsu &lt;yhsu@nvidia.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
