<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/kernel/rcutree.h, branch v3.10.2</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>rcu: Don't call wakeup() with rcu_node structure -&gt;lock held</title>
<updated>2013-06-10T20:37:11+00:00</updated>
<author>
<name>Steven Rostedt</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-05-28T21:32:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=016a8d5be6ddcc72ef0432d82d9f6fa34f61b907'/>
<id>016a8d5be6ddcc72ef0432d82d9f6fa34f61b907</id>
<content type='text'>
This commit fixes a lockdep-detected deadlock by moving a wake_up()
call out of an rnp-&gt;lock critical section.  Please see below for
the long version of this story.

On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:

&gt; [12572.705832] ======================================================
&gt; [12572.750317] [ INFO: possible circular locking dependency detected ]
&gt; [12572.796978] 3.10.0-rc3+ #39 Not tainted
&gt; [12572.833381] -------------------------------------------------------
&gt; [12572.862233] trinity-child17/31341 is trying to acquire lock:
&gt; [12572.870390]  (rcu_node_0){..-.-.}, at: [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12572.878859]
&gt; but task is already holding lock:
&gt; [12572.894894]  (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff811390ed&gt;] perf_lock_task_context+0x7d/0x2d0
&gt; [12572.903381]
&gt; which lock already depends on the new lock.
&gt;
&gt; [12572.927541]
&gt; the existing dependency chain (in reverse order) is:
&gt; [12572.943736]
&gt; -&gt; #4 (&amp;ctx-&gt;lock){-.-...}:
&gt; [12572.960032]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12572.968337]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12572.976633]        [&lt;ffffffff8113c987&gt;] __perf_event_task_sched_out+0x2e7/0x5e0
&gt; [12572.984969]        [&lt;ffffffff81088953&gt;] perf_event_task_sched_out+0x93/0xa0
&gt; [12572.993326]        [&lt;ffffffff816ea0bf&gt;] __schedule+0x2cf/0x9c0
&gt; [12573.001652]        [&lt;ffffffff816eacfe&gt;] schedule_user+0x2e/0x70
&gt; [12573.009998]        [&lt;ffffffff816ecd64&gt;] retint_careful+0x12/0x2e
&gt; [12573.018321]
&gt; -&gt; #3 (&amp;rq-&gt;lock){-.-.-.}:
&gt; [12573.034628]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.042930]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.051248]        [&lt;ffffffff8108e6a7&gt;] wake_up_new_task+0xb7/0x260
&gt; [12573.059579]        [&lt;ffffffff810492f5&gt;] do_fork+0x105/0x470
&gt; [12573.067880]        [&lt;ffffffff81049686&gt;] kernel_thread+0x26/0x30
&gt; [12573.076202]        [&lt;ffffffff816cee63&gt;] rest_init+0x23/0x140
&gt; [12573.084508]        [&lt;ffffffff81ed8e1f&gt;] start_kernel+0x3f1/0x3fe
&gt; [12573.092852]        [&lt;ffffffff81ed856f&gt;] x86_64_start_reservations+0x2a/0x2c
&gt; [12573.101233]        [&lt;ffffffff81ed863d&gt;] x86_64_start_kernel+0xcc/0xcf
&gt; [12573.109528]
&gt; -&gt; #2 (&amp;p-&gt;pi_lock){-.-.-.}:
&gt; [12573.125675]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.133829]        [&lt;ffffffff816ebe9b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt; [12573.141964]        [&lt;ffffffff8108e881&gt;] try_to_wake_up+0x31/0x320
&gt; [12573.150065]        [&lt;ffffffff8108ebe2&gt;] default_wake_function+0x12/0x20
&gt; [12573.158151]        [&lt;ffffffff8107bbf8&gt;] autoremove_wake_function+0x18/0x40
&gt; [12573.166195]        [&lt;ffffffff81085398&gt;] __wake_up_common+0x58/0x90
&gt; [12573.174215]        [&lt;ffffffff81086909&gt;] __wake_up+0x39/0x50
&gt; [12573.182146]        [&lt;ffffffff810fc3da&gt;] rcu_start_gp_advanced.isra.11+0x4a/0x50
&gt; [12573.190119]        [&lt;ffffffff810fdb09&gt;] rcu_start_future_gp+0x1c9/0x1f0
&gt; [12573.198023]        [&lt;ffffffff810fe2c4&gt;] rcu_nocb_kthread+0x114/0x930
&gt; [12573.205860]        [&lt;ffffffff8107a91d&gt;] kthread+0xed/0x100
&gt; [12573.213656]        [&lt;ffffffff816f4b1c&gt;] ret_from_fork+0x7c/0xb0
&gt; [12573.221379]
&gt; -&gt; #1 (&amp;rsp-&gt;gp_wq){..-.-.}:
&gt; [12573.236329]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.243783]        [&lt;ffffffff816ebe9b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt; [12573.251178]        [&lt;ffffffff810868f3&gt;] __wake_up+0x23/0x50
&gt; [12573.258505]        [&lt;ffffffff810fc3da&gt;] rcu_start_gp_advanced.isra.11+0x4a/0x50
&gt; [12573.265891]        [&lt;ffffffff810fdb09&gt;] rcu_start_future_gp+0x1c9/0x1f0
&gt; [12573.273248]        [&lt;ffffffff810fe2c4&gt;] rcu_nocb_kthread+0x114/0x930
&gt; [12573.280564]        [&lt;ffffffff8107a91d&gt;] kthread+0xed/0x100
&gt; [12573.287807]        [&lt;ffffffff816f4b1c&gt;] ret_from_fork+0x7c/0xb0

Notice the above call chain.

rcu_start_future_gp() is called with the rnp-&gt;lock held. Then it calls
rcu_start_gp_advanced(), which does a wakeup.

You can't do wakeups while holding the rnp-&gt;lock, as that would mean
that you could not do an rcu_read_unlock() while holding the rq lock, or
any lock that was taken while holding the rq lock. The reason is shown
by the trace below.

&gt; [12573.295067]
&gt; -&gt; #0 (rcu_node_0){..-.-.}:
&gt; [12573.309293]        [&lt;ffffffff810b8d36&gt;] __lock_acquire+0x1786/0x1af0
&gt; [12573.316568]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.323825]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.331081]        [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.338377]        [&lt;ffffffff810760a6&gt;] __rcu_read_unlock+0x96/0xa0
&gt; [12573.345648]        [&lt;ffffffff811391b3&gt;] perf_lock_task_context+0x143/0x2d0
&gt; [12573.352942]        [&lt;ffffffff8113938e&gt;] find_get_context+0x4e/0x1f0
&gt; [12573.360211]        [&lt;ffffffff811403f4&gt;] SYSC_perf_event_open+0x514/0xbd0
&gt; [12573.367514]        [&lt;ffffffff81140e49&gt;] SyS_perf_event_open+0x9/0x10
&gt; [12573.374816]        [&lt;ffffffff816f4dd4&gt;] tracesys+0xdd/0xe2

Notice the above trace.

perf took its own ctx-&gt;lock, which can be taken while holding the rq
lock. While holding this lock, it did an rcu_read_unlock(). The
perf_lock_task_context() basically looks like:

rcu_read_lock();
raw_spin_lock(ctx-&gt;lock);
rcu_read_unlock();

What appears to have happened is that we scheduled after taking that
first rcu_read_lock() but before taking the spin lock. When we scheduled
back in and took the ctx-&gt;lock, the subsequent rcu_read_unlock()
triggered the "special" code.

rcu_read_unlock_special() takes the rnp-&gt;lock, which gives us the
following possible deadlock scenario:

	CPU0		CPU1		CPU2
	----		----		----

				     rcu_nocb_kthread()
    lock(rq-&gt;lock);
		    lock(ctx-&gt;lock);
				     lock(rnp-&gt;lock);

				     wake_up();

				     lock(rq-&gt;lock);

		    rcu_read_unlock();

		    rcu_read_unlock_special();

		    lock(rnp-&gt;lock);
    lock(ctx-&gt;lock);

**** DEADLOCK ****

&gt; [12573.382068]
&gt; other info that might help us debug this:
&gt;
&gt; [12573.403229] Chain exists of:
&gt;   rcu_node_0 --&gt; &amp;rq-&gt;lock --&gt; &amp;ctx-&gt;lock
&gt;
&gt; [12573.424471]  Possible unsafe locking scenario:
&gt;
&gt; [12573.438499]        CPU0                    CPU1
&gt; [12573.445599]        ----                    ----
&gt; [12573.452691]   lock(&amp;ctx-&gt;lock);
&gt; [12573.459799]                                lock(&amp;rq-&gt;lock);
&gt; [12573.467010]                                lock(&amp;ctx-&gt;lock);
&gt; [12573.474192]   lock(rcu_node_0);
&gt; [12573.481262]
&gt;  *** DEADLOCK ***
&gt;
&gt; [12573.501931] 1 lock held by trinity-child17/31341:
&gt; [12573.508990]  #0:  (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff811390ed&gt;] perf_lock_task_context+0x7d/0x2d0
&gt; [12573.516475]
&gt; stack backtrace:
&gt; [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
&gt; [12573.545357]  ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
&gt; [12573.552868]  ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
&gt; [12573.560353]  0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
&gt; [12573.567856] Call Trace:
&gt; [12573.575011]  [&lt;ffffffff816e375b&gt;] dump_stack+0x19/0x1b
&gt; [12573.582284]  [&lt;ffffffff816dfa5d&gt;] print_circular_bug+0x200/0x20f
&gt; [12573.589637]  [&lt;ffffffff810b8d36&gt;] __lock_acquire+0x1786/0x1af0
&gt; [12573.596982]  [&lt;ffffffff810918f5&gt;] ? sched_clock_cpu+0xb5/0x100
&gt; [12573.604344]  [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.611652]  [&lt;ffffffff811054ff&gt;] ? rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.619030]  [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.626331]  [&lt;ffffffff811054ff&gt;] ? rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.633671]  [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.640992]  [&lt;ffffffff811390ed&gt;] ? perf_lock_task_context+0x7d/0x2d0
&gt; [12573.648330]  [&lt;ffffffff810b429e&gt;] ? put_lock_stats.isra.29+0xe/0x40
&gt; [12573.655662]  [&lt;ffffffff813095a0&gt;] ? delay_tsc+0x90/0xe0
&gt; [12573.662964]  [&lt;ffffffff810760a6&gt;] __rcu_read_unlock+0x96/0xa0
&gt; [12573.670276]  [&lt;ffffffff811391b3&gt;] perf_lock_task_context+0x143/0x2d0
&gt; [12573.677622]  [&lt;ffffffff81139070&gt;] ? __perf_event_enable+0x370/0x370
&gt; [12573.684981]  [&lt;ffffffff8113938e&gt;] find_get_context+0x4e/0x1f0
&gt; [12573.692358]  [&lt;ffffffff811403f4&gt;] SYSC_perf_event_open+0x514/0xbd0
&gt; [12573.699753]  [&lt;ffffffff8108cd9d&gt;] ? get_parent_ip+0xd/0x50
&gt; [12573.707135]  [&lt;ffffffff810b71fd&gt;] ? trace_hardirqs_on_caller+0xfd/0x1c0
&gt; [12573.714599]  [&lt;ffffffff81140e49&gt;] SyS_perf_event_open+0x9/0x10
&gt; [12573.721996]  [&lt;ffffffff816f4dd4&gt;] tracesys+0xdd/0xe2

This commit delays the wakeup via irq_work(), which is what
perf and ftrace use to perform wakeups in critical sections.
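
The deferral pattern, roughly sketched (the handler and field names
here are illustrative and may differ from the actual patch):

	/* Runs later from irq_work context, with no rnp-&gt;lock held. */
	static void rsp_wakeup(struct irq_work *work)
	{
		struct rcu_state *rsp =
			container_of(work, struct rcu_state, wakeup_work);

		wake_up(&amp;rsp-&gt;gp_wq);
	}

	/* In the rnp-&gt;lock critical section, instead of wake_up(): */
	irq_work_queue(&amp;rsp-&gt;wakeup_work);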

Reported-by: Dave Jones &lt;davej@redhat.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>Merge commit '8700c95adb03' into timers/nohz</title>
<updated>2013-05-02T15:54:19+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2013-05-02T15:37:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c032862fba51a3ca504752d3a25186b324c5ce83'/>
<id>c032862fba51a3ca504752d3a25186b324c5ce83</id>
<content type='text'>
The full dynticks tree needs the latest RCU and sched
upstream updates in order to fix some dependencies.

Merge a common upstream merge point that has these
updates.

Conflicts:
	include/linux/perf_event.h
	kernel/rcutree.h
	kernel/rcutree_plugin.h

Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</content>
</entry>
<entry>
<title>nohz: Ensure full dynticks CPUs are RCU nocbs</title>
<updated>2013-04-19T11:54:04+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2013-03-26T22:47:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=d1e43fa5f8bb25f83a86a29f11fcfb57ed4d7566'/>
<id>d1e43fa5f8bb25f83a86a29f11fcfb57ed4d7566</id>
<content type='text'>
We need full dynticks CPUs to also be RCU nocbs so
that we don't have to keep the tick running to handle
RCU callbacks.

Make sure the range passed to the nohz_full= boot
parameter is a subset of the rcu_nocbs= range.

CPUs that fail to meet this requirement will be
excluded from the nohz_full range. This is checked
early at boot time, before any CPU has had the
opportunity to stop its tick.
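
For example, with illustrative CPU ranges (not taken from this
commit):

	nohz_full=1-7 rcu_nocbs=1-7	OK: nohz_full is a subset
	nohz_full=1-8 rcu_nocbs=1-7	CPU 8 is excluded from nohz_full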

Suggested-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Reviewed-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Geoff Levand &lt;geoff@infradead.org&gt;
Cc: Gilad Ben Yossef &lt;gilad@benyossef.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
</entry>
<entry>
<title>rcu: Kick adaptive-ticks CPUs that are holding up RCU grace periods</title>
<updated>2013-04-15T18:18:36+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-04-12T23:19:10+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=65d798f0f9339ae2c4ebe9480e3260b33382a584'/>
<id>65d798f0f9339ae2c4ebe9480e3260b33382a584</id>
<content type='text'>
Adaptive-ticks CPUs inform RCU when they enter kernel mode, but they do
not necessarily turn the scheduler-clock tick back on.  This state of
affairs could result in RCU waiting on an adaptive-ticks CPU running
for an extended period in kernel mode.  Such a CPU will never run the
RCU state machine, and could therefore indefinitely extend the RCU state
machine, sooner or later resulting in an OOM condition.

This patch, inspired by an earlier patch by Frederic Weisbecker, therefore
causes RCU's force-quiescent-state processing to check for this condition
and to send an IPI to CPUs that remain in that state for too long.
"Too long" currently means about three jiffies by default, which is
quite some time for a CPU to remain in the kernel without blocking.
The rcutree.jiffies_till_first_fqs and rcutree.jiffies_till_next_fqs
sysfs variables may be used to tune "too long" if needed.
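
For example, with illustrative values (in jiffies), both can be set
on the boot command line:

	rcutree.jiffies_till_first_fqs=3 rcutree.jiffies_till_next_fqs=3

or adjusted at run time under /sys/module/rcutree/parameters/.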

Reported-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Reviewed-by: Josh Triplett &lt;josh@joshtriplett.org&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Chris Metcalf &lt;cmetcalf@tilera.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Geoff Levand &lt;geoff@infradead.org&gt;
Cc: Gilad Ben Yossef &lt;gilad@benyossef.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
</entry>
<entry>
<title>Merge branches 'doc.2013.03.12a', 'fixes.2013.03.13a' and 'idlenocb.2013.03.26b' into HEAD</title>
<updated>2013-03-26T15:07:38+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2013-03-26T15:07:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=6d87669357936bffa1e8fea7a4e7743e76905736'/>
<id>6d87669357936bffa1e8fea7a4e7743e76905736</id>
<content type='text'>
doc.2013.03.12a: Documentation changes.

fixes.2013.03.13a: Miscellaneous fixes.

idlenocb.2013.03.26b: Remove restrictions on no-CBs CPUs, make
	RCU_FAST_NO_HZ take advantage of numbered callbacks, add
	callback acceleration based on numbered callbacks.
</content>
</entry>
<entry>
<title>rcu: Abstract rcu_start_future_gp() from rcu_nocb_wait_gp()</title>
<updated>2013-03-26T15:04:57+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2012-12-30T23:21:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0446be489795d8bb994125a916ef03211f539e54'/>
<id>0446be489795d8bb994125a916ef03211f539e54</id>
<content type='text'>
CPUs going idle will need to record the need for a future grace
period, but won't actually need to block waiting on it.  This commit
therefore splits rcu_start_future_gp(), which does the recording, from
rcu_nocb_wait_gp(), which now invokes rcu_start_future_gp() to do the
recording, after which rcu_nocb_wait_gp() does the waiting.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Rename n_nocb_gp_requests to need_future_gp</title>
<updated>2013-03-26T15:04:56+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2012-12-30T21:06:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8b425aa8f1acfe48aed919c7aadff2ed290fe969'/>
<id>8b425aa8f1acfe48aed919c7aadff2ed290fe969</id>
<content type='text'>
CPUs going idle need to be able to indicate their need for future grace
periods.  A mechanism for doing this already exists for no-callbacks
CPUs, so the idea is to re-use that mechanism.  This commit therefore
moves the -&gt;n_nocb_gp_requests field of the rcu_node structure out from
under the CONFIG_RCU_NOCB_CPU #ifdef and renames it to -&gt;need_future_gp.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
CPUs going idle need to be able to indicate their need for future grace
periods.  A mechanism for doing this already exists for no-callbacks
CPUs, so the idea is to re-use that mechanism.  This commit therefore
moves the -&gt;n_nocb_gp_requests field of the rcu_node structure out from
under the CONFIG_RCU_NOCB_CPU #ifdef and renames it to -&gt;need_future_gp.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rcu: Make RCU_FAST_NO_HZ take advantage of numbered callbacks</title>
<updated>2013-03-26T15:04:51+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2012-12-28T19:30:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c0f4dfd4f90f1667d234d21f15153ea09a2eaa66'/>
<id>c0f4dfd4f90f1667d234d21f15153ea09a2eaa66</id>
<content type='text'>
Because RCU callbacks are now associated with the number of the grace
period that they must wait for, CPUs can now take advantage of callbacks
corresponding to grace periods that ended while a given CPU was in
dyntick-idle mode.  This eliminates the need to try forcing the RCU
state machine while entering idle, thus reducing the CPU intensiveness
of RCU_FAST_NO_HZ, which should increase its energy efficiency.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Because RCU callbacks are now associated with the number of the grace
period that they must wait for, CPUs can now take advantage of callbacks
corresponding to grace periods that ended while a given CPU was in
dyntick-idle mode.  This eliminates the need to try forcing the RCU
state machine while entering idle, thus reducing the CPU intensiveness
of RCU_FAST_NO_HZ, which should increase its energy efficiency.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rcu: Distinguish "rcuo" kthreads by RCU flavor</title>
<updated>2013-03-26T15:04:48+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2012-12-03T16:16:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=a488985851cf2facd2227bd982cc2c251df56268'/>
<id>a488985851cf2facd2227bd982cc2c251df56268</id>
<content type='text'>
Currently, the per-no-CBs-CPU kthreads are named "rcuo" followed by
the CPU number, for example, "rcuo5".  This is problematic given that
there are either two or three RCU flavors, each of which gets a per-CPU
kthread with exactly the same name.  This commit therefore introduces
a one-letter abbreviation for each RCU flavor, namely 'b' for RCU-bh,
'p' for RCU-preempt, and 's' for RCU-sched.  This abbreviation is used
to distinguish the "rcuo" kthreads, for example, for CPU 0 we would have
"rcuob/0", "rcuop/0", and "rcuos/0".

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Dietmar Eggemann &lt;dietmar.eggemann@arm.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, the per-no-CBs-CPU kthreads are named "rcuo" followed by
the CPU number, for example, "rcuo5".  This is problematic given that
there are either two or three RCU flavors, each of which gets a per-CPU
kthread with exactly the same name.  This commit therefore introduces
a one-letter abbreviation for each RCU flavor, namely 'b' for RCU-bh,
'p' for RCU-preempt, and 's' for RCU-sched.  This abbreviation is used
to distinguish the "rcuo" kthreads, for example, for CPU 0 we would have
"rcuob/0", "rcuop/0", and "rcuos/0".

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Dietmar Eggemann &lt;dietmar.eggemann@arm.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>rcu: Introduce proper blocking to no-CBs kthreads GP waits</title>
<updated>2013-03-26T15:04:44+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paul.mckenney@linaro.org</email>
</author>
<published>2013-02-11T04:48:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4'/>
<id>dae6e64d2bcfd4b06304ab864c7e3a4f6b5fedf4</id>
<content type='text'>
Currently, the no-CBs kthreads do repeated timed waits for grace periods
to elapse.  This is crude and energy inefficient, so this commit allows
no-CBs kthreads to specify exactly which grace period they are waiting
for and also allows them to block for the entire duration until the
desired grace period completes.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Currently, the no-CBs kthreads do repeated timed waits for grace periods
to elapse.  This is crude and energy inefficient, so this commit allows
no-CBs kthreads to specify exactly which grace period they are waiting
for and also allows them to block for the entire duration until the
desired grace period completes.

Signed-off-by: Paul E. McKenney &lt;paul.mckenney@linaro.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
