<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/init/Kconfig, branch v3.10.78</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>init/Kconfig: Fix HAVE_FUTEX_CMPXCHG to not break up the EXPERT menu</title>
<updated>2014-10-09T19:18:42+00:00</updated>
<author>
<name>Josh Triplett</name>
<email>josh@joshtriplett.org</email>
</author>
<published>2014-10-03T23:19:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0000372a96216b393bacfa50fb0253c40f8cf3d1'/>
<id>0000372a96216b393bacfa50fb0253c40f8cf3d1</id>
<content type='text'>
commit 62b4d2041117f35ab2409c9f5c4b8d3dc8e59d0f upstream.

commit 03b8c7b623c80af264c4c8d6111e5c6289933666 ("futex: Allow
architectures to skip futex_atomic_cmpxchg_inatomic() test") added the
HAVE_FUTEX_CMPXCHG symbol right below FUTEX.  This placed it right in
the middle of the options for the EXPERT menu.  However,
HAVE_FUTEX_CMPXCHG does not depend on EXPERT or FUTEX, so Kconfig stops
placing items in the EXPERT menu, and displays the remaining several
EXPERT items (starting with EPOLL) directly in the General Setup menu.

Since both users of HAVE_FUTEX_CMPXCHG only select it "if FUTEX", make
HAVE_FUTEX_CMPXCHG itself depend on FUTEX.  With this change, the
subsequent items display as part of the EXPERT menu again; the EMBEDDED
menu now appears as the next top-level item in the General Setup menu,
which makes General Setup much shorter and more usable.
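
With the dependency in place, the arrangement looks roughly like this (a
simplified sketch of the init/Kconfig fragment, not the exact upstream text):

    config FUTEX
            bool "Enable futex support" if EXPERT
            default y

    config HAVE_FUTEX_CMPXCHG
            bool
            depends on FUTEX
            help
              Architectures should select this if they implement and
              always provide a working futex_atomic_cmpxchg_inatomic().

    config EPOLL
            bool "Enable eventpoll support" if EXPERT
            default y

Because HAVE_FUTEX_CMPXCHG now depends on the preceding prompt's symbol,
kconfig keeps it (and the EXPERT items after it) inside the implicit
submenu instead of ending the menu early.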

Signed-off-by: Josh Triplett &lt;josh@joshtriplett.org&gt;
Acked-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>futex: Allow architectures to skip futex_atomic_cmpxchg_inatomic() test</title>
<updated>2014-04-14T13:42:19+00:00</updated>
<author>
<name>Heiko Carstens</name>
<email>heiko.carstens@de.ibm.com</email>
</author>
<published>2014-03-02T12:09:47+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f26c70a452dc0507bf7d3d2c3158ee7808e14f1c'/>
<id>f26c70a452dc0507bf7d3d2c3158ee7808e14f1c</id>
<content type='text'>
commit 03b8c7b623c80af264c4c8d6111e5c6289933666 upstream.

If an architecture implements futex_atomic_cmpxchg_inatomic() and no
runtime check is necessary, allow the test within futex_init() to be
skipped.

This gets rid of some code that would always give the same result, and
also lets the compiler optimize a couple of if statements away.
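
The pattern looks roughly like this (a hedged sketch: the config symbol
and probe follow the commit, but the helper name and the selecting
architecture are illustrative):

    config HAVE_FUTEX_CMPXCHG
            bool

    # an architecture with a known-good implementation opts in:
    config M68K
            ...
            select HAVE_FUTEX_CMPXCHG if FUTEX

and in kernel/futex.c, the boot-time probe compiles away entirely:

    static void __init futex_detect_cmpxchg(void)
    {
    #ifndef CONFIG_HAVE_FUTEX_CMPXCHG
            u32 curval;

            /* Runtime probe: does the cmpxchg fault cleanly? */
            if (cmpxchg_futex_value_locked(&amp;curval, NULL, 0, 0) == -EFAULT)
                    futex_cmpxchg_enabled = 1;
    #endif
    }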

Signed-off-by: Heiko Carstens &lt;heiko.carstens@de.ibm.com&gt;
Cc: Finn Thain &lt;fthain@telegraphics.com.au&gt;
Cc: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Link: http://lkml.kernel.org/r/20140302120947.GA3641@osiris
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
[geert: Backported to v3.10..v3.13]
Signed-off-by: Geert Uytterhoeven &lt;geert@linux-m68k.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>rcu: Don't call wakeup() with rcu_node structure -&gt;lock held</title>
<updated>2013-06-10T20:37:11+00:00</updated>
<author>
<name>Steven Rostedt</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2013-05-28T21:32:53+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=016a8d5be6ddcc72ef0432d82d9f6fa34f61b907'/>
<id>016a8d5be6ddcc72ef0432d82d9f6fa34f61b907</id>
<content type='text'>
This commit fixes a lockdep-detected deadlock by moving a wake_up()
call out from a rnp-&gt;lock critical section.  Please see below for
the long version of this story.

On Tue, 2013-05-28 at 16:13 -0400, Dave Jones wrote:

&gt; [12572.705832] ======================================================
&gt; [12572.750317] [ INFO: possible circular locking dependency detected ]
&gt; [12572.796978] 3.10.0-rc3+ #39 Not tainted
&gt; [12572.833381] -------------------------------------------------------
&gt; [12572.862233] trinity-child17/31341 is trying to acquire lock:
&gt; [12572.870390]  (rcu_node_0){..-.-.}, at: [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12572.878859]
&gt; but task is already holding lock:
&gt; [12572.894894]  (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff811390ed&gt;] perf_lock_task_context+0x7d/0x2d0
&gt; [12572.903381]
&gt; which lock already depends on the new lock.
&gt;
&gt; [12572.927541]
&gt; the existing dependency chain (in reverse order) is:
&gt; [12572.943736]
&gt; -&gt; #4 (&amp;ctx-&gt;lock){-.-...}:
&gt; [12572.960032]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12572.968337]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12572.976633]        [&lt;ffffffff8113c987&gt;] __perf_event_task_sched_out+0x2e7/0x5e0
&gt; [12572.984969]        [&lt;ffffffff81088953&gt;] perf_event_task_sched_out+0x93/0xa0
&gt; [12572.993326]        [&lt;ffffffff816ea0bf&gt;] __schedule+0x2cf/0x9c0
&gt; [12573.001652]        [&lt;ffffffff816eacfe&gt;] schedule_user+0x2e/0x70
&gt; [12573.009998]        [&lt;ffffffff816ecd64&gt;] retint_careful+0x12/0x2e
&gt; [12573.018321]
&gt; -&gt; #3 (&amp;rq-&gt;lock){-.-.-.}:
&gt; [12573.034628]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.042930]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.051248]        [&lt;ffffffff8108e6a7&gt;] wake_up_new_task+0xb7/0x260
&gt; [12573.059579]        [&lt;ffffffff810492f5&gt;] do_fork+0x105/0x470
&gt; [12573.067880]        [&lt;ffffffff81049686&gt;] kernel_thread+0x26/0x30
&gt; [12573.076202]        [&lt;ffffffff816cee63&gt;] rest_init+0x23/0x140
&gt; [12573.084508]        [&lt;ffffffff81ed8e1f&gt;] start_kernel+0x3f1/0x3fe
&gt; [12573.092852]        [&lt;ffffffff81ed856f&gt;] x86_64_start_reservations+0x2a/0x2c
&gt; [12573.101233]        [&lt;ffffffff81ed863d&gt;] x86_64_start_kernel+0xcc/0xcf
&gt; [12573.109528]
&gt; -&gt; #2 (&amp;p-&gt;pi_lock){-.-.-.}:
&gt; [12573.125675]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.133829]        [&lt;ffffffff816ebe9b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt; [12573.141964]        [&lt;ffffffff8108e881&gt;] try_to_wake_up+0x31/0x320
&gt; [12573.150065]        [&lt;ffffffff8108ebe2&gt;] default_wake_function+0x12/0x20
&gt; [12573.158151]        [&lt;ffffffff8107bbf8&gt;] autoremove_wake_function+0x18/0x40
&gt; [12573.166195]        [&lt;ffffffff81085398&gt;] __wake_up_common+0x58/0x90
&gt; [12573.174215]        [&lt;ffffffff81086909&gt;] __wake_up+0x39/0x50
&gt; [12573.182146]        [&lt;ffffffff810fc3da&gt;] rcu_start_gp_advanced.isra.11+0x4a/0x50
&gt; [12573.190119]        [&lt;ffffffff810fdb09&gt;] rcu_start_future_gp+0x1c9/0x1f0
&gt; [12573.198023]        [&lt;ffffffff810fe2c4&gt;] rcu_nocb_kthread+0x114/0x930
&gt; [12573.205860]        [&lt;ffffffff8107a91d&gt;] kthread+0xed/0x100
&gt; [12573.213656]        [&lt;ffffffff816f4b1c&gt;] ret_from_fork+0x7c/0xb0
&gt; [12573.221379]
&gt; -&gt; #1 (&amp;rsp-&gt;gp_wq){..-.-.}:
&gt; [12573.236329]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.243783]        [&lt;ffffffff816ebe9b&gt;] _raw_spin_lock_irqsave+0x4b/0x90
&gt; [12573.251178]        [&lt;ffffffff810868f3&gt;] __wake_up+0x23/0x50
&gt; [12573.258505]        [&lt;ffffffff810fc3da&gt;] rcu_start_gp_advanced.isra.11+0x4a/0x50
&gt; [12573.265891]        [&lt;ffffffff810fdb09&gt;] rcu_start_future_gp+0x1c9/0x1f0
&gt; [12573.273248]        [&lt;ffffffff810fe2c4&gt;] rcu_nocb_kthread+0x114/0x930
&gt; [12573.280564]        [&lt;ffffffff8107a91d&gt;] kthread+0xed/0x100
&gt; [12573.287807]        [&lt;ffffffff816f4b1c&gt;] ret_from_fork+0x7c/0xb0

Notice the above call chain.

rcu_start_future_gp() is called with the rnp-&gt;lock held. Then it calls
rcu_start_gp_advanced(), which does a wakeup.

You can't do wakeups while holding the rnp-&gt;lock, as that would mean
that you could not do a rcu_read_unlock() while holding the rq lock, or
any lock that was taken while holding the rq lock. This is because...
(See below).

&gt; [12573.295067]
&gt; -&gt; #0 (rcu_node_0){..-.-.}:
&gt; [12573.309293]        [&lt;ffffffff810b8d36&gt;] __lock_acquire+0x1786/0x1af0
&gt; [12573.316568]        [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.323825]        [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.331081]        [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.338377]        [&lt;ffffffff810760a6&gt;] __rcu_read_unlock+0x96/0xa0
&gt; [12573.345648]        [&lt;ffffffff811391b3&gt;] perf_lock_task_context+0x143/0x2d0
&gt; [12573.352942]        [&lt;ffffffff8113938e&gt;] find_get_context+0x4e/0x1f0
&gt; [12573.360211]        [&lt;ffffffff811403f4&gt;] SYSC_perf_event_open+0x514/0xbd0
&gt; [12573.367514]        [&lt;ffffffff81140e49&gt;] SyS_perf_event_open+0x9/0x10
&gt; [12573.374816]        [&lt;ffffffff816f4dd4&gt;] tracesys+0xdd/0xe2

Notice the above trace.

perf took its own ctx-&gt;lock, which can be taken while holding the rq
lock. While holding this lock, it did a rcu_read_unlock(). The
perf_lock_task_context() basically looks like:

rcu_read_lock();
raw_spin_lock(ctx-&gt;lock);
rcu_read_unlock();

Now, what looks to have happened is that we scheduled out after taking
that first rcu_read_lock() but before taking the spin lock. When we
scheduled back in and took the ctx-&gt;lock, the following
rcu_read_unlock() triggered the "special" code.

The rcu_read_unlock_special() takes the rnp-&gt;lock, which gives us a
possible deadlock scenario.

	CPU0		CPU1		CPU2
	----		----		----

				     rcu_nocb_kthread()
    lock(rq-&gt;lock);
		    lock(ctx-&gt;lock);
				     lock(rnp-&gt;lock);

				     wake_up();

				     lock(rq-&gt;lock);

		    rcu_read_unlock();

		    rcu_read_unlock_special();

		    lock(rnp-&gt;lock);
    lock(ctx-&gt;lock);

**** DEADLOCK ****

&gt; [12573.382068]
&gt; other info that might help us debug this:
&gt;
&gt; [12573.403229] Chain exists of:
&gt;   rcu_node_0 --&gt; &amp;rq-&gt;lock --&gt; &amp;ctx-&gt;lock
&gt;
&gt; [12573.424471]  Possible unsafe locking scenario:
&gt;
&gt; [12573.438499]        CPU0                    CPU1
&gt; [12573.445599]        ----                    ----
&gt; [12573.452691]   lock(&amp;ctx-&gt;lock);
&gt; [12573.459799]                                lock(&amp;rq-&gt;lock);
&gt; [12573.467010]                                lock(&amp;ctx-&gt;lock);
&gt; [12573.474192]   lock(rcu_node_0);
&gt; [12573.481262]
&gt;  *** DEADLOCK ***
&gt;
&gt; [12573.501931] 1 lock held by trinity-child17/31341:
&gt; [12573.508990]  #0:  (&amp;ctx-&gt;lock){-.-...}, at: [&lt;ffffffff811390ed&gt;] perf_lock_task_context+0x7d/0x2d0
&gt; [12573.516475]
&gt; stack backtrace:
&gt; [12573.530395] CPU: 1 PID: 31341 Comm: trinity-child17 Not tainted 3.10.0-rc3+ #39
&gt; [12573.545357]  ffffffff825b4f90 ffff880219f1dbc0 ffffffff816e375b ffff880219f1dc00
&gt; [12573.552868]  ffffffff816dfa5d ffff880219f1dc50 ffff88023ce4d1f8 ffff88023ce4ca40
&gt; [12573.560353]  0000000000000001 0000000000000001 ffff88023ce4d1f8 ffff880219f1dcc0
&gt; [12573.567856] Call Trace:
&gt; [12573.575011]  [&lt;ffffffff816e375b&gt;] dump_stack+0x19/0x1b
&gt; [12573.582284]  [&lt;ffffffff816dfa5d&gt;] print_circular_bug+0x200/0x20f
&gt; [12573.589637]  [&lt;ffffffff810b8d36&gt;] __lock_acquire+0x1786/0x1af0
&gt; [12573.596982]  [&lt;ffffffff810918f5&gt;] ? sched_clock_cpu+0xb5/0x100
&gt; [12573.604344]  [&lt;ffffffff810b9851&gt;] lock_acquire+0x91/0x1f0
&gt; [12573.611652]  [&lt;ffffffff811054ff&gt;] ? rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.619030]  [&lt;ffffffff816ebc90&gt;] _raw_spin_lock+0x40/0x80
&gt; [12573.626331]  [&lt;ffffffff811054ff&gt;] ? rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.633671]  [&lt;ffffffff811054ff&gt;] rcu_read_unlock_special+0x9f/0x4c0
&gt; [12573.640992]  [&lt;ffffffff811390ed&gt;] ? perf_lock_task_context+0x7d/0x2d0
&gt; [12573.648330]  [&lt;ffffffff810b429e&gt;] ? put_lock_stats.isra.29+0xe/0x40
&gt; [12573.655662]  [&lt;ffffffff813095a0&gt;] ? delay_tsc+0x90/0xe0
&gt; [12573.662964]  [&lt;ffffffff810760a6&gt;] __rcu_read_unlock+0x96/0xa0
&gt; [12573.670276]  [&lt;ffffffff811391b3&gt;] perf_lock_task_context+0x143/0x2d0
&gt; [12573.677622]  [&lt;ffffffff81139070&gt;] ? __perf_event_enable+0x370/0x370
&gt; [12573.684981]  [&lt;ffffffff8113938e&gt;] find_get_context+0x4e/0x1f0
&gt; [12573.692358]  [&lt;ffffffff811403f4&gt;] SYSC_perf_event_open+0x514/0xbd0
&gt; [12573.699753]  [&lt;ffffffff8108cd9d&gt;] ? get_parent_ip+0xd/0x50
&gt; [12573.707135]  [&lt;ffffffff810b71fd&gt;] ? trace_hardirqs_on_caller+0xfd/0x1c0
&gt; [12573.714599]  [&lt;ffffffff81140e49&gt;] SyS_perf_event_open+0x9/0x10
&gt; [12573.721996]  [&lt;ffffffff816f4dd4&gt;] tracesys+0xdd/0xe2

This commit delays the wakeup via irq_work(), which is what
perf and ftrace use to perform wakeups in critical sections.
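
The shape of the fix, simplified from the upstream patch to
kernel/rcutree.c (identifiers follow the commit; details trimmed):

    static void rsp_wakeup(struct irq_work *work)
    {
            struct rcu_state *rsp =
                    container_of(work, struct rcu_state, wakeup_work);

            /* Runs later, from hard-irq context, with no rnp-&gt;lock held. */
            wake_up(&amp;rsp-&gt;gp_wq);
    }

and in rcu_start_gp_advanced(), which still runs under rnp-&gt;lock:

    /* was: wake_up(&amp;rsp-&gt;gp_wq); -- unsafe with rnp-&gt;lock held */
    irq_work_queue(&amp;rsp-&gt;wakeup_work);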

Reported-by: Dave Jones &lt;davej@redhat.com&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>Merge branch 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2013-05-05T20:23:27+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-05-05T20:23:27+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=534c97b0950b1967bca1c753aeaed32f5db40264'/>
<id>534c97b0950b1967bca1c753aeaed32f5db40264</id>
<content type='text'>
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full dynticks',
  or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature got motivated by real-time folks and the -rt tree, but
  the general utility and motivation of full-dynticks runs wider than
  that:

   - HPC workloads get faster: CPUs running a single task should be able
     to utilize a maximum amount of CPU power.  A periodic timer tick at
     HZ=1000 can cause a constant overhead of up to 1.0%.  This feature
     removes that overhead - and speeds up the system by 0.5%-1.0% on
     typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical tasks
     should experience as little jitter as possible.  The last remaining
     source of kernel-related jitter was the periodic timer tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this feature
     helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the existing
  two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly named
     as a config option.  This is the traditional Linux periodic tick
     design: there's a HZ tick going on all the time, regardless of
     whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on a
     CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without the
  user having to configure anything.  CONFIG_NO_HZ_FULL is turned off by
  default.

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: related RCU support
  and non-periodic cputime support in particular is upstream already.

  This tree adds the final pieces and activates the feature.  The pull
  request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch is
     small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so I
     marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used.  That it's default-off
     should reduce its regression abilities and obviously there are no
     known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics.  This should not impact
     correctness though, there are no known regressions related to this
     feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd like your input on its
     design/implementation, in case you dislike some aspect we missed.
     Without flaming us to a crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1 Hz to 0 Hz, to essentially shut off
     the periodic tick altogether when there's a single busy task on a
     CPU.  We'd first like 1 Hz to be exposed more widely before we go
     for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption from
     nr_running&gt;=2 as well, by essentially interrupting busy tasks only
     as frequently as the sched_latency constraints require us to do -
     once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10, like the -rt tree this effort has been going on for too long -
  but the final word is up to you as usual.

  More technical details can be found in Documentation/timers/NO_HZ.txt"
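
For illustration, enabling the new mode is a build-time plus boot-time
choice (the CPU list below is arbitrary):

    CONFIG_NO_HZ_FULL=y

and on the kernel command line, the set of CPUs to run tick-free:

    nohz_full=1-3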

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Pull 'full dynticks' support from Ingo Molnar:
 "This tree from Frederic Weisbecker adds a new, (exciting! :-) core
  kernel feature to the timer and scheduler subsystems: 'full dynticks',
  or CONFIG_NO_HZ_FULL=y.

  This feature extends the nohz variable-size timer tick feature from
  idle to busy CPUs (running at most one task) as well, potentially
  reducing the number of timer interrupts significantly.

  This feature was motivated by the real-time folks and the -rt tree,
  but the general utility of full dynticks extends well beyond that:

   - HPC workloads get faster: CPUs running a single task should be able
     to utilize a maximum amount of CPU power.  A periodic timer tick at
     HZ=1000 can cause a constant overhead of up to 1.0%.  This feature
     removes that overhead - and speeds up the system by 0.5%-1.0% on
     typical distro configs even on modern systems.

   - Real-time workload latency reduction: CPUs running critical tasks
     should experience as little jitter as possible.  The last remaining
     source of kernel-related jitter was the periodic timer tick.

   - A single task executing on a CPU is a pretty common situation,
     especially with an increasing number of cores/CPUs, so this feature
     helps desktop and mobile workloads as well.

  The cost of the feature is mainly related to increased timer
  reprogramming overhead when a CPU switches its tick period, and thus
  slightly longer to-idle and from-idle latency.

  Configuration-wise a third mode of operation is added to the existing
  two NOHZ kconfig modes:

   - CONFIG_HZ_PERIODIC: [formerly !CONFIG_NO_HZ], now explicitly named
     as a config option.  This is the traditional Linux periodic tick
     design: there's a HZ tick going on all the time, regardless of
     whether a CPU is idle or not.

   - CONFIG_NO_HZ_IDLE: [formerly CONFIG_NO_HZ=y], this turns off the
     periodic tick when a CPU enters idle mode.

   - CONFIG_NO_HZ_FULL: this new mode, in addition to turning off the
     tick when a CPU is idle, also slows the tick down to 1 Hz (one
     timer interrupt per second) when only a single task is running on a
     CPU.

  The .config behavior is compatible: existing !CONFIG_NO_HZ and
  CONFIG_NO_HZ=y settings get translated to the new values, without the
  user having to configure anything.  CONFIG_NO_HZ_FULL is turned off by
  default.
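
  As a rough sketch (illustrative .config fragments; the exact
  surrounding symbols vary by architecture and tree), the three modes
  look like:

	# Traditional periodic tick:
	CONFIG_HZ_PERIODIC=y

	# Tick stops only when a CPU is idle (old CONFIG_NO_HZ=y):
	CONFIG_NO_HZ_IDLE=y

	# Tick also stops, or drops to 1 Hz, on busy single-task CPUs:
	CONFIG_NO_HZ_FULL=y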

  This feature is based on a lot of infrastructure work that has been
  steadily going upstream in the last 2-3 cycles: the related RCU
  support and non-periodic cputime support in particular are upstream
  already.

  This tree adds the final pieces and activates the feature.  The pull
  request is marked RFC because:

   - it's marked 64-bit only at the moment - the 32-bit support patch is
     small but did not get ready in time.

   - it has a number of fresh commits that came in after the merge
     window.  The overwhelming majority of commits are from before the
     merge window, but still some aspects of the tree are fresh and so I
     marked it RFC.

   - it's a pretty wide-reaching feature with lots of effects - and
     while the components have been in testing for some time, the full
     combination is still not very widely used.  That it's off by
     default should limit its potential for regressions, and there are
     no known regressions with CONFIG_NO_HZ_FULL=y enabled either.

   - the feature is not completely idempotent: there is no 100%
     equivalent replacement for a periodic scheduler/timer tick.  In
     particular there's ongoing work to map out and reduce its effects
     on scheduler load-balancing and statistics.  This should not impact
     correctness though, there are no known regressions related to this
     feature at this point.

   - it's a pretty ambitious feature that with time will likely be
     enabled by most Linux distros, and we'd welcome your input on its
     design/implementation if you dislike some aspect we missed.
     Without flaming us to a crisp! :-)

  Future plans:

   - there's ongoing work to reduce 1Hz to 0Hz, to essentially shut off
     the periodic tick altogether when there's a single busy task on a
     CPU.  We'd first like 1 Hz to be exposed more widely before we go
     for the 0 Hz target though.

   - once we reach 0 Hz we can remove the periodic tick assumption from
     nr_running&gt;=2 as well, by essentially interrupting busy tasks only
     as frequently as the sched_latency constraints require us to do -
     once every 4-40 msecs, depending on nr_running.

  I am personally leaning towards biting the bullet and doing this in
  v3.10; like the -rt tree, this effort has been going on for too long -
  but the final word is up to you as usual.

  More technical details can be found in Documentation/timers/NO_HZ.txt"

* 'timers-nohz-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
  sched: Keep at least 1 tick per second for active dynticks tasks
  rcu: Fix full dynticks' dependency on wide RCU nocb mode
  nohz: Protect smp_processor_id() in tick_nohz_task_switch()
  nohz_full: Add documentation.
  cputime_nsecs: use math64.h for nsec resolution conversion helpers
  nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config
  nohz: Reduce overhead under high-freq idling patterns
  nohz: Remove full dynticks' superfluous dependency on RCU tree
  nohz: Fix unavailable tick_stop tracepoint in dynticks idle
  nohz: Add basic tracing
  nohz: Select wide RCU nocb for full dynticks
  nohz: Disable the tick when irq resume in full dynticks CPU
  nohz: Re-evaluate the tick for the new task after a context switch
  nohz: Prepare to stop the tick on irq exit
  nohz: Implement full dynticks kick
  nohz: Re-evaluate the tick from the scheduler IPI
  sched: New helper to prevent from stopping the tick in full dynticks
  sched: Kick full dynticks CPU that have more than one task enqueued.
  perf: New helper to prevent full dynticks CPUs from stopping tick
  perf: Kick full dynticks CPU if events rotation is needed
  ...
</pre>
</div>
</content>
</entry>
<entry>
<title>rcu: Fix full dynticks' dependency on wide RCU nocb mode</title>
<updated>2013-05-04T06:30:34+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2013-05-02T23:28:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=73c30828771acafb0a5e3a1c4cf75e6c5dc5f98a'/>
<id>73c30828771acafb0a5e3a1c4cf75e6c5dc5f98a</id>
<content type='text'>
Commit 0637e029392386e6996f5d6574aadccee8315efa
("nohz: Select wide RCU nocb for full dynticks") intended
to force CONFIG_RCU_NOCB_CPU_ALL=y when full dynticks is
enabled.

However this option is part of a choice menu and Kconfig's
"select" instruction has no effect on such targets.

Fix this by using reverse dependencies on the targets we
don't want instead.
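
As a sketch of the approach (abridged, not the exact upstream hunk),
the choice entries we don't want under full dynticks gain a reverse
dependency, which leaves RCU_NOCB_CPU_ALL as the only selectable entry:

	choice
		prompt "Build-forced no-CBs CPUs"
		depends on RCU_NOCB_CPU

	config RCU_NOCB_CPU_NONE
		bool "No build_forced no-CBs CPUs"
		depends on !NO_HZ_FULL

	config RCU_NOCB_CPU_ZERO
		bool "CPU 0 is a build_forced no-CBs CPU"
		depends on !NO_HZ_FULL

	config RCU_NOCB_CPU_ALL
		bool "All CPUs are build_forced no-CBs CPUs"

	endchoice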

Reviewed-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Commit 0637e029392386e6996f5d6574aadccee8315efa
("nohz: Select wide RCU nocb for full dynticks") intended
to force CONFIG_RCU_NOCB_CPU_ALL=y when full dynticks is
enabled.

However this option is part of a choice menu and Kconfig's
"select" instruction has no effect on such targets.

Fix this by using reverse dependencies on the targets we
don't want instead.
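
As a sketch of the approach (abridged, not the exact upstream hunk),
the choice entries we don't want under full dynticks gain a reverse
dependency, which leaves RCU_NOCB_CPU_ALL as the only selectable entry:

	choice
		prompt "Build-forced no-CBs CPUs"
		depends on RCU_NOCB_CPU

	config RCU_NOCB_CPU_NONE
		bool "No build_forced no-CBs CPUs"
		depends on !NO_HZ_FULL

	config RCU_NOCB_CPU_ZERO
		bool "CPU 0 is a build_forced no-CBs CPU"
		depends on !NO_HZ_FULL

	config RCU_NOCB_CPU_ALL
		bool "All CPUs are build_forced no-CBs CPUs"

	endchoice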

Reviewed-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Merge commit '8700c95adb03' into timers/nohz</title>
<updated>2013-05-02T15:54:19+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2013-05-02T15:37:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c032862fba51a3ca504752d3a25186b324c5ce83'/>
<id>c032862fba51a3ca504752d3a25186b324c5ce83</id>
<content type='text'>
The full dynticks tree needs the latest RCU and sched
upstream updates in order to fix some dependencies.

Merge a common upstream merge point that has these
updates.

Conflicts:
	include/linux/perf_event.h
	kernel/rcutree.h
	kernel/rcutree_plugin.h

Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The full dynticks tree needs the latest RCU and sched
upstream updates in order to fix some dependencies.

Merge a common upstream merge point that has these
updates.

Conflicts:
	include/linux/perf_event.h
	kernel/rcutree.h
	kernel/rcutree_plugin.h

Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>init/Kconfig: re-order CONFIG_EXPERT options to fix menuconfig display</title>
<updated>2013-05-01T00:04:09+00:00</updated>
<author>
<name>Mike Frysinger</name>
<email>vapier@gentoo.org</email>
</author>
<published>2013-04-30T22:28:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=657a52095fa3e8560d41047851f4e73a410f3ed2'/>
<id>657a52095fa3e8560d41047851f4e73a410f3ed2</id>
<content type='text'>
The kconfig language requires that dependent options all follow the
menuconfig symbol in order to be collapsed below it.  Recently some hidden
options were added below the EXPERT menuconfig, but did not depend on
EXPERT (because hidden options can't).  This broke the display.  So
re-order all these options, and while we're here, stick the PCI quirks
option under the EXPERT menu (since it isn't sitting with any related
options).
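
In sketch form (symbols illustrative), each collapsed option must
immediately follow the menuconfig symbol and depend on it; a symbol
without that dependency ends the collapsed block:

	menuconfig EXPERT
		bool "Configure standard kernel features (expert users)"

	config SYSCTL_SYSCALL
		bool "Sysctl syscall support" if EXPERT
		default y

	config SOME_HIDDEN_SYMBOL
		bool
		# no EXPERT dependency: from here on, options are no
		# longer collapsed under the EXPERT menu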

Before this commit, we get:
	[*] Configure standard kernel features (expert users)  ---&gt;
	[ ] Sysctl syscall support
	[*] Load all symbols for debugging/ksymoops
	...
	[ ] Embedded system

Now we get the older (and correct) behavior:
	[*] Configure standard kernel features (expert users)  ---&gt;
	[ ] Embedded system
And if you go into the expert menu you get the expert options:
	[ ] Sysctl syscall support
	[*] Load all symbols for debugging/ksymoops
	...

Signed-off-by: Mike Frysinger &lt;vapier@gentoo.org&gt;
Acked-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Cc: zhangwei(Jovi) &lt;jovi.zhangwei@huawei.com&gt;
Cc: Michal Marek &lt;mmarek@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
The kconfig language requires that dependent options all follow the
menuconfig symbol in order to be collapsed below it.  Recently some hidden
options were added below the EXPERT menuconfig, but did not depend on
EXPERT (because hidden options can't).  This broke the display.  So
re-order all these options, and while we're here, stick the PCI quirks
option under the EXPERT menu (since it isn't sitting with any related
options).
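
In sketch form (symbols illustrative), each collapsed option must
immediately follow the menuconfig symbol and depend on it; a symbol
without that dependency ends the collapsed block:

	menuconfig EXPERT
		bool "Configure standard kernel features (expert users)"

	config SYSCTL_SYSCALL
		bool "Sysctl syscall support" if EXPERT
		default y

	config SOME_HIDDEN_SYMBOL
		bool
		# no EXPERT dependency: from here on, options are no
		# longer collapsed under the EXPERT menu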

Before this commit, we get:
	[*] Configure standard kernel features (expert users)  ---&gt;
	[ ] Sysctl syscall support
	[*] Load all symbols for debugging/ksymoops
	...
	[ ] Embedded system

Now we get the older (and correct) behavior:
	[*] Configure standard kernel features (expert users)  ---&gt;
	[ ] Embedded system
And if you go into the expert menu you get the expert options:
	[ ] Sysctl syscall support
	[*] Load all symbols for debugging/ksymoops
	...

Signed-off-by: Mike Frysinger &lt;vapier@gentoo.org&gt;
Acked-by: Randy Dunlap &lt;rdunlap@infradead.org&gt;
Cc: zhangwei(Jovi) &lt;jovi.zhangwei@huawei.com&gt;
Cc: Michal Marek &lt;mmarek@suse.cz&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Merge branch 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip</title>
<updated>2013-04-30T14:43:28+00:00</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2013-04-30T14:43:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=16fa94b532b1958f508e07eca1a9256351241fbc'/>
<id>16fa94b532b1958f508e07eca1a9256351241fbc</id>
<content type='text'>
Pull scheduler changes from Ingo Molnar:
 "The main changes in this development cycle were:

   - full dynticks preparatory work by Frederic Weisbecker

   - factor out the cpu time accounting code better, by Li Zefan

   - multi-CPU load balancer cleanups and improvements by Joonsoo Kim

   - various smaller fixes and cleanups"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
  sched: Fix init NOHZ_IDLE flag
  sched: Prevent to re-select dst-cpu in load_balance()
  sched: Rename load_balance_tmpmask to load_balance_mask
  sched: Move up affinity check to mitigate useless redoing overhead
  sched: Don't consider other cpus in our group in case of NEWLY_IDLE
  sched: Explicitly cpu_idle_type checking in rebalance_domains()
  sched: Change position of resched_cpu() in load_balance()
  sched: Fix wrong rq's runnable_avg update with rt tasks
  sched: Document task_struct::personality field
  sched/cpuacct/UML: Fix header file dependency bug on the UML build
  cgroup: Kill subsys.active flag
  sched/cpuacct: No need to check subsys active state
  sched/cpuacct: Initialize cpuacct subsystem earlier
  sched/cpuacct: Initialize root cpuacct earlier
  sched/cpuacct: Allocate per_cpu cpuusage for root cpuacct statically
  sched/cpuacct: Clean up cpuacct.h
  sched/cpuacct: Remove redundant NULL checks in cpuacct_acount_field()
  sched/cpuacct: Remove redundant NULL checks in cpuacct_charge()
  sched/cpuacct: Add cpuacct_acount_field()
  sched/cpuacct: Add cpuacct_init()
  ...
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Pull scheduler changes from Ingo Molnar:
 "The main changes in this development cycle were:

   - full dynticks preparatory work by Frederic Weisbecker

   - factor out the cpu time accounting code better, by Li Zefan

   - multi-CPU load balancer cleanups and improvements by Joonsoo Kim

   - various smaller fixes and cleanups"

* 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
  sched: Fix init NOHZ_IDLE flag
  sched: Prevent to re-select dst-cpu in load_balance()
  sched: Rename load_balance_tmpmask to load_balance_mask
  sched: Move up affinity check to mitigate useless redoing overhead
  sched: Don't consider other cpus in our group in case of NEWLY_IDLE
  sched: Explicitly cpu_idle_type checking in rebalance_domains()
  sched: Change position of resched_cpu() in load_balance()
  sched: Fix wrong rq's runnable_avg update with rt tasks
  sched: Document task_struct::personality field
  sched/cpuacct/UML: Fix header file dependency bug on the UML build
  cgroup: Kill subsys.active flag
  sched/cpuacct: No need to check subsys active state
  sched/cpuacct: Initialize cpuacct subsystem earlier
  sched/cpuacct: Initialize root cpuacct earlier
  sched/cpuacct: Allocate per_cpu cpuusage for root cpuacct statically
  sched/cpuacct: Clean up cpuacct.h
  sched/cpuacct: Remove redundant NULL checks in cpuacct_acount_field()
  sched/cpuacct: Remove redundant NULL checks in cpuacct_charge()
  sched/cpuacct: Add cpuacct_acount_field()
  sched/cpuacct: Add cpuacct_init()
  ...
</pre>
</div>
</content>
</entry>
<entry>
<title>nohz: Select VIRT_CPU_ACCOUNTING_GEN from full dynticks config</title>
<updated>2013-04-26T16:56:59+00:00</updated>
<author>
<name>Frederic Weisbecker</name>
<email>fweisbec@gmail.com</email>
</author>
<published>2013-04-26T13:16:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c58b0df12a6b5c497637db0676effd00e1fbab13'/>
<id>c58b0df12a6b5c497637db0676effd00e1fbab13</id>
<content type='text'>
Turn the full dynticks passive dependency on VIRT_CPU_ACCOUNTING_GEN
to an active one.

The full dynticks Kconfig is currently hidden behind the full dynticks
cputime accounting, which is an awkward and counter-intuitive layout:
the user first has to select the dynticks cputime accounting in order
to make the full dynticks feature visible.

We definitely want it the other way around. The usual way to create
this kind of active dependency is to use "select" on the target being
depended on. However, we can't use the Kconfig "select" instruction
when the target is a "choice".

So this patch takes inspiration from how the RCU subsystem's Kconfig
interacts with its dependencies on SMP and PREEMPT: we make sure that
cputime accounting can't propose any option other than
VIRT_CPU_ACCOUNTING_GEN when NO_HZ_FULL is selected, by using the
right "depends on" instruction for each cputime accounting choice.
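
In sketch form (close to, but not necessarily verbatim, the resulting
Kconfig), every accounting choice entry other than
VIRT_CPU_ACCOUNTING_GEN gains a !NO_HZ_FULL dependency:

	choice
		prompt "Cputime accounting"
		default TICK_CPU_ACCOUNTING if !NO_HZ_FULL
		default VIRT_CPU_ACCOUNTING_GEN if NO_HZ_FULL

	config TICK_CPU_ACCOUNTING
		bool "Simple tick based cputime accounting"
		depends on !NO_HZ_FULL

	config VIRT_CPU_ACCOUNTING_GEN
		bool "Full dynticks CPU time accounting"
		depends on HAVE_CONTEXT_TRACKING
		depends on 64BIT

	endchoice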

v2: Keep full dynticks cputime accounting available even without
full dynticks, as per Paul McKenney's suggestion.

Reported-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Turn the full dynticks passive dependency on VIRT_CPU_ACCOUNTING_GEN
to an active one.

The full dynticks Kconfig is currently hidden behind the full dynticks
cputime accounting, which is an awkward and counter-intuitive layout:
the user first has to select the dynticks cputime accounting in order
to make the full dynticks feature visible.

We definitely want it the other way around. The usual way to create
this kind of active dependency is to use "select" on the target being
depended on. However, we can't use the Kconfig "select" instruction
when the target is a "choice".

So this patch takes inspiration from how the RCU subsystem's Kconfig
interacts with its dependencies on SMP and PREEMPT: we make sure that
cputime accounting can't propose any option other than
VIRT_CPU_ACCOUNTING_GEN when NO_HZ_FULL is selected, by using the
right "depends on" instruction for each cputime accounting choice.
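
In sketch form (close to, but not necessarily verbatim, the resulting
Kconfig), every accounting choice entry other than
VIRT_CPU_ACCOUNTING_GEN gains a !NO_HZ_FULL dependency:

	choice
		prompt "Cputime accounting"
		default TICK_CPU_ACCOUNTING if !NO_HZ_FULL
		default VIRT_CPU_ACCOUNTING_GEN if NO_HZ_FULL

	config TICK_CPU_ACCOUNTING
		bool "Simple tick based cputime accounting"
		depends on !NO_HZ_FULL

	config VIRT_CPU_ACCOUNTING_GEN
		bool "Full dynticks CPU time accounting"
		depends on HAVE_CONTEXT_TRACKING
		depends on 64BIT

	endchoice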

v2: Keep full dynticks cputime accounting available even without
full dynticks, as per Paul McKenney's suggestion.

Reported-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Frederic Weisbecker &lt;fweisbec@gmail.com&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: Hakan Akkan &lt;hakanakkan@gmail.com&gt;
Cc: Ingo Molnar &lt;mingo@kernel.org&gt;
Cc: Kevin Hilman &lt;khilman@linaro.org&gt;
Cc: Li Zhong &lt;zhong@linux.vnet.ibm.com&gt;
Cc: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Cc: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
Cc: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Thomas Gleixner &lt;tglx@linutronix.de&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Merge branch 'rcu/next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into core/rcu</title>
<updated>2013-04-10T10:55:49+00:00</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@kernel.org</email>
</author>
<published>2013-04-10T10:55:49+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8fcfae31719c0a6c03f2cf63f815b46d378d8be4'/>
<id>8fcfae31719c0a6c03f2cf63f815b46d378d8be4</id>
<content type='text'>
Pull RCU updates from Paul E. McKenney:

  * Remove restrictions on no-CBs CPUs, make RCU_FAST_NO_HZ
    take advantage of numbered callbacks, do additional callback
    accelerations based on numbered callbacks.  Posted to LKML
    at https://lkml.org/lkml/2013/3/18/960.

  * RCU documentation updates.  Posted to LKML at
    https://lkml.org/lkml/2013/3/18/570.

  * Miscellaneous fixes.  Posted to LKML at
    https://lkml.org/lkml/2013/3/18/594.

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Pull RCU updates from Paul E. McKenney:

  * Remove restrictions on no-CBs CPUs, make RCU_FAST_NO_HZ
    take advantage of numbered callbacks, do additional callback
    accelerations based on numbered callbacks.  Posted to LKML
    at https://lkml.org/lkml/2013/3/18/960.

  * RCU documentation updates.  Posted to LKML at
    https://lkml.org/lkml/2013/3/18/570.

  * Miscellaneous fixes.  Posted to LKML at
    https://lkml.org/lkml/2013/3/18/594.

Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
</pre>
</div>
</content>
</entry>
</feed>
