<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/kernel/rcu/tree_plugin.h, branch v4.9.16</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>rcu: Narrow early boot window of illegal synchronous grace periods</title>
<updated>2017-01-26T07:24:37+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2017-01-10T10:28:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=90687fc3c8c386a16326089d68cf616b8049440f'/>
<id>90687fc3c8c386a16326089d68cf616b8049440f</id>
<content type='text'>
commit 52d7e48b86fc108e45a656d8e53e4237993c481d upstream.

The current preemptible RCU implementation goes through three phases
during bootup.  In the first phase, there is only one CPU that is running
with preemption disabled, so that a no-op is a synchronous grace period.
In the second mid-boot phase, the scheduler is running, but RCU has
not yet gotten its kthreads spawned (and, for expedited grace periods,
workqueues are not yet running).  During this time, any attempt to do
a synchronous grace period will hang the system (or complain bitterly,
depending).  In the third and final phase, RCU is fully operational and
everything works normally.

This has been OK for some time, but there has recently been some
synchronous grace periods showing up during the second mid-boot phase.
This code worked "by accident" for a while, but started failing as soon
as expedited RCU grace periods switched over to workqueues in commit
8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue").
Note that the code was buggy even before this commit, as it was subject
to failure on real-time systems that forced all expedited grace periods
to run as normal grace periods (for example, using the rcu_normal ksysfs
parameter).  The callchain from the failure case is as follows:

early_amd_iommu_init()
|-&gt; acpi_put_table(ivrs_base);
|-&gt; acpi_tb_put_table(table_desc);
|-&gt; acpi_tb_invalidate_table(table_desc);
|-&gt; acpi_tb_release_table(...)
|-&gt; acpi_os_unmap_memory
|-&gt; acpi_os_unmap_iomem
|-&gt; acpi_os_map_cleanup
|-&gt; synchronize_rcu_expedited

The kernel showing this callchain was built with CONFIG_PREEMPT_RCU=y,
which caused the code to try using workqueues before they were
initialized, which did not go well.

This commit therefore reworks RCU to permit synchronous grace periods
to proceed during this mid-boot phase.  This commit is therefore a
fix to a regression introduced in v4.9, and is therefore being put
forward post-merge-window in v4.10.

This commit sets a flag from the existing rcu_scheduler_starting()
function which causes all synchronous grace periods to take the expedited
path.  The expedited path now checks this flag, using the requesting task
to drive the expedited grace period forward during the mid-boot phase.
Finally, this flag is updated by a core_initcall() function named
rcu_exp_runtime_mode(), which causes the runtime codepaths to be used.

Note that this arrangement assumes that tasks are not sent POSIX signals
(or anything similar) from the time that the first task is spawned
through core_initcall() time.
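
The three-phase arrangement described above can be modeled in a short sketch.  The state names mirror the kernel's rcu_scheduler_active values from this commit, but the code is an illustrative simplification, not the kernel implementation:

```python
# Simplified model of the three boot phases; the constants mirror the
# kernel's rcu_scheduler_active states, the logic is illustrative only.
RCU_SCHEDULER_INACTIVE = 0  # phase 1: one CPU, preemption disabled
RCU_SCHEDULER_INIT = 1      # phase 2: scheduler up, no workqueues yet
RCU_SCHEDULER_RUNNING = 2   # phase 3: fully operational

rcu_scheduler_active = RCU_SCHEDULER_INACTIVE

def rcu_scheduler_starting():
    # Scheduler is now running: synchronous grace periods must take
    # the expedited path from here on.
    global rcu_scheduler_active
    rcu_scheduler_active = RCU_SCHEDULER_INIT

def rcu_exp_runtime_mode():
    # core_initcall(): workqueues exist, switch to runtime codepaths.
    global rcu_scheduler_active
    rcu_scheduler_active = RCU_SCHEDULER_RUNNING

def synchronize_rcu_expedited():
    if rcu_scheduler_active == RCU_SCHEDULER_INACTIVE:
        return "no-op"                      # a no-op is a grace period
    if rcu_scheduler_active == RCU_SCHEDULER_INIT:
        return "driven by requesting task"  # mid-boot: no workqueues
    return "driven by workqueue"            # normal runtime path
```

The point of the model is the middle branch: during mid-boot the requesting task itself drives the grace period, so nothing blocks on not-yet-initialized workqueues.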

Fixes: 8b355e3bc140 ("rcu: Drive expedited grace periods from workqueue")
Reported-by: "Zheng, Lv" &lt;lv.zheng@intel.com&gt;
Reported-by: Borislav Petkov &lt;bp@alien8.de&gt;
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
Tested-by: Stan Kain &lt;stan.kain@gmail.com&gt;
Tested-by: Ivan &lt;waffolz@hotmail.com&gt;
Tested-by: Emanuel Castelo &lt;emanuel.castelo@gmail.com&gt;
Tested-by: Bruno Pesavento &lt;bpesavento@infinito.it&gt;
Tested-by: Borislav Petkov &lt;bp@suse.de&gt;
Tested-by: Frederic Bezies &lt;fredbezies@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>rcu: Fix soft lockup for rcu_nocb_kthread</title>
<updated>2016-08-22T14:53:20+00:00</updated>
<author>
<name>Ding Tianhong</name>
<email>dingtianhong@huawei.com</email>
</author>
<published>2016-06-15T07:27:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=bedc1969150d480c462cdac320fa944b694a7162'/>
<id>bedc1969150d480c462cdac320fa944b694a7162</id>
<content type='text'>
Carrying out the following steps results in a softlockup in the
RCU callback-offload (rcuo) kthreads:

1. Connect to ixgbevf, and set the speed to 10Gb/s.
2. Use ifconfig to bring the nic up and down repeatedly.

[  317.005148] IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
[  368.106005] BUG: soft lockup - CPU#1 stuck for 22s! [rcuos/1:15]
[  368.106005] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[  368.106005] task: ffff88057dd8a220 ti: ffff88057dd9c000 task.ti: ffff88057dd9c000
[  368.106005] RIP: 0010:[&lt;ffffffff81579e04&gt;]  [&lt;ffffffff81579e04&gt;] fib_table_lookup+0x14/0x390
[  368.106005] RSP: 0018:ffff88061fc83ce8  EFLAGS: 00000286
[  368.106005] RAX: 0000000000000001 RBX: 00000000020155c0 RCX: 0000000000000001
[  368.106005] RDX: ffff88061fc83d50 RSI: ffff88061fc83d70 RDI: ffff880036d11a00
[  368.106005] RBP: ffff88061fc83d08 R08: 0000000000000001 R09: 0000000000000000
[  368.106005] R10: ffff880036d11a00 R11: ffffffff819e0900 R12: ffff88061fc83c58
[  368.106005] R13: ffffffff816154dd R14: ffff88061fc83d08 R15: 00000000020155c0
[  368.106005] FS:  0000000000000000(0000) GS:ffff88061fc80000(0000) knlGS:0000000000000000
[  368.106005] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  368.106005] CR2: 00007f8c2aee9c40 CR3: 000000057b222000 CR4: 00000000000407e0
[  368.106005] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  368.106005] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  368.106005] Stack:
[  368.106005]  00000000010000c0 ffff88057b766000 ffff8802e380b000 ffff88057af03e00
[  368.106005]  ffff88061fc83dc0 ffffffff815349a6 ffff88061fc83d40 ffffffff814ee146
[  368.106005]  ffff8802e380af00 00000000e380af00 ffffffff819e0900 020155c0010000c0
[  368.106005] Call Trace:
[  368.106005]  &lt;IRQ&gt;
[  368.106005]
[  368.106005]  [&lt;ffffffff815349a6&gt;] ip_route_input_noref+0x516/0xbd0
[  368.106005]  [&lt;ffffffff814ee146&gt;] ? skb_release_data+0xd6/0x110
[  368.106005]  [&lt;ffffffff814ee20a&gt;] ? kfree_skb+0x3a/0xa0
[  368.106005]  [&lt;ffffffff8153698f&gt;] ip_rcv_finish+0x29f/0x350
[  368.106005]  [&lt;ffffffff81537034&gt;] ip_rcv+0x234/0x380
[  368.106005]  [&lt;ffffffff814fd656&gt;] __netif_receive_skb_core+0x676/0x870
[  368.106005]  [&lt;ffffffff814fd868&gt;] __netif_receive_skb+0x18/0x60
[  368.106005]  [&lt;ffffffff814fe4de&gt;] process_backlog+0xae/0x180
[  368.106005]  [&lt;ffffffff814fdcb2&gt;] net_rx_action+0x152/0x240
[  368.106005]  [&lt;ffffffff81077b3f&gt;] __do_softirq+0xef/0x280
[  368.106005]  [&lt;ffffffff8161619c&gt;] call_softirq+0x1c/0x30
[  368.106005]  &lt;EOI&gt;
[  368.106005]
[  368.106005]  [&lt;ffffffff81015d95&gt;] do_softirq+0x65/0xa0
[  368.106005]  [&lt;ffffffff81077174&gt;] local_bh_enable+0x94/0xa0
[  368.106005]  [&lt;ffffffff81114922&gt;] rcu_nocb_kthread+0x232/0x370
[  368.106005]  [&lt;ffffffff81098250&gt;] ? wake_up_bit+0x30/0x30
[  368.106005]  [&lt;ffffffff811146f0&gt;] ? rcu_start_gp+0x40/0x40
[  368.106005]  [&lt;ffffffff8109728f&gt;] kthread+0xcf/0xe0
[  368.106005]  [&lt;ffffffff810971c0&gt;] ? kthread_create_on_node+0x140/0x140
[  368.106005]  [&lt;ffffffff816147d8&gt;] ret_from_fork+0x58/0x90
[  368.106005]  [&lt;ffffffff810971c0&gt;] ? kthread_create_on_node+0x140/0x140

==================================cut here==============================

It turns out that the rcuos callback-offload kthread is busy processing
a very large quantity of RCU callbacks, and it is not relinquishing the
CPU while doing so.  This commit therefore adds a cond_resched_rcu_qs()
within the loop to allow other tasks to run.
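
The shape of the fix can be sketched as follows.  The function and parameter names here are placeholders, standing in for the kernel's rcu_nocb_kthread() loop and its cond_resched_rcu_qs() call:

```python
# Illustrative sketch of the fix: a callback-offload loop that yields
# the CPU periodically instead of monopolizing it.  Names are
# placeholders for rcu_nocb_kthread() and cond_resched_rcu_qs().
def process_callbacks(callbacks, yield_cpu, batch=16):
    invoked = 0
    for cb in callbacks:
        cb()              # invoke one RCU callback
        invoked += 1
        if invoked % batch == 0:
            yield_cpu()   # let other tasks run (and report a
                          # quiescent state, in the kernel's case)
    return invoked
```

Without the periodic yield, a sufficiently long callback list keeps the kthread on-CPU long enough to trip the soft-lockup detector, as in the trace above.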

Signed-off-by: Ding Tianhong &lt;dingtianhong@huawei.com&gt;
[ paulmck: Substituted cond_resched_rcu_qs for cond_resched. ]
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>Merge branches 'doc.2016.06.15a', 'fixes.2016.06.15b' and 'torture.2016.06.14a' into HEAD</title>
<updated>2016-06-15T23:58:03+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-06-15T23:58:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4d03754f04247bc4d469b78b61cac942df37445d'/>
<id>4d03754f04247bc4d469b78b61cac942df37445d</id>
<content type='text'>
doc.2016.06.15a: Documentation updates
fixes.2016.06.15b: Miscellaneous fixes
torture.2016.06.14a: Torture-test updates
</content>
</entry>
<entry>
<title>rcu: Correctly handle sparse possible cpus</title>
<updated>2016-06-15T23:00:05+00:00</updated>
<author>
<name>Mark Rutland</name>
<email>mark.rutland@arm.com</email>
</author>
<published>2016-06-03T14:20:04+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=bc75e99983df1efd977a5cd468893d55d52b8d70'/>
<id>bc75e99983df1efd977a5cd468893d55d52b8d70</id>
<content type='text'>
In many cases in the RCU tree code, we iterate over the set of cpus for
a leaf node described by rcu_node::grplo and rcu_node::grphi, checking
per-cpu data for each cpu in this range. However, if the set of possible
cpus is sparse, some cpus described in this range are not possible, and
thus no per-cpu region will have been allocated (or initialised) for
them by the generic percpu code.

Erroneous accesses to a per-cpu area for these !possible cpus may fault
or may hit other data depending on the address generated when the
erroneous per-cpu offset is applied. In practice, both cases have been
observed on arm64 hardware (the former being silent, but detectable with
additional patches).

To avoid issues resulting from this, we must iterate over the set of
*possible* cpus for a given leaf node. This patch adds a new helper,
for_each_leaf_node_possible_cpu, to enable this. As iteration is often
intertwined with rcu_node local bitmask manipulation, a new
leaf_node_cpu_bit helper is added to make this simpler and more
consistent. The RCU tree code is made to use both of these where
appropriate.
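
The semantics of the two helpers can be modeled like this.  These are Python stand-ins for the C macros named above, with a set standing in for cpu_possible_mask; the bit computation uses ** where the C version shifts:

```python
# Model of the new helpers: iterate only *possible* CPUs within a
# leaf rcu_node's [grplo, grphi] range, and compute each CPU's bit
# in the node-local mask.  Illustrative stand-ins for the C macros.
def for_each_leaf_node_possible_cpu(grplo, grphi, possible_cpus):
    # The buggy pre-patch pattern was effectively
    # range(grplo, grphi + 1) with no possibility check, touching
    # per-cpu areas that were never allocated on sparse systems.
    for cpu in range(grplo, grphi + 1):
        if cpu in possible_cpus:
            yield cpu

def leaf_node_cpu_bit(grplo, cpu):
    # Bit for this CPU within the leaf node's bitmask; the C macro
    # shifts 1UL left by (cpu - grplo), written here with **.
    return 2 ** (cpu - grplo)
```

On a machine whose possible CPUs are, say, {0, 2} within a node covering 0..3, the iteration skips CPUs 1 and 3 entirely instead of faulting on their missing per-cpu regions.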

Without this patch, running reboot at a shell can result in an oops
like:

[ 3369.075979] Unable to handle kernel paging request at virtual address ffffff8008b21b4c
[ 3369.083881] pgd = ffffffc3ecdda000
[ 3369.087270] [ffffff8008b21b4c] *pgd=00000083eca48003, *pud=00000083eca48003, *pmd=0000000000000000
[ 3369.096222] Internal error: Oops: 96000007 [#1] PREEMPT SMP
[ 3369.101781] Modules linked in:
[ 3369.104825] CPU: 2 PID: 1817 Comm: NetworkManager Tainted: G        W       4.6.0+ #3
[ 3369.121239] task: ffffffc0fa13e000 ti: ffffffc3eb940000 task.ti: ffffffc3eb940000
[ 3369.128708] PC is at sync_rcu_exp_select_cpus+0x188/0x510
[ 3369.134094] LR is at sync_rcu_exp_select_cpus+0x104/0x510
[ 3369.139479] pc : [&lt;ffffff80081109a8&gt;] lr : [&lt;ffffff8008110924&gt;] pstate: 200001c5
[ 3369.146860] sp : ffffffc3eb9435a0
[ 3369.150162] x29: ffffffc3eb9435a0 x28: ffffff8008be4f88
[ 3369.155465] x27: ffffff8008b66c80 x26: ffffffc3eceb2600
[ 3369.160767] x25: 0000000000000001 x24: ffffff8008be4f88
[ 3369.166070] x23: ffffff8008b51c3c x22: ffffff8008b66c80
[ 3369.171371] x21: 0000000000000001 x20: ffffff8008b21b40
[ 3369.176673] x19: ffffff8008b66c80 x18: 0000000000000000
[ 3369.181975] x17: 0000007fa951a010 x16: ffffff80086a30f0
[ 3369.187278] x15: 0000007fa9505590 x14: 0000000000000000
[ 3369.192580] x13: ffffff8008b51000 x12: ffffffc3eb940000
[ 3369.197882] x11: 0000000000000006 x10: ffffff8008b51b78
[ 3369.203184] x9 : 0000000000000001 x8 : ffffff8008be4000
[ 3369.208486] x7 : ffffff8008b21b40 x6 : 0000000000001003
[ 3369.213788] x5 : 0000000000000000 x4 : ffffff8008b27280
[ 3369.219090] x3 : ffffff8008b21b4c x2 : 0000000000000001
[ 3369.224406] x1 : 0000000000000001 x0 : 0000000000000140
...
[ 3369.972257] [&lt;ffffff80081109a8&gt;] sync_rcu_exp_select_cpus+0x188/0x510
[ 3369.978685] [&lt;ffffff80081128b4&gt;] synchronize_rcu_expedited+0x64/0xa8
[ 3369.985026] [&lt;ffffff80086b987c&gt;] synchronize_net+0x24/0x30
[ 3369.990499] [&lt;ffffff80086ddb54&gt;] dev_deactivate_many+0x28c/0x298
[ 3369.996493] [&lt;ffffff80086b6bb8&gt;] __dev_close_many+0x60/0xd0
[ 3370.002052] [&lt;ffffff80086b6d48&gt;] __dev_close+0x28/0x40
[ 3370.007178] [&lt;ffffff80086bf62c&gt;] __dev_change_flags+0x8c/0x158
[ 3370.012999] [&lt;ffffff80086bf718&gt;] dev_change_flags+0x20/0x60
[ 3370.018558] [&lt;ffffff80086cf7f0&gt;] do_setlink+0x288/0x918
[ 3370.023771] [&lt;ffffff80086d0798&gt;] rtnl_newlink+0x398/0x6a8
[ 3370.029158] [&lt;ffffff80086cee84&gt;] rtnetlink_rcv_msg+0xe4/0x220
[ 3370.034891] [&lt;ffffff80086e274c&gt;] netlink_rcv_skb+0xc4/0xf8
[ 3370.040364] [&lt;ffffff80086ced8c&gt;] rtnetlink_rcv+0x2c/0x40
[ 3370.045663] [&lt;ffffff80086e1fe8&gt;] netlink_unicast+0x160/0x238
[ 3370.051309] [&lt;ffffff80086e24b8&gt;] netlink_sendmsg+0x2f0/0x358
[ 3370.056956] [&lt;ffffff80086a0070&gt;] sock_sendmsg+0x18/0x30
[ 3370.062168] [&lt;ffffff80086a21cc&gt;] ___sys_sendmsg+0x26c/0x280
[ 3370.067728] [&lt;ffffff80086a30ac&gt;] __sys_sendmsg+0x44/0x88
[ 3370.073027] [&lt;ffffff80086a3100&gt;] SyS_sendmsg+0x10/0x20
[ 3370.078153] [&lt;ffffff8008085e70&gt;] el0_svc_naked+0x24/0x28

Signed-off-by: Mark Rutland &lt;mark.rutland@arm.com&gt;
Reported-by: Dennis Chen &lt;dennis.chen@arm.com&gt;
Cc: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Cc: Josh Triplett &lt;josh@joshtriplett.org&gt;
Cc: Lai Jiangshan &lt;jiangshanlai@gmail.com&gt;
Cc: Mathieu Desnoyers &lt;mathieu.desnoyers@efficios.com&gt;
Cc: Steve Capper &lt;steve.capper@arm.com&gt;
Cc: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>torture: Remove CONFIG_RCU_TORTURE_TEST_RUNNABLE, simplify code</title>
<updated>2016-06-14T23:02:15+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-04-19T21:42:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4e9a073f60367157fd64b65490654c39d4c44321'/>
<id>4e9a073f60367157fd64b65490654c39d4c44321</id>
<content type='text'>
This commit removes CONFIG_RCU_TORTURE_TEST_RUNNABLE in favor of the
already-existing rcutorture.torture_runnable kernel boot parameter.
It also converts an #ifdef into IS_ENABLED(), saving a few lines of code.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Move expedited code from tree_plugin.h to tree_exp.h</title>
<updated>2016-06-14T23:01:42+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-04-15T23:44:07+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=40e0a6cfd53e37d9b8863cdbc0adb1f72e9311e7'/>
<id>40e0a6cfd53e37d9b8863cdbc0adb1f72e9311e7</id>
<content type='text'>
People have been having some difficulty finding their way around the
RCU code.  This commit therefore pulls some of the expedited grace-period
code from tree_plugin.h to a new tree_exp.h file.  This commit is strictly
code movement.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Consolidate expedited GP code into exp_funnel_lock()</title>
<updated>2016-03-31T20:34:11+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-03-16T23:32:24+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=aff12cdf86e6fa891d1c30c0fad112d138bd7b10'/>
<id>aff12cdf86e6fa891d1c30c0fad112d138bd7b10</id>
<content type='text'>
This commit pulls the grace-period-start counter adjustment and tracing
from synchronize_rcu_expedited() and synchronize_sched_expedited()
into exp_funnel_lock(), thus eliminating some code duplication.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Consolidate expedited GP tracing into rcu_exp_gp_seq_snap()</title>
<updated>2016-03-31T20:34:10+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-03-16T23:27:44+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=179e5dcd1e5bdfac1128431d131b31322aedd2bc'/>
<id>179e5dcd1e5bdfac1128431d131b31322aedd2bc</id>
<content type='text'>
This commit moves some duplicate code from synchronize_rcu_expedited()
and synchronize_sched_expedited() into rcu_exp_gp_seq_snap().  This
doesn't save lines of code, but does eliminate a "tell me twice" issue.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Consolidate expedited GP code into rcu_exp_wait_wake()</title>
<updated>2016-03-31T20:34:10+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-03-16T23:22:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4ea3e85b113ab37a2d55cfabf0d709ddec088bb3'/>
<id>4ea3e85b113ab37a2d55cfabf0d709ddec088bb3</id>
<content type='text'>
Currently, synchronize_rcu_expedited() and synchronize_sched_expedited() have
significant duplicate code.  This commit therefore consolidates some of
this code into rcu_exp_wake(), which is now renamed to rcu_exp_wait_wake()
in recognition of its added responsibilities.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
<entry>
<title>rcu: Enforce expedited-GP fairness via funnel wait queue</title>
<updated>2016-03-31T20:34:08+00:00</updated>
<author>
<name>Paul E. McKenney</name>
<email>paulmck@linux.vnet.ibm.com</email>
</author>
<published>2016-01-31T01:57:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f6a12f34a448cc8a624070fd365c29c890138a48'/>
<id>f6a12f34a448cc8a624070fd365c29c890138a48</id>
<content type='text'>
The current mutex-based funnel-locking approach used by expedited grace
periods is subject to severe unfairness.  The problem arises when a
few tasks, making a path from leaves to root, all wake up before other
tasks do.  A new task can then follow this path all the way to the root,
which needlessly delays tasks whose grace period is done, but who do
not happen to acquire the lock quickly enough.

This commit avoids this problem by maintaining per-rcu_node wait queues,
along with a per-rcu_node counter that tracks the latest grace period
sought by an earlier task to visit this node.  If that grace period
would satisfy the current task, instead of proceeding up the tree,
it waits on the current rcu_node structure using a pair of wait queues
provided for that purpose.  This decouples awakening of old tasks from
the arrival of new tasks.
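
The decision made at each level of the funnel can be sketched as follows.  The dict and field name are illustrative, standing in for an rcu_node structure and its per-node counter; the real code also takes locks and uses wraparound-safe sequence comparison:

```python
# Simplified model of the funnel decision at one rcu_node: if an
# earlier visitor already requested a grace period at least as new
# as ours, wait here rather than climbing toward the root.
def funnel_step(node, needed_seq):
    # node stands in for an rcu_node; "exp_seq_rq" is the latest
    # grace-period sequence sought by an earlier visiting task.
    if node["exp_seq_rq"] >= needed_seq:
        # An earlier request covers us: park on this node's wait
        # queue instead of contending further up the tree.
        return "wait"
    node["exp_seq_rq"] = needed_seq  # later arrivals may stop here
    return "advance"
```

Because later tasks whose grace period is already covered park at intermediate nodes, a newly arriving task can no longer race past them to the root and delay their wakeups.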

If the wakeups prove to be a bottleneck, additional kthreads can be
brought to bear for that purpose.

Signed-off-by: Paul E. McKenney &lt;paulmck@linux.vnet.ibm.com&gt;
</content>
</entry>
</feed>
