<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/drivers/md, branch v3.10.24</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>md: fix calculation of stacking limits on level change.</title>
<updated>2013-12-04T18:57:15+00:00</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2013-11-14T04:16:15+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8e43aac1ec86b5955ecf396395f6fd8b4be42157'/>
<id>8e43aac1ec86b5955ecf396395f6fd8b4be42157</id>
<content type='text'>
commit 02e5f5c0a0f726e66e3d8506ea1691e344277969 upstream.

The various -&gt;run routines of md personalities assume that the 'queue'
has been initialised by the blk_set_stacking_limits() call in
md_alloc().

However when the level is changed (by level_store()) the -&gt;run routine
for the new level is called for an array which has already had the
stacking limits modified.  This can result in incorrect final
settings.

So call blk_set_stacking_limits() before -&gt;run in level_store().

A specific consequence of this bug is that it causes
discard_granularity to be set incorrectly when reshaping a RAID4 to a
RAID0.

This is suitable for any -stable kernel since 3.3 in which
blk_set_stacking_limits() was introduced.

Reported-and-tested-by: "Baldysiak, Pawel" &lt;pawel.baldysiak@intel.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dm: allocate buffer for messages with small number of arguments using GFP_NOIO</title>
<updated>2013-12-04T18:56:44+00:00</updated>
<author>
<name>Mikulas Patocka</name>
<email>mpatocka@redhat.com</email>
</author>
<published>2013-10-31T17:55:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=4c52f001344510d1e26a33cfdf971a12338a63d9'/>
<id>4c52f001344510d1e26a33cfdf971a12338a63d9</id>
<content type='text'>
commit f36afb3957353d2529cb2b00f78fdccd14fc5e9c upstream.

dm-mpath and dm-thin must process messages even if some device is
suspended, so we allocate the argv buffer with GFP_NOIO. These messages
have a small, fixed number of arguments.

On the other hand, dm-switch needs to process bulk data using messages,
so excessive use of GFP_NOIO could cause trouble.

The patch also lowers the default number of arguments from 64 to 8,
reducing the load on GFP_NOIO allocations.

Signed-off-by: Mikulas Patocka &lt;mpatocka@redhat.com&gt;
Acked-by: Alasdair G Kergon &lt;agk@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dm cache: fix a race condition between queuing new migrations and quiescing for a shutdown</title>
<updated>2013-12-04T18:56:43+00:00</updated>
<author>
<name>Joe Thornber</name>
<email>ejt@redhat.com</email>
</author>
<published>2013-10-30T17:11:58+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8fafee9829f539e17ea8678a6f56d2b897ffe3cc'/>
<id>8fafee9829f539e17ea8678a6f56d2b897ffe3cc</id>
<content type='text'>
commit 66cb1910df17b38334153462ec8166e48058035f upstream.

The code that was trying to do this (quiesce before shutdown) was
inadequate.  The postsuspend method (in ioctl context) needs to wait for
the worker thread to acknowledge the request to quiesce.  Otherwise the
migration count may drop to zero temporarily before the worker thread
realises we're quiescing.  In that case the target will be taken down,
but the worker thread may have issued a new migration, which will cause
an oops when it completes.

Signed-off-by: Joe Thornber &lt;ejt@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dm array: fix bug in growing array</title>
<updated>2013-12-04T18:56:42+00:00</updated>
<author>
<name>Joe Thornber</name>
<email>ejt@redhat.com</email>
</author>
<published>2013-10-30T11:19:59+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0c5fd99e89b5f288ffe5c2ed301d2ffaac091891'/>
<id>0c5fd99e89b5f288ffe5c2ed301d2ffaac091891</id>
<content type='text'>
commit 9c1d4de56066e4d6abc66ec188faafd7b303fb08 upstream.

Entries would be lost if the old tail block was partially filled.

Signed-off-by: Joe Thornber &lt;ejt@redhat.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dm mpath: fix race condition between multipath_dtr and pg_init_done</title>
<updated>2013-12-04T18:56:41+00:00</updated>
<author>
<name>Shiva Krishna Merla</name>
<email>shivakrishna.merla@netapp.com</email>
</author>
<published>2013-10-30T03:26:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=9fb1b9d041cb34f4381038e2622900dfdc4cd7b0'/>
<id>9fb1b9d041cb34f4381038e2622900dfdc4cd7b0</id>
<content type='text'>
commit 954a73d5d3073df2231820c718fdd2f18b0fe4c9 upstream.

Whenever multipath_dtr() is happening we must prevent queueing any
further path activation work.  Implement this by adding a new
'pg_init_disabled' flag to the multipath structure that denotes future
path activation work should be skipped if it is set.  By disabling
pg_init and then re-enabling in flush_multipath_work() we also avoid the
potential for pg_init to be initiated while suspending an mpath device.

Without this patch a race condition exists that may result in a kernel
panic:

1) If after pg_init_done() decrements pg_init_in_progress to 0, a call
   to wait_for_pg_init_completion() assumes there are no more pending path
   management commands.
2) If pg_init_required is set by pg_init_done(), due to retryable
   mode_select errors, then process_queued_ios() will again queue the
   path activation work.
3) If free_multipath() completes before activate_path() work is called a
   NULL pointer dereference like the following can be seen when
   accessing members of the recently destructed multipath:

BUG: unable to handle kernel NULL pointer dereference at 0000000000000090
RIP: 0010:[&lt;ffffffffa003db1b&gt;]  [&lt;ffffffffa003db1b&gt;] activate_path+0x1b/0x30 [dm_multipath]
[&lt;ffffffff81090ac0&gt;] worker_thread+0x170/0x2a0
[&lt;ffffffff81096c80&gt;] ? autoremove_wake_function+0x0/0x40

[switch to disabling pg_init in flush_multipath_work &amp; header edits by Mike Snitzer]
Signed-off-by: Shiva Krishna Merla &lt;shivakrishna.merla@netapp.com&gt;
Reviewed-by: Krishnasamy Somasundaram &lt;somasundaram.krishnasamy@netapp.com&gt;
Tested-by: Speagle Andy &lt;Andy.Speagle@netapp.com&gt;
Acked-by: Junichi Nomura &lt;j-nomura@ce.jp.nec.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>md: Fix skipping recovery for read-only arrays.</title>
<updated>2013-11-13T03:05:32+00:00</updated>
<author>
<name>Lukasz Dorau</name>
<email>lukasz.dorau@intel.com</email>
</author>
<published>2013-10-24T01:55:17+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=ed840bec21c6f2f99ca34e974a5905e4f2116c1b'/>
<id>ed840bec21c6f2f99ca34e974a5905e4f2116c1b</id>
<content type='text'>
commit 61e4947c99c4494336254ec540c50186d186150b upstream.

Since:
        commit 7ceb17e87bde79d285a8b988cfed9eaeebe60b86
        md: Allow devices to be re-added to a read-only array.

spares are activated on a read-only array. For the raid1 and raid10
personalities this causes not-in-sync devices to be marked in-sync
without checking whether recovery has finished.

If a read-only array is degraded and one of its devices is not in-sync
(because the array has been only partially recovered), recovery will be
skipped.

This patch adds a check that recovery has finished before marking a
device in-sync for the raid1 and raid10 personalities. The raid5
personality already has such a check (at raid5.c:6029).

The bug was introduced in 3.10 and causes data corruption.

Signed-off-by: Pawel Baldysiak &lt;pawel.baldysiak@intel.com&gt;
Signed-off-by: Lukasz Dorau &lt;lukasz.dorau@intel.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>md: avoid deadlock when md_set_badblocks.</title>
<updated>2013-11-13T03:05:32+00:00</updated>
<author>
<name>Bian Yu</name>
<email>bianyu@kedacom.com</email>
</author>
<published>2013-10-12T05:10:03+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=0465496671f4769e0f4f00481ce5bc5598c5caa2'/>
<id>0465496671f4769e0f4f00481ce5bc5598c5caa2</id>
<content type='text'>
commit 905b0297a9533d7a6ee00a01a990456636877dd6 upstream.

When a hard disk hits errors, md_set_badblocks is called after
scsi_restart_operations, which has already disabled IRQs. However,
md_set_badblocks calls write_sequnlock_irq, which re-enables IRQs, so a
softirq can preempt the current thread and may cause a deadlock. This
situation should use write_sequnlock_irqsave/irqrestore instead.

I hit this situation and the call trace is below:
[  638.919974] BUG: spinlock recursion on CPU#0, scsi_eh_13/1010
[  638.921923]  lock: 0xffff8800d4d51fc8, .magic: dead4ead, .owner: scsi_eh_13/1010, .owner_cpu: 0
[  638.923890] CPU: 0 PID: 1010 Comm: scsi_eh_13 Not tainted 3.12.0-rc5+ #37
[  638.925844] Hardware name: To be filled by O.E.M. To be filled by O.E.M./MAHOBAY, BIOS 4.6.5 03/05/2013
[  638.927816]  ffff880037ad4640 ffff880118c03d50 ffffffff8172ff85 0000000000000007
[  638.929829]  ffff8800d4d51fc8 ffff880118c03d70 ffffffff81730030 ffff8800d4d51fc8
[  638.931848]  ffffffff81a72eb0 ffff880118c03d90 ffffffff81730056 ffff8800d4d51fc8
[  638.933884] Call Trace:
[  638.935867]  &lt;IRQ&gt;  [&lt;ffffffff8172ff85&gt;] dump_stack+0x55/0x76
[  638.937878]  [&lt;ffffffff81730030&gt;] spin_dump+0x8a/0x8f
[  638.939861]  [&lt;ffffffff81730056&gt;] spin_bug+0x21/0x26
[  638.941836]  [&lt;ffffffff81336de4&gt;] do_raw_spin_lock+0xa4/0xc0
[  638.943801]  [&lt;ffffffff8173f036&gt;] _raw_spin_lock+0x66/0x80
[  638.945747]  [&lt;ffffffff814a73ed&gt;] ? scsi_device_unbusy+0x9d/0xd0
[  638.947672]  [&lt;ffffffff8173fb1b&gt;] ? _raw_spin_unlock+0x2b/0x50
[  638.949595]  [&lt;ffffffff814a73ed&gt;] scsi_device_unbusy+0x9d/0xd0
[  638.951504]  [&lt;ffffffff8149ec47&gt;] scsi_finish_command+0x37/0xe0
[  638.953388]  [&lt;ffffffff814a75e8&gt;] scsi_softirq_done+0xa8/0x140
[  638.955248]  [&lt;ffffffff8130e32b&gt;] blk_done_softirq+0x7b/0x90
[  638.957116]  [&lt;ffffffff8104fddd&gt;] __do_softirq+0xfd/0x330
[  638.958987]  [&lt;ffffffff810b964f&gt;] ? __lock_release+0x6f/0x100
[  638.960861]  [&lt;ffffffff8174a5cc&gt;] call_softirq+0x1c/0x30
[  638.962724]  [&lt;ffffffff81004c7d&gt;] do_softirq+0x8d/0xc0
[  638.964565]  [&lt;ffffffff8105024e&gt;] irq_exit+0x10e/0x150
[  638.966390]  [&lt;ffffffff8174ad4a&gt;] smp_apic_timer_interrupt+0x4a/0x60
[  638.968223]  [&lt;ffffffff817499af&gt;] apic_timer_interrupt+0x6f/0x80
[  638.970079]  &lt;EOI&gt;  [&lt;ffffffff810b964f&gt;] ? __lock_release+0x6f/0x100
[  638.971899]  [&lt;ffffffff8173fa6a&gt;] ? _raw_spin_unlock_irq+0x3a/0x50
[  638.973691]  [&lt;ffffffff8173fa60&gt;] ? _raw_spin_unlock_irq+0x30/0x50
[  638.975475]  [&lt;ffffffff81562393&gt;] md_set_badblocks+0x1f3/0x4a0
[  638.977243]  [&lt;ffffffff81566e07&gt;] rdev_set_badblocks+0x27/0x80
[  638.978988]  [&lt;ffffffffa00d97bb&gt;] raid5_end_read_request+0x36b/0x4e0 [raid456]
[  638.980723]  [&lt;ffffffff811b5a1d&gt;] bio_endio+0x1d/0x40
[  638.982463]  [&lt;ffffffff81304ff3&gt;] req_bio_endio.isra.65+0x83/0xa0
[  638.984214]  [&lt;ffffffff81306b9f&gt;] blk_update_request+0x7f/0x350
[  638.985967]  [&lt;ffffffff81306ea1&gt;] blk_update_bidi_request+0x31/0x90
[  638.987710]  [&lt;ffffffff813085e0&gt;] __blk_end_bidi_request+0x20/0x50
[  638.989439]  [&lt;ffffffff8130862f&gt;] __blk_end_request_all+0x1f/0x30
[  638.991149]  [&lt;ffffffff81308746&gt;] blk_peek_request+0x106/0x250
[  638.992861]  [&lt;ffffffff814a62a9&gt;] ? scsi_kill_request.isra.32+0xe9/0x130
[  638.994561]  [&lt;ffffffff814a633a&gt;] scsi_request_fn+0x4a/0x3d0
[  638.996251]  [&lt;ffffffff813040a7&gt;] __blk_run_queue+0x37/0x50
[  638.997900]  [&lt;ffffffff813045af&gt;] blk_run_queue+0x2f/0x50
[  638.999553]  [&lt;ffffffff814a5750&gt;] scsi_run_queue+0xe0/0x1c0
[  639.001185]  [&lt;ffffffff814a7721&gt;] scsi_run_host_queues+0x21/0x40
[  639.002798]  [&lt;ffffffff814a2e87&gt;] scsi_restart_operations+0x177/0x200
[  639.004391]  [&lt;ffffffff814a4fe9&gt;] scsi_error_handler+0xc9/0xe0
[  639.005996]  [&lt;ffffffff814a4f20&gt;] ? scsi_unjam_host+0xd0/0xd0
[  639.007600]  [&lt;ffffffff81072f6b&gt;] kthread+0xdb/0xe0
[  639.009205]  [&lt;ffffffff81072e90&gt;] ? flush_kthread_worker+0x170/0x170
[  639.010821]  [&lt;ffffffff81748cac&gt;] ret_from_fork+0x7c/0xb0
[  639.012437]  [&lt;ffffffff81072e90&gt;] ? flush_kthread_worker+0x170/0x170

This bug was introduced in commit 2e8ac30312973dd20e68073653
(the first time rdev_set_badblocks was called from interrupt context),
so this patch is appropriate for 3.5 and subsequent kernels.

Signed-off-by: Bian Yu &lt;bianyu@kedacom.com&gt;
Reviewed-by: Jianpeng Ma &lt;majianpeng@gmail.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>raid5: avoid finding "discard" stripe</title>
<updated>2013-11-13T03:05:31+00:00</updated>
<author>
<name>Shaohua Li</name>
<email>shli@kernel.org</email>
</author>
<published>2013-10-19T06:51:42+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=01e608d7276508fcafb76f2092db89885e62ef66'/>
<id>01e608d7276508fcafb76f2092db89885e62ef66</id>
<content type='text'>
commit d47648fcf0611812286f68131b40251c6fa54f5e upstream.

SCSI discard will damage a discard stripe's bio settings, e.g. some
fields are changed. If the stripe is reused very soon, we have wrong bio
settings. We now remove the discard stripe from the hash list, so next
time the stripe will be fully initialized.

Suitable for backport to 3.7+.

Signed-off-by: Shaohua Li &lt;shli@fusionio.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>raid5: set bio bi_vcnt 0 for discard request</title>
<updated>2013-11-13T03:05:31+00:00</updated>
<author>
<name>Shaohua Li</name>
<email>shli@kernel.org</email>
</author>
<published>2013-10-19T06:50:28+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=7e44a92662ce582268c4f35e68aad1f632ada8f8'/>
<id>7e44a92662ce582268c4f35e68aad1f632ada8f8</id>
<content type='text'>
commit 37c61ff31e9b5e3fcf3cc6579f5c68f6ad40c4b1 upstream.

The SCSI layer will add a new payload for discard requests. If two bios
are merged into one, the second bio has bi_vcnt 1, which was set in
raid5. This will confuse SCSI and cause an oops.

Suitable for backport to 3.7+

Reported-by: Jes Sorensen &lt;Jes.Sorensen@redhat.com&gt;
Signed-off-by: Shaohua Li &lt;shli@fusionio.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Acked-by: Martin K. Petersen &lt;martin.petersen@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bcache: Fixed incorrect order of arguments to bio_alloc_bioset()</title>
<updated>2013-11-13T03:05:30+00:00</updated>
<author>
<name>Kent Overstreet</name>
<email>kmo@daterainc.com</email>
</author>
<published>2013-10-22T22:35:50+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=955a23e181561a792d6e4c1572848b7ba306499f'/>
<id>955a23e181561a792d6e4c1572848b7ba306499f</id>
<content type='text'>
commit d4eddd42f592a0cf06818fae694a3d271f842e4d upstream.

Signed-off-by: Kent Overstreet &lt;kmo@daterainc.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
