<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/drivers/md, branch v4.9.71</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>raid5: Set R5_Expanded on parity devices as well as data.</title>
<updated>2017-12-20T09:07:32+00:00</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.com</email>
</author>
<published>2017-10-17T05:18:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=f39486bd37ee3edd8658aff1226337c47e4ba32e'/>
<id>f39486bd37ee3edd8658aff1226337c47e4ba32e</id>
<content type='text'>
[ Upstream commit 235b6003fb28f0dd8e7ed8fbdb088bb548291766 ]

When reshaping a fully degraded raid5/raid6 to a larger
number of devices, the new device(s) are not in-sync
and so can make the newly grown stripe appear to be
"failed".
To avoid this, we set the R5_Expanded flag to say "Even though
this device is not fully in-sync, this block is safe so
don't treat the device as failed for this stripe".
This flag is set for data devices, but not for parity devices.

Consequently, if you have a RAID6 with two devices that are partly
recovered and a spare, and start a reshape to include the spare,
then when the reshape gets past the point where the recovery was
up to, it will think the stripes are failed and will get into
an infinite loop, failing to make progress.

So when constructing parity on an EXPAND_READY stripe,
set R5_Expanded.

Reported-by: Curt &lt;lightspd@gmail.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.com&gt;
Signed-off-by: Shaohua Li &lt;shli@fb.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>bcache: fix wrong cache_misses statistics</title>
<updated>2017-12-20T09:07:30+00:00</updated>
<author>
<name>tang.junhui</name>
<email>tang.junhui@zte.com.cn</email>
</author>
<published>2017-10-30T21:46:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=9102ed6a5f6a53434f69264ac8457994fde14a88'/>
<id>9102ed6a5f6a53434f69264ac8457994fde14a88</id>
<content type='text'>
[ Upstream commit c157313791a999646901b3e3c6888514ebc36d62 ]

Currently, cache-missed IOs are identified by s-&gt;cache_miss, but in fact
there are many situations in which missed IOs are not assigned a value for
s-&gt;cache_miss in cached_dev_cache_miss(), for example, a bypassed IO
(s-&gt;iop.bypass = 1), or a failed cache_bio allocation. In these situations,
it will go to out_put or out_submit, and s-&gt;cache_miss is NULL, which leads
bch_mark_cache_accounting() to treat the IO as a cache hit.

[ML: applied by 3-way merge]

Signed-off-by: tang.junhui &lt;tang.junhui@zte.com.cn&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Reviewed-by: Coly Li &lt;colyli@suse.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>bcache: explicitly destroy mutex while exiting</title>
<updated>2017-12-20T09:07:30+00:00</updated>
<author>
<name>Liang Chen</name>
<email>liangchen.linux@gmail.com</email>
</author>
<published>2017-10-30T21:46:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c2a0531f59c363fd5b50be46b97248091b13484b'/>
<id>c2a0531f59c363fd5b50be46b97248091b13484b</id>
<content type='text'>
[ Upstream commit 330a4db89d39a6b43f36da16824eaa7a7509d34d ]

mutex_destroy() does nothing most of the time, but it is better to
call it to make the code future-proof, and it also helps with things
like mutex debugging.

As Coly pointed out in a previous review, bcache_exit() may not be
able to handle all the references properly if userspace registers
cache and backing devices right before bch_debug_init runs and
bch_debug_init fails later. So do not expose the userspace interface
until everything is ready, to avoid that issue.

Signed-off-by: Liang Chen &lt;liangchen.linux@gmail.com&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Reviewed-by: Coly Li &lt;colyli@suse.de&gt;
Reviewed-by: Eric Wheeler &lt;bcache@linux.ewheeler.net&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>md-cluster: free md_cluster_info if node leave cluster</title>
<updated>2017-12-20T09:07:18+00:00</updated>
<author>
<name>Guoqing Jiang</name>
<email>gqjiang@suse.com</email>
</author>
<published>2017-02-24T03:15:12+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=9bcd15bdfb6151e0418fa9321d4dce2b2eb0cb16'/>
<id>9bcd15bdfb6151e0418fa9321d4dce2b2eb0cb16</id>
<content type='text'>
[ Upstream commit 9c8043f337f14d1743006dfc59c03e80a42e3884 ]

To avoid a memory leak, we need to free the cinfo which
is allocated when a node joins the cluster.

Reviewed-by: NeilBrown &lt;neilb@suse.com&gt;
Signed-off-by: Guoqing Jiang &lt;gqjiang@suse.com&gt;
Signed-off-by: Shaohua Li &lt;shli@fb.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>md: free unused memory after bitmap resize</title>
<updated>2017-12-16T15:25:47+00:00</updated>
<author>
<name>Zdenek Kabelac</name>
<email>zkabelac@redhat.com</email>
</author>
<published>2017-11-08T12:44:56+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=b7d3f2b5dca97de98dddbd4992dbe49d5a7723fa'/>
<id>b7d3f2b5dca97de98dddbd4992dbe49d5a7723fa</id>
<content type='text'>
[ Upstream commit 0868b99c214a3d55486c700de7c3f770b7243e7c ]

When the bitmap is resized, the old allocated chunks are not released
once the resized bitmap starts to use the new space.

This fixes in particular kmemleak reports like this one:

unreferenced object 0xffff8f4311e9c000 (size 4096):
  comm "lvm", pid 19333, jiffies 4295263268 (age 528.265s)
  hex dump (first 32 bytes):
    02 80 02 80 02 80 02 80 02 80 02 80 02 80 02 80  ................
    02 80 02 80 02 80 02 80 02 80 02 80 02 80 02 80  ................
  backtrace:
    [&lt;ffffffffa69471ca&gt;] kmemleak_alloc+0x4a/0xa0
    [&lt;ffffffffa628c10e&gt;] kmem_cache_alloc_trace+0x14e/0x2e0
    [&lt;ffffffffa676cfec&gt;] bitmap_checkpage+0x7c/0x110
    [&lt;ffffffffa676d0c5&gt;] bitmap_get_counter+0x45/0xd0
    [&lt;ffffffffa676d6b3&gt;] bitmap_set_memory_bits+0x43/0xe0
    [&lt;ffffffffa676e41c&gt;] bitmap_init_from_disk+0x23c/0x530
    [&lt;ffffffffa676f1ae&gt;] bitmap_load+0xbe/0x160
    [&lt;ffffffffc04c47d3&gt;] raid_preresume+0x203/0x2f0 [dm_raid]
    [&lt;ffffffffa677762f&gt;] dm_table_resume_targets+0x4f/0xe0
    [&lt;ffffffffa6774b52&gt;] dm_resume+0x122/0x140
    [&lt;ffffffffa6779b9f&gt;] dev_suspend+0x18f/0x290
    [&lt;ffffffffa677a3a7&gt;] ctl_ioctl+0x287/0x560
    [&lt;ffffffffa677a693&gt;] dm_ctl_ioctl+0x13/0x20
    [&lt;ffffffffa62d6b46&gt;] do_vfs_ioctl+0xa6/0x750
    [&lt;ffffffffa62d7269&gt;] SyS_ioctl+0x79/0x90
    [&lt;ffffffffa6956d41&gt;] entry_SYSCALL_64_fastpath+0x1f/0xc2

Signed-off-by: Zdenek Kabelac &lt;zkabelac@redhat.com&gt;
Signed-off-by: Shaohua Li &lt;shli@fb.com&gt;
Signed-off-by: Sasha Levin &lt;alexander.levin@verizon.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>bcache: recover data from backing when data is clean</title>
<updated>2017-12-09T21:01:46+00:00</updated>
<author>
<name>Rui Hua</name>
<email>huarui.dev@gmail.com</email>
</author>
<published>2017-11-24T23:14:26+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=9db9b5f2b1b691c78b239a71a4b6a8d950323a78'/>
<id>9db9b5f2b1b691c78b239a71a4b6a8d950323a78</id>
<content type='text'>
commit e393aa2446150536929140739f09c6ecbcbea7f0 upstream.

When we send a read request and hit clean data in the cache device, there
is a situation called a cache read race in bcache (see the comment at the
tail of cache_lookup(); the following explanation is copied from there):
The bucket we're reading from might be reused while our bio is in flight,
and we could then end up reading the wrong data. We guard against this
by checking (in bch_cache_read_endio()) if the pointer is stale again;
if so, we treat it as an error (s-&gt;iop.error = -EINTR) and reread from
the backing device (but we don't pass that error up anywhere).

It should be noted that a cache read race happens under normal
circumstances, not only when the SSD fails; it is counted
and shown in /sys/fs/bcache/XXX/internal/cache_read_races.

Without this patch, when we use writeback mode, we will never reread from
the backing device when a cache read race happens, until the whole cache
device is clean, because the condition
(s-&gt;recoverable &amp;&amp; (dc &amp;&amp; !atomic_read(&amp;dc-&gt;has_dirty))) is false in
cached_dev_read_error(). In this situation, s-&gt;iop.error (= -EINTR)
will be passed up, and in the end the user will receive -EINTR when the
bio ends, which is not suitable and looks weird to the upper application.

In this patch, we use s-&gt;read_dirty_data to judge whether the read
request hit dirty data in the cache device; it is safe to reread data
from the backing device when the read request hit clean data. This not
only handles the cache read race, but also recovers data when a read
request from the cache device fails.

[edited by mlyle to fix up whitespace, commit log title, comment
spelling]

Fixes: d59b23795933 ("bcache: only permit to recovery read error when cache device is clean")
Signed-off-by: Hua Rui &lt;huarui.dev@gmail.com&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Reviewed-by: Coly Li &lt;colyli@suse.de&gt;
Signed-off-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bcache: only permit to recovery read error when cache device is clean</title>
<updated>2017-12-09T21:01:46+00:00</updated>
<author>
<name>Coly Li</name>
<email>colyli@suse.de</email>
</author>
<published>2017-10-30T21:46:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=322e659a03dcec9e19a2bbd116ee8f1c978a7030'/>
<id>322e659a03dcec9e19a2bbd116ee8f1c978a7030</id>
<content type='text'>
commit d59b23795933678c9638fd20c942d2b4f3cd6185 upstream.

When bcache does read I/Os, for example in writeback or writethrough
mode, if a read request on the cache device fails, bcache will try to
recover the request by reading from the cached device. If the data on
the cached device is not synced with the cache device, then the
requester will get stale data.

For a critical storage system like a database, providing stale data from
recovery may result in application-level data corruption, which is
unacceptable.

With this patch, for a failed read request in writeback or writethrough
mode, recovery of a recoverable read request only happens when the cache
device is clean. That is to say, all data on the cached device is up to
date.

For other cache modes in bcache, read requests will never hit
cached_dev_read_error(), so they don't need this patch.

Please note, because the cache mode can be switched arbitrarily at run
time, a writethrough mode might have been switched from a writeback
mode. Therefore checking dc-&gt;has_data in writethrough mode still makes
sense.

Changelog:
v4: Fix parens error pointed out by Michael Lyle.
v3: Per response from Kent Overstreet, recovering stale data is a bug
    to fix, and an option to permit it is unnecessary. So in this
    version the sysfs file is removed.
v2: rename sysfs entry from allow_stale_data_on_failure  to
    allow_stale_data_on_failure, and fix the confusing commit log.
v1: initial patch posted.

[small change to patch comment spelling by mlyle]

Signed-off-by: Coly Li &lt;colyli@suse.de&gt;
Signed-off-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Reported-by: Arne Wolf &lt;awolf@lenovo.com&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Cc: Kent Overstreet &lt;kent.overstreet@gmail.com&gt;
Cc: Nix &lt;nix@esperi.org.uk&gt;
Cc: Kai Krakow &lt;hurikhan77@gmail.com&gt;
Cc: Eric Wheeler &lt;bcache@lists.ewheeler.net&gt;
Cc: Junhui Tang &lt;tang.junhui@zte.com.cn&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bcache: Fix building error on MIPS</title>
<updated>2017-12-05T10:24:34+00:00</updated>
<author>
<name>Huacai Chen</name>
<email>chenhc@lemote.com</email>
</author>
<published>2017-11-24T23:14:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8588eb0ce6a639be06110d7bbc8f59d8468ed9b7'/>
<id>8588eb0ce6a639be06110d7bbc8f59d8468ed9b7</id>
<content type='text'>
commit cf33c1ee5254c6a430bc1538232b49c3ea13e613 upstream.

This patch tries to fix the build error on MIPS. The reason is that MIPS
has already defined the PTR macro, which conflicts with the PTR macro
in include/uapi/linux/bcache.h.

[fixed by mlyle: corrected a line-length issue]

Signed-off-by: Huacai Chen &lt;chenhc@lemote.com&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Signed-off-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bcache: check ca-&gt;alloc_thread initialized before wake up it</title>
<updated>2017-11-30T08:39:03+00:00</updated>
<author>
<name>Coly Li</name>
<email>colyli@suse.de</email>
</author>
<published>2017-10-13T23:35:29+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=770e10817e0980b61fa04f99432b1482242d65b4'/>
<id>770e10817e0980b61fa04f99432b1482242d65b4</id>
<content type='text'>
commit 91af8300d9c1d7c6b6a2fd754109e08d4798b8d8 upstream.

In the bcache code, sysfs entries are created before all resources get
allocated, e.g. the allocation thread of a cache set.

There is a possibility of a NULL pointer dereference if a resource is
accessed before it is initialized. Indeed, Jorg Bornschein caught one on
the cache set allocation thread and got a kernel oops.

The reason for this bug is that when bch_bucket_alloc() is called during
cache set registration and attaching, ca-&gt;alloc_thread is not yet
properly allocated and initialized, so calling wake_up_process() on
ca-&gt;alloc_thread triggers a NULL pointer dereference. A simple and fast
fix is, before waking up ca-&gt;alloc_thread, to check whether it is
allocated, and only wake it up when it is not NULL.

Signed-off-by: Coly Li &lt;colyli@suse.de&gt;
Reported-by: Jorg Bornschein &lt;jb@capsec.org&gt;
Cc: Kent Overstreet &lt;kent.overstreet@gmail.com&gt;
Reviewed-by: Michael Lyle &lt;mlyle@lyle.org&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>dm: fix race between dm_get_from_kobject() and __dm_destroy()</title>
<updated>2017-11-30T08:39:02+00:00</updated>
<author>
<name>Hou Tao</name>
<email>houtao1@huawei.com</email>
</author>
<published>2017-11-01T07:42:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1cd9686e0a3b5b5a09a2025c21cd4d92e8db0e1f'/>
<id>1cd9686e0a3b5b5a09a2025c21cd4d92e8db0e1f</id>
<content type='text'>
commit b9a41d21dceadf8104812626ef85dc56ee8a60ed upstream.

The following BUG_ON was hit when testing repeated creation and removal
of DM devices:

    kernel BUG at drivers/md/dm.c:2919!
    CPU: 7 PID: 750 Comm: systemd-udevd Not tainted 4.1.44
    Call Trace:
     [&lt;ffffffff81649e8b&gt;] dm_get_from_kobject+0x34/0x3a
     [&lt;ffffffff81650ef1&gt;] dm_attr_show+0x2b/0x5e
     [&lt;ffffffff817b46d1&gt;] ? mutex_lock+0x26/0x44
     [&lt;ffffffff811df7f5&gt;] sysfs_kf_seq_show+0x83/0xcf
     [&lt;ffffffff811de257&gt;] kernfs_seq_show+0x23/0x25
     [&lt;ffffffff81199118&gt;] seq_read+0x16f/0x325
     [&lt;ffffffff811de994&gt;] kernfs_fop_read+0x3a/0x13f
     [&lt;ffffffff8117b625&gt;] __vfs_read+0x26/0x9d
     [&lt;ffffffff8130eb59&gt;] ? security_file_permission+0x3c/0x44
     [&lt;ffffffff8117bdb8&gt;] ? rw_verify_area+0x83/0xd9
     [&lt;ffffffff8117be9d&gt;] vfs_read+0x8f/0xcf
     [&lt;ffffffff81193e34&gt;] ? __fdget_pos+0x12/0x41
     [&lt;ffffffff8117c686&gt;] SyS_read+0x4b/0x76
     [&lt;ffffffff817b606e&gt;] system_call_fastpath+0x12/0x71

The bug can be easily triggered, if an extra delay (e.g. 10ms) is added
between the test of DMF_FREEING &amp; DMF_DELETING and dm_get() in
dm_get_from_kobject().

To fix it, we need to ensure the test of DMF_FREEING &amp; DMF_DELETING and
dm_get() are done in an atomic way, so _minor_lock is used.

The other callers of dm_get() have also been checked to be OK: some
callers invoke dm_get() under _minor_lock, some invoke it under
_hash_lock, and dm_start_request() invokes it after increasing
md-&gt;open_count.

Signed-off-by: Hou Tao &lt;houtao1@huawei.com&gt;
Signed-off-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
