<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux-toradex.git/Documentation/filesystems/caching, branch tegra</title>
<subtitle>Linux kernel for Apalis and Colibri modules</subtitle>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/'/>
<entry>
<title>FS-Cache: Add a helper to bulk uncache pages on an inode</title>
<updated>2011-07-07T20:21:56+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2011-07-07T11:19:48+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=c902ce1bfb40d8b049bd2319b388b4b68b04bc27'/>
<id>c902ce1bfb40d8b049bd2319b388b4b68b04bc27</id>
<content type='text'>
Add an FS-Cache helper to bulk uncache pages on an inode.  This only
works in the case where the pages in the cache correspond 1:1 with the
pages attached to the inode's page cache.

This is required for CIFS and NFS: when disabling the inode cookie, we
were returning the cookie and setting cifsi-&gt;fscache to NULL, but we
failed to invalidate any previously mapped pages.  This resulted in
"Bad page state" errors and manifested as other kinds of errors when
running fsstress.  Fix it by uncaching mapped pages when we disable the
inode cookie.
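
As an illustrative sketch of how a netfs might use such a bulk-uncache
helper when it kills the inode cookie (the helper and field names are
assumed from this description, not quoted from the diff):

	/* Uncache any mapped pages before dropping the inode cookie.
	 * Assumes the cache pages map 1:1 onto the inode's pagecache. */
	if (cifsi-&gt;fscache) {
		fscache_uncache_all_inode_pages(cifsi-&gt;fscache, inode);
		fscache_relinquish_cookie(cifsi-&gt;fscache, 0);
		cifsi-&gt;fscache = NULL;
	}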

This patch should fix the following oops and "Bad page state" errors
seen during fsstress testing.

  ------------[ cut here ]------------
  kernel BUG at fs/cachefiles/namei.c:201!
  invalid opcode: 0000 [#1] SMP
  Pid: 5, comm: kworker/u:0 Not tainted 2.6.38.7-30.fc15.x86_64 #1 Bochs Bochs
  RIP: 0010: cachefiles_walk_to_object+0x436/0x745 [cachefiles]
  RSP: 0018:ffff88002ce6dd00  EFLAGS: 00010282
  RAX: ffff88002ef165f0 RBX: ffff88001811f500 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000100 RDI: 0000000000000282
  RBP: ffff88002ce6dda0 R08: 0000000000000100 R09: ffffffff81b3a300
  R10: 0000ffff00066c0a R11: 0000000000000003 R12: ffff88002ae54840
  R13: ffff88002ae54840 R14: ffff880029c29c00 R15: ffff88001811f4b0
  FS:  00007f394dd32720(0000) GS:ffff88002ef00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
  CR2: 00007fffcb62ddf8 CR3: 000000001825f000 CR4: 00000000000006e0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
  Process kworker/u:0 (pid: 5, threadinfo ffff88002ce6c000, task ffff88002ce55cc0)
  Stack:
   0000000000000246 ffff88002ce55cc0 ffff88002ce6dd58 ffff88001815dc00
   ffff8800185246c0 ffff88001811f618 ffff880029c29d18 ffff88001811f380
   ffff88002ce6dd50 ffffffff814757e4 ffff88002ce6dda0 ffffffff8106ac56
  Call Trace:
   cachefiles_lookup_object+0x78/0xd4 [cachefiles]
   fscache_lookup_object+0x131/0x16d [fscache]
   fscache_object_work_func+0x1bc/0x669 [fscache]
   process_one_work+0x186/0x298
   worker_thread+0xda/0x15d
   kthread+0x84/0x8c
   kernel_thread_helper+0x4/0x10
  RIP  cachefiles_walk_to_object+0x436/0x745 [cachefiles]
  ---[ end trace 1d481c9af1804caa ]---

I tested the uncaching by the following means:

 (1) Create a big file on my NFS server (104857600 bytes).

 (2) Read the file into the cache with md5sum on the NFS client.  Look in
     /proc/fs/fscache/stats:

	Pages  : mrk=25601 unc=0

 (3) Open the file for read/write ("bash 5&lt;&gt;/warthog/bigfile").  Look in proc
     again:

	Pages  : mrk=25601 unc=25601

Reported-by: Jeff Layton &lt;jlayton@redhat.com&gt;
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Reviewed-and-Tested-by: Suresh Jayaraman &lt;sjayaraman@suse.de&gt;
cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Add an FS-Cache helper to bulk uncache pages on an inode.  This only
works in the case where the pages in the cache correspond 1:1 with the
pages attached to the inode's page cache.

This is required for CIFS and NFS: when disabling the inode cookie, we
were returning the cookie and setting cifsi-&gt;fscache to NULL, but we
failed to invalidate any previously mapped pages.  This resulted in
"Bad page state" errors and manifested as other kinds of errors when
running fsstress.  Fix it by uncaching mapped pages when we disable the
inode cookie.
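
As an illustrative sketch of how a netfs might use such a bulk-uncache
helper when it kills the inode cookie (the helper and field names are
assumed from this description, not quoted from the diff):

	/* Uncache any mapped pages before dropping the inode cookie.
	 * Assumes the cache pages map 1:1 onto the inode's pagecache. */
	if (cifsi-&gt;fscache) {
		fscache_uncache_all_inode_pages(cifsi-&gt;fscache, inode);
		fscache_relinquish_cookie(cifsi-&gt;fscache, 0);
		cifsi-&gt;fscache = NULL;
	}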

This patch should fix the following oops and "Bad page state" errors
seen during fsstress testing.

  ------------[ cut here ]------------
  kernel BUG at fs/cachefiles/namei.c:201!
  invalid opcode: 0000 [#1] SMP
  Pid: 5, comm: kworker/u:0 Not tainted 2.6.38.7-30.fc15.x86_64 #1 Bochs Bochs
  RIP: 0010: cachefiles_walk_to_object+0x436/0x745 [cachefiles]
  RSP: 0018:ffff88002ce6dd00  EFLAGS: 00010282
  RAX: ffff88002ef165f0 RBX: ffff88001811f500 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000100 RDI: 0000000000000282
  RBP: ffff88002ce6dda0 R08: 0000000000000100 R09: ffffffff81b3a300
  R10: 0000ffff00066c0a R11: 0000000000000003 R12: ffff88002ae54840
  R13: ffff88002ae54840 R14: ffff880029c29c00 R15: ffff88001811f4b0
  FS:  00007f394dd32720(0000) GS:ffff88002ef00000(0000) knlGS:0000000000000000
  CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
  CR2: 00007fffcb62ddf8 CR3: 000000001825f000 CR4: 00000000000006e0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
  Process kworker/u:0 (pid: 5, threadinfo ffff88002ce6c000, task ffff88002ce55cc0)
  Stack:
   0000000000000246 ffff88002ce55cc0 ffff88002ce6dd58 ffff88001815dc00
   ffff8800185246c0 ffff88001811f618 ffff880029c29d18 ffff88001811f380
   ffff88002ce6dd50 ffffffff814757e4 ffff88002ce6dda0 ffffffff8106ac56
  Call Trace:
   cachefiles_lookup_object+0x78/0xd4 [cachefiles]
   fscache_lookup_object+0x131/0x16d [fscache]
   fscache_object_work_func+0x1bc/0x669 [fscache]
   process_one_work+0x186/0x298
   worker_thread+0xda/0x15d
   kthread+0x84/0x8c
   kernel_thread_helper+0x4/0x10
  RIP  cachefiles_walk_to_object+0x436/0x745 [cachefiles]
  ---[ end trace 1d481c9af1804caa ]---

I tested the uncaching by the following means:

 (1) Create a big file on my NFS server (104857600 bytes).

 (2) Read the file into the cache with md5sum on the NFS client.  Look in
     /proc/fs/fscache/stats:

	Pages  : mrk=25601 unc=0

 (3) Open the file for read/write ("bash 5&lt;&gt;/warthog/bigfile").  Look in proc
     again:

	Pages  : mrk=25601 unc=25601

Reported-by: Jeff Layton &lt;jlayton@redhat.com&gt;
Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
Reviewed-and-Tested-by: Suresh Jayaraman &lt;sjayaraman@suse.de&gt;
cc: stable@kernel.org
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>Fix common misspellings</title>
<updated>2011-03-31T14:26:23+00:00</updated>
<author>
<name>Lucas De Marchi</name>
<email>lucas.demarchi@profusion.mobi</email>
</author>
<published>2011-03-31T01:57:33+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=25985edcedea6396277003854657b5f3cb31a628'/>
<id>25985edcedea6396277003854657b5f3cb31a628</id>
<content type='text'>
Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi &lt;lucas.demarchi@profusion.mobi&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Fixes generated by 'codespell' and manually reviewed.

Signed-off-by: Lucas De Marchi &lt;lucas.demarchi@profusion.mobi&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>fscache: convert object to use workqueue instead of slow-work</title>
<updated>2010-07-22T20:58:34+00:00</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2010-07-20T20:09:01+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=8b8edefa2fffbff97f9eec8b70e78ae23abad1a0'/>
<id>8b8edefa2fffbff97f9eec8b70e78ae23abad1a0</id>
<content type='text'>
Make fscache object state transition callbacks use a workqueue instead
of slow-work.  A new dedicated unbound workqueue, fscache_object_wq, is
created.  The get/put callbacks are renamed and modified to take
@object and are called directly from the enqueue wrapper and the work
function.  While at it, convert all open-coded instances of get/put to
use fscache_get/put_object().

* Unbound workqueue is used.

* work_busy() output is printed instead of slow-work flags in object
  debugging outputs.  They mean basically the same thing bit-for-bit.

* sysctl fscache.object_max_active added to control concurrency.  The
  default value is nr_cpus clamped between 4 and
  WQ_UNBOUND_MAX_ACTIVE.

* slow_work_sleep_till_thread_needed() is replaced with fscache
  private implementation fscache_object_sleep_till_congested() which
  waits on fscache_object_wq congestion.

* debugfs support is dropped for now.  Tracing API based debug
  facility is planned to be added.
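
The setup described above might be sketched as follows (illustrative
code; alloc_workqueue() and clamp_val() are standard kernel APIs, the
rest follows this message rather than the literal diff):

	/* Dedicated unbound workqueue for object state machine work,
	 * with concurrency clamped as described above. */
	fscache_object_max_active =
		clamp_val(nr_cpu_ids, 4, WQ_UNBOUND_MAX_ACTIVE);
	fscache_object_wq =
		alloc_workqueue("fscache_object", WQ_UNBOUND,
				fscache_object_max_active);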

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Make fscache object state transition callbacks use a workqueue instead
of slow-work.  A new dedicated unbound workqueue, fscache_object_wq, is
created.  The get/put callbacks are renamed and modified to take
@object and are called directly from the enqueue wrapper and the work
function.  While at it, convert all open-coded instances of get/put to
use fscache_get/put_object().

* Unbound workqueue is used.

* work_busy() output is printed instead of slow-work flags in object
  debugging outputs.  They mean basically the same thing bit-for-bit.

* sysctl fscache.object_max_active added to control concurrency.  The
  default value is nr_cpus clamped between 4 and
  WQ_UNBOUND_MAX_ACTIVE.

* slow_work_sleep_till_thread_needed() is replaced with fscache
  private implementation fscache_object_sleep_till_congested() which
  waits on fscache_object_wq congestion.

* debugfs support is dropped for now.  Tracing API based debug
  facility is planned to be added.
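
The setup described above might be sketched as follows (illustrative
code; alloc_workqueue() and clamp_val() are standard kernel APIs, the
rest follows this message rather than the literal diff):

	/* Dedicated unbound workqueue for object state machine work,
	 * with concurrency clamped as described above. */
	fscache_object_max_active =
		clamp_val(nr_cpu_ids, 4, WQ_UNBOUND_MAX_ACTIVE);
	fscache_object_wq =
		alloc_workqueue("fscache_object", WQ_UNBOUND,
				fscache_object_max_active);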

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: David Howells &lt;dhowells@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>CacheFiles: Catch an overly long wait for an old active object</title>
<updated>2009-11-19T18:12:05+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:12:05+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=fee096deb4f33897937b974cb2c5168bab7935be'/>
<id>fee096deb4f33897937b974cb2c5168bab7935be</id>
<content type='text'>
Catch an overly long wait for an old, dying active object when we want to
replace it with a new one.  The probability is that all the slow-work threads
are hogged, and the delete can't get a look in.

What we do instead is:

 (1) if there's nothing in the slow work queue, we sleep until either the dying
     object has finished dying or there is something in the slow work queue
     behind which we can queue our object.

 (2) if there is something in the slow work queue, we return ETIMEDOUT to
     fscache_lookup_object(), which then puts us back on the slow work queue,
     presumably behind the deletion that we're blocked by.  We are then
     deferred for a while until we work our way back through the queue -
     without blocking a slow-work thread unnecessarily.
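
In outline, the wait becomes a two-way branch like this (illustrative
pseudocode; the helper names are made up for clarity):

	if (slow_work_queue_empty())
		/* (1) Nothing to queue behind: sleep until the dying
		 * object is gone or queue activity appears. */
		wait_for_old_object_or_queue_activity();
	else
		/* (2) Requeue ourselves behind the pending deletion. */
		return -ETIMEDOUT;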

A backtrace similar to the following may appear in the log without this patch:

	INFO: task kslowd004:5711 blocked for more than 120 seconds.
	"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
	kslowd004     D 0000000000000000     0  5711      2 0x00000080
	 ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000
	 ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8
	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8
	Call Trace:
	 [&lt;ffffffff81058e21&gt;] ? trace_hardirqs_on+0xd/0xf
	 [&lt;ffffffffa011c4d8&gt;] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
	 [&lt;ffffffffa011c4e1&gt;] cachefiles_wait_bit+0x9/0xd [cachefiles]
	 [&lt;ffffffff81353153&gt;] __wait_on_bit+0x43/0x76
	 [&lt;ffffffff8111ae39&gt;] ? ext3_xattr_get+0x1ec/0x270
	 [&lt;ffffffff813531ef&gt;] out_of_line_wait_on_bit+0x69/0x74
	 [&lt;ffffffffa011c4d8&gt;] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
	 [&lt;ffffffff8104c125&gt;] ? wake_bit_function+0x0/0x2e
	 [&lt;ffffffffa011bc79&gt;] cachefiles_mark_object_active+0x203/0x23b [cachefiles]
	 [&lt;ffffffffa011c209&gt;] cachefiles_walk_to_object+0x558/0x827 [cachefiles]
	 [&lt;ffffffffa011a429&gt;] cachefiles_lookup_object+0xac/0x12a [cachefiles]
	 [&lt;ffffffffa00aa1e9&gt;] fscache_lookup_object+0x1c7/0x214 [fscache]
	 [&lt;ffffffffa00aafc5&gt;] fscache_object_state_machine+0xa5/0x52d [fscache]
	 [&lt;ffffffffa00ab4ac&gt;] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
	 [&lt;ffffffff81082093&gt;] slow_work_execute+0x18f/0x2d1
	 [&lt;ffffffff8108239a&gt;] slow_work_thread+0x1c5/0x308
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810821d5&gt;] ? slow_work_thread+0x0/0x308
	 [&lt;ffffffff8104be91&gt;] kthread+0x7a/0x82
	 [&lt;ffffffff8100beda&gt;] child_rip+0xa/0x20
	 [&lt;ffffffff8100b87c&gt;] ? restore_args+0x0/0x30
	 [&lt;ffffffff8104be17&gt;] ? kthread+0x0/0x82
	 [&lt;ffffffff8100bed0&gt;] ? child_rip+0x0/0x20
	1 lock held by kslowd004/5711:
	 #0:  (&amp;sb-&gt;s_type-&gt;i_mutex_key#7/1){+.+.+.}, at: [&lt;ffffffffa011be64&gt;] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Catch an overly long wait for an old, dying active object when we want to
replace it with a new one.  The probability is that all the slow-work threads
are hogged, and the delete can't get a look in.

What we do instead is:

 (1) if there's nothing in the slow work queue, we sleep until either the dying
     object has finished dying or there is something in the slow work queue
     behind which we can queue our object.

 (2) if there is something in the slow work queue, we return ETIMEDOUT to
     fscache_lookup_object(), which then puts us back on the slow work queue,
     presumably behind the deletion that we're blocked by.  We are then
     deferred for a while until we work our way back through the queue -
     without blocking a slow-work thread unnecessarily.
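
In outline, the wait becomes a two-way branch like this (illustrative
pseudocode; the helper names are made up for clarity):

	if (slow_work_queue_empty())
		/* (1) Nothing to queue behind: sleep until the dying
		 * object is gone or queue activity appears. */
		wait_for_old_object_or_queue_activity();
	else
		/* (2) Requeue ourselves behind the pending deletion. */
		return -ETIMEDOUT;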

A backtrace similar to the following may appear in the log without this patch:

	INFO: task kslowd004:5711 blocked for more than 120 seconds.
	"echo 0 &gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.
	kslowd004     D 0000000000000000     0  5711      2 0x00000080
	 ffff88000340bb80 0000000000000046 ffff88002550d000 0000000000000000
	 ffff88002550d000 0000000000000007 ffff88000340bfd8 ffff88002550d2a8
	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff88002550d2a8
	Call Trace:
	 [&lt;ffffffff81058e21&gt;] ? trace_hardirqs_on+0xd/0xf
	 [&lt;ffffffffa011c4d8&gt;] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
	 [&lt;ffffffffa011c4e1&gt;] cachefiles_wait_bit+0x9/0xd [cachefiles]
	 [&lt;ffffffff81353153&gt;] __wait_on_bit+0x43/0x76
	 [&lt;ffffffff8111ae39&gt;] ? ext3_xattr_get+0x1ec/0x270
	 [&lt;ffffffff813531ef&gt;] out_of_line_wait_on_bit+0x69/0x74
	 [&lt;ffffffffa011c4d8&gt;] ? cachefiles_wait_bit+0x0/0xd [cachefiles]
	 [&lt;ffffffff8104c125&gt;] ? wake_bit_function+0x0/0x2e
	 [&lt;ffffffffa011bc79&gt;] cachefiles_mark_object_active+0x203/0x23b [cachefiles]
	 [&lt;ffffffffa011c209&gt;] cachefiles_walk_to_object+0x558/0x827 [cachefiles]
	 [&lt;ffffffffa011a429&gt;] cachefiles_lookup_object+0xac/0x12a [cachefiles]
	 [&lt;ffffffffa00aa1e9&gt;] fscache_lookup_object+0x1c7/0x214 [fscache]
	 [&lt;ffffffffa00aafc5&gt;] fscache_object_state_machine+0xa5/0x52d [fscache]
	 [&lt;ffffffffa00ab4ac&gt;] fscache_object_slow_work_execute+0x5f/0xa0 [fscache]
	 [&lt;ffffffff81082093&gt;] slow_work_execute+0x18f/0x2d1
	 [&lt;ffffffff8108239a&gt;] slow_work_thread+0x1c5/0x308
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810821d5&gt;] ? slow_work_thread+0x0/0x308
	 [&lt;ffffffff8104be91&gt;] kthread+0x7a/0x82
	 [&lt;ffffffff8100beda&gt;] child_rip+0xa/0x20
	 [&lt;ffffffff8100b87c&gt;] ? restore_args+0x0/0x30
	 [&lt;ffffffff8104be17&gt;] ? kthread+0x0/0x82
	 [&lt;ffffffff8100bed0&gt;] ? child_rip+0x0/0x20
	1 lock held by kslowd004/5711:
	 #0:  (&amp;sb-&gt;s_type-&gt;i_mutex_key#7/1){+.+.+.}, at: [&lt;ffffffffa011be64&gt;] cachefiles_walk_to_object+0x1b3/0x827 [cachefiles]

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>FS-Cache: Start processing an object's operations on that object's death</title>
<updated>2009-11-19T18:11:45+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:45+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=60d543ca724be155c2b6166e36a00c80b21bd810'/>
<id>60d543ca724be155c2b6166e36a00c80b21bd810</id>
<content type='text'>
Start processing an object's operations when that object moves into the
DYING state, as the object cannot be destroyed until all its outstanding
operations have completed.

Furthermore, make sure that read and allocation operations handle being woken
up on a dead object.  Such events are recorded in the Allocs.abt and
Retrvls.abt statistics as viewable through /proc/fs/fscache/stats.

The code for waiting for object activation for the read and allocation
operations is also extracted into its own function as it is much the same in
all cases, differing only in the stats incremented.
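
The extracted wait might look like this in outline (a sketch; the exact
signature and stat plumbing are assumptions, with the per-caller stats
passed in as described above):

	static int fscache_wait_for_retrieval_activation(
		struct fscache_object *object,
		struct fscache_retrieval *op,
		atomic_t *stat_op_waits,
		atomic_t *stat_object_dead)
	{
		/* Sleep until the op is activated; if the object dies
		 * first, bump *stat_object_dead and abort. */
		...
	}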

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Start processing an object's operations when that object moves into the
DYING state, as the object cannot be destroyed until all its outstanding
operations have completed.

Furthermore, make sure that read and allocation operations handle being woken
up on a dead object.  Such events are recorded in the Allocs.abt and
Retrvls.abt statistics as viewable through /proc/fs/fscache/stats.

The code for waiting for object activation for the read and allocation
operations is also extracted into its own function as it is much the same in
all cases, differing only in the stats incremented.
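
The extracted wait might look like this in outline (a sketch; the exact
signature and stat plumbing are assumptions, with the per-caller stats
passed in as described above):

	static int fscache_wait_for_retrieval_activation(
		struct fscache_object *object,
		struct fscache_retrieval *op,
		atomic_t *stat_op_waits,
		atomic_t *stat_object_dead)
	{
		/* Sleep until the op is activated; if the object dies
		 * first, bump *stat_object_dead and abort. */
		...
	}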

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>FS-Cache: Handle pages pending storage that get evicted under OOM conditions</title>
<updated>2009-11-19T18:11:35+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:35+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=201a15428bd54f83eccec8b7c64a04b8f9431204'/>
<id>201a15428bd54f83eccec8b7c64a04b8f9431204</id>
<content type='text'>
Handle netfs pages that the vmscan algorithm wants to evict from the
pagecache under OOM conditions, but that are still waiting to be written
to the cache.  Under these conditions, vmscan calls the netfs's
releasepage() function, asking whether a page can be discarded.

The problem is typified by the following trace of a stuck process:

	kslowd005     D 0000000000000000     0  4253      2 0x00000080
	 ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007
	 0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8
	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8
	Call Trace:
	 [&lt;ffffffffa00782d8&gt;] __fscache_wait_on_page_write+0x8b/0xa7 [fscache]
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffffa0078240&gt;] ? __fscache_check_page_write+0x63/0x70 [fscache]
	 [&lt;ffffffffa00b671d&gt;] nfs_fscache_release_page+0x4e/0xc4 [nfs]
	 [&lt;ffffffffa00927f0&gt;] nfs_release_page+0x3c/0x41 [nfs]
	 [&lt;ffffffff810885d3&gt;] try_to_release_page+0x32/0x3b
	 [&lt;ffffffff81093203&gt;] shrink_page_list+0x316/0x4ac
	 [&lt;ffffffff8109372b&gt;] shrink_inactive_list+0x392/0x67c
	 [&lt;ffffffff813532fa&gt;] ? __mutex_unlock_slowpath+0x100/0x10b
	 [&lt;ffffffff81058df0&gt;] ? trace_hardirqs_on_caller+0x10c/0x130
	 [&lt;ffffffff8135330e&gt;] ? mutex_unlock+0x9/0xb
	 [&lt;ffffffff81093aa2&gt;] shrink_list+0x8d/0x8f
	 [&lt;ffffffff81093d1c&gt;] shrink_zone+0x278/0x33c
	 [&lt;ffffffff81052d6c&gt;] ? ktime_get_ts+0xad/0xba
	 [&lt;ffffffff81094b13&gt;] try_to_free_pages+0x22e/0x392
	 [&lt;ffffffff81091e24&gt;] ? isolate_pages_global+0x0/0x212
	 [&lt;ffffffff8108e743&gt;] __alloc_pages_nodemask+0x3dc/0x5cf
	 [&lt;ffffffff81089529&gt;] grab_cache_page_write_begin+0x65/0xaa
	 [&lt;ffffffff8110f8c0&gt;] ext3_write_begin+0x78/0x1eb
	 [&lt;ffffffff81089ec5&gt;] generic_file_buffered_write+0x109/0x28c
	 [&lt;ffffffff8103cb69&gt;] ? current_fs_time+0x22/0x29
	 [&lt;ffffffff8108a509&gt;] __generic_file_aio_write+0x350/0x385
	 [&lt;ffffffff8108a588&gt;] ? generic_file_aio_write+0x4a/0xae
	 [&lt;ffffffff8108a59e&gt;] generic_file_aio_write+0x60/0xae
	 [&lt;ffffffff810b2e82&gt;] do_sync_write+0xe3/0x120
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810b18e1&gt;] ? __dentry_open+0x1a5/0x2b8
	 [&lt;ffffffff810b1a76&gt;] ? dentry_open+0x82/0x89
	 [&lt;ffffffffa00e693c&gt;] cachefiles_write_page+0x298/0x335 [cachefiles]
	 [&lt;ffffffffa0077147&gt;] fscache_write_op+0x178/0x2c2 [fscache]
	 [&lt;ffffffffa0075656&gt;] fscache_op_execute+0x7a/0xd1 [fscache]
	 [&lt;ffffffff81082093&gt;] slow_work_execute+0x18f/0x2d1
	 [&lt;ffffffff8108239a&gt;] slow_work_thread+0x1c5/0x308
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810821d5&gt;] ? slow_work_thread+0x0/0x308
	 [&lt;ffffffff8104be91&gt;] kthread+0x7a/0x82
	 [&lt;ffffffff8100beda&gt;] child_rip+0xa/0x20
	 [&lt;ffffffff8100b87c&gt;] ? restore_args+0x0/0x30
	 [&lt;ffffffff8102ef83&gt;] ? tg_shares_up+0x171/0x227
	 [&lt;ffffffff8104be17&gt;] ? kthread+0x0/0x82
	 [&lt;ffffffff8100bed0&gt;] ? child_rip+0x0/0x20

In the above backtrace, the following is happening:

 (1) A page storage operation is being executed by a slow-work thread
     (fscache_write_op()).

 (2) FS-Cache farms the operation out to the cache to perform
     (cachefiles_write_page()).

 (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's
     standard write (do_sync_write()) under KERNEL_DS directly from the netfs
     page.

 (4) However, for Ext3 to perform the write, it must allocate some memory, in
     particular, it must allocate at least one page cache page into which it
     can copy the data from the netfs page.

 (5) Under OOM conditions, the memory allocator can't immediately come up with
     a page, so it uses vmscan to find something to discard
     (try_to_free_pages()).

 (6) vmscan finds a clean netfs page it might be able to discard (possibly the
     one it's trying to write out).

 (7) The netfs is called to throw the page away (nfs_release_page()) - but it's
     called with __GFP_WAIT, so the netfs decides to wait for the store to
     complete (__fscache_wait_on_page_write()).

 (8) This blocks a slow-work processing thread - possibly against itself.

The system ends up stuck because it can't write out any netfs pages to the
cache without allocating more memory.

To avoid this, we make FS-Cache cancel some writes that aren't in the
middle of actually being performed.  This means that some data won't
make it into the cache this time.  To support this, a new FS-Cache
function, fscache_maybe_release_page(), is added to replace what the
netfs releasepage() functions used to do with respect to the cache.

The decisions fscache_maybe_release_page() makes are counted and displayed
through /proc/fs/fscache/stats on a line labelled "VmScan".  There are four
counters provided: "nos=N" - pages that weren't pending storage; "gon=N" -
pages that were pending storage when we first looked, but weren't by the time
we got the object lock; "bsy=N" - pages that we ignored as they were actively
being written when we looked; and "can=N" - pages that we cancelled the storage
of.
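
From the netfs side, the releasepage() path then reduces to something
like this (a sketch with assumed NFS helper names; the real error
handling differs):

	static int nfs_fscache_release_page(struct page *page, gfp_t gfp)
	{
		if (PageFsCache(page)) {
			struct fscache_cookie *cookie =
				nfs_i_fscache(page-&gt;mapping-&gt;host);

			/* May cancel a pending store instead of waiting,
			 * or refuse the release if the write is busy. */
			if (!fscache_maybe_release_page(cookie, page, gfp))
				return 0;
		}
		return 1;
	}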

What I'd really like to do is alter the behaviour of the cancellation
heuristics, depending on how necessary it is to expel pages.  If there are
plenty of other pages that aren't waiting to be written to the cache that
could be ejected first, then it would be nice to hold up on immediate
cancellation of cache writes - but I don't see a way of doing that.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Handle netfs pages that the vmscan algorithm wants to evict from the
pagecache under OOM conditions, but that are still waiting to be written
to the cache.  Under these conditions, vmscan calls the netfs's
releasepage() function, asking whether a page can be discarded.

The problem is typified by the following trace of a stuck process:

	kslowd005     D 0000000000000000     0  4253      2 0x00000080
	 ffff88001b14f370 0000000000000046 ffff880020d0d000 0000000000000007
	 0000000000000006 0000000000000001 ffff88001b14ffd8 ffff880020d0d2a8
	 000000000000ddf0 00000000000118c0 00000000000118c0 ffff880020d0d2a8
	Call Trace:
	 [&lt;ffffffffa00782d8&gt;] __fscache_wait_on_page_write+0x8b/0xa7 [fscache]
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffffa0078240&gt;] ? __fscache_check_page_write+0x63/0x70 [fscache]
	 [&lt;ffffffffa00b671d&gt;] nfs_fscache_release_page+0x4e/0xc4 [nfs]
	 [&lt;ffffffffa00927f0&gt;] nfs_release_page+0x3c/0x41 [nfs]
	 [&lt;ffffffff810885d3&gt;] try_to_release_page+0x32/0x3b
	 [&lt;ffffffff81093203&gt;] shrink_page_list+0x316/0x4ac
	 [&lt;ffffffff8109372b&gt;] shrink_inactive_list+0x392/0x67c
	 [&lt;ffffffff813532fa&gt;] ? __mutex_unlock_slowpath+0x100/0x10b
	 [&lt;ffffffff81058df0&gt;] ? trace_hardirqs_on_caller+0x10c/0x130
	 [&lt;ffffffff8135330e&gt;] ? mutex_unlock+0x9/0xb
	 [&lt;ffffffff81093aa2&gt;] shrink_list+0x8d/0x8f
	 [&lt;ffffffff81093d1c&gt;] shrink_zone+0x278/0x33c
	 [&lt;ffffffff81052d6c&gt;] ? ktime_get_ts+0xad/0xba
	 [&lt;ffffffff81094b13&gt;] try_to_free_pages+0x22e/0x392
	 [&lt;ffffffff81091e24&gt;] ? isolate_pages_global+0x0/0x212
	 [&lt;ffffffff8108e743&gt;] __alloc_pages_nodemask+0x3dc/0x5cf
	 [&lt;ffffffff81089529&gt;] grab_cache_page_write_begin+0x65/0xaa
	 [&lt;ffffffff8110f8c0&gt;] ext3_write_begin+0x78/0x1eb
	 [&lt;ffffffff81089ec5&gt;] generic_file_buffered_write+0x109/0x28c
	 [&lt;ffffffff8103cb69&gt;] ? current_fs_time+0x22/0x29
	 [&lt;ffffffff8108a509&gt;] __generic_file_aio_write+0x350/0x385
	 [&lt;ffffffff8108a588&gt;] ? generic_file_aio_write+0x4a/0xae
	 [&lt;ffffffff8108a59e&gt;] generic_file_aio_write+0x60/0xae
	 [&lt;ffffffff810b2e82&gt;] do_sync_write+0xe3/0x120
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810b18e1&gt;] ? __dentry_open+0x1a5/0x2b8
	 [&lt;ffffffff810b1a76&gt;] ? dentry_open+0x82/0x89
	 [&lt;ffffffffa00e693c&gt;] cachefiles_write_page+0x298/0x335 [cachefiles]
	 [&lt;ffffffffa0077147&gt;] fscache_write_op+0x178/0x2c2 [fscache]
	 [&lt;ffffffffa0075656&gt;] fscache_op_execute+0x7a/0xd1 [fscache]
	 [&lt;ffffffff81082093&gt;] slow_work_execute+0x18f/0x2d1
	 [&lt;ffffffff8108239a&gt;] slow_work_thread+0x1c5/0x308
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff810821d5&gt;] ? slow_work_thread+0x0/0x308
	 [&lt;ffffffff8104be91&gt;] kthread+0x7a/0x82
	 [&lt;ffffffff8100beda&gt;] child_rip+0xa/0x20
	 [&lt;ffffffff8100b87c&gt;] ? restore_args+0x0/0x30
	 [&lt;ffffffff8102ef83&gt;] ? tg_shares_up+0x171/0x227
	 [&lt;ffffffff8104be17&gt;] ? kthread+0x0/0x82
	 [&lt;ffffffff8100bed0&gt;] ? child_rip+0x0/0x20

In the above backtrace, the following is happening:

 (1) A page storage operation is being executed by a slow-work thread
     (fscache_write_op()).

 (2) FS-Cache farms the operation out to the cache to perform
     (cachefiles_write_page()).

 (3) CacheFiles is then calling Ext3 to perform the actual write, using Ext3's
     standard write (do_sync_write()) under KERNEL_DS directly from the netfs
     page.

 (4) However, for Ext3 to perform the write, it must allocate some memory, in
     particular, it must allocate at least one page cache page into which it
     can copy the data from the netfs page.

 (5) Under OOM conditions, the memory allocator can't immediately come up with
     a page, so it uses vmscan to find something to discard
     (try_to_free_pages()).

 (6) vmscan finds a clean netfs page it might be able to discard (possibly the
     one it's trying to write out).

 (7) The netfs is called to throw the page away (nfs_release_page()) - but it's
     called with __GFP_WAIT, so the netfs decides to wait for the store to
     complete (__fscache_wait_on_page_write()).

 (8) This blocks a slow-work processing thread - possibly against itself.

The system ends up stuck because it can't write out any netfs pages to the
cache without allocating more memory.

To avoid this, we make FS-Cache cancel some writes that aren't in the
middle of actually being performed.  This means that some data won't
make it into the cache this time.  To support this, a new FS-Cache
function, fscache_maybe_release_page(), is added to replace what the
netfs releasepage() functions used to do with respect to the cache.

The decisions fscache_maybe_release_page() makes are counted and displayed
through /proc/fs/fscache/stats on a line labelled "VmScan".  There are four
counters provided: "nos=N" - pages that weren't pending storage; "gon=N" -
pages that were pending storage when we first looked, but weren't by the time
we got the object lock; "bsy=N" - pages that we ignored as they were actively
being written when we looked; and "can=N" - pages that we cancelled the storage
of.
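
From the netfs side, the releasepage() path then reduces to something
like this (a sketch with assumed NFS helper names; the real error
handling differs):

	static int nfs_fscache_release_page(struct page *page, gfp_t gfp)
	{
		if (PageFsCache(page)) {
			struct fscache_cookie *cookie =
				nfs_i_fscache(page-&gt;mapping-&gt;host);

			/* May cancel a pending store instead of waiting,
			 * or refuse the release if the write is busy. */
			if (!fscache_maybe_release_page(cookie, page, gfp))
				return 0;
		}
		return 1;
	}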

What I'd really like to do is alter the behaviour of the cancellation
heuristics, depending on how necessary it is to expel pages.  If there are
plenty of other pages that aren't waiting to be written to the cache that
could be ejected first, then it would be nice to hold up on immediate
cancellation of cache writes - but I don't see a way of doing that.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>FS-Cache: Handle read request vs lookup, creation or other cache failure</title>
<updated>2009-11-19T18:11:32+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=e3d4d28b1c8cc7c26536a50b43d86ccd39878550'/>
<id>e3d4d28b1c8cc7c26536a50b43d86ccd39878550</id>
<content type='text'>
FS-Cache doesn't correctly handle the netfs requesting a read from the cache
on an object that failed or was withdrawn by the cache.  A trace similar to
the following might be seen:

	CacheFiles: Lookup failed error -105
	[exe   ] unexpected submission OP165afe [OBJ6cac OBJECT_LC_DYING]
	[exe   ] objstate=OBJECT_LC_DYING [OBJECT_LC_DYING]
	[exe   ] objflags=0
	[exe   ] objevent=9 [fffffffffffffffb]
	[exe   ] ops=0 inp=0 exc=0
	Pid: 6970, comm: exe Not tainted 2.6.32-rc6-cachefs #50
	Call Trace:
	 [&lt;ffffffffa0076477&gt;] fscache_submit_op+0x3ff/0x45a [fscache]
	 [&lt;ffffffffa0077997&gt;] __fscache_read_or_alloc_pages+0x187/0x3c4 [fscache]
	 [&lt;ffffffffa00b6480&gt;] ? nfs_readpage_from_fscache_complete+0x0/0x66 [nfs]
	 [&lt;ffffffffa00b6388&gt;] __nfs_readpages_from_fscache+0x7e/0x176 [nfs]
	 [&lt;ffffffff8108e483&gt;] ? __alloc_pages_nodemask+0x11c/0x5cf
	 [&lt;ffffffffa009d796&gt;] nfs_readpages+0x114/0x1d7 [nfs]
	 [&lt;ffffffff81090314&gt;] __do_page_cache_readahead+0x15f/0x1ec
	 [&lt;ffffffff81090228&gt;] ? __do_page_cache_readahead+0x73/0x1ec
	 [&lt;ffffffff810903bd&gt;] ra_submit+0x1c/0x20
	 [&lt;ffffffff810906bb&gt;] ondemand_readahead+0x227/0x23a
	 [&lt;ffffffff81090762&gt;] page_cache_sync_readahead+0x17/0x19
	 [&lt;ffffffff8108a99e&gt;] generic_file_aio_read+0x236/0x5a0
	 [&lt;ffffffffa00937bd&gt;] nfs_file_read+0xe4/0xf3 [nfs]
	 [&lt;ffffffff810b2fa2&gt;] do_sync_read+0xe3/0x120
	 [&lt;ffffffff81354cc3&gt;] ? _spin_unlock_irq+0x2b/0x31
	 [&lt;ffffffff8104c0f1&gt;] ? autoremove_wake_function+0x0/0x34
	 [&lt;ffffffff811848e5&gt;] ? selinux_file_permission+0x5d/0x10f
	 [&lt;ffffffff81352bdb&gt;] ? thread_return+0x3e/0x101
	 [&lt;ffffffff8117d7b0&gt;] ? security_file_permission+0x11/0x13
	 [&lt;ffffffff810b3b06&gt;] vfs_read+0xaa/0x16f
	 [&lt;ffffffff81058df0&gt;] ? trace_hardirqs_on_caller+0x10c/0x130
	 [&lt;ffffffff810b3c84&gt;] sys_read+0x45/0x6c
	 [&lt;ffffffff8100ae2b&gt;] system_call_fastpath+0x16/0x1b

The object state might also be OBJECT_DYING or OBJECT_WITHDRAWING.

This should be handled by simply rejecting the new operation with ENOBUFS.
There's no need to log an error for it.  Events of this type now appear in the
stats file under Ops:rej.
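
As an illustrative sketch only (the state comparison mirrors the object
states in the trace above; exact names in the patch may differ), the
rejection amounts to:

	/* In fscache_submit_op(): object failed, dying or withdrawing */
	if (object-&gt;state &gt;= FSCACHE_OBJECT_DYING) {
		fscache_stat(&amp;fscache_n_op_rejected);	/* Ops:rej */
		return -ENOBUFS;	/* netfs falls back to the server */
	}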

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Fix lock misorder in fscache_write_op()</title>
<updated>2009-11-19T18:11:25+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:25+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=1bccf513ac49d44604ba1cddcc29f5886e70f1b6'/>
<id>1bccf513ac49d44604ba1cddcc29f5886e70f1b6</id>
<content type='text'>
FS-Cache has two structs for keeping track of the internal state of a cached
file: the fscache_cookie struct, which represents the netfs's state, and the
fscache_object struct, which represents the cache's state.  Each has a
pointer that points to the other (when both are in existence), and each has a
spinlock for pointer maintenance.

Since netfs operations approach these structures from the cookie side, they get
the cookie lock first, then the object lock.  Cache operations, on the other
hand, approach from the object side, and get the object lock first.  It is not
then permitted for a cache operation to get the cookie lock whilst it is
holding the object lock lest deadlock occur; instead, it must do one of two
things:

 (1) increment the cookie usage counter, drop the object lock and then get both
     locks in order, or

 (2) simply hold the object lock as certain parts of the cookie may not be
     altered whilst the object lock is held.

It is also not permitted to follow either pointer without holding the lock at
the end you start with.  To break the pointers between the cookie and the
object, both locks must be held.
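
As a kernel-style sketch, option (1) for a cache operation that holds the
object lock but needs the cookie lock looks like this (illustrative only,
not code from this patch):

	struct fscache_cookie *cookie = object-&gt;cookie;

	atomic_inc(&amp;cookie-&gt;usage);	/* pin the cookie first */
	spin_unlock(&amp;object-&gt;lock);	/* drop the out-of-order lock */

	spin_lock(&amp;cookie-&gt;lock);	/* retake both in the right order */
	spin_lock(&amp;object-&gt;lock);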

fscache_write_op(), however, violates the locking rules: It attempts to get the
cookie lock without (a) checking that the cookie pointer is a valid pointer,
and (b) holding the object lock to protect the cookie pointer whilst it follows
it.  This is so that it can access the pending page store tree without
interference from __fscache_write_page().

This is fixed by splitting the cookie lock, such that the page store tracking
tree is protected by its own lock, and checking that the cookie pointer is
non-NULL before we attempt to follow it whilst holding the object lock.

The new lock is subordinate to both the cookie lock and the object lock, and so
should be taken after those.
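
With the split, fscache_write_op() can reach the tree safely like so (a
sketch; the new lock is called stores_lock here for illustration):

	spin_lock(&amp;object-&gt;lock);
	cookie = object-&gt;cookie;
	if (cookie) {
		spin_lock(&amp;cookie-&gt;stores_lock);
		/* ... access the pending page store tree ... */
		spin_unlock(&amp;cookie-&gt;stores_lock);
	}
	spin_unlock(&amp;object-&gt;lock);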

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Permit cache retrieval ops to be interrupted in the initial wait phase</title>
<updated>2009-11-19T18:11:19+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=5753c441889253e4323eee85f791a1d64cf08196'/>
<id>5753c441889253e4323eee85f791a1d64cf08196</id>
<content type='text'>
Permit operations that retrieve data from the cache or allocate space in the
cache for future writes to be interrupted whilst they're waiting for permission
to proceed.  Typically this wait occurs whilst the cache object is being looked
up on disk in the background.

If an interruption occurs, and the operation has not yet been given the
go-ahead to run, the operation is dequeued and cancelled, and control returns
to the netfs's read routine with none of the requested pages
having been read or in any way marked as known by the cache.

This means that the initial wait is done interruptibly rather than
uninterruptibly.
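
The wait itself follows the usual interruptible-wait pattern (an
illustrative sketch; the flag and helper names here are approximate):

	/* Wait for the go-ahead; a signal cancels the operation instead */
	if (wait_event_interruptible(op-&gt;waitq,
				     !test_bit(FSCACHE_OP_WAITING,
					       &amp;op-&gt;flags)) &lt; 0) {
		fscache_cancel_op(op);	/* dequeue; no pages read */
		return -ERESTARTSYS;
	}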

In addition, extra stats values are made available to show the number of ops
cancelled and the number of cache space allocations interrupted.

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
<entry>
<title>FS-Cache: Add counters for entry/exit to/from cache operation functions</title>
<updated>2009-11-19T18:11:08+00:00</updated>
<author>
<name>David Howells</name>
<email>dhowells@redhat.com</email>
</author>
<published>2009-11-19T18:11:08+00:00</published>
<link rel='alternate' type='text/html' href='https://git.toradex.cn/cgit/linux-toradex.git/commit/?id=52bd75fdb135d6133d878ae60c6e7e3f4ebc1cfc'/>
<id>52bd75fdb135d6133d878ae60c6e7e3f4ebc1cfc</id>
<content type='text'>
Count entries to and exits from cache operation table functions.  Maintain
these as a single counter that's added to or removed from as appropriate.
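
For illustration, such an in-flight counter is a single atomic_t that goes
up on entry and down on exit (the names here are made up for the example):

	static atomic_t fscache_n_cop_in_flight;

	static inline void fscache_cop_enter(void)
	{
		atomic_inc(&amp;fscache_n_cop_in_flight);
	}

	static inline void fscache_cop_exit(void)
	{
		atomic_dec(&amp;fscache_n_cop_in_flight);
	}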

Signed-off-by: David Howells &lt;dhowells@redhat.com&gt;
</content>
</entry>
</feed>
