author		Hugh Dickins <hugh@veritas.com>	2007-05-04 00:53:54 +0200
committer	Adrian Bunk <bunk@stusta.de>	2007-05-04 00:53:54 +0200
commit		7943951f236f91699a634097a70abc35927efeb9
tree		585ec55ceccc1df04677b88798f453b0605bc6de
parent		ffd0472d4ece96766eec98ebb0ad649dc76248b8
holepunch: fix disconnected pages after second truncate
shmem_truncate_range has its own truncate_inode_pages_range, to free any
pages racily instantiated while it was in progress: a SHMEM_PAGEIN flag
is set when this might have happened. But holepunching gets no chance
to clear that flag at the start of vmtruncate_range, so it's always set
(unless a truncate came just before), so holepunch almost always does
this second truncate_inode_pages_range.
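
As a rough sketch (not the exact kernel source), the tail of
shmem_truncate_range behaves like this, which is why punch_hole,
never having cleared SHMEM_PAGEIN, almost always takes the extra pass:

	/* sketch: second pass at the end of shmem_truncate_range */
	if (info->flags & SHMEM_PAGEIN) {
		/* shmem_getpage may have instantiated pages in the range
		 * while we were truncating: free them from pagecache again */
		truncate_inode_pages_range(inode->i_mapping, start, end);
	}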
shmem holepunch has unlikely swap<->file races hereabouts whatever we do
(without a fuller rework than is fit for this release): I was going to
skip the second truncate in the punch_hole case, but Miklos points out
that would make holepunch correctness more vulnerable to swapoff. So
keep the second truncate, but follow it by an unmap_mapping_range to
eliminate the disconnected pages (freed from pagecache while still
mapped in userspace) that it might have left behind.
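
Concretely, the fix (shown in the diff below) follows the second
truncate with an unmap of the punched range; a hedged reading of the
arguments to unmap_mapping_range:

	truncate_inode_pages_range(inode->i_mapping, start, end);
	if (punch_hole)
		/* zap user ptes over the hole [start, start + (end - start));
		 * the final argument (even_cows = 1) unmaps private COW
		 * copies too, so no freed page stays mapped in userspace */
		unmap_mapping_range(inode->i_mapping, start, end - start, 1);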
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
Diffstat (limited to 'mm')
-rw-r--r--	mm/shmem.c	8
1 files changed, 8 insertions, 0 deletions
diff --git a/mm/shmem.c b/mm/shmem.c
index 9b5eeb4af508..9e71b6ca35eb 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -669,8 +669,16 @@ done2:
 		 * generic_delete_inode did it, before we lowered next_index.
 		 * Also, though shmem_getpage checks i_size before adding to
 		 * cache, no recheck after: so fix the narrow window there too.
+		 *
+		 * Recalling truncate_inode_pages_range and unmap_mapping_range
+		 * every time for punch_hole (which never got a chance to clear
+		 * SHMEM_PAGEIN at the start of vmtruncate_range) is expensive,
+		 * yet hardly ever necessary: try to optimize them out later.
 		 */
 		truncate_inode_pages_range(inode->i_mapping, start, end);
+		if (punch_hole)
+			unmap_mapping_range(inode->i_mapping, start,
+						end - start, 1);
 	}
 
 	spin_lock(&info->lock);