| author    | Johannes Weiner <hannes@cmpxchg.org>                  | 2016-03-15 14:57:04 -0700 |
|-----------|-------------------------------------------------------|---------------------------|
| committer | Linus Torvalds <torvalds@linux-foundation.org>        | 2016-03-15 16:55:16 -0700 |
| commit    | 81f8c3a461d16f0355ced3d56d6d1bb5923207a1 (patch)      |                           |
| tree      | 5d821760ca548b4357221c0399b92b7154221c33 /fs/buffer.c |                           |
| parent    | 0db2cb8da89d991762ec2aece45e55ceaee34664 (diff)       |                           |
mm: memcontrol: generalize locking for the page->mem_cgroup binding
These patches tag the page cache radix tree eviction entries with the
memcg an evicted page belonged to, thus making per-cgroup LRU reclaim
work properly and be as adaptive to new cache workingsets as global
reclaim already is.
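As a rough sketch of that idea only, not code from this series: the real entry layout lives in mm/workingset.c and differs in detail, and the names and bit width below are assumptions for illustration. The point is that a single unsigned long shadow entry can carry both the eviction "clock" reading and the ID of the memcg the page was charged to, so a later refault can be attributed to the right cgroup's LRU.

```c
#include <linux/types.h>

/* Assumed width for the sketch; the real layout differs. */
#define SKETCH_MEMCG_BITS	16
#define SKETCH_MEMCG_MASK	((1UL << SKETCH_MEMCG_BITS) - 1)

/* Tag an eviction shadow entry with the evicted page's memcg ID. */
static unsigned long sketch_pack_shadow(unsigned long memcg_id,
					unsigned long eviction)
{
	return (eviction << SKETCH_MEMCG_BITS) |
	       (memcg_id & SKETCH_MEMCG_MASK);
}

/* On refault, recover which memcg's LRU the page was evicted from. */
static void sketch_unpack_shadow(unsigned long shadow,
				 unsigned long *memcg_id,
				 unsigned long *eviction)
{
	*memcg_id = shadow & SKETCH_MEMCG_MASK;
	*eviction = shadow >> SKETCH_MEMCG_BITS;
}
```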
This should have been part of the original thrash detection patch
series, but was deferred due to the complexity of those patches.
This patch (of 5):
So far, the only sites that needed to exclude charge migration in order
to stabilize page->mem_cgroup have been the per-cgroup page statistics
updates, hence the name mem_cgroup_begin_page_stat(). But per-cgroup
thrash detection will add another site that needs a stable
page->mem_cgroup binding.
Rename these locking functions to the more generic lock_page_memcg() and
unlock_page_memcg(). Since charge migration is a cgroup1-only feature,
it may eventually be deleted, and these now easy-to-identify locking
sites can go with it.
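For illustration, a minimal sketch of the renamed locking pattern as this patch's callers use it. The wrapper function name is hypothetical, and its body stands in for whatever work needs the binding held stable; the lock/unlock calls themselves match the API at this point in the series, where lock_page_memcg() still returns the pinned memcg.

```c
#include <linux/memcontrol.h>
#include <linux/mm_types.h>

/*
 * Hypothetical caller, sketching the renamed API: lock_page_memcg()
 * blocks charge migration, so page->mem_cgroup stays stable until
 * the matching unlock_page_memcg().
 */
static void stabilize_page_memcg_example(struct page *page)
{
	struct mem_cgroup *memcg;

	memcg = lock_page_memcg(page);	/* was mem_cgroup_begin_page_stat() */
	/*
	 * page->mem_cgroup cannot change here: safe to update per-memcg
	 * page state, e.g. statistics or thrash-detection bookkeeping.
	 */
	unlock_page_memcg(memcg);	/* was mem_cgroup_end_page_stat() */
}
```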
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'fs/buffer.c')
| -rw-r--r-- | fs/buffer.c | 14 |

1 file changed, 7 insertions, 7 deletions
```diff
diff --git a/fs/buffer.c b/fs/buffer.c
index e1632abb4ca9..dc991510bb06 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -621,7 +621,7 @@ EXPORT_SYMBOL(mark_buffer_dirty_inode);
  * If warn is true, then emit a warning if the page is not uptodate and has
  * not been truncated.
  *
- * The caller must hold mem_cgroup_begin_page_stat() lock.
+ * The caller must hold lock_page_memcg().
  */
 static void __set_page_dirty(struct page *page, struct address_space *mapping,
 			     struct mem_cgroup *memcg, int warn)
@@ -683,17 +683,17 @@ int __set_page_dirty_buffers(struct page *page)
 		} while (bh != head);
 	}
 	/*
-	 * Use mem_group_begin_page_stat() to keep PageDirty synchronized with
-	 * per-memcg dirty page counters.
+	 * Lock out page->mem_cgroup migration to keep PageDirty
+	 * synchronized with per-memcg dirty page counters.
 	 */
-	memcg = mem_cgroup_begin_page_stat(page);
+	memcg = lock_page_memcg(page);
 	newly_dirty = !TestSetPageDirty(page);
 	spin_unlock(&mapping->private_lock);
 
 	if (newly_dirty)
 		__set_page_dirty(page, mapping, memcg, 1);
 
-	mem_cgroup_end_page_stat(memcg);
+	unlock_page_memcg(memcg);
 
 	if (newly_dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -1169,13 +1169,13 @@ void mark_buffer_dirty(struct buffer_head *bh)
 		struct address_space *mapping = NULL;
 		struct mem_cgroup *memcg;
 
-		memcg = mem_cgroup_begin_page_stat(page);
+		memcg = lock_page_memcg(page);
 		if (!TestSetPageDirty(page)) {
 			mapping = page_mapping(page);
 			if (mapping)
 				__set_page_dirty(page, mapping, memcg, 0);
 		}
-		mem_cgroup_end_page_stat(memcg);
+		unlock_page_memcg(memcg);
 		if (mapping)
 			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
```