author    | Mel Gorman <mgorman@suse.de>              | 2014-08-28 19:35:42 +0100
committer | Jiri Slaby <jslaby@suse.cz>               | 2014-09-26 11:52:09 +0200
commit    | 4a4ede23dd902513b3a17d3e61cef9baf650d33e
tree      | 51a179f456b40d97cc89a35a8045c3428cc334f6 /include
parent    | b4fc580f75325271de2841891bb5816cea5ca101
mm: move zone->pages_scanned into a vmstat counter
commit 0d5d823ab4e608ec7b52ac4410de4cb74bbe0edd upstream.
zone->pages_scanned is a write-intensive cache line during page reclaim
and it's also updated during page free. Move the counter into vmstat to
take advantage of the per-cpu updates and do not update it in the free
paths unless necessary.
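As a sketch of the resulting pattern (simplified; the hunks that convert
the call sites live in mm/vmscan.c and mm/page_alloc.c, outside the
'include' diffstat shown below, so the exact code may differ), using the
existing vmstat helpers __mod_zone_page_state() and zone_page_state():

	/* Reclaim accounts scanned pages through the per-cpu
	 * vmstat machinery instead of writing the shared
	 * zone->pages_scanned cache line directly.
	 */
	__mod_zone_page_state(zone, NR_PAGES_SCANNED, nr_scanned);

	/* Readers fetch the folded-up aggregate. */
	nr_scanned = zone_page_state(zone, NR_PAGES_SCANNED);

	/* The free paths reset the counter only when it is already
	 * non-zero, so the common case never dirties the line.
	 */
	if (nr_scanned)
		__mod_zone_page_state(zone, NR_PAGES_SCANNED, -nr_scanned);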
On a small UMA machine running tiobench the difference is marginal. On
a 4-node machine the overhead is more noticeable. Note that automatic
NUMA balancing was disabled for this test as otherwise the system CPU
overhead is unpredictable.
             3.16.0-rc3    3.16.0-rc3    3.16.0-rc3
                vanilla  rearrange-v5     vmstat-v5
User             746.94        759.78        774.56
System         65336.22      58350.98      32847.27
Elapsed        27553.52      27282.02      27415.04
Note that the overhead reduction will vary depending on where exactly
pages are allocated and freed.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/mmzone.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9ff35dad8f21..1df12c6a80d8 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -143,6 +143,7 @@ enum zone_stat_item {
 	NR_SHMEM,		/* shmem pages (included tmpfs/GEM pages) */
 	NR_DIRTIED,		/* page dirtyings since bootup */
 	NR_WRITTEN,		/* page writings since bootup */
+	NR_PAGES_SCANNED,	/* pages scanned since last reclaim */
 #ifdef CONFIG_NUMA
 	NUMA_HIT,		/* allocated in intended node */
 	NUMA_MISS,		/* allocated in non intended node */
@@ -478,7 +479,6 @@ struct zone {
 	/* Fields commonly accessed by the page reclaim scanner */
 	spinlock_t		lru_lock;
-	unsigned long		pages_scanned;	/* since last reclaim */
 	struct lruvec		lruvec;
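One consumer this enables, as a sketch of the reader side (the actual
conversion of zone_reclaimable() happens in mm/vmscan.c in the same
upstream commit, so treat this as illustrative rather than verbatim):

	static bool zone_reclaimable(struct zone *zone)
	{
		/* Give up on a zone once six times its reclaimable
		 * pages have been scanned since the last reclaim.
		 */
		return zone_page_state(zone, NR_PAGES_SCANNED) <
			zone_reclaimable_pages(zone) * 6;
	}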