From 48526149964e69fc54a06c409e13d36990386464 Mon Sep 17 00:00:00 2001
From: Johannes Weiner
Date: Wed, 29 Jan 2014 14:05:41 -0800
Subject: mm/page-writeback.c: do not count anon pages as dirtyable memory

commit a1c3bfb2f67ef766de03f1f56bdfff9c8595ab14 upstream.

The VM is currently heavily tuned to avoid swapping.  Whether that is
good or bad is a separate discussion, but as long as the VM won't swap
to make room for dirty cache, we cannot consider anonymous pages when
calculating the amount of dirtyable memory, the baseline to which
dirty_background_ratio and dirty_ratio are applied.

A simple workload that occupies a significant size (40+%, depending on
memory layout, storage speeds etc.) of memory with anon/tmpfs pages and
uses the remainder for a streaming writer demonstrates this problem.  In
that case, the actual cache pages are a small fraction of what is
considered dirtyable overall, which results in a relatively large
portion of the cache pages being dirtied.  As kswapd starts rotating
these, random tasks enter direct reclaim and stall on IO.

Only consider free pages and file pages dirtyable.

Signed-off-by: Johannes Weiner
Reported-by: Tejun Heo
Tested-by: Tejun Heo
Reviewed-by: Rik van Riel
Cc: Mel Gorman
Cc: Wu Fengguang
Reviewed-by: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/vmstat.h | 3 ---
 1 file changed, 3 deletions(-)

(limited to 'include')

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c586679b6fef..9044769f2296 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -142,9 +142,6 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 	return x;
 }
 
-extern unsigned long global_reclaimable_pages(void);
-extern unsigned long zone_reclaimable_pages(struct zone *zone);
-
 #ifdef CONFIG_NUMA
 /*
  * Determine the per node value of a stat item.  This function
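
For reference, the substance of this change lives in mm/page-writeback.c
rather than in the include/ portion shown above (the diff here only drops
the now-unused declarations).  A rough sketch of what the dirtyable-memory
calculation looks like once anonymous pages are excluded; identifiers such
as dirty_balance_reserve, vm_highmem_is_dirtyable and
highmem_dirtyable_memory() follow kernels of that era, and this should be
read as an illustration rather than the exact upstream hunk:

static unsigned long global_dirtyable_memory(void)
{
	unsigned long x;

	/* Start from the free pages, minus the reserve that must stay free. */
	x = global_page_state(NR_FREE_PAGES);
	x -= min(x, dirty_balance_reserve);

	/*
	 * Only file LRU pages are counted as dirtyable cache; anonymous
	 * pages are no longer part of the baseline.
	 */
	x += global_page_state(NR_INACTIVE_FILE);
	x += global_page_state(NR_ACTIVE_FILE);

	/* Optionally exclude highmem, which cannot always hold dirty pages. */
	if (!vm_highmem_is_dirtyable)
		x -= highmem_dirtyable_memory(x);

	return x + 1;	/* Ensure that we never return 0 */
}

dirty_background_ratio and dirty_ratio are then applied against this total
to derive the background and hard dirty thresholds, so shrinking the
baseline to free plus file pages directly tightens how much of the page
cache is allowed to be dirty in the anon-heavy workload described above.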