| author | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2007-01-29 16:37:38 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2007-01-29 16:37:38 -0800 |
| commit | dc6e29da9162fa8fa2a9e798569c0f6e87975614 | |
| tree | 9d1e9c8095ea4fbc7d528a63d4b21e32e44a70a9 /mm/page-writeback.c | |
| parent | 5263bf65d6342e12ab716db8e529501670979321 | |
Fix balance_dirty_pages() calculations with CONFIG_HIGHMEM
This makes balance_dirty_pages() always base its calculations on the
amount of non-highmem memory in the machine, rather than trying to base
them on total memory and then falling back on non-highmem memory if the
mapping being written wasn't highmem-capable.
This not only fixes a situation where two different writers could have
wildly different notions of what a "balanced" dirty state is, but it
also means that people with highmem machines no longer run into an OOM
situation because regular memory fills up with dirty pages.
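To make the mismatch concrete, here is a small standalone C sketch of the threshold arithmetic. It is not kernel code: the real get_dirty_limits() also adjusts the ratios based on how much memory is mapped, and dirty_limit(), highmem_capable_mapping and the page counts below are made up for the illustration (a hypothetical 4GB i386 box with ~896MB of lowmem, 4KB pages, and the then-default vm_dirty_ratio of 40).

```c
#include <stdio.h>

/*
 * Illustrative sketch only, not kernel code.  Hypothetical figures for a
 * 4GB i386 box with ~896MB of lowmem and 4KB pages.
 */
static const long vm_total_pages  = 1048576;	/* 4GB of RAM        */
static const long totalhigh_pages = 819200;	/* ~3.1GB of highmem */

static long dirty_limit(long dirty_ratio, int highmem_capable_mapping,
			int always_exclude_highmem)
{
	long available_memory = vm_total_pages;

	/*
	 * Old behaviour: highmem was excluded only for lowmem-only mappings.
	 * New behaviour: highmem is always excluded.
	 */
	if (always_exclude_highmem || !highmem_capable_mapping)
		available_memory -= totalhigh_pages;

	return dirty_ratio * available_memory / 100;
}

int main(void)
{
	/* Assume the then-default vm_dirty_ratio of 40. */
	printf("old, highmem-capable writer: %ld pages\n", dirty_limit(40, 1, 0));
	printf("old, lowmem-only writer:     %ld pages\n", dirty_limit(40, 0, 0));
	printf("new, any writer:             %ld pages\n", dirty_limit(40, 1, 1));
	return 0;
}
```

With those made-up numbers, the old code let a highmem-capable writer dirty about 419,430 pages (~1.6GB), more than the box's entire 896MB of lowmem, while a lowmem-only writer was balanced against only about 91,750 pages (~358MB). After the patch both see the lowmem-based figure.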
We used to try to handle that OOM case in page_writeback_init() by
scaling down the dirty ratio if the machine had a lot of highmem pages,
but the scaling wasn't aggressive enough for some situations, and since
basing the dirty ratio on memory that includes highmem was broken in the
first place, let's just stop doing so.
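For a sense of why that scaling was too weak, here is the removed correction applied to the same hypothetical box. Again a sketch under assumptions: buffer_pages is a rough stand-in for nr_free_buffer_pages(), the then-default ratios of 40 and 10 are assumed, and the original clamping of the ratios to a minimum of 1 is omitted.

```c
#include <stdio.h>

int main(void)
{
	/* Hypothetical 4GB i386 box, 4KB pages, ~896MB of lowmem. */
	long vm_total_pages = 1048576;
	long buffer_pages = 229376;	/* rough stand-in for nr_free_buffer_pages() */

	/* Assumed then-default ratios. */
	long vm_dirty_ratio = 40;
	long dirty_background_ratio = 10;

	/* The scaling that this patch removes from page_writeback_init(): */
	long correction = (100 * 4 * buffer_pages) / vm_total_pages;

	if (correction < 100) {
		vm_dirty_ratio = vm_dirty_ratio * correction / 100;
		dirty_background_ratio = dirty_background_ratio * correction / 100;
	}

	/*
	 * correction ends up as 87, so vm_dirty_ratio only drops from 40 to 34.
	 * 34% of 4GB is still ~1.4GB of allowed dirty memory -- well above the
	 * ~896MB of lowmem the machine actually has.
	 */
	printf("correction = %ld, dirty_ratio = %ld, dirty_background_ratio = %ld\n",
	       correction, vm_dirty_ratio, dirty_background_ratio);
	return 0;
}
```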
(A variation of this theme fixed Justin Piszcz's OOM problem when
copying an 18GB file on a RAID setup).
Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Justin Piszcz <jpiszcz@lucidpixels.com>
Cc: Andrew Morton <akpm@osdl.org>
Cc: Neil Brown <neilb@suse.de>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Randy Dunlap <rdunlap@xenotime.net>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Adrian Bunk <bunk@stusta.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page-writeback.c')
| -rw-r--r-- | mm/page-writeback.c | 41 |
1 file changed, 18 insertions, 23 deletions
```diff
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 1d2fc89ca56d..be0efbde4994 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -133,11 +133,9 @@ get_dirty_limits(long *pbackground, long *pdirty,
 
 #ifdef CONFIG_HIGHMEM
 	/*
-	 * If this mapping can only allocate from low memory,
-	 * we exclude high memory from our count.
+	 * We always exclude high memory from our count.
 	 */
-	if (mapping && !(mapping_gfp_mask(mapping) & __GFP_HIGHMEM))
-		available_memory -= totalhigh_pages;
+	available_memory -= totalhigh_pages;
 #endif
@@ -526,28 +524,25 @@ static struct notifier_block __cpuinitdata ratelimit_nb = {
 };
 
 /*
- * If the machine has a large highmem:lowmem ratio then scale back the default
- * dirty memory thresholds: allowing too much dirty highmem pins an excessive
- * number of buffer_heads.
+ * Called early on to tune the page writeback dirty limits.
+ *
+ * We used to scale dirty pages according to how total memory
+ * related to pages that could be allocated for buffers (by
+ * comparing nr_free_buffer_pages() to vm_total_pages.
+ *
+ * However, that was when we used "dirty_ratio" to scale with
+ * all memory, and we don't do that any more. "dirty_ratio"
+ * is now applied to total non-HIGHPAGE memory (by subtracting
+ * totalhigh_pages from vm_total_pages), and as such we can't
+ * get into the old insane situation any more where we had
+ * large amounts of dirty pages compared to a small amount of
+ * non-HIGHMEM memory.
+ *
+ * But we might still want to scale the dirty_ratio by how
+ * much memory the box has..
  */
 void __init page_writeback_init(void)
 {
-	long buffer_pages = nr_free_buffer_pages();
-	long correction;
-
-	correction = (100 * 4 * buffer_pages) / vm_total_pages;
-
-	if (correction < 100) {
-		dirty_background_ratio *= correction;
-		dirty_background_ratio /= 100;
-		vm_dirty_ratio *= correction;
-		vm_dirty_ratio /= 100;
-
-		if (dirty_background_ratio <= 0)
-			dirty_background_ratio = 1;
-		if (vm_dirty_ratio <= 0)
-			vm_dirty_ratio = 1;
-	}
 	mod_timer(&wb_timer, jiffies + dirty_writeback_interval);
 	writeback_set_ratelimit();
 	register_cpu_notifier(&ratelimit_nb);
```
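One practical consequence of dropping the automatic scaling is that any further tuning is left to the administrator: vm_dirty_ratio and dirty_background_ratio remain runtime-tunable via /proc/sys/vm/dirty_ratio and /proc/sys/vm/dirty_background_ratio, so a box with an extreme highmem:lowmem ratio can still have its ratios lowered by hand, while the closing sentence of the new comment leaves open the possibility of the kernel scaling them by machine size again later.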