| author | Li Zefan <lizefan@huawei.com> | 2013-09-10 11:43:37 +0800 |
|---|---|---|
| committer | Ben Hutchings <ben@decadent.org.uk> | 2014-04-02 00:58:40 +0100 |
| commit | 78d926c3e43b2332393d3b203d62934f8fc73401 | |
| tree | 77cc9d65e6c4a79ef13dbb8a718fdee873c0b919 /mm | |
| parent | c1ce34960af581bb75f423b2fae6cb82304743c0 | |
slub: Fix calculation of cpu slabs
commit 8afb1474db4701d1ab80cd8251137a3260e6913e upstream.
```
/sys/kernel/slab/:t-0000048 # cat cpu_slabs
231 N0=16 N1=215
/sys/kernel/slab/:t-0000048 # cat slabs
145 N0=36 N1=109
```
As seen above, the number of slabs is smaller than the number of cpu slabs, which is impossible: cpu slabs are a subset of all slabs.
The bug was introduced by commit 49e2258586b423684f03c278149ab46d8f8b6700
("slub: per cpu cache for partial pages").
We should use page->pages instead of page->pobjects when calculating
the number of cpu partial slabs: pobjects is an approximate count of
free objects on the per-cpu partial list, while pages is the number of
slabs on it. This also fixes the mapping of slabs to nodes.
As there is no variable storing the number of total/active objects in
cpu partial slabs, and we have no user interfaces requiring those
statistics, I just added WARN_ON_ONCE() for those cases.
Acked-by: Christoph Lameter <cl@linux.com>
Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'mm')
| -rw-r--r-- | mm/slub.c | 8 |
1 file changed, 7 insertions(+), 1 deletion(-)
```diff
diff --git a/mm/slub.c b/mm/slub.c
index 5710788c58e7..fc719f7dd407 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4483,7 +4483,13 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 
 			page = c->partial;
 			if (page) {
-				x = page->pobjects;
+				node = page_to_nid(page);
+				if (flags & SO_TOTAL)
+					WARN_ON_ONCE(1);
+				else if (flags & SO_OBJECTS)
+					WARN_ON_ONCE(1);
+				else
+					x = page->pages;
 				total += x;
 				nodes[node] += x;
 			}
```