| author | Vladimir Davydov <vdavydov@parallels.com> | 2014-10-20 15:50:30 +0400 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2014-10-27 11:15:27 -0400 |
| commit | 344736f29b359790facd0b7a521e367f1715c11c (patch) | |
| tree | c32487de22e7640a828f28819b5707790ede5105 /mm/slab.c | |
| parent | 8447a0fee974433f7e0035fd30e1edecf00e014f (diff) | |
cpuset: simplify cpuset_node_allowed API
The current cpuset API for checking whether a zone/node is allowed to
allocate from looks rather awkward. We have hardwall and softwall
versions of cpuset_node_allowed, with the softwall version doing
literally the same thing as the hardwall version if __GFP_HARDWALL is
passed to it in the gfp flags. If it isn't, the softwall version may
check the given node against the enclosing hardwall cpuset, which
requires taking the callback lock.
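To make the distinction concrete, here is a minimal sketch (an editorial
illustration, not the kernel source): node_in_current_cpuset() and
node_in_nearest_hardwall_cpuset() are hypothetical stand-ins for the real
cpuset hierarchy checks, and kernel headers are assumed for gfp_t,
__GFP_HARDWALL and bool.

	/* Hypothetical sketch only -- not the actual implementation. */
	static bool sketch_node_allowed_hardwall(int node, gfp_t gfp_mask)
	{
		/* Check the node against the task's own cpuset. */
		return node_in_current_cpuset(node);
	}

	static bool sketch_node_allowed_softwall(int node, gfp_t gfp_mask)
	{
		if (gfp_mask & __GFP_HARDWALL)
			/* Literally the same as the hardwall check. */
			return node_in_current_cpuset(node);
		/*
		 * Otherwise fall back to the nearest enclosing hardwall
		 * cpuset, which requires taking the callback lock.
		 */
		return node_in_nearest_hardwall_cpuset(node);
	}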
Such a distinction was introduced by commit 02a0e53d8227 ("cpuset:
rework cpuset_zone_allowed api"). Before that, there was only one
version, with the __GFP_HARDWALL flag determining its behavior. The
purpose of the
commit was to avoid sleep-in-atomic bugs when someone would mistakenly
call the function without the __GFP_HARDWALL flag for an atomic
allocation. The suffixes introduced were intended to make the callers
think before using the function.
However, since the callback lock was converted from a mutex to a
spinlock by the previous patch, the softwall check function cannot
sleep, and these precautions are no longer necessary.
So let's simplify the API back to the single check.
Suggested-by: David Rientjes <rientjes@google.com>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'mm/slab.c')
-rw-r--r-- | mm/slab.c | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index eb2b2ea30130..063a91bc8826 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3012,7 +3012,7 @@ retry:
 	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
 		nid = zone_to_nid(zone);
 
-		if (cpuset_zone_allowed_hardwall(zone, flags) &&
+		if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL) &&
 			get_node(cache, nid) &&
 			get_node(cache, nid)->free_objects) {
 				obj = ____cache_alloc_node(cache,
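For comparison, a caller wanting the old softwall behavior would
presumably keep calling the single check without OR-ing in the flag.
A sketch of both conversions in the same diff style (the hardwall pair
mirrors the hunk above; the softwall pair is an editorial illustration
inferred from the description, not part of this commit):

	/* Hardwall caller (as in the hunk above): */
-	if (cpuset_zone_allowed_hardwall(zone, flags))
+	if (cpuset_zone_allowed(zone, flags | __GFP_HARDWALL))

	/* Softwall caller (inferred; not part of this commit): */
-	if (cpuset_zone_allowed_softwall(zone, flags))
+	if (cpuset_zone_allowed(zone, flags))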