author:    Li Zefan <lizefan@huawei.com>   2014-07-09 16:47:41 +0800
committer: Tejun Heo <tj@kernel.org>       2014-07-09 15:56:16 -0400
commit:    554b0d1c845e42ef01d7f6f5f24b3e4c6129ce8f
tree:      f887cbfb24c0f7e2a8c904913a368abb6d28f11c /kernel/cpuset.c
parent:    734d45130cb4f668fb33d182f6943523628582ef
cpuset: inherit ancestor's masks if effective_{cpus, mems} becomes empty
We're going to have separate user-configured masks and effective ones.
Eventually, the configured masks can only be changed by writing cpuset.cpus
and cpuset.mems, and they won't be restricted by the parent cpuset. The
effective masks, on the other hand, reflect cpu/memory hotplug and
hierarchical restriction, and they are the real masks that apply to the
tasks in the cpuset.
We calculate the effective masks this way (a code sketch follows the list):
- top cpuset's effective_mask == online_mask, otherwise
- cpuset's effective_mask == configured_mask & parent's effective_mask;
  if the result is empty, it inherits the parent's effective mask.
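Condensed from the hunks below, a minimal sketch of the CPU side of this rule. The helper name compute_effective_cpumask() is hypothetical; the actual patch performs these steps inline in update_cpumasks_hier(), and the nodemask side is analogous with nodes_and()/nodes_empty():

```c
/*
 * Hypothetical helper illustrating the effective-mask rule; the patch
 * itself open-codes these two steps in the hierarchy walk.
 */
static void compute_effective_cpumask(struct cpumask *new_cpus,
				      struct cpuset *cs,
				      struct cpuset *parent)
{
	/* effective = user-configured mask restricted by the parent */
	cpumask_and(new_cpus, cs->cpus_allowed, parent->effective_cpus);

	/*
	 * Never let the effective mask go empty: fall back to the parent's
	 * effective mask, which is guaranteed to be non-empty.
	 */
	if (cpumask_empty(new_cpus))
		cpumask_copy(new_cpus, parent->effective_cpus);
}
```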
These behavior changes apply to the default hierarchy only. On the legacy
hierarchy, effective_mask and configured_mask are the same, so we won't
break old interfaces.
To make cs->effective_{cpus,mems} real effective masks, we need to:
- update the effective masks at hotplug
- update the effective masks at config change
- take on the ancestor's mask when the effective mask is empty (see the
  illustration after the diff)
The last item is done here.
This won't introduce behavior change.
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel/cpuset.c')
-rw-r--r-- | kernel/cpuset.c | 22 |
1 file changed, 22 insertions, 0 deletions
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index da766c3736c4..f8340026d01c 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -877,6 +877,13 @@ static void update_cpumasks_hier(struct cpuset *cs, struct cpumask *new_cpus)
 
 		cpumask_and(new_cpus, cp->cpus_allowed, parent->effective_cpus);
 
+		/*
+		 * If it becomes empty, inherit the effective mask of the
+		 * parent, which is guaranteed to have some CPUs.
+		 */
+		if (cpumask_empty(new_cpus))
+			cpumask_copy(new_cpus, parent->effective_cpus);
+
 		/* Skip the whole subtree if the cpumask remains the same. */
 		if (cpumask_equal(new_cpus, cp->effective_cpus)) {
 			pos_css = css_rightmost_descendant(pos_css);
@@ -1123,6 +1130,13 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 
 		nodes_and(*new_mems, cp->mems_allowed, parent->effective_mems);
 
+		/*
+		 * If it becomes empty, inherit the effective mask of the
+		 * parent, which is guaranteed to have some MEMs.
+		 */
+		if (nodes_empty(*new_mems))
+			*new_mems = parent->effective_mems;
+
 		/* Skip the whole subtree if the nodemask remains the same. */
 		if (nodes_equal(*new_mems, cp->effective_mems)) {
 			pos_css = css_rightmost_descendant(pos_css);
@@ -2102,7 +2116,11 @@ retry:
 
 	mutex_lock(&callback_mutex);
 	cpumask_andnot(cs->cpus_allowed, cs->cpus_allowed, &off_cpus);
+
+	/* Inherit the effective mask of the parent, if it becomes empty. */
 	cpumask_andnot(cs->effective_cpus, cs->effective_cpus, &off_cpus);
+	if (on_dfl && cpumask_empty(cs->effective_cpus))
+		cpumask_copy(cs->effective_cpus, parent_cs(cs)->effective_cpus);
 	mutex_unlock(&callback_mutex);
 
 	/*
@@ -2117,7 +2135,11 @@ retry:
 
 	mutex_lock(&callback_mutex);
 	nodes_andnot(cs->mems_allowed, cs->mems_allowed, off_mems);
+
+	/* Inherit the effective mask of the parent, if it becomes empty */
 	nodes_andnot(cs->effective_mems, cs->effective_mems, off_mems);
+	if (on_dfl && nodes_empty(cs->effective_mems))
+		cs->effective_mems = parent_cs(cs)->effective_mems;
 	mutex_unlock(&callback_mutex);
 
 	/*
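To see the inheritance fallback in action, here is a standalone illustration. It uses plain bitmasks instead of the kernel's cpumask API, and the cpuset layout (CPUs 0-3 online, a parent confined to CPUs 2-3, a child configured with CPUs 0-1) is invented for the example:

```c
#include <stdio.h>

/* CPU sets as plain bitmasks: bit N set means CPU N is in the set. */
static unsigned int effective(unsigned int configured,
			      unsigned int parent_effective)
{
	unsigned int eff = configured & parent_effective;

	/* If the intersection is empty, inherit the parent's effective mask. */
	return eff ? eff : parent_effective;
}

int main(void)
{
	unsigned int top_online = 0xf;			/* CPUs 0-3 online */
	unsigned int parent = effective(0xc, top_online);	/* configured {2,3} */
	unsigned int child = effective(0x3, parent);		/* configured {0,1} */

	printf("parent effective: 0x%x\n", parent);	/* 0xc: CPUs 2-3 */
	printf("child  effective: 0x%x\n", child);	/* 0xc: inherited CPUs 2-3 */
	return 0;
}
```

The child's configured mask does not intersect its parent's effective mask, so its effective mask falls back to the parent's, which is what the hunks above do for the real cpumasks and nodemasks.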