author     Anjana V Kumar <anjanavk12@gmail.com>            2013-10-12 10:59:17 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2013-11-13 12:05:30 +0900
commit     5be794dc4bb66266acc91eb589ace48e82a6c77e
tree       9ff0c9154141ed21d96be76c8f39f2df51260237
parent     955a23e181561a792d6e4c1572848b7ba306499f
cgroup: fix to break the while loop in cgroup_attach_task() correctly
commit ea84753c98a7ac6b74e530b64c444a912b3835ca upstream.
Both Anjana and Eunki reported a stall in the while_each_thread loop
in cgroup_attach_task().
It's because, when we attach a single thread to a cgroup, we won't break
out of the loop if the thread is exiting or is already in that cgroup.
If the task is already in the cgroup, the bug can lead to another thread
being attached to the cgroup unexpectedly:
# echo 5207 > tasks
# cat tasks
5207
# echo 5207 > tasks
# cat tasks
5207
5215
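To make the control flow concrete, here is a standalone userspace model of
the loop (a simplified sketch, not the kernel code: the task struct, pids
and flags are invented for illustration, and while_each_thread() is written
out as the circular thread-list walk it expands to):

	#include <stdio.h>

	/* toy stand-in for the kernel's thread list; 5211 plays an exiting thread */
	struct task { int pid; int exiting; struct task *next; };

	int main(void)
	{
		struct task t3 = { 5215, 0, NULL };
		struct task t2 = { 5211, 1, &t3 };
		struct task t1 = { 5207, 0, &t2 };
		t3.next = &t1;			/* three threads linked in a circle */

		int threadgroup = 0;		/* attaching a single task, as above */
		int already_attached = 1;	/* 5207 is already in the cgroup */
		struct task *leader = &t1, *tsk = leader;

		do {
			if (tsk->exiting)
				continue;	/* skips the break below -- the bug */
			if (tsk->pid == 5207 && already_attached)
				continue;	/* likewise */
			printf("attaching pid %d\n", tsk->pid);
			if (!threadgroup)
				break;
		} while ((tsk = tsk->next) != leader);	/* while_each_thread() */

		return 0;
	}

Run it and it prints "attaching pid 5215": both continue statements jump
straight to the loop condition, so the single-task check
if (!threadgroup) break; is never reached and the walk moves on to a
sibling thread, exactly as in the transcript above.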
What's worse, if the task to be attached isn't the leader of the thread
group, we might never exit the loop, hence the CPU stall. Thanks to Oleg
for the analysis.
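For reference, while_each_thread() was at the time defined in
include/linux/sched.h as:

	#define while_each_thread(g, t) \
		while ((t = next_thread(t)) != g)

The walk only stops when next_thread() wraps back to the task we started
from; per Oleg's analysis, if that task isn't the thread-group leader it
can be removed from the thread list once it exits and is reaped, so the
continue lets the loop circle the remaining threads without ever meeting
its termination condition.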
This bug was introduced by commit 081aa458c38ba576bdd4265fc807fa95b48b9e79
("cgroup: consolidate cgroup_attach_task() and cgroup_attach_proc()")
[ lizf: - fixed the first continue, pointed out by Oleg,
- rewrote changelog. ]
Reported-by: Eunki Kim <eunki_kim@samsung.com>
Reported-by: Anjana V Kumar <anjanavk12@gmail.com>
Signed-off-by: Anjana V Kumar <anjanavk12@gmail.com>
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 kernel/cgroup.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 2e9b387971d..b6b26faf174 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1995,7 +1995,7 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
 
 		/* @tsk either already exited or can't exit until the end */
 		if (tsk->flags & PF_EXITING)
-			continue;
+			goto next;
 
 		/* as per above, nr_threads may decrease, but not increase. */
 		BUG_ON(i >= group_size);
@@ -2003,7 +2003,7 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
 		ent.cgrp = task_cgroup_from_root(tsk, root);
 		/* nothing to do if this task is already in the cgroup */
 		if (ent.cgrp == cgrp)
-			continue;
+			goto next;
 		/*
 		 * saying GFP_ATOMIC has no effect here because we did prealloc
 		 * earlier, but it's good form to communicate our expectations.
@@ -2011,7 +2011,7 @@ static int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk,
 		retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
 		BUG_ON(retval != 0);
 		i++;
-
+	next:
 		if (!threadgroup)
 			break;
 	} while_each_thread(leader, tsk);