author    Hugh Dickins <hughd@google.com>  2012-03-05 20:52:55 -0800
committer Linus Torvalds <torvalds@linux-foundation.org>  2012-03-06 08:18:23 -0800
commit    c09ff089aa62380ad904ea785bd713c56720270e (patch)
tree      6ddc11131cd557d0d3a32ddeb829bfefe542101b /mm/page_cgroup.c
parent    f3969bf78f140f437f51787dfc2751943ba454d1 (diff)
page_cgroup: fix horrid swap accounting regression
Why is memcg's swap accounting so broken? Insane counts, wrong ownership,
unfreeable structures, which later get freed and then accessed after free.

Turns out to be a tiny little 3.3-rc1 regression in 9fb4b7cc0724
"page_cgroup: add helper function to get swap_cgroup": the helper function
(actually named lookup_swap_cgroup()) returns an address using void*
arithmetic, but the structure in question is a short.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Bob Liu <lliubbo@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
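Editor's note on the bug class: page_address() returns a void *, and adding an
offset to a void * advances one byte at a time (a GCC extension the kernel
relies on), whereas the intent here is to index an array of struct swap_cgroup
entries, each two bytes wide (one unsigned short id). The userspace sketch
below only illustrates that difference; it is not the kernel code, and the
simplified struct layout and the names base/bad/good/map are assumptions made
for the example.

    /*
     * Minimal userspace sketch of why the old expression misbehaved:
     * arithmetic on void * advances by single bytes, while arithmetic
     * on struct swap_cgroup * advances by sizeof(struct swap_cgroup).
     * Struct layout and names are simplified for illustration only.
     */
    #include <stdio.h>

    struct swap_cgroup {
            unsigned short id;      /* two bytes per entry */
    };

    int main(void)
    {
            struct swap_cgroup map[8] = { 0 };
            void *base = map;       /* stands in for page_address(mappage) */
            unsigned long offset = 3;

            /* Buggy form: void * arithmetic scales by 1 byte and lands
             * inside entry 1 instead of on entry 3. */
            struct swap_cgroup *bad = (struct swap_cgroup *)(base + offset);

            /* Fixed form: assign to a typed pointer first, then index. */
            struct swap_cgroup *sc = base;
            struct swap_cgroup *good = sc + offset;

            printf("bad=%p good=%p expected=%p\n",
                   (void *)bad, (void *)good, (void *)&map[offset]);
            return 0;
    }

In the kernel, the same effect meant lookup_swap_cgroup() returned a pointer
short of (and usually misaligned with) the intended swap_cgroup entry, which
is consistent with the insane counts and wrong ownership described above.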
Diffstat (limited to 'mm/page_cgroup.c')
-rw-r--r--  mm/page_cgroup.c | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index de1616aa9b1e..1ccbd714059c 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -379,13 +379,15 @@ static struct swap_cgroup *lookup_swap_cgroup(swp_entry_t ent,
 	pgoff_t offset = swp_offset(ent);
 	struct swap_cgroup_ctrl *ctrl;
 	struct page *mappage;
+	struct swap_cgroup *sc;
 
 	ctrl = &swap_cgroup_ctrl[swp_type(ent)];
 	if (ctrlp)
 		*ctrlp = ctrl;
 
 	mappage = ctrl->map[offset / SC_PER_PAGE];
-	return page_address(mappage) + offset % SC_PER_PAGE;
+	sc = page_address(mappage);
+	return sc + offset % SC_PER_PAGE;
 }
 
 /**