path: root/mm/swapfile.c
author	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>	2009-01-07 18:07:49 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2009-01-08 08:31:04 -0800
commit	bced0520fe462bb94021dcabd32e99630c171be2 (patch)
tree	6fa234f4a25bc8231742aea13e7cc2664b0a69a6 /mm/swapfile.c
parent	7a81b88cb53e335ff7d019e6398c95792c817d93 (diff)
memcg: fix gfp_mask of callers of charge
Fix misuse of GFP_KERNEL.

Most callers of the mem_cgroup_charge_xxx functions currently pass GFP_KERNEL. This likely dates from the time when page_cgroup *was* dynamically allocated. Now all page_cgroup structures are allocated at boot, and mem_cgroup_try_to_free_pages() reclaims memory using GFP_HIGHUSER_MOVABLE combined with the caller's bits in GFP_RECLAIM_MASK.

 * This is because memcg only wants to reduce memory usage; "where should we reclaim from?" is not a question memcg has to answer.

This patch changes the gfp masks to GFP_HIGHUSER_MOVABLE where possible.

Note: this patch does not change behavior; it only makes the information shown in the source code sane.

Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Balbir Singh <balbir@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
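For readers unfamiliar with the point about GFP_HIGHUSER_MOVABLE + GFP_RECLAIM_MASK, the stand-alone user-space sketch below illustrates the idea: the memcg reclaim path keeps the caller's reclaim-behaviour bits but takes its placement bits from GFP_HIGHUSER_MOVABLE, so the caller's zone hints do not matter. The flag values and the memcg_reclaim_gfp() helper are illustrative stand-ins, not the real definitions from include/linux/gfp.h or the actual kernel reclaim code.

/*
 * Illustrative sketch only, not kernel source.  Shows how a memcg
 * reclaim path can combine the caller's reclaim-behaviour bits with
 * the GFP_HIGHUSER_MOVABLE placement hints.  All flag values below
 * are made-up stand-ins for the real gfp.h definitions.
 */
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_WAIT		0x01u	/* may sleep */
#define __GFP_IO		0x02u	/* may start I/O */
#define __GFP_FS		0x04u	/* may call into the FS */
#define __GFP_HIGHMEM		0x08u	/* placement: highmem allowed */
#define __GFP_MOVABLE		0x10u	/* placement: movable zone allowed */

#define GFP_KERNEL		(__GFP_WAIT | __GFP_IO | __GFP_FS)
#define GFP_HIGHUSER_MOVABLE	(GFP_KERNEL | __GFP_HIGHMEM | __GFP_MOVABLE)

/* bits that describe *how* to reclaim, as opposed to *where* to allocate */
#define GFP_RECLAIM_MASK	(__GFP_WAIT | __GFP_IO | __GFP_FS)

static gfp_t memcg_reclaim_gfp(gfp_t caller_mask)
{
	/* keep the caller's reclaim bits, take placement from HIGHUSER_MOVABLE */
	return (caller_mask & GFP_RECLAIM_MASK) |
	       (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
}

int main(void)
{
	/* both callers end up reclaiming with the same placement hints */
	printf("caller GFP_KERNEL           -> reclaim mask %#x\n",
	       memcg_reclaim_gfp(GFP_KERNEL));
	printf("caller GFP_HIGHUSER_MOVABLE -> reclaim mask %#x\n",
	       memcg_reclaim_gfp(GFP_HIGHUSER_MOVABLE));
	return 0;
}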
Diffstat (limited to 'mm/swapfile.c')
-rw-r--r--	mm/swapfile.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index fb926efb516..ddc6d92be2c 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -695,7 +695,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	pte_t *pte;
 	int ret = 1;
 
-	if (mem_cgroup_try_charge(vma->vm_mm, GFP_KERNEL, &ptr))
+	if (mem_cgroup_try_charge(vma->vm_mm, GFP_HIGHUSER_MOVABLE, &ptr))
 		ret = -ENOMEM;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);