author     KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>   2009-01-07 18:07:56 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>       2009-01-08 08:31:05 -0800
commit     d13d144309d2e5a3e6ad978b16c1d0226ddc9231 (patch)
tree       37c19902b527823956db969d9428737081b2a94d /include
parent     c1e862c1f5ad34771b6d0a528cf681e0dcad7c86 (diff)
download   linux-3.10-d13d144309d2e5a3e6ad978b16c1d0226ddc9231.tar.gz
           linux-3.10-d13d144309d2e5a3e6ad978b16c1d0226ddc9231.tar.bz2
           linux-3.10-d13d144309d2e5a3e6ad978b16c1d0226ddc9231.zip
memcg: handle swap caches
SwapCache support for the memory resource controller (memcg).

Before the mem+swap controller, memcg itself should handle SwapCache in a
proper way; this patch is cut out from that work.  In current memcg,
SwapCache is simply leaked and a user can create tons of SwapCache.  This is
a leak of accounting and should be handled.

SwapCache accounting is done as follows:

  charge (anon)
    - charged when the page is mapped.  (Because of readahead, charging at
      add_to_swap_cache() is not sane.)
  uncharge (anon)
    - uncharged when the page is dropped from the swap cache and fully
      unmapped, i.e. it is not uncharged at unmap.
      Note: deletion from the swap cache at swap-in is done after rmap
      information is established.
  charge (shmem)
    - charged at swap-in.  This prevents a charge at add_to_page_cache().
  uncharge (shmem)
    - uncharged when the page is dropped from the swap cache and is not on
      shmem's radix-tree.

At migration, the check against the 'old page' is modified to handle shmem.

Compared to the old version that was discussed (and caused trouble), we have
the advantages of:
  - the PCG_USED bit.
  - simple migration handling.

So the situation is much easier than several months ago, maybe.

[hugh@veritas.com: memcg: handle swap caches build fix]
Reviewed-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Tested-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
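The charge/uncharge rules above amount to a small state machine: an anonymous
SwapCache page is charged when it is first mapped and uncharged only once it
is both fully unmapped and gone from the swap cache; a shmem page is charged
at swap-in and uncharged once it is off both the swap cache and shmem's
radix-tree.  The stand-alone C sketch below only models those rules for
illustration; the names (model_page, model_map, model_try_uncharge, the
charged_pages counter) are hypothetical and are not symbols from this patch.

/* Stand-alone model of the SwapCache accounting rules described above.
 * All names here are illustrative, not kernel symbols. */
#include <stdbool.h>
#include <stdio.h>

static long charged_pages;      /* stands in for the memcg page counter */

struct model_page {
        bool in_swapcache;      /* still on the swap cache's radix-tree */
        int  mapcount;          /* number of ptes mapping the page */
        bool charged;           /* stands in for the PCG_USED bit */
};

/* charge (anon): charged when it's mapped, not at add_to_swap_cache() */
static void model_map(struct model_page *p)
{
        if (p->mapcount++ == 0 && !p->charged) {
                p->charged = true;
                charged_pages++;
        }
}

/* charge (shmem): charged at swap-in instead of at add_to_page_cache() */
static void model_shmem_swapin(struct model_page *p)
{
        p->in_swapcache = true;
        if (!p->charged) {
                p->charged = true;
                charged_pages++;
        }
}

/* uncharge: only when the page is off the swap cache and is neither
 * mapped (anon) nor on shmem's radix-tree any more */
static void model_try_uncharge(struct model_page *p, bool on_shmem_tree)
{
        if (p->charged && !p->in_swapcache &&
            p->mapcount == 0 && !on_shmem_tree) {
                p->charged = false;
                charged_pages--;
        }
}

int main(void)
{
        struct model_page anon = { .in_swapcache = true };
        struct model_page shm  = { 0 };

        /* anon: charged when mapped, not when added to the swap cache */
        model_map(&anon);
        anon.mapcount--;                /* unmap alone does not uncharge */
        model_try_uncharge(&anon, false);
        printf("anon after unmap:          %ld charged\n", charged_pages); /* 1 */
        anon.in_swapcache = false;      /* now dropped from the swap cache too */
        model_try_uncharge(&anon, false);
        printf("anon after swapcache drop: %ld charged\n", charged_pages); /* 0 */

        /* shmem: charged at swap-in, uncharged once off swap cache and tree */
        model_shmem_swapin(&shm);
        shm.in_swapcache = false;
        model_try_uncharge(&shm, /* on_shmem_tree = */ false);
        printf("shmem after drop:          %ld charged\n", charged_pages); /* 0 */
        return 0;
}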
Diffstat (limited to 'include')
 -rw-r--r--   include/linux/swap.h | 22
 1 file changed, 22 insertions(+), 0 deletions(-)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 91dee50fe26..f8f3907533f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -333,6 +333,22 @@ static inline void disable_swap_token(void)
 	put_swap_token(swap_token_mm);
 }
 
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+extern int mem_cgroup_cache_charge_swapin(struct page *page,
+			struct mm_struct *mm, gfp_t mask, bool locked);
+extern void mem_cgroup_uncharge_swapcache(struct page *page);
+#else
+static inline
+int mem_cgroup_cache_charge_swapin(struct page *page,
+			struct mm_struct *mm, gfp_t mask, bool locked)
+{
+	return 0;
+}
+static inline void mem_cgroup_uncharge_swapcache(struct page *page)
+{
+}
+#endif
+
 #else /* CONFIG_SWAP */
 
 #define nr_swap_pages				0L
@@ -409,6 +425,12 @@ static inline swp_entry_t get_swap_page(void)
 #define has_swap_token(x) 0
 #define disable_swap_token() do { } while(0)
 
+static inline int mem_cgroup_cache_charge_swapin(struct page *page,
+			struct mm_struct *mm, gfp_t mask, bool locked)
+{
+	return 0;
+}
+
 #endif /* CONFIG_SWAP */
 #endif /* __KERNEL__*/
 #endif /* _LINUX_SWAP_H */
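The #else branches above keep callers free of #ifdefs: when
CONFIG_CGROUP_MEM_RES_CTLR (or CONFIG_SWAP) is not set, the hooks collapse
into no-op static inlines that the compiler optimizes away.  A minimal
stand-alone illustration of that stub pattern follows; DEMO_FEATURE and
demo_charge() are placeholders, not kernel symbols.

/* Demo of the "static inline stub under #else" pattern used in the hunks
 * above.  Compile without -DDEMO_FEATURE to use the no-op stub. */
#include <stdio.h>

#ifdef DEMO_FEATURE
extern int demo_charge(void *page);             /* real implementation elsewhere */
#else
static inline int demo_charge(void *page)       /* compiles to nothing */
{
        (void)page;
        return 0;
}
#endif

int main(void)
{
        int err = demo_charge(NULL);            /* caller needs no #ifdef */
        printf("demo_charge() returned %d\n", err);
        return 0;
}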