| author | KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> | 2009-01-07 18:08:17 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2009-01-08 08:31:07 -0800 |
| commit | eeee9a8cd1e93c8b94e7788790fa9e2f8910c779 (patch) | |
| tree | 2ef0a61a4ce12410ecfa48014a0181c03e73a3cb | |
| parent | c9f299d9862deadf9fbee3ca28d915fdb006975a (diff) | |
mm: make get_scan_ratio() safe for memcg
Currently, get_scan_ratio() always calculates the balancing value for
global reclaim, and memcg reclaim doesn't use it. Therefore it has no
scan_global_lru() condition.

However, we plan to extend get_scan_ratio() so that it is usable for
memcg reclaim too, later. This patch therefore moves the code in
get_scan_ratio() that depends on global reclaim inside an explicit
scan_global_lru() condition.

This patch doesn't have any functional change.
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Balbir Singh <balbir@in.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
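For orientation (not part of this patch): scan_global_lru() is the predicate vmscan uses to tell global reclaim apart from memcg reclaim. In mm/vmscan.c of this era it is defined roughly as the following macro, keyed off whether the scan_control carries a memory cgroup:

```c
/* Rough sketch of the era's definition in mm/vmscan.c: reclaim is
 * "global" when no memory cgroup is attached to the scan_control. */
#ifdef CONFIG_CGROUP_MEM_RES_CTLR
#define scan_global_lru(sc)	(!(sc)->mem_cgroup)
#else
#define scan_global_lru(sc)	(1)
#endif
```

With that check in place, the force-scan-anon shortcut below only fires for global reclaim; memcg reclaim falls through to the normal anon/file balancing.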
-rw-r--r-- | mm/vmscan.c | 15 |
1 file changed, 9 insertions(+), 6 deletions(-)
```diff
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6827d35954f..e2b31a522a6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1376,13 +1376,16 @@ static void get_scan_ratio(struct zone *zone, struct scan_control *sc,
 		zone_nr_pages(zone, sc, LRU_INACTIVE_ANON);
 	file  = zone_nr_pages(zone, sc, LRU_ACTIVE_FILE) +
 		zone_nr_pages(zone, sc, LRU_INACTIVE_FILE);
-	free  = zone_page_state(zone, NR_FREE_PAGES);
 
-	/* If we have very few page cache pages, force-scan anon pages. */
-	if (unlikely(file + free <= zone->pages_high)) {
-		percent[0] = 100;
-		percent[1] = 0;
-		return;
+	if (scan_global_lru(sc)) {
+		free = zone_page_state(zone, NR_FREE_PAGES);
+		/* If we have very few page cache pages,
+		   force-scan anon pages. */
+		if (unlikely(file + free <= zone->pages_high)) {
+			percent[0] = 100;
+			percent[1] = 0;
+			return;
+		}
 	}
 
 	/*
```