author     Johannes Weiner <hannes@cmpxchg.org>             2011-02-01 15:52:42 -0800
committer  Linus Torvalds <torvalds@linux-foundation.org>   2011-02-02 16:03:19 -0800
commit     9221edb7120e2dc3ae90f1c58514979f7ba40e46 (patch)
tree       73818b41f9b7d73cc20fb9f4efbbe73a89b25058 /mm
parent     af241a083404acda7ba3690e5b7697949d729fcc (diff)
memcg: prevent endless loop when charging huge pages
The charging code can encounter a charge size that is bigger than a
regular page in two situations: one is a batched charge to fill the
per-cpu stocks, the other is a huge page charge.
This code is split across two functions, however, and only the outer one is
aware of huge pages. When the charge fails, the inner function tells the
outer function to retry whenever the charge size is bigger than a regular
page, on the assumption that batched charging is the only such case. As a
result, the outer function retries a failing huge page charge forever.
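To make the failure mode concrete, here is a minimal, self-contained sketch of
that interaction. It is not the kernel code: res_counter_has_room(),
do_charge_buggy(), try_charge() and the size constants are simplified
stand-ins for the real __mem_cgroup_try_charge() / __mem_cgroup_do_charge()
pair, chosen only to show why a huge page charge that keeps failing is retried
at full size forever while a batched charge falls back to a single page.

#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE   4096UL
#define CHARGE_SIZE (32 * PAGE_SIZE)   /* per-cpu stock batch (illustrative value) */
#define HPAGE_SIZE  (512 * PAGE_SIZE)  /* huge page (illustrative value) */

enum charge_result { CHARGE_OK, CHARGE_RETRY, CHARGE_FAIL };

/* Stand-in for the res_counter being over its limit: the charge never fits. */
static bool res_counter_has_room(unsigned long size)
{
        (void)size;
        return false;
}

/*
 * Inner function before the fix: any size above a regular page is assumed
 * to be optional batching, so it asks the caller to retry with less.
 */
static enum charge_result do_charge_buggy(unsigned long size)
{
        if (res_counter_has_room(size))
                return CHARGE_OK;
        if (size > PAGE_SIZE)           /* batch *or* huge page, can't tell */
                return CHARGE_RETRY;
        return CHARGE_FAIL;             /* the reclaim/OOM path would run here */
}

/*
 * Outer function: on CHARGE_RETRY it drops back to the requested page size
 * and tries again.  For a huge page that *is* the full size, so the loop
 * never makes progress.  Capped here so the demo terminates.
 */
static void try_charge(unsigned long page_size)
{
        unsigned long csize = CHARGE_SIZE > page_size ? CHARGE_SIZE : page_size;
        int attempt;

        for (attempt = 1; attempt <= 5; attempt++) {
                enum charge_result ret = do_charge_buggy(csize);

                if (ret != CHARGE_RETRY) {
                        printf("attempt %d: csize=%lu -> %s\n", attempt, csize,
                               ret == CHARGE_OK ? "charged" : "reclaim/OOM path");
                        return;
                }
                printf("attempt %d: csize=%lu -> retry with csize=%lu\n",
                       attempt, csize, page_size);
                csize = page_size;      /* "retry with a single page" */
        }
        printf("still retrying at csize=%lu: an endless loop in the real code\n",
               page_size);
}

int main(void)
{
        printf("regular page charge (batched):\n");
        try_charge(PAGE_SIZE);
        printf("huge page charge:\n");
        try_charge(HPAGE_SIZE);
        return 0;
}

Running this shows the batched charge dropping to PAGE_SIZE and reaching the
reclaim path on the second attempt, while the huge page charge keeps retrying
at the same size.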
This patch makes the inner function distinguish between batch charging and a
single huge page charge. It signals another attempt only when batch charging
failed, and goes into regular reclaim when it is called on behalf of a huge
page.
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Cc: Balbir Singh <balbir@linux.vnet.ibm.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/memcontrol.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 79abb1fd39d2..50eb50e100fd 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1837,8 +1837,15 @@ static int __mem_cgroup_do_charge(struct mem_cgroup *mem, gfp_t gfp_mask,
                 flags |= MEM_CGROUP_RECLAIM_NOSWAP;
         } else
                 mem_over_limit = mem_cgroup_from_res_counter(fail_res, res);
-
-        if (csize > PAGE_SIZE) /* change csize and retry */
+        /*
+         * csize can be either a huge page (HPAGE_SIZE), a batch of
+         * regular pages (CHARGE_SIZE), or a single regular page
+         * (PAGE_SIZE).
+         *
+         * Never reclaim on behalf of optional batching, retry with a
+         * single page instead.
+         */
+        if (csize == CHARGE_SIZE)
                 return CHARGE_RETRY;
 
         if (!(gfp_mask & __GFP_WAIT))