commit     62ade86ab6c7e26409229ca45503cae97bf698cf
tree       569ce54e1fce1f985cc9525712c39b2b020e1fdf /mm
parent     b1dab2f0409c478fd2d9e227c2c018524eca9603
author     Hugh Dickins <hughd@google.com>                    2012-05-18 11:28:34 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>     2012-05-19 10:10:27 -0700
memcg,thp: fix res_counter:96 regression
Occasionally, testing memcg's move_charge_at_immigrate on rc7 shows
a flurry of hundreds of warnings at kernel/res_counter.c:96, where
res_counter_uncharge_locked() does WARN_ON(counter->usage < val).
The first trace of each flurry implicates __mem_cgroup_cancel_charge()
of mc.precharge, and an audit of mc.precharge handling points to
mem_cgroup_move_charge_pte_range()'s THP handling in commit 12724850e806
("memcg: avoid THP split in task migration").
Checking !mc.precharge is good enough everywhere else, where only a single page is to be charged; but here the "mc.precharge -= HPAGE_PMD_NR" that is likely to follow is liable to underflow (a lot can change since the precharge was estimated).
Simply check against HPAGE_PMD_NR instead: there is probably a better alternative, such as trying to precharge for more and splitting the huge page if that fails; but this one-liner is safer for now - no kernel/res_counter.c:96 warnings seen in 26 hours of testing.
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
 mm/memcontrol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b659260c56a..7685d4a0b3c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5481,7 +5481,7 @@ static int mem_cgroup_move_charge_pte_range(pmd_t *pmd,
 	 * part of thp split is not executed yet.
 	 */
 	if (pmd_trans_huge_lock(pmd, vma) == 1) {
-		if (!mc.precharge) {
+		if (mc.precharge < HPAGE_PMD_NR) {
 			spin_unlock(&vma->vm_mm->page_table_lock);
 			return 0;
 		}