author     Hugh Dickins <hughd@google.com>                   2018-11-30 14:10:29 -0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2018-12-05 19:41:09 +0100
commit     98f1ae169c2e7264b198c3df08612f1bcd0fcfac (patch)
tree       c565fc34854246c7c35fe94f194c186b8a794ba1 /mm
parent     81d2848c99cc4951eb3f7c3ce4313e830b07cc2b (diff)
mm/khugepaged: fix crashes due to misaccounted holes
commit aaa52e340073b7f4593b3c4ddafcafa70cf838b5 upstream.
Huge tmpfs testing on a shortish file mapped into a pmd-rounded extent
hit shmem_evict_inode()'s WARN_ON(inode->i_blocks) followed by
clear_inode()'s BUG_ON(inode->i_data.nrpages) when the file was later
closed and unlinked.
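For reference, the two checks that fired look like this (a paraphrased
sketch from kernels of this era; surrounding context elided):

	/* mm/shmem.c, end of shmem_evict_inode() */
	WARN_ON(inode->i_blocks);	/* tripped: blocks still accounted */
	shmem_free_inode(inode->i_sb);
	clear_inode(inode);

	/* fs/inode.c, clear_inode() */
	BUG_ON(inode->i_data.nrpages);	/* tripped: pages still counted */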
khugepaged's collapse_shmem() was forgetting to update mapping->nrpages
on the rollback path, after it had added holes that it then needed to
undo.
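That is (a minimal sketch of the corrected rollback ordering, matching
the khugepaged.c hunk below; nr_none counts the hole entries that were
inserted):

	spin_lock_irq(&mapping->tree_lock);
	/* the holes are about to be deleted again, so unwind both the
	 * page-cache count and the block accounting for them */
	mapping->nrpages -= nr_none;		/* the missing update */
	shmem_uncharge(mapping->host, nr_none);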
There is indeed an irritating asymmetry between shmem_charge(), whose
callers want it to increment nrpages after successfully accounting
blocks, and shmem_uncharge(), when __delete_from_page_cache() already
decremented nrpages itself: oh well, just add a comment on that to them
both.
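To illustrate the asymmetry (a hypothetical caller sketch, not a real
call site):

	if (shmem_charge(inode, 1)) {	/* increments nrpages itself */
		/* ... insert the page into the mapping ... */
	}

	spin_lock_irq(&mapping->tree_lock);
	__delete_from_page_cache(page, NULL);	/* decrements nrpages */
	spin_unlock_irq(&mapping->tree_lock);
	shmem_uncharge(inode, 1);	/* so must not touch nrpages */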
And shmem_recalc_inode() is supposed to be called when the accounting is
expected to be in balance (so it can deduce from imbalance that reclaim
discarded some pages): so change shmem_charge() to update nrpages
earlier (though it's rare for the difference to matter at all).
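For context, shmem_recalc_inode() deduces reclaim from exactly that
balance (sketched from kernels of this era; a reconstruction, not a
quotation):

	static void shmem_recalc_inode(struct inode *inode)
	{
		struct shmem_inode_info *info = SHMEM_I(inode);
		long freed;

		/* any shortfall of nrpages against alloced (minus
		 * swapped) is assumed to be pages dropped by reclaim */
		freed = info->alloced - info->swapped -
			inode->i_mapping->nrpages;
		if (freed > 0) {
			info->alloced -= freed;
			inode->i_blocks -= freed * BLOCKS_PER_PAGE;
			shmem_inode_unacct_blocks(inode, freed);
		}
	}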
Link: http://lkml.kernel.org/r/alpine.LSU.2.11.1811261523450.2275@eggly.anvils
Fixes: 800d8c63b2e98 ("shmem: add huge pages support")
Fixes: f3f0e1d2150b2 ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: <stable@vger.kernel.org> [4.8+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'mm')
 mm/khugepaged.c | 4 +++-
 mm/shmem.c      | 6 ++++--
 2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d4a06afbeda4..be7b0863e33c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1539,8 +1539,10 @@ tree_unlocked:
 		*hpage = NULL;
 	} else {
 		/* Something went wrong: rollback changes to the radix-tree */
-		shmem_uncharge(mapping->host, nr_none);
 		spin_lock_irq(&mapping->tree_lock);
+		mapping->nrpages -= nr_none;
+		shmem_uncharge(mapping->host, nr_none);
+
 		radix_tree_for_each_slot(slot, &mapping->page_tree, &iter,
 				start) {
 			if (iter.index >= end)
diff --git a/mm/shmem.c b/mm/shmem.c
index fa08f56fd5e5..177fef62cbbd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -296,12 +296,14 @@ bool shmem_charge(struct inode *inode, long pages)
 	if (!shmem_inode_acct_block(inode, pages))
 		return false;
 
+	/* nrpages adjustment first, then shmem_recalc_inode() when balanced */
+	inode->i_mapping->nrpages += pages;
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced += pages;
 	inode->i_blocks += pages * BLOCKS_PER_PAGE;
 	shmem_recalc_inode(inode);
 	spin_unlock_irqrestore(&info->lock, flags);
-	inode->i_mapping->nrpages += pages;
 
 	return true;
 }
@@ -311,6 +313,8 @@ void shmem_uncharge(struct inode *inode, long pages)
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long flags;
 
+	/* nrpages adjustment done by __delete_from_page_cache() or caller */
+
 	spin_lock_irqsave(&info->lock, flags);
 	info->alloced -= pages;
 	inode->i_blocks -= pages * BLOCKS_PER_PAGE;