author     Hugh Dickins <hughd@google.com>                   2011-03-22 16:33:07 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>    2011-03-22 17:44:04 -0700
commit     5b280c0cc70062967bb9d630b216375b18db3a0b
tree       6242d234d08fdd433b99c425e25f6562cd51cd0f  /mm
parent     9d8aa4ea855e0d64bb6926acb5618e6d1e2ed344
mm: don't return 0 too early from find_get_pages()
Callers of find_get_pages(), or its wrapper pagevec_lookup() - notably
truncate_inode_pages_range() - stop looking further when it returns 0.
But if an interrupt comes just after its radix_tree_gang_lookup_slot(),
especially if we have preemptible RCU enabled, isn't it conceivable that
all 14 pages returned (a full pagevec) could be removed from the page cache
by shrink_page_list() before find_get_pages() gets to process them? That
would cause it to return 0 although there may be plenty more pages beyond.
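
For illustration only, here is a minimal sketch of the caller pattern described
above, loosely modeled on truncate_inode_pages_range(); the helper name
drop_pages_from() is hypothetical, not kernel code. The point is that the loop
ends as soon as pagevec_lookup() returns 0, so a spurious 0 leaves every later
page untouched.

	#include <linux/pagemap.h>
	#include <linux/pagevec.h>
	#include <linux/sched.h>

	/* Hypothetical caller: stops at the first 0 from pagevec_lookup(). */
	static void drop_pages_from(struct address_space *mapping, pgoff_t index)
	{
		struct pagevec pvec;
		unsigned int i;

		pagevec_init(&pvec, 0);
		while (pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE)) {
			for (i = 0; i < pagevec_count(&pvec); i++) {
				struct page *page = pvec.pages[i];

				index = page->index + 1;
				/* ... lock and truncate or otherwise process the page ... */
			}
			pagevec_release(&pvec);
			cond_resched();
		}
		/* A premature 0 above would leave pages at and beyond 'index'. */
	}

Because the caller cannot distinguish "no pages left" from "all fourteen slots
raced away", the fix has to be made on the find_get_pages() side.
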
Make find_get_pages() and find_get_pages_tag() check for this unlikely
case, and restart should it occur; but callers of find_get_pages_contig()
have no such expectation; it's okay for that one to return 0 early.
I have not seen this in practice, just worried by the possibility.
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Cc: Salman Qazi <sqazi@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r--  mm/filemap.c | 14
1 file changed, 14 insertions, 0 deletions
diff --git a/mm/filemap.c b/mm/filemap.c
index a29318147365..f807afda86f2 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -885,6 +885,13 @@ repeat:
 		pages[ret] = page;
 		ret++;
 	}
+
+	/*
+	 * If all entries were removed before we could secure them,
+	 * try again, because callers stop trying once 0 is returned.
+	 */
+	if (unlikely(!ret && nr_found))
+		goto restart;
 	rcu_read_unlock();
 	return ret;
 }
@@ -1004,6 +1011,13 @@ repeat:
 		pages[ret] = page;
 		ret++;
 	}
+
+	/*
+	 * If all entries were removed before we could secure them,
+	 * try again, because callers stop trying once 0 is returned.
+	 */
+	if (unlikely(!ret && nr_found))
+		goto restart;
 	rcu_read_unlock();
 
 	if (ret)
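
For context, a condensed, illustrative sketch of where the new check sits
inside find_get_pages() after this patch, written against the 2.6.38-era
radix-tree API; the per-slot retry on a failed speculative get and the
start-index adjustment on radix_tree_deref_retry() are simplified away, so
treat this as an outline rather than the verbatim function.

	unsigned find_get_pages(struct address_space *mapping, pgoff_t start,
				unsigned int nr_pages, struct page **pages)
	{
		unsigned int i;
		unsigned int ret;
		unsigned int nr_found;

		rcu_read_lock();
	restart:
		nr_found = radix_tree_gang_lookup_slot(&mapping->page_tree,
					(void ***)pages, start, nr_pages);
		ret = 0;
		for (i = 0; i < nr_found; i++) {
			struct page *page = radix_tree_deref_slot((void **)pages[i]);

			if (unlikely(!page))
				continue;
			if (radix_tree_deref_retry(page))
				goto restart;
			if (!page_cache_get_speculative(page))
				continue;	/* the real code retries this slot */
			/* Has the page moved since the lookup? */
			if (unlikely(page != *((void **)pages[i]))) {
				page_cache_release(page);
				continue;	/* the real code retries this slot */
			}
			pages[ret++] = page;
		}
		/*
		 * The new check: every looked-up slot was emptied before we
		 * could take a reference, so look again rather than return 0.
		 */
		if (unlikely(!ret && nr_found))
			goto restart;
		rcu_read_unlock();
		return ret;
	}

Note that the check tests nr_found as well as ret: if the gang lookup itself
found nothing, returning 0 is the correct end-of-range signal, and restarting
there would spin forever on an empty range.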