author     Linus Torvalds <torvalds@linux-foundation.org>   2012-03-21 13:32:19 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2012-03-21 13:32:19 -0700
commit     3a990a52f9f25f45469e272017a31e7a3fda60ed
tree       366f639d9ce1e907b65caa72bc098df6c4b5a240 /mm/mmap.c
parent     3556485f1595e3964ba539e39ea682acbb835cee
parent     f5cc4eef9987d0b517364d01e290d6438e47ee5d
Merge branch 'vm' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull munmap/truncate race fixes from Al Viro:
"Fixes for racy use of unmap_vmas() on truncate-related codepaths"
* 'vm' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
VM: make zap_page_range() callers that act on a single VMA use separate helper
VM: make unmap_vmas() return void
VM: don't bother with feeding upper limit to tlb_finish_mmu() in exit_mmap()
VM: make zap_page_range() return void
VM: can't go through the inner loop in unmap_vmas() more than once...
VM: unmap_page_range() can return void
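
The series boils down to an interface change: unmap_vmas() and zap_page_range() stop reporting how far they got, and callers that only ever act on a single VMA (the truncate/invalidate paths) get a dedicated helper. A minimal sketch of the resulting shapes, assuming the mm/memory.c interfaces of that era; this is kernel-internal code, not compilable on its own, and details such as zap_page_range_single() being static may differ from the tree:

/* Sketch of the post-series interfaces (assumed, simplified). */

/* Unmap every VMA overlapping [start_addr, end_addr). Returns void now:
 * callers may no longer depend on how far a partial walk got, which is
 * the contract the racy truncate-related callers relied on. */
void unmap_vmas(struct mmu_gather *tlb, struct vm_area_struct *vma,
		unsigned long start_addr, unsigned long end_addr,
		unsigned long *nr_accounted, struct zap_details *details);

/* Multi-VMA zap (e.g. madvise(MADV_DONTNEED)); also returns void now. */
void zap_page_range(struct vm_area_struct *vma, unsigned long start,
		unsigned long size, struct zap_details *details);

/* New helper for callers that act on exactly one VMA. */
static void zap_page_range_single(struct vm_area_struct *vma,
		unsigned long address, unsigned long size,
		struct zap_details *details);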
Diffstat (limited to 'mm/mmap.c')
-rw-r--r--   mm/mmap.c   5
1 file changed, 2 insertions, 3 deletions
diff --git a/mm/mmap.c b/mm/mmap.c
index 39a68ddf38b..6f3766b5780 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2237,7 +2237,6 @@ void exit_mmap(struct mm_struct *mm)
 	struct mmu_gather tlb;
 	struct vm_area_struct *vma;
 	unsigned long nr_accounted = 0;
-	unsigned long end;
 
 	/* mm's last user has gone, and its about to be pulled down */
 	mmu_notifier_release(mm);
@@ -2262,11 +2261,11 @@ void exit_mmap(struct mm_struct *mm)
 	tlb_gather_mmu(&tlb, mm, 1);
 	/* update_hiwater_rss(mm) here? but nobody should be looking */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
-	end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
+	unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
 
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
-	tlb_finish_mmu(&tlb, 0, end);
+	tlb_finish_mmu(&tlb, 0, -1);
 
 	/*
 	 * Walk the list again, actually closing and freeing it,
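
In exit_mmap() above the whole address space is being torn down, so unmap_vmas() runs over the full 0..-1 range and tlb_finish_mmu() can simply be handed the same full-range sentinel rather than an end address computed by the unmap. On the truncate side, the single-VMA helper would be used roughly like this; a sketch assuming the shape of unmap_mapping_range_vma() in mm/memory.c after the series, with the old restart logic (which consumed zap_page_range()'s return value) gone:

/* Sketch only: assumed post-series shape of the truncate-path caller. */
static void unmap_mapping_range_vma(struct vm_area_struct *vma,
		unsigned long start_addr, unsigned long end_addr,
		struct zap_details *details)
{
	/* Exactly one VMA, exactly one zap: nothing to restart, so
	 * nothing useful to return. */
	zap_page_range_single(vma, start_addr, end_addr - start_addr, details);
}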