author      Hugh Dickins <hugh@veritas.com>                    2005-04-19 13:29:19 -0700
committer   Linus Torvalds <torvalds@ppc970.osdl.org.(none)>   2005-04-19 13:29:19 -0700
commit      e2cdef8c847b480529b7e26991926aab4be008e6 (patch)
tree        b936ab7f0964f56bc3312ad9ad956e978ac39895 /mm
parent      021740dc30d184e3b0fa7679936e65a56090c425 (diff)
[PATCH] freepgt: free_pgtables from FIRST_USER_ADDRESS
The patches to free_pgtables by vma left problems on any architectures which
leave some user address page table entries unencapsulated by vma. Andi has
fixed the 32-bit vDSO on x86_64 to use a vma. Now fix arm (and arm26), whose
first PAGE_SIZE is reserved (perhaps) for machine vectors.
Our calls to free_pgtables must not touch that area, and exit_mmap's
BUG_ON(nr_ptes) must allow for the fact that arm's get_pgd_slow may (or may not)
have allocated an extra page table, which its free_pgd_slow would free later.
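The bound used by the relaxed BUG_ON in exit_mmap (see the diff below) is the
number of page tables needed to cover everything below FIRST_USER_ADDRESS,
rounded up to a whole page table. A minimal standalone sketch of that
arithmetic, using illustrative arm-like constants rather than values from any
real kernel configuration:

	#include <assert.h>

	/* Illustrative constants only (assumed, arm-like); not from a real config. */
	#define PAGE_SIZE		4096UL
	#define PMD_SHIFT		21		/* one page table maps 2MB here */
	#define PMD_SIZE		(1UL << PMD_SHIFT)
	#define FIRST_USER_ADDRESS	PAGE_SIZE	/* arm reserves the first page */

	int main(void)
	{
		/* Round up: a partially covered span still needs one whole page table. */
		unsigned long allowed = (FIRST_USER_ADDRESS + PMD_SIZE - 1) >> PMD_SHIFT;

		assert(allowed == 1);	/* arm may legitimately keep one extra page table */

		/* With FIRST_USER_ADDRESS == 0 the bound is 0: the old strict check. */
		return 0;
	}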
FIRST_USER_PGD_NR has misled me and others: until all the arches define
FIRST_USER_ADDRESS instead, a hack in mmap.c derives one from the other. This
patch fixes the bugs; the remaining patches just clean it up.
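As a sketch of where that cleanup is heading (an assumption about the follow-up
patches, not code from this one), each architecture's page table header would
define FIRST_USER_ADDRESS directly, after which the temporary hack in mmap.c
can go:

	/* Hypothetical per-arch definitions; placement and values are assumptions. */

	/* Most architectures: nothing below the first vma needs preserving. */
	#define FIRST_USER_ADDRESS	0UL

	/* arm / arm26: the first PAGE_SIZE is reserved (perhaps) for machine vectors,
	 * so free_pgtables() must never reach below it:
	 *	#define FIRST_USER_ADDRESS	PAGE_SIZE
	 */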
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/mmap.c   11
1 file changed, 8 insertions, 3 deletions
diff --git a/mm/mmap.c b/mm/mmap.c
index 0fa87a5ae2c..ac6e694c3b6 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1612,6 +1612,11 @@ static void unmap_vma_list(struct mm_struct *mm, struct vm_area_struct *vma)
 	validate_mm(mm);
 }
 
+#ifndef FIRST_USER_ADDRESS	/* temporary hack */
+#define THIS_IS_ARM		FIRST_USER_PGD_NR
+#define FIRST_USER_ADDRESS	(THIS_IS_ARM * PAGE_SIZE)
+#endif
+
 /*
  * Get rid of page table information in the indicated region.
  *
@@ -1630,7 +1635,7 @@ static void unmap_region(struct mm_struct *mm,
 	tlb = tlb_gather_mmu(mm, 0);
 	unmap_vmas(&tlb, mm, vma, start, end, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
-	free_pgtables(&tlb, vma, prev? prev->vm_end: 0,
+	free_pgtables(&tlb, vma, prev? prev->vm_end: FIRST_USER_ADDRESS,
 		next? next->vm_start: 0);
 	tlb_finish_mmu(tlb, start, end);
 	spin_unlock(&mm->page_table_lock);
@@ -1910,7 +1915,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	end = unmap_vmas(&tlb, mm, vma, 0, -1, &nr_accounted, NULL);
 	vm_unacct_memory(nr_accounted);
-	free_pgtables(&tlb, vma, 0, 0);
+	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
 	tlb_finish_mmu(tlb, 0, end);
 
 	mm->mmap = mm->mmap_cache = NULL;
@@ -1931,7 +1936,7 @@ void exit_mmap(struct mm_struct *mm)
 		vma = next;
 	}
 
-	BUG_ON(mm->nr_ptes);	/* This is just debugging */
+	BUG_ON(mm->nr_ptes > (FIRST_USER_ADDRESS+PMD_SIZE-1)>>PMD_SHIFT);
 }
 
 /* Insert vm structure into process list sorted by address