| author | Rik van Riel <riel@redhat.com> | 2008-10-18 20:26:50 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-10-20 08:52:31 -0700 |
| commit | ba470de43188cdbff795b5da43a1474523c6c2fb (patch) | |
| tree | 0477460fa8c3e61edd9f1534cd2193656e586f8b /mm/internal.h | |
| parent | 8edb08caf68184fb170f4f69c7445929e199eaea (diff) | |
| download | linux-3.10-ba470de43188cdbff795b5da43a1474523c6c2fb.tar.gz linux-3.10-ba470de43188cdbff795b5da43a1474523c6c2fb.tar.bz2 linux-3.10-ba470de43188cdbff795b5da43a1474523c6c2fb.zip | |
mmap: handle mlocked pages during map, remap, unmap
Originally by Nick Piggin <npiggin@suse.de>
Remove mlocked pages from the LRU using the "unevictable infrastructure"
during mmap(), munmap(), mremap() and truncate(). Try to move them back to
the normal LRU lists on munmap() when the last mlocked mapping is removed.
Remove the PageMlocked() status when a page is truncated from its file.
[akpm@linux-foundation.org: cleanup]
[kamezawa.hiroyu@jp.fujitsu.com: fix double unlock_page()]
[kosaki.motohiro@jp.fujitsu.com: split LRU: munlock rework]
[lee.schermerhorn@hp.com: mlock: fix __mlock_vma_pages_range comment block]
[akpm@linux-foundation.org: remove bogus kerneldoc token]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Signed-off-by: KAMEZAWA Hiroyuki <kamewzawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/internal.h')
-rw-r--r-- | mm/internal.h | 9 |
1 file changed, 7 insertions(+), 2 deletions(-)
```diff
diff --git a/mm/internal.h b/mm/internal.h
index 4ebf0bef9a3..48e32f79057 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -61,9 +61,14 @@ static inline unsigned long page_order(struct page *page)
 	return page_private(page);
 }
 
-extern int mlock_vma_pages_range(struct vm_area_struct *vma,
+extern long mlock_vma_pages_range(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end);
-extern void munlock_vma_pages_all(struct vm_area_struct *vma);
+extern void munlock_vma_pages_range(struct vm_area_struct *vma,
+			unsigned long start, unsigned long end);
+static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
+{
+	munlock_vma_pages_range(vma, vma->vm_start, vma->vm_end);
+}
 
 #ifdef CONFIG_UNEVICTABLE_LRU
 /*
```