| | | |
|---|---|---|
| author | Hugh Dickins <hugh@veritas.com> | 2005-10-29 18:16:41 -0700 |
| committer | Linus Torvalds <torvalds@g5.osdl.org> | 2005-10-29 21:40:42 -0700 |
| commit | f412ac08c9861b4791af0145934c22f1458686da (patch) | |
| tree | 5e515efa116f3968c2caa75bc691a197199313a8 /mm | |
| parent | 4c21e2f2441dc5fbb957b030333f5a3f2d02dea7 (diff) | |
[PATCH] mm: fix rss and mmlist locking
A couple of oddities were guarded by page_table_lock, and are no longer properly guarded once that lock is split.
The mm_counters of file_rss and anon_rss: make those an atomic_t, or an atomic64_t on architectures that support it. Definitions courtesy of Christoph Lameter, who spent considerable effort on more scalable ways of counting, but found insufficient benefit in practice.
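To illustrate the counter change, here is a minimal userspace sketch of the same idea: the per-mm RSS counters become atomics, so concurrent faulters can update them without holding page_table_lock. It uses C11 atomics in place of the kernel's atomic_t/atomic64_t, and the mm_counter_t, inc_mm_counter and get_mm_counter names are loose assumptions modelled on the kernel's helpers, not code taken from this patch.

```c
/* Sketch only: per-mm RSS counters kept as atomics, so updates need no
 * page_table_lock.  C11 atomics stand in for the kernel's atomic_t /
 * atomic64_t; all names here are illustrative. */
#include <stdatomic.h>
#include <stdio.h>

typedef atomic_long mm_counter_t;       /* kernel picks atomic_t or atomic64_t per arch */

struct mm_sketch {
	mm_counter_t file_rss;          /* resident pages backed by files */
	mm_counter_t anon_rss;          /* resident anonymous pages */
};

static void inc_mm_counter(mm_counter_t *c) { atomic_fetch_add(c, 1); }
static void dec_mm_counter(mm_counter_t *c) { atomic_fetch_sub(c, 1); }
static long get_mm_counter(mm_counter_t *c) { return atomic_load(c); }

int main(void)
{
	struct mm_sketch mm;

	atomic_init(&mm.file_rss, 0);
	atomic_init(&mm.anon_rss, 0);

	inc_mm_counter(&mm.file_rss);   /* a file-backed page faulted in */
	inc_mm_counter(&mm.anon_rss);   /* an anonymous page faulted in */
	dec_mm_counter(&mm.anon_rss);   /* that page unmapped again */

	printf("file_rss=%ld anon_rss=%ld rss=%ld\n",
	       get_mm_counter(&mm.file_rss),
	       get_mm_counter(&mm.anon_rss),
	       get_mm_counter(&mm.file_rss) + get_mm_counter(&mm.anon_rss));
	return 0;
}
```

The trade-off the message alludes to: plain atomics are cheap and simple, whereas the more scalable counting schemes Christoph Lameter explored did not pay off enough in practice to justify the extra complexity.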
And adding an mm with swap to the mmlist for swapoff: the list is well-guarded by its own lock, but the list_empty check now has to be repeated inside it.
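The mmlist fix is the familiar check, lock, re-check pattern: the unlocked list_empty() test keeps the common case from touching mmlist_lock at all, and repeating the test under the lock catches the race where another thread added the mm first. A minimal pthread sketch of that pattern follows; the node, list and lock names are illustrative, not the kernel's list API.

```c
/* Double-checked list insertion: cheap unlocked test, then re-test under
 * the lock, mirroring the list_empty()/mmlist_lock pattern in the patch.
 * All names here are illustrative, not taken from the kernel. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct node {
	struct node *next, *prev;
};

static struct node global_list = { &global_list, &global_list }; /* empty list: head points to itself */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static bool node_on_list(const struct node *n)
{
	return n->next != n;            /* like !list_empty(): self-linked means off-list */
}

static void list_add_head(struct node *n, struct node *head)
{
	n->next = head->next;
	n->prev = head;
	head->next->prev = n;
	head->next = n;
}

/* Add n to global_list exactly once, even if called concurrently. */
static void add_once(struct node *n)
{
	if (!node_on_list(n)) {                 /* unlocked fast-path check */
		pthread_mutex_lock(&list_lock);
		if (!node_on_list(n))           /* re-check: another thread may have won the race */
			list_add_head(n, &global_list);
		pthread_mutex_unlock(&list_lock);
	}
}

int main(void)
{
	struct node n = { &n, &n };             /* starts self-linked, i.e. off-list */

	add_once(&n);
	add_once(&n);                           /* second call is a no-op */
	printf("on list: %s\n", node_on_list(&n) ? "yes" : "no");
	return 0;
}
```

Without the second test, two threads could both pass the unlocked check, and the loser would insert an already-inserted node and corrupt the list.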
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm')
| | | |
|---|---|---|
| -rw-r--r-- | mm/memory.c | 4 |
| -rw-r--r-- | mm/rmap.c | 3 |

2 files changed, 5 insertions, 2 deletions
diff --git a/mm/memory.c b/mm/memory.c
index e9ef599498b..d68421dd64e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -372,7 +372,9 @@ copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			/* make sure dst_mm is on swapoff's mmlist. */
 			if (unlikely(list_empty(&dst_mm->mmlist))) {
 				spin_lock(&mmlist_lock);
-				list_add(&dst_mm->mmlist, &src_mm->mmlist);
+				if (list_empty(&dst_mm->mmlist))
+					list_add(&dst_mm->mmlist,
+						 &src_mm->mmlist);
 				spin_unlock(&mmlist_lock);
 			}
 		}
diff --git a/mm/rmap.c b/mm/rmap.c
index a33e779d1bd..a7427bbf57e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -559,7 +559,8 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma)
 		swap_duplicate(entry);
 		if (list_empty(&mm->mmlist)) {
 			spin_lock(&mmlist_lock);
-			list_add(&mm->mmlist, &init_mm.mmlist);
+			if (list_empty(&mm->mmlist))
+				list_add(&mm->mmlist, &init_mm.mmlist);
 			spin_unlock(&mmlist_lock);
 		}
 		set_pte_at(mm, address, pte, swp_entry_to_pte(entry));