author     Adam Litke <agl@us.ibm.com>                           2008-03-10 11:43:50 -0700
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2008-03-10 18:01:19 -0700
commit     2668db9111bb1a6ab5a54f41f703179f35c7d098
tree       bde940cfd298321663cf988b607151513c801a1a
parent     842078054da2d754c6b998b116d7c468abbfaaca
hugetlb: correct page count for surplus huge pages
Free pages in the hugetlb pool have a reference count of zero. Regular
allocations into the pool from the buddy allocator are "freed" into the pool,
which results in their page_count dropping to zero. However, surplus pages can
be used directly by the caller without first being freed into the pool.
Therefore, a call to put_page_testzero() is in order so that such a page is
handed to the caller with a correct count.
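For context, put_page_testzero() only drops a reference and reports whether
the count reached zero; it does not free the page, so the page stays under
hugetlb management with the expected count of zero. A rough sketch of that
existing helper, approximately as it appears in include/linux/mm.h of this
era (shown for illustration only, not part of this patch):

    /*
     * Sketch (approximate): drop a reference and return true if the count
     * fell to zero.  The page itself is never freed here.
     */
    static inline int put_page_testzero(struct page *page)
    {
            VM_BUG_ON(atomic_read(&page->_count) == 0);
            return atomic_dec_and_test(&page->_count);
    }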
This has not affected end users because the bad page count is reset before the
page is handed off. However, under CONFIG_DEBUG_VM this triggers a BUG when
the page count is validated.
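The reset referred to above happens on the allocation path that hands the page
to the caller, which in this era uses set_page_refcounted(); that helper
assumes the page currently has a count of zero and, under CONFIG_DEBUG_VM,
validates exactly that. A rough sketch, approximately as it appears in
mm/internal.h (for illustration only, not part of this patch):

    /*
     * Sketch (approximate): turn a free (count == 0) page into a
     * refcounted one.  The VM_BUG_ON on _count is the check that fires
     * for a surplus page whose buddy reference was never dropped.
     */
    static inline void set_page_refcounted(struct page *page)
    {
            VM_BUG_ON(PageTail(page));
            VM_BUG_ON(atomic_read(&page->_count));
            set_page_count(page, 1);
    }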
Thanks go to Mel for first spotting this issue and providing an initial fix.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
 mm/hugetlb.c | 13 +++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index dcacc811e70..74c1b6b0b37 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -286,6 +286,12 @@ static struct page *alloc_buddy_huge_page(struct vm_area_struct *vma,
 
         spin_lock(&hugetlb_lock);
         if (page) {
+                /*
+                 * This page is now managed by the hugetlb allocator and has
+                 * no users -- drop the buddy allocator's reference.
+                 */
+                put_page_testzero(page);
+                VM_BUG_ON(page_count(page));
                 nid = page_to_nid(page);
                 set_compound_page_dtor(page, free_huge_page);
                 /*
@@ -369,13 +375,14 @@ free:
                         enqueue_huge_page(page);
                 else {
                         /*
-                         * Decrement the refcount and free the page using its
-                         * destructor. This must be done with hugetlb_lock
+                         * The page has a reference count of zero already, so
+                         * call free_huge_page directly instead of using
+                         * put_page. This must be done with hugetlb_lock
                          * unlocked which is safe because free_huge_page takes
                          * hugetlb_lock before deciding how to free the page.
                          */
                         spin_unlock(&hugetlb_lock);
-                        put_page(page);
+                        free_huge_page(page);
                         spin_lock(&hugetlb_lock);
                 }
         }