author | Mel Gorman <mel@csn.ul.ie> | 2007-07-26 10:41:18 -0700
---|---|---
committer | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2007-07-26 11:35:19 -0700
commit | b5445f956ec3c8c19b760775e9ff92a160e3a167 | (patch)
tree | 034d62678f1e402e6d71b07ba0bf96544554a2ba | /mm/page_alloc.c
parent | ee2077d97b2f392cfc0b884775ac58aa9b9b8c8f | (diff)
Allow nodes to exist that only contain ZONE_MOVABLE
With the introduction of kernelcore=, a configurable zone is created on
request. In some cases, this value will be small enough that some nodes
contain only ZONE_MOVABLE. On some NUMA configurations, when this occurs the
arch-independent zone-sizing code miscalculates the size of the memory holes
within the node. The value of present_pages goes negative and the boot fails.
This patch fixes the bug in the calculation of the size of the hole. The
test case is to boot-test a NUMA machine with a low value of kernelcore=
before and after the patch is applied. While this bug exists in earlier
kernels, it cannot be triggered in practice.
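To make the arithmetic concrete, here is a minimal userspace sketch (not kernel code; the PFN values are invented for illustration) of how the unclamped hole calculation can exceed the zone span and drive present_pages negative, and how clamping against range_end_pfn keeps it in bounds:

```c
#include <stdio.h>

int main(void)
{
	/* Hypothetical node: the zone being sized spans PFNs
	 * 0x10000-0x18000, but the node's first physical memory only
	 * begins at 0x20000 (e.g. a node containing only ZONE_MOVABLE). */
	unsigned long range_start_pfn = 0x10000;
	unsigned long range_end_pfn   = 0x18000;
	unsigned long first_start_pfn = 0x20000;  /* start of first memory range */

	unsigned long spanned = range_end_pfn - range_start_pfn;

	/* Old calculation: the hole is measured up to the start of memory,
	 * even though that lies beyond the end of the zone range. */
	unsigned long hole_old = first_start_pfn - range_start_pfn;

	/* Patched calculation: clamp to the end of the zone range first. */
	unsigned long prev_end_pfn = first_start_pfn < range_end_pfn ?
				     first_start_pfn : range_end_pfn;
	unsigned long hole_new = prev_end_pfn - range_start_pfn;

	printf("spanned=%lu  old hole=%lu  new hole=%lu\n",
	       spanned, hole_old, hole_new);
	printf("old present=%ld (negative)  new present=%ld\n",
	       (long)spanned - (long)hole_old,
	       (long)spanned - (long)hole_new);
	return 0;
}
```

With the example values above, the old calculation reports a hole larger than the zone itself, so spanned minus absent goes negative; the clamped calculation caps the hole at the zone span.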
This patch has been boot-tested on a variety of machines with and without
kernelcore= set.
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/page_alloc.c')
-rw-r--r-- | mm/page_alloc.c | 6 |
1 files changed, 3 insertions, 3 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 40954fb8159..6d3550ca028 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2775,11 +2775,11 @@ unsigned long __meminit __absent_pages_in_range(int nid,
 	if (i == -1)
 		return 0;
 
+	prev_end_pfn = min(early_node_map[i].start_pfn, range_end_pfn);
+
 	/* Account for ranges before physical memory on this node */
 	if (early_node_map[i].start_pfn > range_start_pfn)
-		hole_pages = early_node_map[i].start_pfn - range_start_pfn;
-
-	prev_end_pfn = early_node_map[i].start_pfn;
+		hole_pages = prev_end_pfn - range_start_pfn;
 
 	/* Find all holes for the zone within the node */
 	for (; i != -1; i = next_active_region_index_in_nid(i, nid)) {
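The key change is computing prev_end_pfn with min() against range_end_pfn before it is used: the initial hole accounting can then never claim more pages than the range actually spans, even when the node's first memory range starts beyond the end of the range being sized, which is the situation that previously sent present_pages negative.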