author     Zou Nan hai <nanhai.zou@intel.com>                    2007-06-01 00:46:28 -0700
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>  2007-06-01 08:18:27 -0700
commit     2e1c49db4c640b35df13889b86b9d62215ade4b6 (patch)
tree       8b886843a57f5ff683700a9f623294e3f24d07ef /mm/sparse.c
parent     fa0aa866c82e441787e07169cb4925e3b673e891 (diff)
x86_64: allocate sparsemem memmap above 4G
On systems with a huge amount of physical memory, the VFS cache and the memory map (memmap)
may eat up all available system memory under 4G, and the system may then fail to
allocate the swiotlb bounce buffer.

There was a fix for this issue in arch/x86_64/mm/numa.c, but that fix does not
cover the sparsemem model.

This patch adds the fix to the sparsemem model by first trying to allocate the memmap
above 4G.
Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/sparse.c')
-rw-r--r-- | mm/sparse.c | 11 |
1 files changed, 11 insertions, 0 deletions
diff --git a/mm/sparse.c b/mm/sparse.c
index 1302f8348d51..545e4d3afcdf 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -209,6 +209,12 @@ static int __meminit sparse_init_one_section(struct mem_section *ms,
 	return 1;
 }
 
+__attribute__((weak))
+void *alloc_bootmem_high_node(pg_data_t *pgdat, unsigned long size)
+{
+	return NULL;
+}
+
 static struct page __init *sparse_early_mem_map_alloc(unsigned long pnum)
 {
 	struct page *map;
@@ -219,6 +225,11 @@ static struct page __init *sparse_early_mem_map_alloc(unsigned long pnum)
 	if (map)
 		return map;
 
+	map = alloc_bootmem_high_node(NODE_DATA(nid),
+			sizeof(struct page) * PAGES_PER_SECTION);
+	if (map)
+		return map;
+
 	map = alloc_bootmem_node(NODE_DATA(nid),
 			sizeof(struct page) * PAGES_PER_SECTION);
 	if (map)
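
The weak alloc_bootmem_high_node() stub added above returns NULL, so architectures that do not
provide their own definition keep the old behaviour. On x86_64 this fix is paired with a strong
definition that asks the bootmem allocator for memory at or above 4G. The sketch below illustrates
what such an override could look like; the __alloc_bootmem_core() call, its argument order, and
the placement in arch/x86_64/mm/init.c are assumptions based on the 2.6.22-era bootmem interface
and are not part of the hunk shown on this page.

/*
 * Illustrative arch-side override (e.g. in arch/x86_64/mm/init.c), not part
 * of the mm/sparse.c hunk above. A "goal" of 4G tells the bootmem allocator
 * to prefer memory at or above 4G, leaving the low 4G free for the swiotlb
 * bounce buffer. __alloc_bootmem_core() and its argument order are assumed
 * from the 2.6.22-era allocator.
 */
void * __init alloc_bootmem_high_node(pg_data_t *pgdat, unsigned long size)
{
	return __alloc_bootmem_core(pgdat->bdata, size,
			SMP_CACHE_BYTES, (4UL * 1024 * 1024 * 1024), 0);
}

With such an override in place, sparse_early_mem_map_alloc() tries its existing allocation path
first, then the above-4G path, and only then falls back to alloc_bootmem_node(), which may return
memory below 4G.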