authorMarek Szyprowski <m.szyprowski@samsung.com>2012-11-09 16:04:23 +0100
committerChanho Park <chanho61.park@samsung.com>2014-11-18 11:42:20 +0900
commit17dd7e70450b6ebf3e530e34e5a9ce910112663d (patch)
tree03ba10f8e6b7906b29c9f44edbfb5d0a95bfe763 /mm
parenta253489daf66edd5e642a3971f80be6fdab4d048 (diff)
mm: cma: allocate pages from CMA if NR_FREE_PAGES approaches low water mark
It has been observed that the system tends to keep a lot of free CMA pages even under very high memory pressure. The CMA fallback for movable pages is used very rarely, only when the system is completely depleted of MOVABLE pages, which usually means that an out-of-memory event will be triggered very soon. To avoid this situation and make better use of CMA pages, a heuristic is introduced which turns on the CMA fallback for movable pages when the real number of free pages (excluding free CMA pages) approaches the low water mark.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Diffstat (limited to 'mm')
-rw-r--r--mm/page_alloc.c9
1 file changed, 9 insertions, 0 deletions
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 494a081ec5e..0cbd0ed9579 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1101,6 +1101,15 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
{
struct page *page;
+#ifdef CONFIG_CMA
+ unsigned long nr_free = zone_page_state(zone, NR_FREE_PAGES);
+ unsigned long nr_cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);
+
+ if (migratetype == MIGRATE_MOVABLE && nr_cma_free &&
+ nr_free - nr_cma_free < 2 * low_wmark_pages(zone))
+ migratetype = MIGRATE_CMA;
+#endif /* CONFIG_CMA */
+
retry_reserve:
page = __rmqueue_smallest(zone, order, migratetype);