From 2aeb96c2467ab83034cc2344db35be7c042343ee Mon Sep 17 00:00:00 2001
From: Marek Szyprowski <m.szyprowski@samsung.com>
Date: Fri, 9 Nov 2012 16:04:23 +0100
Subject: [PATCH 0006/1302] mm: cma: allocate pages from CMA if NR_FREE_PAGES
 approaches low water mark

It has been observed that the system tends to keep a lot of free CMA pages
even under very high memory pressure. The CMA fallback for movable pages
is used very rarely, only when the system is completely drained of MOVABLE
pages, which usually means that an out-of-memory event will be triggered
very soon. To avoid such a situation and make better use of CMA pages, a
heuristic is introduced which turns on the CMA fallback for movable pages
when the real number of free pages (excluding free CMA pages) approaches
the low water mark.
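
A minimal standalone sketch of the check with worked numbers (userspace,
with hypothetical simplified types; in the kernel these counters come from
zone_page_state() and low_wmark_pages(), as in the hunk below):

    /* Sketch of the fallback heuristic with simplified,
     * hypothetical types; not the kernel implementation. */
    #include <stdio.h>
    #include <stdbool.h>

    struct zone_counters {
        unsigned long nr_free;      /* NR_FREE_PAGES */
        unsigned long nr_cma_free;  /* NR_FREE_CMA_PAGES */
        unsigned long low_wmark;    /* low_wmark_pages(zone) */
    };

    /* Fall back to MIGRATE_CMA when the non-CMA free pages
     * drop below twice the low watermark. */
    static bool use_cma_fallback(const struct zone_counters *z)
    {
        return z->nr_cma_free &&
               z->nr_free - z->nr_cma_free < 2 * z->low_wmark;
    }

    int main(void)
    {
        /* Example: 10000 free pages total, 6000 of them in CMA,
         * low watermark 2500. Non-CMA free = 4000 < 2 * 2500,
         * so movable allocations fall back to CMA pageblocks. */
        struct zone_counters z = { 10000, 6000, 2500 };
        printf("CMA fallback: %s\n",
               use_cma_fallback(&z) ? "yes" : "no");
        return 0;
    }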

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
---
 mm/page_alloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2ee0fd3..bb92575 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1099,6 +1099,15 @@ static struct page *__rmqueue(struct zone *zone, unsigned int order,
 {
 	struct page *page;
 
+#ifdef CONFIG_CMA
+	unsigned long nr_free = zone_page_state(zone, NR_FREE_PAGES);
+	unsigned long nr_cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);
+
+	if (migratetype == MIGRATE_MOVABLE && nr_cma_free &&
+	    nr_free - nr_cma_free < 2 * low_wmark_pages(zone))
+		migratetype = MIGRATE_CMA;
+#endif /* CONFIG_CMA */
+
 retry_reserve:
 	page = __rmqueue_smallest(zone, order, migratetype);
 
-- 
1.8.3.2