author		Joonsoo Kim <iamjoonsoo.kim@lge.com>	2014-04-07 15:37:05 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2014-10-09 12:21:28 -0700
commit		4fff5ca78029f4df452334ecf013e53bf29079cc (patch)
tree		68d77d8cb8b21333fe5ebcc4fa89836ccdeb6d73 /mm
parent		cc85d67f4fdc4fb89c74ff327d2bbe7803951a0f (diff)
mm/compaction: change the timing to check to drop the spinlock
commit be1aa03b973c7dcdc576f3503f7a60429825c35d upstream.

It is odd to drop the spinlock when we scan the (SWAP_CLUSTER_MAX - 1)th pfn page. This may result in the following situation while isolating migrate pages:

1. Try to isolate pfn pages 0x0 ~ 0x200.
2. When low_pfn is 0x1ff, ((low_pfn+1) % SWAP_CLUSTER_MAX) == 0, so drop the spinlock.
3. Then, to complete the isolation, retry to acquire the lock.

I think it is better to use the SWAP_CLUSTER_MAXth pfn as the criterion for dropping the lock. This does no harm at pfn 0x0, because at that point the locked variable is still false.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
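As an illustration only (not part of the patch), the following is a minimal user-space C sketch of the two modulo checks over the 0x0 ~ 0x200 range from the example above. SWAP_CLUSTER_MAX is assumed to be 32, and the locked/should_release_lock handling of the real kernel loop is omitted.

#include <stdio.h>

#define SWAP_CLUSTER_MAX 32	/* assumed value for this sketch */

int main(void)
{
	unsigned long low_pfn;

	for (low_pfn = 0x0; low_pfn < 0x200; low_pfn++) {
		/* old check: fires on the last pfn of each cluster (0x1f, 0x3f, ..., 0x1ff) */
		if (!((low_pfn + 1) % SWAP_CLUSTER_MAX))
			printf("old check would drop the lock at pfn 0x%lx\n", low_pfn);
		/* new check: fires on cluster boundaries (0x0, 0x20, ..., 0x1e0) */
		if (!(low_pfn % SWAP_CLUSTER_MAX))
			printf("new check would drop the lock at pfn 0x%lx\n", low_pfn);
	}
	return 0;
}

With the old check the last drop point is pfn 0x1ff, the final page of the range, so the lock is released only to be re-acquired to finish the isolation; with the new check the drop points are cluster boundaries, and the hit at pfn 0x0 is harmless because locked is still false there.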
Diffstat (limited to 'mm')
-rw-r--r--	mm/compaction.c	2
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/compaction.c b/mm/compaction.c
index 711ebf75b454..f347b73be165 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -487,7 +487,7 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
 	cond_resched();
 	for (; low_pfn < end_pfn; low_pfn++) {
 		/* give a chance to irqs before checking need_resched() */
-		if (locked && !((low_pfn+1) % SWAP_CLUSTER_MAX)) {
+		if (locked && !(low_pfn % SWAP_CLUSTER_MAX)) {
 			if (should_release_lock(&zone->lru_lock)) {
 				spin_unlock_irqrestore(&zone->lru_lock, flags);
 				locked = false;