author		Vladimir Davydov <vdavydov@parallels.com>	2014-04-03 14:47:19 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2014-10-09 12:21:29 -0700
commit		12f2f0bab442ecdff45d67ff1c3d0b21f46794bd (patch)
tree		c783845f7aa0deda7503de8e619046378dea9a99 /mm
parent		6591981a30a8c752b1c537211f7fe4b9530fea41 (diff)
download	renesas_kernel-12f2f0bab442ecdff45d67ff1c3d0b21f46794bd.tar.gz
		renesas_kernel-12f2f0bab442ecdff45d67ff1c3d0b21f46794bd.tar.bz2
		renesas_kernel-12f2f0bab442ecdff45d67ff1c3d0b21f46794bd.zip
mm: vmscan: respect NUMA policy mask when shrinking slab on direct reclaim
commit 99120b772b52853f9a2b829a21dd44d9b20558f1 upstream.

When direct reclaim is executed by a process bound to a set of NUMA
nodes, we should scan only those nodes when possible, but currently we
will scan kmem from all online nodes even if the kmem shrinker is NUMA
aware. As a result, binding a process to a particular NUMA node won't
prevent it from shrinking inode/dentry caches from other nodes, which
is not good. Fix this.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Glauber Costa <glommer@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
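For context, the nodes_to_scan mask populated during this walk is what
NUMA-aware shrinkers iterate over in shrink_slab(). The fragment below is
a minimal sketch of that consumer side, reconstructed from the vmscan.c of
this kernel era; the exact shrink_slab_node() arguments and control flow
are assumptions, not verbatim source.

	/*
	 * Sketch (assumed, not verbatim): shrink_slab() invokes a
	 * NUMA-aware shrinker once per node set in nodes_to_scan, so
	 * restricting that mask to sc->nodemask confines the inode/dentry
	 * cache shrink to the allowed nodes.
	 */
	list_for_each_entry(shrinker, &shrinker_list, list) {
		for_each_node_mask(nid, shrinkctl->nodes_to_scan) {
			if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
				nid = 0;	/* non-NUMA shrinkers run once */

			shrinkctl->nid = nid;
			freed += shrink_slab_node(shrinkctl, shrinker,
						  nr_pages_scanned, lru_pages);

			if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
				break;
		}
	}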
Diffstat (limited to 'mm')
-rw-r--r--	mm/vmscan.c	4
1 file changed, 2 insertions, 2 deletions
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6ef484f0777..47905d7dfaf 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2424,8 +2424,8 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 		unsigned long lru_pages = 0;
 
 		nodes_clear(shrink->nodes_to_scan);
-		for_each_zone_zonelist(zone, z, zonelist,
-				gfp_zone(sc->gfp_mask)) {
+		for_each_zone_zonelist_nodemask(zone, z, zonelist,
+				gfp_zone(sc->gfp_mask), sc->nodemask) {
 			if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
 				continue;
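The one-line substitution is sufficient because the plain iterator is just
the nodemask variant with a NULL mask. Paraphrased from
include/linux/mmzone.h of this era (approximate layout, not verbatim
source):

	/*
	 * Paraphrased (approximate, not verbatim): the nodemask variant
	 * skips zones on nodes outside @nodemask, and the plain variant
	 * is the same walk with a NULL mask, i.e. all online nodes.
	 */
	#define for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, nodemask) \
		for (z = first_zones_zonelist(zlist, highidx, nodemask, &zone);	\
			zone;							\
			z = next_zones_zonelist(++z, highidx, nodemask, &zone))

	#define for_each_zone_zonelist(zone, z, zlist, highidx) \
		for_each_zone_zonelist_nodemask(zone, z, zlist, highidx, NULL)

Passing sc->nodemask (the reclaim context's allowed nodes, NULL when
unrestricted) therefore makes nodes_to_scan, and with it the slab shrink,
honor the caller's NUMA policy.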