author    Christoph Lameter <cl@linux.com>    2011-11-23 09:24:27 -0600
committer Pekka Enberg <penberg@kernel.org>   2011-12-13 22:14:02 +0200
commit    8f1e33daeda6cd89753f9e77d174805a6f21db09 (patch)
tree      46fa0c7c64953ba12e73c30ad31fc1b10b5af785 /mm/slub.c
parent    dc47ce90c3a822cd7c9e9339fe4d5f61dcb26b50 (diff)
slub: Switch per cpu partial page support off for debugging
Eric saw an issue with accounting of slabs during validation. It is not possible to determine accurately how many per cpu partial slabs exist at any time, so this switches off per cpu partial pages during debug.

Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
Diffstat (limited to 'mm/slub.c')
-rw-r--r--  mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index ed3334d9b6d..4056d29e661 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3028,7 +3028,9 @@ static int kmem_cache_open(struct kmem_cache *s,
* per node list when we run out of per cpu objects. We only fetch 50%
* to keep some capacity around for frees.
*/
- if (s->size >= PAGE_SIZE)
+ if (kmem_cache_debug(s))
+ s->cpu_partial = 0;
+ else if (s->size >= PAGE_SIZE)
s->cpu_partial = 2;
else if (s->size >= 1024)
s->cpu_partial = 6;