author     Shaohua Li <shaohua.li@intel.com>      2011-11-11 14:54:14 +0800
committer  Pekka Enberg <penberg@kernel.org>      2011-11-27 22:08:15 +0200
commit     4c493a5a5c0bab6c434af2723328edd79c49aa0c (patch)
tree       184c48e7c1759127de931d903bdbbdcc786acac6
parent     42616cacf8bf898b1bc734b88a76cbaadffb8eb7 (diff)
slub: add missed accounting
With the per-cpu partial list, a slab is added to the partial list first and then moved to the node list. The __slab_free() code path for add/remove_partial is almost deprecated (except for slub debug). But we forgot to account add/remove_partial when moving per-cpu partial pages to the node list, so the statistics for such events were always 0. Add the corresponding accounting.

This is against the patch "slub: use correct parameter to add a page to partial list tail".

Acked-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
-rw-r--r--  mm/slub.c  7
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index c3138233a6e..108ed03fb42 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1901,11 +1901,14 @@ static void unfreeze_partials(struct kmem_cache *s)
 			}

 			if (l != m) {
-				if (l == M_PARTIAL)
+				if (l == M_PARTIAL) {
 					remove_partial(n, page);
-				else
+					stat(s, FREE_REMOVE_PARTIAL);
+				} else {
 					add_partial(n, page,
 						DEACTIVATE_TO_TAIL);
+					stat(s, FREE_ADD_PARTIAL);
+				}

 				l = m;
 			}