path: root/mm/slab_common.c
author	Glauber Costa <>	2012-12-18 22:22:38 (GMT)
committer	Linus Torvalds <>	2012-12-18 23:02:13 (GMT)
commit	55007d849759252ddd573aeb36143b947202d509 (patch)
tree	d042bc2f717922fb73f9d526592eeb331c2f0f70 /mm/slab_common.c
parent	2633d7a028239a738b793be5ca8fa6ac312f5793 (diff)
memcg: allocate memory for memcg caches whenever a new memcg appears
Every cache that is considered a root cache (basically the "original" caches, tied to the root memcg/no-memcg) will have an array that should be large enough to store a cache pointer per each memcg in the system. Theoreticaly, this is as high as 1 << sizeof(css_id), which is currently in the 64k pointers range. Most of the time, we won't be using that much. What goes in this patch, is a simple scheme to dynamically allocate such an array, in order to minimize memory usage for memcg caches. Because we would also like to avoid allocations all the time, at least for now, the array will only grow. It will tend to be big enough to hold the maximum number of kmem-limited memcgs ever achieved. We'll allocate it to be a minimum of 64 kmem-limited memcgs. When we have more than that, we'll start doubling the size of this array every time the limit is reached. Because we are only considering kmem limited memcgs, a natural point for this to happen is when we write to the limit. At that point, we already have set_limit_mutex held, so that will become our natural synchronization mechanism. Signed-off-by: Glauber Costa <> Cc: Christoph Lameter <> Cc: David Rientjes <> Cc: Frederic Weisbecker <> Cc: Greg Thelen <> Cc: Johannes Weiner <> Cc: JoonSoo Kim <> Cc: KAMEZAWA Hiroyuki <> Cc: Mel Gorman <> Cc: Michal Hocko <> Cc: Pekka Enberg <> Cc: Rik van Riel <> Cc: Suleiman Souhlal <> Cc: Tejun Heo <> Signed-off-by: Andrew Morton <> Signed-off-by: Linus Torvalds <>
Diffstat (limited to 'mm/slab_common.c')
1 file changed, 28 insertions, 0 deletions
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3031bad..1c424b6 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -81,6 +81,34 @@ static inline int kmem_cache_sanity_check(struct mem_cgroup *memcg,
+#ifdef CONFIG_MEMCG_KMEM
+int memcg_update_all_caches(int num_memcgs)
+{
+	struct kmem_cache *s;
+	int ret = 0;
+	mutex_lock(&slab_mutex);
+
+	list_for_each_entry(s, &slab_caches, list) {
+		if (!is_root_cache(s))
+			continue;
+
+		ret = memcg_update_cache_size(s, num_memcgs);
+		/*
+		 * See comment in memcontrol.c, memcg_update_cache_size:
+		 * Instead of freeing the memory, we'll just leave the caches
+		 * up to this point in an updated state.
+		 */
+		if (ret)
+			goto out;
+	}
+
+	memcg_update_array_size(num_memcgs);
+out:
+	mutex_unlock(&slab_mutex);
+	return ret;
+}
+#endif
+
 /*
  * Figure out what the alignment of the objects will be given a set of
  * flags, a user specified alignment and the size of the objects.
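The trailing context lines belong to the comment above calculate_alignment(). A hedged sketch of that kind of alignment calculation, with made-up flag and cache-line constants (the kernel queries the real cache line size and arch minimums), looks like this: honor hardware-cacheline alignment when requested, halve it while two objects still fit per line, and never drop below the caller-supplied alignment.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative constants; the kernel uses cache_line_size() and
 * SLAB_HWCACHE_ALIGN, not these made-up values. */
#define SKETCH_HWCACHE_ALIGN 0x1UL
#define SKETCH_CACHE_LINE    64

static size_t sketch_calculate_alignment(unsigned long flags,
					 size_t align, size_t size)
{
	if (flags & SKETCH_HWCACHE_ALIGN) {
		size_t ralign = SKETCH_CACHE_LINE;
		/* Shrink alignment while two objects still fit per line,
		 * so small objects don't waste a whole cache line each. */
		while (size <= ralign / 2)
			ralign /= 2;
		if (ralign > align)
			align = ralign;
	}
	return align;
}
```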