author | Joonsoo Kim <iamjoonsoo.kim@lge.com> | 2014-08-06 16:04:20 -0700
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-08-06 18:01:14 -0700
commit | 8a9c61d4381c5e5007cc68e023940b18fa0808d7 (patch)
tree | cd75ac7431138a602bcfcf669b33225a0774f21d /mm/slab.c
parent | 02e72cc61713185013d958baba508288ba2a0157 (diff)
slab: add unlikely macro to help compiler
This patchset does some cleanup and removes the lockdep annotation.
Patches 1-2 are just really minor improvements.
Patches 3-9 are clean-ups and the removal of the lockdep annotation.
There are two cases where lockdep annotation is needed in SLAB:
1) holding two node locks
2) holding two array cache (alien cache) locks
I looked at the code and found that we can avoid these cases without any
negative effect.
1) occurs when freeing an object produces a new free slab and we decide
to destroy it. Although we don't need to hold the node lock while
destroying a slab, the current code does so. Destroying the slab without
holding the lock would help reduce lock contention. To do this, I change
the implementation so that the new free slab is destroyed after the lock
is released (see the sketch below).
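The deferral is a generic pattern: unlink the newly freed slab onto a
local list while the lock is held, and destroy it only after the lock is
dropped. Here is a minimal userspace C analogue, with illustrative names
and a pthread mutex standing in for the kernel's node spinlock — a
sketch of the idea, not the actual SLAB code:

#include <pthread.h>
#include <stdlib.h>

struct slab {
	struct slab *next;
	/* ... slab metadata ... */
};

struct cache_node {
	pthread_mutex_t lock;    /* stands in for the node lock */
	struct slab *free_slabs; /* slabs eligible for destruction */
};

/* Destroy free slabs without holding the lock during the destruction. */
static void drain_free_slabs(struct cache_node *n)
{
	struct slab *to_destroy;

	pthread_mutex_lock(&n->lock);
	/* Unlink the whole free list while the lock is held... */
	to_destroy = n->free_slabs;
	n->free_slabs = NULL;
	pthread_mutex_unlock(&n->lock);

	/* ...and do the expensive destruction after releasing it. */
	while (to_destroy) {
		struct slab *next = to_destroy->next;
		free(to_destroy);
		to_destroy = next;
	}
}

In the kernel patches the free slabs are collected on a local list and
destroyed only after the node lock is released; the shape is the same.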
2) occurs in a similar situation. When we free an object from a
non-local node, we put the object into the alien cache while holding the
alien cache lock. If the alien cache is full, we try to flush it to the
proper node cache, and at that point a new free slab can be created.
Destroying that slab frees metadata objects that come from another node,
and freeing those objects requires the other node's alien cache lock.
This forces us to hold two array cache locks at once, which needs
lockdep annotation even though the two locks are always different and a
deadlock is impossible. To prevent this situation, I use the same
approach as in 1).
In this way, we can avoid both cases 1) and 2), and can then remove the
lockdep annotation. As the diffstat shows, this makes the SLAB code much
simpler.
This patch (of 9):
slab_should_failslab() is called on every allocation, so it is
reasonable to optimize it. We normally don't allocate from kmem_cache;
it is only used when a new kmem_cache is created, which is a very rare
case. Therefore, add the unlikely() macro to help the compiler optimize
for the common case.
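For reference, unlikely() expands to GCC's __builtin_expect(), which
tells the compiler which outcome of the branch is the cold path so that
the common case can be laid out as straight-line code. A minimal
standalone sketch, simplified from the kernel's include/linux/compiler.h
(the is_bootstrap_cache() helper is hypothetical, mirroring the check in
slab_should_failslab()):

#include <stdbool.h>

/* Simplified from include/linux/compiler.h: branch-prediction hints. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* The bootstrap cache is matched only while a new kmem_cache is being
 * created, so the compiler is told to optimize for a mismatch. */
static bool is_bootstrap_cache(const void *cachep, const void *bootstrap)
{
	return unlikely(cachep == bootstrap);
}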
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/slab.c')
-rw-r--r-- | mm/slab.c | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/mm/slab.c b/mm/slab.c
index 66b3ffbb890d..7d07942b9804 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3048,7 +3048,7 @@ static void *cache_alloc_debugcheck_after(struct kmem_cache *cachep,
 static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
 {
-	if (cachep == kmem_cache)
+	if (unlikely(cachep == kmem_cache))
 		return false;
 
 	return should_failslab(cachep->object_size, flags, cachep->flags);