author | Christoph Lameter <clameter@sgi.com> | 2007-05-10 03:15:16 -0700
committer | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2007-05-10 09:26:52 -0700
commit | 894b8788d7f265eb7c6f75a9a77cedeb48f51586 (patch)
tree | 4b00fa4704090876895b8a7528c6fe5e2201fc28 /include
parent | 02b67325a6d34f2ae67484a8802b6ffc9ce9931d (diff)
slub: support concurrent local and remote frees and allocs on a slab
Avoid atomic overhead in slab_alloc and slab_free
SLUB needs to use the slab_lock for the per-cpu slabs to synchronize with
potential kfree operations. This patch avoids that need by moving all free
objects onto a lockless_freelist. The regular freelist continues to exist
and will still be used to free objects. So while we consume the
lockless_freelist, the regular freelist may build up objects.
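As a rough illustration of the split between the two lists, here is a minimal user-space sketch (the `struct cpu_slab`, its field layout, and the function names are invented for this example; the real code keeps these fields in struct page): free objects store a pointer to the next free object in their first word, and the allocation fast path simply pops from the lockless_freelist with no lock and no atomics.

```c
/* Illustrative sketch only -- not the actual SLUB code. */
#include <stddef.h>

struct cpu_slab {
	void **lockless_freelist;	/* consumed only by the owning CPU, no lock */
	void **freelist;		/* frees land here, protected by the slab lock */
};

/* Fast path: pop the next object off the lockless freelist.  No lock
 * and no atomic operations are needed because only the local CPU ever
 * consumes this list. */
static void *alloc_fast(struct cpu_slab *s)
{
	void **object = s->lockless_freelist;

	if (!object)
		return NULL;	/* empty: fall back to the slow path */
	/* A free object stores the pointer to the next free object
	 * in its first word. */
	s->lockless_freelist = (void **)*object;
	return object;
}
```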
If we run out of objects on the lockless_freelist then we check the
regular freelist. If it has objects then we move those over to the
lockless_freelist and try again. This yields significant savings in the
number of atomic operations that have to be performed.
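The refill step might look roughly like this under the same invented model, with a pthread mutex standing in for the real slab_lock:

```c
#include <pthread.h>
#include <stddef.h>

/* Same invented model as above, now with the stand-in slab lock. */
struct cpu_slab {
	void **lockless_freelist;	/* local CPU only, lock-free */
	void **freelist;		/* shared, guarded by slab_lock */
	pthread_mutex_t slab_lock;	/* stand-in for the real slab_lock */
};

/* Slow path: the lockless freelist is empty.  Take the slab lock once,
 * steal everything the regular freelist accumulated from frees, and
 * retry the lock-free pop. */
static void *alloc_slow(struct cpu_slab *s)
{
	void **object;

	pthread_mutex_lock(&s->slab_lock);
	if (!s->freelist) {
		pthread_mutex_unlock(&s->slab_lock);
		return NULL;	/* truly empty: a new slab is needed */
	}
	s->lockless_freelist = s->freelist;	/* move the whole chain over */
	s->freelist = NULL;
	pthread_mutex_unlock(&s->slab_lock);

	object = s->lockless_freelist;		/* now the fast-path pop works */
	s->lockless_freelist = (void **)*object;
	return object;
}
```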
We can even free directly to the lockless_freelist if we know that we are
running on the same processor. So this speeds up short-lived objects: they
may be allocated and freed without taking the slab_lock. This is
particularly good for netperf.
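A sketch of the corresponding free path, again with invented names: a free on the processor that owns the slab goes straight onto the lockless_freelist, while any other free takes the lock and uses the regular freelist.

```c
#include <pthread.h>
#include <stddef.h>

struct cpu_slab {
	void **lockless_freelist;	/* local CPU only, lock-free */
	void **freelist;		/* shared, guarded by slab_lock */
	pthread_mutex_t slab_lock;	/* stand-in for the real slab_lock */
};

/* Free: short-lived objects allocated and freed on the same CPU never
 * touch the lock; remote frees queue up on the regular freelist until
 * the owning CPU refills from it. */
static void free_object(struct cpu_slab *this_cpu_slab,
			struct cpu_slab *object_slab, void *x)
{
	void **object = x;

	if (object_slab == this_cpu_slab) {
		*object = this_cpu_slab->lockless_freelist;	/* local: no lock */
		this_cpu_slab->lockless_freelist = object;
		return;
	}

	pthread_mutex_lock(&object_slab->slab_lock);		/* remote: locked */
	*object = object_slab->freelist;
	object_slab->freelist = object;
	pthread_mutex_unlock(&object_slab->slab_lock);
}
```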
In order to maximize the effect of the new faster hotpath we extract the
hottest performance pieces into inlined functions. These are then inlined
into kmem_cache_alloc and kmem_cache_free, so hotpath allocation and
freeing no longer require a subroutine call within SLUB.
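The shape of that change is roughly the following (a compilable sketch, with the per-cpu details collapsed into one invented struct for brevity): the hot path lives in an always-inlined helper, so the exported entry point contains no extra call on the fast path.

```c
#include <stddef.h>

struct kmem_cache_model {
	void **lockless_freelist;	/* per-cpu fast-path list, as above */
};

/* Hot path extracted into an always-inlined helper. */
static inline __attribute__((always_inline))
void *slab_alloc_hotpath(struct kmem_cache_model *s)
{
	void **object = s->lockless_freelist;

	if (!object)
		return NULL;	/* the refill slow path would go here */
	s->lockless_freelist = (void **)*object;
	return object;
}

/* Public entry point: after inlining this is straight-line code. */
void *example_kmem_cache_alloc(struct kmem_cache_model *s)
{
	return slab_alloc_hotpath(s);
}
```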
[I am not sure that it is worth doing this because it changes the
easy-to-read structure of slub just to reduce atomic ops. However, there is
someone out there with a benchmark on 4-way and 8-way processor systems
that seems to show a 5% regression vs. SLAB. It seems that the regression
is due to increased use of atomic operations in SLUB vs. SLAB. I wonder if
this is applicable or discernible at all in a real workload?]
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'include')
-rw-r--r-- | include/linux/mm_types.h | 7
1 file changed, 5 insertions, 2 deletions
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index e30687bad07..d5bb1796e12 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -50,13 +50,16 @@ struct page {
 		spinlock_t ptl;
 #endif
 		struct {			/* SLUB uses */
-			struct page *first_page;	/* Compound pages */
+			void **lockless_freelist;
 			struct kmem_cache *slab;	/* Pointer to slab */
 		};
+		struct {
+			struct page *first_page;	/* Compound pages */
+		};
 	};
 	union {
 		pgoff_t index;		/* Our offset within mapping. */
-		void *freelist;		/* SLUB: pointer to free object */
+		void *freelist;		/* SLUB: freelist req. slab lock */
 	};
 	struct list_head lru;		/* Pageout list, eg. active_list
 					 * protected by zone->lru_lock !
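For reference, a much-simplified, compilable model of the resulting layout (field types and names here are illustrative stand-ins): the SLUB fields and the compound-page pointer overlay each other in a union, so the new lockless_freelist does not grow struct page.

```c
#include <stdio.h>

/* Simplified stand-in for the relevant part of struct page after this
 * patch; requires C11 (or GCC) anonymous structs/unions. */
struct page_model {
	union {
		struct {			/* SLUB uses */
			void **lockless_freelist;
			void *slab;		/* stand-in for struct kmem_cache * */
		};
		struct {
			struct page_model *first_page;	/* compound pages */
		};
	};
	union {
		unsigned long index;		/* our offset within mapping */
		void *freelist;			/* SLUB: freelist, needs slab lock */
	};
};

int main(void)
{
	/* Both union members are at most two pointers wide, so the
	 * overlay keeps the structure the same size as before. */
	printf("sizeof(struct page_model) = %zu\n", sizeof(struct page_model));
	return 0;
}
```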