author		Christoph Lameter <cl@linux.com>	2010-08-26 09:41:19 -0500
committer	Pekka Enberg <penberg@kernel.org>	2010-10-02 10:24:29 +0300
commit		db210e70e5f191710a3b1d09f653b44885d397ea (patch)
tree		3d1472be3dfd80107090a73bf70710cdb5df21f1
parent		a016471a16b5c4d4ec8f5221575e603a3d11e5e9 (diff)
download	linux-3.10-db210e70e5f191710a3b1d09f653b44885d397ea.tar.gz
		linux-3.10-db210e70e5f191710a3b1d09f653b44885d397ea.tar.bz2
		linux-3.10-db210e70e5f191710a3b1d09f653b44885d397ea.zip
Slub: UP bandaid
Since the percpu allocator does not provide early allocation in UP mode (only
in SMP configurations), use __get_free_page() to improvise a compound page
allocation that can later be freed via kfree().
Compound pages will be released when the cpu caches are resized.
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Pekka Enberg <penberg@kernel.org>
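
[Editor's sketch] The hunk below adds only the allocation-side fallback; the release half that the message describes ("compound pages will be released when the cpu caches are resized") is not part of this patch. Purely as an illustrative sketch, assuming a hypothetical resize helper and a hypothetical bootstrapped flag (neither exists in mm/slub.c), the hand-back could look like this:

/*
 * Illustrative sketch only, not code from this patch or from mm/slub.c:
 * swap the early-boot cpu_slab for a real per-cpu area once the percpu
 * allocator is usable, and free the bootstrap allocation appropriately.
 * Flushing/copying of the existing per-cpu state is deliberately ignored.
 */
static void resize_kmem_cache_cpus(struct kmem_cache *s, bool bootstrapped)
{
	void *old = (void *)s->cpu_slab;

	/* slab_state >= UP here, so alloc_percpu() works normally. */
	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);

	if (bootstrapped)
		kfree(old);	/* order-1 compound page from kmalloc_large() */
	else
		free_percpu((void __percpu *)old);
}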
-rw-r--r--	mm/slub.c	16
1 file changed, 16 insertions, 0 deletions
diff --git a/mm/slub.c b/mm/slub.c
index 4c5a76f505e..05674aac929 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2103,8 +2103,24 @@ init_kmem_cache_node(struct kmem_cache_node *n, struct kmem_cache *s)
 
 static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
 {
+#ifdef CONFIG_SMP
+	/*
+	 * Will use reserve that does not require slab operation during
+	 * early boot.
+	 */
 	BUILD_BUG_ON(PERCPU_DYNAMIC_EARLY_SIZE <
 			SLUB_PAGE_SHIFT * sizeof(struct kmem_cache_cpu));
+#else
+	/*
+	 * Special hack for UP mode. allocpercpu() falls back to kmalloc
+	 * operations. So we cannot use that before the slab allocator is up.
+	 * Simply get the smallest possible compound page. The page will be
+	 * released via kfree() when the cpu caches are resized later.
+	 */
+	if (slab_state < UP)
+		s->cpu_slab = (__percpu void *)kmalloc_large(PAGE_SIZE << 1, GFP_NOWAIT);
+	else
+#endif
 
 	s->cpu_slab = alloc_percpu(struct kmem_cache_cpu);
 
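
[Editor's note] Why the bandaid is safe: in SLUB of this vintage, kmalloc_large() is essentially a wrapper around __get_free_pages(flags | __GFP_COMP, get_order(size)), which is why the commit message speaks of __get_free_page(). With a size of PAGE_SIZE << 1 the order is 1, the smallest allocation that can be a compound page, and kfree() already handles compound pages that are not slab-backed (it checks PageSlab() on the head page and hands the pages straight back to the page allocator). The faked per-cpu pointer can therefore be released with a plain kfree() once real per-cpu areas exist.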