author | Christoph Lameter <clameter@sgi.com> | 2008-04-14 19:11:41 +0300
committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2008-04-27 18:28:39 +0300
commit | 114e9e89e668ec561c9b0f3dea7bcc8af7c29d21 (patch)
tree | 12ee7fdc5e3068d7c5e9bf8d7e5f65f75673bf7b /mm/slub.c
parent | 31d33baf36bda7a2fea800648d87c9fe6155e7ca (diff)
slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS
We can now fall back to order 0 slabs, so set slub_max_order to
PAGE_ALLOC_COSTLY_ORDER but keep slub_min_objects at 4. This will
mostly preserve the orders used in 2.6.25. For example, the 2k kmalloc
slab will use order 1 allocations and the 4k kmalloc slab order 2.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
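To make the numbers in the message concrete: with slub_min_objects = 4 and slub_max_order = PAGE_ALLOC_COSTLY_ORDER (3), SLUB looks for the smallest order whose slab holds at least 4 objects, so a 2k object needs 4 * 2k = 8k (order 1 on 4k pages) and a 4k object needs 16k (order 2). The userspace sketch below illustrates only this arithmetic; pick_order(), the hard-coded page size, and the fallback loop are illustrative assumptions, not the kernel's actual calculate_order()/slab_order() code.

/*
 * Simplified model of the order selection described above.
 * Not kernel code: pick_order() is a hypothetical stand-in for
 * SLUB's real calculate_order()/slab_order() logic.
 */
#include <stdio.h>

#define SIM_PAGE_SIZE 4096UL		/* assumes a 4k base page */
#define PAGE_ALLOC_COSTLY_ORDER 3	/* value used by kernels of this era */

static int pick_order(unsigned long object_size, int min_objects, int max_order)
{
	int order;

	/* Smallest order whose slab holds at least min_objects objects. */
	for (order = 0; order <= max_order; order++) {
		unsigned long slab_size = SIM_PAGE_SIZE << order;

		if (slab_size / object_size >= (unsigned long)min_objects)
			return order;
	}

	/* Otherwise fall back to the smallest order that fits one object. */
	for (order = 0; (SIM_PAGE_SIZE << order) < object_size; order++)
		;
	return order;
}

int main(void)
{
	printf("2k kmalloc slab -> order %d\n",
	       pick_order(2048, 4, PAGE_ALLOC_COSTLY_ORDER));
	printf("4k kmalloc slab -> order %d\n",
	       pick_order(4096, 4, PAGE_ALLOC_COSTLY_ORDER));
	return 0;
}

Run as an ordinary C program this prints order 1 and order 2, matching the orders quoted in the commit message for the 2k and 4k kmalloc slabs.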
Diffstat (limited to 'mm/slub.c')
-rw-r--r-- | mm/slub.c | 23
1 file changed, 2 insertions(+), 21 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index 06533f342be..6572cef0c43 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -149,25 +149,6 @@ static inline void ClearSlabDebug(struct page *page)
 /* Enable to test recovery from slab corruption on boot */
 #undef SLUB_RESILIENCY_TEST
 
-#if PAGE_SHIFT <= 12
-
-/*
- * Small page size. Make sure that we do not fragment memory
- */
-#define DEFAULT_MAX_ORDER 1
-#define DEFAULT_MIN_OBJECTS 4
-
-#else
-
-/*
- * Large page machines are customarily able to handle larger
- * page orders.
- */
-#define DEFAULT_MAX_ORDER 2
-#define DEFAULT_MIN_OBJECTS 8
-
-#endif
-
 /*
  * Mininum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
@@ -1821,8 +1802,8 @@ static struct page *get_object_page(const void *x)
  * take the list_lock.
  */
 static int slub_min_order;
-static int slub_max_order = DEFAULT_MAX_ORDER;
-static int slub_min_objects = DEFAULT_MIN_OBJECTS;
+static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static int slub_min_objects = 4;
 
 /*
  * Merge control. If this is set then no merging of slab caches will occur.