path: root/mm/slub.c
Age        | Commit message                                                                   | Author                    | Files | Lines
2012-05-16 | slub: remove unused argument of init_kmem_cache_node()                           | Joonsoo Kim               | 1     | -4/+4
2012-05-16 | slub: fix a possible memory leak                                                 | Joonsoo Kim               | 1     | -1/+1
2012-05-08 | slub: fix incorrect return type of get_any_partial()                             | Joonsoo Kim               | 1     | -1/+1
2012-03-28 | Merge branch 'akpm' (Andrew's patch-bomb)                                        | Linus Torvalds            | 1     | -1/+9
2012-03-28 | slub: only IPI CPUs that have per cpu obj to flush                               | Gilad Ben-Yossef          | 1     | -1/+9
2012-03-28 | Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi... | Linus Torvalds            | 1     | -5/+21
2012-03-21 | cpuset: mm: reduce large amounts of memory barrier related damage v3             | Mel Gorman                | 1     | -15/+25
2012-02-18 | slub: per cpu partial statistics change                                          | Alex Shi                  | 1     | -3/+9
2012-02-10 | slub: include include for prefetch                                               | Christoph Lameter         | 1     | -0/+1
2012-02-06 | slub: Do not hold slub_lock when calling sysfs_slab_add()                        | Christoph Lameter         | 1     | -1/+2
2012-01-24 | slub: prefetch next freelist pointer in slab_alloc()                             | Eric Dumazet              | 1     | -1/+9
2012-01-12 | mm,x86,um: move CMPXCHG_DOUBLE config option                                     | Heiko Carstens            | 1     | -3/+6
2012-01-12 | mm,slub,x86: decouple size of struct page from CONFIG_CMPXCHG_LOCAL              | Heiko Carstens            | 1     | -3/+3
2012-01-11 | Merge branch 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/gi... | Linus Torvalds            | 1     | -29/+48
2012-01-11 | Merge branch 'slab/urgent' into slab/for-linus                                   | Pekka Enberg              | 1     | -1/+3
2012-01-10 | slub: min order when debug_guardpage_minorder > 0                                | Stanislaw Gruszka         | 1     | -0/+3
2012-01-10 | slub: disallow changing cpu_partial from userspace for debug caches              | David Rientjes            | 1     | -0/+2
2012-01-09 | Merge branch 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/pe... | Linus Torvalds            | 1     | -3/+3
2012-01-04 | x86: Fix and improve cmpxchg_double{,_local}()                                   | Jan Beulich               | 1     | -2/+2
2011-12-22 | percpu: Remove irqsafe_cpu_xxx variants                                          | Christoph Lameter         | 1     | -3/+3
2011-12-13 | slub: add missed accounting                                                      | Shaohua Li                | 1     | -2/+5
2011-12-13 | slub: Extract get_freelist from __slab_alloc                                     | Christoph Lameter         | 1     | -25/+32
2011-12-13 | slub: Switch per cpu partial page support off for debugging                      | Christoph Lameter         | 1     | -1/+3
2011-12-13 | slub: fix a possible memleak in __slab_alloc()                                   | Eric Dumazet              | 1     | -0/+5
2011-11-27 | slub: add missed accounting                                                      | Shaohua Li                | 1     | -2/+5
2011-11-27 | Merge branch 'slab/urgent' into slab/next                                        | Pekka Enberg              | 1     | -16/+26
2011-11-24 | slub: avoid potential NULL dereference or corruption                             | Eric Dumazet              | 1     | -10/+11
2011-11-24 | slub: use irqsafe_cpu_cmpxchg for put_cpu_partial                                | Christoph Lameter         | 1     | -1/+1
2011-11-16 | slub: add taint flag outputting to debug paths                                   | Dave Jones                | 1     | -1/+1
2011-11-15 | slub: move discard_slab out of node lock                                         | Shaohua Li                | 1     | -4/+12
2011-11-15 | slub: use correct parameter to add a page to partial list tail                   | Shaohua Li                | 1     | -1/+2
2011-10-31 | lib/string.c: introduce memchr_inv()                                             | Akinobu Mita              | 1     | -45/+2
2011-10-26 | Merge branches 'slab/next' and 'slub/partial' into slab/for-linus                | Pekka Enberg              | 1     | -166/+392
2011-09-27 | slub: Discard slab page when node partial > minimum partial number               | Alex Shi                  | 1     | -1/+1
2011-09-27 | slub: correct comments error for per cpu partial                                 | Alex Shi                  | 1     | -1/+1
2011-09-27 | mm: restrict access to slab files under procfs and sysfs                         | Vasiliy Kulikov           | 1     | -3/+4
2011-09-19 | Merge branch 'slab/urgent' into slab/next                                        | Pekka Enberg              | 1     | -10/+12
2011-09-13 | slub: Code optimization in get_partial_node()                                    | Alex,Shi                  | 1     | -4/+2
2011-08-27 | slub: explicitly document position of inserting slab to partial list             | Shaohua Li                | 1     | -6/+6
2011-08-27 | slub: add slab with one free object to partial list tail                         | Shaohua Li                | 1     | -1/+1
2011-08-19 | slub: per cpu cache for partial pages                                            | Christoph Lameter         | 1     | -47/+292
2011-08-19 | slub: return object pointer from get_partial() / new_slab().                     | Christoph Lameter         | 1     | -60/+73
2011-08-19 | slub: pass kmem_cache_cpu pointer to get_partial()                               | Christoph Lameter         | 1     | -15/+15
2011-08-19 | slub: Prepare inuse field in new_slab()                                          | Christoph Lameter         | 1     | -3/+2
2011-08-19 | slub: Remove useless statements in __slab_alloc                                  | Christoph Lameter         | 1     | -4/+0
2011-08-19 | slub: free slabs without holding locks                                           | Christoph Lameter         | 1     | -13/+13
2011-08-09 | slub: Fix partial count comparison confusion                                     | Christoph Lameter         | 1     | -1/+1
2011-08-09 | slub: fix check_bytes() for slub debugging                                       | Akinobu Mita              | 1     | -1/+1
2011-08-09 | slub: Fix full list corruption if debugging is on                                | Christoph Lameter         | 1     | -2/+4
2011-07-31 | slub: use print_hex_dump                                                         | Sebastian Andrzej Siewior | 1     | -35/+9