path: root/mm
Age         Commit message  (Author, files changed, lines -/+)
2007-07-05  Fix slab redzone alignment  (David Woodhouse, 1 file, -9/+23)

Commit b46b8f19c9cd435ecac4d9d12b39d78c137ecd66 fixed a couple of bugs by switching the redzone to 64 bits. Unfortunately, it neglected to ensure that the _second_ redzone, after the slab object, is aligned correctly. This caused illegal instruction faults on sparc32, which for some reason not entirely clear to me are not trapped and fixed up. Two things need to be done to fix this:

 - increase the object size, rounding up to alignof(long long) so that the second redzone can be aligned correctly.
 - If SLAB_STORE_USER is set but alignof(long long)==8, allow a full 64 bits of space for the user word at the end of the buffer, even though we may not _use_ the whole 64 bits.

This patch should be a no-op on any 64-bit architecture or any 32-bit architecture where alignof(long long) == 4. Of the others, it's tested on ppc32 by myself and a very similar patch was tested on sparc32 by Mark Fortescue, who reported the new problem.

Also, fix the conditions for FORCED_DEBUG, which hadn't been adjusted to the new sizes. Again noticed by Mark.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

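As a minimal sketch of the first point (not the committed diff; ALIGN is the standard kernel rounding helper, the surrounding variable names are hypothetical):

    /*
     * Round the object size up to the alignment of a 64-bit word so
     * that the second redzone, placed immediately after the object,
     * can be accessed as an aligned unsigned long long.
     */
    if (flags & SLAB_RED_ZONE)
        size = ALIGN(size, __alignof__(unsigned long long));
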
2007-07-03  SLUB: Make lockdep happy by not calling add_partial with interrupts enabled during bootstrap  (Christoph Lameter, 1 file, -2/+6)

If we move the local_irq_enable() to the end of the function, then add_partial() in early_kmem_cache_node_alloc() will be called with interrupts disabled, just as during regular operation. This makes lockdep happy.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Tested-by: Andre Noll <maan@systemlinux.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-07-01  SLAB: remove WARN_ON_ONCE for zero sized objects for 2.6.22 release  (Christoph Lameter, 1 file, -1/+0)

We agreed to remove the WARN_ON_ONCE before 2.6.22 is released.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-28  mm: kill validate_anon_vma to avoid mapcount BUG  (Hugh Dickins, 1 file, -23/+1)

validate_anon_vma gave a useful check on the integrity of the anon_vma list when Andrea was developing obj rmap; but it was not enabled in SLES9 itself, nor in mainline, until Nick changed commented-out RMAP_DEBUG to configurable CONFIG_DEBUG_VM in 2.6.17. Now Petr Vandrovec reports that its BUG_ON(mapcount > 100000) can easily crash a CONFIG_DEBUG_VM=y system.

That limit was just an arbitrary number to protect against an infinite loop. We could raise it to something enormous (depending on sizeof struct vma and size of memory?); but I rather think validate_anon_vma has outlived its usefulness, and is better just removed - which gives a magnificent performance boost to anything like Petr's test program ;)

Of course, a very long anon_vma list is bad news for preemption latency, and I believe there has been one recent report of such: let's not forget that, but validate_anon_vma only makes it worse not better.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Petr Vandrovec <petr@vmware.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Cc: Andrea Arcangeli <andrea@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-24  SLUB: fix behavior if the text output of list_locations overflows PAGE_SIZE  (Christoph Lameter, 1 file, -2/+4)

If slabs are allocated or freed from a large set of call sites (typical for the kmalloc area) then we may create more output than fits into a single page, and sysfs only gives us one page. The output should be truncated. This patch fixes the checks to do the truncation properly.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

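The truncation pattern is roughly the following (a sketch with hypothetical names, not the committed check):

    /*
     * sysfs hands us exactly one page of output space; stop adding
     * records while there is still room to finish the current line.
     */
    if (len > PAGE_SIZE - 100)
        break;
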
2007-06-21  [PARISC] Handle wrapping in expand_upwards()  (Helge Deller, 1 file, -2/+7)

Function expand_upwards() did not guard against wrapping around to address 0. This fixes the adjtimex02 testcase from the Linux Test Project on a 32-bit PARISC kernel.

[expand_upwards is only used on parisc and ia64; it looks like it does the right thing on both. --kyle]

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: Tony Luck <tony.luck@intel.com>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>

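The class of check involved looks like this (a sketch, not the committed code; variable names hypothetical):

    unsigned long new_end = address + PAGE_SIZE;

    /* Unsigned arithmetic wraps: if the grown end is numerically
     * smaller than the start, we wrapped past address 0. */
    if (new_end < address)
        return -ENOMEM;
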
2007-06-16  SLUB: minimum alignment fixes  (Christoph Lameter, 1 file, -5/+15)

If ARCH_KMALLOC_MINALIGN is set to a value greater than 8 (SLUB's smallest kmalloc cache), then SLUB may generate duplicate slabs in sysfs (yes, again) because the object size is padded to reach ARCH_KMALLOC_MINALIGN. Thus the sizes of the small slabs are all the same.

No arch other than mips sets ARCH_KMALLOC_MINALIGN larger than 8, and mips for some reason wants a 128-byte alignment. This patch increases the size of the smallest cache if ARCH_KMALLOC_MINALIGN is greater than 8. In that case more and more of the smallest caches are disabled.

If we do that, then the count of the active general caches that is displayed on boot is no longer correct, since we may skip elements of the kmalloc array. So count them separately.

This approach was tested by Havard yesterday.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-16  Rework ptep_set_access_flags and fix sun4c  (Benjamin Herrenschmidt, 2 files, -10/+10)

Some changes done a while ago to avoid pounding on ptep_set_access_flags and update_mmu_cache in some race situations break sun4c, which requires update_mmu_cache() to always be called on minor faults.

This patch reworks ptep_set_access_flags() semantics, implementations and callers so that it is now responsible for returning whether an update is necessary or not (basically whether the PTE actually changed). This allows fixing the sparc implementation to always return 1 on sun4c.

[akpm@linux-foundation.org: fixes, cleanups]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: David Miller <davem@davemloft.net>
Cc: Mark Fortescue <mark@mtfhpc.demon.co.uk>
Acked-by: William Lee Irwin III <wli@holomorphy.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

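The reworked contract can be sketched as the generic form such an implementation takes (a sketch; per-architecture versions differ):

    int ptep_set_access_flags(struct vm_area_struct *vma,
                              unsigned long address, pte_t *ptep,
                              pte_t entry, int dirty)
    {
        int changed = !pte_same(*ptep, entry);

        if (changed) {
            set_pte_at(vma->vm_mm, address, ptep, entry);
            flush_tlb_page(vma, address);
        }
        /* Caller runs update_mmu_cache() only when we return 1;
         * sun4c simply always returns 1. */
        return changed;
    }
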
2007-06-16  SLUB slab validation: Alloc while interrupts are disabled must use GFP_ATOMIC  (Christoph Lameter, 1 file, -1/+1)

The data structure to manage the information gathered about functions allocating and freeing objects is allocated when the list_lock has already been taken. We need to allocate with GFP_ATOMIC instead of GFP_KERNEL.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-15  mm: Fix memory/cpu hotplug section mismatch and oops.  (Paul Mundt, 1 file, -1/+1)

When building with memory hotplug enabled and cpu hotplug disabled, we end up with the following section mismatch:

  WARNING: mm/built-in.o(.text+0x4e58): Section mismatch: reference to
  .init.text: (between 'free_area_init_node' and '__build_all_zonelists')

This happens as a result of:

    free_area_init_node()
      -> free_area_init_core()
         -> zone_pcp_init()      <-- all __meminit up to this point
            -> zone_batchsize()  <-- marked as __cpuinit

This happens because CONFIG_HOTPLUG_CPU=n sets __cpuinit to __init, but CONFIG_MEMORY_HOTPLUG=y unsets __meminit.

Changing zone_batchsize() to __devinit fixes this. __devinit is the only thing that is common between CONFIG_HOTPLUG_CPU=y and CONFIG_MEMORY_HOTPLUG=y. In the long run, perhaps this should be moved to another section identifier completely. Without this, memory hot-add of offline nodes (via hotadd_new_pgdat()) will oops if CPU hotplug is not also enabled.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

--
 mm/page_alloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

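Per the description, the substance of the fix is a one-word annotation change (sketch; the function lives in mm/page_alloc.c):

    /* __devinit code is kept whenever either CPU or memory hotplug
     * is enabled, so the call from __meminit code is always valid. */
    static int __devinit zone_batchsize(struct zone *zone);
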
2007-06-08  Move three functions that are only needed for CONFIG_MEMORY_HOTPLUG into the appropriate #ifdef  (Stephen Rothwell, 1 file, -21/+21)

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Andy Whitcroft <apw@shadowen.org>
Cc: Badari Pulavarty <pbadari@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-08  SLUB: return ZERO_SIZE_PTR for kmalloc(0)  (Christoph Lameter, 1 file, -8/+18)

Instead of returning the smallest available object, return ZERO_SIZE_PTR. A ZERO_SIZE_PTR can be legitimately used as an object pointer as long as it is not dereferenced. The dereference of ZERO_SIZE_PTR causes a distinctive fault. kfree can handle a ZERO_SIZE_PTR in the same way as NULL.

This enables functions to use zero-sized objects, e.g. with n = number of objects:

    objects = kmalloc(n * sizeof(object), GFP_KERNEL);
    for (i = 0; i < n; i++)
        objects[i].x = y;
    kfree(objects);

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

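The mechanism is a small non-NULL constant pointing into the (never mapped) first page; the sketch below reflects the approach described, close to, if not exactly, the committed form:

    /* A legal pointer value that faults distinctively if dereferenced,
     * since page 0 is never mapped. */
    #define ZERO_SIZE_PTR ((void *)16)

    /* In kfree() (sketch): treat it exactly like NULL. */
    if (unlikely(x == ZERO_SIZE_PTR || !x))
        return;
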
2007-06-08  slab: fix alien cache handling  (Christoph Lameter, 1 file, -1/+1)

cache_free_alien must be called regardless of whether we use alien caches or not. cache_free_alien() will do the right thing if there are no alien caches available.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Acked-by: Pekka J Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-08  mount -t tmpfs -o mpol=: check nodes online  (Hugh Dickins, 1 file, -0/+2)

Randy Dunlap reports that a tmpfs mounted with NUMA mpol= specifying an offline node crashes as soon as data is allocated upon it. Now restrict it to online nodes, where before it only restricted to MAX_NUMNODES.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Robin Holt <holt@sgi.com>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: Andi Kleen <ak@suse.de>
Tested-and-acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-01  SLUB: fix locking for hotplug callbacks  (Christoph Lameter, 1 file, -1/+14)

Hotplug callbacks are performed with interrupts enabled. SLUB requires interrupts to be disabled for flushing caches.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Michal Piotrowski <michal.k.k.piotrowski@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-01  memory hotplug: fix unnecessary calling of init_currently_empty_zone()  (Yasunori Goto, 1 file, -1/+1)

zone->present_pages is updated in online_pages(). But __add_zone() can be called twice or more before online_pages() is called, so init_currently_empty_zone() can be called unnecessarily many times. This causes a memory leak of the zone's wait_table.

Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-06-01  x86_64: allocate sparsemem memmap above 4G  (Zou Nan hai, 1 file, -0/+11)

On systems with a huge amount of physical memory, the VFS cache and the memory memmap may eat all available system memory under 4G; the system may then fail to allocate the swiotlb bounce buffer.

There was a fix for this issue in arch/x86_64/mm/numa.c, but that fix does not cover the sparsemem model. This patch adds the fix to the sparsemem model by first trying to allocate the memmap above 4G.

Signed-off-by: Zou Nan hai <nanhai.zou@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Andi Kleen <ak@suse.de>
Cc: <stable@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

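The "allocate above 4G first" idea amounts to a bootmem allocation with an explicit goal address (a sketch of a hypothetical call site; the goal argument of __alloc_bootmem_node is a placement preference that bootmem falls back from when it cannot be met):

    /* Prefer placing the memmap above 4GB so lowmem stays available
     * for the swiotlb bounce buffer. */
    map = __alloc_bootmem_node(pgdat, size, PAGE_SIZE,
                               0x100000000UL /* goal: 4G */);
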
2007-05-31  m68k: discontinuous memory support  (Roman Zippel, 1 file, -1/+1)

Fix support for discontinuous memory.

Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-31  SLUB: Fix NUMA / SYSFS bootstrap issue  (Christoph Lameter, 1 file, -0/+7)

We need this patch in ASAP. It fixes the mysterious hang that remained on some particular configurations with lockdep on, after the first fix that moved the #ifdef CONFIG_SLUB_DEBUG to the right location. See http://marc.info/?t=117963072300001&r=1&w=2

The kmem_cache_node cache is very special because it is needed for NUMA bootstrap. Under certain conditions (for example, if lockdep is enabled and significantly increases the size of spinlock_t) the structure may become exactly the same size as one of the larger caches in the kmalloc array. That early during bootstrap we cannot perform merging properly; the unique id for the kmem_cache_node cache will then match one of the kmalloc array, and sysfs will complain about a duplicate directory entry. All of this occurs while the console is not yet fully operational, so boot may appear to fail silently.

The kmem_cache_node cache is also special in that the main allocation function is not operational during early bootstrap, so we have to run our own small special alloc function during early boot. It is likewise never freed. We really do not want any merging on that cache. Set the refcount to -1 and forbid merging of slabs that have a negative refcount.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

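The merge-prevention trick reads roughly like this (a sketch; the real unmergeable test has more conditions, and the variable naming here is hypothetical):

    /* At bootstrap: the kmem_cache_node cache must keep its own
     * identity, so give it a refcount no mergeable cache can have. */
    kmem_cache_node_cache->refcount = -1;

    /* In the merge check: a negative refcount means "never merge". */
    if (s->refcount < 0)
        return 1;    /* unmergeable */
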
2007-05-23  SLUB Debug: fix check for super-sized slabs (>512k 64-bit, >256k 32-bit)  (Christoph Lameter, 1 file, -1/+1)

The check for super-sized slabs, where we can no longer move the free pointer behind the object for debugging purposes etc., is accessing a field that is not set up yet. We must use objsize here, since the size of the slab has not been determined yet.

The effect of this is that a global slab shrink via "slabinfo -s" will show errors about offsets being wrong if booted with slub_debug. Potentially there are other troubles with huge slabs under slub_debug, because the calculated free pointer offset is truncated.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-23  fix unused setup_nr_node_ids  (Miklos Szeredi, 1 file, -20/+25)

  mm/page_alloc.c:931: warning: 'setup_nr_node_ids' defined but not used

This is now the only (!) compiler warning I get in my UML build :)

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-23  SLUB Debug: Fix object size calculation  (Christoph Lameter, 1 file, -1/+1)

The object size calculation is wrong if !CONFIG_SLUB_DEBUG, because the #ifdef CONFIG_SLUB_DEBUG is now switching off the size adjustments for DESTROY_BY_RCU and ctor.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-21  Merge git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-fix  (Linus Torvalds, 3 files, -4/+4)

* git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-fix:
    mm/slab: fix section mismatch warning
    mm: fix section mismatch warnings
    init/main: use __init_refok to fix section mismatch
    kbuild: introduce __init_refok/__initdata_refok to supress section mismatch warnings
    all-archs: consolidate .data section definition in asm-generic
    all-archs: consolidate .text section definition in asm-generic
    kbuild: add "Section mismatch" warning whitelist for powerpc
    kbuild: make better section mismatch reports on i386, arm and mips
    kbuild: make modpost section warnings clearer
    kconfig: search harder for curses library in check-lxdialog.sh
    kbuild: include limits.h in sumversion.c for PATH_MAX
    powerpc: Fix the MODALIAS generation in modpost for of devices

2007-05-21  Detach sched.h from mm.h  (Alexey Dobriyan, 5 files, -0/+15)

The first thing mm.h does is include sched.h, solely for the can_do_mlock() inline function, which has a "current" dereference inside. By dealing with can_do_mlock(), mm.h can be detached from sched.h, which is good. See below why.

This patch
a) removes the unconditional inclusion of sched.h from mm.h,
b) makes can_do_mlock() a normal function in mm/mlock.c,
c) exports can_do_mlock() to not break compilation,
d) adds sched.h inclusions back to files that were getting it indirectly,
e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were getting them indirectly.

Net result is:
a) mm.h users get less code to open, read, preprocess, parse, ... if they don't need sched.h,
b) sched.h stops being a dependency for a significant number of files: on x86_64 allmodconfig, touching sched.h results in a recompile of 4083 files; after the patch it's only 3744 (-8.3%).

Cross-compile tested on all arm defconfigs, all mips defconfigs, all powerpc defconfigs, alpha alpha-up arm i386 i386-up i386-defconfig i386-allnoconfig ia64 ia64-up m68k mips parisc parisc-up powerpc powerpc-up s390 s390-up sparc sparc-up sparc64 sparc64-up um-x86_64 x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig, as well as my two usual configs.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

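A sketch of the moved function, following the era's mlock logic (mlock is permitted with the CAP_IPC_LOCK capability or a nonzero RLIMIT_MEMLOCK):

    /* mm/mlock.c */
    int can_do_mlock(void)
    {
        if (capable(CAP_IPC_LOCK))
            return 1;
        if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
            return 1;
        return 0;
    }
    EXPORT_SYMBOL(can_do_mlock);
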
2007-05-19  mm/slab: fix section mismatch warning  (Sam Ravnborg, 1 file, -1/+1)

Use the new __init_refok marker to avoid the section mismatch warning from slab.c.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

2007-05-19  mm: fix section mismatch warnings  (Sam Ravnborg, 2 files, -3/+3)

modpost had two cases hardcoded for mm/. Shift over to __init_refok and kill the hardcoded function names in modpost. This has the drawback that the functions will always be kept, no matter the configuration. With the previous code the functions were placed in the init section if the configuration allowed it.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>

2007-05-17  mm: more rmap checking  (Nick Piggin, 2 files, -3/+57)

Re-introduce rmap verification patches that Hugh removed when he removed PG_map_lock. PG_map_lock actually isn't needed to synchronise access to anonymous pages, because PG_locked and PTL together already do.

These checks were important in discovering and fixing a rare rmap corruption in SLES9.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  Make __vunmap static  (Benjamin Herrenschmidt, 1 file, -1/+1)

__vunmap doesn't seem to be used outside of mm/vmalloc.c, and has no prototype in any header, so let's make it static.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  Slab allocators: define common size limitations  (Christoph Lameter, 1 file, -17/+2)

Currently we have a maze of configuration variables that determine the maximum slab size. Worst of all, it seems to vary between SLAB and SLUB.

So define a common maximum size for kmalloc. For convenience's sake we use the maximum size ever supported, which is 32 MB. We limit the maximum size to a lower value if MAX_ORDER does not allow such large allocations.

For many architectures this patch will have the effect of adding large kmalloc sizes. x86_64 adds 5 new kmalloc sizes, so a small amount of memory will be needed for these caches (contemporary SLAB has dynamically sizeable node and cpu structures, so the waste is less than in the past). Most architectures will then be able to allocate objects with sizes up to MAX_ORDER.

We have had repeated breakage (in fact whenever we doubled the number of supported processors) on IA64, because one or the other struct grew beyond what the slab allocators supported. This will avoid future issues and e.g. avoid fixes for 2k and 4k cpu support.

CONFIG_LARGE_ALLOCS is no longer necessary, so drop it. This also fixes sparc64 with SLAB.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

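The shared cap looks roughly like this (constants reconstructed from the description; the exact spelling in slab.h may differ):

    /*
     * The largest kmalloc size supported is 32 MB (2^25), clamped
     * down when MAX_ORDER does not permit allocations that large.
     */
    #define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
                (MAX_ORDER + PAGE_SHIFT - 1) : 25)
    #define KMALLOC_MAX_SIZE (1UL << KMALLOC_SHIFT_HIGH)
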
2007-05-17  SLUB: Simplify debug code  (Christoph Lameter, 1 file, -55/+57)

Consolidate functionality into the #ifdef section. Extract tracing into one subroutine. Move object debug processing into the #ifdef section so that the code in __slab_alloc and __slab_free becomes minimal. Reduce the number of functions we need to provide stubs for in the !SLUB_DEBUG case.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  Remove SLAB_CTOR_CONSTRUCTOR  (Christoph Lameter, 5 files, -19/+13)

SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Steven French <sfrench@us.ibm.com>
Cc: Michael Halcrow <mhalcrow@us.ibm.com>
Cc: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Miklos Szeredi <miklos@szeredi.hu>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Dave Kleikamp <shaggy@austin.ibm.com>
Cc: Trond Myklebust <trond.myklebust@fys.uio.no>
Cc: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Anton Altaparmakov <aia21@cantab.net>
Cc: Mark Fasheh <mark.fasheh@oracle.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Jan Kara <jack@ucw.cz>
Cc: David Chinner <dgc@sgi.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  SLUB: Do our own flags based on PG_active and PG_error  (Christoph Lameter, 1 file, -14/+14)

The atomicity when handling flags in SLUB is not necessary, since both flags used by SLUB are not updated in a racy way. Flag updates are either done during slab creation or destruction, or under slab_lock. Some of these flags do not have the non-atomic variants that we need, so define our own.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  slab: warn on zero-length allocations  (Christoph Lameter, 1 file, -0/+1)

slub warns on this, and we're working on making kmalloc(0) return NULL. Let's make slab warn as well so our testers detect such callers more rapidly.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  SLUB: Define functions for cpu slab handling instead of using PageActive  (Christoph Lameter, 1 file, -19/+38)

Use inline functions to access the per cpu bit. Introduce the notion of "freezing" a slab to make things more understandable.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  Slab allocators: Drop support for destructors  (Christoph Lameter, 3 files, -61/+16)

There is no user of destructors left. There is no reason why we should keep checking for destructor calls in the slab allocators. The RFC for this patch was discussed at http://marc.info/?l=linux-kernel&m=117882364330705&w=2

Destructors were mainly used for list management, which required them to take a spinlock. Taking a spinlock in a destructor is a bit risky, since the slab allocators may run the destructors anytime they decide a slab is no longer needed.

This patch drops destructor support. Any attempt to use a destructor will BUG().

Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-17  slob: implement RCU freeing  (Nick Piggin, 1 file, -7/+45)

The SLOB allocator should implement SLAB_DESTROY_BY_RCU correctly, because even on UP, RCU freeing semantics are not equivalent to simply freeing immediately. This also allows SLOB to be used on SMP.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Matt Mackall <mpm@selenic.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

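A sketch of how such deferral works (close to the shape described; helper names may differ from the committed code): the RCU bookkeeping lives at the tail of the object, and the real free happens in the RCU callback.

    struct slob_rcu {
        struct rcu_head head;
        int size;
    };

    static void kmem_rcu_free(struct rcu_head *head)
    {
        struct slob_rcu *slob_rcu = container_of(head, struct slob_rcu, head);

        slob_free(slob_rcu, slob_rcu->size);    /* the deferred free */
    }

    /* In kmem_cache_free(c, b) (sketch): */
    if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
        struct slob_rcu *slob_rcu = b + c->size - sizeof(struct slob_rcu);

        slob_rcu->size = c->size;
        call_rcu(&slob_rcu->head, kmem_rcu_free);
    } else
        slob_free(b, c->size);
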
2007-05-16  Fix: find_or_create_page skips cpuset memory spreading.  (Christoph Lameter, 1 file, -1/+2)

We call alloc_page where we should be calling __page_cache_alloc. __page_cache_alloc performs cpuset memory spreading; alloc_page does not. There is no reason that pages allocated via find_or_create_page should be exempt.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-16  slub: don't confuse ctor and dtor  (Hugh Dickins, 1 file, -1/+1)

kmem_cache_create() was swapping ctor and dtor in calling find_mergeable(): though it caused no bug, and probably never would, even if destructors are retained; but fix it so as not to generate anxiety ;)

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Cc: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-14  sh64: generic quicklist support.  (Paul Mundt, 1 file, -1/+1)

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

2007-05-11  consolidate generic_writepages and mpage_writepages  (Miklos Szeredi, 1 file, -19/+40)

Clean up massive code duplication between mpage_writepages() and generic_writepages(). The new generic function, write_cache_pages(), takes a function pointer argument, which will be called for each page to be written.

Maybe cifs_writepages() too can use this infrastructure, but I'm not touching that with a ten-foot pole. The upcoming page writeback support in fuse will also want this.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Acked-by: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

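The shape of the new helper, per the description (signature reconstructed; see mm/page-writeback.c of that era for the exact form):

    typedef int (*writepage_t)(struct page *page,
                               struct writeback_control *wbc, void *data);

    /* Walk the dirty pages of a mapping, calling writepage() on each. */
    int write_cache_pages(struct address_space *mapping,
                          struct writeback_control *wbc,
                          writepage_t writepage, void *data);
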
2007-05-11  VM statistics: Make timer deferrable  (Christoph Lameter, 1 file, -1/+1)

VM statistics updates do not matter if the kernel is in idle powersaving mode, so allow the timer to be deferred.

It would be better, though, if we could switch the timer between deferrable and nondeferrable based on differentials present. The timer would start out nondeferrable, and if we find that there were no updates in the last statistics interval then we would switch the timer to deferrable. If the timer later finds again that there are differentials, then go to nondeferrable again. And yet another way would be to run the timer shortly before going to idle.

The solution here means that the VM counters may be slightly off during idle, since differentials may still be pending while the timer is deferred.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-11  Bug in mm/thrash.c function grab_swap_token()  (Mika Kukkonen, 1 file, -3/+2)

The following bug was uncovered by compiling with the '-W' flag:

    CC      mm/thrash.o
  mm/thrash.c: In function ‘grab_swap_token’:
  mm/thrash.c:52: warning: comparison of unsigned expression < 0 is always false

The variable token_priority is unsigned, so decrementing first and then checking the result does not work; fixed by reversing the test, patch attached (compile tested only). I am not sure if likely() makes much sense in this new situation, but I'll let somebody else make a decision on that.

Signed-off-by: Mika Kukkonen <mikukkon@iki.fi>
Cc: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

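Boiled down, the bug pattern is the following (a sketch, not the committed diff):

    unsigned int token_priority;

    /* Broken: "--token_priority < 0" can never be true for an
     * unsigned type, so the exhaustion case was never detected.
     * Fixed: test before decrementing. */
    if (token_priority > 0)
        token_priority--;
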
2007-05-10  SLUB: remove nr_cpu_ids hack  (Christoph Lameter, 1 file, -3/+2)

This was in SLUB in order to head off trouble while the nr_cpu_ids functionality was not merged. It is merged now, so there is no need to still have this.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-10  early_pfn_to_nid needs to be __meminit  (Stephen Rothwell, 1 file, -1/+1)

It is referenced by memmap_init_zone (which is __meminit) via the early_pfn_in_nid macro when CONFIG_NODES_SPAN_OTHER_NODES is set (which basically means PowerPC 64). This removes a section mismatch warning in those circumstances.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-10  slub: support concurrent local and remote frees and allocs on a slab  (Christoph Lameter, 1 file, -36/+118)

Avoid atomic overhead in slab_alloc and slab_free.

SLUB needs to use the slab_lock for the per-cpu slabs to synchronize with potential kfree operations. This patch avoids that need by moving all free objects onto a lockless_freelist. The regular freelist continues to exist and will be used to free objects. So while we consume the lockless_freelist, the regular freelist may build up objects. If we are out of objects on the lockless_freelist, then we may check the regular freelist. If it has objects, then we move those over to the lockless_freelist and do this again.

There is a significant savings in terms of atomic operations that have to be performed. We can even free directly to the lockless_freelist if we know that we are running on the same processor. So this speeds up short-lived objects; they may be allocated and freed without taking the slab_lock. This is particularly good for netperf.

In order to maximize the effect of the new faster hotpath, we extract the hottest performance pieces into inlined functions. These are then inlined into kmem_cache_alloc and kmem_cache_free, so hotpath allocation and freeing no longer requires a subroutine call within SLUB.

[I am not sure that it is worth doing this, because it changes the easy-to-read structure of slub just to reduce atomic ops. However, there is someone out there with a benchmark on 4-way and 8-way processor systems that seems to show a 5% regression vs. SLAB; seems that the regression is due to increased atomic operations use vs. SLAB in SLUB. I wonder if this is applicable or discernable at all in a real workload?]

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

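The allocation fast path then reduces to a lockless pop (a simplified sketch; the real code also handles NUMA node matching and the debug case):

    /* Fast path: pop from the per-cpu lockless freelist; no slab_lock
     * and no atomic operations needed. */
    object = page->lockless_freelist;
    if (likely(object)) {
        page->lockless_freelist = object[page->offset];
        return object;
    }
    /* Slow path: take slab_lock and refill lockless_freelist from the
     * regular freelist, which remote frees may have grown meanwhile. */
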
2007-05-09  Merge master.kernel.org:/pub/scm/linux/kernel/git/lethal/sh-2.6  (Linus Torvalds, 1 file, -1/+1)

* master.kernel.org:/pub/scm/linux/kernel/git/lethal/sh-2.6:
    sh: Fix stacktrace simplification fallout.
    sh: SH7760 DMABRG support.
    sh: clockevent/clocksource/hrtimers/nohz TMU support.
    sh: Truncate MAX_ACTIVE_REGIONS for the common case.
    rtc: rtc-sh: Fix rtc_dev pointer for rtc_update_irq().
    sh: Convert to common die chain.
    sh: Wire up utimensat syscall.
    sh: landisk mv_nr_irqs definition.
    sh: Fixup ndelay() xloops calculation for alternate HZ.
    sh: Add 32-bit opcode feature CPU flag.
    sh: Fix PC adjustments for varying opcode length.
    sh: Support for SH-2A 32-bit opcodes.
    sh: Kill off redundant __div64_32 symbol export.
    sh: Share exception vector table for SH-3/4.
    sh: Always define TRAPA_BUG_OPCODE.
    sh: __GFP_REPEAT for pte allocations, too.
    rtc: rtc-sh: Fix up dev_dbg() warnings.
    sh: generic quicklist support.

2007-05-09  Fix a bad error case handling in read_cache_page_async()  (David Howells, 1 file, -3/+3)

Commit 6fe6900e1e5b6fa9e5c59aa5061f244fe3f467e2 introduced a nasty bug in read_cache_page_async(). It added a "mark_page_accessed(page)" at the final return path in read_cache_page_async(). But in error cases, 'page' holds the error code, and you can't mark it accessed.

[ and Glauber de Oliveira Costa points out that we can use a return instead of adding more goto's ]

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2007-05-09  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial  (Linus Torvalds, 1 file, -1/+1)

* git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (25 commits)
    sound: convert "sound" subdirectory to UTF-8
    MAINTAINERS: Add cxacru website/mailing list
    include files: convert "include" subdirectory to UTF-8
    general: convert "kernel" subdirectory to UTF-8
    documentation: convert the Documentation directory to UTF-8
    Convert the toplevel files CREDITS and MAINTAINERS to UTF-8.
    remove broken URLs from net drivers' output
    Magic number prefix consistency change to Documentation/magic-number.txt
    trivial: s/i_sem /i_mutex/
    fix file specification in comments
    drivers/base/platform.c: fix small typo in doc
    misc doc and kconfig typos
    Remove obsolete fat_cvf help text
    Fix occurrences of "the the "
    Fix minor typoes in kernel/module.c
    Kconfig: Remove reference to external mqueue library
    Kconfig: A couple of grammatical fixes in arch/i386/Kconfig
    Correct comments in genrtc.c to refer to correct /proc file.
    Fix more "deprecated" spellos.
    Fix "deprecated" typoes.
    ...

Fix trivial comment conflict in kernel/relay.c.

2007-05-09  Move remote node draining out of slab allocators  (Christoph Lameter, 4 files, -126/+63)

Currently the slab allocators contain callbacks into the page allocator to perform the draining of pagesets on remote nodes. This requires SLUB to have a whole subsystem in order to be compatible with SLAB. Moving node draining out of the slab allocators avoids a section of code in SLUB.

Move the node draining so that it is done when the vm statistics are updated. At that point we are already touching all the cachelines with the pagesets of a processor.

Add an expire counter there. If we have to update per-zone or global vm statistics, then assume that the pageset will require subsequent draining. The expire counter will be decremented on each vm stats update pass until it reaches zero; then we will drain one batch from the pageset. The draining will cause vm counter updates, which will then cause another expiration, until the pcp is empty. So we will drain a batch every 3 seconds.

Note that remote node draining is a somewhat esoteric feature that is required on large NUMA systems, because otherwise significant portions of system memory can become trapped in pcp queues. The number of pcps is determined by the number of processors and nodes in a system. A system with 4 processors and 2 nodes has 8 pcps, which is okay. But a system with 1024 processors and 512 nodes has 512k pcps, with a high potential for large amounts of memory being caught in them.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

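Sketched as part of the per-cpu vmstat update pass (field and helper names approximate, not the committed code):

    /* Once per statistics interval, per cpu and remote zone: */
    if (counters_were_updated)
        p->expire = 3;        /* expect more traffic; hold off */
    else if (p->expire && --p->expire == 0 && p->pcp.count)
        drain_zone_pages(zone, &p->pcp);    /* drain one batch */
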
2007-05-09  Make vm statistics update interval configurable  (Christoph Lameter, 1 file, -1/+3)

Make it configurable. The code in mm now makes the vm statistics interval independent from the cache reaper; use that opportunity to make it configurable.

Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

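Per the description, the interval becomes a tunable that feeds the periodic vmstat work (a sketch; that era's code exposed it as a sysctl, and the exact wiring may differ):

    int sysctl_stat_interval __read_mostly = HZ;    /* default: 1 second */

    static void vmstat_update(struct work_struct *w)
    {
        refresh_cpu_vm_stats(smp_processor_id());
        schedule_delayed_work(&__get_cpu_var(vmstat_work),
                              sysctl_stat_interval);
    }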