path: root/arch/sh/mm/pmb.c
Age | Commit message | Author | Files | Lines
2012-03-28 | Disintegrate asm/system.h for SH | David Howells | 1 | -1/+0
Disintegrate asm/system.h for SH. Signed-off-by: David Howells <dhowells@redhat.com> cc: linux-sh@vger.kernel.org
2011-03-23 | sh: pmb: Use struct syscore_ops instead of sysdevs | Paul Mundt | 1 | -29/+14
This converts the PMB code over to use the new syscore_ops and kills off the old sysdev utilization, as per Rafael's example. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-10-14 | sh: Fix up PMB locking. | Paul Mundt | 1 | -16/+15
This first converts the PMB locking over to raw spinlocks, and secondly fixes up a nested locking issue that was triggering lockdep early on:

    swapper/0 is trying to acquire lock:
     (&pmbe->lock){......}, at: [<806be9bc>] pmb_init+0xf4/0x4dc

    but task is already holding lock:
     (&pmbe->lock){......}, at: [<806be98e>] pmb_init+0xc6/0x4dc

    other info that might help us debug this:
    1 lock held by swapper/0:
     #0: (&pmbe->lock){......}, at: [<806be98e>] pmb_init+0xc6/0x4dc

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
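The splat above is lockdep objecting to two locks of the same lock class (two different pmbe->lock instances) being held while entries are linked. A minimal sketch of the usual annotation for intentional same-class nesting; the names are illustrative, not the actual pmb.c code:

    #include <linux/spinlock.h>

    struct demo_entry {
        raw_spinlock_t lock;
        struct demo_entry *link;
    };

    /* Linking two entries means holding two locks of the same class;
     * SINGLE_DEPTH_NESTING tells lockdep the nesting is intentional. */
    static void demo_link(struct demo_entry *head, struct demo_entry *tail)
    {
        unsigned long flags;

        raw_spin_lock_irqsave(&head->lock, flags);
        raw_spin_lock_nested(&tail->lock, SINGLE_DEPTH_NESTING);

        head->link = tail;

        raw_spin_unlock(&tail->lock);
        raw_spin_unlock_irqrestore(&head->lock, flags);
    }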
2010-09-24 | sh: provide generic arch_debugfs_dir. | Paul Mundt | 1 | -3/+1
While sh previously had its own debugfs root, there now exists a common arch_debugfs_dir prototype, so we switch everything over to that. Presumably once more architectures start making use of this we'll be able to just kill off the stub kdebugfs wrapper. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-06-21 | arch/sh/mm: Eliminate a double lock | Julia Lawall | 1 | -1/+1
The function begins and ends with a read_lock. The latter is changed to a read_unlock.

A simplified version of the semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

    // <smpl>
    @locked@
    expression E1;
    position p;
    @@

    read_lock(E1@p,...);

    @r exists@
    expression x <= locked.E1;
    expression locked.E1;
    expression E2;
    identifier lock;
    position locked.p,p1,p2;
    @@

    *lock@p1 (E1@p,...);
    ... when != E1
        when != \(x = E2\|&x\)
    *lock@p2 (E1,...);
    // </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
Acked-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
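In plain C, the bug class that this semantic patch flags looks like the following hypothetical pair (not the actual arch/sh/mm code):

    #include <linux/spinlock.h>

    static DEFINE_RWLOCK(demo_lock);

    /* The bug pattern: the function "ends with a read_lock". */
    static void demo_buggy(void)
    {
        read_lock(&demo_lock);
        /* ... critical section ... */
        read_lock(&demo_lock);      /* typo: acquires again, never releases */
    }

    static void demo_fixed(void)
    {
        read_lock(&demo_lock);
        /* ... critical section ... */
        read_unlock(&demo_lock);    /* the one-word fix */
    }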
2010-05-11 | sh: Reject small mappings for PMB bolting. | Paul Mundt | 1 | -0/+2
The minimum section size for the PMB is 16M, so just always error out early if the specified size is too small. This permits us to unconditionally call in to pmb_bolt_mapping() with variable sizes without wasting a TLB and cache flush for the range. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
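A sketch of the described early rejection, with an illustrative function name and signature:

    #include <linux/types.h>
    #include <linux/errno.h>
    #include <linux/sizes.h>

    /* Reject undersized requests before any TLB or cache flushing. */
    static long demo_bolt_mapping(unsigned long vaddr, phys_addr_t phys,
                                  unsigned long size)
    {
        if (size < SZ_16M)          /* smallest PMB section */
            return -EINVAL;

        /* ... flush the range and program the PMB entries ... */
        return 0;
    }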
2010-04-26 | Merge branch 'sh/stable-updates' | Paul Mundt | 1 | -1/+0
Conflicts:
    arch/sh/kernel/dwarf.c
    drivers/dma/shdma.c

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-04-25 | sh: Do not try merging two 128MB PMB mappings | Matt Fleming | 1 | -1/+1
There is a logic error in pmb_merge() that means we will incorrectly try to merge two 128MB PMB mappings into one mapping. However, 256MB isn't a valid PMB map size and pmb_merge() will actually drop the second 128MB mapping. This patch allows my SDK7786 board to boot when configured with CONFIG_MEMORY_SIZE=0x10000000. Signed-off-by: Matt Fleming <matt@console-pimps.org>
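A sketch of the guard pmb_merge() needed; the size table matches the PMB's supported section sizes, but the helper name is made up:

    #include <linux/kernel.h>
    #include <linux/types.h>
    #include <linux/sizes.h>

    /* Section sizes the PMB hardware can map. */
    static const unsigned long demo_pmb_sizes[] = {
        SZ_16M, SZ_64M, SZ_128M, SZ_512M,
    };

    /* Merging may only proceed if the combined size is itself mappable;
     * 128MB + 128MB = 256MB is not in the table, so that merge is refused. */
    static bool demo_pmb_size_valid(unsigned long size)
    {
        int i;

        for (i = 0; i < ARRAY_SIZE(demo_pmb_sizes); i++)
            if (size == demo_pmb_sizes[i])
                return true;

        return false;
    }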
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 1 | -1/+0
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies.

percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following.

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there. ie. if only gfp is used, gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surrounding. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-23 | sh: Fix build after dynamic PMB rework | Matt Fleming | 1 | -0/+2
set_pmb_entry() is now only used by a function that is wrapped in #ifdef CONFIG_PM, so wrap set_pmb_entry() in CONFIG_PM too. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
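A sketch of the guard pattern described here (not the actual pmb.c contents):

    /* A helper whose only caller is PM code must sit behind the same
     * guard, or !CONFIG_PM builds warn about an unused static function
     * (fatal under -Werror). */
    #ifdef CONFIG_PM
    static void demo_set_pmb_entry(void)
    {
        /* ... reprogram the hardware PMB entry ... */
    }
    #endif /* CONFIG_PM */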
2010-03-23 | sh: Replace unsafe manipulation of MMUCR | Matt Fleming | 1 | -1/+1
Setting the TI in MMUCR causes all the TLB bits in MMUCR to be cleared. Unfortunately, the TLB wired bits are also cleared when setting the TI bit, causing any wired TLB entries to become unwired. Use local_flush_tlb_all() which implements TLB flushing in a safer manner by using the memory-mapped TLB registers. As each CPU has its own PMB the modifications in pmb_init() only affect the local CPU, so only flush the local CPU's TLB. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
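A sketch contrasting the two approaches; the register address and bit value are illustrative stand-ins for the real sh definitions:

    #include <linux/io.h>
    #include <asm/tlbflush.h>

    #define DEMO_MMUCR      ((void __iomem *)0xff000010)    /* illustrative */
    #define DEMO_MMUCR_TI   (1 << 2)                        /* illustrative */

    static void demo_flush_unsafe(void)
    {
        /* Read-modify-write of MMUCR: setting TI also clears the wired
         * bits, silently unwiring any wired TLB entries. */
        __raw_writel(__raw_readl(DEMO_MMUCR) | DEMO_MMUCR_TI, DEMO_MMUCR);
    }

    static void demo_flush_safe(void)
    {
        /* Flush via the memory-mapped TLB registers; wired entries are
         * preserved, and only the local CPU is touched. */
        local_flush_tlb_all();
    }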
2010-03-05 | sh: Move PMB debugfs entry initialization to later stage | Pawel Moll | 1 | -1/+1
... so the "sh_debugfs_root" is already available. Previously it wasn't, and as a result its path was "/sys/kernel/debug/pmb" instead of "/sys/kernel/debug/sh/pmb". Signed-off-by: Pawel Moll <pawel.moll@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-04 | sh: fix up MMU reset with variable PMB mapping sizes. | Paul Mundt | 1 | -6/+31
Presently we run into issues with the MMU resetting the CPU when variable sized mappings are employed. This takes a slightly more aggressive approach to keeping the TLB and cache state sane before establishing the mappings in order to cut down on races observed on SMP configurations. At the same time, we bump the VMA range up to the 0xb000...0xc000 range, as there still seems to be some undocumented behaviour in setting up variable mappings in the 0xa000...0xb000 range, resulting in reset by the TLB. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03 | sh: check for existing mappings for bolted PMB entries. | Paul Mundt | 1 | -44/+96
When entries are being bolted unconditionally it's possible that the boot loader has established mappings that fall within a range we don't want to clobber. Perform some basic validation to ensure that the new mapping is out of range before allowing the entry setup to take place. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 | sh: fixed virt/phys mapping helpers for PMB. | Paul Mundt | 1 | -46/+51
This moves the pmb_remap_caller() mapping logic out into pmb_bolt_mapping(), which enables us to establish fixed mappings in places such as the NUMA code. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02 | sh: make pmb iomapping configurable. | Paul Mundt | 1 | -0/+17
This plugs in an early_param for permitting transparent PMB-backed ioremapping to be enabled/disabled. For the time being, we use a default-disabled policy. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
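Such early_param hooks typically look like the following sketch; the option string and variable name are assumptions, not necessarily what the patch used:

    #include <linux/init.h>
    #include <linux/string.h>

    static int demo_pmb_iomapping_enabled;    /* default-disabled policy */

    static int __init demo_early_pmb(char *p)
    {
        if (!p)
            return 0;

        if (strstr(p, "iomap"))
            demo_pmb_iomapping_enabled = 1;

        return 0;
    }
    early_param("pmb", demo_early_pmb);

With a hook of this shape, booting with something like "pmb=iomap" on the kernel command line would flip the policy on.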
2010-03-02 | sh: reworked dynamic PMB mapping. | Paul Mundt | 1 | -98/+158
This implements a fairly significant overhaul of the dynamic PMB mapping code. The primary change here is that the PMB gets its own VMA that follows the uncached mapping and we attempt to be a bit more intelligent with dynamic sizing, multi-entry mapping, and so forth. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-01 | sh: No need to explicitly include <linux/rwlock.h>. | Robert P. J. Day | 1 | -1/+0
Since <linux/spinlock.h> already includes <linux/rwlock.h>, and the latter file will warn about not having included the former file anyway, there is no value in including rwlock.h explicitly. Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 | sh: Merge legacy and dynamic PMB modes. | Paul Mundt | 1 | -36/+207
This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing.

With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far off locations in the setup code the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18 | sh: Use uncached I/O helpers in PMB setup. | Paul Mundt | 1 | -27/+19
The PMB code is an example of something that spends an absurd amount of time running uncached when only a couple of operations really need to be. This switches over to the shiny new uncached helpers, permitting us to spend far more time running cached. Additionally, MMUCR twiddling is perfectly safe from cached space given that it's paired with a control register barrier, so fix that up, too. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 | sh: PMB locking overhaul. | Paul Mundt | 1 | -38/+114
This implements some locking for the PMB code. A high level rwlock is added for dealing with rw accesses on the entry map while a per-entry data structure spinlock is added to deal with the PMB entry changing out from underneath us. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 | sh: Fix up dynamically created write-through PMB mappings. | Paul Mundt | 1 | -24/+32
Write-through PMB mappings still require the cache bit to be set, even if they're to be flagged with a different cache policy and bufferability bit. To reduce some of the confusion surrounding the flag encoding we centralize the cache mask based on the system cache policy while we're at it. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 | sh: Build PMB entry links for existing contiguous multi-page mappings. | Paul Mundt | 1 | -30/+29
This plugs in entry sizing support for existing mappings and then builds on top of that for linking together entries that are mapping contiguous areas. This will ultimately permit us to coalesce mappings and promote head pages while reclaiming PMB slots for dynamic remapping. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 | sh: PMB tidying. | Paul Mundt | 1 | -45/+38
Some overdue cleanup of the PMB code, killing off unused functionality and duplication sprinkled about the tree. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17 | sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB. | Paul Mundt | 1 | -1/+5
Both the store queue API and the PMB remapping take unsigned long for their pgprot flags, which cuts off the extended protection bits. In the case of the PMB this isn't really a problem since the cache attribute bits that we care about are all in the lower 32-bits, but we do it just to be safe. The store queue remapping on the other hand depends on the extended prot bits for enabling userspace access to the mappings. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
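The shape of the fix, sketched as prototypes with illustrative names: pass pgprot_t through instead of unsigned long, so the extended bits survive where pgprot is 64 bits wide:

    #include <asm/pgtable.h>

    /* before: a 64-bit pgprot is funnelled through unsigned long and
     * the extended protection bits are silently truncated */
    long demo_remap_old(unsigned long virt, unsigned long phys,
                        unsigned long size, unsigned long flags);

    /* after: the full pgprot, extended bits included, survives */
    long demo_remap_new(unsigned long virt, unsigned long phys,
                        unsigned long size, pgprot_t prot);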
2010-02-16 | sh: Merge the legacy PMB mapping and entry synchronization code. | Paul Mundt | 1 | -93/+69
This merges the code for iterating over the legacy PMB mappings and the code for synchronizing software state with the hardware mappings. There's really no reason to do the same iteration twice, and this also buys us the legacy entry logging facility for the dynamic PMB case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16 | sh: Prevent fixed slot PMB remapping from clobbering boot entries. | Paul Mundt | 1 | -1/+1
The PMB initialization code walks the entries and synchronizes the software PMB state with the hardware mappings, preserving the slot index. Unfortunately pmb_alloc() only tested the bit position in the entry map and failed to set it, so subsequent remaps could be dynamically assigned a slot that trampled an existing boot mapping, with general badness ensuing. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
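A sketch of the corrected allocation, assuming a hypothetical 16-entry map; the point is that the bit must be claimed, not merely inspected:

    #include <linux/bitops.h>
    #include <linux/errno.h>

    #define DEMO_NR_PMB_ENTRIES 16

    static unsigned long demo_pmb_map[BITS_TO_LONGS(DEMO_NR_PMB_ENTRIES)];

    static int demo_alloc_slot(void)
    {
        int pos;

        do {
            pos = find_first_zero_bit(demo_pmb_map, DEMO_NR_PMB_ENTRIES);
            if (pos >= DEMO_NR_PMB_ENTRIES)
                return -ENOSPC;
            /* test_and_set_bit() claims the slot and rechecks it in one
             * atomic step; a bare test_bit() left the map unchanged, so
             * a later remap could be handed the same slot. */
        } while (test_and_set_bit(pos, demo_pmb_map));

        return pos;
    }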
2010-01-26 | sh: Mass ctrl_in/outX to __raw_read/writeX conversion. | Paul Mundt | 1 | -12/+12
The old ctrl in/out routines are non-portable and unsuitable for cross-platform use. While drivers/sh has already been sanitized, there is still quite a lot of code that is not. This converts the arch/sh/ bits over, which permits us to flag the routines as deprecated whilst still building with -Werror for the architecture code, and to ensure that future users are not added. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
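The conversion is mechanical; an illustrative before/after with a made-up register address:

    #include <linux/io.h>

    #define DEMO_REG ((void __iomem *)0xff000010)   /* made-up address */

    static void demo_convert(void)
    {
        unsigned long v;

        /* old, sh-specific and deprecated:
         *      v = ctrl_inl(0xff000010);
         *      ctrl_outl(v, 0xff000010);
         * new, portable raw MMIO accessors: */
        v = __raw_readl(DEMO_REG);
        __raw_writel(v, DEMO_REG);
    }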
2010-01-21 | sh: Kill off the special uncached section and fixmap. | Paul Mundt | 1 | -3/+3
Now that cached_to_uncached works as advertised in 32-bit mode and we're never going to be able to map < 16MB anyway, there's no need for the special uncached section. Kill it off. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20 | sh: Make 29/32-bit mode check helper generally available. | Paul Mundt | 1 | -0/+5
Presently __in_29bit_mode() is only defined for the PMB case, but it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT && CONFIG_PMB=n cases. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18 | sh: Setup early PMB mappings. | Matt Fleming | 1 | -51/+105
More and more boards are going to start shipping that boot with the MMU in 32BIT mode by default. Previously we relied on the bootloader to set up PMB mappings for use by the kernel but we also need to cater for boards whose bootloaders don't set them up. If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB mappings and can compress our address space. Usually the distance between the cached and uncached mappings of RAM is 512MB; however, we can compress the distance to be the amount of RAM on the board. pmb_init() now becomes much simpler. It no longer has to calculate any mappings, it just has to synchronise the software PMB table with the hardware. Tested on SDK7786 and SH7785LCR. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-13 | sh: fixed PMB mode refactoring. | Paul Mundt | 1 | -45/+61
This introduces some much overdue chainsawing of the fixed PMB support. Fixed PMB was introduced initially to work around the fact that dynamic PMB mode was relatively broken, though the two were never intended to converge. The main areas where there are differences are whether the system is booted in 29-bit mode or 32-bit mode, and whether legacy mappings are to be preserved. Any system booting in true 32-bit mode will not care about legacy mappings, so these are roughly decoupled. Regardless of the entry point, PMB and 32BIT are directly related as far as the kernel is concerned, so we also switch back to having one select the other. With legacy mappings iterated through and applied in the initialization path it's now possible to finally merge the two implementations and permit dynamic remapping on top of remaining entries regardless of whether boot mappings are crafted by hand or inherited from the boot loader. Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 | sh: Fold fixed-PMB support into dynamic PMB support | Matt Fleming | 1 | -5/+60
The initialisation process differs for CONFIG_PMB and for CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB entries that were allocated by the bootloader. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 | sh: Fix the offset from P1SEG/P2SEG where we map RAM | Matt Fleming | 1 | -6/+7
We need to map the gap between 0x00000000 and __MEMORY_START in the PMB, as well as RAM. With this change my 7785LCR board can switch to 32bit MMU mode at runtime. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 | sh: Remap physical memory into P1 and P2 in pmb_init() | Matt Fleming | 1 | -39/+15
Eventually we'll have complete control over what physical memory gets mapped where and we can probably do other interesting things. For now though, when the MMU is in 32-bit mode, we map physical memory into the P1 and P2 virtual address ranges with the same semantics as they have in 29-bit mode. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 | sh: Get rid of the kmem cache code | Matt Fleming | 1 | -55/+26
Unfortunately, at the point during boot when we want to set up the PMB entries, the kmem subsystem hasn't been initialised. We now match pmb_map slots with pmb_entry_list slots. When we find an empty slot in pmb_map, we set the bit, thereby acquiring the corresponding pmb_entry_list entry. There is a benefit in using this static array of struct pmb_entry's: we don't need to acquire any locks in order to traverse the list of struct pmb_entry's. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
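The resulting layout, sketched with illustrative names and a hypothetical entry count:

    #include <linux/types.h>

    #define DEMO_NR_PMB_ENTRIES 16

    struct demo_pmb_entry {
        unsigned long vpn, ppn, flags;
    };

    /* Bit n of the map claims entry_list[n]: no kmalloc() needed at
     * early boot, and the array can be walked without any list lock. */
    static struct demo_pmb_entry demo_entry_list[DEMO_NR_PMB_ENTRIES];
    static DECLARE_BITMAP(demo_pmb_map, DEMO_NR_PMB_ENTRIES);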
2009-10-10 | sh: Make most PMB functions static | Matt Fleming | 1 | -9/+8
There's no need to export the internal PMB functions for allocating, freeing and modifying PMB entries, etc. This way we can restrict the interface for PMB. Also remove the static from pmb_init() so that we have more freedom in setting up the initial PMB entries and turning on MMU 32bit mode. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-10 | sh: Allocate PMB entry slot earlier | Matt Fleming | 1 | -41/+39
Simplify set_pmb_entry() by removing the possibility of not finding a free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so that if there are no free slots we fail at allocation time, rather than in set_pmb_entry(). Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2009-10-09 | sh: Don't allocate smaller sized mappings on every iteration | Matt Fleming | 1 | -0/+7
Currently, we've got the less than ideal situation where if we need to allocate a 256MB mapping we'll allocate four entries like so,

    entry 1: 128MB
    entry 2:  64MB
    entry 3:  16MB
    entry 4:  16MB

This is because as we execute the loop in pmb_remap() we will progressively try mapping the remaining address space with smaller and smaller sizes. This isn't good because the size we use on one iteration may be the perfect size to use on the next iteration, for instance when the initial size is divisible by one of the PMB mapping sizes.

With this patch, we now only need two entries in the PMB to map 256MB of address space,

    entry 1: 128MB
    entry 2: 128MB

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
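A standalone sketch of the fixed loop logic, with made-up names: the size index only advances when the current size no longer fits.

    #include <linux/kernel.h>
    #include <linux/errno.h>
    #include <linux/sizes.h>

    static const unsigned long demo_sizes[] = {
        SZ_512M, SZ_128M, SZ_64M, SZ_16M,   /* largest first */
    };

    /* Maps 256MB as 128MB + 128MB rather than 128 + 64 + 16 + 16. */
    static int demo_map(unsigned long size)
    {
        int i = 0, entries = 0;

        while (size) {
            while (i < ARRAY_SIZE(demo_sizes) && demo_sizes[i] > size)
                i++;                    /* step down only when we must */
            if (i == ARRAY_SIZE(demo_sizes))
                return -EINVAL;         /* remainder below 16MB */

            size -= demo_sizes[i];      /* note: i is not incremented */
            entries++;
        }

        return entries;
    }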
2009-10-09 | sh: Plug PMB alloc memory leak | Matt Fleming | 1 | -6/+24
If we fail to allocate a PMB entry in pmb_remap() we must remember to clear and free any PMB entries that we may have previously allocated, e.g. if we were allocating a multiple entry mapping. Signed-off-by: Matt Fleming <matt@console-pimps.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
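A sketch of the unwind pattern described here; the entry type and helpers are hypothetical stand-ins for the PMB allocate/teardown machinery:

    #include <linux/errno.h>
    #include <linux/slab.h>

    struct demo_entry {
        struct demo_entry *link;
    };

    /* hypothetical stand-ins for the real allocate/teardown helpers */
    static struct demo_entry *demo_alloc(void)
    {
        return kzalloc(sizeof(struct demo_entry), GFP_KERNEL);
    }

    static void demo_clear_and_free(struct demo_entry *e)
    {
        /* a real PMB teardown would also disable the hardware entry */
        kfree(e);
    }

    static int demo_remap(int nr_entries)
    {
        struct demo_entry *head = NULL, *e;
        int i;

        for (i = 0; i < nr_entries; i++) {
            e = demo_alloc();
            if (!e)
                goto out_unwind;    /* release what we already hold */
            e->link = head;
            head = e;
        }
        return 0;

    out_unwind:
        while (head) {
            e = head->link;
            demo_clear_and_free(head);
            head = e;
        }
        return -ENOMEM;
    }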
2009-03-16 | sh: PMB hibernation support | Francesco VIRLINZI | 1 | -0/+38
This implements preliminary suspend/resume support for the PMB. Signed-off-by: Francesco Virlinzi <francesco.virlinzi@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-10-20 | Fix debugfs_create_file's error checking method for arch/sh/mm/ | Zhaolei | 1 | -0/+2
debugfs_create_file() returns NULL if an error occurs, and returns ERR_PTR(-ENODEV) when debugfs is not enabled in the kernel, so callers must check for both. Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
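A sketch of the resulting check, using the debugfs API as it behaved at the time (either return is possible); the fops are a hypothetical placeholder:

    #include <linux/debugfs.h>
    #include <linux/err.h>
    #include <linux/errno.h>
    #include <linux/stat.h>

    static const struct file_operations demo_fops;  /* hypothetical fops */

    static int demo_debugfs_init(void)
    {
        struct dentry *d;

        d = debugfs_create_file("pmb", S_IFREG | S_IRUGO, NULL,
                                NULL, &demo_fops);
        if (!d)
            return -ENOMEM;     /* creation failed */
        if (IS_ERR(d))
            return PTR_ERR(d);  /* e.g. -ENODEV: debugfs compiled out */

        return 0;
    }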
2008-07-28 | sh: fix seq_file memory leak | Li Zefan | 1 | -1/+1
When using single_open(), single_release() should be used instead of seq_release(), otherwise there is a memory leak. Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
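A sketch of a single_open() user with the matching release hook; everything except the seq_file APIs is made up:

    #include <linux/fs.h>
    #include <linux/seq_file.h>

    static int demo_show(struct seq_file *m, void *unused)
    {
        seq_puts(m, "example\n");
        return 0;
    }

    static int demo_open(struct inode *inode, struct file *file)
    {
        return single_open(file, demo_show, NULL);
    }

    static const struct file_operations demo_fops = {
        .open    = demo_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        /* single_open() allocates the seq_file; only single_release()
         * frees it, so pairing with plain seq_release() leaks. */
        .release = single_release,
    };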
2008-07-26 | SL*B: drop kmem cache argument from constructor | Alexey Dobriyan | 1 | -1/+1
The kmem cache passed to a constructor is only needed for constructors that are themselves multiplexers. Nobody uses this "feature", nor does anybody use the passed kmem cache in a non-trivial way, so pass only a pointer to the object. Non-trivial places are: arch/powerpc/mm/init_64.c and arch/powerpc/mm/hugetlbpage.c. This is flag day, yes. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Acked-by: Pekka Enberg <penberg@cs.helsinki.fi> Acked-by: Christoph Lameter <cl@linux-foundation.org> Cc: Jon Tollefson <kniht@linux.vnet.ibm.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Cc: Matt Mackall <mpm@selenic.com> [akpm@linux-foundation.org: fix arch/powerpc/mm/hugetlbpage.c] [akpm@linux-foundation.org: fix mm/slab.c] [akpm@linux-foundation.org: fix ubifs] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2008-04-18 | sh: Create an sh debugfs root. | Paul Mundt | 1 | -1/+1
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 | sh: Preparation for uncached jumps through PMB. | Stuart Menefy | 1 | -9/+9
Presently most of the 29-bit physical parts do P1/P2 segmentation with a 1:1 cached/uncached mapping, jumping between the two to control the caching behaviour. This provides the basic infrastructure to maintain this behaviour on 32-bit physical parts that don't map P1/P2 at all, using a shiny new linker section and corresponding fixmap entry. Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 | sh: Fix compile error of arch/sh/mm/pmb.c | Nobuhiro Iwamatsu | 1 | -1/+1
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2008-01-28 | sh: Invalidate the TLB after applying PMB mappings. | Stuart Menefy | 1 | -0/+6
Signed-off-by: Stuart Menefy <stuart.menefy@st.com> Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2007-10-17 | Slab API: remove useless ctor parameter and reorder parameters | Christoph Lameter | 1 | -2/+1
Slab constructors currently have a flags parameter that is never used, and the order of the arguments is opposite to other slab functions: the object pointer is placed before the kmem_cache pointer. Convert

    ctor(void *object, struct kmem_cache *s, unsigned long flags)

to

    ctor(struct kmem_cache *s, void *object)

throughout the kernel. [akpm@linux-foundation.org: coupla fixes] Signed-off-by: Christoph Lameter <clameter@sgi.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-09-21 | sh: Support explicit L1 cache disabling. | Paul Mundt | 1 | -1/+1
This reworks the cache mode configuration in Kconfig, and allows for explicit selection of write-back/write-through/off configurations. All of the cache flushing routines are optimized away for the off case. Signed-off-by: Paul Mundt <lethal@linux-sh.org>