path: root/arch/arm/mm
Age | Commit message | Author | Files | Lines
2010-04-09 | ARM: Fix ioremap_cached()/ioremap_wc() for SMP platforms | Russell King | 1 | -0/+4
Write combining/cached device mappings are not setting the shared bit, which could potentially cause problems on SMP systems since the cache lines won't participate in the cache coherency protocol. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
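As a rough illustration of the kind of change described (not the literal patch), the write-combining and cached device memory types would gain the shared attribute when building for SMP, somewhere in the mem_types[] setup in arch/arm/mm/mmu.c:

    #ifdef CONFIG_SMP
            /*
             * Illustrative sketch: mark WC/cached device mappings shared so
             * their cache lines take part in the coherency protocol.
             */
            mem_types[MT_DEVICE_WC].prot_pte     |= L_PTE_SHARED;
            mem_types[MT_DEVICE_WC].prot_sect    |= PMD_SECT_S;
            mem_types[MT_DEVICE_CACHED].prot_pte  |= L_PTE_SHARED;
            mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_S;
    #endif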
2010-04-05 | Merge branch 'master' into export-slabh | Tejun Heo | 2 | -0/+23
2010-03-30 | include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h | Tejun Heo | 4 | -1/+4
percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of the conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from the tests in step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. Signed-off-by: Tejun Heo <tj@kernel.org> Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
2010-03-25 | ARM: 5996/1: ARM: Change the mandatory barriers implementation (4/4) | Catalin Marinas | 1 | -0/+6
The mandatory barriers (mb, rmb, wmb) are used even on uniprocessor systems for things like ordering Normal Non-cacheable memory accesses with DMA transfer (via Device memory writes). The current implementation uses dmb() for mb() and friends but this is not sufficient. The DMB only ensures the relative ordering of the observability of accesses by other processors or devices acting as masters. In case of DMA transfers started by writes to device memory, the relative ordering is not ensured because accesses to slave ports of a device are not considered observable by the DMB definition. A DSB is required for the data to reach the main memory (even if mapped as Normal Non-cacheable) before the device receives the notification to begin the transfer. Furthermore, some L2 cache controllers (like L2x0 or PL310) buffer stores to Normal Non-cacheable memory and this would need to be drained with the outer_sync() function call. The patch also allows platforms to define their own mandatory barriers implementation by selecting CONFIG_ARCH_HAS_BARRIERS and providing a mach/barriers.h file. Note that the SMP barriers are unchanged (being DMBs as before) since they are only guaranteed to work with Normal Cacheable memory. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
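A minimal sketch of the mapping described above, assuming the default case where CONFIG_ARCH_HAS_BARRIERS is not set (the real macros live in the ARM system/barrier headers and carry more conditionals):

    /* Illustrative only -- not the exact kernel macros */
    #define mb()            do { dsb(); outer_sync(); } while (0)
    #define wmb()           mb()

    /* SMP barriers stay DMB-based: they only cover Normal Cacheable memory */
    #define smp_mb()        dmb()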
2010-03-25 | ARM: 5995/1: ARM: Add L2x0 outer_sync() support (3/4) | Catalin Marinas | 2 | -0/+11
The L2x0 cache controllers need to explicitly drain their write buffer even for Normal Noncacheable memory accesses. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
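A sketch of what such a sync operation looks like on an L2x0/PL310, assuming the l2x0_base/l2x0_lock statics used elsewhere in arch/arm/mm/cache-l2x0.c; a write to the Cache Sync register drains the controller's write buffer and is then polled for completion:

    static void l2x0_cache_sync(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&l2x0_lock, flags);
            /* kick off a sync: drains the controller's write buffer */
            writel(0, l2x0_base + L2X0_CACHE_SYNC);
            /* wait for the operation to complete */
            while (readl(l2x0_base + L2X0_CACHE_SYNC) & 1)
                    cpu_relax();
            spin_unlock_irqrestore(&l2x0_lock, flags);
    }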
2010-03-25 | ARM: 5994/1: ARM: Add outer_cache_fns.sync function pointer (2/4) | Catalin Marinas | 1 | -0/+6
This patch introduces the outer_cache_fns.sync function pointer together with the OUTER_CACHE_SYNC config option that can be used to drain the write buffer of the outer cache. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
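In outline, the change amounts to something like the following header-side sketch (field layout and config guards may differ from the actual patch):

    struct outer_cache_fns {
            void (*inv_range)(unsigned long start, unsigned long end);
            void (*clean_range)(unsigned long start, unsigned long end);
            void (*flush_range)(unsigned long start, unsigned long end);
    #ifdef CONFIG_OUTER_CACHE_SYNC
            void (*sync)(void);
    #endif
    };

    extern struct outer_cache_fns outer_cache;

    /* Drain the outer cache's write buffer, if the controller provides a hook */
    static inline void outer_sync(void)
    {
            if (outer_cache.sync)
                    outer_cache.sync();
    }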
2010-03-01 | Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm | Linus Torvalds | 41 | -361/+1143
* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visibly.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
2010-02-25 | Merge branches 'clks' and 'pnx' into devel | Russell King | 3 | -5/+10
2010-02-25 | Merge branch 'misc2' into devel | Russell King | 9 | -93/+343
2010-02-25 | Merge branch 'perf' into devel | Russell King | 1 | -0/+7
Conflicts: arch/arm/Kconfig
2010-02-25 | Merge branches 'at91', 'cache', 'cup', 'ep93xx', 'ixp4xx', 'nuc', 'pending-dma-streaming', 'u300' and 'umc' into devel | Russell King | 36 | -276/+818
2010-02-24 | ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig | Kukjin Kim | 1 | -1/+1
Add ARM_L1_CACHE_SHIFT_6 to arch/arm/Kconfig to allow CPUs with 64-byte L1 cache lines to indicate this without having to alter the arch/arm/mm/Kconfig entry each time. Update the mm Kconfig so that the ARM_L1_CACHE_SHIFT default value uses this, and change OMAP3 and S5PC1XX to select ARM_L1_CACHE_SHIFT_6. Acked-by: Ben Dooks <ben-linux@fluff.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Kukjin Kim <kgene.kim@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
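The net effect on the C side is small; something along these lines in asm/cache.h (illustrative), with the Kconfig symbol supplying the shift:

    /* Illustrative: the shift now comes from Kconfig rather than per-arch edits */
    #define L1_CACHE_SHIFT  CONFIG_ARM_L1_CACHE_SHIFT   /* 6 on 64-byte-line CPUs */
    #define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)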
2010-02-20 | ARM: make_coherent(): fix problems with highpte, part 2 | Russell King | 1 | -3/+4
update_mmu_cache() is called with the page table for the faulted-in page still mapped. We need to modify the PTE for this page to ensure coherency with other shared mappings when multiple shared mappings exist within a MM. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 | MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself | Russell King | 1 | -2/+3
On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable.

This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables to construct a pointer to the pte again. Passing a pte_t * is much more elegant. Maybe we might even replace the pte argument with the pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want. I want that -instead- of the PTE value, because I have issue on some ppc cases, for I$/D$ coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit. Includes a fix from Stephen Rothwell: sparc: fix fallout from update_mmu_cache API change Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
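The interface change itself is a one-argument swap; schematically:

    /* Before: implementations were handed the PTE value:
     *   void update_mmu_cache(struct vm_area_struct *vma,
     *                         unsigned long addr, pte_t pte);
     */

    /* After: implementations receive a pointer to the already-mapped PTE */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);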
2010-02-20 | ARM: allow alignment fault mode to be configured at kernel boot | Russell King | 1 | -0/+3
Some glibc versions intentionally create lots of alignment faults in their gconv code, which if not fixed up, results in segfaults during boot. This can prevent systems booting properly. There is no clear hard-configurable default for this; the desired default depends on the nature of the userspace which is going to be booted. So, provide a way for the alignment fault handler to be configured via the kernel command line. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
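A sketch of how such a command-line hook can be wired up, assuming the existing ai_usermode flag in arch/arm/mm/alignment.c (the actual parameter name and accepted values may differ):

    /* Hypothetical "alignment=" parameter: 0 = ignore, 1 = warn, 2 = fixup, ... */
    static int __init alignment_setup(char *str)
    {
            get_option(&str, &ai_usermode);
            return 1;
    }
    __setup("alignment=", alignment_setup);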
2010-02-15 | ARM: 5929/1: Add checks to detect overlap of memory regions. | Fenkart/Bostandzhyan | 1 | -0/+17
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5928/1: Change type of VMALLOC_END to unsigned long. | Fenkart/Bostandzhyan | 1 | -1/+1
Makes it consistent with VMALLOC_START Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5927/1: Make delimiters of DMA area globally visibly. | Fenkart/Bostandzhyan | 2 | -3/+6
Adds DMA area to 'virtual memory map' startup message Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5926/1: Add "Virtual kernel memory..." printout. | Fenkart/Bostandzhyan | 1 | -9/+69
Code based on parisc and x86_32. Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5919/1: ARM: L2 : Errata 588369: Clean & Invalidate do not invalidate clean lines | Santosh Shilimkar | 1 | -0/+36
This patch implements the workaround for errata 588369. The secure API is used to alter the L2 debug register because of TrustZone. This version is updated with comments from Russell and Catalin and generated against the 2.6.33-rc6 mainline kernel. Detailed comments can be found at: http://www.spinics.net/lists/linux-omap/msg23431.html Signed-off-by: Woodruff Richard <r-woodruff2@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
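The shape of the per-line workaround is roughly the following; debug_writel() stands in for however the platform reaches the L2 debug register (on OMAP that is the secure API mentioned above):

    /* Sketch: clean+invalidate by PA with write-back/line-fill disabled */
    static inline void l2x0_flush_line(unsigned long addr)
    {
            void __iomem *base = l2x0_base;

            debug_writel(0x03);                        /* disable WB / line fills */
            writel(addr, base + L2X0_CLEAN_LINE_PA);   /* clean the line */
            writel(addr, base + L2X0_INV_LINE_PA);     /* then invalidate it */
            debug_writel(0x00);                        /* restore normal behaviour */
    }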
2010-02-15 | ARM: 5917/1: OMAP4: Add L2 Cache support | Santosh Shilimkar | 1 | -1/+1
This patch adds L2 cache support for OMAP4. An external L2 cache is used on OMAP4. CC: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5916/1: ARM: L2 : Add maintainace by line helper functions | Santosh Shilimkar | 1 | -10/+26
This patch adds the cache maintenance by line helper functions. Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5911/1: ARM: Select CPU_32v6K for CPU_V7 only if ARCH_OMAP2 is not selected | Tony Lindgren | 1 | -2/+2
Otherwise the kernel built with both CPU_V6 and CPU_V7 will not boot on omap2. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5905/1: ARM: Global ASID allocation on SMP | Catalin Marinas | 1 | -14/+110
The current ASID allocation algorithm doesn't ensure the notification of the other CPUs when the ASID rolls over. This may lead to two processes using the same ASID (but different generation) or multiple threads of the same process using different ASIDs. This patch adds the broadcasting of the ASID rollover event to the other CPUs. To avoid a race on multiple CPUs modifying "cpu_last_asid" during the handling of the broadcast, the ASID numbering now starts at "smp_processor_id() + 1". At rollover, the cpu_last_asid will be set to NR_CPUS. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: 5880/1: arm: use generic infrastructure for early params | Jeremy Kerr | 2 | -25/+28
The ARM setup code includes its own parser for early params; there's also one in the generic init code. This patch removes __early_param (and related code) from arch/arm/kernel/setup.c, and changes users to the generic early_param macro instead. The generic macro takes a char * argument, rather than char **, so we need to update the parser functions a little. Signed-off-by: Jeremy Kerr <jeremy.kerr@canonical.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
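For a hypothetical parameter "foo", the converted form looks roughly like this; the generic early_param() handler takes a char * and returns an int status:

    /* After the conversion: generic early_param(), parser takes char * */
    static unsigned long foo_size;              /* hypothetical example setting */

    static int __init early_foo(char *p)
    {
            foo_size = memparse(p, &p);
            return 0;
    }
    early_param("foo", early_foo);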
2010-02-15 | ARM: Move creation of /proc/cpu out of alignment.c | Russell King | 1 | -5/+1
Always creating this directory avoids other users having to jump through silly hoops when they want to share this directory. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-15 | ARM: Add caller information to ioremap | Russell King | 2 | -23/+46
This allows the procfs vmallocinfo file to show who created the ioremap regions. Note: __builtin_return_address(0) doesn't do what's expected if it's used in an inline function, so we leave __arm_ioremap callers in such places alone. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
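Schematically, the change threads a caller cookie down to the mapping code (a sketch; the exact prototypes in asm/io.h may differ slightly):

    void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size,
                                       unsigned int mtype, void *caller);

    /* Non-inline wrappers record their own caller for /proc/vmallocinfo */
    void __iomem *__arm_ioremap(unsigned long phys_addr, size_t size,
                                unsigned int mtype)
    {
            return __arm_ioremap_caller(phys_addr, size, mtype,
                                        __builtin_return_address(0));
    }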
2010-02-15 | ARM: dma-mapping: fix for speculative prefetching | Russell King | 3 | -46/+42
ARMv6 and ARMv7 CPUs can perform speculative prefetching, which makes DMA cache coherency handling slightly more interesting. Rather than being able to rely upon the CPU not accessing the DMA buffer until DMA has completed, we now must expect that the cache could be loaded with possibly stale data from the DMA buffer. Where DMA involves data being transferred to the device, we clean the cache before handing it over for DMA, otherwise we invalidate the buffer to get rid of potential writebacks. On DMA Completion, if data was transferred from the device, we invalidate the buffer to get rid of any stale speculative prefetches. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
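The policy reads more clearly as code; this is a sketch only, with dma_cache_clean()/dma_cache_inv() as stand-ins for the real per-CPU cache helpers:

    /* CPU hands the buffer to the device */
    static void sketch_cpu_to_dev(void *kaddr, size_t size,
                                  enum dma_data_direction dir)
    {
            if (dir == DMA_FROM_DEVICE)
                    dma_cache_inv(kaddr, size);     /* avoid dirty-line writebacks */
            else
                    dma_cache_clean(kaddr, size);   /* push CPU writes to memory */
    }

    /* Device hands the buffer back to the CPU */
    static void sketch_dev_to_cpu(void *kaddr, size_t size,
                                  enum dma_data_direction dir)
    {
            if (dir != DMA_TO_DEVICE)
                    dma_cache_inv(kaddr, size);     /* drop speculative prefetches */
    }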
2010-02-15 | ARM: dma-mapping: remove dmac_clean_range and dmac_inv_range | Russell King | 21 | -148/+41
These are now unused, and so can be removed. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 | ARM: dma-mapping: provide per-cpu type map/unmap functions | Russell King | 22 | -17/+584
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 | ARM: dma-mapping: simplify dma_cache_maint_page | Russell King | 1 | -24/+18
dma_cache_maint_contiguous is now simple enough to live inside dma_cache_maint_page, so move it there. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 | ARM: dma-mapping: move selection of page ops out of dma_cache_maint_contiguous | Russell King | 1 | -29/+30
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 | ARM: dma-mapping: push buffer ownership down into dma-mapping.c | Russell King | 1 | -4/+30
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
2010-02-15 | ARM: dma-mapping: introduce the idea of buffer ownership | Russell King | 1 | -5/+8
The DMA API has the notion of buffer ownership; make it explicit in the ARM implementation of this API. This gives us a set of hooks to allow us to deal with CPU cache issues arising from non-cache coherent DMA. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-By: Jamie Iles <jamie@jamieiles.com>
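From a driver's point of view, ownership follows the standard DMA API calls; assuming a struct device *dev and a kernel buffer buf of len bytes:

    dma_addr_t handle;

    handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
    /* ... device owns the buffer while the transfer is in flight ... */

    dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
    /* CPU owns the buffer: it may now read the received data */

    dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
    /* ownership is back with the device */

    dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);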
2010-02-12 | ARM: 5900/2: arm: enable support for software perf events | Jamie Iles | 1 | -0/+7
The perf events subsystem allows counting of both hardware and software events. This patch implements the bare minimum for software performance events. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Jamie Iles <jamie.iles@picochip.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
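The arch glue for the software-only case is tiny; a sketch of what it presumably amounts to (alongside selecting HAVE_PERF_EVENTS in Kconfig):

    /* arch/arm/include/asm/perf_event.h -- illustrative sketch */
    #ifndef __ARM_PERF_EVENT_H__
    #define __ARM_PERF_EVENT_H__

    /* No hardware counters are wired up yet, so there is nothing to kick */
    static inline void set_perf_event_pending(void)
    {
    }

    #endif /* __ARM_PERF_EVENT_H__ */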
2010-02-03 | ARM: Fix wrong register in proc-arm6_7.S data abort handler | Russell King | 1 | -1/+1
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 | ARM: make_coherent: avoid recalculating the pfn for the modified page | Russell King | 1 | -6/+6
We already know the pfn for the page to be modified in make_coherent, so let's stop recalculating it unnecessarily. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 | ARM: make_coherent: fix problems with highpte, part 1 | Russell King | 1 | -2/+11
update_mmu_cache() is called with a page table already mapped. We call make_coherent(), which then calls adjust_pte() which wants to map other page tables. This causes kmap_atomic() to BUG() because the slot it's trying to use is already taken. Since do_adjust_pte() modifies the page tables, we are also missing any form of locking, so we're risking corrupting the page tables. Fix this by using pte_offset_map_nested(), and taking the pte page table lock around do_adjust_pte(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
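Inside adjust_pte(), the resulting pattern is roughly:

    /* Sketch of the mapping + locking pattern described above */
    pte_t *pte;
    spinlock_t *ptl;
    int ret;

    pte = pte_offset_map_nested(pmd, address);  /* second kmap_atomic slot */
    ptl = pte_lockptr(vma->vm_mm, pmd);
    spin_lock(ptl);
    ret = do_adjust_pte(vma, address, pte);
    spin_unlock(ptl);
    pte_unmap_nested(pte);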
2010-01-20 | ARM: make_coherent: convert adjust_pte() to use p*d_none_or_clear_bad() | Russell King | 1 | -20/+4
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-20 | ARM: make_coherent: split adjust_pte() in two | Russell King | 1 | -20/+32
adjust_pte() walks the page tables, and do_adjust_pte() does the page table manipulation. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 | ARM: 5888/1: arm: Update comments in cacheflush.h and remove unnecessary V6 and V7 comments | Tony Lindgren | 2 | -4/+0
The comments in cacheflush.h should follow what's in struct cpu_cache_fns. The comments for V6 and V7 are unnecessary. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 | ARM: 5886/1: arm: Fix cpu_proc_fin() for proc-v7.S and make kexec work | Tony Lindgren | 1 | -1/+8
The comments in arm_machine_restart() suggest that cpu_proc_fin() will clean and disable cache and turn off interrupts. This does not seem to be implemented for proc-v7.S, so implement it the same way as for proc-v6.S. This also makes kexec work for v7. Note that a related TLB and branch target flush patch is also needed to avoid a kexec "crc error". Note that there are still some issues that seem to be related to the L2 cache being on, causing an occasional uncompress "crc error" with kexec. Anyway, this gets kexec mostly working on V7 for now. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-19 | ARM: 5885/1: arm: Flush TLB entries in setup_mm_for_reboot() | Tony Lindgren | 1 | -0/+2
We need to do that if we tinker with the MMU entries. This fixes the occasional bug with kexec where the new kernel fails to uncompress with a "crc error". Most likely at least kexec on v6 and v7 needs this fix. Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-12 | Merge master.kernel.org:/home/rmk/linux-2.6-arm | Linus Torvalds | 3 | -6/+24
* master.kernel.org:/home/rmk/linux-2.6-arm:
  ARM: Ensure ARMv6/7 mm files are built using appropriate assembler options
  ARM: Fix wrong dmb
  ARM: 5874/1: serial21285: fix disable_irq-from-interrupt-handler deadlock
  ARM: 5873/1: ARM: Fix the reset logic for ARM RealView boards
  ARM: 5872/1: ARM: include needed linux/cpu.h in asm/cpu.h
  ARM: 5871/1: arch/arm: Fix build failure for lpd7a404_defconfig caused by missing includes
  ARM: 5870/1: arch/arm: Fix build failure for defconfigs without CONFIG_ISA_DMA_API set
  ARM: 5868/1: ARM: fix "BUG: using smp_processor_id() in preemptible code"
  ARM: 5867/1: Update U300 defconfig
  ARM: 5866/1: arm ptrace: use unsigned types for kernel pt_regs
  [ARM] pxa: fix strange characters in zaurus gpio .desc
  ARM: add missing recvmmsg syscall number
  [ARM] pxa: fix compiler warnings of unused variable 'id' in cpu_is_pxa9*()
  [ARM] pxa: update pwm_backlight->notify() to include missed 'struct device *'
  [ARM] pxa: enable L2 if present in XSC3
  [ARM] pxa: do not enable L2 after MMU is enabled
2010-01-12 | ARM: Ensure ARMv6/7 mm files are built using appropriate assembler options | Russell King | 1 | -0/+12
A kernel with both ARMv6 and ARMv7 selected results in build errors. Fix this by specifying the proper architectures for these assembly files. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-11 | mm: make totalhigh_pages unsigned long | Andreas Fenkart | 1 | -1/+1
Makes it consistent with the extern declaration, used when CONFIG_HIGHMEM is set. Removes redundant casts in printout messages. Signed-off-by: Andreas Fenkart <andreas.fenkart@streamunlimited.com> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Howells <dhowells@redhat.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Chen Liqin <liqin.chen@sunplusct.com> Cc: Lennox Wu <lennox.wu@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2010-01-08 | Merge branch 'fix' of git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6 | Russell King | 2 | -6/+12
2010-01-05 | ARM: 5858/1: Remove unused vma_vm_flags macro from v7wbi_flush_user_tlb_range | Bahadir Balban | 1 | -1/+0
Signed-off-by: Bahadir Balban <bbalban@b-labs.co.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-01-01 | [ARM] pxa: enable L2 if present in XSC3 | Haojian Zhuang | 1 | -0/+7
Check whether L2 is present in XSC3. If it is, enable L2 immediately. Disabling L2 after it has been enabled would result in unpredictable behavior of the XSC3 processor. Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
2010-01-01 | [ARM] pxa: do not enable L2 after MMU is enabled | Haojian Zhuang | 1 | -6/+5
The outer cache code checked whether L2 was enabled and, if it wasn't, enabled it. On XSC3 this would make the system hang. The XSC3 core documentation states: "Following reset, the L2 Unified Cache Enable bit is cleared. To enable the L2 Cache, software may set the bit to a '1' before or at the same time as enabling the MMU. Enabling the L2 Cache after the MMU has been enabled or disabling the L2 Cache after the L2 Cache has been enabled, may result in unpredictable behavior of the processor." By the time the outer cache is initialized, the MMU is already enabled, so we cannot enable L2 at that point. Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>