Age | Commit message | Author | Files | Lines
2014-04-16arm64: mm: fix the function name in comment of cpu_do_switch_mmJingoo Han1-1/+1
Fix the function name in the comment of cpu_do_switch_mm; cpu_do_switch_mm is the correct name. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: fix build error if DMA_CMA is enabledPankaj Dubey1-1/+0
arm64/include/asm/dma-contiguous.h is trying to include <asm-genric/dma-contiguous.h>, which does not exist, and thus fails the build for arm64 when CONFIG_DMA_CMA is enabled. This patch fixes the build error by removing the unwanted header inclusion from arm64's dma-contiguous.h. Signed-off-by: Pankaj Dubey <pankaj.dubey@samsung.com> Signed-off-by: Somraj Mani <somraj.mani@samsung.com> Acked-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: kernel: fix per-cpu offset restore on resumeLorenzo Pieralisi1-0/+8
The introduction of the percpu offset optimisation through tpidr_el1 in commit 7158627686f02319c50c8d9d78f75d4c8 ("arm64: percpu: implement optimised pcpu access using tpidr_el1") requires cpu_{suspend,resume} to restore the tpidr_el1 register upon resume so that percpu variables can be addressed correctly when a CPU comes out of reset from warm-boot. This patch fixes cpu_{suspend,resume} tpidr_el1 restoration on resume by calling the set_my_cpu_offset C API, as is done on primary and secondary CPUs on cold boot, so that, even if the register used to store the percpu offset is changed, the save and restore of general purpose registers does not have to be updated. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
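For illustration, a minimal sketch of the idea behind the fix, assuming the arm64 helpers per_cpu_offset() and smp_processor_id(); the function names below are hypothetical and not the literal patch:

    #include <linux/percpu.h>
    #include <linux/smp.h>

    /* Sketch: write the per-cpu offset back into tpidr_el1 so that percpu
     * accesses work as soon as a CPU returns from a warm reset. */
    static inline void set_my_cpu_offset_sketch(unsigned long off)
    {
            asm volatile("msr tpidr_el1, %0" :: "r" (off));
    }

    /* Hypothetical resume-path helper mirroring what cold boot does per CPU. */
    static void restore_percpu_offset(void)
    {
            set_my_cpu_offset_sketch(per_cpu_offset(smp_processor_id()));
    }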
2014-04-16arm64: mm: fix the function name in comment of __flush_dcache_areaJingoo Han1-1/+1
Fix the function name in the comment of __flush_dcache_area; __flush_dcache_area is the correct name. Also, the missing 'size' parameter is added to the comment. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: mm: use ubfm for dcache_line_sizeJingoo Han1-2/+1
Use 'ubfm' for the bitfield move instruction; thus a single instruction can be used instead of two when getting the minimum D-cache line size from the CTR_EL0 register. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm/arm64: kvm: Use virt_to_idmap instead of virt_to_phys for idmap mappingsSantosh Shilimkar3-4/+6
KVM initialisation fails on architectures implementing virt_to_idmap() because virt_to_phys() on such architectures won't fetch you the correct idmap page. So update the KVM ARM code to use the virt_to_idmap() to fix the issue. Since the KVM code is shared between arm and arm64, we create kvm_virt_to_phys() and handle the redirection in respective headers. Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
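A hedged sketch of what such a per-architecture redirection might look like in the respective headers (the exact definitions are assumptions for illustration, not copied from the patch):

    /* arch/arm/include/asm/kvm_mmu.h (sketch): idmap-aware translation */
    #define kvm_virt_to_phys(x)    virt_to_idmap((unsigned long)(x))

    /* arch/arm64/include/asm/kvm_mmu.h (sketch): plain physical translation */
    #define kvm_virt_to_phys(x)    __virt_to_phys((unsigned long)(x))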
2014-04-16arm64: KVM: Force undefined exception for Guest SMC instructionsAnup Patel1-3/+0
The SMC-based PSCI emulation for the Guest is going to be very different from the in-kernel HVC-based PSCI emulation, hence for now just inject an undefined exception when the Guest executes an SMC instruction. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-04-16arm64: KVM: Support X-Gene guest VCPU on APM X-Gene hostAnup Patel3-14/+24
This patch allows us to have X-Gene guest VCPU when using KVM arm64 on APM X-Gene host. We add KVM_ARM_TARGET_XGENE_POTENZA for X-Gene Potenza compatible guest VCPU and we return KVM_ARM_TARGET_XGENE_POTENZA in kvm_target_cpu() when running on X-Gene host with Potenza core. [maz: sanitized the commit log] Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2014-04-16arm64: KVM: Add Kconfig option for max VCPUs per-GuestAnup Patel2-1/+17
The current max VCPUs per Guest is set to 4, which prevents us from creating a Guest (or VM) with 8 VCPUs on a Host (e.g. the X-Gene Storm SoC) with 8 Host CPUs. The correct value of max VCPUs per Guest should be the same as the max CPUs supported by GICv2, which is 8, but increasing the max VCPUs per Guest can make things slower, hence we add a Kconfig option to let KVM users select an appropriate max VCPUs per Guest. Signed-off-by: Anup Patel <anup.patel@linaro.org> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
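A rough sketch of how such an option might be consumed on the C side (the fallback value below is illustrative; CONFIG_KVM_ARM_MAX_VCPUS is the symbol implied by the description):

    /* Sketch: derive the per-guest VCPU limit from the new Kconfig option. */
    #if defined(CONFIG_KVM_ARM_MAX_VCPUS)
    #define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
    #else
    #define KVM_MAX_VCPUS 0
    #endif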
2014-04-16ARM/KVM: save and restore generic timer registersAndre Przywara5-1/+166
For migration to work we need to save (and later restore) the state of each core's virtual generic timer. Since this is per VCPU, we can use the [gs]et_one_reg ioctl and export the three needed registers (control, counter, compare value). Though they live in cp15 space, we don't use the existing list, since they need special accessor functions and the arch timer is optional. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Andre Przywara <andre.przywara@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
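From userspace, such registers are read and written through the generic ONE_REG ioctls; a minimal sketch of the migration-time read side (the register-id constant KVM_REG_ARM_TIMER_CNT is assumed here, check the uapi headers):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: read the virtual timer counter from a vcpu fd during save. */
    static int read_timer_counter(int vcpu_fd, uint64_t *val)
    {
            struct kvm_one_reg reg = {
                    .id   = KVM_REG_ARM_TIMER_CNT,          /* assumed register id */
                    .addr = (uint64_t)(unsigned long)val,
            };

            return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
    }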
2014-04-16arm64: fix typo in entry.SNeil Zhang1-1/+1
Commit 64681787 (arm64: let the core code deal with preempt_count) changed the code but left the comments unchanged; fix them. Signed-off-by: Neil Zhang <zhangwm@marvell.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: kernel: restore HW breakpoint registers in cpu_suspendLorenzo Pieralisi2-23/+28
When a CPU resumes from low-power, it restores HW breakpoint and watchpoint slots through a CPU PM notifier. Since we want to enable debugging as early as possible in the resume path, the mdscr content is restored along with the general purpose registers in the cpu_suspend API and debug exceptions are re-enabled when cpu_suspend returns. Since the CPU PM notifier is run after a CPU has been resumed, we cannot expect HW breakpoint registers to contain sane values until the notifier is run, since the HW breakpoint registers' content is unknown at reset; this means that the CPU might run with debug exceptions enabled, mdscr restored, but HW breakpoint registers containing junk values that can trigger spurious debug exceptions. This patch fixes the current HW breakpoint restore by moving the HW breakpoint register restoration to the cpu_suspend API, before the debug exceptions are enabled. This way, as soon as the cpu_suspend function returns, the kernel can resume debugging with sane values in the HW breakpoint registers. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16jump_label: use defined macros instead of hard-coding for better readabilityJiang Liu1-7/+12
Use macro JUMP_LABEL_TRUE_BRANCH instead of hard-coding for better readability. Acked-by: Steven Rostedt <rostedt@goodmis.org> Acked-by: Jason Baron <jbaron@akamai.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64, jump label: optimize jump label implementationJiang Liu4-0/+112
Optimize jump label implementation for ARM64 by dynamically patching kernel text. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64, jump label: detect %c support for ARM64Jiang Liu1-1/+1
As with commit a9468f30b5eac6 "ARM: 7333/2: jump label: detect %c support for ARM", this patch detects the same thing for ARM64, because some ARM64 GCC versions have the same issue: some versions of ARM64 GCC which do support asm goto do not support the %c specifier. Since we need %c to support jump labels on ARM64, detect that too in the asm goto detection script to avoid build errors with these versions. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: introduce aarch64_insn_gen_{nop|branch_imm}() helper functionsJiang Liu2-0/+50
Introduce aarch64_insn_gen_{nop|branch_imm}() helper functions, which will be used to implement jump label on ARM64. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: move encode_insn_immediate() from module.c to insn.cJiang Liu3-110/+114
The encode_insn_immediate() function will be used by other instruction-manipulation functions, so move it into insn.c and rename it aarch64_insn_encode_immediate(). Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: introduce interfaces to hotpatch kernel and module codeJiang Liu2-1/+128
Introduce three interfaces to patch kernel and module code: aarch64_insn_patch_text_nosync(), which patches code without synchronization (it is the caller's responsibility to synchronize all CPUs if needed); aarch64_insn_patch_text_sync(), which patches code and always synchronizes with stop_machine(); and aarch64_insn_patch_text(), which patches code and synchronizes with stop_machine() only if needed. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
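A hedged sketch of the three entry points and a hypothetical caller (prototypes paraphrased from the description above, not copied from the patch):

    #include <linux/types.h>

    int aarch64_insn_patch_text_nosync(void *addr, u32 insn);              /* caller syncs CPUs */
    int aarch64_insn_patch_text_sync(void *addrs[], u32 insns[], int cnt); /* always stop_machine() */
    int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);      /* syncs only if needed */
    u32 aarch64_insn_gen_nop(void);                                        /* from the earlier helper patch */

    /* Hypothetical caller: replace the instruction at 'site' with a NOP. */
    static int patch_site_with_nop(void *site)
    {
            void *addrs[] = { site };
            u32   insns[] = { aarch64_insn_gen_nop() };

            return aarch64_insn_patch_text(addrs, insns, 1);
    }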
2014-04-16arm64: introduce basic aarch64 instruction decoding helpersJiang Liu3-1/+169
Introduce the basic aarch64 instruction decoding helpers aarch64_get_insn_class() and aarch64_insn_hotpatch_safe(). Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jiang Liu <liuj97@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: dts: Reduce size of virtio block device for foundation modelMark Brown1-1/+1
Will Deacon observed that kvmtool uses a size of 0x200 for the virtio block memory region and that the virtio block spec only uses 31 bytes in the device specific region at 0x100, so reduce the region to a less wasteful 0x200. Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: Remove unused __data_loc variableGeoff Levand2-12/+0
The __data_loc variable is an unused left over from the 32 bit arm implementation. Remove that variable and adjust the __mmap_switched startup routine accordingly. Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16clockevents: Implement unbind functionalityThomas Gleixner5-3/+127
Provide a sysfs interface to allow unbinding of clockevent devices. The device is unbound if it is unused or if there is a replacement device available. Unbinding of broadcast devices is not supported as we don't want to foster that nonsense. If no replacement device is available the unbind returns -EBUSY. Unbind is available from the kernel and through sysfs, which is necessary to drop the module refcount. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: John Stultz <john.stultz@linaro.org> Cc: Magnus Damm <magnus.damm@gmail.com> Link: http://lkml.kernel.org/r/20130425143436.499216659@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-04-16clockevents: Define CS_NAME_LEN unconditionallyThomas Gleixner1-0/+2
Unbreak architectures which do not use clockevents but need to build some of the core timekeeping infrastructure. Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-04-16ARM: arch_timer: add support to configure and enable event streamSudeep KarkadaNagesha3-4/+29
This patch adds support for configuring the event stream frequency and enabling it. It also adds the hwcap definitions so that userspace can detect this event stream feature. Cc: Russell King <linux@arm.linux.org.uk> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Conflicts: arch/arm/include/asm/arch_timer.h arch/arm/include/uapi/asm/hwcap.h arch/arm/kernel/setup.c
2014-04-16arm64: Enable CMALaura Abbott4-2/+56
arm64 targets need the features CMA provides. Add the appropriate hooks, header files, and Kconfig options to allow this to happen. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: Warn on NULL device structure for dma APIsLaura Abbott1-0/+10
Although parts of the DMA APIs may properly check for NULL devices, there may be some places that don't. Rather than fix up all the possible locations, just require a non-NULL device structure to be used for allocating/freeing. Cc: Will Deacon <will.deacon@arm.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [catalin.marinas@arm.com: s/WARN/WARN_ONCE/] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
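A minimal sketch of the kind of check being added (WARN_ONCE per the bracketed note; the function and message below are illustrative, not the actual arm64 code):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Sketch: refuse DMA allocation when no device structure is supplied. */
    static void *dma_alloc_sketch(struct device *dev, size_t size,
                                  dma_addr_t *dma_handle, gfp_t flags)
    {
            if (WARN_ONCE(!dev, "Use an actual device structure for DMA allocation\n"))
                    return NULL;

            /* ... normal coherent allocation path would follow here ... */
            return NULL;
    }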
2014-04-16arm64: Add hwcaps for crypto and CRC32 extensions.Steve Capper2-1/+42
Advertise the optional cryptographic and CRC32 instructions to user space where present. Several hwcap bits [3-7] are allocated. Signed-off-by: Steve Capper <steve.capper@linaro.org> [bit 2 is taken now so use bits 3-7 instead] Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
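Userspace can test the new bits via the ELF auxiliary vector; a small sketch assuming the bit names end up as HWCAP_AES and HWCAP_CRC32 (check asm/hwcap.h for the exact definitions):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>          /* HWCAP_* bit definitions (assumed names) */

    int main(void)
    {
            unsigned long hwcaps = getauxval(AT_HWCAP);

            if (hwcaps & HWCAP_AES)
                    printf("AES instructions available\n");
            if (hwcaps & HWCAP_CRC32)
                    printf("CRC32 instructions available\n");
            return 0;
    }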
2014-04-16arm64: drop redundant macros from read_cpuid()Ard Biesheuvel1-14/+4
asm/cputype.h contains a bunch of #defines for CPU id registers that essentially map to themselves. Remove the #defines and pass the tokens directly to the inline asm() that reads the registers. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: Remove outdated commentLiviu Dudau1-5/+0
Code referenced in the comment has moved to arch/arm64/kernel/cputable.c Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: cmpxchg: update macros to prevent warningsMark Hambleton1-11/+17
Make sure the value we are going to return is referenced in order to avoid warnings from newer GCCs such as: arch/arm64/include/asm/cmpxchg.h:162:3: warning: value computed is not used [-Wunused-value] ((__typeof__(*(ptr)))__cmpxchg_mb((ptr), \ ^ net/netfilter/nf_conntrack_core.c:674:2: note: in expansion of macro ‘cmpxchg’ cmpxchg(&nf_conntrack_hash_rnd, 0, rand); [Modified to use the current underlying implementation as current mainline for both cmpxchg() and cmpxchg_local() does -- broonie] Signed-off-by: Mark Hambleton <mahamble@broadcom.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: support single-step and breakpoint handler hooksSandeepa Prabhu3-1/+110
AArch64 single-step and breakpoint debug exceptions will be used by multiple debug frameworks like kprobes and kgdb. This patch implements the hooks for those frameworks to register their own handlers for breakpoint and single-step events. It also reworks the debug exception handler in entry.S (do_dbg) to route software breakpoint (BRK64) exceptions to do_debug_exception(). Signed-off-by: Sandeepa Prabhu <sandeepa.prabhu@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
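A hedged sketch of how a debug framework such as kprobes or kgdb might use the new hooks (the break_hook field layout and the ESR values below are assumptions for illustration):

    #include <linux/ptrace.h>
    #include <asm/debug-monitors.h>         /* register_break_hook() per this patch */

    /* Assumed handler contract: return 0 when the exception was handled. */
    static int my_brk_handler(struct pt_regs *regs, unsigned int esr)
    {
            /* inspect regs, decide whether this BRK64 belongs to us ... */
            return 0;
    }

    static struct break_hook my_break_hook = {
            .esr_mask = 0xffffffff,         /* illustrative: match exact ESR */
            .esr_val  = 0xf2000000,         /* illustrative BRK immediate encoding */
            .fn       = my_brk_handler,
    };

    static int __init my_debug_init(void)
    {
            register_break_hook(&my_break_hook);
            return 0;
    }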
2014-04-16ARM64: fix framepointer check in unwind_frameKonstantin Khlebnikov1-1/+1
We need at least 24 bytes above the frame pointer. Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16ARM64: check stack pointer in get_wchanKonstantin Khlebnikov1-2/+5
get_wchan() is lockless. The task may wake up at any time and change its own stack, so each successive stack frame may be overwritten and filled with random data. Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: kconfig: select HAVE_EFFICIENT_UNALIGNED_ACCESSWill Deacon1-0/+1
ARMv8 CPUs can perform efficient unaligned memory accesses in hardware, and this feature is relied upon by code such as the dcache word-at-a-time name hashing. This patch selects HAVE_EFFICIENT_UNALIGNED_ACCESS for arm64. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: dcache: select DCACHE_WORD_ACCESS for little-endian CPUsWill Deacon2-0/+41
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string comparisons in the vfs layer. This patch implements support for load_unaligned_zeropad in much the same way as has been done for ARM, although big-endian systems are also supported. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Conflicts: arch/arm64/Kconfig
2014-04-16arm64: futex: ensure .fixup entries are sufficiently alignedWill Deacon1-0/+1
AArch64 instructions must be 4-byte aligned, so make sure this is true for the futex .fixup section. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: use generic strnlen_user and strncpy_from_user functionsWill Deacon7-127/+64
This patch implements the word-at-a-time interface for arm64 using the same algorithm as ARM. We use the fls64 macro, which expands to a clz instruction via a compiler builtin. Big-endian configurations make use of the implementation from asm-generic. With this implemented, we can replace our byte-at-a-time strnlen_user and strncpy_from_user functions with the optimised generic versions. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
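The core trick behind word-at-a-time string handling is detecting a zero byte in a full word at once; a generic little-endian illustration of that idea (not the arm64 implementation itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* Classic bit trick: true if any byte of x is zero. Word-at-a-time string
     * routines use this to scan eight bytes per iteration on 64-bit machines. */
    static bool word_has_zero_byte(uint64_t x)
    {
            return ((x - 0x0101010101010101ULL) & ~x & 0x8080808080808080ULL) != 0;
    }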
2014-04-16arm64: percpu: implement optimised pcpu access using tpidr_el1Will Deacon4-3/+55
This patch implements optimised percpu variable accesses using the el1 r/w thread register (tpidr_el1) along the same lines as arch/arm/. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16arm64: perf: add support for percpu pmu interruptVinayak Kale1-30/+78
Add support for irq registration when pmu interrupt is percpu. Signed-off-by: Vinayak Kale <vkale@apm.com> Signed-off-by: Tuan Phan <tphan@apm.com> [will: tidied up cross-calling to pass &irq] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16genirq: Add an accessor for IRQ_PER_CPU flagVinayak Kale1-0/+8
This patch adds an accessor function for IRQ_PER_CPU flag. The accessor function is useful to determine whether an IRQ is percpu or not. This patch is based on an older patch posted by Chris Smith here [1]. There is a minor change w.r.t. Chris's original patch: The accessor function is renamed as 'irq_is_percpu' instead of 'irq_is_per_cpu'. [1]: http://lkml.indiana.edu/hypermail/linux/kernel/1207.3/02955.html Signed-off-by: Chris Smith <chris.smith@st.com> Signed-off-by: Vinayak Kale <vkale@apm.com> Acked-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
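A sketch of the kind of call-site decision the accessor enables, mirroring the PMU patch above (the driver helper and handler names are hypothetical):

    #include <linux/interrupt.h>
    #include <linux/irq.h>

    static irqreturn_t my_pmu_handler(int irq, void *dev)
    {
            /* ... acknowledge and handle the PMU interrupt ... */
            return IRQ_HANDLED;
    }

    /* Hypothetical helper: pick the right registration call for the IRQ type. */
    static int my_pmu_request_irq(unsigned int irq, void __percpu *pcpu_dev)
    {
            if (irq_is_percpu(irq))         /* accessor added by this patch */
                    return request_percpu_irq(irq, my_pmu_handler, "my-pmu", pcpu_dev);

            return request_irq(irq, my_pmu_handler, 0, "my-pmu", NULL);
    }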
2014-04-16arm64: vmlinux.lds.S: drop redundant .commentMark Rutland1-1/+0
We currently try to emit .comment twice, once in STABS_DEBUG, and once in the line immediately following it. As the two section definitions are identical, the latter is redundant and can be dropped. This patch drops the redundant .comment section definition. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-04-16drm: Update drm_addmap and drm_mmap to use PAT WC instead of MTRRsAndy Lutomirski2-21/+21
Previously, DRM_FRAME_BUFFER mappings, as well as DRM_REGISTERS mappings with DRM_WRITE_COMBINING set, resulted in an unconditional MTRR being added but the actual mappings being created as UC-. Now these mappings have the MTRR added only if needed, but they will be mapped with pgprot_writecombine. The non-WC DRM_REGISTERS case now uses pgprot_noncached instead of hardcoding the bit twiddling. The DRM_AGP case is unchanged for now. [airlied: fix ppc build] Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Dave Airlie <airlied@redhat.com> Conflicts: drivers/gpu/drm/drm_vm.c
2014-04-16Add arch_phys_wc_{add, del} to manipulate WC MTRRs if neededAndy Lutomirski4-1/+112
Several drivers currently use mtrr_add through various #ifdef guards and/or drm wrappers. The vast majority of them want to add WC MTRRs on x86 systems and don't actually need the MTRR if PAT (i.e. ioremap_wc, etc) are working. arch_phys_wc_add and arch_phys_wc_del are new functions, available on all architectures and configurations, that add WC MTRRs on x86 if needed (and handle errors) and do nothing at all otherwise. They're also easier to use than mtrr_add and mtrr_del, so the call sites can be simplified. As an added benefit, this will avoid wasting MTRRs and possibly warning pointlessly on PAT-supporting systems. Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Dave Airlie <airlied@redhat.com>
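A hedged usage sketch of the new pair in a driver (the BAR number and helper names below are illustrative):

    #include <linux/io.h>
    #include <linux/pci.h>

    /* Sketch: map a framebuffer BAR write-combined, adding an MTRR only where
     * the platform needs one (a no-op when PAT/ioremap_wc already suffices). */
    static int map_fb_bar(struct pci_dev *pdev, void __iomem **fb, int *wc_handle)
    {
            resource_size_t base = pci_resource_start(pdev, 0);  /* BAR 0: illustrative */
            resource_size_t len  = pci_resource_len(pdev, 0);

            *wc_handle = arch_phys_wc_add(base, len);  /* cookie for arch_phys_wc_del() */
            *fb = ioremap_wc(base, len);
            return *fb ? 0 : -ENOMEM;
    }

    static void unmap_fb_bar(void __iomem *fb, int wc_handle)
    {
            iounmap(fb);
            arch_phys_wc_del(wc_handle);
    }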
2014-04-16drm/cma: Replace PTR_RET with PTR_ERR_OR_ZEROSachin Kamat1-1/+1
PTR_RET is now deprecated. Use PTR_ERR_OR_ZERO instead. Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
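For reference, a tiny sketch of the replacement pattern (the constructor below is hypothetical; PTR_ERR_OR_ZERO comes from linux/err.h):

    #include <linux/err.h>

    struct thing;
    struct thing *create_thing(void);          /* hypothetical constructor */

    static int init_thing(void)
    {
            struct thing *t = create_thing();

            /* Was: if (IS_ERR(t)) return PTR_ERR(t); return 0;  (a.k.a. PTR_RET) */
            return PTR_ERR_OR_ZERO(t);
    }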
2014-04-16drm/crtc-helper: explicit DPMS on after modesetDaniel Vetter1-16/+11
Atm the crtc helper implementation of set_config has really inconsistent semantics: if just an fb update is good enough, the dpms state will be left as-is, but if we do a full modeset we force everything to dpms on. This change has already been applied to the i915 modeset code in commit e3de42b68478a8c95dd27520e9adead2af9477a5 Author: Imre Deak <imre.deak@intel.com> Date: Fri May 3 19:44:07 2013 +0200 drm/i915: force full modeset if the connector is in DPMS OFF mode which according to Greg KH seems to aim for a new record in most Bugzilla: links in a commit message. The history of this dpms forcing is pretty interesting. This patch here is an almost-revert of commit 811aaa55ba21ab37407018cfc01770d6b037d3fb Author: Keith Packard <keithp@keithp.com> Date: Thu Feb 3 16:57:28 2011 -0800 drm: Only set DPMS ON when actually configuring a mode which fixed the bug of trying to dpms on disabled outputs, but introduced the new discrepancy between an fb update only and full modesets. The actual introduction of this goes back to commit bf9dc102e284a5aa78c73fc9d72e11d5ccd8669f Author: Keith Packard <keithp@keithp.com> Date: Fri Nov 26 10:45:58 2010 -0800 drm: Set connector DPMS status to ON in drm_crtc_helper_set_config And if you dig around in the i915 driver code there's even more fun around forcing dpms on and losing our heads and temper over the resulting inconsistencies. Especially the DP re-training code had tons of funny stuff in it. v2: So v1 totally blew up on resume on my radeon system here. After much head-scratching I've figured out that the radeon resume function resumes the console system _before_ it actually restores all the modeset state. And resuming the console system means that fbdev does an immediate ->set_par call. Now, up to this patch that ->set_par did absolutely nothing: all the old sw state from pre-suspend was still around (since the modeset reset wasn't done yet), which means that the set_config calls done as a result of the ->set_par were all treated as no-ops (even though the real hw state was obviously something completely different). Since v1 of this patch just added a bunch of ->dpms calls if the crtc was enabled, those set_config calls suddenly stopped being no-ops. But because the hw state wasn't restored the ->dpms callbacks resulted in decent amounts of hilarity and eventual full hangs. Since I can't review all kms drivers for such tricky ordering constraints, v2 opts for a different approach and forces a full modeset if the connector dpms state isn't DPMS_ON. Since the ->dpms callbacks implemented by the modeset helpers update the connector->dpms property, we have the same effect of ensuring that the pipe is ultimately turned on, even if we just end up updating the fb. This is the same approach we ended up using in the intel driver. Note that besides i915.ko, all other drivers eventually call drm_helper_connector_dpms, with the exception of vmwgfx, which does not support dpms at all. v3: Dave Airlie merged the broken first version of this patch, so squash in the revert of commit 372835a8527f85b3eff20a18c2c339e827dfd4e4 Author: Daniel Vetter <daniel.vetter@ffwll.ch> Date: Sat Jun 15 00:13:13 2013 +0200 drm/crtc-helper: explicit DPMS on after modeset Also fix up the spelling fail a bit in the commit message while at it. Cc: Dave Airlie <airlied@redhat.com> Reviewed-by: Alex Deucher <alexdeucher@gmail.com> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=67043 Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-04-16drm: avoid warning in drm_load_edid_firmware()Linus Torvalds1-3/+3
Use "const char *" instead of "char *" in order to avoid this warning: drivers/gpu/drm/drm_edid_load.c: In function ‘drm_load_edid_firmware’: drivers/gpu/drm/drm_edid_load.c:245:25: warning: initialization discards ‘const’ qualifier from pointer target type [enabled by default] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-04-16drm/cma: remove GEM CMA specific dma_buf functionalityJoonyoung Shim2-292/+0
We can use prime helpers instead. Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-04-16drm/cma: add low-level hook functions to use prime helpersJoonyoung Shim2-0/+88
Instead of using the dma_buf functionality for GEM CMA, we can use prime helpers if we can provide low-level hook functions for GEM CMA. Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-04-16drm: add mmap function to prime helpersJoonyoung Shim2-1/+9
This adds the ability to call the low-level mmap() hook from the prime helpers. Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com> Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Dave Airlie <airlied@redhat.com>
2014-04-16drm/prime: fix sgt NULL checkingJoonyoung Shim1-5/+6
drm_gem_map_detach() can be called with a NULL sgt. Signed-off-by: Joonyoung Shim <jy0922.shim@samsung.com> Signed-off-by: Dave Airlie <airlied@redhat.com>