path: root/arch
2019-07-31  arm64: tizen_tw3_defconfig: Enable NETFILTER_XT_MATCH_OWNER config  (Seung-Woo Kim; 1 file, -2/+1)
            refs: tizen_5.5.m2_release, submit/tizen_5.5_mobile_hotfix/20201027.114301, submit/tizen_5.5_mobile_hotfix/20201026.1851010, submit/tizen_5.5/20191031.000013, submit/tizen_5.5/20191031.000011, submit/tizen_5.5/20191031.000010, submit/tizen/20190801.010047, accepted/tizen/unified/20190801.113445, accepted/tizen/5.5/unified/mobile/hotfix/20201027.060924, accepted/tizen/5.5/unified/20191031.032629, tizen_5.5_mobile_hotfix, accepted/tizen_5.5_unified_mobile_hotfix
Enable the NETFILTER_XT_MATCH_OWNER config option to use xt_owner supplementary-groups matching. Change-Id: I3c40c2d41dd29eede403f430887aab8f1ee4eaea Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
2019-05-16  arm64: dts: exynos: Support offload playback  (Jaechul Lee; 34 files, -3/+36)
            refs: submit/tizen/20190530.011954, accepted/tizen/unified/20190602.221819
Change-Id: I9a000740ef332018484f4a055baa700b27093821 Signed-off-by: Jaechul Lee <jcsing.lee@samsung.com>
2019-01-09  arm64: tizen_tw3_defconfig: enable SECURITY_SMACK_APPEND_SIGNALS  (Seung-Woo Kim; 1 file, -1/+1)
            refs: submit/tizen/20190109.063522, accepted/tizen/unified/20190110.060355
Enable SECURITY_SMACK_APPEND_SIGNALS so that Smack rules can deny signal delivery while still allowing IPC in Tizen. Change-Id: If438465b6ced7f529baf69754555778eb554c6c5 Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
2019-01-09  arm64: tizen_tw3_defconfig: enable CONNECTOR and PROC_EVENTS options  (Seung-Woo Kim; 1 file, -1/+3)
These options enable the kernel's Netlink Connector feature, which reports process lifecycle events such as fork and exit status for all processes asynchronously. In Tizen, it will be used by stc-manager (smart traffic control) to monitor process lifecycles. Change-Id: I265504609e6b2ce963875e66884d064affc48d9d Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
2019-01-04  ARM64: tizen_tw3_defconfig: enable MODULES  (Seung-Woo Kim; 1 file, -1/+25)
            refs: submit/tizen/20190107.021540, accepted/tizen/unified/20190107.065510
To support kernel module building, loading, and unloading, enable the MODULES config option together with the forced-loading and forced-unloading options. Note: in Tizen, supporting swap-modules requires kernel module build support. Change-Id: If67e5a8c5ae91c632a3225244a385ac9ff26728b Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
2019-01-04  arm64: tizen_tw3_defconfig: disable SWAP_DA config.  (Seung-Woo Kim; 1 file, -9/+1)
To use swap-modules instead of in-tree SWAP_DA, disable related configs. Change-Id: I773919821d2e3443878b44672b1d67617f425e8c Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
2018-12-11  ARM64: configs: tizen_tw3_defconfig: disable modem interface config  (Jaehoon Chung; 1 file, -32/+4)
Disable the modem interface configurations. This config is based on the Galileo Large config. (The previous config was based on Galileo Large LTE NA.) Change-Id: If3de5097a78e258f8ffbba4c16977e0337a7f0e4 Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
2018-12-05  ARM64: defconfig: use the bootloader's cmdline  (Jaehoon Chung; 1 file, -3/+3)
Use the bootloader's cmdline. Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
2018-12-03  ARM64: configs: add tizen_tw3_defconfig file  (Jaehoon Chung; 1 file, -0/+4632)
Add the tizen_tw3_defconfig file. This config comes from Galileo Large LTE NA. Signed-off-by: Jaehoon Chung <jh80.chung@samsung.com>
2018-12-03  Import codes from product  (Jaehoon Chung; 291 files, -232/+137664)
Import code from the product. The code comes from opensource.samsung.com. - Version: R880XU1ARGB
2017-10-27  x86/microcode/intel: Disable late loading on model 79  (Borislav Petkov; 1 file, -0/+19)
commit 723f2828a98c8ca19842042f418fb30dd8cfc0f7 upstream. Blacklist Broadwell X model 79 for late loading due to an erratum. Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Tony Luck <tony.luck@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20171018111225.25635-1-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-27  parisc: Fix double-word compare and exchange in LWS code on 32-bit kernels  (John David Anglin; 1 file, -3/+3)
commit 374b3bf8e8b519f61eb9775888074c6e46b3bf0c upstream. As discussed on the debian-hppa list, double-word compare and exchange operations fail on 32-bit kernels. Looking at the code, I realized that the ",ma" completer does the wrong thing in the "ldw,ma 4(%r26), %r29" instruction. This increments %r26 and causes the following store to write to the wrong location. Note by Helge Deller: The patch applies cleanly to stable kernel series if this upstream commit is merged in advance: f4125cfdb300 ("parisc: Avoid trashing sr2 and sr3 in LWS code"). Signed-off-by: John David Anglin <dave.anglin@bell.net> Tested-by: Christoph Biedl <debian.axhn@manchmal.in-ulm.de> Fixes: 89206491201c ("parisc: Implement new LWS CAS supporting 64 bit operations.") Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
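For readers who don't know PA-RISC assembly, a rough C analogy of the failure mode (the variable names are invented; this only illustrates a post-incrementing load followed by a store through the already-advanced pointer):

    #include <stdio.h>

    int main(void)
    {
        int slot[2] = { 0, 0 };
        int *base = slot;

        /* "load with modify-after": read *base and advance base, the way the
         * ",ma" completer advances %r26 in "ldw,ma 4(%r26), %r29" */
        int loaded = *base++;

        /* the follow-up store was meant for slot[0], but base has moved on */
        *base = 42;

        printf("loaded=%d slot[0]=%d slot[1]=%d\n", loaded, slot[0], slot[1]);
        return 0;
    }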
2017-10-21  powerpc/perf: Add restrictions to PMC5 in power9 DD1  (Madhavan Srinivasan; 2 files, -1/+5)
[ Upstream commit 8d911904f3ce412b20874a9c95f82009dcbb007c ] PMC5 on POWER9 DD1 may not provide right counts in all sampling scenarios, hence use PM_INST_DISP event instead in PMC2 or PMC3 in preference. Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  mm/memory_hotplug: set magic number to page->freelist instead of page->lru.next  (Yasuaki Ishimatsu; 1 file, -1/+1)
[ Upstream commit ddffe98d166f4a93d996d5aa628fd745311fc1e7 ] To identify that pages of page table are allocated from bootmem allocator, magic number sets to page->lru.next. But page->lru list is initialized in reserve_bootmem_region(). So when calling free_pagetable(), the function cannot find the magic number of pages. And free_pagetable() frees the pages by free_reserved_page() not put_page_bootmem(). But if the pages are allocated from bootmem allocator and used as page table, the pages have private flag. So before freeing the pages, we should clear the private flag by put_page_bootmem(). Before applying the commit 7bfec6f47bb0 ("mm, page_alloc: check multiple page fields with a single branch"), we could find the following visible issue: BUG: Bad page state in process kworker/u1024:1 page:ffffea103cfd8040 count:0 mapcount:0 mappi flags: 0x6fffff80000800(private) page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set bad because of flags: 0x800(private) <snip> Call Trace: [...] dump_stack+0x63/0x87 [...] bad_page+0x114/0x130 [...] free_pages_prepare+0x299/0x2d0 [...] free_hot_cold_page+0x31/0x150 [...] __free_pages+0x25/0x30 [...] free_pagetable+0x6f/0xb4 [...] remove_pagetable+0x379/0x7ff [...] vmemmap_free+0x10/0x20 [...] sparse_remove_one_section+0x149/0x180 [...] __remove_pages+0x2e9/0x4f0 [...] arch_remove_memory+0x63/0xc0 [...] remove_memory+0x8c/0xc0 [...] acpi_memory_device_remove+0x79/0xa5 [...] acpi_bus_trim+0x5a/0x8d [...] acpi_bus_trim+0x38/0x8d [...] acpi_device_hotplug+0x1b7/0x418 [...] acpi_hotplug_work_fn+0x1e/0x29 [...] process_one_work+0x152/0x400 [...] worker_thread+0x125/0x4b0 [...] kthread+0xd8/0xf0 [...] ret_from_fork+0x22/0x40 And the issue still silently occurs. Until freeing the pages of page table allocated from bootmem allocator, the page->freelist is never used. So the patch sets magic number to page->freelist instead of page->lru.next. [isimatu.yasuaki@jp.fujitsu.com: fix merge issue] Link: http://lkml.kernel.org/r/722b1cc4-93ac-dd8b-2be2-7a7e313b3b0b@gmail.com Link: http://lkml.kernel.org/r/2c29bd9f-5b67-02d0-18a3-8828e78bbb6f@gmail.com Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Xishi Qiu <qiuxishi@huawei.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  sparc64: Migrate hvcons irq to panicked cpu  (Vijay Kumar; 2 files, -2/+9)
[ Upstream commit 7dd4fcf5b70694dc961eb6b954673e4fc9730dbd ] On panic, all other CPUs are stopped except the one which had hit panic. To keep console alive, we need to migrate hvcons irq to panicked CPU. Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-21  MIPS: Fix minimum alignment requirement of IRQ stack  (Matt Redfearn; 1 file, -1/+1)
commit 5fdc66e046206306bf61ff2d626bfa52ca087f7b upstream. Commit db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack") erroneously set the initial stack pointer of the IRQ stack to a value with a 4 byte alignment. The MIPS32 ABI requires that the minimum stack alignment is 8 byte, and the MIPS64 ABIs(n32/n64) require 16 byte minimum alignment. Fix IRQ_STACK_START such that it leaves space for the dummy stack frame (containing interrupted task kernel stack pointer) while also meeting minimum alignment requirements. Fixes: db8466c581cc ("MIPS: IRQ Stack: Unwind IRQ stack onto task stack") Reported-by: Darius Ivanauskas <dasilt@yahoo.com> Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Petr Mladek <pmladek@suse.com> Cc: Aaron Tomlin <atomlin@redhat.com> Cc: Jason A. Donenfeld <jason@zx2c4.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/16760/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
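A small standalone sketch of the alignment arithmetic involved, assuming a 32 KiB IRQ stack and the 16-byte n32/n64 requirement quoted above (the constants and names are illustrative, not the kernel's actual definitions):

    #include <stdio.h>

    /* Illustrative constants only: a 32 KiB IRQ stack, one reserved long at the
     * top for the saved task stack pointer, and the 16-byte alignment that the
     * n32/n64 ABIs require for the initial stack pointer. */
    #define IRQ_STACK_SIZE  0x8000UL
    #define STACK_ALIGN     16UL

    int main(void)
    {
        unsigned long naive   = IRQ_STACK_SIZE - sizeof(unsigned long);
        unsigned long aligned = (IRQ_STACK_SIZE - sizeof(unsigned long)) & ~(STACK_ALIGN - 1UL);

        printf("naive start:   %#lx (mod 16 = %lu)\n", naive, naive % 16);
        printf("aligned start: %#lx (mod 16 = %lu)\n", aligned, aligned % 16);
        return 0;
    }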
2017-10-18  KVM: nVMX: update last_nonleaf_level when initializing nested EPT  (Ladi Prosek; 1 file, -0/+1)
commit fd19d3b45164466a4adce7cbff448ba9189e1427 upstream. The function updates context->root_level but didn't call update_last_nonleaf_level so the previous and potentially wrong value was used for page walks. For example, a zero value of last_nonleaf_level would allow a potential out-of-bounds access in arch/x86/mmu/paging_tmpl.h's walk_addr_generic function (CVE-2017-12188). Fixes: 155a97a3d7c78b46cef6f1a973c831bc5a4f82bb Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  x86/alternatives: Fix alt_max_short macro to really be a max()  (Mathias Krause; 2 files, -4/+6)
commit 6b32c126d33d5cb379bca280ab8acedc1ca978ff upstream. The alt_max_short() macro in asm/alternative.h does not work as intended, leading to nasty bugs. E.g. alt_max_short("1", "3") evaluates to 3, but alt_max_short("3", "1") evaluates to 1 -- not exactly the maximum of 1 and 3. In fact, I had to learn it the hard way by crashing my kernel in not so funny ways by attempting to make use of the ALTENATIVE_2 macro with alternatives where the first one was larger than the second one. According to [1] and commit dbe4058a6a44 ("x86/alternatives: Fix ALTERNATIVE_2 padding generation properly") the right handed side should read "-(-(a < b))" not "-(-(a - b))". Fix that, to make the macro work as intended. While at it, fix up the comments regarding the additional "-", too. It's not about gas' usage of s32 but brain dead logic of having a "true" value of -1 for the < operator ... *sigh* Btw., the one in asm/alternative-asm.h is correct. And, apparently, all current users of ALTERNATIVE_2() pass same sized alternatives, avoiding to hit the bug. [1] http://graphics.stanford.edu/~seander/bithacks.html#IntegerMinOrMax Reviewed-and-tested-by: Borislav Petkov <bp@suse.de> Fixes: dbe4058a6a44 ("x86/alternatives: Fix ALTERNATIVE_2 padding generation properly") Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/1507228213-13095-1-git-send-email-minipli@googlemail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
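A minimal userspace check of the bit-hack cited at [1]; this is the C-level form, where a true comparison is 1, so the extra leading "-" that the kernel macro needs for gas (where true is -1) is absent:

    #include <stdio.h>

    /* max(a, b) = a ^ ((a ^ b) & -(a < b)): the mask must come from the
     * comparison.  Feeding it the difference instead reproduces the broken
     * asymmetry described above: (1,3) gives 3 but (3,1) gives 1. */
    static int max_cmp(int a, int b)  { return a ^ ((a ^ b) & -(a < b)); }
    static int max_diff(int a, int b) { return a ^ ((a ^ b) & -(a - b)); }  /* broken variant */

    int main(void)
    {
        printf("comparison mask: max(1,3)=%d  max(3,1)=%d\n", max_cmp(1, 3), max_cmp(3, 1));
        printf("difference mask: max(1,3)=%d  max(3,1)=%d\n", max_diff(1, 3), max_diff(3, 1));
        return 0;
    }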
2017-10-18  KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit  (Haozhong Zhang; 1 file, -1/+1)
commit 8eb3f87d903168bdbd1222776a6b1e281f50513e upstream. When KVM emulates an exit from L2 to L1, it loads L1 CR4 into the guest CR4. Before this CR4 loading, the guest CR4 refers to L2 CR4. Because these two CR4's are in different levels of guest, we should vmx_set_cr4() rather than kvm_set_cr4() here. The latter, which is used to handle guest writes to its CR4, checks the guest change to CR4 and may fail if the change is invalid. The failure may cause trouble. Consider we start a L1 guest with non-zero L1 PCID in use, (i.e. L1 CR4.PCIDE == 1 && L1 CR3.PCID != 0) and a L2 guest with L2 PCID disabled, (i.e. L2 CR4.PCIDE == 0) and following events may happen: 1. If kvm_set_cr4() is used in load_vmcs12_host_state() to load L1 CR4 into guest CR4 (in VMCS01) for L2 to L1 exit, it will fail because of PCID check. As a result, the guest CR4 recorded in L0 KVM (i.e. vcpu->arch.cr4) is left to the value of L2 CR4. 2. Later, if L1 attempts to change its CR4, e.g., clearing VMXE bit, kvm_set_cr4() in L0 KVM will think L1 also wants to enable PCID, because the wrong L2 CR4 is used by L0 KVM as L1 CR4. As L1 CR3.PCID != 0, L0 KVM will inject GP to L1 guest. Fixes: 4704d0befb072 ("KVM: nVMX: Exiting from L2 to L1") Cc: qemu-stable@nongnu.org Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  KVM: MMU: always terminate page walks at level 1  (Ladi Prosek; 2 files, -8/+9)
commit 829ee279aed43faa5cb1e4d65c0cad52f2426c53 upstream. is_last_gpte() is not equivalent to the pseudo-code given in commit 6bb69c9b69c31 ("KVM: MMU: simplify last_pte_bitmap") because an incorrect value of last_nonleaf_level may override the result even if level == 1. It is critical for is_last_gpte() to return true on level == 1 to terminate page walks. Otherwise memory corruption may occur as level is used as an index to various data structures throughout the page walking code. Even though the actual bug would be wherever the MMU is initialized (as in the previous patch), be defensive and ensure here that is_last_gpte() returns the correct value. This patch is also enough to fix CVE-2017-12188. Fixes: 6bb69c9b69c315200ddc2bc79aee14c0184cf5b2 Cc: Andy Honig <ahonig@google.com> Signed-off-by: Ladi Prosek <lprosek@redhat.com> [Panic if walk_addr_generic gets an incorrect level; this is a serious bug and it's not worth a WARN_ON where the recovery path might hide further exploitable issues; suggested by Andrew Honig. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-18  MIPS: math-emu: Remove pr_err() calls from fpu_emu()  (Paul Burton; 1 file, -2/+0)
commit ca8eb05b5f332a9e1ab3e2ece498d49f4d683470 upstream. The FPU emulator includes 2 calls to pr_err() which are triggered by invalid instruction encodings for MIPSr6 cmp.cond.fmt instructions. These cases are not kernel errors, merely invalid instructions which are already handled by delivering a SIGILL which will provide notification that something failed in cases where that makes sense. In cases where that SIGILL is somewhat expected & being handled, for example when crashme happens to generate one of the affected bad encodings, the message is printed with no useful context about what triggered it & spams the kernel log for no good reason. Remove the pr_err() calls to make crashme run silently & treat the bad encodings the same way we do others, with a SIGILL & no further kernel log output. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: f8c3c6717a71 ("MIPS: math-emu: Add support for the CMP.condn.fmt R6 instruction") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/17253/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  KVM: x86: fix singlestepping over syscall  (Paolo Bonzini; 3 files, -30/+24)
commit c8401dda2f0a00cd25c0af6a95ed50e478d25de4 upstream. TF is handled a bit differently for syscall and sysret, compared to the other instructions: TF is checked after the instruction completes, so that the OS can disable #DB at a syscall by adding TF to FMASK. When the sysret is executed the #DB is taken "as if" the syscall insn just completed. KVM emulates syscall so that it can trap 32-bit syscall on Intel processors. Fix the behavior, otherwise you could get #DB on a user stack which is not nice. This does not affect Linux guests, as they use an IST or task gate for #DB. This fixes CVE-2017-7518. Cc: stable@vger.kernel.org Reported-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [bwh: Backported to 4.9: - kvm_vcpu_check_singlestep() sets some flags differently - Drop changes to kvm_skip_emulated_instruction()] Cc: Ben Hutchings <benh@debian.org> Cc: Salvatore Bonaccorso <carnil@debian.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  powerpc/tm: Fix illegal TM state in signal handler  (Gustavo Romero; 1 file, -1/+12)
commit 044215d145a7a8a60ffa8fdc859d110a795fa6ea upstream. Currently it's possible that on returning from the signal handler through the restore_tm_sigcontexts() code path (e.g. from a signal caught due to a `trap` instruction executed in the middle of an HTM block, or a deliberately constructed sigframe) an illegal TM state (like TS=10 TM=0, i.e. "T0") is set in SRR1 and when `rfid` sets implicitly the MSR register from SRR1 register on return to userspace it causes a TM Bad Thing exception. That illegal state can be set (a) by a malicious user that disables the TM bit by tweaking the bits in uc_mcontext before returning from the signal handler or (b) by a sufficient number of context switches occurring such that the load_tm counter overflows and TM is disabled whilst in the signal handler. This commit fixes the illegal TM state by ensuring that TM bit is always enabled before we return from restore_tm_sigcontexts(). A small comment correction is made as well. Fixes: 5d176f751ee3 ("powerpc: tm: Enable transactional memory (TM) lazily for userspace") Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com> Signed-off-by: Breno Leitao <leitao@debian.org> Signed-off-by: Cyril Bur <cyrilbur@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-12  powerpc/64s: Use emergency stack for kernel TM Bad Thing program checks  (Cyril Bur; 1 file, -1/+23)
commit 265e60a170d0a0ecfc2d20490134ed2c48dd45ab upstream. When using transactional memory (TM), the CPU can be in one of six states as far as TM is concerned, encoded in the Machine State Register (MSR). Certain state transitions are illegal and if attempted trigger a "TM Bad Thing" type program check exception. If we ever hit one of these exceptions it's treated as a bug, ie. we oops, and kill the process and/or panic, depending on configuration. One case where we can trigger a TM Bad Thing, is when returning to userspace after a system call or interrupt, using RFID. When this happens the CPU first restores the user register state, in particular r1 (the stack pointer) and then attempts to update the MSR. However the MSR update is not allowed and so we take the program check with the user register state, but the kernel MSR. This tricks the exception entry code into thinking we have a bad kernel stack pointer, because the MSR says we're coming from the kernel, but r1 is pointing to userspace. To avoid this we instead always switch to the emergency stack if we take a TM Bad Thing from the kernel. That way none of the user register values are used, other than for printing in the oops message. This is the fix for CVE-2017-1000255. Fixes: 5d176f751ee3 ("powerpc: tm: Enable transactional memory (TM) lazily for userspace") Signed-off-by: Cyril Bur <cyrilbur@gmail.com> [mpe: Rewrite change log & comments, tweak asm slightly] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  s390/mm: make pmdp_invalidate() do invalidation only  (Gerald Schaefer; 1 file, -1/+3)
commit 91c575b335766effa6103eba42a82aea560c365f upstream. Commit 227be799c39a ("s390/mm: uninline pmdp_xxx functions from pgtable.h") inadvertently changed the behavior of pmdp_invalidate(), so that it now clears the pmd instead of just marking it as invalid. Fix this by restoring the original behavior. A possible impact of the misbehaving pmdp_invalidate() would be the MADV_DONTNEED races (see commits ced10803 and 58ceeb6b), although we should not have any negative impact on the related dirty/young flags, since those flags are not set by the hardware on s390. Fixes: 227be799c39a ("s390/mm: uninline pmdp_xxx functions from pgtable.h") Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  ARM: remove duplicate 'const' annotations  (Arnd Bergmann; 6 files, -6/+6)
commit 0527873b29b077fc8e656acd63e1866b429fef55 upstream. gcc-7 warns about some declarations that are more 'const' than necessary: arch/arm/mach-at91/pm.c:338:34: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier] static const struct of_device_id const ramc_ids[] __initconst = { arch/arm/mach-bcm/bcm_kona_smc.c:36:34: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier] static const struct of_device_id const bcm_kona_smc_ids[] __initconst = { arch/arm/mach-spear/time.c:207:34: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier] static const struct of_device_id const timer_of_match[] __initconst = { arch/arm/mach-omap2/prm_common.c:714:34: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier] static const struct of_device_id const omap_prcm_dt_match_table[] __initconst = { arch/arm/mach-omap2/vc.c:562:35: error: duplicate 'const' declaration specifier [-Werror=duplicate-decl-specifier] static const struct i2c_init_data const omap4_i2c_timing_data[] __initconst = { The ones in arch/arm were apparently all introduced accidentally by one commit that correctly marked a lot of variables as __initconst. Fixes: 19c233b79d1a ("ARM: appropriate __init annotation for const data") Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com> Acked-by: Tony Lindgren <tony@atomide.com> Acked-by: Nicolas Pitre <nico@linaro.org> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Krzysztof Hałasa <khalasa@piap.pl> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
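The warning is easy to reproduce outside the kernel with gcc 7's -Wduplicate-decl-specifier; a tiny example (not kernel code):

    #include <stdio.h>

    static const int const broken_table[] = { 1, 2, 3 };  /* "duplicate 'const' declaration specifier" */
    static const int fixed_table[] = { 1, 2, 3 };         /* what the patch does: keep a single const */

    int main(void)
    {
        printf("%d %d\n", broken_table[0], fixed_table[0]);
        return 0;
    }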
2017-10-08  ARM: dts: BCM5301X: Fix memory start address  (Jon Mason; 1 file, -1/+1)
[ Upstream commit 88d1fa70c21d7b431386cfe70cdc514d98b0c9c4 ] Memory starts at 0x80000000, not 0. 0 "works" due to mirroring of the first 128M of RAM to that address. Anything greater than 128M will quickly find nothing there. Correcting the starting address has everything working again. Signed-off-by: Jon Mason <jon.mason@broadcom.com> Fixes: 7eb05f6d ("ARM: dts: bcm5301x: Add BCM SVK DT files") Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  x86/acpi: Restore the order of CPU IDs  (Dou Liyang; 2 files, -20/+13)
[ Upstream commit 2b85b3d22920db7473e5fed5719e7955c0ec323e ] The following commits: f7c28833c2 ("x86/acpi: Enable acpi to register all possible cpus at boot time") and 8f54969dc8 ("x86/acpi: Introduce persistent storage for cpuid <-> apicid mapping") ... registered all the possible CPUs at boot time via ACPI tables to make the mapping of cpuid <-> apicid fixed. Both enabled and disabled CPUs could have a logical CPU ID after boot time. But ACPI tables are unreliable: the number and order of Local APIC entries depend on the firmware and are often inconsistent with the physical devices. Even if they are consistent, the disabled CPUs which take up some logical CPU IDs will also make the order discontinuous. Revert the registration of disabled CPUs, but keep the allocation logic of logical CPU IDs and also keep some code location changes. Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com> Tested-by: Xiaolong Ye <xiaolong.ye@intel.com> Cc: rjw@rjwysocki.net Cc: linux-acpi@vger.kernel.org Cc: guzheng1@huawei.com Cc: izumi.taku@jp.fujitsu.com Cc: lenb@kernel.org Link: http://lkml.kernel.org/r/1488528147-2279-4-git-send-email-douly.fnst@cn.fujitsu.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  parisc: perf: Fix potential NULL pointer dereference  (Arvind Yadav; 1 file, -45/+49)
[ Upstream commit 74e3f6e63da6c8e8246fba1689e040bc926b4a1a ] Fix potential NULL pointer dereference and clean up coding style errors (code indent, trailing whitespaces). Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com> Signed-off-by: Helge Deller <deller@gmx.de> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  MIPS: smp-cps: Fix retrieval of VPE mask on big endian CPUs  (Matt Redfearn; 1 file, -1/+1)
[ Upstream commit fb2155e3c30dc2043b52020e26965067a3e7779c ] The vpe_mask member of struct core_boot_config is of type atomic_t, which is a 32bit type. In cps-vec.S this member was being retrieved by a PTR_L macro, which on 64bit systems is a 64bit load. On little endian systems this is OK, since the double word that is retrieved will have the required less significant word in the correct position. However, on big endian systems the less significant word of the load is retrieved from address+4, and the more significant from address+0. The destination register therefore ends up with the required word in the more significant word e.g. when starting the second VP of a big endian 64bit system, the load PTR_L ta2, COREBOOTCFG_VPEMASK(a0) ends up setting register ta2 to 0x0000000300000000 When this value is written to the CPC it is ignored, since it is invalid to write anything larger than 4 bits. This results in any VP other than VP0 in a core failing to start in 64bit big endian systems. Change the load to a 32bit load word instruction to fix the bug. Fixes: f12401d7219f ("MIPS: smp-cps: Pull boot config retrieval out of mips_cps_boot_vpes") Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/15787/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
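The word placement can be illustrated with plain arithmetic; a small standalone program (values chosen to mirror the 0x0000000300000000 example above):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t vpe_mask   = 0x3;  /* the 32-bit atomic_t value */
        uint32_t next_field = 0x0;  /* whatever happens to follow it in the struct */

        /* A 64-bit load starting at &vpe_mask picks up both words.  On a
         * big-endian CPU the wanted word ends up in the high half ... */
        uint64_t be_load = ((uint64_t)vpe_mask << 32) | next_field;
        /* ... while on little-endian it ends up in the low half, which is
         * why the bug only bites big-endian systems. */
        uint64_t le_load = ((uint64_t)next_field << 32) | vpe_mask;

        printf("big-endian 64-bit load:    %#018llx\n", (unsigned long long)be_load);
        printf("little-endian 64-bit load: %#018llx\n", (unsigned long long)le_load);
        return 0;
    }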
2017-10-08  MIPS: IRQ Stack: Unwind IRQ stack onto task stack  (Matt Redfearn; 4 files, -20/+60)
[ Upstream commit db8466c581cca1a08b505f1319c3ecd246f16fa8 ] When the separate IRQ stack was introduced, stack unwinding only proceeded as far as the top of the IRQ stack, leading to kernel backtraces being less useful, lacking the trace of what was interrupted. Fix this by providing a means for the kernel to unwind the IRQ stack onto the interrupted task stack. The processor state is saved to the kernel task stack on interrupt. The IRQ_STACK_START macro reserves an unsigned long at the top of the IRQ stack where the interrupted task stack pointer can be saved. After the active stack is switched to the IRQ stack, save the interrupted tasks stack pointer to the reserved location. Fix the stack unwinding code to look for the frame being the top of the IRQ stack and if so get the next frame from the saved location. The existing test does not work with the separate stack since the ra is no longer pointed at ret_from_{irq,exception}. The test to stop unwinding the stack 32 bytes from the top of a stack must be modified to allow unwinding to continue up to the location of the saved task stack pointer when on the IRQ stack. The low / high marks of the stack are set depending on whether the sp is on an irq stack or not. Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Cc: Masanari Iida <standby24x7@gmail.com> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jason A. Donenfeld <jason@zx2c4.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/15788/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  kasan: do not sanitize kexec purgatory  (Mike Galbraith; 1 file, -0/+1)
[ Upstream commit 13a6798e4a03096b11bf402a063786a7be55d426 ] Fixes this: kexec: Undefined symbol: __asan_load8_noabort kexec-bzImage64: Loading purgatory failed Link: http://lkml.kernel.org/r/1489672155.4458.7.camel@gmx.de Signed-off-by: Mike Galbraith <efault@gmx.de> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  mips: ath79: clock:- Unmap region obtained by of_iomap  (Arvind Yadav; 1 file, -2/+5)
[ Upstream commit b3d91db3f71d5f70ea60d900425a3f96aeb3d065 ] Free memory mapping, if ath79_clocks_init_dt_ng is not successful. Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com> Fixes: 3bdf1071ba7d ("MIPS: ath79: update devicetree clock support for AR9132") Cc: antonynpavlov@gmail.com Cc: albeu@free.fr Cc: hackpascal@gmail.com Cc: sboyd@codeaurora.org Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14915/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  MIPS: Lantiq: Fix another request_mem_region() return code check  (Arnd Bergmann; 1 file, -2/+2)
[ Upstream commit 98ea51cb0c8ce009d9da1fd7b48f0ff1d7a9bbb0 ] Hauke already fixed a couple of them, but one instance remains that checks for a negative integer when it should check for a NULL pointer: arch/mips/lantiq/xway/sysctrl.c: In function 'ltq_soc_init': arch/mips/lantiq/xway/sysctrl.c:473:19: error: ordered comparison of pointer with integer zero [-Werror=extra] Fixes: 6e807852676a ("MIPS: Lantiq: Fix check for return value of request_mem_region()") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: John Crispin <john@phrozen.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/15043/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
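The underlying mistake is treating a pointer-returning function as if it returned a negative errno; request_mem_region() returns a struct resource pointer or NULL, so only the NULL test can ever catch a failure. A tiny userspace analogue of the warning (malloc() stands in for the allocator, and the broken check is kept only to show what the compiler flags):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *res = malloc(16);   /* stands in for any pointer-returning allocator */

        if (res < 0)              /* wrong: "ordered comparison of pointer with integer zero" */
            return 1;
        if (!res)                 /* right: failure is signalled by a NULL pointer */
            return 1;

        free(res);
        return 0;
    }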
2017-10-08  arm: dts: mt2701: Add subsystem clock controller device nodes  (James Liao; 1 file, -0/+36)
[ Upstream commit f235c7e7a75325f28a33559a71f25a0eca6112db ] Add MT2701 subsystem clock controllers, including mmsys, imgsys, vdecsys, hifsys, ethsys and bdpsys. Signed-off-by: James Liao <jamesjj.liao@mediatek.com> Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  ARM: 8635/1: nommu: allow enabling REMAP_VECTORS_TO_RAM  (Afzal Mohammed; 1 file, -2/+1)
[ Upstream commit 8a792e9afbce84a0fdaf213fe42bb97382487094 ] REMAP_VECTORS_TO_RAM depends on DRAM_BASE, but since DRAM_BASE is a hex, REMAP_VECTORS_TO_RAM could never get enabled. Also depending on DRAM_BASE is redundant as whenever REMAP_VECTORS_TO_RAM makes itself available to Kconfig, DRAM_BASE also is available as the Kconfig gets sourced on !MMU. Signed-off-by: Afzal Mohammed <afzal.mohd.ma@gmail.com> Reviewed-by: Vladimir Murzin <vladimir.murzin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  ARM: dts: am335x-chilisom: Wakeup from RTC-only state by power on event  (Marcin Niestroj; 1 file, -0/+8)
[ Upstream commit ca244a83ecc7f0a9242ee2116e622cb6d7ec2a90 ] On chiliSOM TPS65217 nWAKEUP pin is connected to AM335x internal RTC EXT_WAKEUP input. In RTC-only state TPS65217 is notifying about power on events (such as power buton presses) by setting nWAKEUP output low. After that it waits 5s for proper device boot. Currently it doesn't happen, as the processor doesn't listen for such events. Consequently TPS65217 changes state from SLEEP (RTC-only state) to OFF. Enable EXT_WAKEUP input of AM335x's RTC, so the processor can properly detect power on events and recover immediately from RTC-only states, without powering off RTC and losing time. Signed-off-by: Marcin Niestroj <m.niestroj@grinn-global.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  MIPS: ralink: Fix incorrect assignment on ralink_soc  (Colin Ian King; 1 file, -1/+1)
[ Upstream commit 08d90c81b714482dceb5323d14f6617bcf55ee61 ] ralink_soc should be assigned RT3883_SOC; replace the incorrect comparison with an assignment. Signed-off-by: Colin Ian King <colin.king@canonical.com> Fixes: 418d29c87061 ("MIPS: ralink: Unify SoC id handling") Cc: John Crispin <john@phrozen.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14903/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
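The bug class is a comparison used where an assignment was meant; a minimal standalone reproduction (the enum is invented for illustration):

    #include <stdio.h>

    enum ralink_soc_type { UNKNOWN_SOC, RT3883_SOC };
    static enum ralink_soc_type ralink_soc;

    int main(void)
    {
        ralink_soc == RT3883_SOC;   /* bug: a comparison statement, its result is discarded
                                     * (compilers flag this, e.g. gcc's -Wunused-value) */
        printf("after '==': %d\n", ralink_soc);

        ralink_soc = RT3883_SOC;    /* fix: an assignment actually records the SoC id */
        printf("after '=':  %d\n", ralink_soc);
        return 0;
    }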
2017-10-08  MIPS: ralink: Fix a typo in the pinmux setup.  (John Crispin; 1 file, -9/+9)
[ Upstream commit 58181a117d353427127a2e7afc7cf1ab44759828 ] There is a typo inside the pinmux setup code. The function is really called utif and not util. This was recently discovered when people were trying to make the UTIF interface work. Signed-off-by: John Crispin <john@phrozen.org> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14899/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  MIPS: Ensure bss section ends on a long-aligned address  (Paul Burton; 1 file, -1/+1)
[ Upstream commit 3f00f4d8f083bc61005d0a1ef592b149f5c88bbd ] When clearing the .bss section in kernel_entry we do so using LONG_S instructions, and branch whilst the current write address doesn't equal the end of the .bss section minus the size of a long integer. The .bss section always begins at a long-aligned address and we always increment the write pointer by the size of a long integer - we therefore rely upon the .bss section ending at a long-aligned address. If this is not the case then the long-aligned write address can never be equal to the non-long-aligned end address & we will continue to increment past the end of the .bss section, attempting to zero the rest of memory. Despite this requirement that .bss end at a long-aligned address we pass 0 as the end alignment requirement to the BSS_SECTION macro and thus don't guarantee any particular alignment, allowing us to hit the error condition described above. Fix this by instead passing 8 bytes as the end alignment argument to the BSS_SECTION macro, ensuring that the end of the .bss section is always at least long-aligned. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14526/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
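The termination condition can be mimicked in C: the loop below stops only on pointer equality, exactly like the LONG_S loop, so a non-long-aligned end address is stepped over and never matched (buffer size and offsets are invented for the demo):

    #include <stdio.h>

    static long bss_like[8];     /* 8 longs of zeroed storage standing in for .bss */

    int main(void)
    {
        char *base = (char *)bss_like;
        char *end_aligned    = base + 4 * sizeof(long);      /* long-aligned end */
        char *end_misaligned = base + 4 * sizeof(long) + 2;  /* 2 bytes past a long boundary */
        long *p;
        int stores;

        for (p = bss_like, stores = 0; (char *)p != end_aligned; p++, stores++)
            *p = 0;
        printf("long-aligned end: loop stopped after %d stores\n", stores);

        /* With a misaligned end the pointer steps straight over it and the
         * equality test never fires; bound the loop so the demo terminates
         * instead of clearing memory past the intended end. */
        for (p = bss_like, stores = 0; (char *)p != end_misaligned && stores < 8; p++, stores++)
            *p = 0;
        printf("misaligned end:   still not equal after %d stores\n", stores);
        return 0;
    }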
2017-10-08  ARM: dts: r8a7790: Use R-Car Gen 2 fallback binding for msiof nodes  (Simon Horman; 1 file, -4/+8)
[ Upstream commit 654450baf2afba86cf328e1849ccac61ec4630af ] Use recently added R-Car Gen 2 fallback binding for msiof nodes in DT for r8a7790 SoC. This has no run-time effect for the current driver as the initialisation sequence is the same for the SoC-specific binding for r8a7790 and the fallback binding for R-Car Gen 2. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-08  ARM: dts: exynos: Add CPU OPPs for Exynos4412 Prime  (Bartlomiej Zolnierkiewicz; 5 files, -5/+48)
[ Upstream commit 80b7a2e2498bcffb1a79980dfbeb7a1275577b28 ] Add CPU operating points for Exynos4412 Prime (it supports additional 1704MHz & 1600MHz OPPs and 1500MHz OPP is just a regular non-turbo OPP on this SoC). Also update relevant cooling maps to account for new OPPs. ODROID-X2/U2/U3 boards use Exynos4412 Prime SoC version so update their board files accordingly. Based on Hardkernel's kernel for ODROID-X2/U2/U3 boards. Cc: Doug Anderson <dianders@chromium.org> Cc: Andreas Faerber <afaerber@suse.de> Cc: Thomas Abraham <thomas.ab@samsung.com> Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Cc: Ben Gamari <ben@smart-cactus.org> Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@verizon.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  swiotlb-xen: implement xen_swiotlb_dma_mmap callback  (Stefano Stabellini; 1 file, -0/+1)
commit 7e91c7df29b5e196de3dc6f086c8937973bd0b88 upstream. This function creates userspace mapping for the DMA-coherent memory. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com> Signed-off-by: Andrii Anisov <andrii_anisov@epam.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  KVM: VMX: use cmpxchg64  (Paolo Bonzini; 1 file, -6/+6)
commit c0a1666bcb2a33e84187a15eabdcd54056be9a97 upstream. This fixes a compilation failure on 32-bit systems. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  KVM: VMX: remove WARN_ON_ONCE in kvm_vcpu_trigger_posted_interrupt  (Haozhong Zhang; 1 file, -12/+21)
commit 5753743fa5108b8f98bd61e40dc63f641b26c768 upstream. WARN_ON_ONCE(pi_test_sn(&vmx->pi_desc)) in kvm_vcpu_trigger_posted_interrupt() intends to detect the violation of invariant that VT-d PI notification event is not suppressed when vcpu is in the guest mode. Because the two checks for the target vcpu mode and the target suppress field cannot be performed atomically, the target vcpu mode may change in between. If that does happen, WARN_ON_ONCE() here may raise false alarms. As the previous patch fixed the real invariant breaker, remove this WARN_ON_ONCE() to avoid false alarms, and document the allowed cases instead. Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Reported-by: "Ramamurthy, Venkatesh" <venkatesh.ramamurthy@intel.com> Reported-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Fixes: 28b835d60fcc ("KVM: Update Posted-Interrupts Descriptor when vCPU is preempted") Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  KVM: VMX: do not change SN bit in vmx_update_pi_irte()  (Haozhong Zhang; 1 file, -5/+1)
commit dc91f2eb1a4021eb6705c15e474942f84ab9b211 upstream. In kvm_vcpu_trigger_posted_interrupt() and pi_pre_block(), KVM assumes that PI notification events should not be suppressed when the target vCPU is not blocked. vmx_update_pi_irte() sets the SN field before changing an interrupt from posting to remapping, but it does not check the vCPU mode. Therefore, the change of SN field may break above the assumption. Besides, I don't see reasons to suppress notification events here, so remove the changes of SN field to avoid race condition. Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Reported-by: "Ramamurthy, Venkatesh" <venkatesh.ramamurthy@intel.com> Reported-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Fixes: 28b835d60fcc ("KVM: Update Posted-Interrupts Descriptor when vCPU is preempted") Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  x86/fpu: Don't let userspace set bogus xcomp_bv  (Eric Biggers; 2 files, -2/+11)
commit 814fb7bb7db5433757d76f4c4502c96fc53b0b5e upstream. On x86, userspace can use the ptrace() or rt_sigreturn() system calls to set a task's extended state (xstate) or "FPU" registers. ptrace() can set them for another task using the PTRACE_SETREGSET request with NT_X86_XSTATE, while rt_sigreturn() can set them for the current task. In either case, registers can be set to any value, but the kernel assumes that the XSAVE area itself remains valid in the sense that the CPU can restore it. However, in the case where the kernel is using the uncompacted xstate format (which it does whenever the XSAVES instruction is unavailable), it was possible for userspace to set the xcomp_bv field in the xstate_header to an arbitrary value. However, all bits in that field are reserved in the uncompacted case, so when switching to a task with nonzero xcomp_bv, the XRSTOR instruction failed with a #GP fault. This caused the WARN_ON_FPU(err) in copy_kernel_to_xregs() to be hit. In addition, since the error is otherwise ignored, the FPU registers from the task previously executing on the CPU were leaked. Fix the bug by checking that the user-supplied value of xcomp_bv is 0 in the uncompacted case, and returning an error otherwise. The reason for validating xcomp_bv rather than simply overwriting it with 0 is that we want userspace to see an error if it (incorrectly) provides an XSAVE area in compacted format rather than in uncompacted format. Note that as before, in case of error we clear the task's FPU state. This is perhaps non-ideal, especially for PTRACE_SETREGSET; it might be better to return an error before changing anything. But it seems the "clear on error" behavior is fine for now, and it's a little tricky to do otherwise because it would mean we couldn't simply copy the full userspace state into kernel memory in one __copy_from_user(). This bug was found by syzkaller, which hit the above-mentioned WARN_ON_FPU(): WARNING: CPU: 1 PID: 0 at ./arch/x86/include/asm/fpu/internal.h:373 __switch_to+0x5b5/0x5d0 CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.13.0 #453 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 task: ffff9ba2bc8e42c0 task.stack: ffffa78cc036c000 RIP: 0010:__switch_to+0x5b5/0x5d0 RSP: 0000:ffffa78cc08bbb88 EFLAGS: 00010082 RAX: 00000000fffffffe RBX: ffff9ba2b8bf2180 RCX: 00000000c0000100 RDX: 00000000ffffffff RSI: 000000005cb10700 RDI: ffff9ba2b8bf36c0 RBP: ffffa78cc08bbbd0 R08: 00000000929fdf46 R09: 0000000000000001 R10: 0000000000000000 R11: 0000000000000000 R12: ffff9ba2bc8e42c0 R13: 0000000000000000 R14: ffff9ba2b8bf3680 R15: ffff9ba2bf5d7b40 FS: 00007f7e5cb10700(0000) GS:ffff9ba2bf400000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 00000000004005cc CR3: 0000000079fd5000 CR4: 00000000001406e0 Call Trace: Code: 84 00 00 00 00 00 e9 11 fd ff ff 0f ff 66 0f 1f 84 00 00 00 00 00 e9 e7 fa ff ff 0f ff 66 0f 1f 84 00 00 00 00 00 e9 c2 fa ff ff <0f> ff 66 0f 1f 84 00 00 00 00 00 e9 d4 fc ff ff 66 66 2e 0f 1f Here is a C reproducer. The expected behavior is that the program spin forever with no output. However, on a buggy kernel running on a processor with the "xsave" feature but without the "xsaves" feature (e.g. Sandy Bridge through Broadwell for Intel), within a second or two the program reports that the xmm registers were corrupted, i.e. were not restored correctly. With CONFIG_X86_DEBUG_FPU=y it also hits the above kernel warning. 
    #define _GNU_SOURCE
    #include <stdbool.h>
    #include <inttypes.h>
    #include <linux/elf.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/uio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int pid = fork();
        uint64_t xstate[512];
        struct iovec iov = { .iov_base = xstate, .iov_len = sizeof(xstate) };

        if (pid == 0) {
            bool tracee = true;

            for (int i = 0; i < sysconf(_SC_NPROCESSORS_ONLN) && tracee; i++)
                tracee = (fork() != 0);

            uint32_t xmm0[4] = { [0 ... 3] = tracee ? 0x00000000 : 0xDEADBEEF };

            asm volatile("   movdqu %0, %%xmm0\n"
                         "   mov %0, %%rbx\n"
                         "1: movdqu %%xmm0, %0\n"
                         "   mov %0, %%rax\n"
                         "   cmp %%rax, %%rbx\n"
                         "   je 1b\n"
                         : "+m" (xmm0) : : "rax", "rbx", "xmm0");

            printf("BUG: xmm registers corrupted! tracee=%d, xmm0=%08X%08X%08X%08X\n",
                   tracee, xmm0[0], xmm0[1], xmm0[2], xmm0[3]);
        } else {
            usleep(100000);
            ptrace(PTRACE_ATTACH, pid, 0, 0);
            wait(NULL);
            ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov);
            xstate[65] = -1;
            ptrace(PTRACE_SETREGSET, pid, NT_X86_XSTATE, &iov);
            ptrace(PTRACE_CONT, pid, 0, 0);
            wait(NULL);
        }
        return 1;
    }

Note: the program only tests for the bug using the ptrace() system call. The bug can also be reproduced using the rt_sigreturn() system call, but only when called from a 32-bit program, since for 64-bit programs the kernel restores the FPU state from the signal frame by doing XRSTOR directly from userspace memory (with proper error checking). Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Rik van Riel <riel@redhat.com> Acked-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Eric Biggers <ebiggers3@gmail.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Kevin Hao <haokexin@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Halcrow <mhalcrow@google.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Cc: Yu-cheng Yu <yu-cheng.yu@intel.com> Cc: kernel-hardening@lists.openwall.com Fixes: 0b29643a5843 ("x86/xsaves: Change compacted format xsave area header") Link: http://lkml.kernel.org/r/20170922174156.16780-2-ebiggers3@gmail.com Link: http://lkml.kernel.org/r/20170923130016.21448-25-mingo@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  x86/mm: Fix fault error path using unsafe vma pointer  (Laurent Dufour; 1 file, -23/+24)
commit a3c4fb7c9c2ebfd50b8c60f6c069932bb319bc37 upstream. commit 7b2d0dbac489 ("x86/mm/pkeys: Pass VMA down in to fault signal generation code") passes down a vma pointer to the error path, but that is done once the mmap_sem is released when calling mm_fault_error() from __do_page_fault(). This is dangerous, as the vma structure is no longer safe to use once the mmap_sem has been released. As only the protection key value is required in the error processing, we could just pass down this value. Fix it by passing a pointer to a protection key value down to the fault signal generation code. Using a pointer keeps the check that generates a warning message in fill_sig_info_pkey() when the vma was not known. If the pointer is valid, the protection value can be accessed by dereferencing the pointer. [ tglx: Made *pkey u32 as that's the type which is passed in siginfo ] Fixes: 7b2d0dbac489 ("x86/mm/pkeys: Pass VMA down in to fault signal generation code") Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mm@kvack.org Cc: Dave Hansen <dave.hansen@linux.intel.com> Link: http://lkml.kernel.org/r/1504513935-12742-1-git-send-email-ldufour@linux.vnet.ibm.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  kvm: nVMX: Don't allow L2 to access the hardware CR8  (Jim Mattson; 1 file, -0/+5)
commit 51aa68e7d57e3217192d88ce90fd5b8ef29ec94f upstream. If L1 does not specify the "use TPR shadow" VM-execution control in vmcs12, then L0 must specify the "CR8-load exiting" and "CR8-store exiting" VM-execution controls in vmcs02. Failure to do so will give the L2 VM unrestricted read/write access to the hardware CR8. This fixes CVE-2017-12154. Signed-off-by: Jim Mattson <jmattson@google.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-10-05  KVM: VMX: Do not BUG() on out-of-bounds guest IRQ  (Jan H. Schönherr; 1 file, -2/+7)
commit 3a8b0677fc6180a467e26cc32ce6b0c09a32f9bb upstream. The value of the guest_irq argument to vmx_update_pi_irte() is ultimately coming from a KVM_IRQFD API call. Do not BUG() in vmx_update_pi_irte() if the value is out-of bounds. (Especially, since KVM as a whole seems to hang after that.) Instead, print a message only once if we find that we don't have a route for a certain IRQ (which can be out-of-bounds or within the array). This fixes CVE-2017-1000252. Fixes: efc644048ecde54 ("KVM: x86: Update IRTE for posted-interrupts") Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
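The shape of the fix, range-checking an externally supplied index and failing gracefully instead of asserting, in a standalone sketch (the array and names are illustrative, not KVM's actual structures):

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static int irq_routes[16];

    static int lookup_route(unsigned int guest_irq)
    {
        if (guest_irq >= ARRAY_SIZE(irq_routes)) {
            /* warn (once, in real code) and fail; never assert on user-controlled input */
            fprintf(stderr, "no route for guest_irq %u\n", guest_irq);
            return -1;
        }
        return irq_routes[guest_irq];
    }

    int main(void)
    {
        printf("%d\n", lookup_route(3));      /* in range: returns the routed value */
        printf("%d\n", lookup_route(1000));   /* out of range: handled, no crash */
        return 0;
    }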