Age | Commit message | Author | Files | Lines
2011-12-27 | KVM: drop bsp_vcpu pointer from kvm struct | Gleb Natapov | 4 | -14/+17
Drop bsp_vcpu pointer from kvm struct since its only use is incorrect anyway. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86: Consolidate PIT legacy test | Jan Kiszka | 1 | -7/+3
Move the test for KVM_PIT_FLAGS_HPET_LEGACY into create_pit_timer instead of replicating it at the call site. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86: Do not rely on implicit inclusions | Jan Kiszka | 1 | -0/+2
Works so far by chance, but it is not guaranteed to stay like this. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: Make KVM_INTEL depend on CPU_SUP_INTEL | Avi Kivity | 1 | -0/+2
PMU virtualization needs to talk to Intel-specific bits of perf; these are only available when CPU_SUP_INTEL=y. Fixes:
  arch/x86/built-in.o: In function `atomic_switch_perf_msrs':
  vmx.c:(.text+0x6b1d4): undefined reference to `perf_guest_get_msrs'
Reported-by: Ingo Molnar <mingo@elte.hu> Reported-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | Merge remote-tracking branch 'tip/perf/core' into kvm-updates/3.3 | Avi Kivity | 98 | -2389/+4196
* tip/perf/core: (66 commits)
  perf, x86: Expose perf capability to other modules
  perf, x86: Implement arch event mask as quirk
  x86, perf: Disable non available architectural events
  jump_label: Provide jump_label_key initializers
  jump_label, x86: Fix section mismatch
  perf, core: Rate limit perf_sched_events jump_label patching
  perf: Fix enable_on_exec for sibling events
  perf: Remove superfluous arguments
  perf, x86: Prefer fixed-purpose counters when scheduling
  perf, x86: Fix event scheduler for constraints with overlapping counters
  perf, x86: Implement event scheduler helper functions
  perf: Avoid a useless pmu_disable() in the perf-tick
  x86/tools: Add decoded instruction dump mode
  x86: Update instruction decoder to support new AVX formats
  x86/tools: Fix insn_sanity message outputs
  x86/tools: Fix instruction decoder message output
  x86: Fix instruction decoder to handle grouped AVX instructions
  x86/tools: Fix Makefile to build all test tools
  perf test: Soft errors shouldn't stop the "Validate PERF_RECORD_" test
  perf test: Validate PERF_RECORD_ events and perf_sample fields
  ...
Signed-off-by: Avi Kivity <avi@redhat.com>
* commit 'b3d9468a8bd218a695e3a0ff112cd4efd27b670a': the same 66 commits as listed above.
2011-12-27 | KVM: Use memdup_user instead of kmalloc/copy_from_user | Sasha Levin | 2 | -64/+47
Switch to using memdup_user when possible. This makes the code smaller and more compact, and prevents errors. Signed-off-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com>
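As an illustration of the conversion (a hedged sketch with hypothetical function names, not the actual KVM hunks), memdup_user() collapses the kmalloc()/copy_from_user()/error-handling sequence into a single call that returns an ERR_PTR() on failure:

    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/uaccess.h>
    #include <linux/err.h>

    /* Before: open-coded allocate-then-copy (hypothetical example). */
    static void *copy_entries_old(const void __user *uptr, size_t len)
    {
            void *buf = kmalloc(len, GFP_KERNEL);

            if (!buf)
                    return ERR_PTR(-ENOMEM);
            if (copy_from_user(buf, uptr, len)) {
                    kfree(buf);
                    return ERR_PTR(-EFAULT);
            }
            return buf;
    }

    /* After: memdup_user() does the allocation, the copy and the error handling. */
    static void *copy_entries_new(const void __user *uptr, size_t len)
    {
            return memdup_user(uptr, len);  /* returns an ERR_PTR() on failure */
    }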
2011-12-27 | KVM: Use kmemdup() instead of kmalloc/memcpy | Sasha Levin | 2 | -6/+5
Switch to kmemdup() in two places to shorten the code and avoid possible bugs. Signed-off-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: Document KVM_NMI | Avi Kivity | 1 | -0/+25
Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: x86 emulator: Remove set-but-unused cr4 from check_cr_write | Jan Kiszka | 1 | -3/+0
This was probably copy&pasted from the cr0 case, but it's unneeded here. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: Drop unused return value of kvm_mmu_remove_some_alloc_mmu_pages | Jan Kiszka | 1 | -6/+6
freed_pages is never evaluated, so remove it, along with the return value that kvm_mmu_remove_some_alloc_mmu_pages so far delivered to its only caller. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: use this_cpu_xxx to replace percpu_xxx funcs | Alex Shi | 1 | -7/+7
The percpu_xxx funcs duplicate the this_cpu_xxx funcs, so replace them as a further code cleanup. In addition, where the caller is already preempt-safe, the __this_cpu_xxx funcs perform a bit better, since __this_cpu_xxx skips the redundant preempt_disable(). Signed-off-by: Alex Shi <alex.shi@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
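A small illustrative sketch (hypothetical per-cpu counter, not from the patch) of the two flavours: this_cpu_read() is safe anywhere because it handles preemption itself, while __this_cpu_inc() skips that overhead and may only be used where preemption is already disabled:

    #include <linux/percpu.h>
    #include <linux/preempt.h>

    static DEFINE_PER_CPU(unsigned long, exit_count);   /* hypothetical counter */

    static unsigned long read_exit_count(void)
    {
            /* Safe in any context: the accessor deals with preemption itself. */
            return this_cpu_read(exit_count);
    }

    static void bump_exit_count(void)
    {
            /*
             * Caller is assumed to already run with preemption disabled
             * (e.g. in interrupt context), so the cheaper __this_cpu_*
             * form avoids a redundant preempt_disable()/preempt_enable().
             */
            __this_cpu_inc(exit_count);
    }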
2011-12-27 | KVM: MMU: audit: inline audit function | Xiao Guangrong | 2 | -29/+28
Inline the audit function and do a little cleanup. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: remove oos_shadow parameter | Xiao Guangrong | 2 | -8/+0
The unsync code should be stable now; it is time to remove this parameter and clean up the code a little. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: move the relevant mmu code to mmu.c | Xiao Guangrong | 3 | -11/+12
Move the mmu code in kvm_arch_vcpu_init() to kvm_mmu_create() Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: x86: remove the dead code of KVM_EXIT_HYPERCALL | Xiao Guangrong | 1 | -4/+0
KVM_EXIT_HYPERCALL is not used anymore, so remove the code Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: audit: replace mmu audit tracepoint with jump-label | Xiao Guangrong | 4 | -41/+26
The tracepoint is only used to audit mmu code; it should not be exposed to userspace, so replace it with a jump label. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | jump-label: export jump_label_inc/jump_label_dec | Xiao Guangrong | 1 | -0/+2
Export these two symbols; they will be used by the KVM mmu audit code. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: Refactor and simplify kvm_dev_ioctl_get_supported_cpuid | Sasha Levin | 1 | -50/+63
This patch cleans up and simplifies kvm_dev_ioctl_get_supported_cpuid by using a table instead of duplicating code, as Avi suggested. This patch also fixes a bug where kvm_dev_ioctl_get_supported_cpuid would return -E2BIG when the number of entries passed was exactly right. Signed-off-by: Sasha Levin <levinsasha928@gmail.com> Signed-off-by: Avi Kivity <avi@redhat.com>
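The boundary bug being fixed, shown as a hedged generic sketch (made-up helper name, not the actual kvm_dev_ioctl_get_supported_cpuid() code): -E2BIG should only be returned when the caller's buffer is strictly smaller than the number of entries produced, not when it matches exactly:

    #include <linux/errno.h>

    /* Hypothetical sketch of the size check, not the real KVM function. */
    static int check_cpuid_buffer(unsigned int produced, unsigned int user_nent)
    {
            /* Buggy form rejected a buffer of exactly the right size:
             *      if (produced >= user_nent)
             *              return -E2BIG;
             */
            if (produced > user_nent)       /* only a strictly smaller buffer is an error */
                    return -E2BIG;
            return 0;
    }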
2011-12-27 | KVM: expose latest Intel cpu new features (BMI1/BMI2/FMA/AVX2) to guest | Liu, Jinsong | 2 | -2/+5
Intel's latest CPUs add 6 new features; refer to http://software.intel.com/file/36945. The new feature CPUID bits are:
1. FMA    CPUID.EAX=01H:ECX.FMA[bit 12]
2. MOVBE  CPUID.EAX=01H:ECX.MOVBE[bit 22]
3. BMI1   CPUID.EAX=07H,ECX=0H:EBX.BMI1[bit 3]
4. AVX2   CPUID.EAX=07H,ECX=0H:EBX.AVX2[bit 5]
5. BMI2   CPUID.EAX=07H,ECX=0H:EBX.BMI2[bit 8]
6. LZCNT  CPUID.EAX=80000001H:ECX.LZCNT[bit 5]
This patch exposes these features to the guest. Among them, FMA/MOVBE/LZCNT have already been defined, and MOVBE/LZCNT have already been exposed. This patch defines BMI1/AVX2/BMI2 and exposes FMA/BMI1/AVX2/BMI2 to the guest. Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
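The bit positions above can be checked from userspace with the CPUID instruction; a small host-side sketch using GCC's cpuid.h (illustration only, not KVM code):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            /* Leaf 1: FMA is ECX bit 12, MOVBE is ECX bit 22. */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                    printf("FMA  : %s\n", (ecx & (1u << 12)) ? "yes" : "no");
                    printf("MOVBE: %s\n", (ecx & (1u << 22)) ? "yes" : "no");
            }

            /* Leaf 7, subleaf 0: BMI1 is EBX bit 3, AVX2 is bit 5, BMI2 is bit 8. */
            if (__get_cpuid_max(0, NULL) >= 7) {
                    __cpuid_count(7, 0, eax, ebx, ecx, edx);
                    printf("BMI1 : %s\n", (ebx & (1u << 3)) ? "yes" : "no");
                    printf("AVX2 : %s\n", (ebx & (1u << 5)) ? "yes" : "no");
                    printf("BMI2 : %s\n", (ebx & (1u << 8)) ? "yes" : "no");
            }
            return 0;
    }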
2011-12-27 | KVM: Move cpuid code to new file | Avi Kivity | 7 | -635/+679
The cpuid code has grown; put it into a separate file. Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for INS/OUTS from/to port in DX | Takuya Yoshikawa | 1 | -12/+2
INSB       : 6C
INSW/INSD  : 6D
OUTSB      : 6E
OUTSW/OUTSD: 6F
The I/O port address is read from the DX register when we decode the operand, because the SrcDX/DstDX flag is set. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: Allow aligned byte and word writes to IOAPIC registers. | Julian Stecklina | 1 | -3/+12
This fixes byte accesses to IOAPIC_REG_SELECT as mandated by at least the ICH10 and Intel Series 5 chipset specs. It also makes ioapic_mmio_write consistent with ioapic_mmio_read, which also allows byte and word accesses. Signed-off-by: Julian Stecklina <js@alien8.de> Signed-off-by: Avi Kivity <avi@redhat.com>
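A simplified, hypothetical sketch of what width-tolerant handling of an index/select register can look like (not the actual ioapic_mmio_write() code); only the low byte matters for the select register, so 1-, 2- and 4-byte writes can all be honoured:

    #include <linux/types.h>
    #include <linux/errno.h>

    /* Sketch of an MMIO write handler for a select/window register pair. */
    static int demo_ioapic_mmio_write(u32 *reg_select, unsigned long offset,
                                      int len, const void *val)
    {
            u32 data;

            switch (len) {
            case 1:
                    data = *(const u8 *)val;
                    break;
            case 2:
                    data = *(const u16 *)val;
                    break;
            case 4:
                    data = *(const u32 *)val;
                    break;
            default:
                    return -EOPNOTSUPP;     /* reject other access sizes */
            }

            if (offset == 0x00)             /* IOAPIC_REG_SELECT lives at offset 0 */
                    *reg_select = data & 0xff;
            /* ... window register handling elided ... */
            return 0;
    }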
2011-12-27 | KVM: IA64: fix struct redefinition | Xiao Guangrong | 1 | -2/+2
There is the same struct definition in ia64 and kvm common code:
  arch/ia64/kvm//kvm-ia64.c: At top level:
  arch/ia64/kvm//kvm-ia64.c:777:8: error: redefinition of ‘struct kvm_io_range’
  include/linux/kvm_host.h:62:8: note: originally defined here
So, rename kvm_io_range to kvm_ia64_io_range in ia64 code. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: introduce a table to map slot id to index in memslots array | Xiao Guangrong | 2 | -7/+13
Getting the dirty log is a frequent operation when framebuffer-based displays are used (for example, X window), so we introduce a mapping table to speed up id_to_memslot(). Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
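A minimal sketch of the idea with hypothetical names (not the actual kvm_memslots layout): keep an id-to-index table next to the slot array so a lookup by slot id stays O(1) even after the array has been re-ordered:

    #define DEMO_MEM_SLOTS_NUM 32           /* hypothetical slot count */

    struct demo_memslot {
            int id;
            unsigned long base_gfn;
            unsigned long npages;
    };

    struct demo_memslots {
            struct demo_memslot slots[DEMO_MEM_SLOTS_NUM];
            /* slot id -> index into slots[], refreshed whenever slots[] is re-sorted */
            int id_to_index[DEMO_MEM_SLOTS_NUM];
    };

    static struct demo_memslot *demo_id_to_memslot(struct demo_memslots *ms, int id)
    {
            return &ms->slots[ms->id_to_index[id]];
    }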
2011-12-27 | KVM: sort memslots by size and use linear search | Xiao Guangrong | 2 | -25/+72
Sort memslots by size and use a linear search to find them, so that the larger memslots are found more quickly. The idea is from Avi. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
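Continuing the hypothetical demo_memslots sketch above, the lookup this enables is a plain linear scan that stops at the first empty slot; because the array is kept sorted by size, the large, frequently hit slots are found within the first few iterations:

    /* Linear search over slots[] kept sorted by npages, largest first. */
    static struct demo_memslot *demo_gfn_to_memslot(struct demo_memslots *ms,
                                                    unsigned long gfn)
    {
            int i;

            for (i = 0; i < DEMO_MEM_SLOTS_NUM; i++) {
                    struct demo_memslot *slot = &ms->slots[i];

                    if (!slot->npages)      /* sorted: first empty slot ends the list */
                            break;
                    if (gfn >= slot->base_gfn && gfn < slot->base_gfn + slot->npages)
                            return slot;
            }
            return NULL;
    }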
2011-12-27 | KVM: introduce id_to_memslot function | Xiao Guangrong | 6 | -17/+30
Introduce id_to_memslot to get a memslot by its slot id. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: introduce kvm_for_each_memslot macro | Xiao Guangrong | 5 | -26/+27
Introduce kvm_for_each_memslot to walk all valid memslots. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
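A hedged sketch of what such an iteration helper can look like, reusing the hypothetical demo_memslots layout above (not the real kvm_host.h definition):

    /*
     * Walk every in-use slot; relies on the array being packed and sorted so
     * that the first slot with npages == 0 terminates the walk.
     */
    #define demo_for_each_memslot(slot, ms)                                     \
            for ((slot) = &(ms)->slots[0];                                      \
                 (slot) < &(ms)->slots[DEMO_MEM_SLOTS_NUM] && (slot)->npages;   \
                 (slot)++)

Callers can then write demo_for_each_memslot(slot, ms) { ... } instead of open-coding the index loop and the validity check.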
2011-12-27 | KVM: introduce update_memslots function | Xiao Guangrong | 3 | -8/+17
Introduce update_memslots to update a slot, which will then be installed into kvm->memslots. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: introduce KVM_MEM_SLOTS_NUM macro | Xiao Guangrong | 4 | -5/+10
Introduce the KVM_MEM_SLOTS_NUM macro to replace KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS. Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for BSF/BSR | Takuya Yoshikawa | 1 | -25/+35
BSF: 0F BC BSR: 0F BD Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for CMPXCHG | Takuya Yoshikawa | 1 | -18/+19
CMPXCHG: 0F B0, 0F B1 Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for WRMSR/RDMSR | Takuya Yoshikawa | 1 | -26/+26
WRMSR: 0F 30 RDMSR: 0F 32 Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for MOV to cr/dr | Takuya Yoshikawa | 1 | -22/+30
MOV: 0F 22 (move to control registers) MOV: 0F 23 (move to debug registers) Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for CALL | Takuya Yoshikawa | 1 | -8/+10
CALL: E8 Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for BT family | Takuya Yoshikawa | 1 | -39/+38
BT : 0F A3
BTS: 0F AB
BTR: 0F B3
BTC: 0F BB
Group 8: 0F BA
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: x86 emulator: Use opcode::execute for IN/OUT | Takuya Yoshikawa | 1 | -26/+28
IN : E4, E5, EC, ED
OUT: E6, E7, EE, EF
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: VMX: remove unneeded vmx_load_host_state() calls. | Gleb Natapov | 1 | -5/+0
vmx_load_host_state() does not handle msr switching (except MSR_KERNEL_GS_BASE) since commit 26bb0981b3f. Remove calls to it where they no longer make sense. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: Optimize dirty logging by rmap_write_protect() | Takuya Yoshikawa | 3 | -11/+63
Currently, write-protecting a slot needs to walk all the shadow pages and check which ones have a pte mapping a page in that slot. The walk is overly heavy when the slot has few dirty pages, and checking the shadow pages causes unwanted cache pollution. To mitigate this problem, when the number of dirty pages is less than the number of shadow pages, we use rmap_write_protect() and check only the sptes that can be reached from the gfns marked in the dirty bitmap. This criterion is reasonable in its meaning and worked well in our tests: write protection became a few times faster than before when the ratio of dirty pages was low, and was not worse even when the ratio was near the criterion. Note that the locking for this write protection becomes fine grained; the reason why this is safe is described in the comments. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
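The selection heuristic described above, as a hedged sketch with made-up types and helpers (not the actual mmu.c/x86.c code): the rmap-based path is taken only while the dirty page count stays below the number of shadow pages:

    #include <linux/bitops.h>

    struct demo_slot { unsigned long base_gfn, npages; };
    struct demo_vm   { unsigned long nr_shadow_pages; };

    /* Hypothetical helpers standing in for the real rmap / shadow-page walkers. */
    void demo_rmap_write_protect(struct demo_vm *vm, unsigned long gfn);
    void demo_write_protect_all(struct demo_vm *vm, struct demo_slot *slot);

    static void demo_write_protect_slot(struct demo_vm *vm, struct demo_slot *slot,
                                        unsigned long *dirty_bitmap,
                                        unsigned long nr_dirty_pages)
    {
            if (nr_dirty_pages < vm->nr_shadow_pages) {
                    unsigned long gfn_offset;

                    /* Few dirty pages: write-protect only the sptes reachable
                     * from the gfns marked in the dirty bitmap. */
                    for_each_set_bit(gfn_offset, dirty_bitmap, slot->npages)
                            demo_rmap_write_protect(vm, slot->base_gfn + gfn_offset);
            } else {
                    /* Many dirty pages: the full shadow-page walk is cheaper. */
                    demo_write_protect_all(vm, slot);
            }
    }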
2011-12-27 | KVM: Count the number of dirty pages for dirty logging | Takuya Yoshikawa | 3 | -7/+7
Needed for the next patch which uses this number to decide how to write protect a slot. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: Split gfn_to_rmap() into two functions | Takuya Yoshikawa | 1 | -9/+17
rmap_write_protect() calls gfn_to_rmap() for each level with gfn fixed. This results in calling gfn_to_memslot() repeatedly with that gfn. This patch introduces __gfn_to_rmap() which takes the slot as an argument to avoid this. This is also needed for the following dirty logging optimization. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: Clean up BUG_ON() conditions in rmap_write_protect() | Takuya Yoshikawa | 1 | -3/+1
Remove redundant checks and use is_large_pte() macro. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: Use kmemdup rather than duplicating its implementation | Thomas Meyer | 1 | -6/+5
Use kmemdup rather than duplicating its implementation. The semantic patch that makes this change is available in scripts/coccinelle/api/memdup.cocci. More information about semantic patching is available at http://coccinelle.lip6.fr/ Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: MMU: remove KVM host pv mmu support | Chris Wright | 4 | -169/+0
The host side pv mmu support has been marked for feature removal in January 2011. It's not in use, is slower than shadow or hardware assisted paging, and is a maintenance burden. It's November 2011, time to remove it. Signed-off-by: Chris Wright <chrisw@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM guest: remove KVM guest pv mmu support | Chris Wright | 1 | -181/+0
This has not been used for some years now. It's time to remove it. Signed-off-by: Chris Wright <chrisw@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: make checks stricter in coalesced_mmio_in_range() | Dan Carpenter | 1 | -3/+9
My testing version of Smatch complains that addr and len come from the user and they can wrap. The path is:
  kvm_vm_ioctl()
    -> kvm_vm_ioctl_unregister_coalesced_mmio()
      -> coalesced_mmio_in_range()
I don't know what the implications are of wrapping here, but we may as well fix it, if only to silence the warning. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
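The general shape of such a fix, as a hedged sketch (generic range check, not the actual coalesced_mmio.c code): since addr + len can wrap, compare the length against the remaining space instead of against the computed end address:

    #include <linux/types.h>

    /* Does [addr, addr + len) fall inside [zone_addr, zone_addr + zone_size)? */
    static int demo_in_range(u64 zone_addr, u64 zone_size, u64 addr, u64 len)
    {
            /* Naive form, vulnerable to wrap-around of addr + len:
             *      return addr >= zone_addr && addr + len <= zone_addr + zone_size;
             */
            if (len > zone_size)
                    return 0;
            if (addr < zone_addr)
                    return 0;
            if (addr - zone_addr > zone_size - len)
                    return 0;       /* would run past the end of the zone */
            return 1;
    }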
2011-12-27 | KVM: x86: Simplify kvm timer handler | Jan Kiszka | 1 | -22/+4
The vcpu reference of a kvm_timer can't become NULL while the timer is valid, so drop this redundant test. This also makes it pointless to carry a separate __kvm_timer_fn, fold it into kvm_timer_fn. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2011-12-27 | KVM: Fix include dependency for mmu_notifier | Eric B Munson | 1 | -0/+1
The kvm_host struct can include an mmu_notifier struct but mmu_notifier.h is not included directly. Signed-off-by: Eric B Munson <emunson@mgebm.net> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: improve write flooding detection | Xiao Guangrong | 3 | -48/+32
Detecting write-flooding does not work well today: when we handle a page write, if the last speculative spte has not been accessed, we treat the page as write-flooded. However, we create speculative sptes on many paths, such as pte prefetch and page sync, so the last speculative spte may not point to the written page at all, and the written page may still be accessed via other sptes. Depending on the Accessed bit of the last speculative spte is therefore not enough. Instead of detecting whether the page is accessed, detect whether the spte is accessed after it is written: if the spte is not accessed but is written frequently, we treat the page as not being a page table, or as no longer in use. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
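A hedged sketch of a per-shadow-page counter scheme along the lines described above (hypothetical field name and threshold, not the actual kvm_mmu_page code): count writes that land on the page, reset the counter whenever the page is actually used for translation, and zap the page once the counter crosses the threshold:

    #define DEMO_FLOOD_THRESHOLD 3          /* hypothetical threshold */

    struct demo_shadow_page {
            unsigned int write_flooding_count;
            /* ... */
    };

    /* Called when the guest writes to the gfn backing this shadow page;
     * returns nonzero when the page looks write-flooded and should be zapped. */
    static int demo_detect_write_flooding(struct demo_shadow_page *sp)
    {
            return ++sp->write_flooding_count >= DEMO_FLOOD_THRESHOLD;
    }

    /* Called when the shadow page is used for a translation (fault path):
     * a page that is still being used as a page table is not flooded. */
    static void demo_clear_write_flooding(struct demo_shadow_page *sp)
    {
            sp->write_flooding_count = 0;
    }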
2011-12-27 | KVM: MMU: fix detecting misaligned accesses | Xiao Guangrong | 1 | -0/+8
Sometimes we only modify the last byte of a pte to update a status bit; for example, clear_bit is used to clear the r/w bit in the Linux kernel, and the 'andb' instruction is used in this function. In this case, kvm_mmu_pte_write treats it as a misaligned access and the shadow page table is zapped. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>
2011-12-27 | KVM: MMU: split kvm_mmu_pte_write function | Xiao Guangrong | 1 | -75/+119
kvm_mmu_pte_write is too long; split it for better readability. Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com> Signed-off-by: Avi Kivity <avi@redhat.com>