Age | Commit message | Author | Files | Lines |
|
After commit 626cf8f (icount: set can_do_io outside TB execution,
2014-12-08), can_do_io is set to 1 if not executing code. It is
no longer necessary to make this assumption in cpu_can_do_io.
It is also possible to remove the use_icount test, simply by
never setting cpu->can_do_io to 0 unless use_icount is true.
With these changes cpu_can_do_io boils down to a read of
cpu->can_do_io.
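With that, the helper is essentially just a field read (a sketch, not
necessarily the literal code; the field is historically an int):

    static inline bool cpu_can_do_io(CPUState *cpu)
    {
        return cpu->can_do_io;  /* 1 whenever we are not executing a TB */
    }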
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Instead of invalidating an original TB in cpu_exec_nocache()
prematurely, just save a link to it in the temporarily generated TB. If
cpu_io_recompile() is raised subsequently from the temporary TB,
invalidate the original one as well. That allows reusing the original TB
each time cpu_exec_nocache() is called to handle an expired instruction
counter in icount mode.
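A sketch of the approach; the field and flag names here follow the
description and common QEMU conventions rather than the patch verbatim:

    /* cpu_exec_nocache() (sketch): keep a back-link instead of
       invalidating the original TB right away. */
    tb = tb_gen_code(cpu, pc, cs_base, flags, max_cycles | CF_NOCACHE);
    tb->orig_tb = orig_tb;

    /* cpu_io_recompile() (sketch): if a temporary TB triggered the
       recompile, invalidate the original it was derived from too. */
    if ((tb->cflags & CF_NOCACHE) && tb->orig_tb) {
        tb_phys_invalidate(tb->orig_tb, -1);
    }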
Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Message-Id: <1435656909-29116-1-git-send-email-serge.fdrv@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Remove unneeded usages of ENV_GET_CPU() by converting the APIs to use
CPUState pointers, retrieving env_ptr only where it is actually needed.
Scripted conversion for target-* change:
for I in target-*/cpu.h; do
sed -i \
's/\(^int cpu_[^_]*_exec(\)[^ ][^ ]* \*s);$/\1CPUState *cpu);/' \
$I;
done
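For example, on the ARM target the loop rewrites a prototype such as

    int cpu_arm_exec(CPUARMState *s);

into

    int cpu_arm_exec(CPUState *cpu);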
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
This is one of very few things in exec-all with a genuine CPU
architecture dependency. Move these hashing helpers to a new
header to trim exec-all.h down to a near architecture-agnostic
header.
The defs are only used by cpu-exec and translate-all, which are both
arch objs, so the new tb-hash.h has no core code usage.
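For illustration, a sketch of one of the moved helpers as it looked at
the time; the constants and the tb_page_addr_t type come from
target-specific headers, which is exactly the dependency being isolated:

    /* tb-hash.h (sketch): hash a physical PC into the TB cache. */
    static inline unsigned int tb_phys_hash_func(tb_page_addr_t pc)
    {
        return (pc >> 2) & (CODE_GEN_PHYS_HASH_SIZE - 1);
    }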
Reviewed-by: Richard Henderson <rth@redhat.com>
Signed-off-by: Peter Crosthwaite <crosthwaite.peter@gmail.com>
Message-Id: <9d048b96f7cfa64a4d9c0b88e0dd2877fac51d41.1433052532.git.crosthwaite.peter@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
- vhost-scsi: add bootindex property
- RCU: fix MemoryRegion lifetime issues in PCI; document the rules;
convert of AddressSpaceDispatch and RAMList
- KVM: add kvm_exit reasons for aarch64
# gpg: Signature made Mon Feb 16 16:32:32 2015 GMT using RSA key ID 78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (21 commits)
Convert ram_list to RCU
exec: convert ram_list to QLIST
cosmetic changes preparing for the following patches
exec: protect mru_block with RCU
rcu: add g_free_rcu
rcu: introduce RCU-enabled QLIST
exec: RCUify AddressSpaceDispatch
exec: make iotlb RCU-friendly
exec: introduce cpu_reload_memory_map
docs: clarify memory region lifecycle
pci: split shpc_cleanup and shpc_free
pcie: remove mmconfig memory leak and wrap mmconfig update with transaction
memory: keep the owner of the AddressSpace alive until do_address_space_destroy
rcu: run RCU callbacks under the BQL
rcu: do not let RCU callbacks pile up indefinitely
vhost-scsi: set the bootable value of channel/target/lun
vhost-scsi: add a property for booting
vhost-scsi: expose the TYPE_FW_PATH_PROVIDER interface
vhost-scsi: add bootindex property
qdev: support to get a device firmware path directly
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Note that even after this patch, most callers of address_space_*
functions must still be under the big QEMU lock, otherwise the memory
region returned by address_space_translate can disappear as soon as
address_space_translate returns. This will be fixed in the next part
of this series.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
After the previous patch, TLBs will be flushed on every change to
the memory mapping. This patch augments that with synchronization
of the MemoryRegionSections referred to in the iotlb array.
With this change, it is guaranteed that iotlb_to_region will access
the correct memory map, even once the TLB is accessed outside
the BQL.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This for now is a simple TLB flush. This can change later for two
reasons:
1) an AddressSpaceDispatch will be cached in the CPUState object
2) it will not be possible to do tlb_flush once the TCG-generated code
runs outside the BQL.
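As a sketch, the function initially amounts to little more than the
following (the exact signature in the tree may differ):

    void cpu_reload_memory_map(CPUState *cpu)
    {
        /* For now, just drop the whole TLB; reasons 1) and 2) above will
           turn this into something more refined later. */
        tlb_flush(cpu, 1);
    }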
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Use MIN instead of an "if" statement. Move "tb" assignment where
the value is actually used.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
All uses of TB inside cpu_exec are dominated by "tb = tb_find_fast(env)",
and there are no uses after the switch statement. So the assignment
is dead, as reported by Coverity.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
|
With the introduction of QEMU_CLOCK_VIRTUAL_RT, the computation of
sc->diff_clk can be simplified nicely:
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) +
cpu_get_clock_offset()
= qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - cpu_get_clock_offset())
= qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timers_state.cpu_clock_offset)
= qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) -
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT)
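In code, the computation therefore reduces to something like:

    sc->diff_clk = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL)
                 - qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);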
Cc: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
sc->diff_clk is already equal to sleep_delay (split in a second and a
nanosecond part). If you subtract sleep_delay - rem_delay from it, the
result is exactly rem_delay.
Cc: Sebastian Tanase <sebastian.tanase@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache. Do this manually through a new compile
flag.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
This patch sets can_do_io so that icount can be read within cpu-exec,
but outside TB execution.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The exception index is reset at every entry into the cpu_exec()
function. This may cause exceptions to be missed when replaying them.
This patch moves the exception_index reset to the locations where
exceptions are actually processed.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
In icount mode the cpu_exec_nocache function is used to execute part of
an existing TB. At the end of cpu_exec_nocache the newly created TB is
deleted. Sometimes the io_read function needs to recompile the current
TB and restart TB lookup and execution. After that, tb_find_fast finds
the old (bigger) TB again. This TB cannot be executed (because icount is
not big enough) and cpu_exec_nocache is called again. Such a loop
continues over and over. This patch deletes the old TB and avoids
finding it in the TB cache.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The signal is currently checked by 10 targets, but only actually
raised by Sparc and ARM. For the sake of one test-and-branch,
we can handle this generic bit generically.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-24-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-23-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: qemu-ppc@nongnu.org
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-22-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Michael Walle <michael@walle.cc>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Acked-by: Michael Walle <michael@walle.cc>
Message-id: 1410626734-3804-21-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-20-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Leon Alrae <leon.alrae@imgtec.com>
Tested-by: Leon Alrae <leon.alrae@imgtec.com>
Message-id: 1410626734-3804-19-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
It can go back in when it actually does something.
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-id: 1410626734-3804-18-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Jia Liu <proljc@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Tested-by: Jia Liu <proljc@gmail.com>
Message-id: 1410626734-3804-17-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-16-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-15-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-14-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-13-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Message-id: 1410626734-3804-12-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Message-id: 1410626734-3804-11-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Since do_interrupt_m68k_hardirq is no longer used outside
op_helper.c, make it static.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-10-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-9-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Message-id: 1410626734-3804-8-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Continuing the removal of ifdefs from cpu_exec.
Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-7-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Cc: qemu-ppc@nongnu.org
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-6-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-5-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Note that the code that was within the "exit" ifdef block
was identical to the cpu_compute_eflags inline, so make that
simplification at the same time.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-4-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
The cpu_exec_enter/exit hooks are surrounded by many empty
ifdef blocks. Delete all of these to highlight those
targets for which we actually need to do work.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-3-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
In preparation for removing a bunch of ifdefs from cpu_exec.
Cc: Andreas Färber <afaerber@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1410626734-3804-2-git-send-email-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Make the debug_excp_handler target specific hook into a QOM
CPU method.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Correct an error in the logic for deciding whether we can
take an IRQ interrupt, which meant that on M profile cores
it was never possible to disable them.
The design here is still bogus in that M profile doesn't
have separate "IRQ" and "FIQ", which are an A/R profile
concept; we should ideally implement the proper priority
based scheme.
Signed-off-by: David Hoover <spm@boiteauxlettres.sent.at>
[PMM: Wrote a proper commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
Add TriCore target stubs, a QOM CPU, and a MAINTAINERS entry.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-id: 1409572800-4116-2-git-send-email-kbastian@mail.uni-paderborn.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
|
This adds a couple of TCG-specific trace events which are useful for
tracing execution through TCG-generated blocks. It's been tested with
LTTng user-space tracing but is generic enough for all systems. The TCG
events are:
* translate_block - when a subject block is translated
* exec_tb - when a translated block is entered
* exec_tb_exit - when we exit the translated code
* exec_tb_nocache - special case translations
Of course we can only trace the entrance to the first block of a chain
as each block will jump directly to the next when it can. See the -d
nochain patch to allow more complete tracing at the expense of
performance.
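The hooks end up as ordinary trace_<event>() calls around TB execution.
A sketch of the entry-side hook (argument lists are illustrative; the
authoritative definitions live in the trace-events file):

    /* cpu-exec.c (sketch): trace entry into a translated block. */
    trace_exec_tb(tb, tb->pc);
    next_tb = tcg_qemu_tb_exec(env, tb->tc_ptr);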
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Show in 'info jit' the current delay between the host clock
and the guest clock. In addition, print the maximum advance
and delay of the guest compared to the host.
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
If the align option is enabled, we print a message whenever
the guest clock is behind the host clock, so that the user
has a hint about the actual performance. The maximum
print interval is 2s and we limit the number of messages to 100.
If desired, this can be changed in cpu-exec.c.
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The goal is to make QEMU sleep whenever the guest clock is
ahead of the host clock (we use the monotonic clocks). The
amount of time to sleep is calculated in the execution loop
in cpu_exec.
At first, we tried to approximate, at each iteration of the for loop, the
real time elapsed while searching for a TB (generating it or retrieving it
from the cache) and executing it. We would then approximate the virtual time
corresponding to the number of virtual instructions executed. The difference
between these 2 values would allow us to know if the guest is ahead of or
behind the host.
However, the function used for measuring the real time
(qemu_clock_get_ns(QEMU_CLOCK_REALTIME)) proved to be very expensive.
We had an added overhead of 13% of the total run time.
Therefore, we modified the algorithm and only take into account the
difference between the 2 clocks at the beginning of the cpu_exec function.
During the for loop we try to reduce the advance of the guest only by
computing the virtual time elapsed and sleeping if necessary. The overhead
is thus reduced to 3%. Even though this method still has a noticeable
overhead, it no longer is a bottleneck in trying to achieve a better
guest frequency for which the guest clock is faster than the host one.
As for the alignment of the 2 clocks, with the first algorithm
the guest clock was oscillating between -1 and 1ms compared to the host clock.
Using the second algorithm we notice that the guest is 5ms behind the host, which
is still acceptable for our use case.
The tests were conducted using fio and stress. The host machine is an i5 CPU at
3.10GHz running Debian Jessie (kernel 3.12). The guest machine is an arm versatile-pb
built with buildroot.
Currently, on our test machine, the lowest icount we can achieve that is suitable for
aligning the 2 clocks is 6. However, we observe that the IO tests (using fio) are
slower than the cpu tests (using stress).
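A simplified sketch of the per-iteration alignment step follows; names
are illustrative and the real code lives in cpu-exec.c behind the icount
align option:

    static void align_clocks(SyncClocks *sc, const CPUState *cpu)
    {
        int64_t cpu_icount = cpu->icount_extra + cpu->icount_decr.u16.low;

        /* Add the virtual time just executed to the guest/host delta. */
        sc->diff_clk += cpu_icount_to_ns(sc->last_cpu_icount - cpu_icount);
        sc->last_cpu_icount = cpu_icount;

        if (sc->diff_clk > 0) {
            /* Guest ran ahead of real time: sleep the difference away. */
            struct timespec sleep_delay, rem_delay;
            sleep_delay.tv_sec  = sc->diff_clk / 1000000000LL;
            sleep_delay.tv_nsec = sc->diff_clk % 1000000000LL;
            if (nanosleep(&sleep_delay, &rem_delay) < 0) {
                sc->diff_clk = rem_delay.tv_sec * 1000000000LL
                             + rem_delay.tv_nsec;
            } else {
                sc->diff_clk = 0;
            }
        }
    }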
Signed-off-by: Sebastian Tanase <sebastian.tanase@openwide.fr>
Tested-by: Camille Bégué <camille.begue@openwide.fr>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
On the x86, some devices need access to the CPU reset pin (INIT#).
Provide a generic service to do this, using one of the internal
cpu_interrupt targets. Generalize the PPC-specific code for
CPU_INTERRUPT_RESET to other targets.
Since PPC does not support migration across QEMU versions (its
machine types are not versioned yet), I picked the value that
is used on x86, CPU_INTERRUPT_TGT_INT_1. Consequently, TGT_INT_2
and TGT_INT_3 are shifted down by one while keeping their value.
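In cpu_exec() the generalized handling is essentially the block that
previously sat under a TARGET_PPC ifdef (a sketch):

    if (interrupt_request & CPU_INTERRUPT_RESET) {
        cpu_reset(cpu);
    }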
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
If the guest attempts to execute from unreadable memory, this will
cause us to longjmp back to the main loop from inside the
target frontend decoder. For linux-user mode, this means we will
still hold the tb_ctx.tb_lock, and will deadlock when we try to
start executing code again. Unlock the lock in the return-from-longjmp
code path to avoid this.
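A sketch of the fix; have_tb_lock is a local flag set when the lock is
taken around code generation (names follow the description rather than
the patch verbatim):

    /* In the "returned via siglongjmp" branch of cpu_exec(): */
    #if defined(CONFIG_USER_ONLY)
        if (have_tb_lock) {
            spin_unlock(&tcg_ctx.tb_ctx.tb_lock);
            have_tb_lock = false;
        }
    #endif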
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Andrei Warkentin <andrey.warkentin@gmail.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
|
|
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Signed-off-by: Andreas Färber <afaerber@suse.de>
|