|
This patch adds support for the VSX bit of the PowerPC Machine
State Register (MSR) as well as the corresponding VSX Unavailable
exception.
The VSX bit is added to the defined bits masks of the Power7 and
Power8 CPU models.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
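As a rough sketch of the mechanism, assuming the VSX enable lives at MSR
bit 23 (the ISA 2.06 position) and using a hypothetical helper name rather
than the actual QEMU code:

    #include <stdint.h>
    #include <stdbool.h>

    #define MSR_VSX_BIT 23   /* assumed position of the VSX enable bit */

    /* True when VSX instructions may execute; when false, the new
     * "VSX Unavailable" exception would be raised instead. */
    static bool msr_vsx_enabled(uint64_t msr)
    {
        return (msr >> MSR_VSX_BIT) & 1;
    }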
|
|
This patch adds the flag POWERPC_FLAG_VSX to the list of defined
flags and also adds this flag to the list of supported features of
the Power7 and Power8 CPUs. Additionally, the VSX instructions
are added to the list of TCG-enabled instructions.
Signed-off-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Instead of open-coding 64, use MAX_SLB_ENTRIES. We don't update the kernel
header here.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
On POWER7, LPCR_ILE is used to control which endianness guests take
their exceptions in, so use it instead of MSR_ILE.
Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
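A minimal sketch of the selection logic, with bit positions assumed for
illustration only (not taken from the QEMU headers):

    #include <stdint.h>
    #include <stdbool.h>

    /* Decide which endianness an interrupt is taken in: POWER7 consults
     * LPCR[ILE], the earlier behaviour consulted MSR[ILE]. */
    static bool interrupt_little_endian(uint64_t lpcr, uint64_t msr,
                                        bool is_power7)
    {
        const uint64_t LPCR_ILE = 1ull << 25;   /* assumed bit position */
        const uint64_t MSR_ILE  = 1ull << 16;   /* assumed bit position */

        return is_power7 ? (lpcr & LPCR_ILE) != 0 : (msr & MSR_ILE) != 0;
    }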
|
|
The savevm code for the powerpc cpu emulation is currently based around
the old register_savevm() rather than register_vmstate() method. It's also
rather broken, missing some important state on some CPU models.
This patch completely rewrites the savevm for target-ppc, using the new
VMStateDescription approach. Exactly what needs to be saved in what
configurations has been more carefully examined, too. This introduces a
new version (5) of the cpu save format. The old load function is retained
to support version 4 images.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-id: 1374175984-8930-2-git-send-email-aliguori@us.ibm.com
[aik: ppc cpu savevm conversion fixed to use PowerPCCPU instead of CPUPPCState]
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
Where no extra implementation is needed, fall back to CPUClass::set_pc().
Acked-by: Michael Walle <michael@walle.cc> (for lm32)
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
The functions cpu_clone_regs() and cpu_set_tls() are not purely CPU
related -- they are specific to the TLS ABI for a particular OS.
Move them into the linux-user/ tree where they belong.
target-lm32 had entirely unused implementations, since it has no
linux-user target; just drop them.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Use it to clean up the opcode table, resolving a former TODO from Jocelyn.
Also switch from malloc() to g_malloc().
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
When running -cpu host on a POWER7 system with PR KVM, we mask out the 1TB
MMU capability from the MMU type mask, but not the AMR bit.
This leads to us having a new MMU type that we don't check for in our
MMU management functions.
Add the new type, so that we don't have to worry about breakage there.
We're not going to use the TCG MMU management in that case anyway.
The long term fix for this will be to move all these MMU management
functions to class callbacks.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
... and enable it on the POWER7 CPU.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
In addition to the performance monitor registers found on nearly all
6xx chips, the POWER7 has two additional counters (PMC5 & PMC6) and an
extra control register (MMCRA). This patch adds stub support for them to
qemu - the registers won't do anything, but with this change accesses to
them will no longer cause illegal instruction traps. They're also registered with
their ONE_REG ids, so their value will be kept in sync with KVM where
appropriate.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This value is not needed if we handle the MSR[IP] bit correctly.
excp_prefix is always 0x00000000, except when the MSR[IP] bit is
implemented and set to 1; in that case excp_prefix is 0xfff00000.
The handling of MSR[IP] was already implemented but not used at reset
because the value of env->msr was changed "manually".
The patch uses the function hreg_store_msr() to set env->msr, which
ensures proper handling of MSR[IP] at reset, and therefore a correct value
for excp_prefix.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
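A sketch of the prefix selection, assuming MSR[IP] sits at bit 6 counted
from the least significant bit on CPUs that implement it; the helper is
hypothetical and not the actual hreg_store_msr() path:

    #include <stdint.h>
    #include <stdbool.h>

    #define MSR_IP_BIT 6   /* assumed position of the MSR[IP] bit */

    /* Exception vector prefix implied by the MSR at reset. */
    static uint32_t excp_prefix_for_msr(uint32_t msr, bool has_ip_bit)
    {
        if (has_ip_bit && ((msr >> MSR_IP_BIT) & 1)) {
            return 0xFFF00000u;
        }
        return 0x00000000u;
    }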
|
|
For softmmu builds the interface from the generic code to the target
specific MMU implementation is through the tlb_fill() function. For ppc
this is currently in mem_helper.c, whereas it would make more sense in
mmu_helper.c. This patch moves it, which also allows
cpu_ppc_handle_mmu_fault() to become a local function in mmu_helper.c
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
mmu_helper.c is, for obvious reasons, almost entirely concerned with
softmmu builds of qemu. However, it does contain one stub function which
is used when CONFIG_USER_ONLY=y - the user-only version of
cpu_ppc_handle_mmu_fault, which always triggers an exception. The entire
rest of the file is surrounded by #if !defined(CONFIG_USER_ONLY).
We clean this up by moving the user only stub into its own new file,
removing the ifdefs and building mmu_helper.c only when CONFIG_SOFTMMU
is set. This also lets us remove the #define of cpu_handle_mmu_fault to
cpu_ppc_handle_mmu_fault - that name is only used from generic code for
user only - so we just name our split user version by the generic name.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Version 2.06 of the Power architecture describes an additional page
protection mechanism. Each virtual page has a "class" (0-31) recorded in
the PTE. The AMR register contains bits which can prohibit reads and/or
writes on a class-by-class basis. Interestingly, the AMR is userspace
readable and writable; however, user mode writes are masked by the contents
of the UAMOR, which is privileged.
This patch implements this protection mechanism, along with the AMR and
UAMOR SPRs. The architecture also specifies a hypervisor-privileged AMOR
register which masks user and supervisor writes to the AMR and UAMOR. We
leave this out for now, since we don't at present model hypervisor mode
correctly in any case.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
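A sketch of the class-based check, assuming the ISA 2.06 pairing of two
AMR bits per class where the upper bit prohibits writes and the lower bit
prohibits reads; the helper and constants are illustrative only:

    #include <stdint.h>

    #define PROT_READ  1
    #define PROT_WRITE 2

    /* Reduce the page protection according to the AMR field for the
     * page's class (key 0-31). */
    static int amr_prot(uint64_t amr, unsigned key)
    {
        unsigned bits = (amr >> (2 * (31 - key))) & 0x3;
        int prot = PROT_READ | PROT_WRITE;

        if (bits & 0x2) {
            prot &= ~PROT_WRITE;   /* writes prohibited for this class */
        }
        if (bits & 0x1) {
            prot &= ~PROT_READ;    /* reads prohibited for this class */
        }
        return prot;
    }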
|
|
Currently cpu.h contains a number of definitions relating to the 64-bit
hash MMU. Some are used in the MMU emulation code, but some are only used
in the spapr MMU management hcall implementations.
This patch moves these definitions (except for a few that are needed
more widely) into the mmu-hash64.h header, shared between the MMU emulation
code and the spapr hcall code. The MMU emulation code is also updated to
actually use a number of those definitions in place of hard coded
constants.
Similarly, we add new analogous definitions to mmu-hash32.h and use those
in place of many hard-coded constants in mmu-hash32.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix 32-bit hosts]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
mmu_ctx_t is currently defined in cpu.h. However it is used for temporary
information relating to mmu translation, and is only used in mmu_helper.c
and (now) mmu-hash{32,64}.c. Furthermore it contains information which
should be specific to particular MMU types. Therefore, move its definition
to mmu_helper.c. mmu-hash{32,64}.c are converted to use new data types
private to the relevant MMUs (identical to mmu_ctx_t for now, but that will
change in future patches).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The functions for looking up BATs (Block Address Translation - essentially
a level 0 TLB) are shared between the classic 32-bit hash MMUs and the
6xx style software loaded TLB implementations.
This patch splits out a copy for the 32-bit hash MMUs, to facilitate
cleaning it up. The remaining version is left, but cleaned up slightly
to no longer deal with PowerPC 601 peculiarities (601 has a hash MMU).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
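For reference, a sketch of a classic 32-bit BAT match, assuming the usual
BATU layout (BEPI in the upper 15 bits, the 11-bit BL mask at bits 2-12);
this is not the QEMU routine:

    #include <stdint.h>
    #include <stdbool.h>

    /* A BAT hits when the effective address matches BEPI on every bit
     * that BL does not mask out. */
    static bool bat_hit(uint32_t batu, uint32_t ea)
    {
        uint32_t bl   = (batu >> 2) & 0x7FF;        /* block length mask */
        uint32_t mask = ~(bl << 17) & 0xFFFE0000u;  /* compared bits */

        return (ea & mask) == (batu & mask);
    }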
|
|
The get_pteg_offset() helper function is currently shared between 32-bit
and 64-bit hash mmus, taking a parameter for the hash pte size. In the
64-bit paths, it's only called in one place, and it's a trivial
calculation. This patch, therefore, open codes it for 64-bit. The
remaining version, which is used in two places is made 32-bit only and
moved to mmu-hash32.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
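The open-coded 64-bit calculation amounts to something like the sketch
below, assuming 8 PTEs of 16 bytes per group and a byte-granular hash
table mask; names and constants are illustrative:

    #include <stdint.h>

    #define HPTES_PER_GROUP  8
    #define HASH_PTE_SIZE_64 16

    /* Byte offset of a PTE group within the hash table. */
    static uint64_t pteg_offset64(uint64_t hash, uint64_t htab_mask)
    {
        return (hash * HPTES_PER_GROUP * HASH_PTE_SIZE_64) & htab_mask;
    }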
|
|
The newly separated paths for hash mmus rely on several helper functions
which are still shared with 32-bit hash mmus: pp_check(), check_prot() and
pte_update_flags(). While these don't have ugly ifdefs on the mmu type,
they're not very well thought out, so sharing them impedes cleaning up the
hash mmu paths. For now, put near-duplicate versions into mmu-hash64.c and
mmu-hash32.c, leaving the old version in mmu_helper.c for 6xx software
loaded TLB implementations. The hash 32 and software loaded
implementations are simplified slightly, using the fact that no 32-bit CPUs
implement the 3rd page protection bit.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Depending on the MSR state, for 64-bit hash MMUs, get_physical_address
can either call check_physical (which has further tests for mmu type)
or get_segment64. Similarly for 32-bit hash MMUs we can either call
check_physical or get_bat() and get_segment32().
This patch splits off the whole get_physical_address() path for hash
MMUs into 32-bit and 64-bit versions, handling real mode correctly for
such MMUs without going to check_physical and rechecking the mmu type.
Correspondingly, the hash MMU specific paths in check_physical() are
removed.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
32-bit and 64-bit hash MMU implementations currently share a find_pte
function. This results in a whole bunch of ugly conditionals in the shared
function, and not all that much actually shared code.
This patch separates out the 32-bit and 64-bit versions, putting them
in mmu-hash64.c and mmu-hash32.c, and removes the conditionals from
both versions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Currently support for both 32-bit and 64-bit hash MMUs share an
implementation of pte_check. But there are enough differences that this
means the shared function has several very ugly conditionals on "is_64b".
This patch cleans things up by separating out the 64-bit version
(putting it into mmu-hash64.c) and the 32-bit hash version (putting it
in mmu-hash32.c). Another copy remains in mmu_helper.c, which is used
for the 6xx software loaded TLB paths.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
As a first step to disentangling the handling for 64-bit hash MMUs from
the rest, we move the code handling the Segment Lookaside Buffer (SLB)
(which only exists on 64-bit hash MMUs) into a new mmu-hash64.c file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This removes the never-used pte64_invalidate() function, and makes
ppcmas_tlb_check() static, since it's only used within that file.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
The PowerPC 620 was the very first 64-bit PowerPC implementation, but
hardly anyone ever actually used the chips. qemu notionally supports the
620, but since we don't actually have code to implement the segment table,
the support is broken (quite likely in other ways too).
This patch, therefore, removes all remaining pieces of 620 support, to
stop it cluttering up the platforms we actually care about. This includes
removing support for the ASR register, used only on segment table based
machines.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Although support for this register may be incomplete, there is no
reason to prevent the debugger from reading or writing it.
Signed-off-by: Fabien Chouteau <chouteau@adacore.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
This removes a global per-target function and thus takes us one step
closer to compiling multiple targets into one executable.
It will also allow overriding the interrupt handling for certain CPU
families.
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Both fields are used in VMState, thus need to be moved together.
Explicitly zero them on reset since they were located before
breakpoints.
Pass PowerPCCPU to kvmppc_handle_halt().
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
Currently when running under KVM on ppc, we synchronize a certain number of
vital SPRs to KVM through the SET_SREGS call. This leaves out quite a lot
of important SPRs which are maintained in KVM. It would be helpful to
have their contents in qemu for debugging purposes, and when we implement
migration it will be vital, since they include important guest state that
will need to be restored on the target.
This patch sets up for synchronization of any registers supported by the
KVM ONE_REG calls. A new variant on spr_register() allows a ONE_REG id to
be stored with the SPR information. When we set/get information to KVM
we also synchronize any SPRs so registered.
For now we set this mechanism up to synchronize a handful of important
registers that already have ONE_REG IDs, notably the DAR and DSISR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
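A sketch of the underlying idea (the structure and helper are hypothetical,
not the actual spr_register() variant): each SPR record carries an optional
ONE_REG id, and the KVM get/set paths walk the table and sync every SPR
that has one:

    #include <stdint.h>
    #include <stddef.h>

    typedef struct spr_info_sketch {
        const char *name;
        uint64_t    one_reg_id;   /* 0: not synchronized via ONE_REG */
        uint64_t    value;
    } spr_info_sketch;

    /* Count the SPRs that would be synchronized with KVM. */
    static size_t count_one_reg_sprs(const spr_info_sketch *sprs, size_t n)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++) {
            if (sprs[i].one_reg_id != 0) {
                count++;
            }
        }
        return count;
    }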
|
|
Turn the array of model definitions into a set of self-registering QOM
types with their own class_init. Unique identifiers are obtained from
the combination of PVR, SVR and family identifiers; this requires all
alias #defines to be removed from the list. Possibly there are some more
left after this commit that are not currently being compiled.
Prepares for introducing abstract intermediate CPU types for families.
Keep the right-aligned macro line breaks within 78 chars to aid
three-way merges.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
In preparation for more efficient setting of these fields.
Cc: Alexander Graf <agraf@suse.de>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
|
|
The bit that makes a dcbz instruction a dcbzl instruction was declared as
reserved in ppc32 ISAs. However, hardware simply ignores the bit, so code
that invokes dcbzl instead of dcbz remains valid even on 750 and G4.
Thus, mark the bit as unreserved so that we properly emulate a simple dcbz
in case we're running on non-G5s.
While at it, also refactor the code to check the 970 special case during
runtime. This way we don't need to differentiate between a 970 dcbz and
any other dcbz anymore. We also allow for future improvements to add e500mc
dcbz handling.
Reported-by: Amadeusz Sławiński <amade@asmblr.net>
Signed-off-by: Alexander Graf <agraf@suse.de>
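The runtime check boils down to something like the sketch below, assuming
the 970 behaviour of a 32-byte plain dcbz while dcbzl (and every other CPU)
clears the full cache line; the helper is hypothetical:

    #include <stdbool.h>

    /* Number of bytes a dcbz/dcbzl clears. */
    static unsigned dcbz_size(unsigned cache_line_size, bool is_970,
                              bool is_dcbzl)
    {
        if (is_970 && !is_dcbzl) {
            return 32;              /* 970: plain dcbz clears 32 bytes */
        }
        return cache_line_size;     /* otherwise: the full cache line */
    }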
|
|
Since the model list is highly macrofied, keep ppc_def_t for now and
save a pointer to it in PowerPCCPUClass. This results in a flat list of
subclasses including aliases, to be refined later.
Move cpu_ppc_init() to translate_init.c and drop helper.c.
Long-term the idea is to turn translate_init.c into a standalone cpu.c.
Inline cpu_ppc_usable() into type registration.
Split cpu_ppc_register() in two by code movement into the initfn and
by turning the remaining part into a realizefn.
Move qemu_init_vcpu() call into the new realizefn and adapt
create_ppc_opcodes() to return an Error.
Change ppc_find_by_pvr() -> ppc_cpu_class_by_pvr().
Change ppc_find_by_name() -> ppc_cpu_class_by_name().
Turn -cpu host into its own subclass. This requires moving the
kvm_enabled() check in ppc_cpu_class_by_name() to avoid the class being
found via the normal name lookup in the !kvm_enabled() case.
Turn kvmppc_host_cpu_def() into the class_init and add an initfn that
asserts KVM is in fact enabled.
Implement -cpu ? and the QMP equivalent in terms of subclasses.
This newly exposes -cpu host to the user, ordered last for -cpu ?.
Signed-off-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
We already used to support the external proxy facility of FSL MPICs,
but only implemented it halfway correctly.
This patch adds support for
* dynamic enablement of the EPR facility
* interrupt acknowledgement only when the interrupt is delivered
This way the implementation now is closer to real hardware.
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
The hwaddr type is somewhat vaguely defined as being able to contain bus
addresses on the widest possible bus in the system. For that reason it's
discouraged for representing specific pieces of persistent hardware state,
which should instead use an explicit width type that matches the bits
available in real hardware. In particular, because of the possibility that
the size of hwaddr might change if different buses are added to the target
in future, it's not suitable for use in vm state descriptions for savevm
and migration.
This patch purges such unwise uses of hwaddr from the ppc target code,
which turns out to be just one. The ppcemb_tlb_t struct, used on a number
of embedded ppc models to represent a TLB entry, contains a hwaddr for the
real address field. This patch changes it to be a fixed uint64_t which is
suitable enough for all machine types which use this structure.
Other uses of hwaddr in CPUPPCState turn out not to be problematic:
htab_base and htab_mask are just used for the convenience of the TCG code;
the underlying machine state is the SDR1 register, which is stored with
a suitable type already. Likewise the mpic_cpu_base field is only used
internally and does not represent fundamental hardware state which needs to
be saved.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
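The shape of the change is roughly the sketch below: persistent TLB state
gets an explicit fixed-width type rather than hwaddr. The field list is
simplified and not the full ppcemb_tlb_t definition:

    #include <stdint.h>

    typedef struct ppcemb_tlb_sketch {
        uint64_t RPN;    /* real address: fixed 64-bit width, not hwaddr */
        uint32_t EPN;    /* effective page number */
        uint32_t PID;
        uint32_t size;
        uint32_t prot;
        uint32_t attr;
    } ppcemb_tlb_sketch;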
|
|
* 'trivial-patches' of git://github.com/stefanha/qemu:
pc: Drop redundant test for ROM memory region
exec: make some functions static
target-ppc: make some functions static
ppc: add missing static
vnc: add missing static
vl.c: add missing static
target-sparc: make do_unaligned_access static
m68k: Return semihosting errno values correctly
cadence_uart: More debug information
Conflicts:
target-m68k/m68k-semi.c
|
|
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
Add missing 'static' qualifiers.
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
|
* afaerber/qom-cpu: (35 commits)
target-i386: Pass X86CPU to kvm_handle_halt()
target-i386: Pass X86CPU to kvm_get_mp_state()
cpu: Move thread_id to CPUState
cpus: Pass CPUState to run_on_cpu()
target-i386: Pass X86CPU to cpu_x86_inject_mce()
target-i386: Pass X86CPU to kvm_mce_inject()
cpus: Pass CPUState to [qemu_]cpu_has_work()
spapr: Pass PowerPCCPU to hypercalls
spapr: Pass PowerPCCPU to spapr_hypercall()
target-ppc: Pass PowerPCCPU to cpu_ppc_hypercall
target-ppc: Pass PowerPCCPU to powerpc_excp()
xtensa_pic: Pass XtensaCPU to xtensa_ccompare_cb()
cpus: Pass CPUState to qemu_wait_io_event_common()
cpus: Pass CPUState to flush_queued_work()
cpu: Move queued_work_{first,last} to CPUState
cpus: Pass CPUState to qemu_cpu_kick()
target-ppc: Rename kvm_kick_{env => cpu} and pass PowerPCCPU
ppc: Pass PowerPCCPU to {ppc6xx,ppc970,power7,ppc40x,ppce500}_set_irq()
cpus: Pass CPUState to qemu_tcg_init_vcpu()
cpus: Pass CPUState to qemu_tcg_cpu_thread_fn
...
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
This patch adds some extra FPU state to CPUPPCState. Specifically,
fpscr is extended to a target_ulong, since some recent (64-bit)
CPUs now have more status bits than fit inside 32 bits. Also, we add
the 32 VSR registers present on CPUs with VSX (these extend the
standard FP regs, which together with the Altivec/VMX registers form a
64 x 128-bit register file for VSX).
We don't actually support the instructions using these extra registers
in TCG yet, but we still need a place to store the state so we can
sync it with KVM and savevm/loadvm it. This patch updates the savevm
code to not fail on the extended state, but also does not actually
save it - that's a project for another patch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
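For orientation, the register file described above looks roughly like the
sketch below, where VSR0-31 pair the classic FP registers with new low
halves and VSR32-63 alias the Altivec registers; the names are illustrative,
not the CPUPPCState fields:

    #include <stdint.h>

    typedef struct vsx_regs_sketch {
        uint64_t fpr[32];     /* high halves of VSR0-31 (the FP regs) */
        uint64_t vsr[32];     /* new low halves of VSR0-31 */
        uint64_t avr[32][2];  /* VSR32-63 (the Altivec/VMX regs) */
    } vsx_regs_sketch;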
|
|
We change the storage of the VPA information to explicitly use fixed
size integer types which will make life easier for syncing this data with
KVM, which we will need in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[agraf: fix commit message]
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
For target-mips also change the return type to bool.
Make include paths for cpu-qom.h consistent for alpha and unicore32.
Signed-off-by: Andreas Färber <afaerber@suse.de>
[AF: Updated new target-openrisc function accordingly]
Acked-by: Richard Henderson <rth@twiddle.net> (for alpha)
|
|
Adapt emulate_spapr_hypercall() accordingly.
Needed for changing spapr_hypercall() argument type to PowerPCCPU.
Signed-off-by: Andreas Färber <afaerber@suse.de>
|
|
With PAPR guests, hypercalls allow registration of the Virtual Processor
Area (VPA), SLB shadow and dispatch trace log (DTL), each of which allow
for certain communication between the guest and hypervisor. Currently, we
store the addresses of the three areas and the size of the dtl in
CPUPPCState.
The SLB shadow and DTL are variable sized, with the size being retrieved
from within the registered memory area at hypercall time. This size
can later be overwritten with other information, however, so we need to
save the size as of registration time. We already do this for the DTL,
but not for the SLB shadow, so this patch fixes that.
In addition, we change the storage of the VPA information to use fixed
size integer types which will make life easier for syncing this data with
KVM, which we will need in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
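The registration state being described amounts to roughly the following,
with the sizes captured at registration time and everything held in
fixed-width integers; field names are hypothetical:

    #include <stdint.h>

    typedef struct vpa_state_sketch {
        uint64_t vpa_addr;
        uint64_t slb_shadow_addr;
        uint64_t slb_shadow_size;   /* captured when registered */
        uint64_t dtl_addr;
        uint64_t dtl_size;          /* captured when registered */
    } vpa_state_sketch;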
|
|
target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
reserved) and its purpose doesn't match the name (most target_phys_addr_t
addresses are not target specific). Replace it with a finger-friendly,
standards conformant hwaddr.
Outstanding patchsets can be fixed up with the command
git rebase -i --exec 'find -name "*.[ch]"
| xargs sed -i s/target_phys_addr_t/hwaddr/g' origin
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
|
|
CPUPPCState includes a variable 'power_mode' which is used nowhere. This
patch removes it. This includes saving a dummy zero in its place during
vmsave, to avoid breaking the save format.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
|
|
On 64-bit capable systems, MAS2 can actually hold a 64-bit virtual page
address. So increase the mask for its EPN.
Signed-off-by: Alexander Graf <agraf@suse.de>
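The widened mask amounts to something like the sketch below; the exact
mask values are assumptions for illustration, not taken from the code:

    #include <stdint.h>
    #include <stdbool.h>

    /* EPN occupies the address bits above the 12-bit page offset. */
    #define MAS2_EPN_MASK_32 0xFFFFF000u
    #define MAS2_EPN_MASK_64 0xFFFFFFFFFFFFF000ull

    static uint64_t mas2_epn(uint64_t mas2, bool is_64bit)
    {
        return mas2 & (is_64bit ? MAS2_EPN_MASK_64 : MAS2_EPN_MASK_32);
    }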
|