Commit graph

7652 commits

Peter Maydell 85b4fa0cd1 linux-user: Don't include gdbstub.h in qemu.h
Currently the linux-user qemu.h pulls in gdbstub.h. There's no real reason
why it should do this; include it directly from the C files which require
it, and drop the include line in qemu.h.

(Note that several of the C files previously relying on this indirect
include were going out of their way to only include gdbstub.h conditionally
on not CONFIG_USER_ONLY!)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210908154405.15417-9-peter.maydell@linaro.org>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
2021-09-13 20:35:45 +02:00
Peter Maydell 7d79344d4f * Fixes for "-cpu max" on i386 TCG (Daniel)
* vVMLOAD/VMSAVE and vGIF implementation (Lara)
 * Reorganize i386 targets documentation in preparation for SGX (myself)
 * Meson cleanups (myself, Thomas)
 * NVMM fixes (Reinoud)
 * Suppress bogus -Wstringop-overflow (Richard)
 -----BEGIN PGP SIGNATURE-----
 
 iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmE/PHEUHHBib256aW5p
 QHJlZGhhdC5jb20ACgkQv/vSX3jHroNn0ggAhOApMUZR2L9p4Z56X+Nnc1835dOJ
 QlX8UmMpoRBPuIKfaJPJQWwYeRSw4Nqaik3EndXug8Mo3LJaG5AFEHTXDkZGHMgh
 tGCyeARhDnUQPfKLszT1zg0EMloX6bCLFaA9ba1JBNK8VWXE4oJJLETk3Q+pDJZt
 0ztoxaLvQ2jaMFfPKtLdyhcXjDCPeZZjaQjCFVVmWV9hj8z4np3LZLoYi8a6cRWu
 u1Rb5SrftF12tu+RWACXZFQSnxFkU+iVeoKhQB0vrh7UgV/HAAbZS8c2U46v/kM0
 H6UcuBPjrz3fF/9hHNdovb4HxyQAP2pEliBSG7tFzJ+TbnMQVcoxN5uJ2Q==
 =DBxg
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging

* Fixes for "-cpu max" on i386 TCG (Daniel)
* vVMLOAD/VMSAVE and vGIF implementation (Lara)
* Reorganize i386 targets documentation in preparation for SGX (myself)
* Meson cleanups (myself, Thomas)
* NVMM fixes (Reinoud)
* Suppress bogus -Wstringop-overflow (Richard)

# gpg: Signature made Mon 13 Sep 2021 12:56:33 BST
# gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
# gpg:                issuer "pbonzini@redhat.com"
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83

* remotes/bonzini-gitlab/tags/for-upstream: (21 commits)
  docs: link to archived Fedora code of conduct
  Fix nvmm_ram_block_added() function arguments
  Only check CONFIG_NVMM when NEED_CPU_H is defined
  util: Suppress -Wstringop-overflow in qemu_thread_start
  fw_cfg: add etc/msr_feature_control
  meson: remove dead variable
  meson: do not use python.full_path() unnecessarily
  meson: look up cp and dtrace with find_program()
  meson.build: Do not look for VNC-related libraries if have_system is not set
  docs/system: move x86 CPU configuration to a separate document
  docs/system: standardize man page sections to --- with overline
  docs: standardize directory index to --- with overline
  docs: standardize book titles to === with overline
  target/i386: Added vVMLOAD and vVMSAVE feature
  target/i386: Added changed priority check for VIRQ
  target/i386: Added ignore TPR check in ctl_has_irq
  target/i386: Added VGIF V_IRQ masking capability
  target/i386: Moved int_ctl into CPUX86State structure
  target/i386: Added VGIF feature
  target/i386: VMRUN and VMLOAD canonicalizations
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-13 13:33:21 +01:00
Reinoud Zandijk 8d4cd3dd8b Fix nvmm_ram_block_added() function arguments
A max_size parameter was added to the RAMBlockNotifier
ram_block_added function. Use max_size for preallocation
of HVA space.

Signed-off-by: Reinoud Zandijk <Reinoud@NetBSD.org>
Message-Id: <20210718134650.1191-3-reinoud@NetBSD.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier 52fb8ad37a target/i386: Added vVMLOAD and vVMSAVE feature
The feature allows the VMSAVE and VMLOAD instructions to execute in guest mode without
causing a VMEXIT. (APM2 15.33.1)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier 7760bb069f target/i386: Added changed priority check for VIRQ
Writes to cr8 affect v_tpr. This could set or unset an interrupt
request as the priority might have changed.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier 66a0201ba7 target/i386: Added ignore TPR check in ctl_has_irq
The APM2 states that if V_IGN_TPR is nonzero, the current
virtual interrupt ignores the (virtual) TPR.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier b67e2796a1 target/i386: Added VGIF V_IRQ masking capability
VGIF provides masking capability for when virtual interrupts
are taken. (APM2)

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier e3126a5c92 target/i386: Moved int_ctl into CPUX86State structure
Moved int_ctl into the CPUX86State structure.  It removes some
unnecessary stores and loads, and prepares for tracking the vIRQ
state even when it is masked due to vGIF.

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier 900eeca579 target/i386: Added VGIF feature
VGIF allows STGI and CLGI to execute in guest mode and control virtual
interrupts in guest mode.
When the VGIF feature is enabled (see the sketch below):
 * executing STGI in the guest sets bit 9 of the field at VMCB offset 60h.
 * executing CLGI in the guest clears bit 9 of the field at VMCB offset 60h.
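
Purely as an illustration of that bit layout (the helper and macro names are
hypothetical, not QEMU code), toggling V_GIF in the 32-bit interrupt-control
word at VMCB offset 60h looks roughly like this:

    #include <stdint.h>
    #include <string.h>

    #define VMCB_OFF_INT_CTL  0x60          /* V_TPR, V_IRQ, ..., V_GIF live here */
    #define V_GIF_MASK        (1u << 9)     /* bit 9: virtual global interrupt flag */

    /* Set (STGI) or clear (CLGI) the V_GIF bit in a raw little-endian VMCB image. */
    static void vmcb_set_v_gif(uint8_t *vmcb, int enabled)
    {
        uint32_t int_ctl;

        memcpy(&int_ctl, vmcb + VMCB_OFF_INT_CTL, sizeof(int_ctl));
        if (enabled) {
            int_ctl |= V_GIF_MASK;
        } else {
            int_ctl &= ~V_GIF_MASK;
        }
        memcpy(vmcb + VMCB_OFF_INT_CTL, &int_ctl, sizeof(int_ctl));
    }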

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210730070742.9674-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Lara Lazier 97afb47e15 target/i386: VMRUN and VMLOAD canonicalizations
APM2 requires that VMRUN and VMLOAD canonicalize (sign extend to 63
from 48/57) all base addresses in the segment registers loaded by the
respective instruction.
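
As a standalone sketch of what that canonicalization means (not the QEMU
helper), sign-extending a base address from the top implemented
virtual-address bit:

    #include <stdint.h>

    /* Sign-extend a segment base from bit (va_bits - 1) into bits up to 63,
     * where va_bits is 48 (4-level paging) or 57 (5-level paging). */
    static uint64_t canonicalize_base(uint64_t base, unsigned va_bits)
    {
        unsigned shift = 64 - va_bits;
        return (uint64_t)(((int64_t)(base << shift)) >> shift);
    }

    /* e.g. canonicalize_base(0x0000800000000000, 48) == 0xffff800000000000 */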

Signed-off-by: Lara Lazier <laramglazier@gmail.com>
Message-Id: <20210804113058.45186-1-laramglazier@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:26 +02:00
Daniel P. Berrangé 69e3895f9d target/i386: add missing bits to CR4_RESERVED_MASK
Booting Fedora kernels with -cpu max hangs very early in boot. Disabling
the la57 CPUID bit fixes the problem. git bisect traced the regression to

  commit 213ff024a2 (HEAD, refs/bisect/bad)
  Author: Lara Lazier <laramglazier@gmail.com>
  Date:   Wed Jul 21 17:26:50 2021 +0200

    target/i386: Added consistency checks for CR4

    All MBZ bits in CR4 must be zero. (APM2 15.5)
    Added reserved bitmask and added checks in both
    helper_vmrun and helper_write_crN.

    Signed-off-by: Lara Lazier <laramglazier@gmail.com>
    Message-Id: <20210721152651.14683-2-laramglazier@gmail.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

In this commit CR4_RESERVED_MASK is missing CR4_LA57_MASK and
two other bits. Adding the missing bits lets Fedora kernels boot once again.
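
For context, a minimal sketch of the reserved-mask idea (illustrative bit
values and names, not QEMU's actual CR4_RESERVED_MASK definition): the mask
is the complement of every CR4 bit the CPU model knows about, so leaving
LA57 (bit 12) out of the known set means the guest's attempt to enable
5-level paging is rejected:

    #include <stdint.h>
    #include <stdbool.h>

    #define CR4_PAE_BIT   (1ull << 5)
    #define CR4_LA57_BIT  (1ull << 12)
    #define CR4_PCIDE_BIT (1ull << 17)

    /* A CR4 write is rejected if it sets any bit outside the set of bits
     * the CPU model implements. */
    static bool cr4_write_ok(uint64_t new_cr4, uint64_t implemented_bits)
    {
        uint64_t reserved_mask = ~implemented_bits;
        return (new_cr4 & reserved_mask) == 0;
    }

With CR4_LA57_BIT missing from implemented_bits, cr4_write_ok() rejects the
write that enables 5-level paging, which matches the observed early-boot hang.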

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20210831175033.175584-1-berrange@redhat.com>
[Removed VMXE/SMXE, matching the commit message. - Paolo]
Fixes: 213ff024a2 ("target/i386: Added consistency checks for CR4", 2021-07-22)
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2021-09-13 13:56:18 +02:00
Peter Maydell b5328172a9 target/sparc: Drop use of gen_io_end()
The gen_io_end() function is obsolete (as documented in
docs/devel/tcg-icount.rst). Where an instruction is an I/O
operation, the translator frontend should call gen_io_start()
before generating the code which does the I/O, and then
end the TB immediately after this insn.

Remove the calls to gen_io_end() in the SPARC frontend,
and ensure that the insns which were calling it end the
TB if they didn't do so already.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20210724134902.7785-2-peter.maydell@linaro.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
2021-09-08 11:09:45 +01:00
Christian Borntraeger 30e398f796 s390x/cpumodel: Add more features to gen16 default model
Add the new gen16 features to the default model and fence them for
machine version 6.1 and earlier.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210907101017.27126-1-borntraeger@de.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-07 13:36:43 +02:00
David Hildenbrand c35622387e hw/s390x/s390-skeys: lazy storage key enablement under TCG
Let's enable storage keys lazily under TCG, just as we do under KVM.
Only fairly old Linux versions actually make use of storage keys, so it
can be wasteful to allocate a fair amount of memory and track
changes and references if nobody cares.

We have to make sure to flush the TLB when enabling storage keys after
the VM was already running: otherwise it might happen that we don't
catch references or modifications afterwards.
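
The shape of that lazy enablement, as a self-contained sketch (allocation
granularity, the flush hook and the names are hypothetical, not QEMU's
s390-skeys code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <stdbool.h>

    /* Hypothetical stand-in for flushing cached translations on all CPUs. */
    static void flush_all_tlbs(void) { }

    static uint8_t *skeys;   /* one storage key per 4KB page, allocated lazily */

    /* Called the first time a storage-key instruction is executed.  If the VM
     * was already running, flush the TLB so that later references and
     * modifications are actually tracked. */
    static int skeys_enable(size_t guest_pages, bool vm_already_ran)
    {
        if (skeys) {
            return 0;                    /* already enabled */
        }
        skeys = calloc(guest_pages, 1);
        if (!skeys) {
            return -1;
        }
        if (vm_already_ran) {
            flush_all_tlbs();
        }
        return 0;
    }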

Add proper documentation to all callbacks.

The kvm-unit-tests skey test keeps working with this change.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-14-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand 380ac2bcce s390x/mmu_helper: avoid setting the storage key if nothing changed
Avoid setting the key if nothing changed.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-9-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand 390191c6f6 s390x/mmu_helper: move address validation into mmu_translate*()
Let's move address validation into mmu_translate() and
mmu_translate_real(). This allows for checking whether an absolute
address is valid before looking up the storage key. We can now get rid of
the ram_size check.

Interestingly, we're already handling LOAD REAL ADDRESS wrong, because
a) We're not supposed to touch storage keys
b) We're not supposed to convert to an absolute address

Let's use a fake, negative MMUAccessType to teach mmu_translate() to
fix that handling and to not perform address validation.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-8-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand e0b11f2df1 s390x/mmu_helper: fixup mmu_translate() documentation
Looks like we forgot to adjust documentation of one parameter.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-7-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand e039992f9a s390x/mmu_helper: no need to pass access type to mmu_translate_asce()
The access type is unused since commit 81d7e3bc45 ("s390x/mmu: Inject
DAT exceptions from a single place").

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-6-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand eaa0feea75 s390x/tcg: check for addressing exceptions for RRBE, SSKE and ISKE
Let's replace the ram_size check by a proper physical address space
check (for example, to prepare for memory hotplug), trigger addressing
exceptions and trace the return value of the storage key getter/setter.

Provide a helper mmu_absolute_addr_valid() to be used in other contexts
soon. Always test for "read" instead of "write" as we are not actually
modifying the page itself.

Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-5-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand 06d8a10a70 s390x/tcg: convert real to absolute address for RRBE, SSKE and ISKE
For RRBE, SSKE, and ISKE, we're dealing with real addresses, so we have to
convert to an absolute address first.

In the future, when adding EDAT1 support, we'll have to pay attention to
SSKE handling, as we'll be dealing with absolute addresses when the
multiple-block control is one.
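
For reference, the real-to-absolute conversion in question (z/Architecture
prefixing) can be sketched standalone as follows; this is an illustration,
not QEMU's mmu helper:

    #include <stdint.h>

    /* Prefixing: the 8KB of real storage at address 0 and the 8KB block at
     * the CPU's prefix register swap places; everything else maps 1:1. */
    static uint64_t real2abs(uint64_t real, uint64_t prefix /* 8KB aligned */)
    {
        if (real < 0x2000) {
            return real + prefix;
        }
        if (real >= prefix && real < prefix + 0x2000) {
            return real - prefix;
        }
        return real;
    }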

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-4-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand fe00c705fe s390x/tcg: fix ignoring bit 63 when setting the storage key in SSKE
Right now we could set an 8-bit storage key via SSKE and retrieve it
again via ISKE, which is against the architecture description:

SSKE:
"
The new seven-bit storage-key value, or selected bits
thereof, is obtained from bit positions 56-62 of
general register R1. The contents of bit positions 0-55
and 63 of the register are ignored.
"

ISKE:
"
The seven-bit storage key is inserted in bit positions
56-62 of general register R1, and bit 63 is set to zero.
"

Let's properly ignore bit 63 to create the correct seven-bit storage key.
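
In big-endian bit numbering, bits 56-62 are the upper seven bits of the
register's low byte and bit 63 is its least-significant bit, so the
extraction boils down to masking that bit off. A standalone sketch (not the
QEMU code):

    #include <stdint.h>

    /* SSKE: the new key comes from bits 56-62 of r1; bits 0-55 and 63 are
     * ignored.  In the usual value view that is the low byte with its
     * least-significant bit cleared. */
    static uint8_t sske_key_from_reg(uint64_t r1)
    {
        return (uint8_t)(r1 & 0xfe);
    }

    /* ISKE: insert the seven-bit key into bits 56-62 of r1 and zero bit 63. */
    static uint64_t iske_insert_key(uint64_t r1, uint8_t key)
    {
        return (r1 & ~(uint64_t)0xff) | (key & 0xfe);
    }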

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-3-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:05 +02:00
David Hildenbrand 634a0b51cb s390x/tcg: wrap address for RRBE
Let's wrap the address just like for SSKE and ISKE.

Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20210903155514.44772-2-david@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:24:04 +02:00
David Hildenbrand 0dd05d0606 s390x/ioinst: Fix wrong MSCH alignment check on little endian
schib->pmcw.chars is 32-bit, not 16-bit. This fixes the kvm-unit-tests
"css" test, which fails with:

  FAIL: Channel Subsystem: measurement block format1: Unaligned MB origin:
  Program interrupt: expected(21) == received(0)

This is because we end up not injecting an operand program exception.

Fixes: a54b8ac340 ("css: SCHIB measurement block origin must be aligned")
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Pierre Morel <pmorel@linux.ibm.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Message-Id: <20210805143753.86520-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:23:22 +02:00
David Hildenbrand 6b01606f0e s390x/tcg: fix and optimize SPX (SET PREFIX)
We not only invalidate the translation of the range 0x0-0x2000, but also
the translation of the new prefix range and the translation of the old
prefix range, because real2abs would return different results for all of
these ranges when changing the prefix location.

This fixes the kvm-unit-tests "edat" test that just hangs before this
patch because we end up clearing the new prefix area instead of the old
prefix area.

While at it, let's not do anything in case the prefix doesn't change.

Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: qemu-s390x@nongnu.org
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Message-Id: <20210805125938.74034-1-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
2021-09-06 16:23:16 +02:00
Shuuichirou Ishii e31c70ac04 target-arm: Add support for Fujitsu A64FX
Add a definition for the Fujitsu A64FX processor.

The A64FX processor does not implement the AArch32 Execution state,
so there are no associated AArch32 Identification registers.

For SVE, the A64FX processor supports only 128, 256 and 512-bit vector
lengths.

The Identification register values are defined based on the FX700,
and have been tested and confirmed.

Signed-off-by: Shuuichirou Ishii <ishii.shuuichir@fujitsu.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:18 +01:00
Peter Maydell d4cc1c2196 target/arm: Enable MVE in Cortex-M55
We now have a complete MVE emulation, so we can enable it in our
Cortex-M55 model by setting the ID registers to match those of a
Cortex-M55 with full MVE support.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:18 +01:00
Peter Maydell 98e40fbd79 target/arm: Implement MVE VRINT insns
Implement the MVE VRINT insns, which round floating point inputs
to integer values, leaving them in floating point format.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 73d260db3c target/arm: Implement MVE VCVT between single and half precision
Implement the MVE VCVT instruction which converts between single
and half precision floating point.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 53fc5f6139 target/arm: Implement MVE VCVT with specified rounding mode
Implement the MVE VCVT which converts from floating-point to integer
using a rounding mode specified by the instruction.  We implement
this similarly to the Neon equivalents, by passing the required
rounding mode as an extra integer parameter to the helper functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 2ec0dcf034 target/arm: Implement MVE VCVT between fp and integer
Implement the MVE "VCVT (between floating-point and integer)" insn.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 2a4b939cf8 target/arm: Implement MVE VCVT between floating and fixed point
Implement the MVE VCVT insns which convert between floating and fixed
point.  As with the Neon equivalents, these use essentially the same
constant encoding as right-shift-by-immediate.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell c2d8f6bb28 target/arm: Implement MVE fp scalar comparisons
Implement the MVE fp scalar comparisons VCMP and VPT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell c87fe6d28c target/arm: Implement MVE fp vector comparisons
Implement the MVE fp vector comparisons VCMP and VPT.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 29f80e7d83 target/arm: Implement MVE FP max/min across vector
Implement the MVE VMAXNMV, VMINNMV, VMAXNMAV, VMINNMAV insns.  These
calculate the maximum or minimum of floating point elements across a
vector, starting with a value in a general purpose register and
returning the result there.

The pseudocode silences a possible SNaN in the accumulating result
on every iteration (by calling FPConvertNaN), but we do it only
on the input ra, because if none of the inputs to float*_maxnum
or float*_minnum are SNaNs then the result can't be an SNaN.

Note that we can't use the float*_maxnuma() etc functions we defined
earlier for VMAXNMA and VMINNMA, because we mustn't take the absolute
value of the starting general-purpose register value, which could be
negative.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell 4773e74e5f target/arm: Implement MVE fp-with-scalar VFMA, VFMAS
Implement the MVE fp-with-scalar VFMA and VFMAS insns.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:17 +01:00
Peter Maydell abfe39b263 target/arm: Implement MVE scalar fp insns
Implement the MVE scalar floating point insns VADD, VSUB and VMUL.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell 90257a4f35 target/arm: Implement MVE VMAXNMA and VMINNMA
Implement the MVE VMAXNMA and VMINNMA insns; these are 2-operand, but
the destination register must be the same as one of the source
registers.

We defer the decode of the size in bit 28 to the individual insn
patterns rather than doing it in the format, because otherwise we
would have a single insn pattern that overlapped with two groups (eg
VMAXNMA with the VMULH_S and VMULH_U groups). Having two insn
patterns per insn seems clearer than a complex multilevel nesting
of overlapping and non-overlapping groups.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell d3cd965c84 target/arm: Implement MVE VCMUL and VCMLA
Implement the MVE VCMUL and VCMLA insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell 3173c0dd93 target/arm: Implement MVE VFMA and VFMS
Implement the MVE VFMA and VFMS insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell 104afc68cf target/arm: Implement MVE VCADD
Implement the MVE VCADD insn.  Note that here the size bit is the
opposite sense to the other 2-operand fp insns.

We don't check for the sz == 1 && Qd == Qm UNPREDICTABLE case,
because that would mean we can't use the DO_2OP_FP macro in
translate-mve.c.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell 82af0153d3 target/arm: Implement MVE VSUB, VMUL, VABD, VMAXNM, VMINNM
Implement more simple 2-operand floating point MVE insns.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Peter Maydell 1e35cd9166 target/arm: Implement MVE VADD (floating-point)
Implement the MVE VADD (floating-point) insn.  Handling of this is
similar to the 2-operand integer insns, except that we must take care
to only update the floating point exception status if the least
significant bit of the predicate mask for each element is active.
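
A toy model of that rule (simplified to whole 32-bit lanes; the helper name
and the use of the host FP environment are illustrative, not the
QEMU/softfloat implementation):

    #include <fenv.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Lanewise float add under a 16-bit MVE-style predicate (one bit per byte,
     * so bit 4*i is the least-significant predicate bit of 32-bit lane i).
     * Exception flags are folded into *accum_excepts only for active lanes. */
    static void vadd_f32_masked(float d[4], const float a[4], const float b[4],
                                uint16_t mask, int *accum_excepts)
    {
        for (int i = 0; i < 4; i++) {
            bool active = mask & (1u << (i * 4));
            feclearexcept(FE_ALL_EXCEPT);
            float r = a[i] + b[i];
            if (active) {
                *accum_excepts |= fetestexcept(FE_ALL_EXCEPT);
                d[i] = r;
            }
            /* inactive lanes: result and flags are both discarded */
        }
    }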

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2021-09-01 11:08:16 +01:00
Richard Henderson 8e034ae44d target/riscv: Use {get,dest}_gpr for RVV
Remove gen_get_gpr, as the function becomes unused.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-25-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson f33960df5b target/riscv: Tidy trans_rvh.c.inc
Exit early if check_access fails.
Split out do_hlv, do_hsv, do_hlvx subroutines.
Use dest_gpr, get_gpr in the new subroutines.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-24-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson 7976837f9a target/riscv: Use {get,dest}_gpr for RVD
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-23-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson 75234a2843 target/riscv: Use {get,dest}_gpr for RVF
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-22-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson 6922eee6ac target/riscv: Use gen_shift_imm_fn for slli_uw
Always use tcg_gen_deposit_z_tl; the special case for
shamt >= 32 is handled there.
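
For reference, the architectural operation being generated (a plain C sketch,
not the TCG code): zero-extend rs1[31:0] and shift left, which stays
well-defined even for shamt >= 32:

    #include <stdint.h>

    /* slli.uw rd, rs1, shamt (RV64, Zba): rd = zext32(rs1) << shamt, 0 <= shamt <= 63 */
    static uint64_t slli_uw(uint64_t rs1, unsigned shamt)
    {
        return (uint64_t)(uint32_t)rs1 << shamt;
    }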

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-21-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson cce762a75e target/riscv: Use {get,dest}_gpr for RVA
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-20-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson a974879b45 target/riscv: Reorg csr instructions
Introduce csrr and csrw helpers, for read-only and write-only insns.

Note that we do not properly implement this in riscv_csrrw, in that
we cannot distinguish true read-only (rs1 == 0) from a zero
write_mask coming from another source register -- the latter should
still raise an exception for read-only registers.

Only issue gen_io_start for CF_USE_ICOUNT.
Use ctx->zero for csrrc.
Use get_gpr and dest_gpr.

Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-19-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00
Richard Henderson 377cbb4bdb target/riscv: Fix hgeie, hgeip
We failed to write into *val for these read functions;
replace them with read_zero.  Only warn about an unsupported
non-zero value when writing a non-zero value.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Bin Meng <bmeng.cn@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-id: 20210823195529.560295-18-richard.henderson@linaro.org
Signed-off-by: Alistair Francis <alistair.francis@wdc.com>
2021-09-01 11:59:12 +10:00