Compare commits

...

109 Commits

Author SHA1 Message Date
Michael Roth e22f675bdd Update version for 2.12.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-08-02 15:51:06 -05:00
Fam Zheng aae299a68d file-posix: Handle EINTR in preallocation=full write
Cc: qemu-stable@nongnu.org
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a1c81f4f16)
 Conflicts:
	block/file-posix.c
* avoid dep on 93f4e2ff by adding check to raw_regular_truncate instead
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-31 11:54:59 -05:00
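A minimal sketch of the retry-on-EINTR pattern this fix applies, in plain
POSIX terms (hypothetical helper name; the real change lives in
raw_regular_truncate() in block/file-posix.c):

    /* Assumes <unistd.h> and <errno.h>. Retry a preallocation write
     * that was interrupted by a signal instead of failing it. */
    static ssize_t pwrite_retry_eintr(int fd, const void *buf,
                                      size_t count, off_t offset)
    {
        ssize_t result;
        do {
            result = pwrite(fd, buf, count, offset);
        } while (result < 0 && errno == EINTR);
        return result;
    }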
KONRAD Frederic b102aea574 qcow: fix a reference leak
Since 42a3e1ab36 qemu asserts when using the
vvfat driver:

git clone git://qemu.org/qemu.git
cd qemu
./configure --target-list=ppc-softmmu --enable-debug
make -j8
mkdir foo
touch foo/hello
./ppc-softmmu/qemu-system-ppc -M prep --nographic --monitor null             \
                              -hda fat:rw:./foo

"Ctrl-C"

qemu-system-ppc: block.c:3368: bdrv_close_all: Assertion                     \
   `((&all_bdrv_states)->tqh_first == ((void *)0))' failed.

This is because we reference bs twice in qcow_co_create(..): once in
bdrv_open_blockdev_ref(..) and once in blk_insert_bs(..), but we unref it
only once in blk_unref(), which leads to the reference leak.

Note that I didn't test QCOW much after this change, as I don't use it much.

Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 41b6513436)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-31 11:46:31 -05:00
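The leak, sketched with the calls named above (illustrative only, error
handling omitted): two calls each take a reference, so two releases are
needed:

    bs = bdrv_open_blockdev_ref(ref, errp);   /* takes reference #1 */
    blk_insert_bs(blk, bs, errp);             /* takes reference #2 */
    /* ... */
    blk_unref(blk);                           /* releases reference #2 */
    bdrv_unref(bs);                           /* the fix: release #1 too */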
Christian Borntraeger 336cd382dc s390x/sclp: fix maxram calculation
We clamp down ram_size to match the sclp increment size. We do
not do the same for maxram_size, which means that for large guests
with some sizes (e.g. -m 50000) maxram_size differs from ram_size.
This can break other code (e.g. CMMA migration) which uses maxram_size
to calculate the number of pages and then reports errors.

Fixes: 82fab5c5b9 ("s390x/sclp: remove memory hotplug support")
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: qemu-stable@nongnu.org
CC: David Hildenbrand <david@redhat.com>
Message-Id: <1532959766-53343-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 408e5ace51)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-31 11:27:16 -05:00
Marc-André Lureau bf1cb819e9 qga: process_event() simplification and leak fix
json_parser_parse_err() may return something other than a QDict, in
which case we lose the object. Let's keep track of the original
object to avoid leaks.

When an error occurs, "qdict" contains the response, but we still
check the "execute" key there. Untangle this code a bit by having a
clear error path.

CC: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit ae7da1e5f6)
* drop context dep on d43b16945a
* drop functional dep on cb3e7f08ae
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-24 17:47:40 -05:00
Markus Armbruster 08c4a51c65 qmp: De-duplicate error response building
All callers of qmp_build_error_object() duplicate the code to wrap it
in a response object.  Replace it by qmp_error_response() that
captures the duplicated code, including error_free().

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-23-armbru@redhat.com>
(cherry picked from commit cee32796ca)
 Conflicts:
	include/qapi/qmp/dispatch.h
	qapi/qmp-dispatch.c
	qga/main.c
* drop context dep on cb3e7f08ae
* prereq for ae7da1e5f6
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-24 15:45:47 -05:00
Markus Armbruster 441784598e qobject: New qdict_from_jsonf_nofail()
Many uses of qobject_from_jsonf() convert JSON objects.  Create new
convenience function qdict_from_jsonf_nofail() that includes the
conversion to QDict.  The next few commits will put it to use.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180703085358.13941-22-armbru@redhat.com>
(cherry picked from commit a193352ff9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-24 15:05:01 -05:00
Michael Roth 2f36efaeb1 update s390-ccw.img for stable
-----BEGIN PGP SIGNATURE-----
 
 iQJGBAABCAAwFiEEw9DWbcNiT/aowBjO3s9rk8bwL68FAltXBzkSHGNvaHVja0By
 ZWRoYXQuY29tAAoJEN7Pa5PG8C+vf3sP/RWYs2DD2eacDvj4DEBu1Bf+Ov0bOymi
 J59AJ6H5psa4aI9gDjOj7UNjutuJOTw29wyjoIowtOrd8PArmlVzgK6JEBXzVQIE
 bsLLYw67/tAxW+Gnb3W1CHVKj5dWBQtLhEAPWvSNiz7ls++MJZzMNBwbkPkdIJv4
 xlJxooY2wHLH5ve7wdhmQA8OXeawhdDmNB/SCk5UdyiQCRVIXHTcQP6pMJgOHqc6
 kvu38tmCksa/4O4Du3RtabcNpmDACdWcStZlkMyehucySTzZ6GbU5ZamEuWApICy
 vslmZhChLwDzbQrHSO2v9fVDMdpMhUM3m4ODOlSBwi5bcD7WcHej6KiA+fC33B0K
 Gs2wpuncgYkk9t+cEZ9PCZ5emKoyM+NeD/N1fPOuJQCLtCpYN+BmcAIjGzuKzcqx
 K15dEHA8vc69ROh8WTLnvr5ZoslYQZDYtOCtKnNGHJkOFAQqL4RBBr1B6siHGgXF
 3Wx8yUMF45mwLBAEaFCar1HV33qPyhz5LW4B40vOevFNbdwCw9UwX70LGFeIVw5Z
 zjEhWcMTL1on5V79ZaZgIOlWY+KKas7CYkR44dKfUfAsBk/2iExwW4BrgqeQaEkX
 znf+RnzpRGnoAI6olu+x0fl09LDmV//2/ejJ3JRih1U7xrpIpR0qpcwEHVjkUa2e
 xeQSrQIf9Nrv
 =7EV0
 -----END PGP SIGNATURE-----

Merge tag 's390x-20180724-212-stable' into stable-2.12-staging

update s390-ccw.img for stable
2018-07-24 14:12:14 -05:00
Marc-André Lureau 90b2d94123 ccid-card-passthru: fix regression in realize()
Since cc847bfd16, CCID card-passthru
fails to initialize, because that commit changed a debug line to an error,
probably by mistake. Change it back to a DPRINTF debug.

(solves VMs created by Boxes with smartcard passthrough failing to start)

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180515153039.27514-1-marcandre.lureau@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit e58d64a16a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-24 14:08:46 -05:00
Cornelia Huck c16427177a pc-bios/s390-ccw.img: update image for stable
Contains the following commits:
- s390-ccw: force diag 308 subcode to unsigned long
- pc-bios/s390-ccw: struct tpi_info must be declared as aligned(4)

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
2018-07-24 06:08:42 -04:00
Richard Henderson e8488edcb3 tcg/i386: Mark xmm registers call-clobbered
When host vector registers and operations were introduced, I failed
to mark the registers call clobbered as required by the ABI.

Fixes: 770c2fc7bb
Cc: qemu-stable@nongnu.org
Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 672189cd58)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-23 14:24:17 -05:00
Peter Lieven 3afe55ff38 qemu-img: avoid overflow of min_sparse parameter
The min_sparse convert parameter can overflow (e.g. -S 1024G)
in the conversion from int64_t to int, resulting in a negative
min_sparse parameter. Avoid this by limiting the valid parameters
to sane values. In fact, anything exceeding the convert buffer size
is also pointless. While at it, also forbid values that are not a
multiple of 512 to avoid undesired behaviour. For instance, values
between 1 and 511 were legal, but resulted in full allocation.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6360ab278c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-23 14:23:30 -05:00
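A sketch of the kind of validation described (variable names and the exact
upper bound are illustrative; the real check sits in qemu-img.c's convert
option parsing, where MAX_BUF_SECTORS stands in for "the convert buffer
size" mentioned above):

    int64_t sval = cvtnum(optarg);            /* user-supplied -S value */
    if (sval < 0 || sval % BDRV_SECTOR_SIZE ||
        sval / BDRV_SECTOR_SIZE > MAX_BUF_SECTORS) {
        error_report("Invalid minimum sparse size '%s'", optarg);
        return 1;
    }
    min_sparse = sval / BDRV_SECTOR_SIZE;     /* now safely fits in an int */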
Yunjian Wang 1b817abcd4 tap: fix memory leak on success to create a tap device
A memory leak occurs on success when creating a tap device. Also, nfds
and nvhosts may not be the same and need to be processed separately.

Fixes: 07825977 ("tap: fix memory leak on failure to create a multiqueue tap device")
Fixes: 264986e2 ("tap: multiqueue support")
Cc: qemu-stable@nongnu.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 323e7c1177)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-23 14:20:45 -05:00
Emilio G. Cota 0935356e43 target/ppc: set is_jmp on ppc_tr_breakpoint_check
The use of GDB breakpoints was broken by b0c2d52 ("target/ppc: convert
to TranslatorOps", 2018-02-16).

Fix it by setting is_jmp, so that we break from the translation loop
as originally intended.

Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 2a8ceefca2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 16:13:24 -05:00
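The fix amounts to one assignment in the breakpoint hook (a sketch; is_jmp
lives in the common DisasContextBase used by TranslatorOps):

    /* In ppc_tr_breakpoint_check() (sketch): */
    gen_debug_exception(ctx);
    dcbase->is_jmp = DISAS_NORETURN;   /* the fix: exit the translation loop */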
Pankaj Gupta d109f8eb7e virtio-rng: process pending requests on DRIVER_OK
The virtio-rng device causes old guest kernels (2.6.32) to hang on latest qemu.
The driver attempts to read from the virtio-rng device too early in its
initialization. Qemu detects the guest is not ready and returns, resulting
in a hang.

To fix this, handle pending requests when the guest is running and the driver
status is set to 'VIRTIO_CONFIG_S_DRIVER_OK'.

CC: qemu-stable@nongnu.org
Reported-by: Sergio lopez <slopezpa@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Pankaj Gupta <pagupta@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 5d9c9ea22a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 16:11:30 -05:00
Eric Blake 2379ac134a iscsi: Avoid potential for get_status overflow
Detected by Coverity: Multiplying two 32-bit ints and assigning
the result to a 64-bit number is a risk of overflow.  Prior to
the conversion to byte-based interfaces, the block layer took
care of ensuring that a status request never exceeded 2G in
the driver; but after that conversion, the block layer expects
drivers to deal with any size request (the driver can always
truncate the request size back down, as long as it makes
progress).  So, in the off-chance that someone makes a large
request, we are at the mercy of whether iscsi_get_lba_status_task()
will cap things to at most INT_MAX / iscsilun->block_size when
it populates lbasd->num_blocks; since I could not easily audit
that, it's better to be safe than sorry by just forcing a 64-bit
multiply.

Fixes: 92809c36
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180508212718.1482663-1-eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
(cherry picked from commit 8ee1cef459)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 16:10:48 -05:00
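In diff form, the forced 64-bit multiply described above (a sketch;
variable names follow the commit message):

    -    *pnum = lbasd->num_blocks * iscsilun->block_size;
    +    *pnum = (int64_t) lbasd->num_blocks * iscsilun->block_size;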
Eric Blake f8b3b02933 nbd/server: Reject 0-length block status request
The NBD spec says that behavior is unspecified if the client
requests 0 length for block status; but since the structured
reply is documented as returning a non-zero length, it's
easier to just diagnose this with an EINVAL error than to
figure out what to return.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180621124937.166549-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit d8b20291cb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:51:25 -05:00
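A sketch of the resulting check in the request handler (the reply helper
name is an assumption about this era's nbd/server.c; treat as
illustrative):

    if (request->type == NBD_CMD_BLOCK_STATUS && !request->len) {
        return nbd_send_generic_reply(client, request->handle, -EINVAL,
                                      "need non-zero length", errp);
    }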
Richard Henderson 78747264b9 tcg: Reduce max TB opcode count
Also, assert that we don't overflow either of two different offsets into
the TB. Both unwind and goto_tb record a uint16_t for later use.

This fixes an arm-softmmu test case utilizing NEON in which there is
a TB generated that runs to 7800 opcodes, and compiles to 96k on an
x86_64 host.  This overflows the 16-bit offset in which we record the
goto_tb reset offset.  Because of that overflow, we install a jump
destination that goes to neverland.  Boom.

With this reduced op count, the same TB compiles to about 48k for
aarch64, ppc64le, and x86_64 hosts, and neither assertion fires.

Cc: qemu-stable@nongnu.org
Reported-by: "Jason A. Donenfeld" <Jason@zx2c4.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 9f75462065)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:44:56 -05:00
Vladimir Sementsov-Ogievskiy d8a7ec1deb migration/block-dirty-bitmap: fix dirty_bitmap_load
The dirty_bitmap_load_header return code is obtained but not handled. Fix
this.

Bug was introduced in b35ebdf076
"migration: add postcopy migration of dirty bitmaps" with the whole
function.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180530112424.204835-1-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit a36f6ff46f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:43:20 -05:00
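The fix is simply to propagate the result (sketch):

    ret = dirty_bitmap_load_header(f, &s);
    if (ret < 0) {
        return ret;   /* previously the return code was silently dropped */
    }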
Alex Williamson 2cb041a82d vfio/pci: Default display option to "off"
Commit a9994687cb ("vfio/display: core & wireup") added display
support to vfio-pci with the default being "auto", which breaks
existing VMs when the vGPU requires GL support but had no previous
requirement for a GL compatible configuration.  "Off" is the safer
default as we impose no new requirements to VM configurations.

Fixes: a9994687cb ("vfio/display: core & wireup")
Cc: qemu-stable@nongnu.org
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit 8151a9c56d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:15:01 -05:00
Olaf Hering 6d3ed3798b replace functions which are only available in glib-2.24
Currently the minimal supported version of glib is 2.22.
Since testing is done with a glib that claims to be 2.22, but in fact
has APIs from newer versions of glib, this bug was not caught during
submission of the patch referenced below.

Replace g_realloc_n, which is available only since 2.24, with g_renew.

Fixes commit 418026ca43 ("util: Introduce vfio helpers")

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
CC: qemu-stable@nongnu.org
(cherry picked from commit d29eb678bc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:12:05 -05:00
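The substitution is mechanical; g_renew() takes the element type where
g_realloc_n() takes an element size (variable names hypothetical):

    /* glib >= 2.24 only: */
    mappings = g_realloc_n(mappings, count, sizeof(IOVAMapping));
    /* glib 2.22-compatible equivalent: */
    mappings = g_renew(IOVAMapping, mappings, count);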
Kevin Wolf 58119514f5 nfs: Remove processed options from QDict
Commit c22a03454 QAPIfied option parsing in the NFS block driver, but
forgot to remove all the options we processed. Therefore, we get an
error in bdrv_open_inherit(), which thinks the remaining options are
invalid. Trying to open an NFS image will result in an error like this:

    Block protocol 'nfs' doesn't support the option 'server.host'

Remove all options from the QDict to make the NFS driver work again.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20180516160816.26259-1-kwolf@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit c82be42cc8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:10:19 -05:00
Marc-André Lureau 008ffc7a2f mux: fix ctrl-a b again
Commit fb5e19d2e1 originally fixed the
regression, but was inadvertently broken again in merge commit
2d6752d38d.

Fixes:
https://bugs.launchpad.net/qemu/+bug/1654137

Cc: qemu-stable@nongnu.org
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180515152500.19460-3-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit eeaa671505)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:03:03 -05:00
Philippe Mathieu-Daudé 5e10c00f61 hw/isa/superio: Fix inconsistent use of Chardev->be
4c3119a6e3 and cd9526ab7c introduced an incorrect and inconsistent
use of Chardev->be. Also, this CharBackend member is private and is
not supposed to be accessible.

Fix it by removing the inconsistent check.

Cc: qemu-stable@nongnu.org
Reported-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20180515152500.19460-2-f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
(cherry picked from commit d4c8fcd91a)
 Conflicts:
	hw/isa/isa-superio.c
* avoid context dep on 9bca0edb28
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 11:02:03 -05:00
Alex Bennée ca11f0ab77 target/arm: Fix sqrt_f16 exception raising
We are meant to explicitly pass fpst, not cpu_env.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-12-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 905edee910)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:52:03 -05:00
Alex Bennée ffc3a15018 target/arm: Implement FMOV (immediate) for fp16
All the hard work is already done by vfp_expand_imm, we just need to
make sure we pick up the correct size.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-11-richard.henderson@linaro.org
[rth: Merge unallocated_encoding check with TCGMemOp conversion.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

(cherry picked from commit 6ba28ddb9b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:51:54 -05:00
Alex Bennée f3816879f9 target/arm: Implement FCSEL for fp16
These were missed out from the rest of the half-precision work.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-10-richard.henderson@linaro.org
[rth: Fix erroneous check vs type]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

(cherry picked from commit ace97feef3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:51:44 -05:00
Alex Bennée 246dad2f3c target/arm: Implement FCMP for fp16
These were missed out from the rest of the half-precision work.

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180512003217.9105-9-richard.henderson@linaro.org
[rth: Diagnose lack of FP16 before fp_access_check]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

(cherry picked from commit 7a1929256e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:51:31 -05:00
Richard Henderson 0819a17250 target/arm: Implement FP data-processing (3 source) for fp16
We missed all of the scalar fp16 fma operations.

Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-8-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 95f9864fde)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:51:20 -05:00
Richard Henderson 7133cd4cfe target/arm: Implement FP data-processing (2 source) for fp16
We missed all of the scalar fp16 binary operations.

Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit b8f5171cf0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:51:05 -05:00
Richard Henderson d1ed4a60ba target/arm: Introduce and use read_fp_hreg
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 3d99d93126)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:50:48 -05:00
Richard Henderson 7c38f3703d target/arm: Implement FCVT (scalar, fixed-point) for fp16
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 2752728016)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:50:35 -05:00
Richard Henderson baa552e54f target/arm: Implement FCVT (scalar, integer) for fp16
Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 564a063250)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:49:18 -05:00
Richard Henderson 4ec6a17a04 target/arm: Implement FMOV (general) for fp16
Adding the fp16 moves to/from general registers.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180512003217.9105-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 68130236e3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:48:36 -05:00
Petr Tesarik 781cde6d94 fpu/softfloat: Fix conversion from uint64 to float128
The significand is passed to normalizeRoundAndPackFloat128() as high
first, low second. The current code passes the integer first, so the
result is incorrectly shifted left by 64 bits.

This bug affects the emulation of s390x instruction CXLGBR (convert
from logical 64-bit binary-integer operand to extended BFP result).

Cc: qemu-stable@nongnu.org
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Petr Tesarik <ptesarik@suse.com>
Message-Id: <20180511071052.1443-1-ptesarik@suse.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 6603d50648)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:47:49 -05:00
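In diff form, the swap described above (a sketch of uint64_to_float128();
the exponent argument is elided as zExp). The integer a belongs in the low
half of the significand (zSig1), with the high half (zSig0) zero:

    -    return normalizeRoundAndPackFloat128(0, zExp, a, 0, status);
    +    return normalizeRoundAndPackFloat128(0, zExp, 0, a, status);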
Richard Henderson e5af958dd2 target/arm: Clear SVE high bits for FMOV
Use write_fp_dreg and clear_vec_high to zero the bits
that need zeroing for these cases.

Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 9a9f1f5952)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:47:00 -05:00
Richard Henderson c708ce7d6e target/arm: Fix float16 to/from int16
The instruction "ucvtf v0.4h, v04h, #2", with input 0x8000u,
overflows the intermediate float16 to infinity before we have a
chance to scale the output.  Use float64 as the intermediate type
so that no input argument (uint32_t in this case) can overflow
or round before scaling.  Given the declared argument, the signed
int32_t function has the same problem.

When converting from float16 to integer, using u/int32_t instead
of u/int16_t means that the bounding is incorrect.

Cc: qemu-stable@nongnu.org
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 88808a022c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:46:52 -05:00
Richard Henderson 0aaf1cca02 target/arm: Implement vector shifted FCVT for fp16
While we have some of the scalar paths for FCVT for fp16,
we failed to decode the fp16 version of these instructions.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-3-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit d0ba8e74ac)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:46:40 -05:00
Richard Henderson 994b0cf997 target/arm: Implement vector shifted SCVF/UCVF for fp16
While we have some of the scalar paths for *CVF for fp16,
we failed to decode the fp16 version of these instructions.

Cc: qemu-stable@nongnu.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180502221552.3873-2-richard.henderson@linaro.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit a6117fae45)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:46:30 -05:00
Peter Maydell e653eee8d8 fpu/softfloat: Don't set Invalid for float-to-int(MAXINT)
In float-to-integer conversion, if the floating point input
converts exactly to the largest or smallest integer that
fits into the result type, this is not an overflow.
In this situation we were producing the correct result value,
but were incorrectly setting the Invalid flag.
For example for Arm A64, "FCVTAS w0, d0" on an input of
0x41dfffffffc00000 should produce 0x7fffffff and set no flags.

Fix the boundary case to take the right half of the if()
statements.

This fixes a regression from 2.11 introduced by the softfloat
refactoring.

Cc: qemu-stable@nongnu.org
Fixes: ab52f973a5
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180510140141.12120-1-peter.maydell@linaro.org
(cherry picked from commit 333583757c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:45:23 -05:00
Peter Maydell fbaeb1068c target/arm: Fix fp_status_f16 tininess before rounding
In commit d81ce0ef2c we added an extra float_status field
fp_status_fp16 for Arm, but forgot to initialize it correctly
by setting it to float_tininess_before_rounding. This currently
will only cause problems for the new V8_FP16 feature, since the
float-to-float conversion code doesn't use it yet. The effect
would be that we failed to set the Underflow IEEE exception flag
in all the cases where we should.

Add the missing initialization.

Fixes: d81ce0ef2c
Cc: qemu-stable@nongnu.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20180512004311.9299-16-richard.henderson@linaro.org
(cherry picked from commit bcc531f036)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:43:45 -05:00
John Snow 0779afdc89 blockjob: expose error string via query
When we've reached the concluded state, we need to expose the error
state if applicable. Add the new field.

This should be sufficient for determining if a job completed
successfully or not after concluding; if we want to discriminate
based on how it failed more mechanically, we can always add an
explicit return code enumeration later.

I didn't bother to make it only show up if we are in the concluded
state; I don't think it's necessary.

Cc: qemu-stable@nongnu.org
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ab9ba61455)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:35:53 -05:00
Michael Clark 4a67f4a953 RISC-V: Minimal QEMU 2.12 fix for sifive_u machine
The 'sifive_u' board has a bug where the ROM is
created as RAM at the wrong address and marked
readonly. The bug renders the board unusable.
This is a minimal fix and allows booting Linux.

5aec3247c1
"RISC-V: Mark ROM read-only after copying in code"
contains a comprehensive fix using the ROM APIs
memory_region_init_rom and rom_add_blob_fixed_as
which could be backported.

Cc: Sagar Karandikar <sagark@eecs.berkeley.edu>
Cc: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Alistair Francis <Alistair.Francis@wdc.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Clark <mjc@sifive.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-18 10:32:06 -05:00
Richard Henderson 9363c34825 tcg: Limit the number of ops in a TB
In 6001f7729e we partially attempt to address the branch
displacement overflow caused by 15fa08f845.

However, gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/vqtbX.c
is a testcase that contains a TB so large as to overflow anyway.
The limit here of 8000 ops produces a maximum output TB size of
24112 bytes on a ppc64le host with that test case.  This is still
much less than the maximum forward branch distance of 32764 bytes.

Cc: qemu-stable@nongnu.org
Fixes: 15fa08f845 ("tcg: Dynamically allocate TCGOps")
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit abebf92597)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
Peter Maydell 51d5decb32 softfloat: Handle default NaN mode after pickNaNMulAdd, not before
It is implementation defined whether a multiply-add of
(0,inf,qnan) or (inf,0,qnan) raises InvalidOperation or
not, so we let the target-specific pickNaNMulAdd function
handle this. This means that we must do the "return the
default NaN in default NaN mode" check after the call,
not before. Correct the ordering, and restore the comment
from the old propagateFloat64MulAddNaN() that warned about
this corner case.

This fixes a regression from 2.11 for Arm guests where we would
incorrectly fail to set the Invalid flag for these cases.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20180504100547.14621-1-peter.maydell@linaro.org
(cherry picked from commit 1839189bbf)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
Peter Maydell 0e4b4b4fd3 tcg/i386: Fix dup_vec in non-AVX2 codepath
The VPUNPCKLD* instructions are all "non-destructive source",
indicated by "NDS" in the encoding string in the x86 ISA manual.
This means that they take two source operands, one of which is
encoded in the VEX.vvvv field. We were incorrectly treating them
as if they were destructive-source and passing 0 as the 'v'
argument of tcg_out_vex_modrm(). This meant we were always
using %xmm0 as one of the source operands, causing incorrect
results if the register allocator happened to want to use
something else. For instance the input AArch64 insn:
 DUP v26.16b, w21
which becomes TCG IR ops:
 dup_vec v128,e8,tmp2,x21
 st_vec v128,e8,tmp2,env,$0xa40
was assembled to:
0x607c568c:  c4 c1 7a 7e 86 e8 00 00  vmovq    0xe8(%r14), %xmm0
0x607c5694:  00
0x607c5695:  c5 f9 60 c8              vpunpcklbw %xmm0, %xmm0, %xmm1
0x607c5699:  c5 f9 61 c9              vpunpcklwd %xmm1, %xmm0, %xmm1
0x607c569d:  c5 f9 70 c9 00           vpshufd  $0, %xmm1, %xmm1
0x607c56a2:  c4 c1 7a 7f 8e 40 0a 00  vmovdqu  %xmm1, 0xa40(%r14)
0x607c56aa:  00

when the vpunpcklwd insn should be "%xmm1, %xmm1, %xmm1".
This resulted in our incorrectly setting the output vector to
q26=0000320000003200:0000320000003200
when given an input of x21 == 0000000002803200
rather than the expected all-zeroes.

Pass the correct source register number to tcg_out_vex_modrm()
for these insns.

Fixes: 770c2fc7bb
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180504153431.5169-1-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 7eb30ef0ba)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
Eric Blake 6951158023 nbd/client: Relax handling of large NBD_CMD_BLOCK_STATUS reply
The NBD spec is proposing a relaxation of NBD_CMD_BLOCK_STATUS
where a server may have the final extent per context give a
length beyond the original request, if it can easily prove that
subsequent bytes have the same status, on the grounds that a
client can take advantage of this information for fewer block
status requests.  Since qemu 2.12 as a client always sends
NBD_CMD_FLAG_REQ_ONE, and rejects a server that sends extra
length, the upstream NBD spec will probably limit this behavior
to clients that don't request REQ_ONE semantics; but it doesn't
hurt to relax qemu to always be permissive of this server
behavior, even if it continues to use REQ_ONE.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180503222626.1303410-1-eblake@redhat.com>
Reviewed-by:  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit acfd8f7a5f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
KONRAD Frederic b129914a8d riscv: requires libfdt
When compiling on a machine without libfdt installed, the configure script
should try to get libfdt from the git submodule or die, because otherwise
CONFIG_LIBFDT is not set and the build process ends in an error in the link
phase, e.g.:

hw/riscv/virt.o: In function `riscv_virt_board_init':
qemu/src/hw/riscv/virt.c:317: undefined reference to `qemu_fdt_setprop_cell'
qemu/src/hw/riscv/virt.c:319: undefined reference to `qemu_fdt_setprop_cell'
qemu/src/hw/riscv/virt.c:345: undefined reference to `qemu_fdt_dumpdtb'
collect2: error: ld returned 1 exit status
make[1]: *** [qemu-system-riscv64] Error 1
make: *** [subdir-riscv64-softmmu] Error 2

Cc: qemu-stable@nongnu.org
Reviewed-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: Michael Clark <mjc@sifive.com>

Message-Id: <1525360636-18229-4-git-send-email-frederic.konrad@adacore.com>
(cherry picked from commit a666409f0d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
KONRAD Frederic db6f66eff7 riscv: htif: increase the priority of the htif subregion
The htif device is supposed to be mapped over another subregion, so increase
its priority to one to avoid any conflict.

Here is the output of info mtree:

Before:
(qemu) info mtree
 address-space: memory
   0000000000000000-ffffffffffffffff (prio 0, i/o): system
     0000000000000000-000000000000000f (prio 0, i/o): riscv.htif.uart
     0000000000000000-0000000000011fff (prio 0, ram): riscv.spike.bootrom
     0000000002000000-000000000200ffff (prio 0, i/o): riscv.sifive.clint
     0000000080000000-0000000087ffffff (prio 0, ram): riscv.spike.ram

 address-space: I/O
   0000000000000000-000000000000ffff (prio 0, i/o): io

 address-space: cpu-memory-0
   0000000000000000-ffffffffffffffff (prio 0, i/o): system
     0000000000000000-000000000000000f (prio 0, i/o): riscv.htif.uart
     0000000000000000-0000000000011fff (prio 0, ram): riscv.spike.bootrom
     0000000002000000-000000000200ffff (prio 0, i/o): riscv.sifive.clint
     0000000080000000-0000000087ffffff (prio 0, ram): riscv.spike.ram

After:
 (qemu) info mtree
 address-space: memory
   0000000000000000-ffffffffffffffff (prio 0, i/o): system
     0000000000000000-000000000000000f (prio 1, i/o): riscv.htif.uart
     0000000000000000-0000000000011fff (prio 0, ram): riscv.spike.bootrom
     0000000002000000-000000000200ffff (prio 0, i/o): riscv.sifive.clint
     0000000080000000-0000000087ffffff (prio 0, ram): riscv.spike.ram

 address-space: I/O
   0000000000000000-000000000000ffff (prio 0, i/o): io

 address-space: cpu-memory-0
   0000000000000000-ffffffffffffffff (prio 0, i/o): system
     0000000000000000-000000000000000f (prio 1, i/o): riscv.htif.uart
     0000000000000000-0000000000011fff (prio 0, ram): riscv.spike.bootrom
     0000000002000000-000000000200ffff (prio 0, i/o): riscv.sifive.clint
     0000000080000000-0000000087ffffff (prio 0, ram): riscv.spike.ram

Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: Michael Clark <mjc@sifive.com>

Message-Id: <1525360636-18229-3-git-send-email-frederic.konrad@adacore.com>
(cherry picked from commit 6fad7d1893)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
KONRAD Frederic 26cf05c1a1 riscv: spike: allow base == 0
The sanity check on base doesn't allow htif to be mapped @0. Check if the
symbol exists instead so we can map it where we want.

Reviewed-by: Michael Clark <mjc@sifive.com>
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: Michael Clark <mjc@sifive.com>

Message-Id: <1525360636-18229-2-git-send-email-frederic.konrad@adacore.com>
(cherry picked from commit 17b9751e85)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:22 -05:00
Max Reitz 7bc615f88f iotests: Add test for cancelling a mirror job
We already have an extensive mirror test (041) which does cover
cancelling a mirror job, especially after it has emitted the READY
event.  However, it does not check what exact events are emitted after
block-job-cancel is executed.  More importantly, it does not use
throttling to ensure that it covers the case of block-job-cancel before
READY.

It would be possible to add this case to 041, but considering it is
already our largest test file, it makes sense to create a new file for
these cases.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180501220509.14152-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit dc885fff97)
 Conflicts:
	tests/qemu-iotests/group
* fix minor conflicts with test groups
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Max Reitz 1eddfab31c block/mirror: Make cancel always cancel pre-READY
Commit b76e4458b1 made the mirror block
job respect block-job-cancel's @force flag: With that flag set, it would
now always really cancel, even post-READY.

Unfortunately, it had a side effect: Without that flag set, it would now
never cancel, not even before READY.  Considering that is an
incompatible change and not noted anywhere in the commit or the
description of block-job-cancel's @force parameter, this seems
unintentional and we should revert to the previous behavior, which is to
immediately cancel the job when block-job-cancel is called before source
and target are in sync (i.e. before the READY event).

Cc: qemu-stable@nongnu.org
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1572856
Reported-by: Yanan Fu <yfu@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180501220509.14152-2-mreitz@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit eb36639f7b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Laszlo Ersek 3882183fda qapi: fill in CpuInfoFast.arch in query-cpus-fast
* Commit ca230ff33f added the @arch field to @CpuInfoFast, but it failed
  to set the new field in qmp_query_cpus_fast(), when TARGET_S390X was not
  defined. The updated @query-cpus-fast example in "qapi-schema.json"
  showed "arch":"x86" only because qmp_query_cpus_fast() calls g_malloc0()
  to allocate @CpuInfoFast, and the CPU_INFO_ARCH_X86 enum constant is
  generated with value 0.

  All @arch values other than @s390 implied the @CpuInfoOther sub-struct
  for @CpuInfoFast -- at the time of writing the patch --, thus no fields
  other than @arch needed to be set when TARGET_S390X was not defined. Set
  @arch now, by copying the corresponding assignments from
  qmp_query_cpus().

* Commit 25fa194b7b added the @riscv enum constant to @CpuInfoArch (used
  in both @CpuInfo and @CpuInfoFast -- the return types of the @query-cpus
  and @query-cpus-fast commands, respectively), and assigned, in both
  return structures, the @CpuInfoRISCV sub-structure to the new enum
  value.

  However, qmp_query_cpus_fast() would not populate either the @arch field
  or the @CpuInfoRISCV sub-structure, when TARGET_RISCV was defined; only
  qmp_query_cpus() would.

  Assign @CpuInfoOther to the @riscv enum constant in @CpuInfoFast, and
  populate only the @arch field in qmp_query_cpus_fast(). Getting CPU
  state without interrupting KVM is an exceptional thing that only S390X
  does currently. Quoting Cornelia Huck <cohuck@redhat.com>, "s390x is
  exceptional in that it has state in QEMU that is actually interesting
  for upper layers and can be retrieved without performance penalty". See
  also
  <https://www.redhat.com/archives/libvir-list/2018-February/msg00121.html>.

Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Viktor VM Mihajlovski <mihajlov@linux.vnet.ibm.com>
Cc: qemu-stable@nongnu.org
Fixes: ca230ff33f
Fixes: 25fa194b7b
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20180427192852.15013-2-lersek@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit 96054f5639)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Vladimir Sementsov-Ogievskiy 3b52d47418 migration/block-dirty-bitmap: fix memory leak in dirty_bitmap_load_bits
Release buf on error path too.

Bug was introduced in b35ebdf076 "migration: add postcopy
migration of dirty bitmaps" with the whole function.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180427142002.21930-3-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 16a2227893)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
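Sketched, the error path now frees the buffer as well (the failure
condition is hypothetical; the shape follows the description above):

    buf = g_malloc(buf_size);
    qemu_get_buffer(f, buf, buf_size);
    if (deserialize_failed) {   /* hypothetical failure check */
        g_free(buf);            /* the fix: previously leaked here */
        return -EINVAL;
    }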
Vladimir Sementsov-Ogievskiy f155487bef nbd/client: fix nbd_negotiate_simple_meta_context
Initialize the received variable. Otherwise, it is possible for the server to
answer without any contexts, but we will set context_id to something
random (received_id is not initialized either) and return 1, which is
wrong.

To solve it, just initialize received to false. Initialize received_id
too, just to make all possible checkers happy.

Bug was introduced in 78a33ab587 "nbd: BLOCK_STATUS for
standard get_block_status function: client part" with the whole
function.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20180427142002.21930-2-vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 89aa0d8763)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Cédric Le Goater 54eb6cc6d7 cpus: tcg: fix never exiting loop on unplug
Commit 9b0605f983 ("cpus: tcg: unregister thread with RCU, fix
exiting of loop on unplug") changed the exit condition of the loop in
the vCPU thread function but forgot to remove the beginning 'while (1)'
statement. The resulting code :

	while (1) {
	...
	} while (!cpu->unplug || cpu_can_run(cpu));

is a sequence of two distinct while() loops, the first never exiting
in case of an unplug event.

Remove the first while (1) to fix CPU unplug.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
Message-Id: <20180425131828.15604-1-clg@kaod.org>
Cc: qemu-stable@nongnu.org
Fixes: 9b0605f983
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Cédric Le Goater <clg@kaod.org>
(cherry picked from commit 54961aac19)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
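With the stray 'while (1)' removed, the thread function is left with the
single intended loop (a sketch following the snippet above):

    do {
        /* ... run or sleep the vCPU ... */
    } while (!cpu->unplug || cpu_can_run(cpu));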
Stefan Hajnoczi 9eb3e5a8a8 block/mirror: honor ratelimit again
Commit b76e4458b1 ("block/mirror: change
the semantic of 'force' of block-job-cancel") accidentally removed the
ratelimit in the mirror job.

Reintroduce the ratelimit but keep the block-job-cancel force=true
behavior that was added in commit
b76e4458b1.

Note that block_job_sleep_ns() returns immediately when the job is
cancelled.  Therefore it's safe to unconditionally call
block_job_sleep_ns() - a cancelled job does not sleep.

This commit fixes the non-deterministic qemu-iotests 185 output.  The
test relies on the ratelimit to make the job sleep until the 'quit'
command is processed.  Previously the job could complete before the
'quit' command was received since there was no ratelimit.

Cc: Liang Li <liliang.opensource@gmail.com>
Cc: Jeff Cody <jcody@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20180424123527.19168-1-stefanha@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit ddc4115efd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Gerd Hoffmann 05a3e663b1 vnc: fix use-after-free
When the vnc_client_read() return value is -1,
vs is not valid any more.

Fixes: d49b87f0d1
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180420084820.3873-1-kraxel@redhat.com
(cherry picked from commit 1bc3117aba)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
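The calling pattern after the fix, sketched: once vnc_client_read()
reports -1, the VncState has been destroyed and must not be touched again:

    if (vnc_client_read(vs) < 0) {
        return;   /* vs was freed inside vnc_client_read(); do not reuse */
    }
    /* vs is still valid here */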
Philippe Mathieu-Daudé 073198b8e8 usb/dev-mtp: Fix use of uninitialized values
This fixes:

  hw/usb/dev-mtp.c:971:5: warning: 4th function call argument is an uninitialized value
      trace_usb_mtp_op_get_partial_object(s->dev.addr, o->handle, o->path,
                                           c->argv[1], c->argv[2]);
                                                       ^~~~~~~~~~
and:

  hw/usb/dev-mtp.c:981:12: warning: Assigned value is garbage or undefined
      offset = c->argv[1];
               ^ ~~~~~~~~~~

Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180604151421.23385-3-f4bug@amsat.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 62713a2e50)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Philippe Mathieu-Daudé 5da7e93f51 usb: correctly handle Zero Length Packets
USB Specification Revision 2.0, §5.5.3:
  "The Data stage of a control transfer from an endpoint to the host is complete when the endpoint does one of the following:
  • Has transferred exactly the amount of data specified during the Setup stage
  • Transfers a packet with a payload size less than wMaxPacketSize or transfers a zero-length packet"

hw/usb/redirect.c:802:9: warning: Declared variable-length array (VLA) has zero size
        uint8_t buf[size];
        ^~~~~~~~~~~ ~~~~

Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20180604151421.23385-2-f4bug@amsat.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit bf78fb1c1b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:21 -05:00
Shannon Zhao c5dd07b529 arm_gicv3_kvm: kvm_dist_get/put_priority: skip the registers banked by GICR_IPRIORITYR
While the for_each_dist_irq_reg loop starts from GIC_INTERNAL, it forgot to
offset the data array and index. This will overlap the GICR registers'
values and leave the last GIC_INTERNAL irq's registers without an update.

Fixes: 367b9f527b
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 1dcf367519)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Eric Blake 396d79c36c iotests: Add test 221 to catch qemu-img map regression
Although qemu-img creates aligned files (by rounding up), it
must also gracefully handle files that are not sector-aligned.
Test that the bug fixed in the previous patch does not recur.

It's a bit annoying that we can see the (implicit) hole past
the end of the file on to the next sector boundary, so if we
ever reach the point where we report a byte-accurate size rather
than our current behavior of always rounding up, this test will
probably need a slight modification.

Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c6a9d2f6f9)
 Conflicts:
	tests/qemu-iotests/group
* drop context dep on tests not present in 2.12
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Eric Blake 26cdf35f69 qemu-img: Fix assert when mapping unaligned raw file
Commit a290f085 exposed a latent bug in qemu-img map introduced
during the conversion of block status to be byte-based.  Earlier in
commit 5e344dd8, the internal interface get_block_status() switched
to take byte-based parameters, but still called a sector-based
block layer function; as such, rounding was added in the lone
caller to obey the contract.  However, commit 237d78f8 changed
get_block_status() to truly be byte-based, at which point rounding
to sector boundaries can result in calling bdrv_block_status() with
'bytes == 0' (a coding error) when the boundary between data and a
hole falls mid-sector (true for the past-EOF implicit hole present
in POSIX files).  Fix things by removing the rounding that is now
no longer necessary.

See also https://bugzilla.redhat.com/1589738

Fixes: 237d78f8
Reported-by: Dan Kenigsberg <danken@redhat.com>
Reported-by: Nir Soffer <nsoffer@redhat.com>
Reported-by: Maor Lipchuk <mlipchuk@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit e0b371ed5e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
linzhecheng fb7f173c2c vhost-user: delete net client if necessary
If qemu_new_net_client() creates new ncs but an error happens later, the
ncs will be left in the global net_clients list and we can't use them any
more, so we need to clean them up.

Cc: qemu-stable@nongnu.org
Signed-off-by: linzhecheng <linzhecheng@huawei.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit c67daf4a24)
 Conflicts:
	net/vhost-user.c
* drop functional dep on 4d0cf552
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Brijesh Singh 2f2b189235 tap: set vhostfd passed from qemu cli to non-blocking
A guest boot hangs while probing the network interface when
iommu_platform=on is used.

The following qemu cli hangs without this patch:

# $QEMU \
  -netdev tap,fd=3,id=hostnet0,vhost=on,vhostfd=4 3<>/dev/tap67 4<>/dev/host-net \
  -device virtio-net-pci,netdev=hostnet0,id=net0,iommu_platform=on,disable-legacy=on \
  ...

Commit c471ad0e9b (vhost_net: device IOTLB support) took care of
setting vhostfd to non-blocking when QEMU opens /dev/host-net, but if
the fd is passed from the qemu cli then we need to ensure that the fd
is set to non-blocking.

Fixes: c471ad0e9b ("vhost_net: device IOTLB support")
Cc: qemu-stable@nongnu.org
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit d542800d1e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
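QEMU's helper for this is qemu_set_nonblock(); in plain POSIX terms the
fix amounts to (a sketch, assumes <fcntl.h>):

    int flags = fcntl(vhostfd, F_GETFL);
    fcntl(vhostfd, F_SETFL, flags | O_NONBLOCK);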
Konrad Rzeszutek Wilk 43163837d3 i386: define the AMD 'virt-ssbd' CPUID feature bit (CVE-2018-3639)
AMD Zen exposes the Intel equivalent to Speculative Store Bypass Disable
via the 0x80000008_EBX[25] CPUID feature bit.

This needs to be exposed to guest OS to allow them to protect
against CVE-2018-3639.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180521215424.13520-3-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit 403503b162)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Konrad Rzeszutek Wilk 3129ddb943 i386: Define the Virt SSBD MSR and handling of it (CVE-2018-3639)
"Some AMD processors only support a non-architectural means of enabling
speculative store bypass disable (SSBD).  To allow a simplified view of
this to a guest, an architectural definition has been created through a new
CPUID bit, 0x80000008_EBX[25], and a new MSR, 0xc001011f.  With this, a
hypervisor can virtualize the existence of this definition and provide an
architectural method for using SSBD to a guest.

Add the new CPUID feature, the new MSR and update the existing SSBD
support to use this MSR when present." (from x86/speculation: Add virtualized
speculative store bypass disable support in Linux).

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20180521215424.13520-4-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit cfeea0c021)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Daniel P. Berrangé 8a302f42a5 i386: define the 'ssbd' CPUID feature bit (CVE-2018-3639)
New microcode introduces the "Speculative Store Bypass Disable"
CPUID feature bit. This needs to be exposed to guest OS to allow
them to protect against CVE-2018-3639.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-Id: <20180521215424.13520-2-berrange@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit d19d1f9659)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Alberto Garcia ef67e67388 throttle: Fix crash on reopen
The throttle block filter can be reopened, and with this it is
possible to change the throttle group that the filter belongs to.

The way the code does that is the following:

  - On throttle_reopen_prepare(): create a new ThrottleGroupMember
    and attach it to the new throttle group.

  - On throttle_reopen_commit(): detach the old ThrottleGroupMember,
    delete it and replace it with the new one.

The problem with this is that by replacing the ThrottleGroupMember the
previous value of io_limits_disabled is lost, causing an assertion
failure in throttle_co_drain_end().

This problem can be reproduced by reopening a throttle node:

   $QEMU -monitor stdio
   -object throttle-group,id=tg0,x-iops-total=1000 \
   -blockdev node-name=hd0,driver=qcow2,file.driver=file,file.filename=hd.qcow2 \
   -blockdev node-name=root,driver=throttle,throttle-group=tg0,file=hd0,read-only=on

   (qemu) block_stream root
   block/throttle.c:214: throttle_co_drain_end: Assertion `tgm->io_limits_disabled' failed.

Since we only want to change the throttle group on reopen there's no
need to create a ThrottleGroupMember and discard the old one. It's
easier if we simply detach it from its current group and attach it to
the new one.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 20180608151536.7378-1-berto@igalia.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit bc33c047d1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Max Reitz 081eac8b30 iotests: Add case for a corrupted inactive image
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit c50abd175a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:20 -05:00
Max Reitz 5aa76f3a8c qcow2: Do not mark inactive images corrupt
When signaling a corruption on a read-only image, qcow2 already makes
fatal events non-fatal (i.e., they will not result in the image being
closed, and the image header's corrupt flag will not be set).  This is
necessary because we cannot set the corrupt flag on read-only images,
and it is possible because further corruption of read-only images is
impossible.

Inactive images are effectively read-only, too, so we should do the same
for them.  bdrv_is_writable() can tell us whether an image can actually
be written to, so use its result instead of !bs->read_only.

(Otherwise, the assert(!(bs->open_flags & BDRV_O_INACTIVE)) in
bdrv_co_pwritev() will fail, crashing qemu.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-3-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit ddf3b47ef4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Max Reitz bd64fec665 block: Make bdrv_is_writable() public
This is a useful function for the whole block layer, so make it public.
At the same time, users outside of block.c probably do not need to make
use of the reopen functionality, so rename the current function to
bdrv_is_writable_after_reopen() create a new bdrv_is_writable() function
that just passes NULL to it for the reopen queue.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180606193702.7113-2-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit cc02214097)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Shannon Zhao 5459c0c458 arm_gicv3_kvm: kvm_dist_get/put: skip the registers banked by GICR
While we skip the GIC_INTERNAL irqs, we don't adjust the register offset
accordingly. This makes the values overlap the GICR registers and leaves
the last GIC_INTERNAL irq's registers without an update.

Fix this by skipping the registers banked by GICR.

Also for migration compatibility if the migration source (old version
qemu) doesn't send gicd_no_migration_shift_bug = 1 to destination, then
we shift the data of PPI to get the right data for SPI.

Fixes: 367b9f527b
Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Message-id: 1527816987-16108-1-git-send-email-zhaoshenglong@huawei.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 910e204841)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
John Snow 5c9266fa97 ahci: fix PxCI register race
Fixes: https://bugs.launchpad.net/qemu/+bug/1769189

AHCI presently signals completion prior to the PxCI register being
cleared to indicate completion. If a guest driver attempts to issue
a new command in its IRQ handler, it might be surprised to learn there
is still a command pending.

In the case of Windows 10's boot driver, it will actually poll the IRQ
register hoping to find out when the command is done running -- which
will never happen, as there isn't a command running.

Fix this: clear PxCI in ahci_cmd_done and not in the asynchronous BH.
Because it now runs synchronously, we don't need to check if the command
is actually done by spying on the ATA registers. We know it's done.

CC: qemu-stable <qemu-stable@nongnu.org>
Reported-by: François Guerraz <kubrick@fgv6.net>
Tested-by: Bruce Rogers <brogers@suse.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20180531004323.4611-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 5694c7eacc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
John Thomson df00a166c4 Fix libusb-1.0.22 deprecated libusb_set_debug with libusb_set_option
libusb-1.0.22 marked libusb_set_debug deprecated
it is replaced with
libusb_set_option(libusb_context, LIBUSB_OPTION_LOG_LEVEL, libusb_log_level);

details here: 539f22e2fd

Warning here:

  CC      hw/usb/host-libusb.o
/builds/xen/src/qemu-xen/hw/usb/host-libusb.c: In function 'usb_host_init':
/builds/xen/src/qemu-xen/hw/usb/host-libusb.c:250:5: error: 'libusb_set_debug' is deprecated: Use libusb_set_option instead [-Werror=deprecated-declarations]
     libusb_set_debug(ctx, loglevel);
     ^~~~~~~~~~~~~~~~
In file included from /builds/xen/src/qemu-xen/hw/usb/host-libusb.c:40:0:
/usr/include/libusb-1.0/libusb.h:1300:18: note: declared here
 void LIBUSB_CALL libusb_set_debug(libusb_context *ctx, int level);
                  ^~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make: *** [/builds/xen/src/qemu-xen/rules.mak:66: hw/usb/host-libusb.o] Error 1
make: Leaving directory '/builds/xen/src/xen/tools/qemu-xen-build'

Signed-off-by: John Thomson <git@johnthomson.fastmail.com.au>
Message-id: 20180405132046.4968-1-git@johnthomson.fastmail.com.au
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 9d8fa0df49)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Shannon Zhao 77df190051 arm_gicv3_kvm: increase clroffset accordingly
The code forgot to increase clroffset during the loop, so it only
cleared the first 4 bytes.
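
(Illustrative sketch of the corrected loop shape; write_gicd_reg() is a
hypothetical accessor standing in for the real KVM register access.)

    for (irq = GIC_INTERNAL; irq < num_irq; irq += 32) {
        write_gicd_reg(clroffset, 0);  /* clear 32 interrupt bits */
        clroffset += 4;                /* this increment was missing */
    }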

Fixes: 367b9f527b
Cc: qemu-stable@nongnu.org
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1527047633-12368-1-git-send-email-zhaoshenglong@huawei.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 34ffacae08)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Peter Xu f4b4095a8f intel-iommu: rework the page walk logic
This patch fixes a small window during which the DMA page table might
be incomplete or invalid when the guest sends domain/context
invalidations to a device.  This can cause random DMA errors for
assigned devices.

This is a major change to the VT-d shadow page walking logic. It
includes but is not limited to:

- For each VTDAddressSpace, now we maintain what IOVA ranges we have
  mapped and what we have not.  With that information, now we only send
  MAP or UNMAP when necessary.  For example, we don't send MAP notifies
  if we know we have already mapped the range, and we don't send UNMAP
  notifies if we know we never mapped the range at all.

- Introduce vtd_sync_shadow_page_table[_range] APIs so that we can call
  in any places to resync the shadow page table for a device.

- When we receive a domain/context invalidation, we should not really run
  the replay logic; instead we use the new sync shadow page table API to
  resync the whole shadow page table without unmapping the whole
  region.  After this change, we'll only do the page walk once for each
  domain invalidation (before this, it could be multiple walks, depending
  on the number of notifiers per address space).

While at it, the page walking logic is also refactored to be simpler.

CC: QEMU Stable <qemu-stable@nongnu.org>
Reported-by: Jintack Lim <jintack@cs.columbia.edu>
Tested-by: Jintack Lim <jintack@cs.columbia.edu>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 63b88968f1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Peter Xu 08aa25f5f8 util: implement simple iova tree
Introduce the simplest possible iova tree implementation, based on GTree.
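
(Illustrative usage sketch; the API and the DMAMap fields match the diff
further down, with .size holding the address mask rather than a byte count.)

    IOVATree *tree = iova_tree_new();
    DMAMap map = {
        .iova = 0x100000,
        .size = 0xfff,              /* address mask, i.e. length - 1 */
        .translated_addr = 0x400000,
        .perm = IOMMU_RW,
    };

    iova_tree_insert(tree, &map);
    DMAMap *hit = iova_tree_find(tree, &map);  /* overlap lookup */
    iova_tree_remove(tree, &map);
    iova_tree_destroy(tree);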

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit eecf5eedbd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Peter Xu d5c60a950a intel-iommu: trace domain id during page walk
This patch only modifies the trace points.

Previously we were tracing page walk levels.  They are redundant since
we have page mask (size) already.  Now we trace something much more
useful which is the domain ID of the page walking.  That can be very
useful when we trace more than one devices on the same system, so that
we can know which map is for which domain.

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit d118c06ebb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:19 -05:00
Peter Xu 78b85a98a3 intel-iommu: pass in address space when page walk
We pass in the VTDAddressSpace too.  It'll be used in the follow up
patches.

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2f764fa87d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Peter Xu 28048f7bcd intel-iommu: introduce vtd_page_walk_info
During the recursive page walking of IOVA page tables, some stack
variables are constant and never change during the whole page
walking procedure.  Isolate them into a struct so that we don't need to
pass those constants down the stack again and again.

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit fe215b0cbb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Peter Xu 1e5b93f620 intel-iommu: only do page walk for MAP notifiers
For UNMAP-only IOMMU notifiers, we don't need to walk the page tables.
Speed that procedure up by skipping the page table walk.  That should
boost performance for UNMAP-only notifiers like vhost.
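
(Illustrative sketch of the dispatch, with approximate signatures;
vtd_as_has_map_notifier() is introduced by this series, while
notify_unmap_range() is a hypothetical helper.)

    if (vtd_as_has_map_notifier(vtd_as)) {
        /* MAP notifiers (e.g. vfio-pci) need the full page table walk */
        vtd_page_walk(&ce, start, end, &info);
    } else {
        /* UNMAP-only notifiers (e.g. vhost): one big UNMAP is enough */
        notify_unmap_range(vtd_as, start, end);
    }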

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 4f8a62a933)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Peter Xu 5cf61b56a4 intel-iommu: add iommu lock
SECURITY IMPLICATION: this patch fixes a potential race when multiple
threads access the IOMMU IOTLB cache.

Add a per-iommu big lock to protect IOMMU status.  Currently the only
thing to be protected is the IOTLB/context cache, since that can be
accessed even without BQL, e.g., in IO dataplane.

Note that we don't need to protect device page tables since they are fully
controlled by the guest kernel.  However, there is still the possibility
that malicious drivers will program the device to disobey the rule.  In
that case QEMU can't really do anything useful; instead, the guest itself
will be responsible for the consequences.

CC: QEMU Stable <qemu-stable@nongnu.org>
Reported-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 1d9efa73e1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Peter Xu d64604326f intel-iommu: remove IntelIOMMUNotifierNode
That is not really necessary.  Remove the node struct and put the
list entry directly into VTDAddressSpace.  It simplifies the code a lot.
While at it, rename the old notifiers_list into vtd_as_with_notifiers.

CC: QEMU Stable <qemu-stable@nongnu.org>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit b4a4ba0d68)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Peter Xu 93a53137be intel-iommu: send PSI always even if across PDEs
SECURITY IMPLICATION: without this patch, any guest with both an assigned
device and a vIOMMU might encounter stale IO page mappings even if the
guest has already unmapped the page, which may lead to guest memory
corruption.  The stale mappings are limited to the guest's own
memory range, so this should not affect host memory or other guests on
the host.

During IOVA page table walking, there is a special case when the PSI
covers one whole PDE (Page Directory Entry, which contains 512 Page
Table Entries) or more.  In the past, we skip that entry and we don't
notify the IOMMU notifiers.  This is not correct.  We should send UNMAP
notification to registered UNMAP notifiers in this case.

For UNMAP-only notifiers, this might leave IOTLB entries cached in the
devices even though they are already invalid.  For MAP/UNMAP notifiers like
vfio-pci, this will cause stale page mappings.

This special case doesn't trigger often, but it is very easy to
trigger with nested device assignment, since in that case we'll
possibly map the whole L2 guest RAM region into the device's IOVA
address space (several GBs at least), which is far bigger than normal
kernel driver usages of the device (tens of MBs normally).

Without this patch applied to L1 QEMU, nested device assignment to L2
guests will dump some errors like:

qemu-system-x86_64: VFIO_MAP_DMA: -17
qemu-system-x86_64: vfio_dma_map(0x557305420c30, 0xad000, 0x1000,
                    0x7f89a920d000) = -17 (File exists)

CC: QEMU Stable <qemu-stable@nongnu.org>
Acked-by: Jason Wang <jasowang@redhat.com>
[peterx: rewrite the commit message]
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

(cherry picked from commit 36d2d52bdb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Jan Kiszka 91f6149592 hw/intc/arm_gicv3: Fix APxR<n> register dispatching
There was a nasty flip in identifying which register group an access is
targeting. The issue caused spuriously raised priorities of the guest
when handing CPUs over in the Jailhouse hypervisor.

Cc: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Message-id: 28b927d3-da58-bce4-cc13-bfec7f9b1cb9@siemens.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 887aae10f6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Michal Privoznik 81e46e3c82 console: Avoid segfault in screendump
After f771c5440e it is possible to select the device and
head to take a screendump from. And even though we check whether the
provided head number falls within range, it may still happen that
the console has no surface yet, leading to SIGSEGV:

  qemu.git $ ./x86_64-softmmu/qemu-system-x86_64 \
    -qmp stdio \
    -device virtio-vga,id=video0,max_outputs=4

  {"execute":"qmp_capabilities"}
  {"execute":"screendump", "arguments":{"filename":"/tmp/screen.ppm", "device":"video0", "head":1}}
  Segmentation fault

 #0  0x00005628249dda88 in ppm_save (filename=0x56282826cbc0 "/tmp/screen.ppm", ds=0x0, errp=0x7fff52a6fae0) at ui/console.c:304
 #1  0x00005628249ddd9b in qmp_screendump (filename=0x56282826cbc0 "/tmp/screen.ppm", has_device=true, device=0x5628276902d0 "video0", has_head=true, head=1, errp=0x7fff52a6fae0) at ui/console.c:375
 #2  0x00005628247740df in qmp_marshal_screendump (args=0x562828265e00, ret=0x7fff52a6fb68, errp=0x7fff52a6fb60) at qapi/qapi-commands-ui.c:110

Here, @ds from frame #0 (or @surface from frame #1) is
dereferenced at the very beginning of ppm_save(), and because
it is NULL, the crash happens.
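
(Illustrative sketch of the fix's shape: check for a missing surface before
dereferencing it; the exact error message text is an assumption.)

    surface = qemu_console_surface(con);
    if (surface == NULL) {
        error_setg(errp, "no surface");
        return;
    }
    ppm_save(filename, surface, errp);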

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: cb05bb1909daa6ba62145c0194aafa05a14ed3d1.1526569138.git.mprivozn@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 08d9864fa4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Cornelia Huck a5c8fbbeac s390x/ccw: make sure all ccw devices are properly reset
Thomas reported that the subchannel for a 3270 device that ended up
in a broken state (status pending even though not enabled) did not
get out of that state even after a reboot (which involves a subsystem
reset). The reason for this is that the 3270 device did not define
a reset handler.

Let's fix this by introducing a base reset handler (set up for all
ccw devices) that resets the subchannel and have virtio-ccw call
its virtio-specific reset procedure in addition to that.
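
(Illustrative sketch of such a base reset handler; names approximate.)

    static void ccw_device_reset(DeviceState *d)
    {
        CcwDevice *ccw_dev = CCW_DEVICE(d);

        css_reset_sch(ccw_dev->sch);  /* bring the subchannel to a clean state */
    }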

CC: qemu-stable@nongnu.org
Reported-by: Thomas Huth <thuth@redhat.com>
Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 838fb84f83)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Cornelia Huck c9bb077871 virtio-ccw: common reset handler
All the different virtio ccw devices use the same reset handler,
so let's move setting it into the base virtio ccw device class.

CC: qemu-stable@nongnu.org
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 0c53057adb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:18 -05:00
Thomas Huth 3372a3168a pc-bios/s390-ccw: struct tpi_info must be declared as aligned(4)
I've run into a compilation error today with the current version of GCC 8:

In file included from s390-ccw.h:49,
                 from main.c:12:
cio.h:128:1: error: alignment 1 of 'struct tpi_info' is less than 4 [-Werror=packed-not-aligned]
 } __attribute__ ((packed));
 ^
cc1: all warnings being treated as errors

Since the struct tpi_info contains an element ("struct subchannel_id schid")
which is marked as aligned(4), we've got to mark the struct tpi_info as
aligned(4), too.
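
(Sketch of the resulting declaration; members other than schid elided.)

    struct tpi_info {
        struct subchannel_id schid;  /* itself declared aligned(4) */
        /* remaining members elided */
    } __attribute__ ((packed, aligned(4)));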

CC: qemu-stable@nongnu.org
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1525774672-11913-1-git-send-email-thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit a6e4385dea)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Cornelia Huck 87efdb9820 s390x/css: disabled subchannels cannot be status pending
The 3270 code will try to post an attention interrupt when the
3270 emulator (e.g. x3270) attaches. If the guest has not yet
enabled the subchannel for the 3270 device, we will present a spurious
cc 1 (status pending) when it uses msch on it later on, e.g. when
trying to enable the subchannel.

To fix this, just don't do anything in css_conditional_io_interrupt()
if the subchannel is not enabled. The 3270 code will work fine with
that, and the other user of this function (virtio-ccw) never
attempts to post an interrupt for a disabled device to begin with.
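
(Illustrative sketch of the early-out; the flag name follows QEMU's css code
and should be treated as approximate.)

    if (!(sch->curr_status.pmcw.flags & PMCW_FLAGS_MASK_ENA)) {
        return;  /* disabled subchannels cannot become status pending */
    }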

CC: qemu-stable@nongnu.org
Reported-by: Thomas Huth <thuth@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 6e9c893ecd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Fam Zheng 51691e9244 raw: Check byte range uniformly
We don't verify the request range against s->size in the I/O callbacks
except for raw_co_pwritev. This is inconsistent (especially for
raw_co_pwrite_zeroes and raw_co_pdiscard), so fix them; in the meantime,
make the helper reusable by the coming new callbacks.

Note that in most cases the block layer already verifies the request
byte range against our reported image length, before invoking the driver
callbacks.  The exception is during image creating, after
blk_set_allow_write_beyond_eof(blk, true) is called. But in that case,
the requests are not directly from the user or guest. So there is no
visible behavior change in adding the check code.

The int64_t -> uint64_t inconsistency, as shown by the type casting, is
pre-existing due to the interface.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 20180601092648.24614-3-famz@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 3844553852)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Michael Walle 4f9df08749 lm32: take BQL before writing IP/IM register
Writing to these registers may raise an interrupt request. In fact, this
bug prevents the milkymist board from starting.
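
(Illustrative sketch of the pattern: take the BQL around the call that can
raise an interrupt; names approximate.)

    void HELPER(wcsr_im)(CPULM32State *env, uint32_t im)
    {
        qemu_mutex_lock_iothread();           /* take the BQL... */
        lm32_pic_set_im(env->pic_state, im);  /* ...as this may raise an IRQ */
        qemu_mutex_unlock_iothread();
    }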

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Walle <michael@walle.cc>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
(cherry picked from commit 81e9cbd0ca)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Max Reitz ca3150da6d iotests: Add test for -U/force-share conflicts
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 4e7d73c5fb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Max Reitz 9e724c05a0 qemu-img: Use only string options in img_open_opts
img_open_opts() takes a QemuOpts and converts it to a QDict, so all
values therein are strings.  It may then try to call qdict_get_bool(),
however, which will fail with a segmentation fault every time:

$ ./qemu-img info -U --image-opts \
    driver=file,filename=/dev/null,force-share=off
[1]    27869 segmentation fault (core dumped)  ./qemu-img info -U
--image-opts driver=file,filename=/dev/null,force-share=off

Fix this by using qdict_get_str() and comparing the value as a string.
Also, when adding a force-share value to the QDict, add it as a string
so it fits the rest of the dict.
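
(Illustrative sketch of the consistent string handling.)

    /* All values converted from QemuOpts are strings, so store and
     * compare force-share as a string, too. */
    qdict_put_str(options, BDRV_OPT_FORCE_SHARE, "on");

    if (!strcmp(qdict_get_str(options, BDRV_OPT_FORCE_SHARE), "on")) {
        /* image was opened with force-share */
    }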

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-3-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 4615f87832)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Max Reitz e8d8f6a3aa qemu-io: Use purely string blockdev options
Currently, qemu-io only uses string-valued blockdev options (as all are
converted directly from QemuOpts) -- with one exception: -U adds the
force-share option as a boolean.  This in itself is already a bit
questionable, but a real issue is that it also assumes that any value
already in the options QDict is a boolean, which is wrong.

That has the following effect:

$ ./qemu-io -r -U --image-opts \
    driver=file,filename=/dev/null,force-share=off
[1]    15200 segmentation fault (core dumped)  ./qemu-io -r -U
--image-opts driver=file,filename=/dev/null,force-share=off

Since @opts is converted from QemuOpts, the value must be a string, and
we have to compare it as such.  Consequently, it makes sense to also set
it as a string instead of a boolean.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180502202051.15493-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 2a01c01f9e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Max Reitz b3a18683f9 iotests: Add test for rebasing with relative paths
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20180509182002.8044-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 28036a7f70)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Max Reitz f9e0e53add qemu-img: Resolve relative backing paths in rebase
Currently, rebase interprets a relative path for the new backing image
as follows:
(1) Open the new backing image with the given relative path (thus relative to
    qemu-img's working directory).
(2) Write it directly into the overlay's backing path field (thus
    relative to the overlay).

If the overlay is not in qemu-img's working directory, the two
interpretations differ, which may either lead to an error somewhere
(either rebase fails because it cannot open the new backing image, or
your overlay becomes unusable because its backing path does not point to
a file), or, even worse, it may result in your rebase being performed
for a different backing file than what your overlay will point to after
the rebase.

Fix this by interpreting the target backing path as relative to the
overlay, like qemu-img does everywhere else.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1569835
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180509182002.8044-2-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit d16699b646)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Olaf Hering f81672a5c6 configure: recognize more rpmbuild macros
Extend the list of recognized, but ignored options from rpms %configure
macro. This fixes build on hosts running SUSE Linux.

Cc: qemu-stable@nongnu.org
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Message-Id: <20180418075045.27393-1-olaf@aepfle.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 181ce1d05c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:17 -05:00
Gerd Hoffmann 9ec09b6542 qxl: fix local renderer crash
Make sure we only ask the spice local renderer for display updates when
we have a valid primary surface.  Without that, spice gets confused and
throws errors if a display update request (triggered by a screendump,
for example) happens in parallel to a mode switch and hits the race
window where the old primary surface is gone and the new one isn't
established yet.

Cc: qemu-stable@nongnu.org
Fixes: https://bugzilla.redhat.com//show_bug.cgi?id=1567733
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20180427115528.345-1-kraxel@redhat.com
(cherry picked from commit 5bd5c27c7d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Greg Kurz 2dbaba7af0 spapr: don't advertise radix GTSE if max-compat-cpu < power9
On a POWER9 host, if a guest runs in pre POWER9 compat mode, it necessarily
uses the hash MMU mode. In this case, we shouldn't advertise radix GTSE in
the ibm,arch-vec-5-platform-support DT property as the current code does.
The first reason is that it doesn't make sense, and the second one is that
it causes the CAS-negotiated options subsection to be migrated. This breaks
backward migration to QEMU 2.7 and older versions on POWER8 hosts:

qemu-system-ppc64: error while loading state for instance 0x0 of device
 'spapr'
qemu-system-ppc64: load of migration failed: No such file or directory

This patch hence initializes CPUs a bit earlier so that we can check the
requested compat mode, and doesn't set OV5_MMU_RADIX_GTSE for power8 and
older.

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 0550b1206a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Greg Kurz 62f7a38610 target/ppc: always set PPC_MEM_TLBIE in pre 2.8 migration hack
The pseries-2.7 and older machine types require CPUPPCState::insns_flags
to be strictly equal between source and destination. This checking is
abusive and breaks migration of KVM guests when the host CPU models
are different, even if they are compatible enough to allow the guest
to run transparently. This buggy behaviour was fixed for pseries-2.8
and we added some hacks to allow backward migration of older machine
types. These hacks assume that the CPU belongs to the POWER8 family,
which was true for most KVM-based setups we cared about at the time.
But now POWER9 systems are coming, and backward migration of pre 2.8
guests running in POWER8 architected mode from a POWER9 host to a
POWER8 host is broken:

qemu-system-ppc64: error while loading state for instance 0x0 of device
 'cpu'
qemu-system-ppc64: load of migration failed: Invalid argument

This happens because POWER9 doesn't set PPC_MEM_TLBIE in insns_flags,
while POWER8 does. Let's force PPC_MEM_TLBIE in the migration hack to
fix the issue. This is an acceptable hack because these old machine
types only support CPU models that do set PPC_MEM_TLBIE.
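
(Illustrative sketch of the hack's shape in the pre-save hook; field and
mask names approximate.)

    /* In cpu_pre_save(), for pre-2.8 machine types: */
    cpu->mig_insns_flags = env->insns_flags & insns_compat_mask;
    cpu->mig_insns_flags |= PPC_MEM_TLBIE;  /* every supported CPU model has it */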

Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit bce009645b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Peter Maydell 1ace462f9b target/arm: Implement v8M VLLDM and VLSTM
For v8M the instructions VLLDM and VLSTM support lazy saving
and restoring of the secure floating-point registers. Even
if the floating point extension is not implemented, these
instructions must act as NOPs in Secure state, so they can
be used as part of the secure-to-nonsecure call sequence.

Fixes: https://bugs.launchpad.net/qemu/+bug/1768295
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20180503105730.5958-1-peter.maydell@linaro.org
(cherry picked from commit b1e5336a98)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Henry Wertz b90c93106e tcg/arm: Fix memory barrier encoding
I found with qemu 2.11.x or newer that I would get an illegal instruction
error running some Intel binaries on my ARM chromebook.  On investigation,
I found it was quitting on memory barriers.

qemu instruction:
mb $0x31
was translating as:
0x604050cc:  5bf07ff5  blpl     #0x600250a8

After patch it gives:
0x604050cc:  f57ff05b  dmb      ish

In short, I found that INSN_DMB_ISH (memory barrier for ARMv7) appeared to be
correct based on online docs, but due to some endian-related shenanigans it
had to be byte-swapped to suit qemu; it appears INSN_DMB_MCR (memory
barrier for ARMv6) should also be byte-swapped (and this patch does so).
I have not checked for correctness of aarch64's barrier instruction.
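
(The resulting encodings, as illustration; the comments give the
instructions they decode to.)

    INSN_DMB_ISH = 0xf57ff05b,  /* ARMv7: dmb ish */
    INSN_DMB_MCR = 0xee070fba,  /* ARMv6: mcr p15, 0, r0, c7, c10, 5 */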

Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Henry Wertz <hwertz10@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 3f814b8037)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Cornelia Huck 38b7a3ea72 s390-ccw: force diag 308 subcode to unsigned long
We currently pass an integer as the subcode parameter. However,
the upper bits of the register containing the subcode need to
be 0, which is not guaranteed unless we explicitly specify the
subcode to be an unsigned long value.

Fixes: d046c51dad ("pc-bios/s390-ccw: Get device address via diag 308/6")
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit 63d8b5ace3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Eric Blake cb7a41f3f9 nbd/client: Fix error messages during NBD_INFO_BLOCK_SIZE
A missing space makes for poor error messages, and sizes can't
go negative.  Also, we missed diagnosing a server that sends
a maximum block size less than the minimum.

Fixes: 081dd1fe
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20180501154654.943782-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit e475d108f1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Jason Andryuk 8ca471da10 ccid: Fix dwProtocols advertisement of T=0
Commit d7d218ef02 attempted to change
dwProtocols to only advertise support for T=0 and not T=1.  The change
was incorrect as it changed 0x00000003 to 0x00010000.

lsusb -v in a linux guest shows:
"dwProtocols         65536  (Invalid values detected)", though the
smart card could still be accessed.  Windows 7 does not detect inserted
smart cards and logs the following error in the Event Logs:

    Source: Smart Card Service
    Event ID: 610
    Smart Card Reader 'QEMU QEMU USB CCID 0' rejected IOCTL SET_PROTOCOL:
    Incorrect function. If this error persists, your smart card or reader
    may not be functioning correctly

    Command Header: 03 00 00 00

Setting to 0x00000001 fixes the Windows issue.
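
(Illustrative sketch: in the CCID class descriptor, dwProtocols is a
little-endian u32 stored as a byte array, so T=0-only support reads as
follows.)

    0x01, 0x00, 0x00, 0x00,  /* u32 dwProtocols: bit 0 set = T=0 only */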

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Message-id: 20180420183219.20722-1-jandryuk@gmail.com
Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 0ee86bb6c5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Geert Uytterhoeven 1783745673 device_tree: Increase FDT_MAX_SIZE to 1 MiB
It is not uncommon for a contemporary FDT to be larger than 64 KiB,
leading to failures loading the device tree from sysfs:

    qemu-system-aarch64: qemu_fdt_setprop: Couldn't set ...: FDT_ERR_NOSPACE

Hence increase the limit to 1 MiB, like on PPC.

For reference, the largest arm64 DTB created from the Linux sources is
ca. 75 KiB (100 KiB when built with symbols/fixup support).

Cc: qemu-stable@nongnu.org
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Message-id: 1523541337-23919-1-git-send-email-geert+renesas@glider.be
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 14ec3cbd7c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
Marc-André Lureau 4319ae939c tests: fix tpm-crb tpm-tis tests race
No need to close the TPM data socket on the emulator end, qemu will
close it after a SHUTDOWN. This avoids a race between close() and
read() in the TPM data thread.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 7647d5c6b5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2018-07-11 11:48:16 -05:00
106 changed files with 2204 additions and 466 deletions

View File

@ -1781,6 +1781,12 @@ F: include/sysemu/replay.h
F: docs/replay.txt
F: stubs/replay.c
IOVA Tree
M: Peter Xu <peterx@redhat.com>
S: Maintained
F: include/qemu/iova-tree.h
F: util/iova-tree.c
Usermode Emulation
------------------
Overall

View File

@ -1 +1 @@
2.12.0
2.12.1

17
block.c
View File

@ -1620,13 +1620,24 @@ static int bdrv_reopen_get_flags(BlockReopenQueue *q, BlockDriverState *bs)
/* Returns whether the image file can be written to after the reopen queue @q
* has been successfully applied, or right now if @q is NULL. */
static bool bdrv_is_writable(BlockDriverState *bs, BlockReopenQueue *q)
static bool bdrv_is_writable_after_reopen(BlockDriverState *bs,
BlockReopenQueue *q)
{
int flags = bdrv_reopen_get_flags(q, bs);
return (flags & (BDRV_O_RDWR | BDRV_O_INACTIVE)) == BDRV_O_RDWR;
}
/*
* Return whether the BDS can be written to. This is not necessarily
* the same as !bdrv_is_read_only(bs), as inactivated images may not
* be written to but do not count as read-only images.
*/
bool bdrv_is_writable(BlockDriverState *bs)
{
return bdrv_is_writable_after_reopen(bs, NULL);
}
static void bdrv_child_perm(BlockDriverState *bs, BlockDriverState *child_bs,
BdrvChild *c, const BdrvChildRole *role,
BlockReopenQueue *reopen_queue,
@ -1664,7 +1675,7 @@ static int bdrv_check_perm(BlockDriverState *bs, BlockReopenQueue *q,
/* Write permissions never work with read-only images */
if ((cumulative_perms & (BLK_PERM_WRITE | BLK_PERM_WRITE_UNCHANGED)) &&
!bdrv_is_writable(bs, q))
!bdrv_is_writable_after_reopen(bs, q))
{
error_setg(errp, "Block node is read-only");
return -EPERM;
@ -1956,7 +1967,7 @@ void bdrv_format_default_perms(BlockDriverState *bs, BdrvChild *c,
&perm, &shared);
/* Format drivers may touch metadata even if the guest doesn't write */
if (bdrv_is_writable(bs, reopen_queue)) {
if (bdrv_is_writable_after_reopen(bs, reopen_queue)) {
perm |= BLK_PERM_WRITE | BLK_PERM_RESIZE;
}

View File

@ -1728,6 +1728,9 @@ static int raw_regular_truncate(int fd, int64_t offset, PreallocMode prealloc,
num = MIN(left, 65536);
result = write(fd, buf, num);
if (result < 0) {
if (errno == EINTR) {
continue;
}
result = -errno;
error_setg_errno(errp, -result,
"Could not write zeros for preallocation");

View File

@ -732,7 +732,7 @@ retry:
goto out_unlock;
}
*pnum = lbasd->num_blocks * iscsilun->block_size;
*pnum = (int64_t) lbasd->num_blocks * iscsilun->block_size;
if (lbasd->provisioning == SCSI_PROVISIONING_TYPE_DEALLOCATED ||
lbasd->provisioning == SCSI_PROVISIONING_TYPE_ANCHORED) {

View File

@ -868,12 +868,16 @@ static void coroutine_fn mirror_run(void *opaque)
}
ret = 0;
trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
if (block_job_is_cancelled(&s->common) && s->common.force) {
break;
} else if (!should_complete) {
if (s->synced && !should_complete) {
delay_ns = (s->in_flight == 0 && cnt == 0 ? SLICE_TIME : 0);
block_job_sleep_ns(&s->common, delay_ns);
}
trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
block_job_sleep_ns(&s->common, delay_ns);
if (block_job_is_cancelled(&s->common) &&
(!s->synced || s->common.force))
{
break;
}
s->last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
}

View File

@ -259,14 +259,18 @@ static int nbd_parse_blockstatus_payload(NBDClientSession *client,
if (extent->length == 0 ||
(client->info.min_block && !QEMU_IS_ALIGNED(extent->length,
client->info.min_block)) ||
extent->length > orig_length)
{
client->info.min_block))) {
error_setg(errp, "Protocol error: server sent status chunk with "
"invalid length");
return -EINVAL;
}
/* The server is allowed to send us extra information on the final
* extent; just clamp it to the length we requested. */
if (extent->length > orig_length) {
extent->length = orig_length;
}
return 0;
}

View File

@ -557,6 +557,7 @@ static BlockdevOptionsNfs *nfs_options_qdict_to_qapi(QDict *options,
BlockdevOptionsNfs *opts = NULL;
QObject *crumpled = NULL;
Visitor *v;
const QDictEntry *e;
Error *local_err = NULL;
crumpled = qdict_crumple(options, errp);
@ -573,6 +574,12 @@ static BlockdevOptionsNfs *nfs_options_qdict_to_qapi(QDict *options,
return NULL;
}
/* Remove the processed options from the QDict (the visitor processes
* _all_ options in the QDict) */
while ((e = qdict_first(options))) {
qdict_del(options, e->key);
}
return opts;
}

View File

@ -934,6 +934,7 @@ static int coroutine_fn qcow_co_create(BlockdevCreateOptions *opts,
ret = 0;
exit:
blk_unref(qcow_blk);
bdrv_unref(bs);
qcrypto_block_free(crypto);
return ret;
}

View File

@ -4386,7 +4386,7 @@ void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
char *message;
va_list ap;
fatal = fatal && !bs->read_only;
fatal = fatal && bdrv_is_writable(bs);
if (s->signaled_corruption &&
(!fatal || (s->incompatible_features & QCOW2_INCOMPAT_CORRUPT)))

View File

@ -167,16 +167,37 @@ static void raw_reopen_abort(BDRVReopenState *state)
state->opaque = NULL;
}
/* Check and adjust the offset, against 'offset' and 'size' options. */
static inline int raw_adjust_offset(BlockDriverState *bs, uint64_t *offset,
uint64_t bytes, bool is_write)
{
BDRVRawState *s = bs->opaque;
if (s->has_size && (*offset > s->size || bytes > (s->size - *offset))) {
/* There's not enough space for the write, or the read request is
* out-of-range. Don't read/write anything to prevent leaking out of
* the size specified in options. */
return is_write ? -ENOSPC : -EINVAL;
}
if (*offset > INT64_MAX - s->offset) {
return -EINVAL;
}
*offset += s->offset;
return 0;
}
static int coroutine_fn raw_co_preadv(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov,
int flags)
{
BDRVRawState *s = bs->opaque;
int ret;
if (offset > UINT64_MAX - s->offset) {
return -EINVAL;
ret = raw_adjust_offset(bs, &offset, bytes, false);
if (ret) {
return ret;
}
offset += s->offset;
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
@ -186,23 +207,11 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, uint64_t offset,
uint64_t bytes, QEMUIOVector *qiov,
int flags)
{
BDRVRawState *s = bs->opaque;
void *buf = NULL;
BlockDriver *drv;
QEMUIOVector local_qiov;
int ret;
if (s->has_size && (offset > s->size || bytes > (s->size - offset))) {
/* There's not enough space for the data. Don't write anything and just
* fail to prevent leaking out of the size specified in options. */
return -ENOSPC;
}
if (offset > UINT64_MAX - s->offset) {
ret = -EINVAL;
goto fail;
}
if (bs->probed && offset < BLOCK_PROBE_BUF_SIZE && bytes) {
/* Handling partial writes would be a pain - so we just
* require that guests have 512-byte request alignment if
@ -237,7 +246,10 @@ static int coroutine_fn raw_co_pwritev(BlockDriverState *bs, uint64_t offset,
qiov = &local_qiov;
}
offset += s->offset;
ret = raw_adjust_offset(bs, &offset, bytes, true);
if (ret) {
goto fail;
}
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
@ -267,22 +279,24 @@ static int coroutine_fn raw_co_pwrite_zeroes(BlockDriverState *bs,
int64_t offset, int bytes,
BdrvRequestFlags flags)
{
BDRVRawState *s = bs->opaque;
if (offset > UINT64_MAX - s->offset) {
return -EINVAL;
int ret;
ret = raw_adjust_offset(bs, (uint64_t *)&offset, bytes, true);
if (ret) {
return ret;
}
offset += s->offset;
return bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags);
}
static int coroutine_fn raw_co_pdiscard(BlockDriverState *bs,
int64_t offset, int bytes)
{
BDRVRawState *s = bs->opaque;
if (offset > UINT64_MAX - s->offset) {
return -EINVAL;
int ret;
ret = raw_adjust_offset(bs, (uint64_t *)&offset, bytes, true);
if (ret) {
return ret;
}
offset += s->offset;
return bdrv_co_pdiscard(bs->file->bs, offset, bytes);
}

View File

@ -36,9 +36,12 @@ static QemuOptsList throttle_opts = {
},
};
static int throttle_configure_tgm(BlockDriverState *bs,
ThrottleGroupMember *tgm,
QDict *options, Error **errp)
/*
* If this function succeeds then the throttle group name is stored in
* @group and must be freed by the caller.
* If there's an error then @group remains unmodified.
*/
static int throttle_parse_options(QDict *options, char **group, Error **errp)
{
int ret;
const char *group_name;
@ -63,8 +66,7 @@ static int throttle_configure_tgm(BlockDriverState *bs,
goto fin;
}
/* Register membership to group with name group_name */
throttle_group_register_tgm(tgm, group_name, bdrv_get_aio_context(bs));
*group = g_strdup(group_name);
ret = 0;
fin:
qemu_opts_del(opts);
@ -75,6 +77,8 @@ static int throttle_open(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
{
ThrottleGroupMember *tgm = bs->opaque;
char *group;
int ret;
bs->file = bdrv_open_child(NULL, options, "file", bs,
&child_file, false, errp);
@ -84,7 +88,14 @@ static int throttle_open(BlockDriverState *bs, QDict *options,
bs->supported_write_flags = bs->file->bs->supported_write_flags;
bs->supported_zero_flags = bs->file->bs->supported_zero_flags;
return throttle_configure_tgm(bs, tgm, options, errp);
ret = throttle_parse_options(options, &group, errp);
if (ret == 0) {
/* Register membership to group with name group_name */
throttle_group_register_tgm(tgm, group, bdrv_get_aio_context(bs));
g_free(group);
}
return ret;
}
static void throttle_close(BlockDriverState *bs)
@ -160,35 +171,36 @@ static void throttle_attach_aio_context(BlockDriverState *bs,
static int throttle_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue, Error **errp)
{
ThrottleGroupMember *tgm;
int ret;
char *group = NULL;
assert(reopen_state != NULL);
assert(reopen_state->bs != NULL);
reopen_state->opaque = g_new0(ThrottleGroupMember, 1);
tgm = reopen_state->opaque;
return throttle_configure_tgm(reopen_state->bs, tgm, reopen_state->options,
errp);
ret = throttle_parse_options(reopen_state->options, &group, errp);
reopen_state->opaque = group;
return ret;
}
static void throttle_reopen_commit(BDRVReopenState *reopen_state)
{
ThrottleGroupMember *old_tgm = reopen_state->bs->opaque;
ThrottleGroupMember *new_tgm = reopen_state->opaque;
BlockDriverState *bs = reopen_state->bs;
ThrottleGroupMember *tgm = bs->opaque;
char *group = reopen_state->opaque;
throttle_group_unregister_tgm(old_tgm);
g_free(old_tgm);
reopen_state->bs->opaque = new_tgm;
assert(group);
if (strcmp(group, throttle_group_get_name(tgm))) {
throttle_group_unregister_tgm(tgm);
throttle_group_register_tgm(tgm, group, bdrv_get_aio_context(bs));
}
g_free(reopen_state->opaque);
reopen_state->opaque = NULL;
}
static void throttle_reopen_abort(BDRVReopenState *reopen_state)
{
ThrottleGroupMember *tgm = reopen_state->opaque;
throttle_group_unregister_tgm(tgm);
g_free(tgm);
g_free(reopen_state->opaque);
reopen_state->opaque = NULL;
}

View File

@ -831,6 +831,8 @@ BlockJobInfo *block_job_query(BlockJob *job, Error **errp)
info->status = job->status;
info->auto_finalize = job->auto_finalize;
info->auto_dismiss = job->auto_dismiss;
info->has_error = job->ret != 0;
info->error = job->ret ? g_strdup(strerror(-job->ret)) : NULL;
return info;
}

View File

@ -304,6 +304,7 @@ void mux_set_focus(Chardev *chr, int focus)
}
d->focus = focus;
chr->be = d->backends[focus];
mux_chr_send_event(d, d->focus, CHR_EVENT_MUX_IN);
}

4
configure vendored
View File

@ -959,6 +959,8 @@ for opt do
;;
--firmwarepath=*) firmwarepath="$optarg"
;;
--host=*|--build=*|\
--disable-dependency-tracking|\
--sbindir=*|--sharedstatedir=*|\
--oldincludedir=*|--datarootdir=*|--infodir=*|--localedir=*|\
--htmldir=*|--dvidir=*|--pdfdir=*|--psdir=*)
@ -3732,7 +3734,7 @@ fi
fdt_required=no
for target in $target_list; do
case $target in
aarch64*-softmmu|arm*-softmmu|ppc*-softmmu|microblaze*-softmmu|mips64el-softmmu)
aarch64*-softmmu|arm*-softmmu|ppc*-softmmu|microblaze*-softmmu|mips64el-softmmu|riscv*-softmmu)
fdt_required=yes
;;
esac

18
cpus.c
View File

@ -1648,7 +1648,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
/* process any pending work */
cpu->exit_request = 1;
while (1) {
do {
if (cpu_can_run(cpu)) {
int r;
qemu_mutex_unlock_iothread();
@ -2218,11 +2218,25 @@ CpuInfoFastList *qmp_query_cpus_fast(Error **errp)
info->value->props = props;
}
#if defined(TARGET_S390X)
#if defined(TARGET_I386)
info->value->arch = CPU_INFO_ARCH_X86;
#elif defined(TARGET_PPC)
info->value->arch = CPU_INFO_ARCH_PPC;
#elif defined(TARGET_SPARC)
info->value->arch = CPU_INFO_ARCH_SPARC;
#elif defined(TARGET_MIPS)
info->value->arch = CPU_INFO_ARCH_MIPS;
#elif defined(TARGET_TRICORE)
info->value->arch = CPU_INFO_ARCH_TRICORE;
#elif defined(TARGET_S390X)
s390_cpu = S390_CPU(cpu);
env = &s390_cpu->env;
info->value->arch = CPU_INFO_ARCH_S390;
info->value->u.s390.cpu_state = env->cpu_state;
#elif defined(TARGET_RISCV)
info->value->arch = CPU_INFO_ARCH_RISCV;
#else
info->value->arch = CPU_INFO_ARCH_OTHER;
#endif
if (!cur_item) {
head = cur_item = info;

View File

@ -29,7 +29,7 @@
#include <libfdt.h>
#define FDT_MAX_SIZE 0x10000
#define FDT_MAX_SIZE 0x100000
void *create_device_tree(int *sizep)
{

View File

@ -602,34 +602,42 @@ static FloatParts pick_nan(FloatParts a, FloatParts b, float_status *s)
static FloatParts pick_nan_muladd(FloatParts a, FloatParts b, FloatParts c,
bool inf_zero, float_status *s)
{
int which;
if (is_snan(a.cls) || is_snan(b.cls) || is_snan(c.cls)) {
s->float_exception_flags |= float_flag_invalid;
}
if (s->default_nan_mode) {
a.cls = float_class_dnan;
} else {
switch (pickNaNMulAdd(is_qnan(a.cls), is_snan(a.cls),
is_qnan(b.cls), is_snan(b.cls),
is_qnan(c.cls), is_snan(c.cls),
inf_zero, s)) {
case 0:
break;
case 1:
a = b;
break;
case 2:
a = c;
break;
case 3:
a.cls = float_class_dnan;
return a;
default:
g_assert_not_reached();
}
which = pickNaNMulAdd(is_qnan(a.cls), is_snan(a.cls),
is_qnan(b.cls), is_snan(b.cls),
is_qnan(c.cls), is_snan(c.cls),
inf_zero, s);
a.cls = float_class_msnan;
if (s->default_nan_mode) {
/* Note that this check is after pickNaNMulAdd so that function
* has an opportunity to set the Invalid flag.
*/
a.cls = float_class_dnan;
return a;
}
switch (which) {
case 0:
break;
case 1:
a = b;
break;
case 2:
a = c;
break;
case 3:
a.cls = float_class_dnan;
return a;
default:
g_assert_not_reached();
}
a.cls = float_class_msnan;
return a;
}
@ -1360,14 +1368,14 @@ static int64_t round_to_int_and_pack(FloatParts in, int rmode,
r = UINT64_MAX;
}
if (p.sign) {
if (r < -(uint64_t) min) {
if (r <= -(uint64_t) min) {
return -r;
} else {
s->float_exception_flags = orig_flags | float_flag_invalid;
return min;
}
} else {
if (r < max) {
if (r <= max) {
return r;
} else {
s->float_exception_flags = orig_flags | float_flag_invalid;
@ -3139,7 +3147,7 @@ float128 uint64_to_float128(uint64_t a, float_status *status)
if (a == 0) {
return float128_zero;
}
return normalizeRoundAndPackFloat128(0, 0x406E, a, 0, status);
return normalizeRoundAndPackFloat128(0, 0x406E, 0, a, status);
}

View File

@ -169,7 +169,8 @@ void qxl_render_update(PCIQXLDevice *qxl)
qemu_mutex_lock(&qxl->ssd.lock);
if (!runstate_is_running() || !qxl->guest_primary.commands) {
if (!runstate_is_running() || !qxl->guest_primary.commands ||
qxl->mode == QXL_MODE_UNDEFINED) {
qxl_render_update_area_unlocked(qxl);
qemu_mutex_unlock(&qxl->ssd.lock);
return;

View File

@ -128,6 +128,22 @@ static uint64_t vtd_set_clear_mask_quad(IntelIOMMUState *s, hwaddr addr,
return new_val;
}
static inline void vtd_iommu_lock(IntelIOMMUState *s)
{
qemu_mutex_lock(&s->iommu_lock);
}
static inline void vtd_iommu_unlock(IntelIOMMUState *s)
{
qemu_mutex_unlock(&s->iommu_lock);
}
/* Whether the address space needs to notify new mappings */
static inline gboolean vtd_as_has_map_notifier(VTDAddressSpace *as)
{
return as->notifier_flags & IOMMU_NOTIFIER_MAP;
}
/* GHashTable functions */
static gboolean vtd_uint64_equal(gconstpointer v1, gconstpointer v2)
{
@ -172,9 +188,9 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
}
/* Reset all the gen of VTDAddressSpace to zero and set the gen of
* IntelIOMMUState to 1.
* IntelIOMMUState to 1. Must be called with IOMMU lock held.
*/
static void vtd_reset_context_cache(IntelIOMMUState *s)
static void vtd_reset_context_cache_locked(IntelIOMMUState *s)
{
VTDAddressSpace *vtd_as;
VTDBus *vtd_bus;
@ -197,12 +213,20 @@ static void vtd_reset_context_cache(IntelIOMMUState *s)
s->context_cache_gen = 1;
}
static void vtd_reset_iotlb(IntelIOMMUState *s)
/* Must be called with IOMMU lock held. */
static void vtd_reset_iotlb_locked(IntelIOMMUState *s)
{
assert(s->iotlb);
g_hash_table_remove_all(s->iotlb);
}
static void vtd_reset_iotlb(IntelIOMMUState *s)
{
vtd_iommu_lock(s);
vtd_reset_iotlb_locked(s);
vtd_iommu_unlock(s);
}
static uint64_t vtd_get_iotlb_key(uint64_t gfn, uint16_t source_id,
uint32_t level)
{
@ -215,6 +239,7 @@ static uint64_t vtd_get_iotlb_gfn(hwaddr addr, uint32_t level)
return (addr & vtd_slpt_level_page_mask(level)) >> VTD_PAGE_SHIFT_4K;
}
/* Must be called with IOMMU lock held */
static VTDIOTLBEntry *vtd_lookup_iotlb(IntelIOMMUState *s, uint16_t source_id,
hwaddr addr)
{
@ -235,6 +260,7 @@ out:
return entry;
}
/* Must be with IOMMU lock held */
static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
uint16_t domain_id, hwaddr addr, uint64_t slpte,
uint8_t access_flags, uint32_t level)
@ -246,7 +272,7 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
trace_vtd_iotlb_page_update(source_id, addr, slpte, domain_id);
if (g_hash_table_size(s->iotlb) >= VTD_IOTLB_MAX_SIZE) {
trace_vtd_iotlb_reset("iotlb exceeds size limit");
vtd_reset_iotlb(s);
vtd_reset_iotlb_locked(s);
}
entry->gfn = gfn;
@ -722,23 +748,117 @@ static int vtd_iova_to_slpte(VTDContextEntry *ce, uint64_t iova, bool is_write,
typedef int (*vtd_page_walk_hook)(IOMMUTLBEntry *entry, void *private);
/**
* Constant information used during page walking
*
* @hook_fn: hook func to be called when detected page
* @private: private data to be passed into hook func
* @notify_unmap: whether we should notify invalid entries
* @as: VT-d address space of the device
* @aw: maximum address width
* @domain: domain ID of the page walk
*/
typedef struct {
VTDAddressSpace *as;
vtd_page_walk_hook hook_fn;
void *private;
bool notify_unmap;
uint8_t aw;
uint16_t domain_id;
} vtd_page_walk_info;
static int vtd_page_walk_one(IOMMUTLBEntry *entry, vtd_page_walk_info *info)
{
VTDAddressSpace *as = info->as;
vtd_page_walk_hook hook_fn = info->hook_fn;
void *private = info->private;
DMAMap target = {
.iova = entry->iova,
.size = entry->addr_mask,
.translated_addr = entry->translated_addr,
.perm = entry->perm,
};
DMAMap *mapped = iova_tree_find(as->iova_tree, &target);
if (entry->perm == IOMMU_NONE && !info->notify_unmap) {
trace_vtd_page_walk_one_skip_unmap(entry->iova, entry->addr_mask);
return 0;
}
assert(hook_fn);
/* Update local IOVA mapped ranges */
if (entry->perm) {
if (mapped) {
/* If it's exactly the same translation, skip */
if (!memcmp(mapped, &target, sizeof(target))) {
trace_vtd_page_walk_one_skip_map(entry->iova, entry->addr_mask,
entry->translated_addr);
return 0;
} else {
/*
* Translation changed. Normally this should not
* happen, but it can happen when with buggy guest
* OSes. Note that there will be a small window that
* we don't have map at all. But that's the best
* effort we can do. The ideal way to emulate this is
* atomically modify the PTE to follow what has
* changed, but we can't. One example is that vfio
* driver only has VFIO_IOMMU_[UN]MAP_DMA but no
* interface to modify a mapping (meanwhile it seems
* meaningless to even provide one). Anyway, let's
* mark this as a TODO in case one day we'll have
* a better solution.
*/
IOMMUAccessFlags cache_perm = entry->perm;
int ret;
/* Emulate an UNMAP */
entry->perm = IOMMU_NONE;
trace_vtd_page_walk_one(info->domain_id,
entry->iova,
entry->translated_addr,
entry->addr_mask,
entry->perm);
ret = hook_fn(entry, private);
if (ret) {
return ret;
}
/* Drop any existing mapping */
iova_tree_remove(as->iova_tree, &target);
/* Recover the correct permission */
entry->perm = cache_perm;
}
}
iova_tree_insert(as->iova_tree, &target);
} else {
if (!mapped) {
/* Skip since we didn't map this range at all */
trace_vtd_page_walk_one_skip_unmap(entry->iova, entry->addr_mask);
return 0;
}
iova_tree_remove(as->iova_tree, &target);
}
trace_vtd_page_walk_one(info->domain_id, entry->iova,
entry->translated_addr, entry->addr_mask,
entry->perm);
return hook_fn(entry, private);
}
/**
* vtd_page_walk_level - walk over specific level for IOVA range
*
* @addr: base GPA addr to start the walk
* @start: IOVA range start address
* @end: IOVA range end address (start <= addr < end)
* @hook_fn: hook func to be called when detected page
* @private: private data to be passed into hook func
* @read: whether parent level has read permission
* @write: whether parent level has write permission
* @notify_unmap: whether we should notify invalid entries
* @aw: maximum address width
* @info: constant information for the page walk
*/
static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
uint64_t end, vtd_page_walk_hook hook_fn,
void *private, uint32_t level, bool read,
bool write, bool notify_unmap, uint8_t aw)
uint64_t end, uint32_t level, bool read,
bool write, vtd_page_walk_info *info)
{
bool read_cur, write_cur, entry_valid;
uint32_t offset;
@ -781,37 +901,34 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
*/
entry_valid = read_cur | write_cur;
if (vtd_is_last_slpte(slpte, level)) {
if (!vtd_is_last_slpte(slpte, level) && entry_valid) {
/*
* This is a valid PDE (or a higher-level entry). We need
* to walk one further level.
*/
ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, info->aw),
iova, MIN(iova_next, end), level - 1,
read_cur, write_cur, info);
} else {
/*
* This means we have reached either:
*
* (1) the real page entry (either a 4K page or a huge page)
* (2) a range that is entirely invalid
*
* In either case, we send an IOTLB notification down.
*/
entry.target_as = &address_space_memory;
entry.iova = iova & subpage_mask;
/* NOTE: this is only meaningful if entry_valid == true */
entry.translated_addr = vtd_get_slpte_addr(slpte, aw);
entry.addr_mask = ~subpage_mask;
entry.perm = IOMMU_ACCESS_FLAG(read_cur, write_cur);
if (!entry_valid && !notify_unmap) {
trace_vtd_page_walk_skip_perm(iova, iova_next);
goto next;
}
trace_vtd_page_walk_one(level, entry.iova, entry.translated_addr,
entry.addr_mask, entry.perm);
if (hook_fn) {
ret = hook_fn(&entry, private);
if (ret < 0) {
return ret;
}
}
} else {
if (!entry_valid) {
trace_vtd_page_walk_skip_perm(iova, iova_next);
goto next;
}
ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, aw), iova,
MIN(iova_next, end), hook_fn, private,
level - 1, read_cur, write_cur,
notify_unmap, aw);
if (ret < 0) {
return ret;
}
entry.addr_mask = ~subpage_mask;
/* NOTE: this is only meaningful if entry_valid == true */
entry.translated_addr = vtd_get_slpte_addr(slpte, info->aw);
ret = vtd_page_walk_one(&entry, info);
}
if (ret < 0) {
return ret;
}
next:
@ -827,28 +944,24 @@ next:
* @ce: context entry to walk upon
* @start: IOVA address to start the walk
* @end: IOVA range end address (start <= addr < end)
* @hook_fn: the hook that to be called for each detected area
* @private: private data for the hook function
* @aw: maximum address width
* @info: page walking information struct
*/
static int vtd_page_walk(VTDContextEntry *ce, uint64_t start, uint64_t end,
vtd_page_walk_hook hook_fn, void *private,
bool notify_unmap, uint8_t aw)
vtd_page_walk_info *info)
{
dma_addr_t addr = vtd_ce_get_slpt_base(ce);
uint32_t level = vtd_ce_get_level(ce);
if (!vtd_iova_range_check(start, ce, aw)) {
if (!vtd_iova_range_check(start, ce, info->aw)) {
return -VTD_FR_ADDR_BEYOND_MGAW;
}
if (!vtd_iova_range_check(end, ce, aw)) {
if (!vtd_iova_range_check(end, ce, info->aw)) {
/* Fix end so that it reaches the maximum */
end = vtd_iova_limit(ce, aw);
end = vtd_iova_limit(ce, info->aw);
}
return vtd_page_walk_level(addr, start, end, hook_fn, private,
level, true, true, notify_unmap, aw);
return vtd_page_walk_level(addr, start, end, level, true, true, info);
}
/* Map a device to its corresponding domain (context-entry) */
@ -907,6 +1020,58 @@ static int vtd_dev_to_context_entry(IntelIOMMUState *s, uint8_t bus_num,
return 0;
}
static int vtd_sync_shadow_page_hook(IOMMUTLBEntry *entry,
void *private)
{
memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
return 0;
}
/* If context entry is NULL, we'll try to fetch it on our own. */
static int vtd_sync_shadow_page_table_range(VTDAddressSpace *vtd_as,
VTDContextEntry *ce,
hwaddr addr, hwaddr size)
{
IntelIOMMUState *s = vtd_as->iommu_state;
vtd_page_walk_info info = {
.hook_fn = vtd_sync_shadow_page_hook,
.private = (void *)&vtd_as->iommu,
.notify_unmap = true,
.aw = s->aw_bits,
.as = vtd_as,
};
VTDContextEntry ce_cache;
int ret;
if (ce) {
/* If the caller provided context entry, use it */
ce_cache = *ce;
} else {
/* If the caller didn't provide ce, try to fetch */
ret = vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
vtd_as->devfn, &ce_cache);
if (ret) {
/*
* This should not really happen, but if it does, we just
* skip the sync this time. After all, we don't even have
* the root table pointer!
*/
trace_vtd_err("Detected invalid context entry when "
"trying to sync shadow page table");
return 0;
}
}
info.domain_id = VTD_CONTEXT_ENTRY_DID(ce_cache.hi);
return vtd_page_walk(&ce_cache, addr, addr + size, &info);
}
static int vtd_sync_shadow_page_table(VTDAddressSpace *vtd_as)
{
return vtd_sync_shadow_page_table_range(vtd_as, NULL, 0, UINT64_MAX);
}
/*
* Fetch translation type for specific device. Returns <0 if error
* happens, otherwise return the shifted type to check against
@ -1088,7 +1253,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
IntelIOMMUState *s = vtd_as->iommu_state;
VTDContextEntry ce;
uint8_t bus_num = pci_bus_num(bus);
VTDContextCacheEntry *cc_entry = &vtd_as->context_cache_entry;
VTDContextCacheEntry *cc_entry;
uint64_t slpte, page_mask;
uint32_t level;
uint16_t source_id = vtd_make_source_id(bus_num, devfn);
@ -1105,6 +1270,10 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
*/
assert(!vtd_is_interrupt_addr(addr));
vtd_iommu_lock(s);
cc_entry = &vtd_as->context_cache_entry;
/* Try to fetch slpte from IOTLB */
iotlb_entry = vtd_lookup_iotlb(s, source_id, addr);
if (iotlb_entry) {
@ -1164,7 +1333,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
* IOMMU region can be swapped back.
*/
vtd_pt_enable_fast_path(s, source_id);
vtd_iommu_unlock(s);
return true;
}
@ -1185,6 +1354,7 @@ static bool vtd_do_iommu_translate(VTDAddressSpace *vtd_as, PCIBus *bus,
vtd_update_iotlb(s, source_id, VTD_CONTEXT_ENTRY_DID(ce.hi), addr, slpte,
access_flags, level);
out:
vtd_iommu_unlock(s);
entry->iova = addr & page_mask;
entry->translated_addr = vtd_get_slpte_addr(slpte, s->aw_bits) & page_mask;
entry->addr_mask = ~page_mask;
@ -1192,6 +1362,7 @@ out:
return true;
error:
vtd_iommu_unlock(s);
entry->iova = 0;
entry->translated_addr = 0;
entry->addr_mask = 0;
@ -1230,20 +1401,23 @@ static void vtd_interrupt_remap_table_setup(IntelIOMMUState *s)
static void vtd_iommu_replay_all(IntelIOMMUState *s)
{
IntelIOMMUNotifierNode *node;
VTDAddressSpace *vtd_as;
QLIST_FOREACH(node, &s->notifiers_list, next) {
memory_region_iommu_replay_all(&node->vtd_as->iommu);
QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
vtd_sync_shadow_page_table(vtd_as);
}
}
static void vtd_context_global_invalidate(IntelIOMMUState *s)
{
trace_vtd_inv_desc_cc_global();
/* Protects context cache */
vtd_iommu_lock(s);
s->context_cache_gen++;
if (s->context_cache_gen == VTD_CONTEXT_CACHE_GEN_MAX) {
vtd_reset_context_cache(s);
vtd_reset_context_cache_locked(s);
}
vtd_iommu_unlock(s);
vtd_switch_address_space_all(s);
/*
* From VT-d spec 6.5.2.1, a global context entry invalidation
@ -1295,7 +1469,9 @@ static void vtd_context_device_invalidate(IntelIOMMUState *s,
if (vtd_as && ((devfn_it & mask) == (devfn & mask))) {
trace_vtd_inv_desc_cc_device(bus_n, VTD_PCI_SLOT(devfn_it),
VTD_PCI_FUNC(devfn_it));
vtd_iommu_lock(s);
vtd_as->context_cache_entry.context_cache_gen = 0;
vtd_iommu_unlock(s);
/*
* Do switch address space when needed, in case if the
* device passthrough bit is switched.
@ -1303,14 +1479,13 @@ static void vtd_context_device_invalidate(IntelIOMMUState *s,
vtd_switch_address_space(vtd_as);
/*
* So a device is moving out of (or moving into) a
* domain, a replay() suites here to notify all the
* IOMMU_NOTIFIER_MAP registers about this change.
* domain, resync the shadow page table.
* This does no harm even if we have no such
* notifier registered - the IOMMU notification
* framework will skip MAP notifications in that
* case.
*/
memory_region_iommu_replay_all(&vtd_as->iommu);
vtd_sync_shadow_page_table(vtd_as);
}
}
}
@ -1354,48 +1529,60 @@ static void vtd_iotlb_global_invalidate(IntelIOMMUState *s)
static void vtd_iotlb_domain_invalidate(IntelIOMMUState *s, uint16_t domain_id)
{
IntelIOMMUNotifierNode *node;
VTDContextEntry ce;
VTDAddressSpace *vtd_as;
trace_vtd_inv_desc_iotlb_domain(domain_id);
vtd_iommu_lock(s);
g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_domain,
&domain_id);
vtd_iommu_unlock(s);
QLIST_FOREACH(node, &s->notifiers_list, next) {
vtd_as = node->vtd_as;
QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
if (!vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
vtd_as->devfn, &ce) &&
domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
memory_region_iommu_replay_all(&vtd_as->iommu);
vtd_sync_shadow_page_table(vtd_as);
}
}
}
static int vtd_page_invalidate_notify_hook(IOMMUTLBEntry *entry,
void *private)
{
memory_region_notify_iommu((IOMMUMemoryRegion *)private, *entry);
return 0;
}
static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
uint16_t domain_id, hwaddr addr,
uint8_t am)
{
IntelIOMMUNotifierNode *node;
VTDAddressSpace *vtd_as;
VTDContextEntry ce;
int ret;
hwaddr size = (1 << am) * VTD_PAGE_SIZE;
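/*
 * Editor's note (illustrative): am is the address-mask order in 4K
 * pages, so e.g. am == 4 gives size = 16 * 4KiB = 64KiB, assuming
 * VTD_PAGE_SIZE is 4KiB.
 */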
QLIST_FOREACH(node, &(s->notifiers_list), next) {
VTDAddressSpace *vtd_as = node->vtd_as;
QLIST_FOREACH(vtd_as, &(s->vtd_as_with_notifiers), next) {
ret = vtd_dev_to_context_entry(s, pci_bus_num(vtd_as->bus),
vtd_as->devfn, &ce);
if (!ret && domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
vtd_page_walk(&ce, addr, addr + (1 << am) * VTD_PAGE_SIZE,
vtd_page_invalidate_notify_hook,
(void *)&vtd_as->iommu, true, s->aw_bits);
if (vtd_as_has_map_notifier(vtd_as)) {
/*
* As long as we have MAP notifications registered in
* any of our IOMMU notifiers, we need to sync the
* shadow page table.
*/
vtd_sync_shadow_page_table_range(vtd_as, &ce, addr, size);
} else {
/*
* For UNMAP-only notifiers, we don't need to walk the
* page tables. We just deliver the PSI down to
* invalidate caches.
*/
IOMMUTLBEntry entry = {
.target_as = &address_space_memory,
.iova = addr,
.translated_addr = 0,
.addr_mask = size - 1,
.perm = IOMMU_NONE,
};
memory_region_notify_iommu(&vtd_as->iommu, entry);
}
}
}
}
@ -1411,7 +1598,9 @@ static void vtd_iotlb_page_invalidate(IntelIOMMUState *s, uint16_t domain_id,
info.domain_id = domain_id;
info.addr = addr;
info.mask = ~((1 << am) - 1);
vtd_iommu_lock(s);
g_hash_table_foreach_remove(s->iotlb, vtd_hash_remove_by_page, &info);
vtd_iommu_unlock(s);
vtd_iotlb_page_invalidate_notify(s, domain_id, addr, am);
}
@ -2326,8 +2515,6 @@ static void vtd_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu,
{
VTDAddressSpace *vtd_as = container_of(iommu, VTDAddressSpace, iommu);
IntelIOMMUState *s = vtd_as->iommu_state;
IntelIOMMUNotifierNode *node = NULL;
IntelIOMMUNotifierNode *next_node = NULL;
if (!s->caching_mode && new & IOMMU_NOTIFIER_MAP) {
error_report("We need to set caching-mode=1 for intel-iommu to enable "
@ -2335,22 +2522,13 @@ static void vtd_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu,
exit(1);
}
if (old == IOMMU_NOTIFIER_NONE) {
node = g_malloc0(sizeof(*node));
node->vtd_as = vtd_as;
QLIST_INSERT_HEAD(&s->notifiers_list, node, next);
return;
}
/* Update per-address-space notifier flags */
vtd_as->notifier_flags = new;
/* update notifier node with new flags */
QLIST_FOREACH_SAFE(node, &s->notifiers_list, next, next_node) {
if (node->vtd_as == vtd_as) {
if (new == IOMMU_NOTIFIER_NONE) {
QLIST_REMOVE(node, next);
g_free(node);
}
return;
}
if (old == IOMMU_NOTIFIER_NONE) {
QLIST_INSERT_HEAD(&s->vtd_as_with_notifiers, vtd_as, next);
} else if (new == IOMMU_NOTIFIER_NONE) {
QLIST_REMOVE(vtd_as, next);
}
}
@ -2719,6 +2897,7 @@ VTDAddressSpace *vtd_find_add_as(IntelIOMMUState *s, PCIBus *bus, int devfn)
vtd_dev_as->devfn = (uint8_t)devfn;
vtd_dev_as->iommu_state = s;
vtd_dev_as->context_cache_entry.context_cache_gen = 0;
vtd_dev_as->iova_tree = iova_tree_new();
/*
* Memory region relationships look like (Address range shows
@ -2771,6 +2950,7 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
hwaddr start = n->start;
hwaddr end = n->end;
IntelIOMMUState *s = as->iommu_state;
DMAMap map;
/*
* Note: all the code in this function assumes that the IOVA
@ -2815,17 +2995,19 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
VTD_PCI_FUNC(as->devfn),
entry.iova, size);
map.iova = entry.iova;
map.size = entry.addr_mask;
iova_tree_remove(as->iova_tree, &map);
memory_region_notify_one(n, &entry);
}
static void vtd_address_space_unmap_all(IntelIOMMUState *s)
{
IntelIOMMUNotifierNode *node;
VTDAddressSpace *vtd_as;
IOMMUNotifier *n;
QLIST_FOREACH(node, &s->notifiers_list, next) {
vtd_as = node->vtd_as;
QLIST_FOREACH(vtd_as, &s->vtd_as_with_notifiers, next) {
IOMMU_NOTIFIER_FOREACH(n, &vtd_as->iommu) {
vtd_address_space_unmap(vtd_as, n);
}
@ -2857,8 +3039,19 @@ static void vtd_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
PCI_FUNC(vtd_as->devfn),
VTD_CONTEXT_ENTRY_DID(ce.hi),
ce.hi, ce.lo);
vtd_page_walk(&ce, 0, ~0ULL, vtd_replay_hook, (void *)n, false,
s->aw_bits);
if (vtd_as_has_map_notifier(vtd_as)) {
/* This is required only for MAP typed notifiers */
vtd_page_walk_info info = {
.hook_fn = vtd_replay_hook,
.private = (void *)n,
.notify_unmap = false,
.aw = s->aw_bits,
.as = vtd_as,
.domain_id = VTD_CONTEXT_ENTRY_DID(ce.hi),
};
vtd_page_walk(&ce, 0, ~0ULL, &info);
}
} else {
trace_vtd_replay_ce_invalid(bus_n, PCI_SLOT(vtd_as->devfn),
PCI_FUNC(vtd_as->devfn));
@ -2930,8 +3123,10 @@ static void vtd_init(IntelIOMMUState *s)
s->cap |= VTD_CAP_CM;
}
vtd_reset_context_cache(s);
vtd_reset_iotlb(s);
vtd_iommu_lock(s);
vtd_reset_context_cache_locked(s);
vtd_reset_iotlb_locked(s);
vtd_iommu_unlock(s);
/* Define registers with default values and bit semantics */
vtd_define_long(s, DMAR_VER_REG, 0x10UL, 0, 0);
@ -3070,7 +3265,8 @@ static void vtd_realize(DeviceState *dev, Error **errp)
return;
}
QLIST_INIT(&s->notifiers_list);
QLIST_INIT(&s->vtd_as_with_notifiers);
qemu_mutex_init(&s->iommu_lock);
memset(s->vtd_as_by_bus_num, 0, sizeof(s->vtd_as_by_bus_num));
memory_region_init_io(&s->csrmem, OBJECT(s), &vtd_mem_ops, s,
"intel_iommu", DMAR_REG_SIZE);


@ -39,9 +39,10 @@ vtd_fault_disabled(void) "Fault processing disabled for context entry"
vtd_replay_ce_valid(uint8_t bus, uint8_t dev, uint8_t fn, uint16_t domain, uint64_t hi, uint64_t lo) "replay valid context device %02"PRIx8":%02"PRIx8".%02"PRIx8" domain 0x%"PRIx16" hi 0x%"PRIx64" lo 0x%"PRIx64
vtd_replay_ce_invalid(uint8_t bus, uint8_t dev, uint8_t fn) "replay invalid context device %02"PRIx8":%02"PRIx8".%02"PRIx8
vtd_page_walk_level(uint64_t addr, uint32_t level, uint64_t start, uint64_t end) "walk (base=0x%"PRIx64", level=%"PRIu32") iova range 0x%"PRIx64" - 0x%"PRIx64
vtd_page_walk_one(uint32_t level, uint64_t iova, uint64_t gpa, uint64_t mask, int perm) "detected page level 0x%"PRIx32" iova 0x%"PRIx64" -> gpa 0x%"PRIx64" mask 0x%"PRIx64" perm %d"
vtd_page_walk_one(uint16_t domain, uint64_t iova, uint64_t gpa, uint64_t mask, int perm) "domain 0x%"PRIx16" iova 0x%"PRIx64" -> gpa 0x%"PRIx64" mask 0x%"PRIx64" perm %d"
vtd_page_walk_one_skip_map(uint64_t iova, uint64_t mask, uint64_t translated) "iova 0x%"PRIx64" mask 0x%"PRIx64" translated 0x%"PRIx64
vtd_page_walk_one_skip_unmap(uint64_t iova, uint64_t mask) "iova 0x%"PRIx64" mask 0x%"PRIx64
vtd_page_walk_skip_read(uint64_t iova, uint64_t next) "Page walk skip iova 0x%"PRIx64" - 0x%"PRIx64" due to unable to read"
vtd_page_walk_skip_perm(uint64_t iova, uint64_t next) "Page walk skip iova 0x%"PRIx64" - 0x%"PRIx64" due to perm empty"
vtd_page_walk_skip_reserve(uint64_t iova, uint64_t next) "Page walk skip iova 0x%"PRIx64" - 0x%"PRIx64" due to rsrv set"
vtd_switch_address_space(uint8_t bus, uint8_t slot, uint8_t fn, bool on) "Device %02x:%02x.%x switching address space (iommu enabled=%d)"
vtd_as_unmap_whole(uint8_t bus, uint8_t slot, uint8_t fn, uint64_t iova, uint64_t size) "Device %02x:%02x.%x start 0x%"PRIx64" size 0x%"PRIx64


@ -532,13 +532,6 @@ static void ahci_check_cmd_bh(void *opaque)
qemu_bh_delete(ad->check_bh);
ad->check_bh = NULL;
if ((ad->busy_slot != -1) &&
!(ad->port.ifs[0].status & (BUSY_STAT|DRQ_STAT))) {
/* no longer busy */
ad->port_regs.cmd_issue &= ~(1 << ad->busy_slot);
ad->busy_slot = -1;
}
check_cmd(ad->hba, ad->port_no);
}
@ -1425,6 +1418,12 @@ static void ahci_cmd_done(IDEDMA *dma)
trace_ahci_cmd_done(ad->hba, ad->port_no);
/* no longer busy */
if (ad->busy_slot != -1) {
ad->port_regs.cmd_issue &= ~(1 << ad->busy_slot);
ad->busy_slot = -1;
}
/* update d2h status */
ahci_write_fis_d2h(ad);


@ -27,6 +27,7 @@
#include "hw/intc/arm_gicv3_common.h"
#include "gicv3_internal.h"
#include "hw/arm/linux-boot-if.h"
#include "sysemu/kvm.h"
static int gicv3_pre_save(void *opaque)
{
@ -141,6 +142,79 @@ static const VMStateDescription vmstate_gicv3_cpu = {
}
};
static int gicv3_gicd_no_migration_shift_bug_pre_load(void *opaque)
{
GICv3State *cs = opaque;
/*
* The gicd_no_migration_shift_bug flag is used for migration compatibility
* with old QEMU versions, which may have the GICD bitmap shift bug under KVM.
* Strictly, what we want to know is whether the migration source is using
* KVM. Since we don't have any way to determine that, we look at whether the
* destination is using KVM; this is close enough because for the older QEMU
* versions with this bug KVM -> TCG migration didn't work anyway. If the
* source is a newer QEMU without this bug it will transmit the migration
* subsection which sets the flag to true; otherwise it will remain set to
* the value we select here.
*/
if (kvm_enabled()) {
cs->gicd_no_migration_shift_bug = false;
}
return 0;
}
static int gicv3_gicd_no_migration_shift_bug_post_load(void *opaque,
int version_id)
{
GICv3State *cs = opaque;
if (cs->gicd_no_migration_shift_bug) {
return 0;
}
/* Older versions of QEMU had a bug in the handling of state save/restore
* to the KVM GICv3: they got the offset in the bitmap arrays wrong,
* so that instead of the data for external interrupts 32 and up
* starting at bit position 32 in the bitmap, it started at bit
* position 64. If we're receiving data from a QEMU with that bug,
* we must move the data down into the right place.
*/
memmove(cs->group, (uint8_t *)cs->group + GIC_INTERNAL / 8,
sizeof(cs->group) - GIC_INTERNAL / 8);
memmove(cs->grpmod, (uint8_t *)cs->grpmod + GIC_INTERNAL / 8,
sizeof(cs->grpmod) - GIC_INTERNAL / 8);
memmove(cs->enabled, (uint8_t *)cs->enabled + GIC_INTERNAL / 8,
sizeof(cs->enabled) - GIC_INTERNAL / 8);
memmove(cs->pending, (uint8_t *)cs->pending + GIC_INTERNAL / 8,
sizeof(cs->pending) - GIC_INTERNAL / 8);
memmove(cs->active, (uint8_t *)cs->active + GIC_INTERNAL / 8,
sizeof(cs->active) - GIC_INTERNAL / 8);
memmove(cs->edge_trigger, (uint8_t *)cs->edge_trigger + GIC_INTERNAL / 8,
sizeof(cs->edge_trigger) - GIC_INTERNAL / 8);
/*
* This version of QEMU does not have this bug, so set the flag to
* true to indicate that; this is necessary for a subsequent
* migration from this QEMU to work.
*/
cs->gicd_no_migration_shift_bug = true;
return 0;
}
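/*
 * Editor's note, a worked example of the shift above (assuming
 * GIC_INTERNAL is 32): GIC_INTERNAL / 8 = 4 bytes. In a buggy stream
 * the group bit of IRQ 32 sits at bit 64 (byte 8); the memmove shifts
 * the whole array down by 4 bytes, so that bit lands at bit 32
 * (byte 4), where the GICD data for external interrupts belongs.
 */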
const VMStateDescription vmstate_gicv3_gicd_no_migration_shift_bug = {
.name = "arm_gicv3/gicd_no_migration_shift_bug",
.version_id = 1,
.minimum_version_id = 1,
.pre_load = gicv3_gicd_no_migration_shift_bug_pre_load,
.post_load = gicv3_gicd_no_migration_shift_bug_post_load,
.fields = (VMStateField[]) {
VMSTATE_BOOL(gicd_no_migration_shift_bug, GICv3State),
VMSTATE_END_OF_LIST()
}
};
static const VMStateDescription vmstate_gicv3 = {
.name = "arm_gicv3",
.version_id = 1,
@ -165,6 +239,10 @@ static const VMStateDescription vmstate_gicv3 = {
VMSTATE_STRUCT_VARRAY_POINTER_UINT32(cpu, GICv3State, num_cpu,
vmstate_gicv3_cpu, GICv3CPUState),
VMSTATE_END_OF_LIST()
},
.subsections = (const VMStateDescription * []) {
&vmstate_gicv3_gicd_no_migration_shift_bug,
NULL
}
};
@ -364,6 +442,7 @@ static void arm_gicv3_common_reset(DeviceState *dev)
gicv3_gicd_group_set(s, i);
}
}
s->gicd_no_migration_shift_bug = true;
}
static void arm_gic_common_linux_init(ARMLinuxBootIf *obj,


@ -431,7 +431,7 @@ static uint64_t icv_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
uint64_t value = cs->ich_apr[grp][regno];
trace_gicv3_icv_ap_read(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@ -443,7 +443,7 @@ static void icv_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
trace_gicv3_icv_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);
@ -1465,7 +1465,7 @@ static uint64_t icc_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
uint64_t value;
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
return icv_ap_read(env, ri);
@ -1487,7 +1487,7 @@ static void icc_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1;
int grp = (ri->crm & 1) ? GICV3_G1 : GICV3_G0;
if (icv_access(env, grp == GICV3_G0 ? HCR_FMO : HCR_IMO)) {
icv_ap_write(env, ri, value);
@ -2296,7 +2296,7 @@ static uint64_t ich_ap_read(CPUARMState *env, const ARMCPRegInfo *ri)
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
uint64_t value;
value = cs->ich_apr[grp][regno];
@ -2309,7 +2309,7 @@ static void ich_ap_write(CPUARMState *env, const ARMCPRegInfo *ri,
{
GICv3CPUState *cs = icc_cs_from_env(env);
int regno = ri->opc2 & 3;
int grp = ri->crm & 1 ? GICV3_G0 : GICV3_G1NS;
int grp = (ri->crm & 1) ? GICV3_G1NS : GICV3_G0;
trace_gicv3_ich_ap_write(ri->crm & 1, regno, gicv3_redist_affid(cs), value);


@ -135,7 +135,14 @@ static void kvm_dist_get_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
uint32_t reg, *field;
int irq;
field = (uint32_t *)bmp;
/* For the KVM GICv3, affinity routing is always enabled, and the first 8
 * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
 * functionality is replaced by GICR_IPRIORITYR<n>, so we don't need to
 * sync them. Skip the fields of the GIC_INTERNAL irqs in both bmp and
 * offset.
 */
field = (uint32_t *)(bmp + GIC_INTERNAL);
offset += (GIC_INTERNAL * 8) / 8;
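/* Editor's note: with GIC_INTERNAL == 32 this skips 32 * 8 / 8 = 32
 * bytes, i.e. the priority bytes of the 32 internal interrupts. */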
for_each_dist_irq_reg(irq, s->num_irq, 8) {
kvm_gicd_access(s, offset, &reg, false);
*field = reg;
@ -149,7 +156,14 @@ static void kvm_dist_put_priority(GICv3State *s, uint32_t offset, uint8_t *bmp)
uint32_t reg, *field;
int irq;
field = (uint32_t *)bmp;
/* For the KVM GICv3, affinity routing is always enabled, and the first 8
 * GICD_IPRIORITYR<n> registers are always RAZ/WI. The corresponding
 * functionality is replaced by GICR_IPRIORITYR<n>, so we don't need to
 * sync them. Skip the fields of the GIC_INTERNAL irqs in both bmp and
 * offset.
 */
field = (uint32_t *)(bmp + GIC_INTERNAL);
offset += (GIC_INTERNAL * 8) / 8;
for_each_dist_irq_reg(irq, s->num_irq, 8) {
reg = *field;
kvm_gicd_access(s, offset, &reg, true);
@ -164,6 +178,14 @@ static void kvm_dist_get_edge_trigger(GICv3State *s, uint32_t offset,
uint32_t reg;
int irq;
/* For the KVM GICv3, affinity routing is always enabled, and the first 2
 * GICD_ICFGR<n> registers are always RAZ/WI. The corresponding
 * functionality is replaced by GICR_ICFGR<n>, so we don't need to sync
 * them. Increase the offset to skip the GIC_INTERNAL irqs. This matches
 * the for_each_dist_irq_reg() macro, which also skips the first
 * GIC_INTERNAL irqs.
 */
offset += (GIC_INTERNAL * 2) / 8;
for_each_dist_irq_reg(irq, s->num_irq, 2) {
kvm_gicd_access(s, offset, &reg, false);
reg = half_unshuffle32(reg >> 1);
@ -181,6 +203,14 @@ static void kvm_dist_put_edge_trigger(GICv3State *s, uint32_t offset,
uint32_t reg;
int irq;
/* For the KVM GICv3, affinity routing is always enabled, and the first 2
 * GICD_ICFGR<n> registers are always RAZ/WI. The corresponding
 * functionality is replaced by GICR_ICFGR<n>, so we don't need to sync
 * them. Increase the offset to skip the GIC_INTERNAL irqs. This matches
 * the for_each_dist_irq_reg() macro, which also skips the first
 * GIC_INTERNAL irqs.
 */
offset += (GIC_INTERNAL * 2) / 8;
for_each_dist_irq_reg(irq, s->num_irq, 2) {
reg = *gic_bmp_ptr32(bmp, irq);
if (irq % 32 != 0) {
@ -222,6 +252,15 @@ static void kvm_dist_getbmp(GICv3State *s, uint32_t offset, uint32_t *bmp)
uint32_t reg;
int irq;
/* For the KVM GICv3, affinity routing is always enabled, and the
 * GICD_IGROUPR0/GICD_IGRPMODR0/GICD_ISENABLER0/GICD_ISPENDR0/
 * GICD_ISACTIVER0 registers are always RAZ/WI. The corresponding
 * functionality is replaced by the GICR registers, so we don't need to
 * sync them. Increase the offset to skip the GIC_INTERNAL irqs. This
 * matches the for_each_dist_irq_reg() macro, which also skips the first
 * GIC_INTERNAL irqs.
 */
offset += (GIC_INTERNAL * 1) / 8;
for_each_dist_irq_reg(irq, s->num_irq, 1) {
kvm_gicd_access(s, offset, &reg, false);
*gic_bmp_ptr32(bmp, irq) = reg;
@ -235,6 +274,19 @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
uint32_t reg;
int irq;
/* For the KVM GICv3, affinity routing is always enabled, and the
 * GICD_IGROUPR0/GICD_IGRPMODR0/GICD_ISENABLER0/GICD_ISPENDR0/
 * GICD_ISACTIVER0 registers are always RAZ/WI. The corresponding
 * functionality is replaced by the GICR registers, so we don't need to
 * sync them. Increase the offset and clroffset to skip the GIC_INTERNAL
 * irqs. This matches the for_each_dist_irq_reg() macro, which also skips
 * the first GIC_INTERNAL irqs.
 */
offset += (GIC_INTERNAL * 1) / 8;
if (clroffset != 0) {
clroffset += (GIC_INTERNAL * 1) / 8;
}
for_each_dist_irq_reg(irq, s->num_irq, 1) {
/* If this bitmap is a set/clear register pair, first write to the
* clear-reg to clear all bits before using the set-reg to write
@ -243,6 +295,7 @@ static void kvm_dist_putbmp(GICv3State *s, uint32_t offset,
if (clroffset != 0) {
reg = 0;
kvm_gicd_access(s, clroffset, &reg, true);
clroffset += 4;
}
reg = *gic_bmp_ptr32(bmp, irq);
kvm_gicd_access(s, offset, &reg, true);


@ -43,7 +43,7 @@ static void isa_superio_realize(DeviceState *dev, Error **errp)
if (!k->parallel.is_enabled || k->parallel.is_enabled(sio, i)) {
/* FIXME use a qdev chardev prop instead of parallel_hds[] */
chr = parallel_hds[i];
if (chr == NULL || chr->be) {
if (chr == NULL) {
name = g_strdup_printf("discarding-parallel%d", i);
chr = qemu_chr_new(name, "null");
} else {
@ -81,9 +81,9 @@ static void isa_superio_realize(DeviceState *dev, Error **errp)
break;
}
if (!k->serial.is_enabled || k->serial.is_enabled(sio, i)) {
/* FIXME use a qdev chardev prop instead of serial_hds[] */
/* FIXME use a qdev chardev prop instead of serial_hd() */
chr = serial_hds[i];
if (chr == NULL || chr->be) {
if (chr == NULL) {
name = g_strdup_printf("discarding-serial%d", i);
chr = qemu_chr_new(name, "null");
} else {


@ -2392,6 +2392,7 @@ static void spapr_machine_init(MachineState *machine)
long load_limit, fw_size;
char *filename;
Error *resize_hpt_err = NULL;
PowerPCCPU *first_ppc_cpu;
msi_nonbroken = true;
@ -2484,11 +2485,6 @@ static void spapr_machine_init(MachineState *machine)
}
spapr_ovec_set(spapr->ov5, OV5_FORM1_AFFINITY);
if (!kvm_enabled() || kvmppc_has_cap_mmu_radix()) {
/* KVM and TCG always allow GTSE with radix... */
spapr_ovec_set(spapr->ov5, OV5_MMU_RADIX_GTSE);
}
/* ... but not with hash (currently). */
/* advertise support for dedicated HP event source to guests */
if (spapr->use_hotplug_event_source) {
@ -2503,6 +2499,15 @@ static void spapr_machine_init(MachineState *machine)
/* init CPUs */
spapr_init_cpus(spapr);
first_ppc_cpu = POWERPC_CPU(first_cpu);
if ((!kvm_enabled() || kvmppc_has_cap_mmu_radix()) &&
ppc_check_compat(first_ppc_cpu, CPU_POWERPC_LOGICAL_3_00, 0,
spapr->max_compat_pvr)) {
/* KVM and TCG always allow GTSE with radix... */
spapr_ovec_set(spapr->ov5, OV5_MMU_RADIX_GTSE);
}
/* ... but not with hash (currently). */
if (kvm_enabled()) {
/* Enable H_LOGICAL_CI_* so SLOF can talk to in-kernel devices */
kvmppc_enable_logical_ci_hcalls();


@ -41,17 +41,20 @@
} while (0)
static uint64_t fromhost_addr, tohost_addr;
static int address_symbol_set;
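/* Editor's note: bit 0 is set once "fromhost" is seen, bit 1 once
 * "tohost" is seen; address_symbol_set == 3 (checked in htif_mm_init()
 * below) therefore means both symbols were resolved. */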
void htif_symbol_callback(const char *st_name, int st_info, uint64_t st_value,
uint64_t st_size)
uint64_t st_size)
{
if (strcmp("fromhost", st_name) == 0) {
address_symbol_set |= 1;
fromhost_addr = st_value;
if (st_size != 8) {
error_report("HTIF fromhost must be 8 bytes");
exit(1);
}
} else if (strcmp("tohost", st_name) == 0) {
address_symbol_set |= 2;
tohost_addr = st_value;
if (st_size != 8) {
error_report("HTIF tohost must be 8 bytes");
@ -248,10 +251,11 @@ HTIFState *htif_mm_init(MemoryRegion *address_space, MemoryRegion *main_mem,
qemu_chr_fe_init(&s->chr, chr, &error_abort);
qemu_chr_fe_set_handlers(&s->chr, htif_can_recv, htif_recv, htif_event,
htif_be_change, s, NULL, true);
if (base) {
if (address_symbol_set == 3) {
memory_region_init_io(&s->mmio, NULL, &htif_mm_ops, s,
TYPE_HTIF_UART, size);
memory_region_add_subregion(address_space, base, &s->mmio);
TYPE_HTIF_UART, size);
memory_region_add_subregion_overlap(address_space, base,
&s->mmio, 1);
}
return s;


@ -250,9 +250,9 @@ static void riscv_sifive_u_init(MachineState *machine)
/* boot rom */
memory_region_init_ram(boot_rom, NULL, "riscv.sifive.u.mrom",
memmap[SIFIVE_U_MROM].base, &error_fatal);
memory_region_set_readonly(boot_rom, true);
memory_region_add_subregion(sys_memory, 0x0, boot_rom);
memmap[SIFIVE_U_MROM].size, &error_fatal);
memory_region_add_subregion(sys_memory, memmap[SIFIVE_U_MROM].base,
boot_rom);
if (machine->kernel_filename) {
load_kernel(machine->kernel_filename);
@ -282,6 +282,7 @@ static void riscv_sifive_u_init(MachineState *machine)
qemu_fdt_dumpdtb(s->fdt, s->fdt_size);
cpu_physical_memory_write(memmap[SIFIVE_U_MROM].base +
sizeof(reset_vec), s->fdt, s->fdt_size);
memory_region_set_readonly(boot_rom, true);
/* MMIO */
s->plic = sifive_plic_create(memmap[SIFIVE_U_PLIC].base,


@ -40,6 +40,13 @@ static Property ccw_device_properties[] = {
DEFINE_PROP_END_OF_LIST(),
};
static void ccw_device_reset(DeviceState *d)
{
CcwDevice *ccw_dev = CCW_DEVICE(d);
css_reset_sch(ccw_dev->sch);
}
static void ccw_device_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
@ -48,6 +55,7 @@ static void ccw_device_class_init(ObjectClass *klass, void *data)
k->realize = ccw_device_realize;
k->refill_ids = ccw_device_refill_ids;
dc->props = ccw_device_properties;
dc->reset = ccw_device_reset;
}
const VMStateDescription vmstate_ccw_dev = {


@ -616,6 +616,14 @@ void css_inject_io_interrupt(SubchDev *sch)
void css_conditional_io_interrupt(SubchDev *sch)
{
/*
* If the subchannel is not enabled, it is not made status pending
* (see PoP p. 16-17, "Status Control").
*/
if (!(sch->curr_status.pmcw.flags & PMCW_FLAGS_MASK_ENA)) {
return;
}
/*
* If the subchannel is not currently status pending, make it pending
* with alert status.


@ -319,6 +319,7 @@ static void sclp_memory_init(SCLPDevice *sclp)
initial_mem = initial_mem >> increment_size << increment_size;
machine->ram_size = initial_mem;
machine->maxram_size = initial_mem;
/* let's propagate the changed ram size into the global variable. */
ram_size = initial_mem;
}


@ -1058,10 +1058,12 @@ static void virtio_ccw_reset(DeviceState *d)
{
VirtioCcwDevice *dev = VIRTIO_CCW_DEVICE(d);
VirtIODevice *vdev = virtio_bus_get_device(&dev->bus);
CcwDevice *ccw_dev = CCW_DEVICE(d);
VirtIOCCWDeviceClass *vdc = VIRTIO_CCW_DEVICE_GET_CLASS(dev);
virtio_ccw_reset_virtio(dev, vdev);
css_reset_sch(ccw_dev->sch);
if (vdc->parent_reset) {
vdc->parent_reset(d);
}
}
static void virtio_ccw_vmstate_change(DeviceState *d, bool running)
@ -1345,7 +1347,6 @@ static void virtio_ccw_net_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_net_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_net_properties;
set_bit(DEVICE_CATEGORY_NETWORK, dc->categories);
}
@ -1373,7 +1374,6 @@ static void virtio_ccw_blk_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_blk_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_blk_properties;
set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
}
@ -1401,7 +1401,6 @@ static void virtio_ccw_serial_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_serial_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_serial_properties;
set_bit(DEVICE_CATEGORY_INPUT, dc->categories);
}
@ -1429,7 +1428,6 @@ static void virtio_ccw_balloon_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_balloon_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_balloon_properties;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
}
@ -1457,7 +1455,6 @@ static void virtio_ccw_scsi_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_scsi_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_scsi_properties;
set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
}
@ -1484,7 +1481,6 @@ static void vhost_ccw_scsi_class_init(ObjectClass *klass, void *data)
k->realize = vhost_ccw_scsi_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = vhost_ccw_scsi_properties;
set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
}
@ -1521,7 +1517,6 @@ static void virtio_ccw_rng_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_rng_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_rng_properties;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
}
@ -1559,7 +1554,6 @@ static void virtio_ccw_crypto_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_crypto_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_crypto_properties;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
}
@ -1597,7 +1591,6 @@ static void virtio_ccw_gpu_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_gpu_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_gpu_properties;
dc->hotpluggable = false;
set_bit(DEVICE_CATEGORY_DISPLAY, dc->categories);
@ -1626,7 +1619,6 @@ static void virtio_ccw_input_class_init(ObjectClass *klass, void *data)
k->realize = virtio_ccw_input_realize;
k->unrealize = virtio_ccw_unrealize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_input_properties;
set_bit(DEVICE_CATEGORY_INPUT, dc->categories);
}
@ -1725,11 +1717,13 @@ static void virtio_ccw_device_class_init(ObjectClass *klass, void *data)
{
DeviceClass *dc = DEVICE_CLASS(klass);
CCWDeviceClass *k = CCW_DEVICE_CLASS(dc);
VirtIOCCWDeviceClass *vdc = VIRTIO_CCW_DEVICE_CLASS(klass);
k->unplug = virtio_ccw_busdev_unplug;
dc->realize = virtio_ccw_busdev_realize;
dc->unrealize = virtio_ccw_busdev_unrealize;
dc->bus_type = TYPE_VIRTUAL_CSS_BUS;
device_class_set_parent_reset(dc, virtio_ccw_reset, &vdc->parent_reset);
}
static const TypeInfo virtio_ccw_device_info = {
@ -1806,7 +1800,6 @@ static void virtio_ccw_9p_class_init(ObjectClass *klass, void *data)
k->unrealize = virtio_ccw_unrealize;
k->realize = virtio_ccw_9p_realize;
dc->reset = virtio_ccw_reset;
dc->props = virtio_ccw_9p_properties;
set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
}
@ -1856,7 +1849,6 @@ static void vhost_vsock_ccw_class_init(ObjectClass *klass, void *data)
k->unrealize = virtio_ccw_unrealize;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
dc->props = vhost_vsock_ccw_properties;
dc->reset = virtio_ccw_reset;
}
static void vhost_vsock_ccw_instance_init(Object *obj)


@ -77,6 +77,7 @@ typedef struct VirtIOCCWDeviceClass {
CCWDeviceClass parent_class;
void (*realize)(VirtioCcwDevice *dev, Error **errp);
void (*unrealize)(VirtioCcwDevice *dev, Error **errp);
void (*parent_reset)(DeviceState *dev);
} VirtIOCCWDeviceClass;
/* Performance improves when virtqueue kick processing is decoupled from the


@ -345,7 +345,7 @@ static void passthru_realize(CCIDCardState *base, Error **errp)
card->vscard_in_pos = 0;
card->vscard_in_hdr = 0;
if (qemu_chr_fe_backend_connected(&card->cs)) {
error_setg(errp, "ccid-card-passthru: initing chardev");
DPRINTF(card, D_INFO, "ccid-card-passthru: initing chardev");
qemu_chr_fe_set_handlers(&card->cs,
ccid_card_vscard_can_read,
ccid_card_vscard_read,


@ -1017,12 +1017,16 @@ static MTPData *usb_mtp_get_object(MTPState *s, MTPControl *c,
static MTPData *usb_mtp_get_partial_object(MTPState *s, MTPControl *c,
MTPObject *o)
{
MTPData *d = usb_mtp_data_alloc(c);
MTPData *d;
off_t offset;
if (c->argc <= 2) {
return NULL;
}
trace_usb_mtp_op_get_partial_object(s->dev.addr, o->handle, o->path,
c->argv[1], c->argv[2]);
d = usb_mtp_data_alloc(c);
d->fd = open(o->path, O_RDONLY);
if (d->fd == -1) {
usb_mtp_data_free(d);


@ -329,8 +329,8 @@ static const uint8_t qemu_ccid_descriptor[] = {
*/
0x07, /* u8 bVoltageSupport; 01h - 5.0v, 02h - 3.0, 03 - 1.8 */
0x00, 0x00, /* u32 dwProtocols; RRRR PPPP. RRRR = 0000h.*/
0x01, 0x00, /* PPPP: 0001h = Protocol T=0, 0002h = Protocol T=1 */
0x01, 0x00, /* u32 dwProtocols; RRRR PPPP. RRRR = 0000h.*/
0x00, 0x00, /* PPPP: 0001h = Protocol T=0, 0002h = Protocol T=1 */
/* u32 dwDefaultClock; in kHZ (0x0fa0 is 4 MHz) */
0xa0, 0x0f, 0x00, 0x00,
/* u32 dwMaximumClock; */

View File

@ -247,7 +247,11 @@ static int usb_host_init(void)
if (rc != 0) {
return -1;
}
#if LIBUSB_API_VERSION >= 0x01000106
libusb_set_option(ctx, LIBUSB_OPTION_LOG_LEVEL, loglevel);
#else
libusb_set_debug(ctx, loglevel);
#endif
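/* Editor's note: LIBUSB_API_VERSION 0x01000106 corresponds to libusb
 * 1.0.22, the release that introduced libusb_set_option() and
 * deprecated libusb_set_debug(). */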
#ifdef CONFIG_WIN32
/* FIXME: add support for Windows. */
#else

View File

@ -795,7 +795,7 @@ static void usbredir_handle_bulk_data(USBRedirDevice *dev, USBPacket *p,
usbredirparser_peer_has_cap(dev->parser,
usb_redir_cap_32bits_bulk_length));
if (ep & USB_DIR_IN) {
if (ep & USB_DIR_IN || size == 0) {
usbredirparser_send_bulk_packet(dev->parser, p->id,
&bulk_packet, NULL, 0);
} else {


@ -3154,7 +3154,7 @@ static Property vfio_pci_dev_properties[] = {
DEFINE_PROP_PCI_HOST_DEVADDR("host", VFIOPCIDevice, host),
DEFINE_PROP_STRING("sysfsdev", VFIOPCIDevice, vbasedev.sysfsdev),
DEFINE_PROP_ON_OFF_AUTO("display", VFIOPCIDevice,
display, ON_OFF_AUTO_AUTO),
display, ON_OFF_AUTO_OFF),
DEFINE_PROP_UINT32("x-intx-mmap-timeout-ms", VFIOPCIDevice,
intx.mmap_timeout, 1100),
DEFINE_PROP_BIT("x-vga", VFIOPCIDevice, features,


@ -156,6 +156,19 @@ static void check_rate_limit(void *opaque)
vrng->activate_timer = true;
}
static void virtio_rng_set_status(VirtIODevice *vdev, uint8_t status)
{
VirtIORNG *vrng = VIRTIO_RNG(vdev);
if (!vdev->vm_running) {
return;
}
vdev->status = status;
/* Something changed, try to process buffers */
virtio_rng_process(vrng);
}
static void virtio_rng_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@ -261,6 +274,7 @@ static void virtio_rng_class_init(ObjectClass *klass, void *data)
vdc->realize = virtio_rng_device_realize;
vdc->unrealize = virtio_rng_device_unrealize;
vdc->get_features = get_features;
vdc->set_status = virtio_rng_set_status;
}
static const TypeInfo virtio_rng_info = {


@ -400,6 +400,7 @@ bool bdrv_is_read_only(BlockDriverState *bs);
int bdrv_can_set_read_only(BlockDriverState *bs, bool read_only,
bool ignore_allow_rdw, Error **errp);
int bdrv_set_read_only(BlockDriverState *bs, bool read_only, Error **errp);
bool bdrv_is_writable(BlockDriverState *bs);
bool bdrv_is_sg(BlockDriverState *bs);
bool bdrv_is_inserted(BlockDriverState *bs);
void bdrv_lock_medium(BlockDriverState *bs, bool locked);


@ -27,6 +27,7 @@
#include "hw/i386/ioapic.h"
#include "hw/pci/msi.h"
#include "hw/sysbus.h"
#include "qemu/iova-tree.h"
#define TYPE_INTEL_IOMMU_DEVICE "intel-iommu"
#define INTEL_IOMMU_DEVICE(obj) \
@ -67,7 +68,6 @@ typedef union VTD_IR_TableEntry VTD_IR_TableEntry;
typedef union VTD_IR_MSIAddress VTD_IR_MSIAddress;
typedef struct VTDIrq VTDIrq;
typedef struct VTD_MSIMessage VTD_MSIMessage;
typedef struct IntelIOMMUNotifierNode IntelIOMMUNotifierNode;
/* Context-Entry */
struct VTDContextEntry {
@ -93,6 +93,10 @@ struct VTDAddressSpace {
MemoryRegion iommu_ir; /* Interrupt region: 0xfeeXXXXX */
IntelIOMMUState *iommu_state;
VTDContextCacheEntry context_cache_entry;
QLIST_ENTRY(VTDAddressSpace) next;
/* Superset of notifier flags that this address space has */
IOMMUNotifierFlag notifier_flags;
IOVATree *iova_tree; /* Traces mapped IOVA ranges */
};
struct VTDBus {
@ -253,11 +257,6 @@ struct VTD_MSIMessage {
/* When IR is enabled, all MSI/MSI-X data bits should be zero */
#define VTD_IR_MSI_DATA (0)
struct IntelIOMMUNotifierNode {
VTDAddressSpace *vtd_as;
QLIST_ENTRY(IntelIOMMUNotifierNode) next;
};
/* The iommu (DMAR) device state struct */
struct IntelIOMMUState {
X86IOMMUState x86_iommu;
@ -295,7 +294,7 @@ struct IntelIOMMUState {
GHashTable *vtd_as_by_busptr; /* VTDBus objects indexed by PCIBus* reference */
VTDBus *vtd_as_by_bus_num[VTD_PCI_BUS_MAX]; /* VTDBus objects indexed by bus number */
/* list of registered notifiers */
QLIST_HEAD(, IntelIOMMUNotifierNode) notifiers_list;
QLIST_HEAD(, VTDAddressSpace) vtd_as_with_notifiers;
/* interrupt remapping */
bool intr_enabled; /* Whether guest enabled IR */
@ -305,6 +304,12 @@ struct IntelIOMMUState {
OnOffAuto intr_eim; /* Toggle for EIM capability */
bool buggy_eim; /* Force buggy EIM unless eim=off */
uint8_t aw_bits; /* Host/IOVA address width (in bits) */
/*
* Protects IOMMU states in general. Currently it protects the
* per-IOMMU IOTLB cache, and context entry cache in VTDAddressSpace.
*/
QemuMutex iommu_lock;
};
/* Find the VTD Address space associated with the given bus pointer,


@ -217,6 +217,7 @@ struct GICv3State {
uint32_t revision;
bool security_extn;
bool irq_reset_nonsecure;
bool gicd_no_migration_shift_bug;
int dev_fd; /* kvm device fd if backed by kvm vgic support */
Error *migration_blocker;


@ -47,7 +47,7 @@ void qmp_enable_command(QmpCommandList *cmds, const char *name);
bool qmp_command_is_enabled(const QmpCommand *cmd);
const char *qmp_command_name(const QmpCommand *cmd);
bool qmp_has_success_response(const QmpCommand *cmd);
QObject *qmp_build_error_object(Error *err);
QDict *qmp_error_response(Error *err);
QDict *qmp_dispatch_check_obj(const QObject *request, Error **errp);
bool qmp_is_oob(QDict *dict);


@ -19,6 +19,8 @@ QObject *qobject_from_jsonf(const char *string, ...) GCC_FMT_ATTR(1, 2);
QObject *qobject_from_jsonv(const char *string, va_list *ap, Error **errp)
GCC_FMT_ATTR(1, 0);
QDict *qdict_from_jsonf_nofail(const char *string, ...) GCC_FMT_ATTR(1, 2);
QString *qobject_to_json(const QObject *obj);
QString *qobject_to_json_pretty(const QObject *obj);


@ -0,0 +1,134 @@
/*
* A very simplified iova tree implementation based on GTree.
*
* Copyright 2018 Red Hat, Inc.
*
* Authors:
* Peter Xu <peterx@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
*/
#ifndef IOVA_TREE_H
#define IOVA_TREE_H
/*
* Currently the iova tree only keeps range information; no extra
* user data is allowed for each element. A benefit is that we can
* merge adjacent ranges internally within the tree, which can save a
* lot of memory when the ranges are split but mostly contiguous.
*
* Note that the current implementation does not provide any thread
* protection. Callers of the iova tree are responsible for thread
* safety.
*/
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "exec/hwaddr.h"
#define IOVA_OK (0)
#define IOVA_ERR_INVALID (-1) /* Invalid parameters */
#define IOVA_ERR_OVERLAP (-2) /* IOVA range overlapped */
typedef struct IOVATree IOVATree;
typedef struct DMAMap {
hwaddr iova;
hwaddr translated_addr;
hwaddr size; /* Inclusive */
IOMMUAccessFlags perm;
} QEMU_PACKED DMAMap;
typedef gboolean (*iova_tree_iterator)(DMAMap *map);
/**
* iova_tree_new:
*
* Create a new iova tree.
*
* Returns: the tree pointer on success, or NULL on error.
*/
IOVATree *iova_tree_new(void);
/**
* iova_tree_insert:
*
* @tree: the iova tree to insert
* @map: the mapping to insert
*
* Insert an iova range into the tree. If the range overlaps an
* existing one, IOVA_ERR_OVERLAP will be returned.
*
* Return: 0 on success, or <0 on error.
*/
int iova_tree_insert(IOVATree *tree, DMAMap *map);
/**
* iova_tree_remove:
*
* @tree: the iova tree to remove range from
* @map: the map range to remove
*
* Remove mappings from the tree that are covered by the map range
* provided. The range does not need to be exactly what was inserted;
* all the mappings that are included in the provided range will be
* removed from the tree. Here map->translated_addr is meaningless.
*
* Return: 0 on success, or <0 on error.
*/
int iova_tree_remove(IOVATree *tree, DMAMap *map);
/**
* iova_tree_find:
*
* @tree: the iova tree to search from
* @map: the mapping to search
*
* Search for a mapping in the iova tree that overlaps with the
* mapping range specified. Only the first found mapping will be
* returned.
*
* Return: DMAMap pointer if found, or NULL if not found. Note that
* the returned DMAMap pointer is maintained internally. Users should
* only read the content and never modify or free it. Also, users are
* responsible for making sure the pointer is valid (i.e., no
* concurrent deletion in progress).
*/
DMAMap *iova_tree_find(IOVATree *tree, DMAMap *map);
/**
* iova_tree_find_address:
*
* @tree: the iova tree to search from
* @iova: the iova address to find
*
* Similar to iova_tree_find(), but it tries to find mapping with
* range iova=iova & size=0.
*
* Return: same as iova_tree_find().
*/
DMAMap *iova_tree_find_address(IOVATree *tree, hwaddr iova);
/**
* iova_tree_foreach:
*
* @tree: the iova tree to iterate on
* @iterator: the iterator for the mappings; return true to stop
*
* Iterate over the iova tree.
*
* Return: None.
*/
void iova_tree_foreach(IOVATree *tree, iova_tree_iterator iterator);
/**
* iova_tree_destroy:
*
* @tree: the iova tree to destroy
*
* Destroy an existing iova tree.
*
* Return: None.
*/
void iova_tree_destroy(IOVATree *tree);
#endif
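Editor's note: a minimal usage sketch of the IOVATree API above, not part of
the patch. It assumes only the declarations in this header plus IOMMU_RW from
"exec/memory.h" (already included here):

    #include "qemu/iova-tree.h"

    static void iova_tree_example(void)
    {
        IOVATree *tree = iova_tree_new();
        DMAMap map = {
            .iova = 0x1000,
            .size = 0xfff,               /* inclusive: covers 0x1000-0x1fff */
            .translated_addr = 0x40000,
            .perm = IOMMU_RW,
        };

        if (iova_tree_insert(tree, &map) == IOVA_OK) {
            /* Find the mapping covering iova 0x1800 */
            DMAMap *found = iova_tree_find_address(tree, 0x1800);
            assert(found && found->translated_addr == 0x40000);
            iova_tree_remove(tree, &map);
        }
        iova_tree_destroy(tree);
    }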


@ -600,6 +600,7 @@ static int dirty_bitmap_load_bits(QEMUFile *f, DirtyBitmapLoadState *s)
ret = qemu_get_buffer(f, buf, buf_size);
if (ret != buf_size) {
error_report("Failed to read bitmap bits");
g_free(buf);
return -EIO;
}
@ -671,6 +672,9 @@ static int dirty_bitmap_load(QEMUFile *f, void *opaque, int version_id)
do {
ret = dirty_bitmap_load_header(f, &s);
if (ret < 0) {
return ret;
}
if (s.flags & DIRTY_BITMAP_MIG_FLAG_START) {
ret = dirty_bitmap_load_start(f, &s);


@ -4036,14 +4036,9 @@ static int monitor_can_read(void *opaque)
static void monitor_qmp_respond(Monitor *mon, QObject *rsp,
Error *err, QObject *id)
{
QDict *qdict = NULL;
if (err) {
assert(!rsp);
qdict = qdict_new();
qdict_put_obj(qdict, "error", qmp_build_error_object(err));
error_free(err);
rsp = QOBJECT(qdict);
rsp = QOBJECT(qmp_error_response(err));
}
if (rsp) {


@ -435,8 +435,8 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
}
be32_to_cpus(&info->min_block);
if (!is_power_of_2(info->min_block)) {
error_setg(errp, "server minimum block size %" PRId32
"is not a power of two", info->min_block);
error_setg(errp, "server minimum block size %" PRIu32
" is not a power of two", info->min_block);
nbd_send_opt_abort(ioc);
return -1;
}
@ -450,8 +450,8 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
be32_to_cpus(&info->opt_block);
if (!is_power_of_2(info->opt_block) ||
info->opt_block < info->min_block) {
error_setg(errp, "server preferred block size %" PRId32
"is not valid", info->opt_block);
error_setg(errp, "server preferred block size %" PRIu32
" is not valid", info->opt_block);
nbd_send_opt_abort(ioc);
return -1;
}
@ -462,6 +462,12 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
return -1;
}
be32_to_cpus(&info->max_block);
if (info->max_block < info->min_block) {
error_setg(errp, "server maximum block size %" PRIu32
" is not valid", info->max_block);
nbd_send_opt_abort(ioc);
return -1;
}
trace_nbd_opt_go_info_block_size(info->min_block, info->opt_block,
info->max_block);
break;
@ -613,8 +619,8 @@ static int nbd_negotiate_simple_meta_context(QIOChannel *ioc,
{
int ret;
NBDOptionReply reply;
uint32_t received_id;
bool received;
uint32_t received_id = 0;
bool received = false;
uint32_t export_len = strlen(export);
uint32_t context_len = strlen(context);
uint32_t data_len = sizeof(export_len) + export_len +


@ -2007,6 +2007,10 @@ static coroutine_fn int nbd_handle_request(NBDClient *client,
"discard failed", errp);
case NBD_CMD_BLOCK_STATUS:
if (!request->len) {
return nbd_send_generic_reply(client, request->handle, -EINVAL,
"need non-zero length", errp);
}
if (client->export_meta.valid && client->export_meta.base_allocation) {
return nbd_co_send_block_status(client, request->handle,
blk_bs(exp->blk), request->from,


@ -40,6 +40,7 @@
#include "qemu-common.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/sockets.h"
#include "net/tap.h"
@ -693,6 +694,7 @@ static void net_init_tap_one(const NetdevTapOptions *tap, NetClientState *peer,
}
return;
}
qemu_set_nonblock(vhostfd);
} else {
vhostfd = open("/dev/vhost-net", O_RDWR);
if (vhostfd < 0) {
@ -803,7 +805,8 @@ int net_init_tap(const Netdev *netdev, const char *name,
} else if (tap->has_fds) {
char **fds;
char **vhost_fds;
int nfds, nvhosts;
int nfds = 0, nvhosts = 0;
int ret = 0;
if (tap->has_ifname || tap->has_script || tap->has_downscript ||
tap->has_vnet_hdr || tap->has_helper || tap->has_queues ||
@ -823,6 +826,7 @@ int net_init_tap(const Netdev *netdev, const char *name,
if (nfds != nvhosts) {
error_setg(errp, "The number of fds passed does not match "
"the number of vhostfds passed");
ret = -1;
goto free_fail;
}
}
@ -831,6 +835,7 @@ int net_init_tap(const Netdev *netdev, const char *name,
fd = monitor_fd_param(cur_mon, fds[i], &err);
if (fd == -1) {
error_propagate(errp, err);
ret = -1;
goto free_fail;
}
@ -841,6 +846,7 @@ int net_init_tap(const Netdev *netdev, const char *name,
} else if (vnet_hdr != tap_probe_vnet_hdr(fd)) {
error_setg(errp,
"vnet_hdr not consistent across given tap fds");
ret = -1;
goto free_fail;
}
@ -850,21 +856,21 @@ int net_init_tap(const Netdev *netdev, const char *name,
vnet_hdr, fd, &err);
if (err) {
error_propagate(errp, err);
ret = -1;
goto free_fail;
}
}
g_free(fds);
g_free(vhost_fds);
return 0;
free_fail:
for (i = 0; i < nvhosts; i++) {
g_free(vhost_fds[i]);
}
for (i = 0; i < nfds; i++) {
g_free(fds[i]);
g_free(vhost_fds[i]);
}
g_free(fds);
g_free(vhost_fds);
return -1;
return ret;
} else if (tap->has_helper) {
if (tap->has_ifname || tap->has_script || tap->has_downscript ||
tap->has_vnet_hdr || tap->has_queues || tap->has_vhostfds) {


@ -299,7 +299,7 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
s = DO_UPCAST(VhostUserState, nc, nc);
if (!qemu_chr_fe_init(&s->chr, chr, &err)) {
error_report_err(err);
return -1;
goto err;
}
}
@ -309,7 +309,7 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
do {
if (qemu_chr_fe_wait_connected(&s->chr, &err) < 0) {
error_report_err(err);
return -1;
goto err;
}
qemu_chr_fe_set_handlers(&s->chr, NULL, NULL,
net_vhost_user_event, NULL, nc0->name, NULL,
@ -319,6 +319,13 @@ static int net_vhost_user_init(NetClientState *peer, const char *device,
assert(s->vhost_net);
return 0;
err:
if (nc0) {
qemu_del_net_client(nc0);
}
return -1;
}
static Chardev *net_vhost_claim_chardev(

Binary file not shown.


@ -125,7 +125,7 @@ struct tpi_info {
__u32 reserved3 : 12;
__u32 int_type : 3;
__u32 reserved4 : 12;
} __attribute__ ((packed));
} __attribute__ ((packed, aligned(4)));
/* channel command word (type 1) */
struct ccw1 {


@ -101,10 +101,11 @@ static inline bool manage_iplb(IplParameterBlock *iplb, bool store)
{
register unsigned long addr asm("0") = (unsigned long) iplb;
register unsigned long rc asm("1") = 0;
unsigned long subcode = store ? 6 : 5;
asm volatile ("diag %0,%2,0x308\n"
: "+d" (addr), "+d" (rc)
: "d" (store ? 6 : 5)
: "d" (subcode)
: "memory", "cc");
return rc == 0x01;
}


@ -1172,6 +1172,9 @@
# @auto-dismiss: Job will dismiss itself when CONCLUDED, moving to the NULL
# state and disappearing from the query list. (since 2.12)
#
# @error: Error information if the job did not complete successfully.
# Not set if the job completed successfully. (since 2.12.1)
#
# Since: 1.1
##
{ 'struct': 'BlockJobInfo',
@ -1179,7 +1182,8 @@
'offset': 'int', 'busy': 'bool', 'paused': 'bool', 'speed': 'int',
'io-status': 'BlockDeviceIoStatus', 'ready': 'bool',
'status': 'BlockJobStatus',
'auto-finalize': 'bool', 'auto-dismiss': 'bool' } }
'auto-finalize': 'bool', 'auto-dismiss': 'bool',
'*error': 'str' } }
##
# @query-block-jobs:


@ -573,7 +573,7 @@
'mips': 'CpuInfoOther',
'tricore': 'CpuInfoOther',
's390': 'CpuInfoS390',
'riscv': 'CpuInfoRISCV',
'riscv': 'CpuInfoOther',
'other': 'CpuInfoOther' } }
##


@ -122,11 +122,15 @@ static QObject *do_qmp_dispatch(QmpCommandList *cmds, QObject *request,
return ret;
}
QObject *qmp_build_error_object(Error *err)
QDict *qmp_error_response(Error *err)
{
return qobject_from_jsonf("{ 'class': %s, 'desc': %s }",
QapiErrorClass_str(error_get_class(err)),
error_get_pretty(err));
QDict *rsp;
rsp = qdict_from_jsonf_nofail("{ 'error': { 'class': %s, 'desc': %s } }",
QapiErrorClass_str(error_get_class(err)),
error_get_pretty(err));
error_free(err);
return rsp;
}
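/*
 * Editor's note: for an error created with error_setg(&err, "it broke"),
 * the returned QDict serializes to:
 *   {"error": {"class": "GenericError", "desc": "it broke"}}
 */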
/*
@ -159,15 +163,13 @@ QObject *qmp_dispatch(QmpCommandList *cmds, QObject *request)
ret = do_qmp_dispatch(cmds, request, &err);
rsp = qdict_new();
if (err) {
qdict_put_obj(rsp, "error", qmp_build_error_object(err));
error_free(err);
rsp = qmp_error_response(err);
} else if (ret) {
rsp = qdict_new();
qdict_put_obj(rsp, "return", ret);
} else {
QDECREF(rsp);
return NULL;
rsp = NULL;
}
return QOBJECT(rsp);


@ -277,12 +277,12 @@ static BlockBackend *img_open_opts(const char *optstr,
options = qemu_opts_to_qdict(opts, NULL);
if (force_share) {
if (qdict_haskey(options, BDRV_OPT_FORCE_SHARE)
&& !qdict_get_bool(options, BDRV_OPT_FORCE_SHARE)) {
&& strcmp(qdict_get_str(options, BDRV_OPT_FORCE_SHARE), "on")) {
error_report("--force-share/-U conflicts with image options");
QDECREF(options);
return NULL;
}
qdict_put_bool(options, BDRV_OPT_FORCE_SHARE, true);
qdict_put_str(options, BDRV_OPT_FORCE_SHARE, "on");
}
blk = blk_new_open(NULL, NULL, options, flags, &local_err);
if (!blk) {
@ -1912,6 +1912,8 @@ static int convert_do_copy(ImgConvertState *s)
return s->ret;
}
#define MAX_BUF_SECTORS 32768
static int img_convert(int argc, char **argv)
{
int c, bs_i, flags, src_flags = 0;
@ -2008,8 +2010,12 @@ static int img_convert(int argc, char **argv)
int64_t sval;
sval = cvtnum(optarg);
if (sval < 0) {
error_report("Invalid minimum zero buffer size for sparse output specified");
if (sval < 0 || sval & (BDRV_SECTOR_SIZE - 1) ||
sval / BDRV_SECTOR_SIZE > MAX_BUF_SECTORS) {
error_report("Invalid buffer size for sparse output specified. "
"Valid sizes are multiples of %llu up to %llu. Select "
"0 to disable sparse detection (fully allocates output).",
BDRV_SECTOR_SIZE, MAX_BUF_SECTORS * BDRV_SECTOR_SIZE);
goto fail_getopt;
}
@ -2297,9 +2303,9 @@ static int img_convert(int argc, char **argv)
}
/* increase bufsectors from the default 4096 (2M) if opt_transfer
* or discard_alignment of the out_bs is greater. Limit to 32768 (16MB)
* as maximum. */
s.buf_sectors = MIN(32768,
* or discard_alignment of the out_bs is greater. Limit to
* MAX_BUF_SECTORS as maximum which is currently 32768 (16MB). */
s.buf_sectors = MIN(MAX_BUF_SECTORS,
MAX(s.buf_sectors,
MAX(out_bs->bl.opt_transfer >> BDRV_SECTOR_BITS,
out_bs->bl.pdiscard_alignment >>
@ -2827,7 +2833,7 @@ static int img_map(int argc, char **argv)
int64_t n;
/* Probe up to 1 GiB at a time. */
n = QEMU_ALIGN_DOWN(MIN(1 << 30, length - offset), BDRV_SECTOR_SIZE);
n = MIN(1 << 30, length - offset);
ret = get_block_status(bs, offset, n, &next);
if (ret < 0) {
@ -3191,6 +3197,9 @@ static int img_rebase(int argc, char **argv)
}
if (out_baseimg[0]) {
const char *overlay_filename;
char *out_real_path;
options = qdict_new();
if (out_basefmt) {
qdict_put_str(options, "driver", out_basefmt);
@ -3199,8 +3208,26 @@ static int img_rebase(int argc, char **argv)
qdict_put_bool(options, BDRV_OPT_FORCE_SHARE, true);
}
blk_new_backing = blk_new_open(out_baseimg, NULL,
overlay_filename = bs->exact_filename[0] ? bs->exact_filename
: bs->filename;
out_real_path = g_malloc(PATH_MAX);
bdrv_get_full_backing_filename_from_filename(overlay_filename,
out_baseimg,
out_real_path,
PATH_MAX,
&local_err);
if (local_err) {
error_reportf_err(local_err,
"Could not resolve backing filename: ");
ret = -1;
g_free(out_real_path);
goto out;
}
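/* Hedged illustration of the resolution semantics exercised by the
 * iotest further below: with the overlay at "subdir/t.qcow2" and the
 * relative new base "t.qcow2.base_new", the helper yields
 * "subdir/t.qcow2.base_new" -- relative backing names resolve against
 * the overlay's location, not the current working directory. */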
blk_new_backing = blk_new_open(out_real_path, NULL,
options, src_flags, &local_err);
g_free(out_real_path);
if (!blk_new_backing) {
error_reportf_err(local_err,
"Could not open new backing file '%s': ",


@ -95,12 +95,12 @@ static int openfile(char *name, int flags, bool writethrough, bool force_share,
opts = qdict_new();
}
if (qdict_haskey(opts, BDRV_OPT_FORCE_SHARE)
&& !qdict_get_bool(opts, BDRV_OPT_FORCE_SHARE)) {
&& strcmp(qdict_get_str(opts, BDRV_OPT_FORCE_SHARE), "on")) {
error_report("-U conflicts with image options");
QDECREF(opts);
return 1;
}
qdict_put_bool(opts, BDRV_OPT_FORCE_SHARE, true);
qdict_put_str(opts, BDRV_OPT_FORCE_SHARE, "on");
}
qemuio_blk = blk_new_open(name, NULL, opts, flags, &local_err);
if (!qemuio_blk) {


@ -600,46 +600,42 @@ static void process_command(GAState *s, QDict *req)
static void process_event(JSONMessageParser *parser, GQueue *tokens)
{
GAState *s = container_of(parser, GAState, parser);
QDict *qdict;
QObject *obj;
QDict *req, *rsp;
Error *err = NULL;
int ret;
g_assert(s && parser);
g_debug("process_event: called");
qdict = qobject_to(QDict, json_parser_parse_err(tokens, NULL, &err));
if (err || !qdict) {
QDECREF(qdict);
qdict = qdict_new();
if (!err) {
g_warning("failed to parse event: unknown error");
error_setg(&err, QERR_JSON_PARSING);
} else {
g_warning("failed to parse event: %s", error_get_pretty(err));
}
qdict_put_obj(qdict, "error", qmp_build_error_object(err));
error_free(err);
obj = json_parser_parse_err(tokens, NULL, &err);
if (err) {
goto err;
}
req = qobject_to(QDict, obj);
if (!req) {
error_setg(&err, QERR_JSON_PARSING);
goto err;
}
if (!qdict_haskey(req, "execute")) {
g_warning("unrecognized payload format");
error_setg(&err, QERR_UNSUPPORTED);
goto err;
}
/* handle host->guest commands */
if (qdict_haskey(qdict, "execute")) {
process_command(s, qdict);
} else {
if (!qdict_haskey(qdict, "error")) {
QDECREF(qdict);
qdict = qdict_new();
g_warning("unrecognized payload format");
error_setg(&err, QERR_UNSUPPORTED);
qdict_put_obj(qdict, "error", qmp_build_error_object(err));
error_free(err);
}
ret = send_response(s, QOBJECT(qdict));
if (ret < 0) {
g_warning("error sending error response: %s", strerror(-ret));
}
}
process_command(s, req);
qobject_decref(obj);
return;
QDECREF(qdict);
err:
g_warning("failed to parse event: %s", error_get_pretty(err));
rsp = qmp_error_response(err);
ret = send_response(s, QOBJECT(rsp));
if (ret < 0) {
g_warning("error sending error response: %s", strerror(-ret));
}
QDECREF(rsp);
qobject_decref(obj);
}
/* false return signals GAChannel to close the current client connection */


@ -76,6 +76,24 @@ QObject *qobject_from_jsonf(const char *string, ...)
return obj;
}
/*
* Parse @string as JSON object with %-escapes interpolated.
* Abort on error. Do not use with untrusted @string.
* Return the resulting QDict. It is never null.
*/
QDict *qdict_from_jsonf_nofail(const char *string, ...)
{
QDict *obj;
va_list ap;
va_start(ap, string);
obj = qobject_to(QDict, qobject_from_jsonv(string, &ap, &error_abort));
va_end(ap);
assert(obj);
return obj;
}
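/* Hedged usage sketch (literal chosen for illustration): the %-escapes
 * are interpolated before parsing, and the _nofail variant aborts on a
 * malformed string instead of returning NULL, so the caller needs no
 * error check -- but it must never be fed untrusted input:
 *
 *     QDict *d = qdict_from_jsonf_nofail("{ 'answer': %d }", 42);
 *     assert(qdict_get_int(d, "answer") == 42);
 */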
typedef struct ToJsonIterState
{
int indent;


@ -311,6 +311,8 @@ static void arm_cpu_reset(CPUState *s)
&env->vfp.fp_status);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.standard_fp_status);
set_float_detect_tininess(float_tininess_before_rounding,
&env->vfp.fp_status_f16);
#ifndef CONFIG_USER_ONLY
if (kvm_enabled()) {
kvm_arm_reset_vcpu(cpu);


@ -85,6 +85,16 @@ static inline uint32_t float_rel_to_flags(int res)
return flags;
}
uint64_t HELPER(vfp_cmph_a64)(float16 x, float16 y, void *fp_status)
{
return float_rel_to_flags(float16_compare_quiet(x, y, fp_status));
}
uint64_t HELPER(vfp_cmpeh_a64)(float16 x, float16 y, void *fp_status)
{
return float_rel_to_flags(float16_compare(x, y, fp_status));
}
uint64_t HELPER(vfp_cmps_a64)(float32 x, float32 y, void *fp_status)
{
return float_rel_to_flags(float32_compare_quiet(x, y, fp_status));


@ -19,6 +19,8 @@
DEF_HELPER_FLAGS_2(udiv64, TCG_CALL_NO_RWG_SE, i64, i64, i64)
DEF_HELPER_FLAGS_2(sdiv64, TCG_CALL_NO_RWG_SE, s64, s64, s64)
DEF_HELPER_FLAGS_1(rbit64, TCG_CALL_NO_RWG_SE, i64, i64)
DEF_HELPER_3(vfp_cmph_a64, i64, f16, f16, ptr)
DEF_HELPER_3(vfp_cmpeh_a64, i64, f16, f16, ptr)
DEF_HELPER_3(vfp_cmps_a64, i64, f32, f32, ptr)
DEF_HELPER_3(vfp_cmpes_a64, i64, f32, f32, ptr)
DEF_HELPER_3(vfp_cmpd_a64, i64, f64, f64, ptr)


@ -11409,11 +11409,94 @@ VFP_CONV_FIX_A64(sq, s, 32, 64, int64)
VFP_CONV_FIX(uh, s, 32, 32, uint16)
VFP_CONV_FIX(ul, s, 32, 32, uint32)
VFP_CONV_FIX_A64(uq, s, 32, 64, uint64)
VFP_CONV_FIX_A64(sl, h, 16, 32, int32)
VFP_CONV_FIX_A64(ul, h, 16, 32, uint32)
#undef VFP_CONV_FIX
#undef VFP_CONV_FIX_FLOAT
#undef VFP_CONV_FLOAT_FIX_ROUND
#undef VFP_CONV_FIX_A64
/* Conversion to/from f16 can overflow to infinity before/after scaling.
* Therefore we convert to f64, scale, and then convert f64 to f16; or
* vice versa for conversion to integer.
*
* For 16- and 32-bit integers, the conversion to f64 never rounds.
* For 64-bit integers, any integer that would cause rounding will also
* overflow to f16 infinity, so there is no double rounding problem.
*/
static float16 do_postscale_fp16(float64 f, int shift, float_status *fpst)
{
return float64_to_float16(float64_scalbn(f, -shift, fpst), true, fpst);
}
float16 HELPER(vfp_sltoh)(uint32_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(int32_to_float64(x, fpst), shift, fpst);
}
float16 HELPER(vfp_ultoh)(uint32_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(uint32_to_float64(x, fpst), shift, fpst);
}
float16 HELPER(vfp_sqtoh)(uint64_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(int64_to_float64(x, fpst), shift, fpst);
}
float16 HELPER(vfp_uqtoh)(uint64_t x, uint32_t shift, void *fpst)
{
return do_postscale_fp16(uint64_to_float64(x, fpst), shift, fpst);
}
static float64 do_prescale_fp16(float16 f, int shift, float_status *fpst)
{
if (unlikely(float16_is_any_nan(f))) {
float_raise(float_flag_invalid, fpst);
return 0;
} else {
int old_exc_flags = get_float_exception_flags(fpst);
float64 ret;
ret = float16_to_float64(f, true, fpst);
ret = float64_scalbn(ret, shift, fpst);
old_exc_flags |= get_float_exception_flags(fpst)
& float_flag_input_denormal;
set_float_exception_flags(old_exc_flags, fpst);
return ret;
}
}
uint32_t HELPER(vfp_toshh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_int16(do_prescale_fp16(x, shift, fpst), fpst);
}
uint32_t HELPER(vfp_touhh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_uint16(do_prescale_fp16(x, shift, fpst), fpst);
}
uint32_t HELPER(vfp_toslh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_int32(do_prescale_fp16(x, shift, fpst), fpst);
}
uint32_t HELPER(vfp_toulh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_uint32(do_prescale_fp16(x, shift, fpst), fpst);
}
uint64_t HELPER(vfp_tosqh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_int64(do_prescale_fp16(x, shift, fpst), fpst);
}
uint64_t HELPER(vfp_touqh)(float16 x, uint32_t shift, void *fpst)
{
return float64_to_uint64(do_prescale_fp16(x, shift, fpst), fpst);
}
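/* Hedged numeric check of the comment above: every int64 of magnitude
 * at most 2^53 converts to float64 exactly, and 2^53 is already far
 * beyond the float16 maximum of 65504, so any value large enough to
 * round at the f64 step overflows to f16 infinity regardless:
 *
 *     int64_t x = (1LL << 53) - 1;      // still exact as a double
 *     assert((int64_t)(double)x == x);  // conversion did not round
 *     assert((double)x > 65504.0);      // far beyond float16 max
 */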
/* Set the current fp rounding mode and return the old one.
* The argument is a softfloat float_round_ value.


@ -149,8 +149,12 @@ DEF_HELPER_3(vfp_toshd_round_to_zero, i64, f64, i32, ptr)
DEF_HELPER_3(vfp_tosld_round_to_zero, i64, f64, i32, ptr)
DEF_HELPER_3(vfp_touhd_round_to_zero, i64, f64, i32, ptr)
DEF_HELPER_3(vfp_tould_round_to_zero, i64, f64, i32, ptr)
DEF_HELPER_3(vfp_touhh, i32, f16, i32, ptr)
DEF_HELPER_3(vfp_toshh, i32, f16, i32, ptr)
DEF_HELPER_3(vfp_toulh, i32, f16, i32, ptr)
DEF_HELPER_3(vfp_toslh, i32, f16, i32, ptr)
DEF_HELPER_3(vfp_touqh, i64, f16, i32, ptr)
DEF_HELPER_3(vfp_tosqh, i64, f16, i32, ptr)
DEF_HELPER_3(vfp_toshs, i32, f32, i32, ptr)
DEF_HELPER_3(vfp_tosls, i32, f32, i32, ptr)
DEF_HELPER_3(vfp_tosqs, i64, f32, i32, ptr)
@ -177,6 +181,8 @@ DEF_HELPER_3(vfp_ultod, f64, i64, i32, ptr)
DEF_HELPER_3(vfp_uqtod, f64, i64, i32, ptr)
DEF_HELPER_3(vfp_sltoh, f16, i32, i32, ptr)
DEF_HELPER_3(vfp_ultoh, f16, i32, i32, ptr)
DEF_HELPER_3(vfp_sqtoh, f16, i64, i32, ptr)
DEF_HELPER_3(vfp_uqtoh, f16, i64, i32, ptr)
DEF_HELPER_FLAGS_2(set_rmode, TCG_CALL_NO_RWG, i32, i32, ptr)
DEF_HELPER_FLAGS_2(set_neon_rmode, TCG_CALL_NO_RWG, i32, i32, env)


@ -614,6 +614,14 @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg)
return v;
}
static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
{
TCGv_i32 v = tcg_temp_new_i32();
tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_16));
return v;
}
/* Clear the bits above an N-bit vector, for N = (is_q ? 128 : 64).
* If SVE is not enabled, then there are only 128 bits in the vector.
*/
@ -4461,14 +4469,14 @@ static void disas_data_proc_reg(DisasContext *s, uint32_t insn)
}
}
static void handle_fp_compare(DisasContext *s, bool is_double,
static void handle_fp_compare(DisasContext *s, int size,
unsigned int rn, unsigned int rm,
bool cmp_with_zero, bool signal_all_nans)
{
TCGv_i64 tcg_flags = tcg_temp_new_i64();
TCGv_ptr fpst = get_fpstatus_ptr(false);
TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
if (is_double) {
if (size == MO_64) {
TCGv_i64 tcg_vn, tcg_vm;
tcg_vn = read_fp_dreg(s, rn);
@ -4485,19 +4493,35 @@ static void handle_fp_compare(DisasContext *s, bool is_double,
tcg_temp_free_i64(tcg_vn);
tcg_temp_free_i64(tcg_vm);
} else {
TCGv_i32 tcg_vn, tcg_vm;
TCGv_i32 tcg_vn = tcg_temp_new_i32();
TCGv_i32 tcg_vm = tcg_temp_new_i32();
tcg_vn = read_fp_sreg(s, rn);
read_vec_element_i32(s, tcg_vn, rn, 0, size);
if (cmp_with_zero) {
tcg_vm = tcg_const_i32(0);
tcg_gen_movi_i32(tcg_vm, 0);
} else {
tcg_vm = read_fp_sreg(s, rm);
read_vec_element_i32(s, tcg_vm, rm, 0, size);
}
if (signal_all_nans) {
gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
} else {
gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
switch (size) {
case MO_32:
if (signal_all_nans) {
gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
} else {
gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
}
break;
case MO_16:
if (signal_all_nans) {
gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
} else {
gen_helper_vfp_cmph_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
}
break;
default:
g_assert_not_reached();
}
tcg_temp_free_i32(tcg_vn);
tcg_temp_free_i32(tcg_vm);
}
@ -4518,16 +4542,35 @@ static void handle_fp_compare(DisasContext *s, bool is_double,
static void disas_fp_compare(DisasContext *s, uint32_t insn)
{
unsigned int mos, type, rm, op, rn, opc, op2r;
int size;
mos = extract32(insn, 29, 3);
type = extract32(insn, 22, 2); /* 0 = single, 1 = double */
type = extract32(insn, 22, 2);
rm = extract32(insn, 16, 5);
op = extract32(insn, 14, 2);
rn = extract32(insn, 5, 5);
opc = extract32(insn, 3, 2);
op2r = extract32(insn, 0, 3);
if (mos || op || op2r || type > 1) {
if (mos || op || op2r) {
unallocated_encoding(s);
return;
}
switch (type) {
case 0:
size = MO_32;
break;
case 1:
size = MO_64;
break;
case 3:
size = MO_16;
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -4536,7 +4579,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
return;
}
handle_fp_compare(s, type, rn, rm, opc & 1, opc & 2);
handle_fp_compare(s, size, rn, rm, opc & 1, opc & 2);
}
/* Floating point conditional compare
@ -4550,16 +4593,35 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
unsigned int mos, type, rm, cond, rn, op, nzcv;
TCGv_i64 tcg_flags;
TCGLabel *label_continue = NULL;
int size;
mos = extract32(insn, 29, 3);
type = extract32(insn, 22, 2); /* 0 = single, 1 = double */
type = extract32(insn, 22, 2);
rm = extract32(insn, 16, 5);
cond = extract32(insn, 12, 4);
rn = extract32(insn, 5, 5);
op = extract32(insn, 4, 1);
nzcv = extract32(insn, 0, 4);
if (mos || type > 1) {
if (mos) {
unallocated_encoding(s);
return;
}
switch (type) {
case 0:
size = MO_32;
break;
case 1:
size = MO_64;
break;
case 3:
size = MO_16;
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -4580,7 +4642,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
gen_set_label(label_match);
}
handle_fp_compare(s, type, rn, rm, false, op);
handle_fp_compare(s, size, rn, rm, false, op);
if (cond < 0x0e) {
gen_set_label(label_continue);
@ -4598,15 +4660,34 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
unsigned int mos, type, rm, cond, rn, rd;
TCGv_i64 t_true, t_false, t_zero;
DisasCompare64 c;
TCGMemOp sz;
mos = extract32(insn, 29, 3);
type = extract32(insn, 22, 2); /* 0 = single, 1 = double */
type = extract32(insn, 22, 2);
rm = extract32(insn, 16, 5);
cond = extract32(insn, 12, 4);
rn = extract32(insn, 5, 5);
rd = extract32(insn, 0, 5);
if (mos || type > 1) {
if (mos) {
unallocated_encoding(s);
return;
}
switch (type) {
case 0:
sz = MO_32;
break;
case 1:
sz = MO_64;
break;
case 3:
sz = MO_16;
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -4615,11 +4696,11 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
return;
}
/* Zero extend sreg inputs to 64 bits now. */
/* Zero extend sreg & hreg inputs to 64 bits now. */
t_true = tcg_temp_new_i64();
t_false = tcg_temp_new_i64();
read_vec_element(s, t_true, rn, 0, type ? MO_64 : MO_32);
read_vec_element(s, t_false, rm, 0, type ? MO_64 : MO_32);
read_vec_element(s, t_true, rn, 0, sz);
read_vec_element(s, t_false, rm, 0, sz);
a64_test_cc(&c, cond);
t_zero = tcg_const_i64(0);
@ -4628,7 +4709,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
tcg_temp_free_i64(t_false);
a64_free_cc(&c);
/* Note that sregs write back zeros to the high bits,
/* Note that sregs & hregs write back zeros to the high bits,
and we've already done the zero-extension. */
write_fp_dreg(s, rd, t_true);
tcg_temp_free_i64(t_true);
@ -4638,11 +4719,9 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
{
TCGv_ptr fpst = NULL;
TCGv_i32 tcg_op = tcg_temp_new_i32();
TCGv_i32 tcg_op = read_fp_hreg(s, rn);
TCGv_i32 tcg_res = tcg_temp_new_i32();
read_vec_element_i32(s, tcg_op, rn, 0, MO_16);
switch (opcode) {
case 0x0: /* FMOV */
tcg_gen_mov_i32(tcg_res, tcg_op);
@ -4654,7 +4733,8 @@ static void handle_fp_1src_half(DisasContext *s, int opcode, int rd, int rn)
tcg_gen_xori_i32(tcg_res, tcg_op, 0x8000);
break;
case 0x3: /* FSQRT */
gen_helper_sqrt_f16(tcg_res, tcg_op, cpu_env);
fpst = get_fpstatus_ptr(true);
gen_helper_sqrt_f16(tcg_res, tcg_op, fpst);
break;
case 0x8: /* FRINTN */
case 0x9: /* FRINTP */
@ -5050,6 +5130,61 @@ static void handle_fp_2src_double(DisasContext *s, int opcode,
tcg_temp_free_i64(tcg_res);
}
/* Floating-point data-processing (2 source) - half precision */
static void handle_fp_2src_half(DisasContext *s, int opcode,
int rd, int rn, int rm)
{
TCGv_i32 tcg_op1;
TCGv_i32 tcg_op2;
TCGv_i32 tcg_res;
TCGv_ptr fpst;
tcg_res = tcg_temp_new_i32();
fpst = get_fpstatus_ptr(true);
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
switch (opcode) {
case 0x0: /* FMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x1: /* FDIV */
gen_helper_advsimd_divh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x2: /* FADD */
gen_helper_advsimd_addh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x3: /* FSUB */
gen_helper_advsimd_subh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x4: /* FMAX */
gen_helper_advsimd_maxh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x5: /* FMIN */
gen_helper_advsimd_minh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x6: /* FMAXNM */
gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x7: /* FMINNM */
gen_helper_advsimd_minnumh(tcg_res, tcg_op1, tcg_op2, fpst);
break;
case 0x8: /* FNMUL */
gen_helper_advsimd_mulh(tcg_res, tcg_op1, tcg_op2, fpst);
tcg_gen_xori_i32(tcg_res, tcg_res, 0x8000);
break;
default:
g_assert_not_reached();
}
write_fp_sreg(s, rd, tcg_res);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i32(tcg_op1);
tcg_temp_free_i32(tcg_op2);
tcg_temp_free_i32(tcg_res);
}
/* Floating point data-processing (2 source)
* 31 30 29 28 24 23 22 21 20 16 15 12 11 10 9 5 4 0
* +---+---+---+-----------+------+---+------+--------+-----+------+------+
@ -5082,6 +5217,16 @@ static void disas_fp_2src(DisasContext *s, uint32_t insn)
}
handle_fp_2src_double(s, opcode, rd, rn, rm);
break;
case 3:
if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
unallocated_encoding(s);
return;
}
if (!fp_access_check(s)) {
return;
}
handle_fp_2src_half(s, opcode, rd, rn, rm);
break;
default:
unallocated_encoding(s);
}
@ -5163,6 +5308,44 @@ static void handle_fp_3src_double(DisasContext *s, bool o0, bool o1,
tcg_temp_free_i64(tcg_res);
}
/* Floating-point data-processing (3 source) - half precision */
static void handle_fp_3src_half(DisasContext *s, bool o0, bool o1,
int rd, int rn, int rm, int ra)
{
TCGv_i32 tcg_op1, tcg_op2, tcg_op3;
TCGv_i32 tcg_res = tcg_temp_new_i32();
TCGv_ptr fpst = get_fpstatus_ptr(true);
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
tcg_op3 = read_fp_hreg(s, ra);
/* These are fused multiply-add, and must be done as one
* floating point operation with no rounding between the
* multiplication and addition steps.
* NB that doing the negations here as separate steps is
* correct: an input NaN should come out with its sign bit
* flipped if it is a negated input.
*/
if (o1 == true) {
tcg_gen_xori_i32(tcg_op3, tcg_op3, 0x8000);
}
if (o0 != o1) {
tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
}
gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_op3, fpst);
write_fp_sreg(s, rd, tcg_res);
tcg_temp_free_ptr(fpst);
tcg_temp_free_i32(tcg_op1);
tcg_temp_free_i32(tcg_op2);
tcg_temp_free_i32(tcg_op3);
tcg_temp_free_i32(tcg_res);
}
/* Floating point data-processing (3 source)
* 31 30 29 28 24 23 22 21 20 16 15 14 10 9 5 4 0
* +---+---+---+-----------+------+----+------+----+------+------+------+
@ -5192,6 +5375,16 @@ static void disas_fp_3src(DisasContext *s, uint32_t insn)
}
handle_fp_3src_double(s, o0, o1, rd, rn, rm, ra);
break;
case 3:
if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
unallocated_encoding(s);
return;
}
if (!fp_access_check(s)) {
return;
}
handle_fp_3src_half(s, o0, o1, rd, rn, rm, ra);
break;
default:
unallocated_encoding(s);
}
@ -5239,11 +5432,25 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
{
int rd = extract32(insn, 0, 5);
int imm8 = extract32(insn, 13, 8);
int is_double = extract32(insn, 22, 2);
int type = extract32(insn, 22, 2);
uint64_t imm;
TCGv_i64 tcg_res;
TCGMemOp sz;
if (is_double > 1) {
switch (type) {
case 0:
sz = MO_32;
break;
case 1:
sz = MO_64;
break;
case 3:
sz = MO_16;
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -5252,7 +5459,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
return;
}
imm = vfp_expand_imm(MO_32 + is_double, imm8);
imm = vfp_expand_imm(sz, imm8);
tcg_res = tcg_const_i64(imm);
write_fp_dreg(s, rd, tcg_res);
@ -5268,11 +5475,11 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
bool itof, int rmode, int scale, int sf, int type)
{
bool is_signed = !(opcode & 1);
bool is_double = type;
TCGv_ptr tcg_fpstatus;
TCGv_i32 tcg_shift;
TCGv_i32 tcg_shift, tcg_single;
TCGv_i64 tcg_double;
tcg_fpstatus = get_fpstatus_ptr(false);
tcg_fpstatus = get_fpstatus_ptr(type == 3);
tcg_shift = tcg_const_i32(64 - scale);
@ -5290,8 +5497,9 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
tcg_int = tcg_extend;
}
if (is_double) {
TCGv_i64 tcg_double = tcg_temp_new_i64();
switch (type) {
case 1: /* float64 */
tcg_double = tcg_temp_new_i64();
if (is_signed) {
gen_helper_vfp_sqtod(tcg_double, tcg_int,
tcg_shift, tcg_fpstatus);
@ -5301,8 +5509,10 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
}
write_fp_dreg(s, rd, tcg_double);
tcg_temp_free_i64(tcg_double);
} else {
TCGv_i32 tcg_single = tcg_temp_new_i32();
break;
case 0: /* float32 */
tcg_single = tcg_temp_new_i32();
if (is_signed) {
gen_helper_vfp_sqtos(tcg_single, tcg_int,
tcg_shift, tcg_fpstatus);
@ -5312,6 +5522,23 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
}
write_fp_sreg(s, rd, tcg_single);
tcg_temp_free_i32(tcg_single);
break;
case 3: /* float16 */
tcg_single = tcg_temp_new_i32();
if (is_signed) {
gen_helper_vfp_sqtoh(tcg_single, tcg_int,
tcg_shift, tcg_fpstatus);
} else {
gen_helper_vfp_uqtoh(tcg_single, tcg_int,
tcg_shift, tcg_fpstatus);
}
write_fp_sreg(s, rd, tcg_single);
tcg_temp_free_i32(tcg_single);
break;
default:
g_assert_not_reached();
}
} else {
TCGv_i64 tcg_int = cpu_reg(s, rd);
@ -5328,8 +5555,9 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
if (is_double) {
TCGv_i64 tcg_double = read_fp_dreg(s, rn);
switch (type) {
case 1: /* float64 */
tcg_double = read_fp_dreg(s, rn);
if (is_signed) {
if (!sf) {
gen_helper_vfp_tosld(tcg_int, tcg_double,
@ -5347,9 +5575,14 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
tcg_shift, tcg_fpstatus);
}
}
if (!sf) {
tcg_gen_ext32u_i64(tcg_int, tcg_int);
}
tcg_temp_free_i64(tcg_double);
} else {
TCGv_i32 tcg_single = read_fp_sreg(s, rn);
break;
case 0: /* float32 */
tcg_single = read_fp_sreg(s, rn);
if (sf) {
if (is_signed) {
gen_helper_vfp_tosqs(tcg_int, tcg_single,
@ -5371,14 +5604,39 @@ static void handle_fpfpcvt(DisasContext *s, int rd, int rn, int opcode,
tcg_temp_free_i32(tcg_dest);
}
tcg_temp_free_i32(tcg_single);
break;
case 3: /* float16 */
tcg_single = read_fp_sreg(s, rn);
if (sf) {
if (is_signed) {
gen_helper_vfp_tosqh(tcg_int, tcg_single,
tcg_shift, tcg_fpstatus);
} else {
gen_helper_vfp_touqh(tcg_int, tcg_single,
tcg_shift, tcg_fpstatus);
}
} else {
TCGv_i32 tcg_dest = tcg_temp_new_i32();
if (is_signed) {
gen_helper_vfp_toslh(tcg_dest, tcg_single,
tcg_shift, tcg_fpstatus);
} else {
gen_helper_vfp_toulh(tcg_dest, tcg_single,
tcg_shift, tcg_fpstatus);
}
tcg_gen_extu_i32_i64(tcg_int, tcg_dest);
tcg_temp_free_i32(tcg_dest);
}
tcg_temp_free_i32(tcg_single);
break;
default:
g_assert_not_reached();
}
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
tcg_temp_free_i32(tcg_rmode);
if (!sf) {
tcg_gen_ext32u_i64(tcg_int, tcg_int);
}
}
tcg_temp_free_ptr(tcg_fpstatus);
@ -5403,8 +5661,21 @@ static void disas_fp_fixed_conv(DisasContext *s, uint32_t insn)
bool sf = extract32(insn, 31, 1);
bool itof;
if (sbit || (type > 1)
|| (!sf && scale < 32)) {
if (sbit || (!sf && scale < 32)) {
unallocated_encoding(s);
return;
}
switch (type) {
case 0: /* float32 */
case 1: /* float64 */
break;
case 3: /* float16 */
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -5438,32 +5709,34 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
if (itof) {
TCGv_i64 tcg_rn = cpu_reg(s, rn);
TCGv_i64 tmp;
switch (type) {
case 0:
{
/* 32 bit */
TCGv_i64 tmp = tcg_temp_new_i64();
tmp = tcg_temp_new_i64();
tcg_gen_ext32u_i64(tmp, tcg_rn);
tcg_gen_st_i64(tmp, cpu_env, fp_reg_offset(s, rd, MO_64));
tcg_gen_movi_i64(tmp, 0);
tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(s, rd));
write_fp_dreg(s, rd, tmp);
tcg_temp_free_i64(tmp);
break;
}
case 1:
{
/* 64 bit */
TCGv_i64 tmp = tcg_const_i64(0);
tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_offset(s, rd, MO_64));
tcg_gen_st_i64(tmp, cpu_env, fp_reg_hi_offset(s, rd));
tcg_temp_free_i64(tmp);
write_fp_dreg(s, rd, tcg_rn);
break;
}
case 2:
/* 64 bit to top half. */
tcg_gen_st_i64(tcg_rn, cpu_env, fp_reg_hi_offset(s, rd));
clear_vec_high(s, true, rd);
break;
case 3:
/* 16 bit */
tmp = tcg_temp_new_i64();
tcg_gen_ext16u_i64(tmp, tcg_rn);
write_fp_dreg(s, rd, tmp);
tcg_temp_free_i64(tmp);
break;
default:
g_assert_not_reached();
}
} else {
TCGv_i64 tcg_rd = cpu_reg(s, rd);
@ -5481,6 +5754,12 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
/* 64 bits from top half */
tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_hi_offset(s, rn));
break;
case 3:
/* 16 bit */
tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16));
break;
default:
g_assert_not_reached();
}
}
}
@ -5520,6 +5799,12 @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
case 0xa: /* 64 bit */
case 0xd: /* 64 bit to top half of quad */
break;
case 0x6: /* 16-bit float, 32-bit int */
case 0xe: /* 16-bit float, 64-bit int */
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
/* all other sf/type/rmode combinations are invalid */
unallocated_encoding(s);
@ -5534,7 +5819,20 @@ static void disas_fp_int_conv(DisasContext *s, uint32_t insn)
/* actual FP conversions */
bool itof = extract32(opcode, 1, 1);
if (type > 1 || (rmode != 0 && opcode > 1)) {
if (rmode != 0 && opcode > 1) {
unallocated_encoding(s);
return;
}
switch (type) {
case 0: /* float32 */
case 1: /* float64 */
break;
case 3: /* float16 */
if (arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
break;
}
/* fallthru */
default:
unallocated_encoding(s);
return;
}
@ -7159,13 +7457,26 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
int immh, int immb, int opcode,
int rn, int rd)
{
bool is_double = extract32(immh, 3, 1);
int size = is_double ? MO_64 : MO_32;
int elements;
int size, elements, fracbits;
int immhb = immh << 3 | immb;
int fracbits = (is_double ? 128 : 64) - immhb;
if (!extract32(immh, 2, 2)) {
if (immh & 8) {
size = MO_64;
if (!is_scalar && !is_q) {
unallocated_encoding(s);
return;
}
} else if (immh & 4) {
size = MO_32;
} else if (immh & 2) {
size = MO_16;
if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
unallocated_encoding(s);
return;
}
} else {
/* immh == 0 would be a failure of the decode logic */
g_assert(immh == 1);
unallocated_encoding(s);
return;
}
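/* Summary of the immh decode above (annotation): the highest set bit
 * selects the element size -- 1xxx -> MO_64 (scalar or Q=1 only),
 * 01xx -> MO_32, 001x -> MO_16 (FP16 extension required), and
 * 0001 is rejected as unallocated. */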
@ -7173,20 +7484,14 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
if (is_scalar) {
elements = 1;
} else {
elements = is_double ? 2 : is_q ? 4 : 2;
if (is_double && !is_q) {
unallocated_encoding(s);
return;
}
elements = (8 << is_q) >> size;
}
fracbits = (16 << size) - immhb;
if (!fp_access_check(s)) {
return;
}
/* immh == 0 would be a failure of the decode logic */
g_assert(immh);
handle_simd_intfp_conv(s, rd, rn, elements, !is_u, fracbits, size);
}
@ -7195,19 +7500,28 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
bool is_q, bool is_u,
int immh, int immb, int rn, int rd)
{
bool is_double = extract32(immh, 3, 1);
int immhb = immh << 3 | immb;
int fracbits = (is_double ? 128 : 64) - immhb;
int pass;
int pass, size, fracbits;
TCGv_ptr tcg_fpstatus;
TCGv_i32 tcg_rmode, tcg_shift;
if (!extract32(immh, 2, 2)) {
unallocated_encoding(s);
return;
}
if (!is_scalar && !is_q && is_double) {
if (immh & 0x8) {
size = MO_64;
if (!is_scalar && !is_q) {
unallocated_encoding(s);
return;
}
} else if (immh & 0x4) {
size = MO_32;
} else if (immh & 0x2) {
size = MO_16;
if (!arm_dc_feature(s, ARM_FEATURE_V8_FP16)) {
unallocated_encoding(s);
return;
}
} else {
/* Should have split out AdvSIMD modified immediate earlier. */
assert(immh == 1);
unallocated_encoding(s);
return;
}
@ -7219,11 +7533,12 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
assert(!(is_scalar && is_q));
tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
tcg_fpstatus = get_fpstatus_ptr(false);
tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
fracbits = (16 << size) - immhb;
tcg_shift = tcg_const_i32(fracbits);
if (is_double) {
if (size == MO_64) {
int maxpass = is_scalar ? 1 : 2;
for (pass = 0; pass < maxpass; pass++) {
@ -7240,20 +7555,37 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
}
clear_vec_high(s, is_q, rd);
} else {
int maxpass = is_scalar ? 1 : is_q ? 4 : 2;
void (*fn)(TCGv_i32, TCGv_i32, TCGv_i32, TCGv_ptr);
int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);
switch (size) {
case MO_16:
if (is_u) {
fn = gen_helper_vfp_touhh;
} else {
fn = gen_helper_vfp_toshh;
}
break;
case MO_32:
if (is_u) {
fn = gen_helper_vfp_touls;
} else {
fn = gen_helper_vfp_tosls;
}
break;
default:
g_assert_not_reached();
}
for (pass = 0; pass < maxpass; pass++) {
TCGv_i32 tcg_op = tcg_temp_new_i32();
read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
if (is_u) {
gen_helper_vfp_touls(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
} else {
gen_helper_vfp_tosls(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
}
read_vec_element_i32(s, tcg_op, rn, pass, size);
fn(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
if (is_scalar) {
write_fp_sreg(s, rd, tcg_op);
} else {
write_vec_element_i32(s, tcg_op, rd, pass, MO_32);
write_vec_element_i32(s, tcg_op, rd, pass, size);
}
tcg_temp_free_i32(tcg_op);
}
@ -7413,13 +7745,10 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
tcg_temp_free_i64(tcg_op2);
tcg_temp_free_i64(tcg_res);
} else {
TCGv_i32 tcg_op1 = tcg_temp_new_i32();
TCGv_i32 tcg_op2 = tcg_temp_new_i32();
TCGv_i32 tcg_op1 = read_fp_hreg(s, rn);
TCGv_i32 tcg_op2 = read_fp_hreg(s, rm);
TCGv_i64 tcg_res = tcg_temp_new_i64();
read_vec_element_i32(s, tcg_op1, rn, 0, MO_16);
read_vec_element_i32(s, tcg_op2, rm, 0, MO_16);
gen_helper_neon_mull_s16(tcg_res, tcg_op1, tcg_op2);
gen_helper_neon_addl_saturate_s32(tcg_res, cpu_env, tcg_res, tcg_res);
@ -7960,13 +8289,10 @@ static void disas_simd_scalar_three_reg_same_fp16(DisasContext *s,
fpst = get_fpstatus_ptr(true);
tcg_op1 = tcg_temp_new_i32();
tcg_op2 = tcg_temp_new_i32();
tcg_op1 = read_fp_hreg(s, rn);
tcg_op2 = read_fp_hreg(s, rm);
tcg_res = tcg_temp_new_i32();
read_vec_element_i32(s, tcg_op1, rn, 0, MO_16);
read_vec_element_i32(s, tcg_op2, rm, 0, MO_16);
switch (fpopcode) {
case 0x03: /* FMULX */
gen_helper_advsimd_mulxh(tcg_res, tcg_op1, tcg_op2, fpst);
@ -11885,11 +12211,9 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
}
if (is_scalar) {
TCGv_i32 tcg_op = tcg_temp_new_i32();
TCGv_i32 tcg_op = read_fp_hreg(s, rn);
TCGv_i32 tcg_res = tcg_temp_new_i32();
read_vec_element_i32(s, tcg_op, rn, 0, MO_16);
switch (fpop) {
case 0x1a: /* FCVTNS */
case 0x1b: /* FCVTMS */


@ -10783,8 +10783,23 @@ static void disas_thumb2_insn(DisasContext *s, uint32_t insn)
/* Coprocessor. */
if (arm_dc_feature(s, ARM_FEATURE_M)) {
/* We don't currently implement M profile FP support,
* so this entire space should give a NOCP fault.
* so this entire space should give a NOCP fault, with
* the exception of the v8M VLLDM and VLSTM insns, which
* must be NOPs in Secure state and UNDEF in Nonsecure state.
*/
if (arm_dc_feature(s, ARM_FEATURE_V8) &&
(insn & 0xffa00f00) == 0xec200a00) {
/* 0b1110_1100_0x1x_xxxx_xxxx_1010_xxxx_xxxx
* - VLLDM, VLSTM
* We choose to UNDEF if the RAZ bits are non-zero.
*/
if (!s->v8m_secure || (insn & 0x0040f0ff)) {
goto illegal_op;
}
/* Just NOP since FP support is not implemented */
break;
}
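/* Decode check spelled out (annotation, bit numbering assumed from the
 * pattern comment): mask 0xffa00f00 pins bits 31-24 (0xec), bits 23 and
 * 21, and bits 11-8 (0b1010), leaving bit 20 to distinguish VLSTM from
 * VLLDM; bit 22 together with bits 15-12 and 7-0 is then caught by the
 * RAZ test (insn & 0x0040f0ff). */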
/* All other insns: NOCP */
gen_exception_insn(s, 4, EXCP_NOCP, syn_uncategorized(),
default_exception_el(s));
break;


@ -510,7 +510,7 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, "spec-ctrl", NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, "ssbd",
},
.cpuid_eax = 7,
.cpuid_needs_ecx = true, .cpuid_ecx = 0,
@ -541,7 +541,7 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
"ibpb", NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, NULL, NULL, NULL,
NULL, "virt-ssbd", NULL, NULL,
NULL, NULL, NULL, NULL,
},
.cpuid_eax = 0x80000008,


@ -351,6 +351,7 @@ typedef enum X86Seg {
#define MSR_IA32_FEATURE_CONTROL 0x0000003a
#define MSR_TSC_ADJUST 0x0000003b
#define MSR_IA32_SPEC_CTRL 0x48
#define MSR_VIRT_SSBD 0xc001011f
#define MSR_IA32_TSCDEADLINE 0x6e0
#define FEATURE_CONTROL_LOCKED (1<<0)
@ -684,6 +685,7 @@ typedef uint32_t FeatureWordArray[FEATURE_WORDS];
#define CPUID_7_0_EDX_AVX512_4VNNIW (1U << 2) /* AVX512 Neural Network Instructions */
#define CPUID_7_0_EDX_AVX512_4FMAPS (1U << 3) /* AVX512 Multiply Accumulation Single Precision */
#define CPUID_7_0_EDX_SPEC_CTRL (1U << 26) /* Speculation Control */
#define CPUID_7_0_EDX_SPEC_CTRL_SSBD (1U << 31) /* Speculative Store Bypass Disable */
#define KVM_HINTS_DEDICATED (1U << 0)
@ -1149,6 +1151,7 @@ typedef struct CPUX86State {
uint32_t pkru;
uint64_t spec_ctrl;
uint64_t virt_ssbd;
/* End of state preserved by INIT (dummy marker). */
struct {} end_init_save;


@ -92,6 +92,7 @@ static bool has_msr_hv_stimer;
static bool has_msr_hv_frequencies;
static bool has_msr_xss;
static bool has_msr_spec_ctrl;
static bool has_msr_virt_ssbd;
static bool has_msr_smi_count;
static uint32_t has_architectural_pmu_version;
@ -1218,6 +1219,9 @@ static int kvm_get_supported_msrs(KVMState *s)
case MSR_IA32_SPEC_CTRL:
has_msr_spec_ctrl = true;
break;
case MSR_VIRT_SSBD:
has_msr_virt_ssbd = true;
break;
}
}
}
@ -1706,6 +1710,10 @@ static int kvm_put_msrs(X86CPU *cpu, int level)
if (has_msr_spec_ctrl) {
kvm_msr_entry_add(cpu, MSR_IA32_SPEC_CTRL, env->spec_ctrl);
}
if (has_msr_virt_ssbd) {
kvm_msr_entry_add(cpu, MSR_VIRT_SSBD, env->virt_ssbd);
}
#ifdef TARGET_X86_64
if (lm_capable_kernel) {
kvm_msr_entry_add(cpu, MSR_CSTAR, env->cstar);
@ -2077,8 +2085,9 @@ static int kvm_get_msrs(X86CPU *cpu)
if (has_msr_spec_ctrl) {
kvm_msr_entry_add(cpu, MSR_IA32_SPEC_CTRL, 0);
}
if (has_msr_virt_ssbd) {
kvm_msr_entry_add(cpu, MSR_VIRT_SSBD, 0);
}
if (!env->tsc_valid) {
kvm_msr_entry_add(cpu, MSR_IA32_TSC, 0);
env->tsc_valid = !runstate_is_running();
@ -2444,6 +2453,9 @@ static int kvm_get_msrs(X86CPU *cpu)
case MSR_IA32_SPEC_CTRL:
env->spec_ctrl = msrs[i].data;
break;
case MSR_VIRT_SSBD:
env->virt_ssbd = msrs[i].data;
break;
case MSR_IA32_RTIT_CTL:
env->msr_rtit_ctrl = msrs[i].data;
break;


@ -893,6 +893,25 @@ static const VMStateDescription vmstate_msr_intel_pt = {
}
};
static bool virt_ssbd_needed(void *opaque)
{
X86CPU *cpu = opaque;
CPUX86State *env = &cpu->env;
return env->virt_ssbd != 0;
}
static const VMStateDescription vmstate_msr_virt_ssbd = {
.name = "cpu/virt_ssbd",
.version_id = 1,
.minimum_version_id = 1,
.needed = virt_ssbd_needed,
.fields = (VMStateField[]){
VMSTATE_UINT64(env.virt_ssbd, X86CPU),
VMSTATE_END_OF_LIST()
}
};
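/* The .needed hook keeps the migration stream backward compatible: the
 * subsection is only emitted when the guest has actually written a
 * nonzero value to the MSR, so migration to an older QEMU that does not
 * know this subsection still succeeds for guests that never touched it. */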
VMStateDescription vmstate_x86_cpu = {
.name = "cpu",
.version_id = 12,
@ -1015,6 +1034,7 @@ VMStateDescription vmstate_x86_cpu = {
&vmstate_spec_ctrl,
&vmstate_mcg_ext_ctl,
&vmstate_msr_intel_pt,
&vmstate_msr_virt_ssbd,
NULL
}
};


@ -102,12 +102,16 @@ void HELPER(wcsr_dc)(CPULM32State *env, uint32_t dc)
void HELPER(wcsr_im)(CPULM32State *env, uint32_t im)
{
qemu_mutex_lock_iothread();
lm32_pic_set_im(env->pic_state, im);
qemu_mutex_unlock_iothread();
}
void HELPER(wcsr_ip)(CPULM32State *env, uint32_t im)
{
qemu_mutex_lock_iothread();
lm32_pic_set_ip(env->pic_state, im);
qemu_mutex_unlock_iothread();
}
void HELPER(wcsr_jtx)(CPULM32State *env, uint32_t jtx)


@ -200,6 +200,11 @@ static int cpu_pre_save(void *opaque)
;
cpu->mig_msr_mask = env->msr_mask & ~metamask;
cpu->mig_insns_flags = env->insns_flags & insns_compat_mask;
/* CPU models supported by old machines all have PPC_MEM_TLBIE,
* so we set it unconditionally to allow backward migration from
* a POWER9 host to a POWER8 host.
*/
cpu->mig_insns_flags |= PPC_MEM_TLBIE;
cpu->mig_insns_flags2 = env->insns_flags2 & insns_compat_mask2;
cpu->mig_nb_BATs = env->nb_BATs;
}


@ -7296,6 +7296,7 @@ static bool ppc_tr_breakpoint_check(DisasContextBase *dcbase, CPUState *cs,
DisasContext *ctx = container_of(dcbase, DisasContext, base);
gen_debug_exception(ctx);
dcbase->is_jmp = DISAS_NORETURN;
/* The address covered by the breakpoint must be included in
[tb->pc, tb->pc + tb->size) in order for it to be
properly cleared -- thus we increment the PC here so that


@ -1733,7 +1733,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
tcg_out_insn(s, 3305, LDR, offset, TCG_REG_TMP);
}
tcg_out_insn(s, 3207, BR, TCG_REG_TMP);
s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
set_jmp_reset_offset(s, a0);
break;
case INDEX_op_goto_ptr:


@ -159,8 +159,8 @@ typedef enum {
INSN_STRD_IMM = 0x004000f0,
INSN_STRD_REG = 0x000000f0,
INSN_DMB_ISH = 0x5bf07ff5,
INSN_DMB_MCR = 0xba0f07ee,
INSN_DMB_ISH = 0xf57ff05b,
INSN_DMB_MCR = 0xee070fba,
/* Architected nop introduced in v6k. */
/* ??? This is an MSR (imm) 0,0,0 insn. Anyone know if this
@ -1822,7 +1822,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
tcg_out_movi32(s, COND_AL, base, ptr - dil);
}
tcg_out_ld32_12(s, COND_AL, TCG_REG_PC, base, dil);
s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
set_jmp_reset_offset(s, args[0]);
}
break;
case INDEX_op_goto_ptr:


@ -854,11 +854,11 @@ static void tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
switch (vece) {
case MO_8:
/* ??? With zero in a register, use PSHUFB. */
tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, 0, a);
tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
a = r;
/* FALLTHRU */
case MO_16:
tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, 0, a);
tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
a = r;
/* FALLTHRU */
case MO_32:
@ -867,7 +867,7 @@ static void tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
tcg_out8(s, 0);
break;
case MO_64:
tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, 0, a);
tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, a, a);
break;
default:
g_assert_not_reached();
@ -2245,7 +2245,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
tcg_out_modrm_offset(s, OPC_GRP5, EXT5_JMPN_Ev, -1,
(intptr_t)(s->tb_jmp_target_addr + a0));
}
s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
set_jmp_reset_offset(s, a0);
break;
case INDEX_op_goto_ptr:
/* jmp to the given host address (could be epilogue) */
@ -3529,7 +3529,7 @@ static void tcg_target_init(TCGContext *s)
tcg_target_available_regs[TCG_TYPE_V256] = ALL_VECTOR_REGS;
}
tcg_target_call_clobber_regs = 0;
tcg_target_call_clobber_regs = ALL_VECTOR_REGS;
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_EAX);
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_EDX);
tcg_regset_set_reg(tcg_target_call_clobber_regs, TCG_REG_ECX);


@ -1744,7 +1744,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
tcg_out_opc_reg(s, OPC_JR, 0, TCG_TMP0, 0);
}
tcg_out_nop(s);
s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
set_jmp_reset_offset(s, a0);
break;
case INDEX_op_goto_ptr:
/* jmp to the given host address (could be epilogue) */


@ -2025,10 +2025,10 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
}
tcg_out32(s, MTSPR | RS(TCG_REG_TB) | CTR);
tcg_out32(s, BCCTR | BO_ALWAYS);
s->tb_jmp_reset_offset[args[0]] = c = tcg_current_code_size(s);
set_jmp_reset_offset(s, args[0]);
if (USE_REG_TB) {
/* For the unlinked case, need to reset TCG_REG_TB. */
c = -c;
c = -tcg_current_code_size(s);
assert(c == (int16_t)c);
tcg_out32(s, ADDI | TAI(TCG_REG_TB, TCG_REG_TB, c));
}


@ -1783,7 +1783,7 @@ static inline void tcg_out_op(TCGContext *s, TCGOpcode opc,
/* and go there */
tcg_out_insn(s, RR, BCR, S390_CC_ALWAYS, TCG_REG_TB);
}
s->tb_jmp_reset_offset[a0] = tcg_current_code_size(s);
set_jmp_reset_offset(s, a0);
/* For the unlinked path of goto_tb, we need to reset
TCG_REG_TB to the beginning of this TB. */


@ -1388,12 +1388,12 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
tcg_out_arithi(s, TCG_REG_G0, TCG_REG_TB, 0, JMPL);
tcg_out_nop(s);
}
s->tb_jmp_reset_offset[a0] = c = tcg_current_code_size(s);
set_jmp_reset_offset(s, a0);
/* For the unlinked path of goto_tb, we need to reset
TCG_REG_TB to the beginning of this TB. */
if (USE_REG_TB) {
c = -c;
c = -tcg_current_code_size(s);
if (check_fit_i32(c, 13)) {
tcg_out_arithi(s, TCG_REG_TB, TCG_REG_TB, c, ARITH_ADD);
} else {


@ -293,6 +293,14 @@ TCGLabel *gen_new_label(void)
return l;
}
static void set_jmp_reset_offset(TCGContext *s, int which)
{
size_t off = tcg_current_code_size(s);
s->tb_jmp_reset_offset[which] = off;
/* Make sure that we didn't overflow the stored offset. */
assert(s->tb_jmp_reset_offset[which] == off);
}
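/* tb_jmp_reset_offset[] is narrower than size_t (16 bits, per the
 * tcg_op_buf_full() comment further below), so the store truncates;
 * reading the field back and comparing it against the full offset is a
 * cheap way to assert that no overflow occurred. */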
#include "tcg-target.inc.c"
static void tcg_region_bounds(size_t curr_region, void **pstart, void **pend)
@ -866,6 +874,7 @@ void tcg_func_start(TCGContext *s)
/* No temps have been previously allocated for size or locality. */
memset(s->free_temps, 0, sizeof(s->free_temps));
s->nb_ops = 0;
s->nb_labels = 0;
s->current_frame_offset = s->frame_start;
@ -1983,6 +1992,7 @@ void tcg_op_remove(TCGContext *s, TCGOp *op)
{
QTAILQ_REMOVE(&s->ops, op, link);
QTAILQ_INSERT_TAIL(&s->free_ops, op, link);
s->nb_ops--;
#ifdef CONFIG_PROFILER
atomic_set(&s->prof.del_op_count, s->prof.del_op_count + 1);
@ -2002,6 +2012,7 @@ static TCGOp *tcg_op_alloc(TCGOpcode opc)
}
memset(op, 0, offsetof(TCGOp, link));
op->opc = opc;
s->nb_ops++;
return op;
}
@ -3351,7 +3362,10 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb)
break;
case INDEX_op_insn_start:
if (num_insns >= 0) {
s->gen_insn_end_off[num_insns] = tcg_current_code_size(s);
size_t off = tcg_current_code_size(s);
s->gen_insn_end_off[num_insns] = off;
/* Assert that we do not overflow our stored offset. */
assert(s->gen_insn_end_off[num_insns] == off);
}
num_insns++;
for (i = 0; i < TARGET_INSN_START_WORDS; ++i) {


@ -655,6 +655,7 @@ struct TCGContext {
int nb_globals;
int nb_temps;
int nb_indirects;
int nb_ops;
/* goto_tb support */
tcg_insn_unit *code_buf;
@ -844,7 +845,14 @@ static inline TCGOp *tcg_last_op(void)
/* Test for whether to terminate the TB for using too many opcodes. */
static inline bool tcg_op_buf_full(void)
{
return false;
/* This is not a hard limit; it merely stops translation when
* we have produced "enough" opcodes. We want to limit TB size
* such that a RISC host can reasonably use a 16-bit signed
* branch within the TB. We also need to be mindful of the
* 16-bit unsigned offsets, TranslationBlock.jmp_reset_offset[]
* and TCGContext.gen_insn_end_off[].
*/
return tcg_ctx->nb_ops >= 4000;
}
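/* Sketch of the intended caller pattern (names illustrative): target
 * translator loops poll this once per guest instruction and close the
 * TB when the soft cap is reached:
 *
 *     while (!db->is_jmp && !tcg_op_buf_full()) {
 *         translate_one_insn(db);
 *     }
 */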
/* pool based memory allocation */


@ -574,7 +574,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc, const TCGArg *args,
/* Indirect jump method. */
TODO();
}
s->tb_jmp_reset_offset[args[0]] = tcg_current_code_size(s);
set_jmp_reset_offset(s, args[0]);
break;
case INDEX_op_br:
tci_out_label(s, arg_label(args[0]));


@ -29,9 +29,14 @@ status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
rm -f "$TEST_DIR/t.$IMGFMT.base_old"
rm -f "$TEST_DIR/t.$IMGFMT.base_new"
_cleanup_test_img
rm -f "$TEST_DIR/t.$IMGFMT.base_old"
rm -f "$TEST_DIR/t.$IMGFMT.base_new"
rm -f "$TEST_DIR/subdir/t.$IMGFMT"
rm -f "$TEST_DIR/subdir/t.$IMGFMT.base_old"
rm -f "$TEST_DIR/subdir/t.$IMGFMT.base_new"
rmdir "$TEST_DIR/subdir" 2> /dev/null
}
trap "_cleanup; exit \$status" 0 1 2 3 15
@ -123,6 +128,77 @@ io_pattern readv $((13 * CLUSTER_SIZE)) $CLUSTER_SIZE 0 1 0x00
io_pattern readv $((14 * CLUSTER_SIZE)) $CLUSTER_SIZE 0 1 0x11
io_pattern readv $((15 * CLUSTER_SIZE)) $CLUSTER_SIZE 0 1 0x00
echo
echo "=== Test rebase in a subdirectory of the working directory ==="
echo
# Clean up the old images beforehand so they do not interfere with
# this test
_cleanup
mkdir "$TEST_DIR/subdir"
# Relative to the overlay
BASE_OLD_OREL="t.$IMGFMT.base_old"
BASE_NEW_OREL="t.$IMGFMT.base_new"
# Relative to $TEST_DIR (which is going to be our working directory)
OVERLAY_WREL="subdir/t.$IMGFMT"
BASE_OLD="$TEST_DIR/subdir/$BASE_OLD_OREL"
BASE_NEW="$TEST_DIR/subdir/$BASE_NEW_OREL"
OVERLAY="$TEST_DIR/$OVERLAY_WREL"
# Test done here:
#
# Backing (old): 11 11 -- 11
# Backing (new): -- 22 22 11
# Overlay: -- -- -- --
#
# Rebasing works; we have verified that above. Here, we just want to
# see that rebasing is done for the correct target backing file.
TEST_IMG=$BASE_OLD _make_test_img 1M
TEST_IMG=$BASE_NEW _make_test_img 1M
TEST_IMG=$OVERLAY _make_test_img -b "$BASE_OLD_OREL" 1M
echo
$QEMU_IO "$BASE_OLD" \
-c "write -P 0x11 $((0 * CLUSTER_SIZE)) $((2 * CLUSTER_SIZE))" \
-c "write -P 0x11 $((3 * CLUSTER_SIZE)) $((1 * CLUSTER_SIZE))" \
| _filter_qemu_io
$QEMU_IO "$BASE_NEW" \
-c "write -P 0x22 $((1 * CLUSTER_SIZE)) $((2 * CLUSTER_SIZE))" \
-c "write -P 0x11 $((3 * CLUSTER_SIZE)) $((1 * CLUSTER_SIZE))" \
| _filter_qemu_io
echo
pushd "$TEST_DIR" >/dev/null
$QEMU_IMG rebase -f "$IMGFMT" -b "$BASE_NEW_OREL" "$OVERLAY_WREL"
popd >/dev/null
# Verify the backing path is correct
TEST_IMG=$OVERLAY _img_info | grep '^backing file'
echo
# Verify the data is correct
$QEMU_IO "$OVERLAY" \
-c "read -P 0x11 $((0 * CLUSTER_SIZE)) $CLUSTER_SIZE" \
-c "read -P 0x11 $((1 * CLUSTER_SIZE)) $CLUSTER_SIZE" \
-c "read -P 0x00 $((2 * CLUSTER_SIZE)) $CLUSTER_SIZE" \
-c "read -P 0x11 $((3 * CLUSTER_SIZE)) $CLUSTER_SIZE" \
| _filter_qemu_io
echo
# Verify that cluster #3 is not allocated (because it is the same in
# $BASE_OLD and $BASE_NEW)
$QEMU_IMG map "$OVERLAY" | _filter_qemu_img_map
# success, all done
echo "*** done"


@ -141,4 +141,34 @@ read 65536/65536 bytes at offset 917504
=== IO: pattern 0x00
read 65536/65536 bytes at offset 983040
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
=== Test rebase in a subdirectory of the working directory ===
Formatting 'TEST_DIR/subdir/t.IMGFMT.base_old', fmt=IMGFMT size=1048576
Formatting 'TEST_DIR/subdir/t.IMGFMT.base_new', fmt=IMGFMT size=1048576
Formatting 'TEST_DIR/subdir/t.IMGFMT', fmt=IMGFMT size=1048576 backing_file=t.IMGFMT.base_old
wrote 131072/131072 bytes at offset 0
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 65536/65536 bytes at offset 196608
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 131072/131072 bytes at offset 65536
128 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 65536/65536 bytes at offset 196608
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
backing file: t.IMGFMT.base_new (actual path: TEST_DIR/subdir/t.IMGFMT.base_new)
read 65536/65536 bytes at offset 0
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
read 65536/65536 bytes at offset 65536
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
read 65536/65536 bytes at offset 131072
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
read 65536/65536 bytes at offset 196608
64 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
Offset Length File
0 0x30000 TEST_DIR/subdir/t.IMGFMT
0x30000 0x10000 TEST_DIR/subdir/t.IMGFMT.base_new
*** done


@ -440,6 +440,36 @@ echo "{'execute': 'qmp_capabilities'}
-drive if=none,node-name=drive,file="$TEST_IMG",driver=qcow2 \
| _filter_qmp | _filter_qemu_io
echo
echo "=== Testing incoming inactive corrupted image ==="
echo
_make_test_img 64M
# Create an unaligned L1 entry, so qemu will signal a corruption when
# reading from the covered area
poke_file "$TEST_IMG" "$l1_offset" "\x00\x00\x00\x00\x2a\x2a\x2a\x2a"
# Inactive images are effectively read-only images, so this should be a
# non-fatal corruption (which does not modify the image)
echo "{'execute': 'qmp_capabilities'}
{'execute': 'human-monitor-command',
'arguments': {'command-line': 'qemu-io drive \"read 0 512\"'}}
{'execute': 'quit'}" \
| $QEMU -qmp stdio -nographic -nodefaults \
-blockdev "{'node-name': 'drive',
'driver': 'qcow2',
'file': {
'driver': 'file',
'filename': '$TEST_IMG'
}}" \
-incoming exec:'cat /dev/null' \
2>&1 \
| _filter_qmp | _filter_qemu_io
echo
# Image should not have been marked corrupt
_img_info --format-specific | grep 'corrupt:'
# success, all done
echo "*** done"
rm -f $seq.full


@ -420,4 +420,18 @@ write failed: Input/output error
{"return": ""}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false}}
=== Testing incoming inactive corrupted image ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
QMP_VERSION
{"return": {}}
qcow2: Image is corrupt: L2 table offset 0x2a2a2a00 unaligned (L1 index: 0); further non-fatal corruption events will be suppressed
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_IMAGE_CORRUPTED", "data": {"device": "", "msg": "L2 table offset 0x2a2a2a00 unaligned (L1 index: 0)", "node-name": "drive", "fatal": false}}
read failed: Input/output error
{"return": ""}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false}}
corrupt: false
*** done


@ -242,6 +242,23 @@ _run_cmd $QEMU_IO "${TEST_IMG}" -c 'write 0 512'
_cleanup_qemu
echo
echo "== Detecting -U and force-share conflicts =="
echo
echo 'No conflict:'
$QEMU_IMG info -U --image-opts driver=null-co,force-share=on
echo
echo 'Conflict:'
$QEMU_IMG info -U --image-opts driver=null-co,force-share=off
echo
echo 'No conflict:'
$QEMU_IO -c 'open -r -U -o driver=null-co,force-share=on'
echo
echo 'Conflict:'
$QEMU_IO -c 'open -r -U -o driver=null-co,force-share=off'
# success, all done
echo "*** done"
rm -f $seq.full


@ -399,4 +399,20 @@ Is another process using the image?
Closing the other
_qemu_io_wrapper TEST_DIR/t.qcow2 -c write 0 512
== Detecting -U and force-share conflicts ==
No conflict:
image: null-co://
file format: null-co
virtual size: 1.0G (1073741824 bytes)
disk size: unavailable
Conflict:
qemu-img: --force-share/-U conflicts with image options
No conflict:
Conflict:
-U conflicts with image options
*** done


@ -36,9 +36,9 @@ Formatting 'TEST_DIR/t.qcow2', fmt=qcow2 size=67108864 backing_file=TEST_DIR/t.q
{"return": {}}
Formatting 'TEST_DIR/t.qcow2.copy', fmt=qcow2 size=67108864 cluster_size=65536 lazy_refcounts=off refcount_bits=16
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "disk", "len": 4194304, "offset": 4194304, "speed": 65536, "type": "mirror"}}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "SHUTDOWN", "data": {"guest": false}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_READY", "data": {"device": "disk", "len": 4194304, "offset": 4194304, "speed": 65536, "type": "mirror"}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "disk", "len": 4194304, "offset": 4194304, "speed": 65536, "type": "mirror"}}
=== Start backup job and exit qemu ===


@ -0,0 +1,138 @@
#!/usr/bin/env python
#
# This test covers what happens when a mirror block job is cancelled
# in various phases of its existence.
#
# Note that this test only checks the emitted events (i.e.
# BLOCK_JOB_COMPLETED vs. BLOCK_JOB_CANCELLED), it does not compare
# whether the target is in sync with the source when the
# BLOCK_JOB_COMPLETED event occurs. This is covered by other tests
# (such as 041).
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Creator/Owner: Max Reitz <mreitz@redhat.com>
import iotests
from iotests import log
iotests.verify_platform(['linux'])
# Launches the VM, adds two null-co nodes (source and target), and
# starts a blockdev-mirror job on them.
#
# Either both or none of speed and buf_size must be given.
def start_mirror(vm, speed=None, buf_size=None):
vm.launch()
ret = vm.qmp('blockdev-add',
node_name='source',
driver='null-co',
size=1048576)
assert ret['return'] == {}
ret = vm.qmp('blockdev-add',
node_name='target',
driver='null-co',
size=1048576)
assert ret['return'] == {}
if speed is not None:
ret = vm.qmp('blockdev-mirror',
job_id='mirror',
device='source',
target='target',
sync='full',
speed=speed,
buf_size=buf_size)
else:
ret = vm.qmp('blockdev-mirror',
job_id='mirror',
device='source',
target='target',
sync='full')
assert ret['return'] == {}
log('')
log('=== Cancel mirror job before convergence ===')
log('')
log('--- force=false ---')
log('')
with iotests.VM() as vm:
# Low speed so it does not converge
start_mirror(vm, 65536, 65536)
log('Cancelling job')
log(vm.qmp('block-job-cancel', device='mirror', force=False))
log(vm.event_wait('BLOCK_JOB_CANCELLED'),
filters=[iotests.filter_qmp_event])
log('')
log('--- force=true ---')
log('')
with iotests.VM() as vm:
# Low speed so it does not converge
start_mirror(vm, 65536, 65536)
log('Cancelling job')
log(vm.qmp('block-job-cancel', device='mirror', force=True))
log(vm.event_wait('BLOCK_JOB_CANCELLED'),
filters=[iotests.filter_qmp_event])
log('')
log('=== Cancel mirror job after convergence ===')
log('')
log('--- force=false ---')
log('')
with iotests.VM() as vm:
start_mirror(vm)
log(vm.event_wait('BLOCK_JOB_READY'),
filters=[iotests.filter_qmp_event])
log('Cancelling job')
log(vm.qmp('block-job-cancel', device='mirror', force=False))
log(vm.event_wait('BLOCK_JOB_COMPLETED'),
filters=[iotests.filter_qmp_event])
log('')
log('--- force=true ---')
log('')
with iotests.VM() as vm:
start_mirror(vm)
log(vm.event_wait('BLOCK_JOB_READY'),
filters=[iotests.filter_qmp_event])
log('Cancelling job')
log(vm.qmp('block-job-cancel', device='mirror', force=True))
log(vm.event_wait('BLOCK_JOB_CANCELLED'),
filters=[iotests.filter_qmp_event])


@ -0,0 +1,30 @@
=== Cancel mirror job before convergence ===
--- force=false ---
Cancelling job
{u'return': {}}
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 65536, u'len': 1048576, u'offset': 65536}, u'event': u'BLOCK_JOB_CANCELLED'}
--- force=true ---
Cancelling job
{u'return': {}}
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 65536, u'len': 1048576, u'offset': 65536}, u'event': u'BLOCK_JOB_CANCELLED'}
=== Cancel mirror job after convergence ===
--- force=false ---
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 0, u'len': 1048576, u'offset': 1048576}, u'event': u'BLOCK_JOB_READY'}
Cancelling job
{u'return': {}}
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 0, u'len': 1048576, u'offset': 1048576}, u'event': u'BLOCK_JOB_COMPLETED'}
--- force=true ---
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 0, u'len': 1048576, u'offset': 1048576}, u'event': u'BLOCK_JOB_READY'}
Cancelling job
{u'return': {}}
{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data': {u'device': u'mirror', u'type': u'mirror', u'speed': 0, u'len': 1048576, u'offset': 1048576}, u'event': u'BLOCK_JOB_CANCELLED'}


@ -0,0 +1,60 @@
#!/bin/bash
#
# Test qemu-img vs. unaligned images
#
# Copyright (C) 2018 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
seq="$(basename $0)"
echo "QA output created by $seq"
here="$PWD"
status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
_supported_fmt raw
_supported_proto file
_supported_os Linux
echo
echo "=== Check mapping of unaligned raw image ==="
echo
_make_test_img 43009 # qemu-img create rounds size up
$QEMU_IMG map --output=json "$TEST_IMG" | _filter_qemu_img_map
truncate --size=43009 "$TEST_IMG" # so we resize it and check again
$QEMU_IMG map --output=json "$TEST_IMG" | _filter_qemu_img_map
$QEMU_IO -c 'w 43008 1' "$TEST_IMG" | _filter_qemu_io # writing also rounds up
$QEMU_IMG map --output=json "$TEST_IMG" | _filter_qemu_img_map
truncate --size=43009 "$TEST_IMG" # so we resize it and check again
$QEMU_IMG map --output=json "$TEST_IMG" | _filter_qemu_img_map
# success, all done
echo '*** done'
rm -f $seq.full
status=0


@ -0,0 +1,16 @@
QA output created by 221
=== Check mapping of unaligned raw image ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=43009
[{ "start": 0, "length": 43520, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 43520, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
wrote 1/1 bytes at offset 43008
1 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
[{ "start": 0, "length": 40960, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 40960, "length": 2049, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 43009, "length": 511, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 40960, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 40960, "length": 2049, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 43009, "length": 511, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
*** done


@ -212,3 +212,5 @@
211 rw auto quick
212 rw auto quick
213 rw auto quick
218 rw auto quick
221 rw auto quick


@ -214,6 +214,10 @@ static void char_mux_test(void)
g_assert_cmpint(h2.last_event, ==, -1);
/* switch focus */
qemu_chr_be_write(base, (void *)"\1b", 2);
g_assert_cmpint(h1.last_event, ==, 42);
g_assert_cmpint(h2.last_event, ==, CHR_EVENT_BREAK);
qemu_chr_be_write(base, (void *)"\1c", 2);
g_assert_cmpint(h1.last_event, ==, CHR_EVENT_MUX_IN);
g_assert_cmpint(h2.last_event, ==, CHR_EVENT_MUX_OUT);
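/* ("\1b" is the mux escape byte 0x01 followed by 'b', which delivers a
 * break to the focused frontend; "\1c" cycles focus, producing the
 * MUX_IN/MUX_OUT pair asserted above.) */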
@ -227,6 +231,10 @@ static void char_mux_test(void)
g_assert_cmpstr(h1.read_buf, ==, "hello");
h1.read_count = 0;
qemu_chr_be_write(base, (void *)"\1b", 2);
g_assert_cmpint(h1.last_event, ==, CHR_EVENT_BREAK);
g_assert_cmpint(h2.last_event, ==, CHR_EVENT_MUX_OUT);
/* remove first handler */
qemu_chr_fe_set_handlers(&chr_be1, NULL, NULL, NULL, NULL,
NULL, NULL, true);

Some files were not shown because too many files have changed in this diff.