Compare commits

...

97 Commits

Author SHA1 Message Date
Michael Roth ba87166e14 Update version for 2.10.2 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-18 10:09:38 -06:00
Laurent Vivier b7d059b91f spapr: don't initialize PATB entry if max-cpu-compat < power9
If KVM is enabled and the KVM MMU radix capability is available,
the partition table entry (patb_entry) for radix mode is
initialized by default in ppc_spapr_reset().

This is a problem if we want to migrate the guest to a POWER8 host
before the kernel has started and set the value to the one
expected for a POWER8 CPU.

The "-machine max-cpu-compat=power8" should allow to migrate
a POWER9 KVM host to a POWER8 KVM host, but because patb_entry
is set, the destination QEMU tries to enable radix mode on the
POWER8 host. This fails and cancels the migration:

    Process table config unsupported by the host
    error while loading state for instance 0x0 of device 'spapr'
    load of migration failed: Invalid argument

This patch doesn't set the PATB entry if the user provides
a CPU compatibility mode that doesn't support radix mode.
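
A minimal sketch of the guard this describes, assuming the helper and
capability names shown here (illustrative, not the exact patch):

    if (kvm_enabled() && kvmppc_has_cap_mmu_radix() &&
        ppc_check_compat(first_ppc_cpu, CPU_POWERPC_LOGICAL_3_00, 0,
                         spapr->max_compat_pvr)) {
        /* only pre-set the radix PATB entry when the effective compat
         * mode can actually support radix */
        spapr->patb_entry = PATBE1_GR;
    }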

Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 1481fe5fcf)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-15 09:36:56 -06:00
Suraj Jitindar Singh 2f3e3890c4 target/ppc: Update setting of cpu features to account for compat modes
The device tree nodes ibm,arch-vec-5-platform-support and ibm,pa-features
are used to communicate features of the cpu to the guest operating
system. The properties of each of these are determined based on the
selected cpu model and the availability of hypervisor features.
Currently the compatibility mode of the cpu is not taken into account.

The ibm,arch-vec-5-platform-support node is used to communicate the
level of support for various ISAv3 processor features to the guest
before CAS, to inform the guest's request. The available MMU mode
should only be hash unless the cpu is a POWER9 that is not in a
pre-POWER9 compat mode, in which case the available modes depend on
the accelerator and the hypervisor capabilities.

The ibm,pa-features node is used to communicate the level of cpu
support for various features to the guest OS. It should only contain
features relevant to the operating mode of the processor, that is, the
selected cpu model taking any compat mode into account. This means the
compat mode should be taken into account when choosing the properties
of ibm,pa-features, and they should match the compat mode selected, or
the cpu model selected if there is no compat mode.

Update the setting of these cpu features in the device tree as described
above to properly take into account any compat mode. We use the
ppc_check_compat function which takes into account the current processor
model and the cpu compat mode.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 7abd43baec)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-15 09:36:56 -06:00
Alex Williamson 26c1b49d56 vfio: Fix vfio-kvm group registration
Commit 8c37faa475 ("vfio-pci, ppc64/spapr: Reorder group-to-container
attaching") moved registration of groups with the vfio-kvm device from
vfio_get_group() to vfio_connect_container(), but it missed the case
where a group is attached to an existing container and takes an early
exit.  Perhaps this is a less common case on ppc64/spapr, but on x86
(without viommu) all groups are connected to the same container and
thus only the first group gets registered with the vfio-kvm device.
This becomes a problem if we then hot-unplug the devices associated
with that first group and we end up with KVM being misinformed about
any vfio connections that might remain.  Fix by including the call to
vfio_kvm_device_add_group() in this early exit path.
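
A sketch of the shape of the fix, simplified from
vfio_connect_container() (error handling omitted):

    QLIST_FOREACH(container, &space->containers, next) {
        if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
            group->container = container;
            QLIST_INSERT_HEAD(&container->group_list, group, container_next);
            /* the missing call: register the group with the vfio-kvm
             * device on this early-exit path too */
            vfio_kvm_device_add_group(group);
            return 0;
        }
    }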

Fixes: 8c37faa475 ("vfio-pci, ppc64/spapr: Reorder group-to-container attaching")
Cc: qemu-stable@nongnu.org # qemu-2.10+
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit 2016986aed)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-14 20:18:13 -06:00
David Gibson 5f214279d4 spapr: Include "pre-plugged" DIMMS in ram size calculation at reset
At guest reset time, we allocate a hash page table (HPT) for the guest
based on the guest's RAM size.  If dynamic HPT resizing is not
available, we use the maximum RAM size; if it is, we use the current
RAM size.

But the "current RAM size" calculation is incorrect - we just use the
"base" ram_size from the machine structure.  This doesn't include any
pluggable DIMMs that are already plugged at reset time.

This means that if you try to start a 'pseries' machine with a DIMM
specified on the command line that's much larger than the "base" RAM size,
then the guest will get a woefully inadequate HPT.  This can lead to a
guest freeze during boot as it runs out of HPT space during initial MMU
setup.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 768a20f3a4)
*drop dep on 9aa3397f
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 18:12:45 -06:00
Gerd Hoffmann 9c7714afd7 vga: handle cirrus vbe mode wraparounds.
Commit "3d90c62548 vga: stop passing pointers to vga_draw_line*
functions" is incomplete.  It doesn't handle the case that the vga
rendering code tries to create a shared surface, i.e. a pixman image
backed by vga video memory.  That can not work in case the guest display
wraps from end of video memory to the start.  So force shadowing in that
case.  Also adjust the snapshot region calculation.

This can trigger with cirrus only; when programming vbe modes using the
bochs api (stdvga, and also qxl and virtio-vga in vga compat mode),
wraparounds can't happen.

Fixes: CVE-2017-13672
Fixes: 3d90c62548
Cc: P J P <ppandit@redhat.com>
Reported-by: David Buchanan <d@vidbuchanan.co.uk>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20171010141323.14049-3-kraxel@redhat.com
(cherry picked from commit 28f77de26a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 13:01:53 -06:00
Gerd Hoffmann a0ad811956 vga: drop line_offset variable
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 362f811793)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 13:01:50 -06:00
Eric Blake b81833fe7d nbd/client: Don't hard-disconnect on ESHUTDOWN from server
The NBD spec says that a server may fail any transmission request
with ESHUTDOWN when it is apparent that no further request from
the client can be successfully honored.  The client is supposed
to then initiate a soft shutdown (wait for all remaining in-flight
requests to be answered, then send NBD_CMD_DISC).  However, since
qemu's server never uses ESHUTDOWN errors, this code was mostly
untested since its introduction in commit b6f5d3b5.

More recently, I learned that nbdkit as the NBD server is able to
send ESHUTDOWN errors, so I finally tested this code, and noticed
that our client was special-casing ESHUTDOWN to cause a hard
shutdown (immediate disconnect, with no NBD_CMD_DISC), but only
if the server sends this error as a simple reply.  Further
investigation found that commit d2febedb introduced a regression
where structured replies behave differently than simple replies -
but that the structured reply behavior is more in line with the
spec (even if we still lack code in nbd-client.c to properly quit
sending further requests).  So this patch reverts the portion of
b6f5d3b5 that introduced an improper hard-disconnect special-case
at the lower level, and leaves the future enhancement of a nicer
soft-disconnect at the higher level for another day.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20171113194857.13933-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 01b05c66a3)
 Conflicts:
	nbd/client.c
*drop dep on d2febedb
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 12:33:46 -06:00
Eric Blake 0fd80ef569 nbd-client: Refuse read-only client with BDRV_O_RDWR
The NBD spec says that clients should not try to write/trim to
an export advertised as read-only by the server.  But we failed
to check that, and would allow the block layer to use NBD with
BDRV_O_RDWR even when the server is read-only, which meant we
were depending on the server sending a proper EPERM failure for
various commands, and also exposes a leaky abstraction: using
qemu-io in read-write mode would succeed on 'w -z 0 0' because
of local short-circuiting logic, but 'w 0 0' would send a
request over the wire (where it then depends on the server, and
fails at least for qemu-nbd but might pass for other NBD
implementations).

With this patch, a client MUST request read-only mode to access
a server that is doing a read-only export, or else it will get
a message like:

can't open device nbd://localhost:10809/foo: request for write access conflicts with read-only export

It is no longer possible to even attempt writes over the wire
(including the corner case of 0-length writes), because the block
layer enforces the explicit read-only request; this matches the
behavior of qcow2 when backed by a read-only POSIX file.
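
A sketch of the check this adds at connection time (simplified; the
real code sits in the NBD client negotiation path):

    if ((flags & BDRV_O_RDWR) && (info.flags & NBD_FLAG_READ_ONLY)) {
        error_setg(errp, "request for write access conflicts with "
                   "read-only export");
        return -EACCES;
    }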

Fix several iotests to comply with the new behavior (since
qemu-nbd of an internal snapshot, as well as nbd-server-add over QMP,
default to a read-only export, we must tell blockdev-add/qemu-io to
set up a read-only client).

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20171108215703.9295-3-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
(cherry picked from commit 1104d83c72)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:54:08 -06:00
Vladimir Sementsov-Ogievskiy b01b1609e6 nbd/server: fix nbd_negotiate_handle_info
namelen should be used here; length is unrelated, and always 0 at this
point.  Broken since its introduction in commit f37708f6, but mostly
harmless (replying with '' as the name does not violate the protocol,
and does not confuse qemu as the nbd client since our implementation
does not ask for the name; but it might confuse some other client that
does ask for the name, especially if the default export is different
from the export name being queried).

Adding an assert makes it obvious that we are not skipping any bytes
in the client's message, as well as making it obvious that we were
using the wrong variable.
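
The bug's shape, sketched with a hypothetical helper name; the point is
which length variable feeds the reply:

    assert(length == 0);                    /* all option bytes consumed */
    send_info_name(client, namelen, name);  /* was: length, i.e. '' */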

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
CC: qemu-stable@nongnu.org
Message-Id: <20171101154204.27146-1-vsementsov@virtuozzo.com>
[eblake: improve commit message, squash in assert addition]
Signed-off-by: Eric Blake <eblake@redhat.com>

(cherry picked from commit 46321d6b5f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:49:26 -06:00
Greg Kurz 82ded5166b vhost: fix error check in vhost_verify_ring_mappings()
Since commit f1f9e6c5 "vhost: adapt vhost_verify_ring_mappings() to
virtio 1 ring layout", we check the mapping of each part (descriptor
table, available ring and used ring) of each virtqueue separately.

The checking of a part is done by the vhost_verify_ring_part_mapping()
function: it returns either 0 on success or a negative errno if the
part cannot be mapped at the same place.

Unfortunately, the vhost_verify_ring_mappings() function checks its
return value the other way round. It means that we either:
- only verify the descriptor table of the first virtqueue, and if it
  is valid we ignore all the other mappings
- or ignore all broken mappings until we reach a valid one

i.e., we only raise an error if all mappings are broken, and we
consider all mappings valid otherwise (false success), which is
obviously wrong.

This patch ensures that vhost_verify_ring_mappings() only returns
success if ALL mappings are okay.
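
An illustrative sketch of the corrected aggregation (hypothetical
names; each part check returns 0 on success or -errno):

    for (i = 0; i < n_parts; i++) {
        int r = verify_part_mapping(&parts[i]);
        if (r) {
            return r;     /* the bug effectively tested !r here */
        }
    }
    return 0;             /* success only when every part maps */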

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2fe45ec3bf)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:47:33 -06:00
Eric Blake 227196c1e7 nbd/server: CVE-2017-15118 Stack smash on large export name
Introduced in commit f37708f6b8 (2.10).  The NBD spec says a client
can request export names up to 4096 bytes in length, even though
they should not expect success on names longer than 256.  However,
qemu hard-codes the limit of 256, and fails to filter out a client
that probes for a longer name; the result is a stack smash that can
potentially give an attacker arbitrary control over the qemu
process.

The smash can be easily demonstrated with this client:
$ qemu-io -f raw nbd://localhost:10809/$(printf %3000d 1 | tr ' ' a)

If the qemu NBD server binary (whether the standalone qemu-nbd, or
the builtin server of QMP nbd-server-start) was compiled with
-fstack-protector-strong, the ability to exploit the stack smash
into arbitrary execution is a lot more difficult (but still
theoretically possible to a determined attacker, perhaps in
combination with other CVEs).  Still, crashing a running qemu (and
losing the VM) is bad enough, even if the attacker did not obtain
full execution control.
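
The essence of the fix, sketched (NBD_MAX_NAME_SIZE is qemu's existing
256-byte limit; error path simplified):

    if (namelen > NBD_MAX_NAME_SIZE) {
        error_setg(errp, "export name too long");
        return -EINVAL;   /* refuse before copying into the fixed-size
                           * stack buffer */
    }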

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 51ae4f8455)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:41:27 -06:00
Eric Blake 2ce8993512 nbd/server: CVE-2017-15119 Reject options larger than 32M
The NBD spec gives us permission to abruptly disconnect on clients
that send outrageously large option requests, rather than having
to spend the time reading to the end of the option.  No real
option request requires that much data anyways; and meanwhile, we
already have the practice of abruptly dropping the connection on
any client that sends NBD_CMD_WRITE with a payload larger than 32M.

For comparison, nbdkit drops the connection on any request with
more than 4096 bytes; however, that limit is probably too low
(as the NBD spec states an export name can theoretically be up
to 4096 bytes, which means a valid NBD_OPT_INFO could be even
longer) - even if qemu doesn't permit export names longer than 256
bytes.

It could be argued that a malicious client trying to get us to
read nearly 4G of data on a bad request is a form of denial of
service.  In particular, if the server requires TLS, but a client
that does not know the TLS credentials sends any option (other
than NBD_OPT_STARTTLS or NBD_OPT_EXPORT_NAME) with a stated
payload of nearly 4G, then the server was keeping the connection
alive trying to read all the payload, tying up resources that it
would rather be spending on a client that can get past the TLS
handshake.  Hence, this warranted a CVE.

Present since at least 2.5 when handling known options, and made
worse in 2.6 when fixing support for NBD_FLAG_C_FIXED_NEWSTYLE
to handle unknown options.
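
Sketched shape of the new limit (NBD_MAX_BUFFER_SIZE is qemu's existing
32M cap for NBD_CMD_WRITE; error path simplified):

    if (length > NBD_MAX_BUFFER_SIZE) {
        error_setg(errp, "option payload too large");
        return -EINVAL;   /* caller drops the connection abruptly */
    }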

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit fdad35ef6c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:41:22 -06:00
Jason Wang c2269a0b54 virtio-net: don't touch virtqueue if vm is stopped
Guest state should not be touched if the VM is stopped; unfortunately
we didn't check the running state and tried to drain the tx queue
unconditionally in virtio_net_set_status(). A crash was then noticed on
a migration destination when the user typed quit after the virtqueue
state was loaded but before the region cache was initialized. In this
case, virtio_net_drop_tx_queue_data() tries to access the uninitialized
region cache.

Fix this by only dropping tx queue data when the VM is running.
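
A sketch of the added guard (simplified from the tx-drain path;
vm_running is the VirtIODevice run-state flag):

    if (!vdev->vm_running) {
        return;   /* never touch guest state while the VM is stopped */
    }
    virtio_net_drop_tx_queue_data(vdev, vq);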

Fixes: 283e2c2adc ("net: virtio-net discards TX data after link down")
Cc: Yuri Benditovich <yuri.benditovich@daynix.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 70e53e6e4d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:40:52 -06:00
Peter Lieven 30e499bdc9 block/nfs: fix nfs_client_open for filesize greater than 1TB
DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE) was overflowing ret (int) if
st.st_size is greater than 1TB.
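
The arithmetic at issue: with 512-byte sectors, a 32-bit int overflows
at exactly 1TB (2^31 sectors).  Sketch of the fixed shape:

    /* was: int ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE); */
    int64_t ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);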

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-id: 1511798407-31129-1-git-send-email-pl@kamp.de
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit f1a7ff770f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:40:10 -06:00
Michael Roth e1a2a27327 scripts/make-release: ship u-boot source as a tarball
The u-boot sources we ship currently cause problems with unpacking on
a case-insensitive filesystem due to path conflicts. This has been
fixed in upstream u-boot via commit 610eec7f, but since it is not
yet included in an official release we implement this approach as a
temporary workaround.

Once we move to a u-boot containing commit 610eec7f we should revert
this patch.

Cc: qemu-stable@nongnu.org
Cc: Alexander Graf <agraf@suse.de>
Cc: Richard Henderson <richard.henderson@linaro.org>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20171107205201.10207-1-mdroth@linux.vnet.ibm.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit d0dead3b6d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:34:07 -06:00
Greg Kurz a77c5873fe spapr: reset DRCs after devices
A DRC with a pending unplug request releases its associated device at
machine reset time.

In the case of LMB, when all DRCs for a DIMM device have been reset,
the DIMM gets unplugged, causing guest memory to disappear. This may
be very confusing for anything still using this memory.

This is exactly what happens with vhost backends, and QEMU aborts
with:

qemu-system-ppc64: used ring relocated for ring 2
qemu-system-ppc64: qemu/hw/virtio/vhost.c:649: vhost_commit: Assertion
 `r >= 0' failed.

The issue is that each DRC registers a QEMU reset handler, and we
don't control the order in which these handlers are called (i.e., an
LMB DRC may unplug a DIMM before the virtio device using the memory
on this DIMM can stop its vhost backend).

To avoid such situations, let's reset DRCs after all devices
have been reset.
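
Conceptually (hypothetical helper name; the actual patch walks the DRC
objects from the machine reset hook):

    static void ppc_spapr_reset(void)
    {
        qemu_devices_reset();        /* devices first: vhost backends can
                                      * stop before memory goes away */
        spapr_drc_reset_all(spapr);  /* then process pending unplugs */
    }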

Reported-by: Mallesh N. Koti <mallesh@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 8251248394)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:00:58 -06:00
Daniel Henrique Barboza 0a5a2b938a hw/ppc: clear pending_events on machine reset
The sPAPR machine isn't clearing up the pending events QTAILQ on
machine reboot. This allows unprocessed hotplug/epow events to persist
in the queue after reset and, when the IRQs are reasserted in
check_exception later on, these stale events end up being processed by
the OS.

This patch implements a new function called 'spapr_clear_pending_events'
that clears up the pending_events QTAILQ. This helper is then called
inside ppc_spapr_reset to clear the events queue, preventing
old/stale events from persisting after a reset.
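
A sketch of the helper's shape (field names illustrative):

    static void spapr_clear_pending_events(sPAPRMachineState *spapr)
    {
        sPAPREventLogEntry *entry, *next_entry;

        QTAILQ_FOREACH_SAFE(entry, &spapr->pending_events, next,
                            next_entry) {
            QTAILQ_REMOVE(&spapr->pending_events, entry, next);
            g_free(entry->data);   /* illustrative payload field */
            g_free(entry);
        }
    }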

Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 5625817423)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 11:00:55 -06:00
Maxime Coquelin 0bc76c8d08 vhost: restore avail index from vring used index on disconnection
vhost_virtqueue_stop() gets the avail index value from the backend,
except when the backend is not responding.

This happens when the backend crashes; in that case, the internal
state of the virtio queue is inconsistent, causing packets to corrupt
the vring state.

With a Linux guest, it results in the following error messages on
backend reconnection:

[   22.444905] virtio_net virtio0: output.0:id 0 is not a head!
[   22.446746] net enp0s3: Unexpected TXQ (0) queue failure: -5
[   22.476360] net enp0s3: Unexpected TXQ (0) queue failure: -5

Fixes: 283e2c2adc ("net: virtio-net discards TX data after link down")
Cc: qemu-stable@nongnu.org
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2ae39a113a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:55:22 -06:00
Maxime Coquelin 059422ddbc virtio: Add queue interface to restore avail index from vring used index
In case of a backend crash, it is not possible to restore the internal
avail index from the backend value, as the vhost_get_vring_base
callback fails.

This patch provides a new interface to restore the internal avail index
from the vring used index, as done by some vhost-user backends on
reconnection.
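
A sketch of how a caller might use this interface (simplified from the
vhost stop path described in the previous entry):

    if (vhost_get_vring_base(dev, &state) < 0) {
        /* backend gone: fall back to the used index in guest memory */
        virtio_queue_restore_last_avail_idx(vdev, idx);
    } else {
        virtio_queue_set_last_avail_idx(vdev, idx, state.num);
    }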

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2d4ba6cc74)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:55:16 -06:00
Max Reitz d6c99e8ff5 util/stats64: Fix min/max comparisons
stat64_min_slow() and stat64_max_slow() compare the wrong way.  This
makes iotest 136 fail with clang and -m32.
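
The comparisons in question, sketched with a plain (non-atomic) field
for clarity:

    if (value < s->min) { s->min = value; }   /* stat64_min: keep smaller */
    if (value > s->max) { s->max = value; }   /* stat64_max: keep larger  */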

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20171114232223.25207-1-mreitz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 26a5db322b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:53:22 -06:00
Eric Blake 56a10ff664 nbd/client: Use error_prepend() correctly
When using error_prepend(), it is necessary to end the format string
with a space; otherwise, messages come out incorrectly, such as when
connecting to a socket that hangs up immediately:

can't open device nbd://localhost:10809/: Failed to read dataUnexpected end-of-file before all bytes were read

Originally botched in commit e44ed99d, then several more instances
added in the meantime.
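
For example (the prepended text is pasted directly in front of the
existing message, so it must supply its own separator):

    error_prepend(errp, "Failed to read data: ");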

Pre-existing and not fixed here: we are inconsistent on capitalization;
some of our messages start with lower case, and others start with upper,
although the use of error_prepend() is much nicer to read when all
fragments consistently start with lower.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20171113152424.25381-1-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit cb6b1a3fc3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:50:49 -06:00
Jens Freimann 69f562ad9e net: fix check for number of parameters to -netdev socket
Since commit 0f8c289ad "net: fix -netdev socket,fd= for UDP sockets"
we allow more than one parameter for -netdev socket. But now
we run into an assert when no parameter at all is specified:

> qemu-system-x86_64 -netdev socket
socket.c:729: net_init_socket: Assertion `sock->has_udp' failed.

Fix this by reverting the change of the if condition done in 0f8c289ad.
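
The restored condition, roughly (the sum of the has_* booleans must be
exactly one):

    if (sock->has_fd + sock->has_listen + sock->has_connect +
        sock->has_mcast + sock->has_udp != 1) {
        error_setg(errp, "exactly one of fd=, listen=, connect=, "
                   "mcast= or udp= is required");
        return -1;
    }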

Cc: Jason Wang <jasowang@redhat.com>
Cc: qemu-stable@nongnu.org
Fixes: 0f8c289ad5
Reported-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit ff86d57625)
 Conflicts:
	net/socket.c
* drop context dep on 0522a959
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:49:07 -06:00
Jens Freimann 957bd48acf net/socket: fix coverity issue
This fixes coverity issue CID1005339.

Make sure that saddr is not used uninitialized if the
mcast parameter is NULL.

Cc: qemu-stable@nongnu.org
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit bb160b571f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:43:29 -06:00
Eric Auger 3a82a03a2e hw/intc/arm_gicv3_its: Don't abort on table save failure
The ITS is not properly reset at the moment: caches are not emptied.

After a reset, if we attempt to save the state before
the bound devices have registered their MSIs and after the
1st level table has been allocated by the ITS driver
(device BASER is valid), the first level entries are still
invalid. If the device cache is not empty (devices registered
before the reset), vgic_its_save_device_tables fails with -EINVAL.
This causes a QEMU abort().

Cc: qemu-stable@nongnu.org
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Reported-by: wanghaibin <wanghaibin.wang@huawei.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 8a7348b5d6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:42:12 -06:00
Peter Maydell b637b865ed translate.c: Fix usermode big-endian AArch32 LDREXD and STREXD
For AArch32 LDREXD and STREXD, architecturally the 32-bit word at the
lowest address is always Rt and the one at addr+4 is Rt2, even if the
CPU is big-endian. Our implementation does these with a single
64-bit store, so if we're big-endian then we need to put the two
32-bit halves together in the opposite order to little-endian,
so that they end up in the right places. We were trying to do
this with the gen_aa32_frob64() function, but that is not correct
for the usermode emulator, because there we have a distinction
between "load a 64 bit value" (which does a BE 64-bit access
and doesn't need swapping) and "load two 32 bit values as one
64 bit access" (where we still need to do the swapping, like
system mode BE32).
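
A sketch of the ordering decision (tcg_gen_concat_i32_i64 takes the low
half first; simplified from the store-exclusive path):

    if (s->be_data == MO_BE) {
        /* a BE 64-bit store puts the high half at the lowest address,
         * so Rt must go in the high half */
        tcg_gen_concat_i32_i64(val64, rt2, rt);
    } else {
        tcg_gen_concat_i32_i64(val64, rt, rt2);
    }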

Fixes: https://bugs.launchpad.net/qemu/+bug/1725267
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1509622400-13351-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit 3448d47b31)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:41:14 -06:00
Greg Kurz 3342fd0286 ppc: fix setting of compat mode
While trying to make KVM PR usable again, commit 5dfaa532ae introduced a
regression: the current compat_pvr value is passed to KVM instead of the
new one. This means that we always pass 0 instead of the max-cpu-compat
PVR during the initial machine reset. And at CAS time, we either pass
the PVR from the command line or don't even call kvmppc_set_compat()
at all, i.e., the PCR will not be set as expected.

For example if we start a big endian fedora26 guest in power7 compat
mode on a POWER8 host, we get this in the guest:

$ cat /proc/cpuinfo
processor       : 0
cpu             : POWER7 (architected), altivec supported
clock           : 4024.000000MHz
revision        : 2.0 (pvr 004d 0200)

timebase        : 512000000
platform        : pSeries
model           : IBM pSeries (emulated by qemu)
machine         : CHRP IBM pSeries (emulated by qemu)
MMU             : Hash

but the guest can still execute POWER8 instructions, and the following
program succeeds:

int main()
{
        asm("vncipher 0,0,0"); // ISA 2.07 instruction
}

Let's pass the new compat_pvr to kvmppc_set_compat(): the program then
fails with SIGILL as expected.

Reported-by: Nageswara R Sastry <rnsastry@linux.vnet.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit e4f0c6bb1a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:38:59 -06:00
Daniel P. Berrange e0809fcc4b io: monitor encoutput buffer size from websocket GSource
The websocket GSource monitors the size of the rawoutput
buffer to determine if the channel can accept more writes.
The rawoutput buffer, however, is merely a temporary staging
buffer before data is copied into the encoutput buffer. Thus
its size will always be zero when the GSource runs.

This flaw causes the encoutput buffer to grow without bound
if the other end of the underlying data channel doesn't
read data being sent. This can be seen with VNC if a client
is on a slow WAN link and the guest OS is sending many screen
updates. A malicious VNC client can act like it is on a slow
link by playing a video in the guest and then reading data
very slowly, causing QEMU host memory to expand arbitrarily.
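
The gist of the fix, sketched (buffer field names from
QIOChannelWebsock; the cap constant here is an assumption):

    /* poll writability against the post-encode buffer; rawoutput is
     * drained into encoutput before the GSource ever runs */
    if (wioc->encoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
        cond |= G_IO_OUT;
    }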

This issue is assigned CVE-2017-15268, publicly reported in

  https://bugs.launchpad.net/qemu/+bug/1718964

(cherry picked from commit a7b20a8efa)

Reviewed-by: Eric Blake <eblake@redhat.com>

[Dan: Added extra checks to deal with code refactored in master but
 not stable 2.10]

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:38:20 -06:00
Paolo Bonzini e31942b486 nios2: define tcg_env
This should be done by all targets and, since commit 53f6672bcf
("gen-icount: use tcg_ctx.tcg_env instead of cpu_env", 2017-06-30),
not doing so causes the NIOS2 target to hang.

This is because the test for "should I exit to the main loop"
was being done with the correct offset to the icount decrementer,
but using TCG temporary 0 (the frame pointer) rather than the
env pointer.

Cc: qemu-stable@nongnu.org
Cc: Marek Vasut <marex@denx.de>
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 17bd9597be)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-06 09:32:04 -06:00
Max Reitz 5aa698ab5f iotests: Add cluster_size=64k to 125
Apparently it would be a good idea to test that, too.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171009215533.12530-4-mreitz@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 4c112a397c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-05 19:41:16 -06:00
Max Reitz 39475b8805 qcow2: Always execute preallocate() in a coroutine
Some qcow2 functions (at least perform_cow()) expect s->lock to be
taken.  Therefore, if we want to make use of them, we should execute
preallocate() (as "preallocate_co") in a coroutine so that we can use
the qemu_co_mutex_* functions.
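
The usual qemu pattern for this, sketched (names simplified;
preallocate_co is the coroutine entry point the patch introduces):

    if (qemu_in_coroutine()) {
        preallocate_co(&data);
    } else {
        Coroutine *co = qemu_coroutine_create(preallocate_co, &data);
        qemu_coroutine_enter(co);
        while (!data.done) {
            aio_poll(bdrv_get_aio_context(bs), true);
        }
    }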

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171009215533.12530-3-mreitz@redhat.com
Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 572b07bea1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-05 19:40:38 -06:00
Max Reitz a25aca75f8 qcow2: Fix unaligned preallocated truncation
A qcow2 image file is not required to have a length that is a
multiple of the cluster size.  However, qcow2_refcount_area() expects an
aligned value for its @start_offset parameter, so we need to round
@old_file_size up to the next cluster boundary.
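
The one-liner this implies, sketched:

    /* qcow2_refcount_area() wants a cluster-aligned start offset */
    old_file_size = ROUND_UP(old_file_size, s->cluster_size);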

Reported-by: Ping Li <pingl@redhat.com>
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1414049
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20171009215533.12530-2-mreitz@redhat.com
Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit e400ad1e1f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-05 19:40:33 -06:00
Michael Olbrich 64f62e4e90 hw/sd: fix out-of-bounds check for multi block reads
The current code checks if the next block exceeds the size of the card.
This generates an error while reading the last block of the card.
Do the out-of-bounds check when starting to read a new block to fix this.

This issue became visible with increased error checking in Linux 4.13.
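
A sketch of the corrected placement (simplified; names as in hw/sd):

    /* validate the block we are about to read, not the one after it,
     * so the card's final block remains readable */
    if (sd->data_start + io_len > sd->size) {
        sd->card_status |= ADDRESS_ERROR;
        break;
    }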

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael Olbrich <m.olbrich@pengutronix.de>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 20170916091611.10241-1-m.olbrich@pengutronix.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 8573378e62)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-05 19:39:35 -06:00
Maxime Coquelin d765c5e577 memory: fix off-by-one error in memory_region_notify_one()
This patch fixes an off-by-one error that could lead to the
notifyee to receive notifications for ranges it is not
registered to.

The bug has been spotted by code review.
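
The corrected overlap test, sketched with inclusive bounds:

    /* entry covers [iova, iova + addr_mask]; the notifier covers
     * [start, end]; ranges are disjoint iff one starts after the
     * other ends */
    hwaddr entry_end = entry->iova + entry->addr_mask;
    if (notifier->start > entry_end || notifier->end < entry->iova) {
        return;   /* no overlap: do not notify */
    }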

Fixes: bd2bfa4c52 ("memory: introduce memory_region_notify_one()")
Cc: qemu-stable@nongnu.org
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Message-Id: <20171010094247.10173-4-maxime.coquelin@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b021d1c044)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:42:41 -06:00
Peter Xu ae13e2cfa8 exec: simplify address_space_get_iotlb_entry
This patch lets address_space_get_iotlb_entry() use the newly
introduced page_mask parameter in flatview_do_translate(). Then we can
be sure the IOTLB is aligned to the page mask, and we now nicely
support huge pages, which a764040 failed to do.

Fixes: a764040 ("exec: abstract address_space_do_translate()")
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-Id: <20171010094247.10173-3-maxime.coquelin@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 076a93d797)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:42:34 -06:00
Peter Xu c9dbe3e0fc exec: add page_mask for flatview_do_translate
The function is originally used for flatview_space_translate(), and
what we care about most is the (xlat, plen) range. However, for iotlb
requests we don't really care about "plen" but about the size of the
page that "xlat" is located on, and plen cannot really carry this
information.

A simple example shows why "plen" is not good for IOTLB translations:

E.g., for huge pages, it is possible that guest mapped 1G huge page on
device side that used this GPA range:

  0x100000000 - 0x13fffffff

Then let's say we want to translate one IOVA that finally mapped to GPA
0x13ffffe00 (which is located on this 1G huge page). Then here we'll
get:

  (xlat, plen) = (0x13fffe00, 0x200)

So the IOTLB would be only covering a very small range since from
"plen" (which is 0x200 bytes) we cannot tell the size of the page.

Actually we do know that this is a huge page - we just throw the
information away in flatview_do_translate().

This patch introduces an optional "page_mask" parameter to capture
that page mask info. Also, "plen" is made an optional parameter as
well, with some comments for the whole function.

No functional change yet.

Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Message-Id: <20171010094247.10173-2-maxime.coquelin@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit d5e5fafd11)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:42:28 -06:00
Alexey Kardashevskiy 496f97293e memory: Share special empty FlatView
This shares a cached empty FlatView among address spaces. The empty
FV is used every time a root MR renders into a FV without memory
sections, which happens when the MR or its children are not enabled or
zero-sized. The empty_view is not NULL, to keep the rest of the memory
API intact; it also has a dispatch tree for the same reason.

On a POWER8 guest with 255 CPUs, 255 virtio-net and 40 PCI bridges,
this halves the number of FlatViews in use (557 -> 260) and dispatch
tables (~800000 -> ~370000).  In an unrelated experiment with 112
non-virtio
devices on x86 ("-M pc"), only 4 FlatViews are alive, and about ~2000
are created at startup.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-16-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 092aa2fc65)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:42:02 -06:00
Paolo Bonzini 639701e4f2 memory: seek FlatView sharing candidates among children subregions
A container can be used instead of an alias to allow switching between
multiple subregions.  In this case we cannot directly share the
subregions (since they only belong to a single parent), but if the
subregions are aliases we can in turn walk those.

This is not enough to remove all source of quadratic FlatView creation,
but it enables sharing of the PCI bus master FlatViews (and their
AddressSpaceDispatch structures) across all PCI devices.  For 112
virtio-net-pci devices, boot time is reduced from 25 to 10 seconds and
memory consumption from 1.4 to 1 G.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e673ba9af9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:41:57 -06:00
Paolo Bonzini 5dbd1f7884 memory: trace FlatView creation and destruction
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 02d9651d6a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:41:52 -06:00
Alexey Kardashevskiy 5b5e49ab5f memory: Create FlatView directly
This avoids usual memory_region_transaction_commit() which rebuilds
all FVs.

On a POWER8 guest with 255 CPUs, 255 virtio-net and 40 PCI bridges,
this brings down the boot time from 25s to 20s and reduces the number
of temporary FVs allocated during machine construction
(~800000 -> ~640000) and the number of temporary dispatch trees
(~370000 -> ~300000); the total memory footprint goes down (18G -> 17G).

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-18-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 202fc01b05)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:32:47 -06:00
Alexey Kardashevskiy a7bb94e784 memory: Get rid of address_space_init_shareable
Since FlatViews are now shared and ASes are not, this gets rid of
address_space_init_shareable().

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-17-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b516572f31)
 Conflicts:
	target/arm/cpu.c
* drop context deps on 1d2091bc and 1e577cc7
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:32:11 -06:00
Alexey Kardashevskiy 7dd7f7ef44 memory: Do not allocate FlatView in address_space_init
This creates a new AS object without any FlatView as
memory_region_transaction_commit() may want to reuse the empty FV.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-14-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 67ace39b25)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:50 -06:00
Alexey Kardashevskiy e8c7ea3e75 memory: Share FlatView's and dispatch trees between address spaces
This allows sharing flat views between address spaces (AS) when
the same root memory region is used to create a new address space.
This is done by walking through all ASes and caching one FlatView per
physical root MR (i.e. not aliased).

This removes the search for duplicates from address_space_init_shareable(),
as FlatViews are shared elsewhere and keeping as::ref_count correct seems
an unnecessary and useless complication.

This should cause no change in memory use or boot time yet.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-13-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 967dc9b119)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:44 -06:00
Alexey Kardashevskiy c943efe8b5 memory: Move address_space_update_ioeventfds
So it is called (twice) from the same function. This is to make the next
patches a bit simpler.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-12-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 0221848764)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:32 -06:00
Alexey Kardashevskiy c14ce078b2 memory: Alloc dispatch tree where topology is generated
This is to make the next patches simpler.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-11-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9bf561e36c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:19 -06:00
Alexey Kardashevskiy 260d3646b0 memory: Store physical root MR in FlatView
Address spaces get to keep a root MR (alias or not) but FlatView stores
the actual MR as this is going to be used later on to decide whether to
share a particular FlatView or not.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-10-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 89c177bbdd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:14 -06:00
Alexey Kardashevskiy 08101db63b memory: Rename mem_begin/mem_commit/mem_add helpers
This renames some helpers to better reflect what they do.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-9-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8629d3fcb7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:04:00 -06:00
Alexey Kardashevskiy eff5ed4ae9 memory: Cleanup after switching to FlatView
We store AddressSpaceDispatch* in FlatView anyway so there is no need
to carry it from mem_add() to register_subpage/register_multipage.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-8-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9950322a59)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:03:54 -06:00
Alexey Kardashevskiy f7774e329b memory: Switch memory from using AddressSpace to FlatView
FlatViews will be shared between AddressSpaces, and subpage_t
and MemoryRegionSection cannot store the AS anymore, hence this change.

In particular, for:

 typedef struct subpage_t {
     MemoryRegion iomem;
-    AddressSpace *as;
+    FlatView *fv;
     hwaddr base;
     uint16_t sub_section[];
 } subpage_t;

  struct MemoryRegionSection {
     MemoryRegion *mr;
-    AddressSpace *address_space;
+    FlatView *fv;
     hwaddr offset_within_region;
     Int128 size;
     hwaddr offset_within_address_space;
     bool readonly;
 };

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-7-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 166206845f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:03:46 -06:00
Paolo Bonzini 3568e11940 memory: avoid "resurrection" of dead FlatViews
It's possible for address_space_get_flatview() as it currently stands
to cause a use-after-free of the returned FlatView, if the reference
count is incremented after the FlatView has been replaced by a writer:

   thread 1             thread 2             RCU thread
  -------------------------------------------------------------
   rcu_read_lock
   read as->current_map
                        set as->current_map
                        flatview_unref
                           '--> call_rcu
   flatview_ref
     [ref=1]
   rcu_read_unlock
                                             flatview_destroy
   <badness>

Since FlatViews are not updated very often, we can just detect the
situation using a new atomic op atomic_fetch_inc_nonzero, similar to
Linux's atomic_inc_not_zero, which performs the refcount increment only if
it hasn't already hit zero.  This is similar to Linux commit de09a9771a53
("CRED: Fix get_task_cred() and task_state() to not resurrect dead
credentials", 2010-07-29).

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 447b0d0b9e)
 Conflicts:
	docs/devel/atomics.txt
* drop documentation ref to atomic_fetch_xor
* prereq for 166206845f
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 22:03:33 -06:00
Alexey Kardashevskiy d0136db812 memory: Remove AddressSpace pointer from AddressSpaceDispatch
The AS in ASD is only used to pass the AS from mem_begin() to
register_subpage() to store it in MemoryRegionSection; we can do this
directly now.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-6-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c775252378)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:55:04 -06:00
Alexey Kardashevskiy 4d2f8abb22 memory: Move AddressSpaceDispatch from AddressSpace to FlatView
As we are going to share FlatViews between AddressSpaces, and
AddressSpaceDispatch is a structure to perform quick lookups in a
FlatView, this moves ASD to FlatView.

After the previously open-coded ASD rendering, we can also remove
as->next_dispatch, as the new FlatView pointer is stored on a stack
and set to an AS atomically.

flatview_destroy() is now executed under RCU instead of
address_space_dispatch_free().

This makes mem_begin/mem_commit work with ASD, and mem_add with FV,
as later on mem_add will be taking FV as an argument anyway.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-5-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 66a6df1dc6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:54:59 -06:00
Alexey Kardashevskiy de7e6815b8 memory: Move FlatView allocation to a helper
This moves FlatView allocation and initialization to a helper.
While we are here, replace g_new with g_new0 so we don't have to worry
if we add new fields in the future.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-4-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit cc94cd6d36)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:54:44 -06:00
Alexey Kardashevskiy 1b04a15809 memory: Open code FlatView rendering
We are going to share FlatViews between AddressSpaces, and per-AS
memory listeners won't suit the purpose anymore, so open code
the dispatch tree rendering.

Since there is a good chance that dispatch_listener was the only
listener, this avoids address_space_update_topology_pass() if there are
no registered listeners; this should improve starting time.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-3-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9a62e24f45)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:54:37 -06:00
Alexey Kardashevskiy 6424975ce9 exec: Explicitly export target AS from address_space_translate_internal
This adds an AS** parameter to address_space_do_translate()
to make it easier for the next patch to share FlatViews.

This should cause no behavioural change.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-Id: <20170921085110.25598-2-aik@ozlabs.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e76bb18f7e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:54:29 -06:00
Eric Blake 4af42e3cf1 block: Perform copy-on-read in loop
Improve our braindead copy-on-read implementation.  Pre-patch,
we have multiple issues:
- we create a bounce buffer and perform a write for the entire
request, even if the active image already has 99% of the
clusters occupied, and really only needs to copy-on-read the
remaining 1% of the clusters
- our bounce buffer was as large as the read request, and can
needlessly exhaust our memory by using double the memory of
the request size (the original request plus our bounce buffer),
rather than a capped maximum overhead beyond the original
- if a driver has a max_transfer limit, we are bypassing the
normal code in bdrv_aligned_preadv() that fragments to that
limit, and instead attempt to read the entire buffer from the
driver in one go, which some drivers may assert on
- a client can request a large request of nearly 2G such that
rounding the request out to cluster boundaries results in a
byte count larger than 2G.  While this cannot exceed 32 bits,
it DOES have some follow-on problems:
-- the call to bdrv_driver_pread() can assert for exceeding
BDRV_REQUEST_MAX_BYTES, if the driver is old and lacks
.bdrv_co_preadv
-- if the buffer is all zeroes, the subsequent call to
bdrv_co_do_pwrite_zeroes is a no-op due to a negative size,
which means we did not actually copy on read

Fix all of these issues by breaking up the action into a loop,
where each iteration is capped to sane limits.  Also, querying
the allocation status allows us to optimize: when data is
already present in the active layer, we don't need to bounce.
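
The loop structure, sketched with hypothetical helper names (the cap
value is illustrative):

    #define MAX_BOUNCE_BUFFER (32 * 1024 * 1024)  /* per-iteration cap */

    while (bytes > 0) {
        int64_t chunk = MIN(bytes, MAX_BOUNCE_BUFFER);
        if (!is_allocated_in_top(bs, offset, &chunk)) {
            /* only clusters missing from the active layer get bounced */
            read_to_bounce(bs, offset, chunk, bounce);
            write_back(bs, offset, chunk, bounce);
        }
        offset += chunk;
        bytes -= chunk;
    }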

Note that the code has a telling comment that copy-on-read
should probably be a filter driver rather than a bolt-on hack
in io.c; but that remains a task for another day.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cb2e28780c)
 Conflicts:
	block/io.c
* remove context dep on d855ebcd3
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 21:36:39 -06:00
Jim Somerville 26914ce48d kvmclock: use the updated system_time_msr
Fixes e2b6c17 (kvmclock: update system_time_msr address forcibly),
which makes a call to get the latest value of the address
stored in system_time_msr, but then uses the old address anyway.

Signed-off-by: Jim Somerville <Jim.Somerville@windriver.com>
Message-Id: <59b67db0bd15a46ab47c3aa657c81a4c11f168ea.1506702472.git.Jim.Somerville@windriver.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 346b1215b1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 20:42:20 -06:00
Vladimir Sementsov-Ogievskiy 49958d37e7 block/mirror: check backing in bdrv_mirror_top_flush
Backing may be zero after a failed bdrv_append in mirror_start_job,
which leads to a SIGSEGV.
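
The guard, sketched (forward the flush only when the backing child
still exists):

    static int coroutine_fn bdrv_mirror_top_flush(BlockDriverState *bs)
    {
        if (bs->backing == NULL) {
            return 0;   /* nothing to flush, and nothing to dereference */
        }
        return bdrv_co_flush(bs->backing->bs);
    }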

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20170929152255.5431-1-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit ce960aa906)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 20:40:23 -06:00
Thomas Huth b234266086 hw/usb/bus: Remove bad object_unparent() from usb_try_create_simple()
Valgrind detects an invalid read operation when hot-plugging a
USB device fails:

$ valgrind x86_64-softmmu/qemu-system-x86_64 -device usb-ehci -nographic -S
==30598== Memcheck, a memory error detector
==30598== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==30598== Using Valgrind-3.12.0 and LibVEX; rerun with -h for copyright info
==30598== Command: x86_64-softmmu/qemu-system-x86_64 -device usb-ehci -nographic -S
==30598==
QEMU 2.10.50 monitor - type 'help' for more information
(qemu) device_add usb-tablet
(qemu) device_add usb-tablet
(qemu) device_add usb-tablet
(qemu) device_add usb-tablet
(qemu) device_add usb-tablet
(qemu) device_add usb-tablet
==30598== Invalid read of size 8
==30598==    at 0x60EF50: object_unparent (object.c:445)
==30598==    by 0x580F0D: usb_try_create_simple (bus.c:346)
==30598==    by 0x581BEB: usb_claim_port (bus.c:451)
==30598==    by 0x582310: usb_qdev_realize (bus.c:257)
==30598==    by 0x4CB399: device_set_realized (qdev.c:914)
==30598==    by 0x60E26D: property_set_bool (object.c:1886)
==30598==    by 0x61235E: object_property_set_qobject (qom-qobject.c:27)
==30598==    by 0x61000F: object_property_set_bool (object.c:1162)
==30598==    by 0x4567C3: qdev_device_add (qdev-monitor.c:630)
==30598==    by 0x456D52: qmp_device_add (qdev-monitor.c:807)
==30598==    by 0x470A99: hmp_device_add (hmp.c:1933)
==30598==    by 0x3679C3: handle_hmp_command (monitor.c:3123)

The object_unparent() here is not necessary anymore since commit
69382d8b3e ("qdev: Fix object reference leak in case device.realize()
fails"), so let's remove it now.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-id: 1506526106-30971-1-git-send-email-thuth@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit f3b2bea3c7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-12-04 20:37:19 -06:00
Daniel Henrique Barboza 62695f60c3 hw/ppc: CAS reset on early device hotplug
This patch is a follow-up on the discussions around the patch
"hw/ppc: disable hotplug before CAS is completed", which can be
found at [1].

At this moment, we do not support CPU/memory hotplug in early
boot stages, before CAS. When a hotplug occurs, the event is logged
in an internal RTAS event log queue and an IRQ pulse is fired. In
regular conditions, the guest handles the interrupt by executing
check_exception, fetching the generated hotplug event and enabling
the device for use.

In early boot, this IRQ isn't caught (SLOF does not handle hotplug
events), leaving the event in the rtas event log queue. If the guest
executes check_exception due to another hotplug event, the re-assertion
of the IRQ ends up de-queuing the first hotplug event as well. In short,
a device hotplugged before CAS is considered coldplugged by SLOF.
This leads to device misbehavior and, in some cases, guest kernel
Ooops when trying to unplug the device.

A proper fix would be to treat every device hotplugged before CAS
as a coldplugged device. This is not trivial to do with the current
code base though - the FDT is written into guest memory at
ppc_spapr_reset and can't be retrieved without adding extra state
(fdt_size for example) that would need to be managed and migrated. Adding
the hotplugged DT in the middle of CAS negotiation via the updated DT
tree works with CPU devs, but panics the guest kernel at boot. Additional
analysis would be necessary for LMBs and PCI devices. There are
questions to be asked at the QEMU/SLOF/kernel level about how we can make
this change in a sustainable way.

With Linux guests, a fix would be the kernel executing check_exception
at boot time, de-queueing the events that happened in early boot and
processing them. However, even if/when the newer kernels start
fetching these events at boot time, we need to take care of older
kernels that won't be doing that.

This patch works around the situation by issuing a CAS reset if a hotplugged
device is detected during CAS:

- the DRC conditions that warrant a CAS reset are the same as those
that trigger a DRC migration: the DRC must have a device attached and
the DRC state must not be equal to its ready_state. With that in mind,
this patch makes use of 'spapr_drc_needed' to determine if a CAS reset
is needed.

- In the middle of CAS negotiations, the function
'spapr_hotplugged_dev_before_cas' goes through all the DRCs to see
if any of them requires a reset, using spapr_drc_needed. If so,
'spapr_h_cas_compose_response' returns '1', which will set
spapr->cas_reboot to true, causing the machine to reboot.

No changes are made for coldplug devices.
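
In sketch form, the CAS-time check could look like this (a minimal
sketch based on the description above; the exact iteration over the DRC
container is an assumption, not the verbatim patch):

    /* Sketch: walk the DRCs under /dr-connector and report whether any
     * of them still needs to reach its ready state, i.e. was hotplugged
     * before CAS.  Iteration details are assumptions. */
    static bool spapr_hotplugged_dev_before_cas(void)
    {
        Object *drc_container = container_get(object_get_root(),
                                              "/dr-connector");
        ObjectProperty *prop;
        ObjectPropertyIterator iter;

        object_property_iter_init(&iter, drc_container);
        while ((prop = object_property_iter_next(&iter))) {
            Object *drc;

            if (!strstart(prop->type, "link<", NULL)) {
                continue;   /* only follow the DRC link properties */
            }
            drc = object_property_get_link(drc_container, prop->name, NULL);
            if (spapr_drc_needed(drc)) {
                return true;    /* caller sets spapr->cas_reboot */
            }
        }
        return false;
    }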

[1] http://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg02855.html

Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 10f12e6450)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-10-03 17:40:40 -05:00
Michael Roth 7851197b81 Update version for 2.10.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-10-02 12:42:58 -05:00
Peter Lieven 547435f550 migration: disable auto-converge during bulk block migration
auto-converge and block migration currently do not play well together.
During block migration the auto-converge logic detects that ram
migration makes no progress and thus throttles down the vm until
it nearly stalls completely. Avoid this by disabling the throttling
logic during the bulk phase of the block migration.
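
The shape of the guard this implies is roughly the following (a sketch;
blk_mig_bulk_active() stands for "block migration is still in its bulk
phase" and the exact names are assumptions):

    /* Sketch: do not throttle the guest down while the bulk phase of
     * block migration is still copying the whole disk. */
    if (migrate_auto_converge() && !blk_mig_bulk_active()) {
        mig_throttle_guest_down();
    }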

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Message-Id: <1506421996-12513-1-git-send-email-pl@kamp.de>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit 9ac78b6171)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:55:44 -05:00
Christian Borntraeger 17cd46fbdf s390x/cpumodel: remove ais from z14 default model-> also for 2.10.1
We disabled ais for 2.10, so let's also remove it from the z14
default model.

Fixes: 3f2d07b3b0 ("s390x/ais: for 2.10 stable: disable ais facility")
CC: qemu-stable@nongnu.org
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170927072030.35737-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 9dacc90846)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:55:08 -05:00
Anthony PERARD 6a903482b1 Revert "ACPI: don't call acpi_pcihp_device_plug_cb on xen"
This reverts commit 153eba4726.

The reverted patch prevented PCI passthrough hotplug on Xen. Even if the Xen tool
stack prepares its own ACPI tables, we still rely on QEMU for hotplug
ACPI notifications.

The original issue is fixed by the two previous patches:
  hw/acpi: Limit hotplug to root bus on legacy mode
  hw/acpi: Move acpi_set_pci_info to pcihp

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 2bed1ba77f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:53:20 -05:00
Anthony PERARD 8edf4c6adc hw/acpi: Move acpi_set_pci_info to pcihp
The HW part of ACPI PCI hotplug in QEMU depends on ACPI_PCIHP_PROP_BSEL
being set on a PCI bus that supports ACPI hotplug. It should work
regardless of the source of the ACPI tables (QEMU generator/legacy
SeaBIOS/Xen). So move the ACPI_PCIHP_PROP_BSEL initialization out of
QEMU's ACPI table generator and into the HW ACPI implementation part.

To do PCI passthrough with Xen, the property ACPI_PCIHP_PROP_BSEL needs
to be set, but this was done only when ACPI tables are built, which is
not needed for a Xen guest. The need for the property starts with commit
"pc: pcihp: avoid adding ACPI_PCIHP_PROP_BSEL twice"
(f0c9d64a68).

Add find_i440fx to the stubs so that the mips-softmmu target can still be built.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit ab938ae43f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:53:11 -05:00
Anthony PERARD 2c3a8cc581 hw/acpi: Limit hotplug to root bus on legacy mode
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit f5855994fe)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:53:04 -05:00
Stefan Hajnoczi 0691b70a2a nbd-client: avoid read_reply_co entry if send failed
The following segfault is encountered if the NBD server closes the UNIX
domain socket immediately after negotiation:

  Program terminated with signal SIGSEGV, Segmentation fault.
  #0  aio_co_schedule (ctx=0x0, co=0xd3c0ff2ef0) at util/async.c:441
  441       QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
  (gdb) bt
  #0  0x000000d3c01a50f8 in aio_co_schedule (ctx=0x0, co=0xd3c0ff2ef0) at util/async.c:441
  #1  0x000000d3c012fa90 in nbd_coroutine_end (bs=bs@entry=0xd3c0fec650, request=<optimized out>) at block/nbd-client.c:207
  #2  0x000000d3c012fb58 in nbd_client_co_preadv (bs=0xd3c0fec650, offset=0, bytes=<optimized out>, qiov=0x7ffc10a91b20, flags=0) at block/nbd-client.c:237
  #3  0x000000d3c0128e63 in bdrv_driver_preadv (bs=bs@entry=0xd3c0fec650, offset=offset@entry=0, bytes=bytes@entry=512, qiov=qiov@entry=0x7ffc10a91b20, flags=0) at block/io.c:836
  #4  0x000000d3c012c3e0 in bdrv_aligned_preadv (child=child@entry=0xd3c0ff51d0, req=req@entry=0x7f31885d6e90, offset=offset@entry=0, bytes=bytes@entry=512, align=align@entry=1, qiov=qiov@entry=0x7ffc10a91b20, f
+lags=0) at block/io.c:1086
  #5  0x000000d3c012c6b8 in bdrv_co_preadv (child=0xd3c0ff51d0, offset=offset@entry=0, bytes=bytes@entry=512, qiov=qiov@entry=0x7ffc10a91b20, flags=flags@entry=0) at block/io.c:1182
  #6  0x000000d3c011cc17 in blk_co_preadv (blk=0xd3c0ff4f80, offset=0, bytes=512, qiov=0x7ffc10a91b20, flags=0) at block/block-backend.c:1032
  #7  0x000000d3c011ccec in blk_read_entry (opaque=0x7ffc10a91b40) at block/block-backend.c:1079
  #8  0x000000d3c01bbb96 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:79
  #9  0x00007f3196cb8600 in __start_context () at /lib64/libc.so.6

The problem is that nbd_client_init() uses
nbd_client_attach_aio_context() -> aio_co_schedule(new_context,
client->read_reply_co).  Execution of read_reply_co is deferred to a BH
which doesn't run until later.

In the meantime blk_co_preadv() can be called and nbd_coroutine_end()
calls aio_wake() on read_reply_co.  At this point in time
read_reply_co's ctx isn't set because it has never been entered yet.

This patch simplifies the nbd_co_send_request() ->
nbd_co_receive_reply() -> nbd_coroutine_end() lifecycle to just
nbd_co_send_request() -> nbd_co_receive_reply().  The request is "ended"
if an error occurs at any point.  Callers no longer have to invoke
nbd_coroutine_end().

This cleanup also eliminates the segfault because we don't call
aio_co_schedule() to wake up s->read_reply_co if sending the request
failed.  It is only necessary to wake up s->read_reply_co if a reply was
received.

Note this only happens with UNIX domain sockets on Linux.  It doesn't
seem possible to reproduce this with TCP sockets.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170829122745.14309-2-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
(cherry picked from commit 3c2d5183f9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:52:37 -05:00
Alex Bennée 4d824886c8 accel/tcg/cputlb: avoid recursive BQL (fixes #1706296)
The mmio path (see exec.c:prepare_mmio_access) already protects itself
against recursive locking and it makes sense to do the same for
io_readx/writex. Otherwise any helper running in the BQL context will
assert when it attempts to write to device memory as in the case of
the bug report.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
CC: Richard Jones <rjones@redhat.com>
CC: Paolo Bonzini <bonzini@gnu.org>
CC: qemu-stable@nongnu.org
Message-Id: <20170921110625.9500-1-alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 8b81253332)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:52:09 -05:00
Vladimir Sementsov-Ogievskiy 780fb4ce48 block/qcow2-bitmap: fix use of uninitialized pointer
Without initialization to zero, the dirty_bitmap field may be non-zero
for a bitmap which should not be stored, and
qcow2_store_persistent_dirty_bitmaps will erroneously call
store_bitmap for it, which leads to a SIGSEGV in bdrv_dirty_bitmap_name.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20170922144353.4220-1-vsementsov@virtuozzo.com
Cc: qemu-stable@nongnu.org
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 5330f32b71)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:51:42 -05:00
Manos Pitsidianakis 7496699ba6 block/throttle-groups.c: allocate RestartData on the heap
RestartData is the opaque data of the throttle_group_restart_queue_entry
coroutine. Being stack-allocated, it isn't available anymore if
aio_co_enter schedules the coroutine with a bottom half and the
coroutine runs after throttle_group_restart_queue returns.

Cc: qemu-stable@nongnu.org
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 43a5dc02fd)
 Conflicts:
	block/throttle-groups.c
* reworked to avoid functional dep on 022cdc9, since that involves
  refactoring for a feature not present in 2.10
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:49:39 -05:00
Eric Blake 33a599667a osdep: Fix ROUND_UP(64-bit, 32-bit)
When using bit-wise operations that exploit the power-of-two
nature of the second argument of ROUND_UP(), we still need to
ensure that the mask is as wide as the first argument (done
by using a ternary to force proper arithmetic promotion).
Unpatched, ROUND_UP(2ULL*1024*1024*1024*1024, 512U) produces 0,
instead of the intended 2TiB, because negation of an unsigned
32-bit quantity followed by widening to 64-bits does not
sign-extend the mask.
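
A self-contained illustration of the bug and the ternary-promotion fix
(a sketch; the upstream macro may differ in details):

    #include <stdint.h>
    #include <stdio.h>

    /* Broken: -(d) is computed in d's 32-bit type, then zero-extended. */
    #define ROUND_UP_BROKEN(n, d) (((n) + (d) - 1) & -(d))
    /* Fixed: the ternary promotes the mask to the wider of the two types. */
    #define ROUND_UP_FIXED(n, d)  (((n) + (d) - 1) & -(0 ? (n) : (d)))

    int main(void)
    {
        uint64_t n = 2ULL * 1024 * 1024 * 1024 * 1024;  /* 2 TiB */
        printf("%llu\n", (unsigned long long)ROUND_UP_BROKEN(n, 512U)); /* 0 */
        printf("%llu\n", (unsigned long long)ROUND_UP_FIXED(n, 512U));  /* 2 TiB */
        return 0;
    }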

Broken since its introduction in commit 292c8e50 (v1.5.0).
Callers that passed the same width type to both macro parameters,
or that had other code to ensure the first parameter's maximum
runtime value did not exceed the second parameter's width, are
unaffected, but I did not audit to see which (if any) existing
clients of the macro could trigger incorrect behavior (I found
the bug while adding a new use of the macro).

While preparing the patch, checkpatch complained about poor
spacing, so I also fixed that here and in the nearby DIV_ROUND_UP.

CC: qemu-trivial@nongnu.org
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 2098b073f3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-28 16:37:36 -05:00
Christian Borntraeger a432f419ab s390x/ais: for 2.10 stable: disable ais facility
The migration interface for ais was introduced with kernel 4.13
but the capability itself had been active since 4.12. As migration
support is considered necessary, let's disable ais in the 2.10
stable version. A proper fix and re-enablement will be done
for qemu 2.11.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <20170921140834.14233-2-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 3f2d07b3b0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:23:37 -05:00
Jan Dakinevich a83858fdb5 9pfs: check the size of transport buffer before marshaling
v9fs_do_readdir_with_stat() should check for the maximum buffer size
before attempting to marshal the gathered data. Otherwise, the buffer
is treated as misconfigured and the transport gets broken.

The patch brings v9fs_do_readdir_with_stat() in conformity with
v9fs_do_readdir() behavior.
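
Conceptually, the readdir loop gains a bail-out before marshaling each
entry, along these lines (a sketch based on the description; variable
names are assumptions):

    /* Sketch: stop before overflowing the client-supplied buffer;
     * v9stat.size + 2 is the on-the-wire size of this entry. */
    if ((count + v9stat.size + 2) > max_count) {
        /* Ran out of room: rewind the directory and return what fits. */
        v9fs_co_seekdir(pdu, fidp, saved_dir_pos);
        v9fs_stat_free(&v9stat);
        v9fs_path_free(&path);
        return count;
    }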

Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit 8d37de41ca # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>

(cherry picked from commit 772a73692e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:23:16 -05:00
Jan Dakinevich d13a0bde83 9pfs: fix name_to_path assertion in v9fs_complete_rename()
The third parameter of v9fs_co_name_to_path() must not contain the `/'
character.

The issue is most likely related to 9p2000.u protocol only.

Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit f57f587857 # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>

(cherry picked from commit 4d8bc7334b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:23:11 -05:00
Jan Dakinevich e90997dc8f 9pfs: fix readdir() for 9p2000.u
If the client is using 9p2000.u, the following occurs:

$ cd ${virtfs_shared_dir}
$ mkdir -p a/b/c
$ ls a/b
ls: cannot access 'a/b/a': No such file or directory
ls: cannot access 'a/b/b': No such file or directory
a  b  c

instead of the expected:

$ ls a/b
c

This is a regression introduced by commit f57f5878578a;
local_name_to_path() now resolves ".." and "." in paths,
and v9fs_do_readdir_with_stat()->stat_to_v9stat() then
copies the basename of the resulting path to the response.
With the example above, this means that "." and ".." are
turned into "b" and "a" respectively...

stat_to_v9stat() currently assumes it is passed a full
canonicalized path and uses it to do two different things:
1) to pass it to v9fs_co_readlink() in case the file is a symbolic
   link
2) to set the name field of the V9fsStat structure to the basename
   part of the given path

It only has two users: v9fs_stat() and v9fs_do_readdir_with_stat().

v9fs_stat() really needs 1) and 2) to be performed since it starts
with the full canonicalized path stored in the fid. It is different
for v9fs_do_readdir_with_stat() though because the name we want to
put into the V9fsStat structure is the d_name field of the dirent
actually (ie, we want to keep the "." and ".." special names). So,
we only need 1) in this case.

This patch hence adds a basename argument to stat_to_v9stat(), to
be used to set the name field of the V9fsStat structure, and moves
the basename logic to v9fs_stat().
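
In sketch form (the exact prototype is an assumption), the resulting
split is:

    /* Sketch: the name put into V9fsStat now comes from the caller. */
    static int stat_to_v9stat(V9fsPDU *pdu, V9fsPath *path,
                              const char *basename,
                              const struct stat *stbuf, V9fsStat *v9stat);

    /* v9fs_stat():                  basename derived from the fid's
     *                               canonicalized path */
    /* v9fs_do_readdir_with_stat():  basename is the dirent's d_name, so
     *                               "." and ".." survive untouched */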

Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
(groug, renamed old name argument to path and updated changelog)
Signed-off-by: Greg Kurz <groug@kaod.org>

(cherry picked from commit 6069537f43)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:23:05 -05:00
Gerd Hoffmann 7e1288cd0c console: fix dpy_gfx_replace_surface assert
virtio-gpu can trigger the assert added by commit "6905b93447 console:
add same surface replace pre-condition" in multihead setups (where
surface can be NULL for secondary displays).  Allow the surface to be NULL.

Fixes: 6905b93447
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20170906142109.2685-1-kraxel@redhat.com
(cherry picked from commit 1540008629)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:13:37 -05:00
Igor Mammedov 83b23fe55c ide: ahci: unparent children buses before freeing their memory
Fixes read after freeing error reported
  https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04243.html
  Message-Id: <59a56959-ca12-ea75-33fa-ff07eba1b090@redhat.com>

The ich9-ahci device creates IDE buses and attaches them as QOM children
at realize time; however, it forgets to properly clean them up
at unrealize time and frees the memory containing these children,
with the following call-chain:

   qdev_device_add()
     object_property_set_bool('realized', true)
       device_set_realized()
          ...
          pci_qdev_realize() -> pci_ich9_ahci_realize() -> ahci_realize()
               ...
               s->dev = g_new0(AHCIDevice, ports);
               ...
                  AHCIDevice *ad = &s->dev[i];
                  ide_bus_new(&ad->port, sizeof(ad->port), qdev, i, 1);
                   ^^^ creates the bus in memory allocated by the above g_new0()
                       and adds it as a child property to the ahci device
          ...
          hotplug_handler_plug(); -> goto post_realize_fail;
          pci_qdev_unrealize() -> pci_ich9_uninit() -> ahci_uninit()
              ...
               g_free(s->dev);
                ^^^ frees the memory that holds the child buses

          return with error from device_set_realized()

As a result, when qdev_device_add() later tries to unparent ich9-ahci
after the failed device_set_realized(),
    object_unparent() -> object_property_del_child()
iterates over the existing QOM children, including the buses added by
ide_bus_new(), and tries to unparent them, which causes an access to
the freed memory where they were located.
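
Roughly, the fix is to unparent each port's bus in ahci_uninit() before
the backing array is freed (a sketch; the IDE state teardown is
abbreviated):

    /* Sketch: detach the QOM children from the ahci device before
     * g_free() releases the memory they live in. */
    void ahci_uninit(AHCIState *s)
    {
        int i;

        for (i = 0; i < s->ports; i++) {
            AHCIDevice *ad = &s->dev[i];

            /* ... quiesce/exit the IDE states on this port ... */
            object_unparent(OBJECT(&ad->port));
        }
        g_free(s->dev);
    }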

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1503938085-169486-1-git-send-email-imammedo@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 955f5c7ba1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:41 -05:00
Thomas Huth e96002e0d1 hw/ide/microdrive: Mark the dscm1xxxx device with user_creatable = false
QEMU currently aborts with an assertion message when the user is trying
to remove a dscm1xxxx again:

$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add dscm1xxxx,id=xyz
(qemu) device_del xyz
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)

Looks like this device has to be wired up in code and is not meant
to be hot-pluggable, so let's mark it with user_creatable = false.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1503543783-17192-1-git-send-email-thuth@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 4c93950659)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:35 -05:00
Thomas Huth cc7dd3ad3f hw/arm/aspeed_soc: Mark devices as user_creatable = false
QEMU currently aborts if the user is accidentally trying to
do something like this:

$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add ast2400
Unexpected error in error_set_from_qdev_prop_error()
 at hw/core/qdev-properties.c:1032:
Aborted (core dumped)

The ast2400 SoC devices are clearly not creatable by the user since
they are using the serial_hds and nd_table arrays directly in their
realize function, so mark them with user_creatable = false.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 469f3da42e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:31 -05:00
Thomas Huth de4ad17a8e hw/arm/digic: Mark device with user_creatable = false
QEMU currently shows some unexpected behavior when the user tries to
do a "device_add digic" on an unrelated ARM machine like integratorcp
in "-nographic" mode (the device_add command does not immediately
return to the monitor prompt), and trying to "device_del" the device
later results in a "qemu/qdev-monitor.c:872:qdev_unplug: assertion
failed: (hotplug_ctrl)" error condition.
Looking at the realize function of the device, it uses serial_hds
directly and this means that the device cannot be added a second
time, so let's simply mark it with "user_creatable = false" now.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit f58f25599b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:26 -05:00
Thomas Huth 8a9d7f3063 s390x/ipl: The s390-ipl device is not hot-pluggable
The s390-ipl device cannot be created by the user, since it is meant only
to be instantiated once internally to load the ROMs and kernel. If the user
tries to do a "device_add s390-ipl" via the monitor later, QEMU aborts with
a "ROM images must be loaded at startup" error message.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1502861458-30270-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 0d4fa4996f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:09 -05:00
Thomas Huth d3f05848fc watchdog/wdt_diag288: Mark diag288 watchdog as non-hotpluggable
QEMU currently aborts when the user tries to hot-unplug a diag288
device:

$ qemu-system-s390x -nographic -nodefaults -S -monitor stdio
QEMU 2.9.92 monitor - type 'help' for more information
(qemu) device_add diag288,id=x
(qemu) device_del x
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)

The device is not designed as hot-pluggable (it should only be used
via the "-watchdog" parameter), so let's simply remove the possibility
to hotplug it, to prevent users from running into this ugly situation.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1502892528-22618-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 84ebd3e8c7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:12:04 -05:00
Prasad J Pandit fca5f37fe9 multiboot: validate multiboot header address values
While loading a kernel via a multiboot-v1 image, (flags & 0x00010000)
indicates that the multiboot header contains valid addresses to load
the kernel image. These addresses are used to compute the kernel
size and the kernel text offset in the OS image. Validate these
address values to avoid an OOB access issue.

This is CVE-2017-14167.
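
The checks amount to rejecting headers whose address fields are
mutually inconsistent before any arithmetic is done on them (a sketch;
upstream's exact messages and field handling differ):

    /* Sketch: the multiboot header is untrusted guest input, so every
     * derived quantity must be proven non-negative before use. */
    if (mh_load_end_addr && mh_load_end_addr < mh_load_addr) {
        error_report("invalid load_end_addr address");
        exit(1);
    }
    if (mh_header_addr < mh_load_addr) {
        error_report("invalid header_addr address");
        exit(1);
    }
    /* only now is mh_header_addr - mh_load_addr safe to compute */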

Reported-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <20170907063256.7418-1-ppandit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit ed4f86e8b6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:11:49 -05:00
Gerd Hoffmann 2965be1f00 vga: stop passing pointers to vga_draw_line* functions
Instead pass around the address (aka offset into vga memory).
Add vga_read_* helper functions which apply vbe_size_mask to
the address, to make sure the address stays within the valid
range, similar to the cirrus blitter fixes (commits ffaf857778
and 026aeffcb4).
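
In sketch form, every read the line-drawing code does is clamped (the
actual helpers cover byte/word/dword variants):

    /* Sketch: the mask keeps the address inside the allocated VBE
     * memory, so a guard-page overrun becomes a harmless wrap. */
    static inline uint8_t vga_read_byte(VGACommonState *vga, uint32_t addr)
    {
        return vga->vram_ptr[addr & vga->vbe_size_mask];
    }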

Impact:  DoS for privileged guest users.  QEMU crashes with
a segfault when hitting the guard page after the vga memory
allocation, while reading vga memory for display updates.

Fixes: CVE-2017-13672
Cc: P J P <ppandit@redhat.com>
Reported-by: David Buchanan <d@vidbuchanan.co.uk>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170828122906.18993-1-kraxel@redhat.com
(cherry picked from commit 3d90c62548)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:11:23 -05:00
Gerd Hoffmann d6f7f3b0cf vga: fix display update region calculation (split screen)
The vga display update mis-calculated the region for the dirty bitmap
snapshot when split-screen mode is used.  This can trigger an
assert in cpu_physical_memory_snapshot_get_dirty().

Impact:  DoS for privileged guest users.

Fixes: CVE-2017-13673
Fixes: fec5e8c92b
Cc: P J P <ppandit@redhat.com>
Reported-by: David Buchanan <d@vidbuchanan.co.uk>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170828123307.15392-1-kraxel@redhat.com
(cherry picked from commit e65294157d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-22 18:11:17 -05:00
Marc-André Lureau 2a2eab6660 vhost-user-bridge: fix resume regression (since 2.9)
Commit e10e798c85 switched to libvhost-user which lacked support
for resuming the avail_idx based on used_idx.

Fixes:
https://bugzilla.redhat.com/show_bug.cgi?id=1485867

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 672339f7ef)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-19 17:48:37 -05:00
Marc-André Lureau 48f65ce837 libvhost-user: support resuming vq->last_avail_idx based on used_idx
This is the same workaround as commit 523b018dde, which was lost
with the libvhost-user transition in commit e10e798c85.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 35480cbfcb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-19 17:48:27 -05:00
Hannes Reinecke b95fbe6f12 scsi-bus: correct responses for INQUIRY and REQUEST SENSE
According to SPC-3, INQUIRY and REQUEST SENSE should return GOOD
even on unsupported LUNs.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Message-Id: <1503049022-14749-1-git-send-email-hare@suse.de>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Fixes: ded6ddc5a7
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Hannes Reinecke <hare@suse.de>
(cherry picked from commit b07fbce634)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-19 17:44:14 -05:00
Peter Maydell b8cd978919 mps2-an511: Fix wiring of UART overflow interrupt lines
Fix an error that meant we were wiring every UART's overflow
interrupts into the same inputs 0 and 1 of the OR gate,
rather than giving each its own input.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1505232834-20890-1-git-send-email-peter.maydell@linaro.org
(cherry picked from commit ce3bc112cd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-17 15:02:04 -05:00
Alex Williamson b24304ca13 vhost: Release memory references on cleanup
vhost registers a MemoryListener where it adds and removes references
to MemoryRegions as the MemoryRegionSections pass through.  The
region_add callback is invoked for each existing section when the
MemoryListener is registered, but unregistering the MemoryListener
performs no reciprocal region_del callback.  It's therefore the
MemoryListener owner's responsibility to clean up any persistent
changes, such as these memory references, after unregistering.
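
A sketch of the resulting cleanup (assuming one reference was taken per
tracked section):

    /* Sketch: unregistering emits no region_del, so drop the references
     * that region_add took on each tracked section ourselves. */
    memory_listener_unregister(&hdev->memory_listener);
    for (i = 0; i < hdev->n_mem_sections; i++) {
        memory_region_unref(hdev->mem_sections[i].mr);
    }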

The consequence of this bug is that if we have both a vhost device
and a vfio device, the vhost device will reference any mmap'd MMIO of
the vfio device via this MemoryListener.  If the vhost device is then
removed, those references remain outstanding.  If we then attempt to
remove the vfio device, it never gets finalized and the only way to
release the kernel file descriptors is to terminate the QEMU process.

Fixes: dfde4e6e1a ("memory: add ref/unref calls")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org # v1.6.0+
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit ee4c112846)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-14 19:31:09 -05:00
Pavel Butsykin c6841b112e qcow2: move qcow2_store_persistent_dirty_bitmaps() before cache flushing
After calling qcow2_inactivate(), all qcow2 caches must be flushed, but this
may not happen, because the last call, qcow2_store_persistent_dirty_bitmaps(),
can lead to marking the l2/refcount cache as dirty.

Let's move qcow2_store_persistent_dirty_bitmaps() before the cache flushing
to fix it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 83a8c775a8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-14 19:29:40 -05:00
Thomas Huth 65a24b5c44 hw/arm/allwinner-a10: Mark the allwinner-a10 device with user_creatable = false
QEMU currently exits unexpectedly when the user accidentally
tries to do something like this:

$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add allwinner-a10
Unsupported NIC model: smc91c111

Exiting just due to a "device_add" should not happen. Looking closer
at the realize and instance_init functions of this device also
reveals that it is using serial_hds and nd_table directly there, so
this device is clearly not creatable by the user and should be marked
accordingly.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 1503416789-32080-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit dc89a180ca)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-14 19:26:40 -05:00
Pranith Kumar 85cdc23e75 arm_gicv3_kvm: Fix compile warning
Fix the following warning:

/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
            if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
                ^             ~
/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: note: add parentheses after the '!' to evaluate the bitwise operator first
            if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
                ^
/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: note: add parentheses around left hand side expression to silence this warning
            if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
                ^

This logic error meant we were not setting the PTZ
bit when we should -- luckily, as the comment suggests, this
wouldn't have had any effect beyond making GIC
initialization take a little longer.
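
The corrected test simply parenthesizes the bitwise AND:

    /* Before: '!' binds to gicr_ctlr alone, then the result is ANDed. */
    if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) { ... }
    /* After: true exactly when the ENABLE_LPIS bit is clear. */
    if (!(c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) { ... }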

Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-id: 20170829173226.7625-1-bobby.prani@gmail.com
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 7229ec5825)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-14 19:24:05 -05:00
Greg Kurz 168ff32c5d virtfs: error out gracefully when mandatory suboptions are missing
We internally convert -virtfs to -fsdev/-device. If the user doesn't
provide the path or security_model suboptions, and the fsdev backend
requires them, we hit an assertion when populating the internal -fsdev
option:

util/qemu-option.c:547: opt_set: Assertion `opt->str' failed.
Aborted (core dumped)

Let's test the suboption presence on the command line before trying
to set it in the internal -fsdev option, and let the backend code
error out gracefully (ie, like it already does when the user passes
-fsdev on the command line).

Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit 32b6943699)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-12 11:25:01 -05:00
Richard Henderson 728bfa3273 target/arm: Fix aa64 ldp register writeback
For "ldp x0, x1, [x0]", if the second load is on a second page and
the second page is unmapped, the exception would be raised with x0
already modified.  This means the instruction couldn't be restarted.

Cc: qemu-arm@nongnu.org
Cc: qemu-stable@nongnu.org
Reported-by: Andrew <andrew@fubar.geek.nz>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20170825224833.4463-1-richard.henderson@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1713066
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
[PMM: tweaked comment format]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

(cherry picked from commit 3e4d91b94c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-12 11:24:55 -05:00
Farhan Ali e1b4750f06 s390-ccw: Fix alignment for CCW1
Commit 198c0d1f9d ("s390x/css: check ccw address validity")
exposes an alignment issue in the ccw bios.

According to the PoP, the CCW must be doubleword aligned. Let's fix
this in the bios.
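
In C terms the fix is an alignment attribute on the CCW type, along
these lines (a sketch; the field layout follows the format-1 CCW and
the exact bios type name is an assumption):

    /* Sketch: PoP requires a CCW to sit on a doubleword (8-byte)
     * boundary, hence the aligned(8). */
    typedef struct Ccw1 {
        uint8_t  cmd_code;
        uint8_t  flags;
        uint16_t count;
        uint32_t cda;    /* data address */
    } __attribute__((packed, aligned(8))) Ccw1;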

Cc: qemu-stable@nongnu.org
Signed-off-by: Farhan Ali <alifm@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Eric Farman <farman@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <3ed8b810b6592daee6a775037ce21f850e40647d.1503667215.git.alifm@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 3a1e4561ad)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-12 11:24:49 -05:00
Samuel Thibault 53d421dd9c slirp: fix clearing ifq_so from pending packets
The if_fastq and if_batchq contain not only packets, but queues of packets
for the same socket. When sofree frees a socket, it thus has to clear ifq_so
in all the packets in those queues, not only in the first one.
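
Conceptually the loop over a queue becomes (a rough sketch, not the
verbatim patch; ifs_next is the per-socket chain hanging off a queue
entry):

    /* Sketch: clear ifq_so on the whole per-socket chain, not just on
     * the head packet of the queue entry. */
    if (ifq->ifq_so == so) {
        struct mbuf *ifm = ifq;
        do {
            ifm->ifq_so = NULL;
            ifm = ifm->ifs_next;
        } while (ifm != ifq);
    }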

Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 1201d30851)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2017-09-12 11:24:43 -05:00
86 changed files with 1652 additions and 748 deletions

View File

@ -1 +1 @@
-2.10.0
+2.10.2

View File

@ -763,7 +763,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     cpu->mem_io_vaddr = addr;
-    if (mr->global_locking) {
+    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }
@ -791,7 +791,7 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     cpu->mem_io_vaddr = addr;
     cpu->mem_io_pc = retaddr;
-    if (mr->global_locking) {
+    if (mr->global_locking && !qemu_mutex_iothread_locked()) {
         qemu_mutex_lock_iothread();
         locked = true;
     }

View File

@ -34,6 +34,9 @@
 #define NOT_DONE 0x7fffffff /* used while emulated sync operation in progress */
 
+/* Maximum bounce buffer for copy-on-read and write zeroes, in bytes */
+#define MAX_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
+
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags);
@ -945,11 +948,14 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
     BlockDriver *drv = bs->drv;
     struct iovec iov;
-    QEMUIOVector bounce_qiov;
+    QEMUIOVector local_qiov;
     int64_t cluster_offset;
     unsigned int cluster_bytes;
     size_t skip_bytes;
     int ret;
+    int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
+                                    BDRV_REQUEST_MAX_BYTES);
+    unsigned int progress = 0;
 
     /* FIXME We cannot require callers to have write permissions when all they
      * are doing is a read request. If we did things right, write permissions
@ -961,52 +967,94 @@ static int coroutine_fn bdrv_co_do_copy_on_readv(BdrvChild *child,
     // assert(child->perm & (BLK_PERM_WRITE_UNCHANGED | BLK_PERM_WRITE));
 
     /* Cover entire cluster so no additional backing file I/O is required when
-     * allocating cluster in the image file.
+     * allocating cluster in the image file. Note that this value may exceed
+     * BDRV_REQUEST_MAX_BYTES (even when the original read did not), which
+     * is one reason we loop rather than doing it all at once.
      */
     bdrv_round_to_clusters(bs, offset, bytes, &cluster_offset, &cluster_bytes);
+    skip_bytes = offset - cluster_offset;
 
     trace_bdrv_co_do_copy_on_readv(bs, offset, bytes,
                                    cluster_offset, cluster_bytes);
 
-    iov.iov_len = cluster_bytes;
-    iov.iov_base = bounce_buffer = qemu_try_blockalign(bs, iov.iov_len);
+    bounce_buffer = qemu_try_blockalign(bs,
+                                        MIN(MIN(max_transfer, cluster_bytes),
+                                            MAX_BOUNCE_BUFFER));
     if (bounce_buffer == NULL) {
         ret = -ENOMEM;
         goto err;
     }
 
-    qemu_iovec_init_external(&bounce_qiov, &iov, 1);
+    while (cluster_bytes) {
+        int64_t pnum;
 
-    ret = bdrv_driver_preadv(bs, cluster_offset, cluster_bytes,
-                             &bounce_qiov, 0);
-    if (ret < 0) {
-        goto err;
-    }
+        ret = bdrv_is_allocated(bs, cluster_offset,
+                                MIN(cluster_bytes, max_transfer), &pnum);
+        if (ret < 0) {
+            /* Safe to treat errors in querying allocation as if
+             * unallocated; we'll probably fail again soon on the
+             * read, but at least that will set a decent errno.
+             */
+            pnum = MIN(cluster_bytes, max_transfer);
+        }
 
-    if (drv->bdrv_co_pwrite_zeroes &&
-        buffer_is_zero(bounce_buffer, iov.iov_len)) {
-        /* FIXME: Should we (perhaps conditionally) be setting
-         * BDRV_REQ_MAY_UNMAP, if it will allow for a sparser copy
-         * that still correctly reads as zero? */
-        ret = bdrv_co_do_pwrite_zeroes(bs, cluster_offset, cluster_bytes, 0);
-    } else {
-        /* This does not change the data on the disk, it is not necessary
-         * to flush even in cache=writethrough mode.
-         */
-        ret = bdrv_driver_pwritev(bs, cluster_offset, cluster_bytes,
-                                  &bounce_qiov, 0);
-    }
+        assert(skip_bytes < pnum);
 
-    if (ret < 0) {
-        /* It might be okay to ignore write errors for guest requests.  If this
-         * is a deliberate copy-on-read then we don't want to ignore the error.
-         * Simply report it in all cases.
-         */
-        goto err;
-    }
+        if (ret <= 0) {
+            /* Must copy-on-read; use the bounce buffer */
+            iov.iov_base = bounce_buffer;
+            iov.iov_len = pnum = MIN(pnum, MAX_BOUNCE_BUFFER);
+            qemu_iovec_init_external(&local_qiov, &iov, 1);
 
-    skip_bytes = offset - cluster_offset;
-    qemu_iovec_from_buf(qiov, 0, bounce_buffer + skip_bytes, bytes);
+            ret = bdrv_driver_preadv(bs, cluster_offset, pnum,
+                                     &local_qiov, 0);
+            if (ret < 0) {
+                goto err;
+            }
+
+            if (drv->bdrv_co_pwrite_zeroes &&
+                buffer_is_zero(bounce_buffer, pnum)) {
+                /* FIXME: Should we (perhaps conditionally) be setting
+                 * BDRV_REQ_MAY_UNMAP, if it will allow for a sparser copy
+                 * that still correctly reads as zero? */
+                ret = bdrv_co_do_pwrite_zeroes(bs, cluster_offset, pnum, 0);
+            } else {
+                /* This does not change the data on the disk, it is not
+                 * necessary to flush even in cache=writethrough mode.
+                 */
+                ret = bdrv_driver_pwritev(bs, cluster_offset, pnum,
+                                          &local_qiov, 0);
+            }
+
+            if (ret < 0) {
+                /* It might be okay to ignore write errors for guest
+                 * requests.  If this is a deliberate copy-on-read
+                 * then we don't want to ignore the error.  Simply
+                 * report it in all cases.
+                 */
+                goto err;
+            }
+
+            qemu_iovec_from_buf(qiov, progress, bounce_buffer + skip_bytes,
+                                pnum - skip_bytes);
+        } else {
+            /* Read directly into the destination */
+            qemu_iovec_init(&local_qiov, qiov->niov);
+            qemu_iovec_concat(&local_qiov, qiov, progress, pnum - skip_bytes);
+            ret = bdrv_driver_preadv(bs, offset + progress, local_qiov.size,
+                                     &local_qiov, 0);
+            qemu_iovec_destroy(&local_qiov);
+            if (ret < 0) {
+                goto err;
+            }
+        }
+
+        cluster_offset += pnum;
+        cluster_bytes -= pnum;
+        progress += pnum - skip_bytes;
+        skip_bytes = 0;
+    }
     ret = 0;
 
 err:
     qemu_vfree(bounce_buffer);
@ -1212,9 +1260,6 @@ int coroutine_fn bdrv_co_readv(BdrvChild *child, int64_t sector_num,
     return bdrv_co_do_readv(child, sector_num, nb_sectors, qiov, 0);
 }
 
-/* Maximum buffer for write zeroes fallback, in bytes */
-#define MAX_WRITE_ZEROES_BOUNCE_BUFFER (32768 << BDRV_SECTOR_BITS)
-
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags)
 {
@ -1229,8 +1274,7 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int max_write_zeroes = MIN_NON_ZERO(bs->bl.max_pwrite_zeroes, INT_MAX);
     int alignment = MAX(bs->bl.pwrite_zeroes_alignment,
                         bs->bl.request_alignment);
-    int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer,
-                                    MAX_WRITE_ZEROES_BOUNCE_BUFFER);
+    int max_transfer = MIN_NON_ZERO(bs->bl.max_transfer, MAX_BOUNCE_BUFFER);
 
     assert(alignment % bs->bl.request_alignment == 0);
     head = offset % alignment;

View File

@ -1056,6 +1056,10 @@ static int coroutine_fn bdrv_mirror_top_pwritev(BlockDriverState *bs,
 
 static int coroutine_fn bdrv_mirror_top_flush(BlockDriverState *bs)
 {
+    if (bs->backing == NULL) {
+        /* we can be here after failed bdrv_append in mirror_start_job */
+        return 0;
+    }
     return bdrv_co_flush(bs->backing->bs);
 }

View File

@ -144,12 +144,12 @@ static int nbd_co_send_request(BlockDriverState *bs,
     request->handle = INDEX_TO_HANDLE(s, i);
 
     if (s->quit) {
-        qemu_co_mutex_unlock(&s->send_mutex);
-        return -EIO;
+        rc = -EIO;
+        goto err;
     }
     if (!s->ioc) {
-        qemu_co_mutex_unlock(&s->send_mutex);
-        return -EPIPE;
+        rc = -EPIPE;
+        goto err;
     }
 
     if (qiov) {
@ -166,8 +166,13 @@ static int nbd_co_send_request(BlockDriverState *bs,
     } else {
         rc = nbd_send_request(s->ioc, request);
     }
+
+err:
     if (rc < 0) {
         s->quit = true;
+        s->requests[i].coroutine = NULL;
+        s->in_flight--;
+        qemu_co_queue_next(&s->free_sema);
     }
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@ -201,13 +206,6 @@ static void nbd_co_receive_reply(NBDClientSession *s,
         /* Tell the read handler to read another header.  */
         s->reply.handle = 0;
     }
-}
-
-static void nbd_coroutine_end(BlockDriverState *bs,
-                              NBDRequest *request)
-{
-    NBDClientSession *s = nbd_get_client_session(bs);
-    int i = HANDLE_TO_INDEX(s, request->handle);
 
     s->requests[i].coroutine = NULL;
@ -243,7 +241,6 @@ int nbd_client_co_preadv(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, qiov);
     }
-    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
@ -259,6 +256,7 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
     NBDReply reply;
     ssize_t ret;
 
+    assert(!(client->info.flags & NBD_FLAG_READ_ONLY));
     if (flags & BDRV_REQ_FUA) {
         assert(client->info.flags & NBD_FLAG_SEND_FUA);
         request.flags |= NBD_CMD_FLAG_FUA;
@ -272,7 +270,6 @@ int nbd_client_co_pwritev(BlockDriverState *bs, uint64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
@ -288,6 +285,7 @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
     };
     NBDReply reply;
 
+    assert(!(client->info.flags & NBD_FLAG_READ_ONLY));
     if (!(client->info.flags & NBD_FLAG_SEND_WRITE_ZEROES)) {
         return -ENOTSUP;
     }
@ -306,7 +304,6 @@ int nbd_client_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset,
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
@ -330,7 +327,6 @@ int nbd_client_co_flush(BlockDriverState *bs)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
@ -345,6 +341,7 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int bytes)
     NBDReply reply;
     ssize_t ret;
 
+    assert(!(client->info.flags & NBD_FLAG_READ_ONLY));
     if (!(client->info.flags & NBD_FLAG_SEND_TRIM)) {
         return 0;
     }
@ -355,7 +352,6 @@ int nbd_client_co_pdiscard(BlockDriverState *bs, int64_t offset, int bytes)
     } else {
         nbd_co_receive_reply(client, &request, &reply, NULL);
     }
-    nbd_coroutine_end(bs, &request);
     return -reply.error;
 }
@ -410,6 +406,12 @@ int nbd_client_init(BlockDriverState *bs,
         logout("Failed to negotiate with the NBD server\n");
         return ret;
     }
+    if (client->info.flags & NBD_FLAG_READ_ONLY &&
+        !bdrv_is_read_only(bs)) {
+        error_setg(errp,
+                   "request for write access conflicts with read-only export");
+        return -EACCES;
+    }
     if (client->info.flags & NBD_FLAG_SEND_FUA) {
         bs->supported_write_flags = BDRV_REQ_FUA;
         bs->supported_zero_flags |= BDRV_REQ_FUA;

View File

@ -1,7 +1,7 @@
 /*
  * QEMU Block driver for native access to files on NFS shares
  *
- * Copyright (c) 2014-2016 Peter Lieven <pl@kamp.de>
+ * Copyright (c) 2014-2017 Peter Lieven <pl@kamp.de>
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
@ -496,7 +496,7 @@ out:
 static int64_t nfs_client_open(NFSClient *client, QDict *options,
                                int flags, int open_flags, Error **errp)
 {
-    int ret = -EINVAL;
+    int64_t ret = -EINVAL;
     QemuOpts *opts = NULL;
     Error *local_err = NULL;
     struct stat st;
@ -686,8 +686,7 @@ static QemuOptsList nfs_create_opts = {
 static int nfs_file_create(const char *url, QemuOpts *opts, Error **errp)
 {
-    int ret = 0;
-    int64_t total_size = 0;
+    int64_t ret, total_size;
     NFSClient *client = g_new0(NFSClient, 1);
     QDict *options = NULL;

View File

@ -602,7 +602,7 @@ static Qcow2BitmapList *bitmap_list_load(BlockDriverState *bs, uint64_t offset,
         goto fail;
     }
 
-    bm = g_new(Qcow2Bitmap, 1);
+    bm = g_new0(Qcow2Bitmap, 1);
     bm->table.offset = e->bitmap_table_offset;
     bm->table.size = e->bitmap_table_size;
     bm->flags = e->flags;

View File

@ -2053,6 +2053,14 @@ static int qcow2_inactivate(BlockDriverState *bs)
     int ret, result = 0;
     Error *local_err = NULL;
 
+    qcow2_store_persistent_dirty_bitmaps(bs, &local_err);
+    if (local_err != NULL) {
+        result = -EINVAL;
+        error_report_err(local_err);
+        error_report("Persistent bitmaps are lost for node '%s'",
+                     bdrv_get_device_or_node_name(bs));
+    }
+
     ret = qcow2_cache_flush(bs, s->l2_table_cache);
     if (ret) {
         result = ret;
@ -2067,14 +2075,6 @@ static int qcow2_inactivate(BlockDriverState *bs)
                      strerror(-ret));
     }
 
-    qcow2_store_persistent_dirty_bitmaps(bs, &local_err);
-    if (local_err != NULL) {
-        result = -EINVAL;
-        error_report_err(local_err);
-        error_report("Persistent bitmaps are lost for node '%s'",
-                     bdrv_get_device_or_node_name(bs));
-    }
-
     if (result == 0) {
         qcow2_mark_clean(bs);
     }
@ -2476,6 +2476,14 @@ static int qcow2_set_up_encryption(BlockDriverState *bs, const char *encryptfmt,
 }
 
+typedef struct PreallocCo {
+    BlockDriverState *bs;
+    uint64_t offset;
+    uint64_t new_length;
+
+    int ret;
+} PreallocCo;
+
 /**
  * Preallocates metadata structures for data clusters between @offset (in the
  * guest disk) and @new_length (which is thus generally the new guest disk
@ -2483,9 +2491,12 @@ static int qcow2_set_up_encryption(BlockDriverState *bs, const char *encryptfmt,
  *
  * Returns: 0 on success, -errno on failure.
  */
-static int preallocate(BlockDriverState *bs,
-                       uint64_t offset, uint64_t new_length)
+static void coroutine_fn preallocate_co(void *opaque)
 {
+    PreallocCo *params = opaque;
+    BlockDriverState *bs = params->bs;
+    uint64_t offset = params->offset;
+    uint64_t new_length = params->new_length;
     BDRVQcow2State *s = bs->opaque;
     uint64_t bytes;
     uint64_t host_offset = 0;
@ -2493,9 +2504,7 @@
     int ret;
     QCowL2Meta *meta;
 
-    if (qemu_in_coroutine()) {
-        qemu_co_mutex_lock(&s->lock);
-    }
+    qemu_co_mutex_lock(&s->lock);
 
     assert(offset <= new_length);
     bytes = new_length - offset;
@ -2549,10 +2558,28 @@
     ret = 0;
 
 done:
+    qemu_co_mutex_unlock(&s->lock);
+    params->ret = ret;
+}
+
+static int preallocate(BlockDriverState *bs,
+                       uint64_t offset, uint64_t new_length)
+{
+    PreallocCo params = {
+        .bs = bs,
+        .offset = offset,
+        .new_length = new_length,
+        .ret = -EINPROGRESS,
+    };
+
     if (qemu_in_coroutine()) {
-        qemu_co_mutex_unlock(&s->lock);
+        preallocate_co(&params);
+    } else {
+        Coroutine *co = qemu_coroutine_create(preallocate_co, &params);
+        bdrv_coroutine_enter(bs, co);
+        BDRV_POLL_WHILE(bs, params.ret == -EINPROGRESS);
     }
 
-    return ret;
+    return params.ret;
 }
/* qcow2_refcount_metadata_size:
@ -3161,6 +3188,7 @@ static int qcow2_truncate(BlockDriverState *bs, int64_t offset,
                          "Failed to inquire current file length");
         return ret;
     }
+    old_file_size = ROUND_UP(old_file_size, s->cluster_size);
 
     nb_new_data_clusters = DIV_ROUND_UP(offset - old_length,
                                         s->cluster_size);

View File

@ -392,17 +392,19 @@ static void coroutine_fn throttle_group_restart_queue_entry(void *opaque)
         schedule_next_request(blk, is_write);
         qemu_mutex_unlock(&tg->lock);
     }
+
+    g_free(data);
 }
 
 static void throttle_group_restart_queue(BlockBackend *blk, bool is_write)
 {
     Coroutine *co;
-    RestartData rd = {
-        .blk = blk,
-        .is_write = is_write
-    };
+    RestartData *rd = g_new0(RestartData, 1);
 
-    co = qemu_coroutine_create(throttle_group_restart_queue_entry, &rd);
+    rd->blk = blk;
+    rd->is_write = is_write;
+
+    co = qemu_coroutine_create(throttle_group_restart_queue_entry, rd);
     aio_co_enter(blk_get_aio_context(blk), co);
 }

View File

@ -521,6 +521,19 @@ vu_set_vring_addr_exec(VuDev *dev, VhostUserMsg *vmsg)
 
     vq->used_idx = vq->vring.used->idx;
 
+    if (vq->last_avail_idx != vq->used_idx) {
+        bool resume = dev->iface->queue_is_processed_in_order &&
+            dev->iface->queue_is_processed_in_order(dev, index);
+
+        DPRINT("Last avail index != used index: %u != %u%s\n",
+               vq->last_avail_idx, vq->used_idx,
+               resume ? ", resuming" : "");
+
+        if (resume) {
+            vq->shadow_avail_idx = vq->last_avail_idx = vq->used_idx;
+        }
+    }
+
     return false;
 }

View File

@ -132,6 +132,7 @@ typedef void (*vu_set_features_cb) (VuDev *dev, uint64_t features);
 typedef int (*vu_process_msg_cb) (VuDev *dev, VhostUserMsg *vmsg,
                                   int *do_reply);
 typedef void (*vu_queue_set_started_cb) (VuDev *dev, int qidx, bool started);
+typedef bool (*vu_queue_is_processed_in_order_cb) (VuDev *dev, int qidx);
 
 typedef struct VuDevIface {
     /* called by VHOST_USER_GET_FEATURES to get the features bitmask */
@ -148,6 +149,12 @@ typedef struct VuDevIface {
     vu_process_msg_cb process_msg;
     /* tells when queues can be processed */
     vu_queue_set_started_cb queue_set_started;
+    /*
+     * If the queue is processed in order, in which case it will be
+     * resumed to vring.used->idx. This can help to support resuming
+     * on unmanaged exit/crash.
+     */
+    vu_queue_is_processed_in_order_cb queue_is_processed_in_order;
 } VuDevIface;
typedef void (*vu_queue_handler_cb) (VuDev *dev, int qidx);

cpus.c
View File

@ -1764,8 +1764,9 @@ void qemu_init_vcpu(CPUState *cpu)
         /* If the target cpu hasn't set up any address spaces itself,
          * give it the default one.
          */
-        AddressSpace *as = address_space_init_shareable(cpu->memory,
-                                                        "cpu-memory");
+        AddressSpace *as = g_new0(AddressSpace, 1);
+
+        address_space_init(as, cpu->memory, "cpu-memory");
         cpu->num_ases = 1;
         cpu_address_space_init(cpu, as, 0);
     }

View File

@ -63,6 +63,7 @@ operations:
     typeof(*ptr) atomic_fetch_sub(ptr, val)
     typeof(*ptr) atomic_fetch_and(ptr, val)
     typeof(*ptr) atomic_fetch_or(ptr, val)
+    typeof(*ptr) atomic_fetch_inc_nonzero(ptr)
     typeof(*ptr) atomic_xchg(ptr, val)
     typeof(*ptr) atomic_cmpxchg(ptr, old, new)

exec.c
View File

@ -188,21 +188,18 @@ typedef struct PhysPageMap {
 } PhysPageMap;
 
 struct AddressSpaceDispatch {
-    struct rcu_head rcu;
     MemoryRegionSection *mru_section;
     /* This is a multi-level map on the physical address space.
     * The bottom level has pointers to MemoryRegionSections.
     */
     PhysPageEntry phys_map;
     PhysPageMap map;
-    AddressSpace *as;
 };
 
 #define SUBPAGE_IDX(addr) ((addr) & ~TARGET_PAGE_MASK)
 
 typedef struct subpage_t {
     MemoryRegion iomem;
-    AddressSpace *as;
+    FlatView *fv;
     hwaddr base;
     uint16_t sub_section[];
 } subpage_t;
@ -362,7 +359,7 @@ static void phys_page_compact(PhysPageEntry *lp, Node *nodes)
     }
 }
 
-static void phys_page_compact_all(AddressSpaceDispatch *d, int nodes_nb)
+void address_space_dispatch_compact(AddressSpaceDispatch *d)
 {
     if (d->phys_map.skip) {
         phys_page_compact(&d->phys_map, d->map.nodes);
@ -471,22 +468,48 @@ address_space_translate_internal(AddressSpaceDispatch *d, hwaddr addr, hwaddr *x
     return section;
 }
 
-/* Called from RCU critical section */
-static MemoryRegionSection address_space_do_translate(AddressSpace *as,
-                                                      hwaddr addr,
-                                                      hwaddr *xlat,
-                                                      hwaddr *plen,
-                                                      bool is_write,
-                                                      bool is_mmio)
+/**
+ * flatview_do_translate - translate an address in FlatView
+ *
+ * @fv: the flat view that we want to translate on
+ * @addr: the address to be translated in above address space
+ * @xlat: the translated address offset within memory region. It
+ *        cannot be @NULL.
+ * @plen_out: valid read/write length of the translated address. It
+ *            can be @NULL when we don't care about it.
+ * @page_mask_out: page mask for the translated address. This
+ *                 should only be meaningful for IOMMU translated
+ *                 addresses, since there may be huge pages that this bit
+ *                 would tell. It can be @NULL if we don't care about it.
+ * @is_write: whether the translation operation is for write
+ * @is_mmio: whether this can be MMIO, set true if it can
+ *
+ * This function is called from RCU critical section
+ */
+static MemoryRegionSection flatview_do_translate(FlatView *fv,
+                                                 hwaddr addr,
+                                                 hwaddr *xlat,
+                                                 hwaddr *plen_out,
+                                                 hwaddr *page_mask_out,
+                                                 bool is_write,
+                                                 bool is_mmio,
+                                                 AddressSpace **target_as)
 {
     IOMMUTLBEntry iotlb;
     MemoryRegionSection *section;
     IOMMUMemoryRegion *iommu_mr;
     IOMMUMemoryRegionClass *imrc;
+    hwaddr page_mask = (hwaddr)(-1);
+    hwaddr plen = (hwaddr)(-1);
+
+    if (plen_out) {
+        plen = *plen_out;
+    }
 
     for (;;) {
-        AddressSpaceDispatch *d = atomic_rcu_read(&as->dispatch);
-        section = address_space_translate_internal(d, addr, &addr, plen, is_mmio);
+        section = address_space_translate_internal(
+                flatview_to_dispatch(fv), addr, &addr,
+                &plen, is_mmio);
 
         iommu_mr = memory_region_get_iommu(section->mr);
         if (!iommu_mr) {
@ -498,16 +521,31 @@ static MemoryRegionSection address_space_do_translate(AddressSpace *as,
                                 IOMMU_WO : IOMMU_RO);
         addr = ((iotlb.translated_addr & ~iotlb.addr_mask)
                 | (addr & iotlb.addr_mask));
-        *plen = MIN(*plen, (addr | iotlb.addr_mask) - addr + 1);
+        page_mask &= iotlb.addr_mask;
+        plen = MIN(plen, (addr | iotlb.addr_mask) - addr + 1);
         if (!(iotlb.perm & (1 << is_write))) {
             goto translate_fail;
         }
 
-        as = iotlb.target_as;
+        fv = address_space_to_flatview(iotlb.target_as);
+        *target_as = iotlb.target_as;
     }
 
     *xlat = addr;
 
+    if (page_mask == (hwaddr)(-1)) {
+        /* Not behind an IOMMU, use default page size. */
+        page_mask = ~TARGET_PAGE_MASK;
+    }
+
+    if (page_mask_out) {
+        *page_mask_out = page_mask;
+    }
+
+    if (plen_out) {
+        *plen_out = plen;
+    }
+
     return *section;
 
 translate_fail:
@ -519,14 +557,14 @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
                                             bool is_write)
 {
     MemoryRegionSection section;
-    hwaddr xlat, plen;
+    hwaddr xlat, page_mask;
 
-    /* Try to get maximum page mask during translation. */
-    plen = (hwaddr)-1;
-
-    /* This can never be MMIO. */
-    section = address_space_do_translate(as, addr, &xlat, &plen,
-                                         is_write, false);
+    /*
+     * This can never be MMIO, and we don't really care about plen,
+     * but page mask.
+     */
+    section = flatview_do_translate(address_space_to_flatview(as), addr, &xlat,
+                                    NULL, &page_mask, is_write, false, &as);
 
     /* Illegal translation */
     if (section.mr == &io_mem_unassigned) {
@ -537,22 +575,11 @@
     xlat += section.offset_within_address_space -
             section.offset_within_region;
 
-    if (plen == (hwaddr)-1) {
-        /*
-         * We use default page size here. Logically it only happens
-         * for identity mappings.
-         */
-        plen = TARGET_PAGE_SIZE;
-    }
-
-    /* Convert to address mask */
-    plen -= 1;
-
     return (IOMMUTLBEntry) {
-        .target_as = section.address_space,
-        .iova = addr & ~plen,
-        .translated_addr = xlat & ~plen,
-        .addr_mask = plen,
+        .target_as = as,
+        .iova = addr & ~page_mask,
+        .translated_addr = xlat & ~page_mask,
+        .addr_mask = page_mask,
         /* IOTLBs are for DMAs, and DMA only allows on RAMs. */
         .perm = IOMMU_RW,
     };
@ -562,15 +589,16 @@ iotlb_fail:
 }
 
 /* Called from RCU critical section */
-MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
-                                      hwaddr *xlat, hwaddr *plen,
-                                      bool is_write)
+MemoryRegion *flatview_translate(FlatView *fv, hwaddr addr, hwaddr *xlat,
+                                 hwaddr *plen, bool is_write)
 {
     MemoryRegion *mr;
     MemoryRegionSection section;
+    AddressSpace *as = NULL;
 
     /* This can be MMIO, so setup MMIO bit. */
-    section = address_space_do_translate(as, addr, xlat, plen, is_write, true);
+    section = flatview_do_translate(fv, addr, xlat, plen, NULL,
+                                    is_write, true, &as);
     mr = section.mr;
 
     if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
@ -1220,7 +1248,7 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
     } else {
         AddressSpaceDispatch *d;
 
-        d = atomic_rcu_read(&section->address_space->dispatch);
+        d = flatview_to_dispatch(section->fv);
         iotlb = section - d->map.sections;
         iotlb += xlat;
     }
@ -1246,7 +1274,7 @@ hwaddr memory_region_section_get_iotlb(CPUState *cpu,
 static int subpage_register (subpage_t *mmio, uint32_t start, uint32_t end,
                              uint16_t section);
-static subpage_t *subpage_init(AddressSpace *as, hwaddr base);
+static subpage_t *subpage_init(FlatView *fv, hwaddr base);
 
 static void *(*phys_mem_alloc)(size_t size, uint64_t *align) =
     qemu_anon_ram_alloc;
@ -1303,8 +1331,9 @@ static void phys_sections_free(PhysPageMap *map)
     g_free(map->nodes);
 }
 
-static void register_subpage(AddressSpaceDispatch *d, MemoryRegionSection *section)
+static void register_subpage(FlatView *fv, MemoryRegionSection *section)
 {
+    AddressSpaceDispatch *d = flatview_to_dispatch(fv);
     subpage_t *subpage;
     hwaddr base = section->offset_within_address_space
         & TARGET_PAGE_MASK;
@ -1318,8 +1347,8 @@ static void register_subpage(FlatView *fv, MemoryRegionSection *section)
     assert(existing->mr->subpage || existing->mr == &io_mem_unassigned);
 
     if (!(existing->mr->subpage)) {
-        subpage = subpage_init(d->as, base);
-        subsection.address_space = d->as;
+        subpage = subpage_init(fv, base);
+        subsection.fv = fv;
         subsection.mr = &subpage->iomem;
         phys_page_set(d, base >> TARGET_PAGE_BITS, 1,
                       phys_section_add(&d->map, &subsection));
@ -1333,9 +1362,10 @@ static void register_subpage(AddressSpaceDispatch *d, MemoryRegionSection *secti
}
static void register_multipage(AddressSpaceDispatch *d,
static void register_multipage(FlatView *fv,
MemoryRegionSection *section)
{
AddressSpaceDispatch *d = flatview_to_dispatch(fv);
hwaddr start_addr = section->offset_within_address_space;
uint16_t section_index = phys_section_add(&d->map, section);
uint64_t num_pages = int128_get64(int128_rshift(section->size,
@ -1345,10 +1375,8 @@ static void register_multipage(AddressSpaceDispatch *d,
phys_page_set(d, start_addr >> TARGET_PAGE_BITS, num_pages, section_index);
}
static void mem_add(MemoryListener *listener, MemoryRegionSection *section)
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section)
{
AddressSpace *as = container_of(listener, AddressSpace, dispatch_listener);
AddressSpaceDispatch *d = as->next_dispatch;
MemoryRegionSection now = *section, remain = *section;
Int128 page_size = int128_make64(TARGET_PAGE_SIZE);
@ -1357,7 +1385,7 @@ static void mem_add(MemoryListener *listener, MemoryRegionSection *section)
- now.offset_within_address_space;
now.size = int128_min(int128_make64(left), now.size);
register_subpage(d, &now);
register_subpage(fv, &now);
} else {
now.size = int128_zero();
}
@ -1367,13 +1395,13 @@ static void mem_add(MemoryListener *listener, MemoryRegionSection *section)
remain.offset_within_region += int128_get64(now.size);
now = remain;
if (int128_lt(remain.size, page_size)) {
register_subpage(d, &now);
register_subpage(fv, &now);
} else if (remain.offset_within_address_space & ~TARGET_PAGE_MASK) {
now.size = page_size;
register_subpage(d, &now);
register_subpage(fv, &now);
} else {
now.size = int128_and(now.size, int128_neg(page_size));
register_multipage(d, &now);
register_multipage(fv, &now);
}
}
}
@ -2501,6 +2529,11 @@ static const MemoryRegionOps watch_mem_ops = {
.endianness = DEVICE_NATIVE_ENDIAN,
};
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
const uint8_t *buf, int len);
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
bool is_write);
static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
unsigned len, MemTxAttrs attrs)
{
@ -2512,8 +2545,7 @@ static MemTxResult subpage_read(void *opaque, hwaddr addr, uint64_t *data,
printf("%s: subpage %p len %u addr " TARGET_FMT_plx "\n", __func__,
subpage, len, addr);
#endif
res = address_space_read(subpage->as, addr + subpage->base,
attrs, buf, len);
res = flatview_read(subpage->fv, addr + subpage->base, attrs, buf, len);
if (res) {
return res;
}
@ -2562,8 +2594,7 @@ static MemTxResult subpage_write(void *opaque, hwaddr addr,
default:
abort();
}
return address_space_write(subpage->as, addr + subpage->base,
attrs, buf, len);
return flatview_write(subpage->fv, addr + subpage->base, attrs, buf, len);
}
static bool subpage_accepts(void *opaque, hwaddr addr,
@ -2575,8 +2606,8 @@ static bool subpage_accepts(void *opaque, hwaddr addr,
__func__, subpage, is_write ? 'w' : 'r', len, addr);
#endif
return address_space_access_valid(subpage->as, addr + subpage->base,
len, is_write);
return flatview_access_valid(subpage->fv, addr + subpage->base,
len, is_write);
}
static const MemoryRegionOps subpage_ops = {
@ -2610,12 +2641,12 @@ static int subpage_register (subpage_t *mmio, uint32_t start, uint32_t end,
return 0;
}
static subpage_t *subpage_init(AddressSpace *as, hwaddr base)
static subpage_t *subpage_init(FlatView *fv, hwaddr base)
{
subpage_t *mmio;
mmio = g_malloc0(sizeof(subpage_t) + TARGET_PAGE_SIZE * sizeof(uint16_t));
mmio->as = as;
mmio->fv = fv;
mmio->base = base;
memory_region_init_io(&mmio->iomem, NULL, &subpage_ops, mmio,
NULL, TARGET_PAGE_SIZE);
@ -2629,12 +2660,11 @@ static subpage_t *subpage_init(AddressSpace *as, hwaddr base)
return mmio;
}
static uint16_t dummy_section(PhysPageMap *map, AddressSpace *as,
MemoryRegion *mr)
static uint16_t dummy_section(PhysPageMap *map, FlatView *fv, MemoryRegion *mr)
{
assert(as);
assert(fv);
MemoryRegionSection section = {
.address_space = as,
.fv = fv,
.mr = mr,
.offset_within_address_space = 0,
.offset_within_region = 0,
@ -2671,46 +2701,31 @@ static void io_mem_init(void)
NULL, UINT64_MAX);
}
static void mem_begin(MemoryListener *listener)
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv)
{
AddressSpace *as = container_of(listener, AddressSpace, dispatch_listener);
AddressSpaceDispatch *d = g_new0(AddressSpaceDispatch, 1);
uint16_t n;
n = dummy_section(&d->map, as, &io_mem_unassigned);
n = dummy_section(&d->map, fv, &io_mem_unassigned);
assert(n == PHYS_SECTION_UNASSIGNED);
n = dummy_section(&d->map, as, &io_mem_notdirty);
n = dummy_section(&d->map, fv, &io_mem_notdirty);
assert(n == PHYS_SECTION_NOTDIRTY);
n = dummy_section(&d->map, as, &io_mem_rom);
n = dummy_section(&d->map, fv, &io_mem_rom);
assert(n == PHYS_SECTION_ROM);
n = dummy_section(&d->map, as, &io_mem_watch);
n = dummy_section(&d->map, fv, &io_mem_watch);
assert(n == PHYS_SECTION_WATCH);
d->phys_map = (PhysPageEntry) { .ptr = PHYS_MAP_NODE_NIL, .skip = 1 };
d->as = as;
as->next_dispatch = d;
return d;
}
static void address_space_dispatch_free(AddressSpaceDispatch *d)
void address_space_dispatch_free(AddressSpaceDispatch *d)
{
phys_sections_free(&d->map);
g_free(d);
}
static void mem_commit(MemoryListener *listener)
{
AddressSpace *as = container_of(listener, AddressSpace, dispatch_listener);
AddressSpaceDispatch *cur = as->dispatch;
AddressSpaceDispatch *next = as->next_dispatch;
phys_page_compact_all(next, next->map.nodes_nb);
atomic_rcu_set(&as->dispatch, next);
if (cur) {
call_rcu(cur, address_space_dispatch_free, rcu);
}
}
static void tcg_commit(MemoryListener *listener)
{
CPUAddressSpace *cpuas;
@ -2724,39 +2739,11 @@ static void tcg_commit(MemoryListener *listener)
* We reload the dispatch pointer now because cpu_reloading_memory_map()
* may have split the RCU critical section.
*/
d = atomic_rcu_read(&cpuas->as->dispatch);
d = address_space_to_dispatch(cpuas->as);
atomic_rcu_set(&cpuas->memory_dispatch, d);
tlb_flush(cpuas->cpu);
}
void address_space_init_dispatch(AddressSpace *as)
{
as->dispatch = NULL;
as->dispatch_listener = (MemoryListener) {
.begin = mem_begin,
.commit = mem_commit,
.region_add = mem_add,
.region_nop = mem_add,
.priority = 0,
};
memory_listener_register(&as->dispatch_listener, as);
}
void address_space_unregister(AddressSpace *as)
{
memory_listener_unregister(&as->dispatch_listener);
}
void address_space_destroy_dispatch(AddressSpace *as)
{
AddressSpaceDispatch *d = as->dispatch;
atomic_rcu_set(&as->dispatch, NULL);
if (d) {
call_rcu(d, address_space_dispatch_free, rcu);
}
}
static void memory_map_init(void)
{
system_memory = g_malloc(sizeof(*system_memory));
@ -2900,11 +2887,11 @@ static bool prepare_mmio_access(MemoryRegion *mr)
}
/* Called within RCU critical section. */
static MemTxResult address_space_write_continue(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs,
const uint8_t *buf,
int len, hwaddr addr1,
hwaddr l, MemoryRegion *mr)
static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
MemTxAttrs attrs,
const uint8_t *buf,
int len, hwaddr addr1,
hwaddr l, MemoryRegion *mr)
{
uint8_t *ptr;
uint64_t val;
@ -2966,14 +2953,14 @@ static MemTxResult address_space_write_continue(AddressSpace *as, hwaddr addr,
}
l = len;
mr = address_space_translate(as, addr, &addr1, &l, true);
mr = flatview_translate(fv, addr, &addr1, &l, true);
}
return result;
}
MemTxResult address_space_write(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
const uint8_t *buf, int len)
static MemTxResult flatview_write(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
const uint8_t *buf, int len)
{
hwaddr l;
hwaddr addr1;
@ -2983,20 +2970,27 @@ MemTxResult address_space_write(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
if (len > 0) {
rcu_read_lock();
l = len;
mr = address_space_translate(as, addr, &addr1, &l, true);
result = address_space_write_continue(as, addr, attrs, buf, len,
addr1, l, mr);
mr = flatview_translate(fv, addr, &addr1, &l, true);
result = flatview_write_continue(fv, addr, attrs, buf, len,
addr1, l, mr);
rcu_read_unlock();
}
return result;
}
MemTxResult address_space_write(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs,
const uint8_t *buf, int len)
{
return flatview_write(address_space_to_flatview(as), addr, attrs, buf, len);
}
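(The hunk above shows the pattern this whole series applies: the public address_space_* entry points become thin wrappers that resolve the AddressSpace to its current FlatView and do the real work at FlatView level. A toy model of that layering — all types and names here are invented for illustration, and the real address_space_to_flatview() is an RCU-protected read of as->current_map.)

#include <stdio.h>

typedef struct FlatView { const char *name; } FlatView;
typedef struct AddressSpace { FlatView *current_map; } AddressSpace;

static FlatView *as_to_flatview(AddressSpace *as)
{
    return as->current_map;          /* QEMU does this read under RCU */
}

static int flatview_write_(FlatView *fv, unsigned long addr, int len)
{
    printf("write via %s: addr=0x%lx len=%d\n", fv->name, addr, len);
    return 0;                        /* MEMTX_OK in the real code */
}

/* the public API stays source-compatible; only the plumbing changes */
static int address_space_write_(AddressSpace *as, unsigned long addr, int len)
{
    return flatview_write_(as_to_flatview(as), addr, len);
}

int main(void)
{
    FlatView fv = { "system" };
    AddressSpace as = { &fv };
    return address_space_write_(&as, 0x1000, 4);
}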
/* Called within RCU critical section. */
MemTxResult address_space_read_continue(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len, hwaddr addr1, hwaddr l,
MemoryRegion *mr)
MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len, hwaddr addr1, hwaddr l,
MemoryRegion *mr)
{
uint8_t *ptr;
uint64_t val;
@ -3056,14 +3050,14 @@ MemTxResult address_space_read_continue(AddressSpace *as, hwaddr addr,
}
l = len;
mr = address_space_translate(as, addr, &addr1, &l, false);
mr = flatview_translate(fv, addr, &addr1, &l, false);
}
return result;
}
MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf, int len)
MemTxResult flatview_read_full(FlatView *fv, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf, int len)
{
hwaddr l;
hwaddr addr1;
@ -3073,25 +3067,33 @@ MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
if (len > 0) {
rcu_read_lock();
l = len;
mr = address_space_translate(as, addr, &addr1, &l, false);
result = address_space_read_continue(as, addr, attrs, buf, len,
addr1, l, mr);
mr = flatview_translate(fv, addr, &addr1, &l, false);
result = flatview_read_continue(fv, addr, attrs, buf, len,
addr1, l, mr);
rcu_read_unlock();
}
return result;
}
MemTxResult address_space_rw(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
uint8_t *buf, int len, bool is_write)
static MemTxResult flatview_rw(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
uint8_t *buf, int len, bool is_write)
{
if (is_write) {
return address_space_write(as, addr, attrs, (uint8_t *)buf, len);
return flatview_write(fv, addr, attrs, (uint8_t *)buf, len);
} else {
return address_space_read(as, addr, attrs, (uint8_t *)buf, len);
return flatview_read(fv, addr, attrs, (uint8_t *)buf, len);
}
}
MemTxResult address_space_rw(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len, bool is_write)
{
return flatview_rw(address_space_to_flatview(as),
addr, attrs, buf, len, is_write);
}
void cpu_physical_memory_rw(hwaddr addr, uint8_t *buf,
int len, int is_write)
{
@ -3249,7 +3251,8 @@ static void cpu_notify_map_clients(void)
qemu_mutex_unlock(&map_client_list_lock);
}
bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_write)
static bool flatview_access_valid(FlatView *fv, hwaddr addr, int len,
bool is_write)
{
MemoryRegion *mr;
hwaddr l, xlat;
@ -3257,7 +3260,7 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
rcu_read_lock();
while (len > 0) {
l = len;
mr = address_space_translate(as, addr, &xlat, &l, is_write);
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
if (!memory_access_is_direct(mr, is_write)) {
l = memory_access_size(mr, l, addr);
if (!memory_region_access_valid(mr, xlat, l, is_write)) {
@ -3273,8 +3276,16 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, int len, bool is_
return true;
}
bool address_space_access_valid(AddressSpace *as, hwaddr addr,
int len, bool is_write)
{
return flatview_access_valid(address_space_to_flatview(as),
addr, len, is_write);
}
static hwaddr
address_space_extend_translation(AddressSpace *as, hwaddr addr, hwaddr target_len,
flatview_extend_translation(FlatView *fv, hwaddr addr,
hwaddr target_len,
MemoryRegion *mr, hwaddr base, hwaddr len,
bool is_write)
{
@ -3291,7 +3302,8 @@ address_space_extend_translation(AddressSpace *as, hwaddr addr, hwaddr target_le
}
len = target_len;
this_mr = address_space_translate(as, addr, &xlat, &len, is_write);
this_mr = flatview_translate(fv, addr, &xlat,
&len, is_write);
if (this_mr != mr || xlat != base + done) {
return done;
}
@ -3314,6 +3326,7 @@ void *address_space_map(AddressSpace *as,
hwaddr l, xlat;
MemoryRegion *mr;
void *ptr;
FlatView *fv = address_space_to_flatview(as);
if (len == 0) {
return NULL;
@ -3321,7 +3334,7 @@ void *address_space_map(AddressSpace *as,
l = len;
rcu_read_lock();
mr = address_space_translate(as, addr, &xlat, &l, is_write);
mr = flatview_translate(fv, addr, &xlat, &l, is_write);
if (!memory_access_is_direct(mr, is_write)) {
if (atomic_xchg(&bounce.in_use, true)) {
@ -3337,7 +3350,7 @@ void *address_space_map(AddressSpace *as,
memory_region_ref(mr);
bounce.mr = mr;
if (!is_write) {
address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED,
flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
bounce.buffer, l);
}
@ -3348,7 +3361,8 @@ void *address_space_map(AddressSpace *as,
memory_region_ref(mr);
*plen = address_space_extend_translation(as, addr, len, mr, xlat, l, is_write);
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
l, is_write);
ptr = qemu_ram_ptr_length(mr->ram_block, xlat, plen, true);
rcu_read_unlock();


@ -803,12 +803,12 @@ static uint32_t stat_to_v9mode(const struct stat *stbuf)
return mode;
}
static int coroutine_fn stat_to_v9stat(V9fsPDU *pdu, V9fsPath *name,
static int coroutine_fn stat_to_v9stat(V9fsPDU *pdu, V9fsPath *path,
const char *basename,
const struct stat *stbuf,
V9fsStat *v9stat)
{
int err;
const char *str;
memset(v9stat, 0, sizeof(*v9stat));
@ -829,7 +829,7 @@ static int coroutine_fn stat_to_v9stat(V9fsPDU *pdu, V9fsPath *name,
v9fs_string_free(&v9stat->extension);
if (v9stat->mode & P9_STAT_MODE_SYMLINK) {
err = v9fs_co_readlink(pdu, name, &v9stat->extension);
err = v9fs_co_readlink(pdu, path, &v9stat->extension);
if (err < 0) {
return err;
}
@ -842,14 +842,7 @@ static int coroutine_fn stat_to_v9stat(V9fsPDU *pdu, V9fsPath *name,
"HARDLINKCOUNT", (unsigned long)stbuf->st_nlink);
}
str = strrchr(name->data, '/');
if (str) {
str += 1;
} else {
str = name->data;
}
v9fs_string_sprintf(&v9stat->name, "%s", str);
v9fs_string_sprintf(&v9stat->name, "%s", basename);
v9stat->size = 61 +
v9fs_string_size(&v9stat->name) +
@ -1058,6 +1051,7 @@ static void coroutine_fn v9fs_stat(void *opaque)
struct stat stbuf;
V9fsFidState *fidp;
V9fsPDU *pdu = opaque;
char *basename;
err = pdu_unmarshal(pdu, offset, "d", &fid);
if (err < 0) {
@ -1074,7 +1068,9 @@ static void coroutine_fn v9fs_stat(void *opaque)
if (err < 0) {
goto out;
}
err = stat_to_v9stat(pdu, &fidp->path, &stbuf, &v9stat);
basename = g_path_get_basename(fidp->path.data);
err = stat_to_v9stat(pdu, &fidp->path, basename, &stbuf, &v9stat);
g_free(basename);
if (err < 0) {
goto out;
}
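(For readers comparing the hunks: the strrchr() open-coding deleted in the earlier stat_to_v9stat() hunk and g_path_get_basename() agree on ordinary absolute paths; the GLib call also handles trailing slashes and returns an allocated copy. A standalone check, assuming GLib is available — build with pkg-config --cflags --libs glib-2.0.)

#include <glib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "/export/dir/file.txt";

    /* the open-coded variant that stat_to_v9stat() used to carry */
    const char *str = strrchr(path, '/');
    str = str ? str + 1 : path;

    /* the replacement; the result must be released with g_free() */
    char *base = g_path_get_basename(path);

    printf("strrchr: %s  glib: %s\n", str, base);
    g_free(base);
    return 0;
}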
@ -1750,22 +1746,31 @@ static int coroutine_fn v9fs_do_readdir_with_stat(V9fsPDU *pdu,
if (err < 0) {
break;
}
err = stat_to_v9stat(pdu, &path, &stbuf, &v9stat);
err = stat_to_v9stat(pdu, &path, dent->d_name, &stbuf, &v9stat);
if (err < 0) {
break;
}
/* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
len = pdu_marshal(pdu, 11 + count, "S", &v9stat);
if ((count + v9stat.size + 2) > max_count) {
v9fs_readdir_unlock(&fidp->fs.dir);
v9fs_readdir_unlock(&fidp->fs.dir);
if ((len != (v9stat.size + 2)) || ((count + len) > max_count)) {
/* Ran out of buffer. Set dir back to old position and return */
v9fs_co_seekdir(pdu, fidp, saved_dir_pos);
v9fs_stat_free(&v9stat);
v9fs_path_free(&path);
return count;
}
/* 11 = 7 + 4 (7 = start offset, 4 = space for storing count) */
len = pdu_marshal(pdu, 11 + count, "S", &v9stat);
v9fs_readdir_unlock(&fidp->fs.dir);
if (len < 0) {
v9fs_co_seekdir(pdu, fidp, saved_dir_pos);
v9fs_stat_free(&v9stat);
v9fs_path_free(&path);
return len;
}
count += len;
v9fs_stat_free(&v9stat);
v9fs_path_free(&path);
@ -2559,13 +2564,11 @@ static int coroutine_fn v9fs_complete_rename(V9fsPDU *pdu, V9fsFidState *fidp,
int32_t newdirfid,
V9fsString *name)
{
char *end;
int err = 0;
V9fsPath new_path;
V9fsFidState *tfidp;
V9fsState *s = pdu->s;
V9fsFidState *dirfidp = NULL;
char *old_name, *new_name;
v9fs_path_init(&new_path);
if (newdirfid != -1) {
@ -2583,18 +2586,15 @@ static int coroutine_fn v9fs_complete_rename(V9fsPDU *pdu, V9fsFidState *fidp,
goto out;
}
} else {
old_name = fidp->path.data;
end = strrchr(old_name, '/');
if (end) {
end++;
} else {
end = old_name;
}
new_name = g_malloc0(end - old_name + name->size + 1);
strncat(new_name, old_name, end - old_name);
strncat(new_name + (end - old_name), name->data, name->size);
err = v9fs_co_name_to_path(pdu, NULL, new_name, &new_path);
g_free(new_name);
char *dir_name = g_path_get_dirname(fidp->path.data);
V9fsPath dir_path;
v9fs_path_init(&dir_path);
v9fs_path_sprintf(&dir_path, "%s", dir_name);
g_free(dir_name);
err = v9fs_co_name_to_path(pdu, &dir_path, name->data, &new_path);
v9fs_path_free(&dir_path);
if (err < 0) {
goto out;
}


@ -75,6 +75,43 @@ static int acpi_pcihp_get_bsel(PCIBus *bus)
}
}
/* Assign BSEL property to all buses. In the future, this can be changed
* to only assign to buses that support hotplug.
*/
static void *acpi_set_bsel(PCIBus *bus, void *opaque)
{
unsigned *bsel_alloc = opaque;
unsigned *bus_bsel;
if (qbus_is_hotpluggable(BUS(bus))) {
bus_bsel = g_malloc(sizeof *bus_bsel);
*bus_bsel = (*bsel_alloc)++;
object_property_add_uint32_ptr(OBJECT(bus), ACPI_PCIHP_PROP_BSEL,
bus_bsel, &error_abort);
}
return bsel_alloc;
}
static void acpi_set_pci_info(void)
{
static bool bsel_is_set;
PCIBus *bus;
unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
if (bsel_is_set) {
return;
}
bsel_is_set = true;
bus = find_i440fx(); /* TODO: Q35 support */
if (bus) {
/* Scan all PCI buses. Set property to enable acpi based hotplug. */
pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
}
}
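(The walk above threads its counter through the opaque pointer, handing consecutive BSEL values to hotpluggable buses only. A self-contained imitation of that callback pattern; the Bus type and the flat array standing in for the PCI bus tree are invented here.)

#include <stdio.h>

typedef struct Bus {
    const char *name;
    int hotpluggable;
} Bus;

/* stand-in for acpi_set_bsel(): bump the shared counter per eligible bus */
static void *set_bsel(Bus *bus, void *opaque)
{
    unsigned *bsel_alloc = opaque;

    if (bus->hotpluggable) {
        printf("%s -> bsel %u\n", bus->name, (*bsel_alloc)++);
    }
    return bsel_alloc;
}

int main(void)
{
    Bus buses[] = { { "pci.0", 1 }, { "pcie.0", 0 }, { "pci.1", 1 } };
    unsigned bsel_alloc = 0;         /* ACPI_PCIHP_BSEL_DEFAULT */
    size_t i;

    for (i = 0; i < sizeof(buses) / sizeof(buses[0]); i++) {
        set_bsel(&buses[i], &bsel_alloc);
    }
    return 0;
}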
static void acpi_pcihp_test_hotplug_bus(PCIBus *bus, void *opaque)
{
AcpiPciHpFind *find = opaque;
@ -177,6 +214,7 @@ static void acpi_pcihp_update(AcpiPciHpState *s)
void acpi_pcihp_reset(AcpiPciHpState *s)
{
acpi_set_pci_info();
acpi_pcihp_update(s);
}
@ -273,7 +311,7 @@ static void pci_write(void *opaque, hwaddr addr, uint64_t data,
addr, data);
break;
case PCI_SEL_BASE:
s->hotplug_select = data;
s->hotplug_select = s->legacy_piix ? ACPI_PCIHP_BSEL_DEFAULT : data;
ACPI_PCIHP_DPRINTF("pcisel write %" HWADDR_PRIx " <== %" PRIu64 "\n",
addr, data);
default:


@ -385,10 +385,7 @@ static void piix4_device_plug_cb(HotplugHandler *hotplug_dev,
dev, errp);
}
} else if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
if (!xen_enabled()) {
acpi_pcihp_device_plug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev,
errp);
}
acpi_pcihp_device_plug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev, errp);
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
if (s->cpu_hotplug_legacy) {
legacy_acpi_cpu_plug_cb(hotplug_dev, &s->gpe_cpu, dev, errp);
@ -411,10 +408,8 @@ static void piix4_device_unplug_request_cb(HotplugHandler *hotplug_dev,
acpi_memory_unplug_request_cb(hotplug_dev, &s->acpi_memory_hotplug,
dev, errp);
} else if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
if (!xen_enabled()) {
acpi_pcihp_device_unplug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev,
errp);
}
acpi_pcihp_device_unplug_cb(hotplug_dev, &s->acpi_pci_hotplug, dev,
errp);
} else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU) &&
!s->cpu_hotplug_legacy) {
acpi_cpu_unplug_request_cb(hotplug_dev, &s->cpuhp_state, dev, errp);


@ -118,6 +118,8 @@ static void aw_a10_class_init(ObjectClass *oc, void *data)
DeviceClass *dc = DEVICE_CLASS(oc);
dc->realize = aw_a10_realize;
/* Reason: Uses serial_hds in realize and nd_table in instance_init */
dc->user_creatable = false;
}
static const TypeInfo aw_a10_type_info = {


@ -41,7 +41,7 @@ static MemTxResult bitband_read(void *opaque, hwaddr offset,
/* Find address in underlying memory and round down to multiple of size */
addr = bitband_addr(s, offset) & (-size);
res = address_space_read(s->source_as, addr, attrs, buf, size);
res = address_space_read(&s->source_as, addr, attrs, buf, size);
if (res) {
return res;
}
@ -66,7 +66,7 @@ static MemTxResult bitband_write(void *opaque, hwaddr offset, uint64_t value,
/* Find address in underlying memory and round down to multiple of size */
addr = bitband_addr(s, offset) & (-size);
res = address_space_read(s->source_as, addr, attrs, buf, size);
res = address_space_read(&s->source_as, addr, attrs, buf, size);
if (res) {
return res;
}
@ -79,7 +79,7 @@ static MemTxResult bitband_write(void *opaque, hwaddr offset, uint64_t value,
} else {
buf[bitpos >> 3] &= ~bit;
}
return address_space_write(s->source_as, addr, attrs, buf, size);
return address_space_write(&s->source_as, addr, attrs, buf, size);
}
static const MemoryRegionOps bitband_ops = {
@ -117,8 +117,7 @@ static void bitband_realize(DeviceState *dev, Error **errp)
return;
}
s->source_as = address_space_init_shareable(s->source_memory,
"bitband-source");
address_space_init(&s->source_as, s->source_memory, "bitband-source");
}
/* Board init. */


@ -338,6 +338,8 @@ static void aspeed_soc_class_init(ObjectClass *oc, void *data)
sc->info = (AspeedSoCInfo *) data;
dc->realize = aspeed_soc_realize;
/* Reason: Uses serial_hds and nd_table in realize() directly */
dc->user_creatable = false;
}
static const TypeInfo aspeed_soc_type_info = {


@ -101,6 +101,8 @@ static void digic_class_init(ObjectClass *oc, void *data)
DeviceClass *dc = DEVICE_CLASS(oc);
dc->realize = digic_realize;
/* Reason: Uses serial_hds in the realize function --> not usable twice */
dc->user_creatable = false;
}
static const TypeInfo digic_type_info = {


@ -287,8 +287,8 @@ static void mps2_common_init(MachineState *machine)
cmsdk_apb_uart_create(uartbase[i],
qdev_get_gpio_in(txrx_orgate_dev, 0),
qdev_get_gpio_in(txrx_orgate_dev, 1),
qdev_get_gpio_in(orgate_dev, 0),
qdev_get_gpio_in(orgate_dev, 1),
qdev_get_gpio_in(orgate_dev, i * 2),
qdev_get_gpio_in(orgate_dev, i * 2 + 1),
NULL,
uartchr, SYSCLK_FRQ);
}


@ -95,20 +95,46 @@ static void vga_draw_glyph9(uint8_t *d, int linesize,
} while (--h);
}
static inline uint8_t vga_read_byte(VGACommonState *vga, uint32_t addr)
{
return vga->vram_ptr[addr & vga->vbe_size_mask];
}
static inline uint16_t vga_read_word_le(VGACommonState *vga, uint32_t addr)
{
uint32_t offset = addr & vga->vbe_size_mask & ~1;
uint16_t *ptr = (uint16_t *)(vga->vram_ptr + offset);
return lduw_le_p(ptr);
}
static inline uint16_t vga_read_word_be(VGACommonState *vga, uint32_t addr)
{
uint32_t offset = addr & vga->vbe_size_mask & ~1;
uint16_t *ptr = (uint16_t *)(vga->vram_ptr + offset);
return lduw_be_p(ptr);
}
static inline uint32_t vga_read_dword_le(VGACommonState *vga, uint32_t addr)
{
uint32_t offset = addr & vga->vbe_size_mask & ~3;
uint32_t *ptr = (uint32_t *)(vga->vram_ptr + offset);
return ldl_le_p(ptr);
}
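(These helpers are the heart of the fix: assuming vbe_size is a power of two — VRAM sizes are — vbe_size_mask is vbe_size - 1, set in vga_common_init() in a later hunk, so a guest-controlled address wraps inside VRAM instead of reading past it. A toy model:)

#include <stdint.h>
#include <stdio.h>

#define VBE_SIZE (1u << 16)                  /* toy 64 KiB of VRAM */
static uint8_t vram[VBE_SIZE];
static const uint32_t vbe_size_mask = VBE_SIZE - 1;

static uint8_t toy_vga_read_byte(uint32_t addr)
{
    return vram[addr & vbe_size_mask];       /* wraps, never overruns */
}

int main(void)
{
    vram[5] = 0xab;
    /* an address past the end wraps back to offset 5 */
    printf("0x%02x\n", toy_vga_read_byte(VBE_SIZE + 5));
    return 0;
}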
/*
* 4 color mode
*/
static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line2(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t plane_mask, *palette, data, v;
int x;
palette = s1->last_palette;
plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
palette = vga->last_palette;
plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
width >>= 3;
for(x = 0; x < width; x++) {
data = ((uint32_t *)s)[0];
data = vga_read_dword_le(vga, addr);
data &= plane_mask;
v = expand2[GET_PLANE(data, 0)];
v |= expand2[GET_PLANE(data, 2)] << 2;
@ -124,7 +150,7 @@ static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
((uint32_t *)d)[6] = palette[(v >> 4) & 0xf];
((uint32_t *)d)[7] = palette[(v >> 0) & 0xf];
d += 32;
s += 4;
addr += 4;
}
}
@ -134,17 +160,17 @@ static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
/*
* 4 color mode, dup2 horizontal
*/
static void vga_draw_line2d2(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line2d2(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t plane_mask, *palette, data, v;
int x;
palette = s1->last_palette;
plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
palette = vga->last_palette;
plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
width >>= 3;
for(x = 0; x < width; x++) {
data = ((uint32_t *)s)[0];
data = vga_read_dword_le(vga, addr);
data &= plane_mask;
v = expand2[GET_PLANE(data, 0)];
v |= expand2[GET_PLANE(data, 2)] << 2;
@ -160,24 +186,24 @@ static void vga_draw_line2d2(VGACommonState *s1, uint8_t *d,
PUT_PIXEL2(d, 6, palette[(v >> 4) & 0xf]);
PUT_PIXEL2(d, 7, palette[(v >> 0) & 0xf]);
d += 64;
s += 4;
addr += 4;
}
}
/*
* 16 color mode
*/
static void vga_draw_line4(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line4(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t plane_mask, data, v, *palette;
int x;
palette = s1->last_palette;
plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
palette = vga->last_palette;
plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
width >>= 3;
for(x = 0; x < width; x++) {
data = ((uint32_t *)s)[0];
data = vga_read_dword_le(vga, addr);
data &= plane_mask;
v = expand4[GET_PLANE(data, 0)];
v |= expand4[GET_PLANE(data, 1)] << 1;
@ -192,24 +218,24 @@ static void vga_draw_line4(VGACommonState *s1, uint8_t *d,
((uint32_t *)d)[6] = palette[(v >> 4) & 0xf];
((uint32_t *)d)[7] = palette[(v >> 0) & 0xf];
d += 32;
s += 4;
addr += 4;
}
}
/*
* 16 color mode, dup2 horizontal
*/
static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line4d2(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t plane_mask, data, v, *palette;
int x;
palette = s1->last_palette;
plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
palette = vga->last_palette;
plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
width >>= 3;
for(x = 0; x < width; x++) {
data = ((uint32_t *)s)[0];
data = vga_read_dword_le(vga, addr);
data &= plane_mask;
v = expand4[GET_PLANE(data, 0)];
v |= expand4[GET_PLANE(data, 1)] << 1;
@ -224,7 +250,7 @@ static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
PUT_PIXEL2(d, 6, palette[(v >> 4) & 0xf]);
PUT_PIXEL2(d, 7, palette[(v >> 0) & 0xf]);
d += 64;
s += 4;
addr += 4;
}
}
@ -233,21 +259,21 @@ static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
*
* XXX: add plane_mask support (never used in standard VGA modes)
*/
static void vga_draw_line8d2(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line8d2(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t *palette;
int x;
palette = s1->last_palette;
palette = vga->last_palette;
width >>= 3;
for(x = 0; x < width; x++) {
PUT_PIXEL2(d, 0, palette[s[0]]);
PUT_PIXEL2(d, 1, palette[s[1]]);
PUT_PIXEL2(d, 2, palette[s[2]]);
PUT_PIXEL2(d, 3, palette[s[3]]);
PUT_PIXEL2(d, 0, palette[vga_read_byte(vga, addr + 0)]);
PUT_PIXEL2(d, 1, palette[vga_read_byte(vga, addr + 1)]);
PUT_PIXEL2(d, 2, palette[vga_read_byte(vga, addr + 2)]);
PUT_PIXEL2(d, 3, palette[vga_read_byte(vga, addr + 3)]);
d += 32;
s += 4;
addr += 4;
}
}
@ -256,63 +282,63 @@ static void vga_draw_line8d2(VGACommonState *s1, uint8_t *d,
*
* XXX: add plane_mask support (never used in standard VGA modes)
*/
static void vga_draw_line8(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line8(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
uint32_t *palette;
int x;
palette = s1->last_palette;
palette = vga->last_palette;
width >>= 3;
for(x = 0; x < width; x++) {
((uint32_t *)d)[0] = palette[s[0]];
((uint32_t *)d)[1] = palette[s[1]];
((uint32_t *)d)[2] = palette[s[2]];
((uint32_t *)d)[3] = palette[s[3]];
((uint32_t *)d)[4] = palette[s[4]];
((uint32_t *)d)[5] = palette[s[5]];
((uint32_t *)d)[6] = palette[s[6]];
((uint32_t *)d)[7] = palette[s[7]];
((uint32_t *)d)[0] = palette[vga_read_byte(vga, addr + 0)];
((uint32_t *)d)[1] = palette[vga_read_byte(vga, addr + 1)];
((uint32_t *)d)[2] = palette[vga_read_byte(vga, addr + 2)];
((uint32_t *)d)[3] = palette[vga_read_byte(vga, addr + 3)];
((uint32_t *)d)[4] = palette[vga_read_byte(vga, addr + 4)];
((uint32_t *)d)[5] = palette[vga_read_byte(vga, addr + 5)];
((uint32_t *)d)[6] = palette[vga_read_byte(vga, addr + 6)];
((uint32_t *)d)[7] = palette[vga_read_byte(vga, addr + 7)];
d += 32;
s += 8;
addr += 8;
}
}
/*
* 15 bit color
*/
static void vga_draw_line15_le(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line15_le(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t v, r, g, b;
w = width;
do {
v = lduw_le_p((void *)s);
v = vga_read_word_le(vga, addr);
r = (v >> 7) & 0xf8;
g = (v >> 2) & 0xf8;
b = (v << 3) & 0xf8;
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 2;
addr += 2;
d += 4;
} while (--w != 0);
}
static void vga_draw_line15_be(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line15_be(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t v, r, g, b;
w = width;
do {
v = lduw_be_p((void *)s);
v = vga_read_word_be(vga, addr);
r = (v >> 7) & 0xf8;
g = (v >> 2) & 0xf8;
b = (v << 3) & 0xf8;
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 2;
addr += 2;
d += 4;
} while (--w != 0);
}
@ -320,38 +346,38 @@ static void vga_draw_line15_be(VGACommonState *s1, uint8_t *d,
/*
* 16 bit color
*/
static void vga_draw_line16_le(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line16_le(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t v, r, g, b;
w = width;
do {
v = lduw_le_p((void *)s);
v = vga_read_word_le(vga, addr);
r = (v >> 8) & 0xf8;
g = (v >> 3) & 0xfc;
b = (v << 3) & 0xf8;
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 2;
addr += 2;
d += 4;
} while (--w != 0);
}
static void vga_draw_line16_be(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line16_be(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t v, r, g, b;
w = width;
do {
v = lduw_be_p((void *)s);
v = vga_read_word_be(vga, addr);
r = (v >> 8) & 0xf8;
g = (v >> 3) & 0xfc;
b = (v << 3) & 0xf8;
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 2;
addr += 2;
d += 4;
} while (--w != 0);
}
@ -359,36 +385,36 @@ static void vga_draw_line16_be(VGACommonState *s1, uint8_t *d,
/*
* 24 bit color
*/
static void vga_draw_line24_le(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line24_le(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t r, g, b;
w = width;
do {
b = s[0];
g = s[1];
r = s[2];
b = vga_read_byte(vga, addr + 0);
g = vga_read_byte(vga, addr + 1);
r = vga_read_byte(vga, addr + 2);
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 3;
addr += 3;
d += 4;
} while (--w != 0);
}
static void vga_draw_line24_be(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line24_be(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
int w;
uint32_t r, g, b;
w = width;
do {
r = s[0];
g = s[1];
b = s[2];
r = vga_read_byte(vga, addr + 0);
g = vga_read_byte(vga, addr + 1);
b = vga_read_byte(vga, addr + 2);
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 3;
addr += 3;
d += 4;
} while (--w != 0);
}
@ -396,44 +422,36 @@ static void vga_draw_line24_be(VGACommonState *s1, uint8_t *d,
/*
* 32 bit color
*/
static void vga_draw_line32_le(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line32_le(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
#ifndef HOST_WORDS_BIGENDIAN
memcpy(d, s, width * 4);
#else
int w;
uint32_t r, g, b;
w = width;
do {
b = s[0];
g = s[1];
r = s[2];
b = vga_read_byte(vga, addr + 0);
g = vga_read_byte(vga, addr + 1);
r = vga_read_byte(vga, addr + 2);
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 4;
addr += 4;
d += 4;
} while (--w != 0);
#endif
}
static void vga_draw_line32_be(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width)
static void vga_draw_line32_be(VGACommonState *vga, uint8_t *d,
uint32_t addr, int width)
{
#ifdef HOST_WORDS_BIGENDIAN
memcpy(d, s, width * 4);
#else
int w;
uint32_t r, g, b;
w = width;
do {
r = s[1];
g = s[2];
b = s[3];
r = vga_read_byte(vga, addr + 1);
g = vga_read_byte(vga, addr + 2);
b = vga_read_byte(vga, addr + 3);
((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
s += 4;
addr += 4;
d += 4;
} while (--w != 0);
#endif
}


@ -1005,7 +1005,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
}
typedef void vga_draw_line_func(VGACommonState *s1, uint8_t *d,
const uint8_t *s, int width);
uint32_t srcaddr, int width);
#include "vga-helpers.h"
@ -1464,14 +1464,14 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
{
DisplaySurface *surface = qemu_console_surface(s->con);
int y1, y, update, linesize, y_start, double_scan, mask, depth;
int width, height, shift_control, line_offset, bwidth, bits;
ram_addr_t page0, page1;
int width, height, shift_control, bwidth, bits;
ram_addr_t page0, page1, region_start, region_end;
DirtyBitmapSnapshot *snap = NULL;
int disp_width, multi_scan, multi_run;
uint8_t *d;
uint32_t v, addr1, addr;
vga_draw_line_func *vga_draw_line = NULL;
bool share_surface;
bool share_surface, force_shadow = false;
pixman_format_code_t format;
#ifdef HOST_WORDS_BIGENDIAN
bool byteswap = !s->big_endian_fb;
@ -1484,6 +1484,15 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
s->get_resolution(s, &width, &height);
disp_width = width;
region_start = (s->start_addr * 4);
region_end = region_start + s->line_offset * height;
if (region_end > s->vbe_size) {
/* wraps around (can happen with cirrus vbe modes) */
region_start = 0;
region_end = s->vbe_size;
force_shadow = true;
}
shift_control = (s->gr[VGA_GFX_MODE] >> 5) & 3;
double_scan = (s->cr[VGA_CRTC_MAX_SCAN] >> 7);
if (shift_control != 1) {
@ -1523,7 +1532,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
format = qemu_default_pixman_format(depth, !byteswap);
if (format) {
share_surface = dpy_gfx_check_format(s->con, format)
&& !s->force_shadow;
&& !s->force_shadow && !force_shadow;
} else {
share_surface = false;
}
@ -1614,7 +1623,6 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
s->cursor_invalidate(s);
}
line_offset = s->line_offset;
#if 0
printf("w=%d h=%d v=%d line_offset=%d cr[0x09]=0x%02x cr[0x17]=0x%02x linecmp=%d sr[0x01]=0x%02x\n",
width, height, v, line_offset, s->cr[9], s->cr[VGA_CRTC_MODE],
@ -1629,8 +1637,12 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
if (!full_update) {
vga_sync_dirty_bitmap(s);
snap = memory_region_snapshot_and_clear_dirty(&s->vram, addr1,
line_offset * height,
if (s->line_compare < height) {
/* split screen mode */
region_start = 0;
}
snap = memory_region_snapshot_and_clear_dirty(&s->vram, region_start,
region_end - region_start,
DIRTY_MEMORY_VGA);
}
@ -1646,10 +1658,17 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
addr = (addr & ~0x8000) | ((y1 & 2) << 14);
}
update = full_update;
page0 = addr;
page1 = addr + bwidth - 1;
page0 = addr & s->vbe_size_mask;
page1 = (addr + bwidth - 1) & s->vbe_size_mask;
if (full_update) {
update = 1;
} else if (page1 < page0) {
/* scanline wraps from end of video memory to the start */
assert(force_shadow);
update = memory_region_snapshot_get_dirty(&s->vram, snap,
page0, 0);
update |= memory_region_snapshot_get_dirty(&s->vram, snap,
page1, 0);
} else {
update = memory_region_snapshot_get_dirty(&s->vram, snap,
page0, page1 - page0);
@ -1660,7 +1679,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
if (y_start < 0)
y_start = y;
if (!(is_buffer_shared(surface))) {
vga_draw_line(s, d, s->vram_ptr + addr, width);
vga_draw_line(s, d, addr, width);
if (s->cursor_draw_line)
s->cursor_draw_line(s, d, y);
}
@ -1675,7 +1694,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
if (!multi_run) {
mask = (s->cr[VGA_CRTC_MODE] & 3) ^ 3;
if ((y1 & mask) == mask)
addr1 += line_offset;
addr1 += s->line_offset;
y1++;
multi_run = multi_scan;
} else {
@ -2164,6 +2183,7 @@ void vga_common_init(VGACommonState *s, Object *obj, bool global_vmstate)
if (!s->vbe_size) {
s->vbe_size = s->vram_size;
}
s->vbe_size_mask = s->vbe_size - 1;
s->is_vbe_vmstate = 1;
memory_region_init_ram_nomigrate(&s->vram, obj, "vga.vram", s->vram_size,

View File

@ -94,6 +94,7 @@ typedef struct VGACommonState {
uint32_t vram_size;
uint32_t vram_size_mb; /* property */
uint32_t vbe_size;
uint32_t vbe_size_mask;
uint32_t latch;
bool has_chain4_alias;
MemoryRegion chain4_alias;


@ -493,36 +493,6 @@ build_madt(GArray *table_data, BIOSLinker *linker, PCMachineState *pcms)
table_data->len - madt_start, 1, NULL, NULL);
}
/* Assign BSEL property to all buses. In the future, this can be changed
* to only assign to buses that support hotplug.
*/
static void *acpi_set_bsel(PCIBus *bus, void *opaque)
{
unsigned *bsel_alloc = opaque;
unsigned *bus_bsel;
if (qbus_is_hotpluggable(BUS(bus))) {
bus_bsel = g_malloc(sizeof *bus_bsel);
*bus_bsel = (*bsel_alloc)++;
object_property_add_uint32_ptr(OBJECT(bus), ACPI_PCIHP_PROP_BSEL,
bus_bsel, &error_abort);
}
return bsel_alloc;
}
static void acpi_set_pci_info(void)
{
PCIBus *bus = find_i440fx(); /* TODO: Q35 support */
unsigned bsel_alloc = ACPI_PCIHP_BSEL_DEFAULT;
if (bus) {
/* Scan all PCI buses. Set property to enable acpi based hotplug. */
pci_for_each_bus_depth_first(bus, acpi_set_bsel, NULL, &bsel_alloc);
}
}
static void build_append_pcihp_notify_entry(Aml *method, int slot)
{
Aml *if_ctx;
@ -2888,8 +2858,6 @@ void acpi_setup(void)
build_state = g_malloc0(sizeof *build_state);
acpi_set_pci_info();
acpi_build_tables_init(&tables);
acpi_build(&tables, MACHINE(pcms));


@ -62,7 +62,7 @@ static uint64_t kvmclock_current_nsec(KVMClockState *s)
{
CPUState *cpu = first_cpu;
CPUX86State *env = cpu->env_ptr;
hwaddr kvmclock_struct_pa = env->system_time_msr & ~1ULL;
hwaddr kvmclock_struct_pa;
uint64_t migration_tsc = env->tsc;
struct pvclock_vcpu_time_info time;
uint64_t delta;
@ -77,6 +77,7 @@ static uint64_t kvmclock_current_nsec(KVMClockState *s)
return 0;
}
kvmclock_struct_pa = env->system_time_msr & ~1ULL;
cpu_physical_memory_read(kvmclock_struct_pa, &time, sizeof(time));
assert(time.tsc_timestamp <= migration_tsc);


@ -221,15 +221,34 @@ int load_multiboot(FWCfgState *fw_cfg,
uint32_t mh_header_addr = ldl_p(header+i+12);
uint32_t mh_load_end_addr = ldl_p(header+i+20);
uint32_t mh_bss_end_addr = ldl_p(header+i+24);
mh_load_addr = ldl_p(header+i+16);
if (mh_header_addr < mh_load_addr) {
fprintf(stderr, "invalid mh_load_addr address\n");
exit(1);
}
uint32_t mb_kernel_text_offset = i - (mh_header_addr - mh_load_addr);
uint32_t mb_load_size = 0;
mh_entry_addr = ldl_p(header+i+28);
if (mh_load_end_addr) {
if (mh_bss_end_addr < mh_load_addr) {
fprintf(stderr, "invalid mh_bss_end_addr address\n");
exit(1);
}
mb_kernel_size = mh_bss_end_addr - mh_load_addr;
if (mh_load_end_addr < mh_load_addr) {
fprintf(stderr, "invalid mh_load_end_addr address\n");
exit(1);
}
mb_load_size = mh_load_end_addr - mh_load_addr;
} else {
if (kernel_file_size < mb_kernel_text_offset) {
fprintf(stderr, "invalid kernel_file_size\n");
exit(1);
}
mb_kernel_size = kernel_file_size - mb_kernel_text_offset;
mb_load_size = mb_kernel_size;
}
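(Why the added checks matter: these header fields are untrusted 32-bit values read from the guest kernel image, and subtracting them unchecked underflows silently. A standalone demonstration with made-up hostile values:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* hypothetical hostile header: header address below load address */
    uint32_t mh_header_addr = 0x00100000;
    uint32_t mh_load_addr   = 0x00200000;

    if (mh_header_addr < mh_load_addr) {
        fprintf(stderr, "invalid mh_load_addr address\n");
        return 1;                    /* load_multiboot() exits here */
    }

    /* without the check this wraps to ~4 GiB and poisons later sizes */
    printf("text offset 0x%x\n", mh_header_addr - mh_load_addr);
    return 0;
}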


@ -1495,6 +1495,7 @@ void ahci_uninit(AHCIState *s)
ide_exit(s);
}
object_unparent(OBJECT(&ad->port));
}
g_free(s->dev);


@ -575,12 +575,15 @@ PCMCIACardState *dscm1xxxx_init(DriveInfo *dinfo)
static void dscm1xxxx_class_init(ObjectClass *oc, void *data)
{
PCMCIACardClass *pcc = PCMCIA_CARD_CLASS(oc);
DeviceClass *dc = DEVICE_CLASS(oc);
pcc->cis = dscm1xxxx_cis;
pcc->cis_len = sizeof(dscm1xxxx_cis);
pcc->attach = dscm1xxxx_attach;
pcc->detach = dscm1xxxx_detach;
/* Reason: Needs to be wired-up in code, see dscm1xxxx_init() */
dc->user_creatable = false;
}
static const TypeInfo dscm1xxxx_type_info = {


@ -64,20 +64,16 @@ static void vm_change_state_handler(void *opaque, int running,
{
GICv3ITSState *s = (GICv3ITSState *)opaque;
Error *err = NULL;
int ret;
if (running) {
return;
}
ret = kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
KVM_DEV_ARM_ITS_SAVE_TABLES, NULL, true, &err);
kvm_device_access(s->dev_fd, KVM_DEV_ARM_VGIC_GRP_CTRL,
KVM_DEV_ARM_ITS_SAVE_TABLES, NULL, true, &err);
if (err) {
error_report_err(err);
}
if (ret < 0 && ret != -EFAULT) {
abort();
}
}
static void kvm_arm_its_realize(DeviceState *dev, Error **errp)


@ -293,7 +293,7 @@ static void kvm_arm_gicv3_put(GICv3State *s)
kvm_gicr_access(s, GICR_PROPBASER + 4, ncpu, &regh, true);
reg64 = c->gicr_pendbaser;
if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
if (!(c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) {
/* Setting PTZ is advised if LPIs are disabled, to reduce
* GIC initialization time.
*/


@ -124,7 +124,7 @@ static void kvm_openpic_region_add(MemoryListener *listener,
uint64_t reg_base;
int ret;
if (section->address_space != &address_space_memory) {
if (section->fv != address_space_to_flatview(&address_space_memory)) {
abort();
}


@ -288,7 +288,8 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
qemu_bh_cancel(q->tx_bh);
}
if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
(queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
(queue_status & VIRTIO_CONFIG_S_DRIVER_OK) &&
vdev->vm_running) {
/* if tx is waiting we likely have some packets in the tx queue
* and notification disabled */
q->tx_waiting = 0;


@ -43,6 +43,7 @@
#include "migration/register.h"
#include "mmu-hash64.h"
#include "mmu-book3s-v3.h"
#include "cpu-models.h"
#include "qom/cpu.h"
#include "hw/boards.h"
@ -251,9 +252,10 @@ static int spapr_fixup_cpu_numa_dt(void *fdt, int offset, PowerPCCPU *cpu)
}
/* Populate the "ibm,pa-features" property */
static void spapr_populate_pa_features(CPUPPCState *env, void *fdt, int offset,
bool legacy_guest)
static void spapr_populate_pa_features(PowerPCCPU *cpu, void *fdt, int offset,
bool legacy_guest)
{
CPUPPCState *env = &cpu->env;
uint8_t pa_features_206[] = { 6, 0,
0xf6, 0x1f, 0xc7, 0x00, 0x80, 0xc0 };
uint8_t pa_features_207[] = { 24, 0,
@ -286,23 +288,22 @@ static void spapr_populate_pa_features(CPUPPCState *env, void *fdt, int offset,
/* 60: NM atomic, 62: RNG */
0x80, 0x00, 0x80, 0x00, 0x00, 0x00, /* 60 - 65 */
};
uint8_t *pa_features;
uint8_t *pa_features = NULL;
size_t pa_size;
switch (POWERPC_MMU_VER(env->mmu_model)) {
case POWERPC_MMU_VER_2_06:
if (ppc_check_compat(cpu, CPU_POWERPC_LOGICAL_2_06, 0, cpu->compat_pvr)) {
pa_features = pa_features_206;
pa_size = sizeof(pa_features_206);
break;
case POWERPC_MMU_VER_2_07:
}
if (ppc_check_compat(cpu, CPU_POWERPC_LOGICAL_2_07, 0, cpu->compat_pvr)) {
pa_features = pa_features_207;
pa_size = sizeof(pa_features_207);
break;
case POWERPC_MMU_VER_3_00:
}
if (ppc_check_compat(cpu, CPU_POWERPC_LOGICAL_3_00, 0, cpu->compat_pvr)) {
pa_features = pa_features_300;
pa_size = sizeof(pa_features_300);
break;
default:
}
if (!pa_features) {
return;
}
@ -339,7 +340,6 @@ static int spapr_fixup_cpu_dt(void *fdt, sPAPRMachineState *spapr)
CPU_FOREACH(cs) {
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *env = &cpu->env;
DeviceClass *dc = DEVICE_GET_CLASS(cs);
int index = ppc_get_vcpu_dt_id(cpu);
int compat_smt = MIN(smp_threads, ppc_compat_max_threads(cpu));
@ -384,7 +384,7 @@ static int spapr_fixup_cpu_dt(void *fdt, sPAPRMachineState *spapr)
return ret;
}
spapr_populate_pa_features(env, fdt, offset,
spapr_populate_pa_features(cpu, fdt, offset,
spapr->cas_legacy_guest_workaround);
}
return ret;
@ -581,7 +581,7 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
page_sizes_prop, page_sizes_prop_size)));
}
spapr_populate_pa_features(env, fdt, offset, false);
spapr_populate_pa_features(cpu, fdt, offset, false);
_FDT((fdt_setprop_cell(fdt, offset, "ibm,chip-id",
cs->cpu_index / vcpus_per_socket)));
@ -790,6 +790,26 @@ out:
return ret;
}
static bool spapr_hotplugged_dev_before_cas(void)
{
Object *drc_container, *obj;
ObjectProperty *prop;
ObjectPropertyIterator iter;
drc_container = container_get(object_get_root(), "/dr-connector");
object_property_iter_init(&iter, drc_container);
while ((prop = object_property_iter_next(&iter))) {
if (!strstart(prop->type, "link<", NULL)) {
continue;
}
obj = object_property_get_link(drc_container, prop->name, NULL);
if (spapr_drc_needed(obj)) {
return true;
}
}
return false;
}
int spapr_h_cas_compose_response(sPAPRMachineState *spapr,
target_ulong addr, target_ulong size,
sPAPROptionVector *ov5_updates)
@ -797,9 +817,13 @@ int spapr_h_cas_compose_response(sPAPRMachineState *spapr,
void *fdt, *fdt_skel;
sPAPRDeviceTreeUpdateHeader hdr = { .version_id = 1 };
if (spapr_hotplugged_dev_before_cas()) {
return 1;
}
size -= sizeof(hdr);
/* Create sceleton */
/* Create skeleton */
fdt_skel = g_malloc0(size);
_FDT((fdt_create(fdt_skel, size)));
_FDT((fdt_begin_node(fdt_skel, "")));
@ -920,7 +944,11 @@ static void spapr_dt_ov5_platform_support(void *fdt, int chosen)
26, 0x40, /* Radix options: GTSE == yes. */
};
if (kvm_enabled()) {
if (!ppc_check_compat(first_ppc_cpu, CPU_POWERPC_LOGICAL_3_00, 0,
first_ppc_cpu->compat_pvr)) {
/* If we're in a pre-POWER9 compat mode then the guest should use hash */
val[3] = 0x00; /* Hash */
} else if (kvm_enabled()) {
if (kvmppc_has_cap_mmu_radix() && kvmppc_has_cap_mmu_hash_v3()) {
val[3] = 0x80; /* OV5_MMU_BOTH */
} else if (kvmppc_has_cap_mmu_radix()) {
@ -929,13 +957,8 @@ static void spapr_dt_ov5_platform_support(void *fdt, int chosen)
val[3] = 0x00; /* Hash */
}
} else {
if (first_ppc_cpu->env.mmu_model & POWERPC_MMU_V3) {
/* V3 MMU supports both hash and radix (with dynamic switching) */
val[3] = 0xC0;
} else {
/* Otherwise we can only do hash */
val[3] = 0x00;
}
/* V3 MMU supports both hash and radix in tcg (with dynamic switching) */
val[3] = 0xC0;
}
_FDT(fdt_setprop(fdt, chosen, "ibm,arch-vec-5-platform-support",
val, sizeof(val)));
@ -1342,7 +1365,10 @@ void spapr_setup_hpt_and_vrma(sPAPRMachineState *spapr)
&& !spapr_ovec_test(spapr->ov5_cas, OV5_HPT_RESIZE))) {
hpt_shift = spapr_hpt_shift_for_ramsize(MACHINE(spapr)->maxram_size);
} else {
hpt_shift = spapr_hpt_shift_for_ramsize(MACHINE(spapr)->ram_size);
uint64_t current_ram_size;
current_ram_size = MACHINE(spapr)->ram_size + pc_existing_dimms_capacity(&error_abort);
hpt_shift = spapr_hpt_shift_for_ramsize(current_ram_size);
}
spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
@ -1369,6 +1395,19 @@ static void find_unknown_sysbus_device(SysBusDevice *sbdev, void *opaque)
}
}
static int spapr_reset_drcs(Object *child, void *opaque)
{
sPAPRDRConnector *drc =
(sPAPRDRConnector *) object_dynamic_cast(child,
TYPE_SPAPR_DR_CONNECTOR);
if (drc) {
spapr_drc_reset(drc);
}
return 0;
}
static void ppc_spapr_reset(void)
{
MachineState *machine = MACHINE(qdev_get_machine());
@ -1382,7 +1421,10 @@ static void ppc_spapr_reset(void)
/* Check for unknown sysbus devices */
foreach_dynamic_sysbus_device(find_unknown_sysbus_device, NULL);
if (kvm_enabled() && kvmppc_has_cap_mmu_radix()) {
first_ppc_cpu = POWERPC_CPU(first_cpu);
if (kvm_enabled() && kvmppc_has_cap_mmu_radix() &&
ppc_check_compat(first_ppc_cpu, CPU_POWERPC_LOGICAL_3_00, 0,
spapr->max_compat_pvr)) {
/* If using KVM with radix mode available, VCPUs can be started
* without a HPT because KVM will start them in radix mode.
* Set the GR bit in PATB so that we know there is no HPT. */
@ -1393,6 +1435,15 @@ static void ppc_spapr_reset(void)
qemu_devices_reset();
/* DRC reset may cause a device to be unplugged. This will cause trouble
* if that device is used by another device (e.g., a running vhost backend
* will crash QEMU if the DIMM holding the vring goes away). To avoid such
* situations, we reset DRCs after all devices have been reset.
*/
object_child_foreach_recursive(object_get_root(), spapr_reset_drcs, NULL);
spapr_clear_pending_events(spapr);
/*
* We place the device tree and RTAS just below either the top of the RMA,
* or just below 2GB, whichever is lower, so that it can be
@ -1432,7 +1483,6 @@ static void ppc_spapr_reset(void)
g_free(fdt);
/* Set up the entry state */
first_ppc_cpu = POWERPC_CPU(first_cpu);
first_ppc_cpu->env.gpr[3] = fdt_addr;
first_ppc_cpu->env.gpr[5] = 0;
first_cpu->halted = 0;


@ -455,12 +455,7 @@ void spapr_drc_reset(sPAPRDRConnector *drc)
}
}
static void drc_reset(void *opaque)
{
spapr_drc_reset(SPAPR_DR_CONNECTOR(opaque));
}
static bool spapr_drc_needed(void *opaque)
bool spapr_drc_needed(void *opaque)
{
sPAPRDRConnector *drc = (sPAPRDRConnector *)opaque;
sPAPRDRConnectorClass *drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
@ -518,7 +513,6 @@ static void realize(DeviceState *d, Error **errp)
}
vmstate_register(DEVICE(drc), spapr_drc_index(drc), &vmstate_spapr_drc,
drc);
qemu_register_reset(drc_reset, drc);
trace_spapr_drc_realize_complete(spapr_drc_index(drc));
}
@ -529,7 +523,6 @@ static void unrealize(DeviceState *d, Error **errp)
char name[256];
trace_spapr_drc_unrealize(spapr_drc_index(drc));
qemu_unregister_reset(drc_reset, drc);
vmstate_unregister(DEVICE(drc), &vmstate_spapr_drc, drc);
root_container = container_get(object_get_root(), DRC_CONTAINER_PATH);
snprintf(name, sizeof(name), "%x", spapr_drc_index(drc));


@ -700,6 +700,17 @@ static void event_scan(PowerPCCPU *cpu, sPAPRMachineState *spapr,
rtas_st(rets, 0, RTAS_OUT_NO_ERRORS_FOUND);
}
void spapr_clear_pending_events(sPAPRMachineState *spapr)
{
sPAPREventLogEntry *entry = NULL;
QTAILQ_FOREACH(entry, &spapr->pending_events, next) {
QTAILQ_REMOVE(&spapr->pending_events, entry, next);
g_free(entry->extended_log);
g_free(entry);
}
}
void spapr_events_init(sPAPRMachineState *spapr)
{
QTAILQ_INIT(&spapr->pending_events);


@ -442,6 +442,8 @@ static void s390_ipl_class_init(ObjectClass *klass, void *data)
dc->reset = s390_ipl_reset;
dc->vmsd = &vmstate_ipl;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
/* Reason: Loads the ROMs and thus can only be used one time - internally */
dc->user_creatable = false;
}
static const TypeInfo s390_ipl_info = {


@ -516,8 +516,10 @@ static size_t scsi_sense_len(SCSIRequest *req)
static int32_t scsi_target_send_command(SCSIRequest *req, uint8_t *buf)
{
SCSITargetReq *r = DO_UPCAST(SCSITargetReq, req, req);
int fixed_sense = (req->cmd.buf[1] & 1) == 0;
if (req->lun != 0) {
if (req->lun != 0 &&
buf[0] != INQUIRY && buf[0] != REQUEST_SENSE) {
scsi_req_build_sense(req, SENSE_CODE(LUN_NOT_SUPPORTED));
scsi_req_complete(req, CHECK_CONDITION);
return 0;
@ -535,9 +537,28 @@ static int32_t scsi_target_send_command(SCSIRequest *req, uint8_t *buf)
break;
case REQUEST_SENSE:
scsi_target_alloc_buf(&r->req, scsi_sense_len(req));
r->len = scsi_device_get_sense(r->req.dev, r->buf,
MIN(req->cmd.xfer, r->buf_len),
(req->cmd.buf[1] & 1) == 0);
if (req->lun != 0) {
const struct SCSISense sense = SENSE_CODE(LUN_NOT_SUPPORTED);
if (fixed_sense) {
r->buf[0] = 0x70;
r->buf[2] = sense.key;
r->buf[10] = 10;
r->buf[12] = sense.asc;
r->buf[13] = sense.ascq;
r->len = MIN(req->cmd.xfer, SCSI_SENSE_LEN);
} else {
r->buf[0] = 0x72;
r->buf[1] = sense.key;
r->buf[2] = sense.asc;
r->buf[3] = sense.ascq;
r->len = 8;
}
} else {
r->len = scsi_device_get_sense(r->req.dev, r->buf,
MIN(req->cmd.xfer, r->buf_len),
fixed_sense);
}
if (r->req.dev->sense_is_ua) {
scsi_device_unit_attention_reported(req->dev);
r->req.dev->sense_len = 0;
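(The two branches above encode the same LUN_NOT_SUPPORTED condition in the two SCSI sense layouts: fixed format, response code 0x70, and descriptor format, 0x72. A standalone sketch mirroring the byte offsets used in the hunk; the key/asc/ascq values are an assumption here — the standard ILLEGAL REQUEST / LOGICAL UNIT NOT SUPPORTED encoding — since SENSE_CODE() itself is not shown in this diff.)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* assumed SENSE_CODE(LUN_NOT_SUPPORTED): key 0x05, asc 0x25 */
    uint8_t key = 0x05, asc = 0x25, ascq = 0x00;
    uint8_t fixed[18] = { 0 }, desc[8] = { 0 };

    fixed[0]  = 0x70;                /* fixed format response code */
    fixed[2]  = key;
    fixed[10] = 10;
    fixed[12] = asc;
    fixed[13] = ascq;

    desc[0] = 0x72;                  /* descriptor format response code */
    desc[1] = key;
    desc[2] = asc;
    desc[3] = ascq;

    printf("fixed: code=%02x key=%02x  desc: code=%02x key=%02x\n",
           fixed[0], fixed[2], desc[0], desc[1]);
    return 0;
}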


@ -1797,8 +1797,13 @@ uint8_t sd_read_data(SDState *sd)
break;
case 18: /* CMD18: READ_MULTIPLE_BLOCK */
if (sd->data_offset == 0)
if (sd->data_offset == 0) {
if (sd->data_start + io_len > sd->size) {
sd->card_status |= ADDRESS_ERROR;
return 0x00;
}
BLK_READ_BLOCK(sd->data_start, io_len);
}
ret = sd->data[sd->data_offset ++];
if (sd->data_offset >= io_len) {
@ -1812,11 +1817,6 @@ uint8_t sd_read_data(SDState *sd)
break;
}
}
if (sd->data_start + io_len > sd->size) {
sd->card_status |= ADDRESS_ERROR;
break;
}
}
break;


@ -341,9 +341,7 @@ static USBDevice *usb_try_create_simple(USBBus *bus, const char *name,
object_property_set_bool(OBJECT(dev), true, "realized", &err);
if (err) {
error_propagate(errp, err);
error_prepend(errp, "Failed to initialize USB device '%s': ",
name);
object_unparent(OBJECT(dev));
error_prepend(errp, "Failed to initialize USB device '%s': ", name);
return NULL;
}
return dev;


@ -968,6 +968,7 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
if (!ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &container->fd)) {
group->container = container;
QLIST_INSERT_HEAD(&container->group_list, group, container_next);
vfio_kvm_device_add_group(group);
return 0;
}
}


@ -492,21 +492,21 @@ static int vhost_verify_ring_mappings(struct vhost_dev *dev,
j = 0;
r = vhost_verify_ring_part_mapping(dev, vq->desc, vq->desc_phys,
vq->desc_size, start_addr, size);
if (!r) {
if (r) {
break;
}
j++;
r = vhost_verify_ring_part_mapping(dev, vq->avail, vq->avail_phys,
vq->avail_size, start_addr, size);
if (!r) {
if (r) {
break;
}
j++;
r = vhost_verify_ring_part_mapping(dev, vq->used, vq->used_phys,
vq->used_size, start_addr, size);
if (!r) {
if (r) {
break;
}
}
@ -1137,6 +1137,10 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
r = dev->vhost_ops->vhost_get_vring_base(dev, &state);
if (r < 0) {
VHOST_OPS_DEBUG("vhost VQ %d ring restore failed: %d", idx, r);
/* Connection to the backend is broken, so let's sync internal
* last avail idx to the device used idx.
*/
virtio_queue_restore_last_avail_idx(vdev, idx);
} else {
virtio_queue_set_last_avail_idx(vdev, idx, state.num);
}
@ -1356,6 +1360,10 @@ void vhost_dev_cleanup(struct vhost_dev *hdev)
if (hdev->mem) {
/* those are only safe after successful init */
memory_listener_unregister(&hdev->memory_listener);
for (i = 0; i < hdev->n_mem_sections; ++i) {
MemoryRegionSection *section = &hdev->mem_sections[i];
memory_region_unref(section->mr);
}
QLIST_REMOVE(hdev, entry);
}
if (hdev->migration_blocker) {


@ -2311,6 +2311,16 @@ void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
vdev->vq[n].shadow_avail_idx = idx;
}
void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n)
{
rcu_read_lock();
if (vdev->vq[n].vring.desc) {
vdev->vq[n].last_avail_idx = vring_used_idx(&vdev->vq[n]);
vdev->vq[n].shadow_avail_idx = vdev->vq[n].last_avail_idx;
}
rcu_read_unlock();
}
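(The idea behind the fallback used by the vhost hunk above: if the backend cannot report how far it got, treat everything the device marked used as consumed and restart from there. A toy model with the ring state reduced to plain counters:)

#include <stdint.h>
#include <stdio.h>

typedef struct ToyVirtQueue {
    uint16_t last_avail_idx;         /* next descriptor the device fetches */
    uint16_t shadow_avail_idx;
    uint16_t used_idx;               /* what the device has completed */
} ToyVirtQueue;

/* stand-in for virtio_queue_restore_last_avail_idx() */
static void restore_last_avail_idx(ToyVirtQueue *vq)
{
    vq->last_avail_idx = vq->used_idx;
    vq->shadow_avail_idx = vq->last_avail_idx;
}

int main(void)
{
    ToyVirtQueue vq = { .last_avail_idx = 7, .shadow_avail_idx = 7,
                        .used_idx = 5 };

    restore_last_avail_idx(&vq);     /* backend gone: fall back to 5 */
    printf("last_avail=%u\n", vq.last_avail_idx);
    return 0;
}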
void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
{
rcu_read_lock();


@ -121,6 +121,7 @@ static void wdt_diag288_class_init(ObjectClass *klass, void *data)
dc->realize = wdt_diag288_realize;
dc->unrealize = wdt_diag288_unrealize;
dc->reset = wdt_diag288_reset;
dc->hotpluggable = false;
set_bit(DEVICE_CATEGORY_MISC, dc->categories);
dc->vmsd = &vmstate_diag288;
diag288->handle_timer = wdt_diag288_handle_timer;


@ -22,14 +22,18 @@
#ifndef CONFIG_USER_ONLY
typedef struct AddressSpaceDispatch AddressSpaceDispatch;
void address_space_init_dispatch(AddressSpace *as);
void address_space_unregister(AddressSpace *as);
void address_space_destroy_dispatch(AddressSpace *as);
extern const MemoryRegionOps unassigned_mem_ops;
bool memory_region_access_valid(MemoryRegion *mr, hwaddr addr,
unsigned size, bool is_write);
void flatview_add_to_dispatch(FlatView *fv, MemoryRegionSection *section);
AddressSpaceDispatch *address_space_dispatch_new(FlatView *fv);
void address_space_dispatch_compact(AddressSpaceDispatch *d);
AddressSpaceDispatch *address_space_to_dispatch(AddressSpace *as);
AddressSpaceDispatch *flatview_to_dispatch(FlatView *fv);
void address_space_dispatch_free(AddressSpaceDispatch *d);
#endif
#endif


@ -318,21 +318,18 @@ struct AddressSpace {
struct rcu_head rcu;
char *name;
MemoryRegion *root;
int ref_count;
bool malloced;
/* Accessed via RCU. */
struct FlatView *current_map;
int ioeventfd_nb;
struct MemoryRegionIoeventfd *ioeventfds;
struct AddressSpaceDispatch *dispatch;
struct AddressSpaceDispatch *next_dispatch;
MemoryListener dispatch_listener;
QTAILQ_HEAD(memory_listeners_as, MemoryListener) listeners;
QTAILQ_ENTRY(AddressSpace) address_spaces_link;
};
FlatView *address_space_to_flatview(AddressSpace *as);
/**
* MemoryRegionSection: describes a fragment of a #MemoryRegion
*
@ -346,7 +343,7 @@ struct AddressSpace {
*/
struct MemoryRegionSection {
MemoryRegion *mr;
AddressSpace *address_space;
FlatView *fv;
hwaddr offset_within_region;
Int128 size;
hwaddr offset_within_address_space;
@ -1594,23 +1591,6 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
*/
void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name);
/**
* address_space_init_shareable: return an address space for a memory region,
* creating it if it does not already exist
*
* @root: a #MemoryRegion that routes addresses for the address space
* @name: an address space name. The name is only used for debugging
* output.
*
* This function will return a pointer to an existing AddressSpace
* which was initialized with the specified MemoryRegion, or it will
* create and initialize one if it does not already exist. The ASes
* are reference-counted, so the memory will be freed automatically
* when the AddressSpace is destroyed via address_space_destroy.
*/
AddressSpace *address_space_init_shareable(MemoryRegion *root,
const char *name);
/**
* address_space_destroy: destroy an address space
*
@ -1855,9 +1835,17 @@ IOMMUTLBEntry address_space_get_iotlb_entry(AddressSpace *as, hwaddr addr,
* @len: pointer to length
* @is_write: indicates the transfer direction
*/
MemoryRegion *address_space_translate(AddressSpace *as, hwaddr addr,
hwaddr *xlat, hwaddr *len,
bool is_write);
MemoryRegion *flatview_translate(FlatView *fv,
hwaddr addr, hwaddr *xlat,
hwaddr *len, bool is_write);
static inline MemoryRegion *address_space_translate(AddressSpace *as,
hwaddr addr, hwaddr *xlat,
hwaddr *len, bool is_write)
{
return flatview_translate(address_space_to_flatview(as),
addr, xlat, len, is_write);
}
/* address_space_access_valid: check for validity of accessing an address
* space range
@ -1908,12 +1896,13 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
/* Internal functions, part of the implementation of address_space_read. */
MemTxResult address_space_read_continue(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len, hwaddr addr1, hwaddr l,
MemoryRegion *mr);
MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf, int len);
MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len, hwaddr addr1, hwaddr l,
MemoryRegion *mr);
MemTxResult flatview_read_full(FlatView *fv, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf, int len);
void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr);
static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
@ -1940,8 +1929,8 @@ static inline bool memory_access_is_direct(MemoryRegion *mr, bool is_write)
* @buf: buffer with the data transferred
*/
static inline __attribute__((__always_inline__))
MemTxResult address_space_read(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
uint8_t *buf, int len)
MemTxResult flatview_read(FlatView *fv, hwaddr addr, MemTxAttrs attrs,
uint8_t *buf, int len)
{
MemTxResult result = MEMTX_OK;
hwaddr l, addr1;
@ -1952,22 +1941,29 @@ MemTxResult address_space_read(AddressSpace *as, hwaddr addr, MemTxAttrs attrs,
if (len) {
rcu_read_lock();
l = len;
mr = address_space_translate(as, addr, &addr1, &l, false);
mr = flatview_translate(fv, addr, &addr1, &l, false);
if (len == l && memory_access_is_direct(mr, false)) {
ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
memcpy(buf, ptr, len);
} else {
result = address_space_read_continue(as, addr, attrs, buf, len,
addr1, l, mr);
result = flatview_read_continue(fv, addr, attrs, buf, len,
addr1, l, mr);
}
rcu_read_unlock();
}
} else {
result = address_space_read_full(as, addr, attrs, buf, len);
result = flatview_read_full(fv, addr, attrs, buf, len);
}
return result;
}
static inline MemTxResult address_space_read(AddressSpace *as, hwaddr addr,
MemTxAttrs attrs, uint8_t *buf,
int len)
{
return flatview_read(address_space_to_flatview(as), addr, attrs, buf, len);
}
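A hedged reading of this header rework: the I/O fast path is now phrased in terms of FlatView, with the old AddressSpace entry points kept as inline shims that do one RCU-protected load of as->current_map and delegate. Callers issuing several accesses against one topology can resolve the view once (sketch, not existing code):

    /* rcu_read_lock();
     * FlatView *fv = address_space_to_flatview(as);
     * flatview_read(fv, addr1, attrs, buf1, len1);
     * flatview_read(fv, addr2, attrs, buf2, len2);
     * rcu_read_unlock();
     */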
/**
* address_space_read_cached: read from a cached RAM region
*


@ -21,7 +21,7 @@ typedef struct {
SysBusDevice parent_obj;
/*< public >*/
AddressSpace *source_as;
AddressSpace source_as;
MemoryRegion iomem;
uint32_t base;
MemoryRegion *source_memory;


@ -662,6 +662,7 @@ void spapr_cpu_parse_features(sPAPRMachineState *spapr);
int spapr_hpt_shift_for_ramsize(uint64_t ramsize);
void spapr_reallocate_hpt(sPAPRMachineState *spapr, int shift,
Error **errp);
void spapr_clear_pending_events(sPAPRMachineState *spapr);
/* CPU and LMB DRC release callbacks. */
void spapr_core_release(DeviceState *dev);


@ -257,6 +257,7 @@ int spapr_drc_populate_dt(void *fdt, int fdt_offset, Object *owner,
void spapr_drc_attach(sPAPRDRConnector *drc, DeviceState *d, void *fdt,
int fdt_start_offset, Error **errp);
void spapr_drc_detach(sPAPRDRConnector *drc);
bool spapr_drc_needed(void *opaque);
static inline bool spapr_drc_unplug_requested(sPAPRDRConnector *drc)
{


@ -272,6 +272,7 @@ hwaddr virtio_queue_get_avail_size(VirtIODevice *vdev, int n);
hwaddr virtio_queue_get_used_size(VirtIODevice *vdev, int n);
uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n);
void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx);
void virtio_queue_restore_last_avail_idx(VirtIODevice *vdev, int n);
void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n);
void virtio_queue_update_used_idx(VirtIODevice *vdev, int n);
VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n);


@ -442,4 +442,12 @@
} while(0)
#endif
#define atomic_fetch_inc_nonzero(ptr) ({ \
typeof_strip_qual(*ptr) _oldn = atomic_read(ptr); \
while (_oldn && atomic_cmpxchg(ptr, _oldn, _oldn + 1) != _oldn) { \
_oldn = atomic_read(ptr); \
} \
_oldn; \
})
#endif /* QEMU_ATOMIC_H */
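The new atomic_fetch_inc_nonzero() is a compare-and-swap loop that takes a reference only while the counter is still non-zero; flatview_ref() in memory.c below is its first user. A minimal sketch of the same pattern in plain C11 atomics (the names here are illustrative, not QEMU API):

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Increment *refcnt unless it has already dropped to zero; returns
     * whether a reference was taken. */
    static bool ref_if_alive(atomic_int *refcnt)
    {
        int old = atomic_load(refcnt);
        /* A failed CAS reloads 'old'; retry until the count is observed
         * as zero (object is being torn down) or our increment lands. */
        while (old != 0 &&
               !atomic_compare_exchange_weak(refcnt, &old, old + 1)) {
        }
        return old != 0;
    }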


@ -189,13 +189,13 @@ extern int daemon(int, int);
/* Round number up to multiple. Requires that d be a power of 2 (see
* QEMU_ALIGN_UP for a safer but slower version on arbitrary
* numbers) */
* numbers); works even if d is a smaller type than n. */
#ifndef ROUND_UP
#define ROUND_UP(n,d) (((n) + (d) - 1) & -(d))
#define ROUND_UP(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))
#endif
#ifndef DIV_ROUND_UP
#define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#endif
/*
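The reworked ROUND_UP fixes a silent truncation when d has a narrower type than n: -(d) used to be negated in d's type and then zero-extended, wiping the high bits of n. The 0 ? (n) : (d) trick triggers the usual arithmetic conversions, so the negation happens in the wider type. A self-contained demonstration with hypothetical values:

    #include <inttypes.h>
    #include <stdio.h>

    #define ROUND_UP_OLD(n, d) (((n) + (d) - 1) & -(d))
    #define ROUND_UP_NEW(n, d) (((n) + (d) - 1) & -(0 ? (n) : (d)))

    int main(void)
    {
        uint64_t n = 0x500000123ULL; /* 64-bit value to round up */
        uint32_t d = 0x1000;         /* alignment in a narrower type */

        /* -(d) is a 32-bit mask; zero-extension clears n's high bits */
        printf("old: 0x%" PRIx64 "\n", (uint64_t)ROUND_UP_OLD(n, d)); /* 0x1000 */
        /* the conditional promotes d to uint64_t before negation */
        printf("new: 0x%" PRIx64 "\n", (uint64_t)ROUND_UP_NEW(n, d)); /* 0x500001000 */
        return 0;
    }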


@ -30,6 +30,7 @@ typedef struct DisplaySurface DisplaySurface;
typedef struct DriveInfo DriveInfo;
typedef struct Error Error;
typedef struct EventNotifier EventNotifier;
typedef struct FlatView FlatView;
typedef struct FWCfgEntry FWCfgEntry;
typedef struct FWCfgIoState FWCfgIoState;
typedef struct FWCfgMemState FWCfgMemState;


@ -26,7 +26,7 @@
#include "trace.h"
/* Max amount to allow in rawinput/rawoutput buffers */
/* Max amount to allow in rawinput/encoutput buffers */
#define QIO_CHANNEL_WEBSOCK_MAX_BUFFER 8192
#define QIO_CHANNEL_WEBSOCK_CLIENT_KEY_LEN 24
@ -1006,7 +1006,7 @@ qio_channel_websock_source_prepare(GSource *source,
if (wsource->wioc->rawinput.offset) {
cond |= G_IO_IN;
}
if (wsource->wioc->rawoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
if (wsource->wioc->encoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
cond |= G_IO_OUT;
}
@ -1022,7 +1022,7 @@ qio_channel_websock_source_check(GSource *source)
if (wsource->wioc->rawinput.offset) {
cond |= G_IO_IN;
}
if (wsource->wioc->rawoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
if (wsource->wioc->encoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
cond |= G_IO_OUT;
}
@ -1041,7 +1041,7 @@ qio_channel_websock_source_dispatch(GSource *source,
if (wsource->wioc->rawinput.offset) {
cond |= G_IO_IN;
}
if (wsource->wioc->rawoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
if (wsource->wioc->encoutput.offset < QIO_CHANNEL_WEBSOCK_MAX_BUFFER) {
cond |= G_IO_OUT;
}

memory.c

@ -47,6 +47,8 @@ static QTAILQ_HEAD(memory_listeners, MemoryListener) memory_listeners
static QTAILQ_HEAD(, AddressSpace) address_spaces
= QTAILQ_HEAD_INITIALIZER(address_spaces);
static GHashTable *flat_views;
typedef struct AddrRange AddrRange;
/*
@ -154,7 +156,8 @@ enum ListenerDirection { Forward, Reverse };
/* No need to ref/unref .mr, the FlatRange keeps it alive. */
#define MEMORY_LISTENER_UPDATE_REGION(fr, as, dir, callback, _args...) \
do { \
MemoryRegionSection mrs = section_from_flat_range(fr, as); \
MemoryRegionSection mrs = section_from_flat_range(fr, \
address_space_to_flatview(as)); \
MEMORY_LISTENER_CALL(as, callback, dir, &mrs, ##_args); \
} while(0)
@ -208,7 +211,6 @@ static bool memory_region_ioeventfd_equal(MemoryRegionIoeventfd a,
}
typedef struct FlatRange FlatRange;
typedef struct FlatView FlatView;
/* Range of memory in the global map. Addresses are absolute. */
struct FlatRange {
@ -229,6 +231,8 @@ struct FlatView {
FlatRange *ranges;
unsigned nr;
unsigned nr_allocated;
struct AddressSpaceDispatch *dispatch;
MemoryRegion *root;
};
typedef struct AddressSpaceOps AddressSpaceOps;
@ -237,11 +241,11 @@ typedef struct AddressSpaceOps AddressSpaceOps;
for (var = (view)->ranges; var < (view)->ranges + (view)->nr; ++var)
static inline MemoryRegionSection
section_from_flat_range(FlatRange *fr, AddressSpace *as)
section_from_flat_range(FlatRange *fr, FlatView *fv)
{
return (MemoryRegionSection) {
.mr = fr->mr,
.address_space = as,
.fv = fv,
.offset_within_region = fr->offset_in_region,
.size = fr->addr.size,
.offset_within_address_space = int128_get64(fr->addr.start),
@ -258,12 +262,17 @@ static bool flatrange_equal(FlatRange *a, FlatRange *b)
&& a->readonly == b->readonly;
}
static void flatview_init(FlatView *view)
static FlatView *flatview_new(MemoryRegion *mr_root)
{
FlatView *view;
view = g_new0(FlatView, 1);
view->ref = 1;
view->ranges = NULL;
view->nr = 0;
view->nr_allocated = 0;
view->root = mr_root;
memory_region_ref(mr_root);
trace_flatview_new(view, mr_root);
return view;
}
/* Insert a range into a given position. Caller is responsible for maintaining
@ -287,25 +296,47 @@ static void flatview_destroy(FlatView *view)
{
int i;
trace_flatview_destroy(view, view->root);
if (view->dispatch) {
address_space_dispatch_free(view->dispatch);
}
for (i = 0; i < view->nr; i++) {
memory_region_unref(view->ranges[i].mr);
}
g_free(view->ranges);
memory_region_unref(view->root);
g_free(view);
}
static void flatview_ref(FlatView *view)
static bool flatview_ref(FlatView *view)
{
atomic_inc(&view->ref);
return atomic_fetch_inc_nonzero(&view->ref) > 0;
}
static void flatview_unref(FlatView *view)
{
if (atomic_fetch_dec(&view->ref) == 1) {
flatview_destroy(view);
trace_flatview_destroy_rcu(view, view->root);
assert(view->root);
call_rcu(view, flatview_destroy, rcu);
}
}
FlatView *address_space_to_flatview(AddressSpace *as)
{
return atomic_rcu_read(&as->current_map);
}
AddressSpaceDispatch *flatview_to_dispatch(FlatView *fv)
{
return fv->dispatch;
}
AddressSpaceDispatch *address_space_to_dispatch(AddressSpace *as)
{
return flatview_to_dispatch(address_space_to_flatview(as));
}
static bool can_merge(FlatRange *r1, FlatRange *r2)
{
return int128_eq(addrrange_end(r1->addr), r2->addr.start)
@ -701,13 +732,57 @@ static void render_memory_region(FlatView *view,
}
}
static MemoryRegion *memory_region_get_flatview_root(MemoryRegion *mr)
{
while (mr->enabled) {
if (mr->alias) {
if (!mr->alias_offset && int128_ge(mr->size, mr->alias->size)) {
/* The alias is included in its entirety. Use it as
* the "real" root, so that we can share more FlatViews.
*/
mr = mr->alias;
continue;
}
} else if (!mr->terminates) {
unsigned int found = 0;
MemoryRegion *child, *next = NULL;
QTAILQ_FOREACH(child, &mr->subregions, subregions_link) {
if (child->enabled) {
if (++found > 1) {
next = NULL;
break;
}
if (!child->addr && int128_ge(mr->size, child->size)) {
/* A child is included in its entirety. If it's the only
* enabled one, use it in the hope of finding an alias down the
* way. This will also let us share FlatViews.
*/
next = child;
}
}
}
if (found == 0) {
return NULL;
}
if (next) {
mr = next;
continue;
}
}
return mr;
}
return NULL;
}
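memory_region_get_flatview_root() collapses aliases and sole enabled children down to a canonical terminal region, so address spaces with the same effective topology hash to the same FlatView. A hedged example of the payoff:

    /* Many PCI devices model bus-master DMA as an alias spanning all of
     * system memory.  Every such alias resolves to the same system-memory
     * region here, so all of those address spaces look up the same key in
     * flat_views and reuse one rendered FlatView (and its dispatch tree)
     * instead of each building a private copy.
     */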
/* Render a memory topology into a list of disjoint absolute ranges. */
static FlatView *generate_memory_topology(MemoryRegion *mr)
{
int i;
FlatView *view;
view = g_new(FlatView, 1);
flatview_init(view);
view = flatview_new(mr);
if (mr) {
render_memory_region(view, mr, int128_zero(),
@ -715,6 +790,15 @@ static FlatView *generate_memory_topology(MemoryRegion *mr)
}
flatview_simplify(view);
view->dispatch = address_space_dispatch_new(view);
for (i = 0; i < view->nr; i++) {
MemoryRegionSection mrs =
section_from_flat_range(&view->ranges[i], view);
flatview_add_to_dispatch(view, &mrs);
}
address_space_dispatch_compact(view->dispatch);
g_hash_table_replace(flat_views, mr, view);
return view;
}
@ -740,7 +824,7 @@ static void address_space_add_del_ioeventfds(AddressSpace *as,
fds_new[inew]))) {
fd = &fds_old[iold];
section = (MemoryRegionSection) {
.address_space = as,
.fv = address_space_to_flatview(as),
.offset_within_address_space = int128_get64(fd->addr.start),
.size = fd->addr.size,
};
@ -753,7 +837,7 @@ static void address_space_add_del_ioeventfds(AddressSpace *as,
fds_old[iold]))) {
fd = &fds_new[inew];
section = (MemoryRegionSection) {
.address_space = as,
.fv = address_space_to_flatview(as),
.offset_within_address_space = int128_get64(fd->addr.start),
.size = fd->addr.size,
};
@ -772,8 +856,12 @@ static FlatView *address_space_get_flatview(AddressSpace *as)
FlatView *view;
rcu_read_lock();
view = atomic_rcu_read(&as->current_map);
flatview_ref(view);
do {
view = address_space_to_flatview(as);
/* If somebody has replaced as->current_map concurrently,
* flatview_ref returns false.
*/
} while (!flatview_ref(view));
rcu_read_unlock();
return view;
}
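flatview_ref() now reporting failure, together with this retry loop, closes a resurrection race on dying views. A hedged walk-through of the interleaving it guards against:

    /* 1. Reader enters rcu_read_lock() and loads as->current_map.
     * 2. The BQL holder installs a replacement map and drops the last
     *    reference; the count reaches zero and the old view is queued
     *    for destruction after the RCU grace period.
     * 3. The reader's RCU section keeps the memory valid, but
     *    atomic_fetch_inc_nonzero() refuses to revive a zero count, so
     *    flatview_ref() returns false and the loop re-reads current_map,
     *    this time finding the replacement.
     */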
@ -879,18 +967,81 @@ static void address_space_update_topology_pass(AddressSpace *as,
}
}
static void address_space_update_topology(AddressSpace *as)
static void flatviews_init(void)
{
FlatView *old_view = address_space_get_flatview(as);
FlatView *new_view = generate_memory_topology(as->root);
static FlatView *empty_view;
address_space_update_topology_pass(as, old_view, new_view, false);
address_space_update_topology_pass(as, old_view, new_view, true);
if (flat_views) {
return;
}
flat_views = g_hash_table_new_full(g_direct_hash, g_direct_equal, NULL,
(GDestroyNotify) flatview_unref);
if (!empty_view) {
empty_view = generate_memory_topology(NULL);
/* We keep it alive forever in the global variable. */
flatview_ref(empty_view);
} else {
g_hash_table_replace(flat_views, NULL, empty_view);
flatview_ref(empty_view);
}
}
static void flatviews_reset(void)
{
AddressSpace *as;
if (flat_views) {
g_hash_table_unref(flat_views);
flat_views = NULL;
}
flatviews_init();
/* Render unique FVs */
QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
MemoryRegion *physmr = memory_region_get_flatview_root(as->root);
if (g_hash_table_lookup(flat_views, physmr)) {
continue;
}
generate_memory_topology(physmr);
}
}
static void address_space_set_flatview(AddressSpace *as)
{
FlatView *old_view = address_space_to_flatview(as);
MemoryRegion *physmr = memory_region_get_flatview_root(as->root);
FlatView *new_view = g_hash_table_lookup(flat_views, physmr);
assert(new_view);
if (old_view == new_view) {
return;
}
if (old_view) {
flatview_ref(old_view);
}
flatview_ref(new_view);
if (!QTAILQ_EMPTY(&as->listeners)) {
FlatView tmpview = { .nr = 0 }, *old_view2 = old_view;
if (!old_view2) {
old_view2 = &tmpview;
}
address_space_update_topology_pass(as, old_view2, new_view, false);
address_space_update_topology_pass(as, old_view2, new_view, true);
}
/* Writes are protected by the BQL. */
atomic_rcu_set(&as->current_map, new_view);
call_rcu(old_view, flatview_unref, rcu);
if (old_view) {
flatview_unref(old_view);
}
/* Note that all the old MemoryRegions are still alive up to this
* point. This relieves most MemoryListeners from the need to
@ -898,9 +1049,20 @@ static void address_space_update_topology(AddressSpace *as)
* outside the iothread mutex, in which case precise reference
* counting is necessary.
*/
flatview_unref(old_view);
if (old_view) {
flatview_unref(old_view);
}
}
address_space_update_ioeventfds(as);
static void address_space_update_topology(AddressSpace *as)
{
MemoryRegion *physmr = memory_region_get_flatview_root(as->root);
flatviews_init();
if (!g_hash_table_lookup(flat_views, physmr)) {
generate_memory_topology(physmr);
}
address_space_set_flatview(as);
}
void memory_region_transaction_begin(void)
@ -919,10 +1081,13 @@ void memory_region_transaction_commit(void)
--memory_region_transaction_depth;
if (!memory_region_transaction_depth) {
if (memory_region_update_pending) {
flatviews_reset();
MEMORY_LISTENER_CALL_GLOBAL(begin, Forward);
QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
address_space_update_topology(as);
address_space_set_flatview(as);
address_space_update_ioeventfds(as);
}
memory_region_update_pending = false;
MEMORY_LISTENER_CALL_GLOBAL(commit, Forward);
@ -1726,7 +1891,7 @@ void memory_region_notify_one(IOMMUNotifier *notifier,
* Skip the notification if the notification does not overlap
* with registered range.
*/
if (notifier->start > entry->iova + entry->addr_mask + 1 ||
if (notifier->start > entry->iova + entry->addr_mask ||
notifier->end < entry->iova) {
return;
}
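The one-character change above fixes an off-by-one: addr_mask is inclusive, so the last byte an entry covers is iova + addr_mask, not iova + addr_mask + 1. Worked example with hypothetical values:

    /* IOTLB entry: iova = 0x10000, addr_mask = 0xfff
     *   -> covers bytes [0x10000, 0x10fff]
     * Notifier watching [0x11000, 0x11fff] does not overlap it:
     *   old test: 0x11000 > 0x10000 + 0xfff + 1  -> false, spurious notify
     *   new test: 0x11000 > 0x10000 + 0xfff      -> true, correctly skipped
     */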
@ -1835,7 +2000,7 @@ void memory_region_sync_dirty_bitmap(MemoryRegion *mr)
view = address_space_get_flatview(as);
FOR_EACH_FLAT_RANGE(fr, view) {
if (fr->mr == mr) {
MemoryRegionSection mrs = section_from_flat_range(fr, as);
MemoryRegionSection mrs = section_from_flat_range(fr, view);
listener->log_sync(listener, &mrs);
}
}
@ -1938,7 +2103,7 @@ static void memory_region_update_coalesced_range_as(MemoryRegion *mr, AddressSpa
FOR_EACH_FLAT_RANGE(fr, view) {
if (fr->mr == mr) {
section = (MemoryRegionSection) {
.address_space = as,
.fv = view,
.offset_within_address_space = int128_get64(fr->addr.start),
.size = fr->addr.size,
};
@ -2289,7 +2454,7 @@ static MemoryRegionSection memory_region_find_rcu(MemoryRegion *mr,
}
range = addrrange_make(int128_make64(addr), int128_make64(size));
view = atomic_rcu_read(&as->current_map);
view = address_space_to_flatview(as);
fr = flatview_lookup(view, range);
if (!fr) {
return ret;
@ -2300,7 +2465,7 @@ static MemoryRegionSection memory_region_find_rcu(MemoryRegion *mr,
}
ret.mr = fr->mr;
ret.address_space = as;
ret.fv = view;
range = addrrange_intersection(range, fr->addr);
ret.offset_within_region = fr->offset_in_region;
ret.offset_within_region += int128_get64(int128_sub(range.start,
@ -2349,7 +2514,8 @@ void memory_global_dirty_log_sync(void)
view = address_space_get_flatview(as);
FOR_EACH_FLAT_RANGE(fr, view) {
if (fr->dirty_log_mask) {
MemoryRegionSection mrs = section_from_flat_range(fr, as);
MemoryRegionSection mrs = section_from_flat_range(fr, view);
listener->log_sync(listener, &mrs);
}
}
@ -2434,7 +2600,7 @@ static void listener_add_address_space(MemoryListener *listener,
FOR_EACH_FLAT_RANGE(fr, view) {
MemoryRegionSection section = {
.mr = fr->mr,
.address_space = as,
.fv = view,
.offset_within_region = fr->offset_in_region,
.size = fr->addr.size,
.offset_within_address_space = int128_get64(fr->addr.start),
@ -2610,69 +2776,36 @@ void memory_region_invalidate_mmio_ptr(MemoryRegion *mr, hwaddr offset,
void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
{
memory_region_ref(root);
memory_region_transaction_begin();
as->ref_count = 1;
as->root = root;
as->malloced = false;
as->current_map = g_new(FlatView, 1);
flatview_init(as->current_map);
as->current_map = NULL;
as->ioeventfd_nb = 0;
as->ioeventfds = NULL;
QTAILQ_INIT(&as->listeners);
QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
as->name = g_strdup(name ? name : "anonymous");
address_space_init_dispatch(as);
memory_region_update_pending |= root->enabled;
memory_region_transaction_commit();
address_space_update_topology(as);
address_space_update_ioeventfds(as);
}
static void do_address_space_destroy(AddressSpace *as)
{
bool do_free = as->malloced;
address_space_destroy_dispatch(as);
assert(QTAILQ_EMPTY(&as->listeners));
flatview_unref(as->current_map);
g_free(as->name);
g_free(as->ioeventfds);
memory_region_unref(as->root);
if (do_free) {
g_free(as);
}
}
AddressSpace *address_space_init_shareable(MemoryRegion *root, const char *name)
{
AddressSpace *as;
QTAILQ_FOREACH(as, &address_spaces, address_spaces_link) {
if (root == as->root && as->malloced) {
as->ref_count++;
return as;
}
}
as = g_malloc0(sizeof *as);
address_space_init(as, root, name);
as->malloced = true;
return as;
}
void address_space_destroy(AddressSpace *as)
{
MemoryRegion *root = as->root;
as->ref_count--;
if (as->ref_count) {
return;
}
/* Flush out anything from MemoryListeners listening in on this */
memory_region_transaction_begin();
as->root = NULL;
memory_region_transaction_commit();
QTAILQ_REMOVE(&address_spaces, as, address_spaces_link);
address_space_unregister(as);
/* At this point, as->dispatch and as->current_map are dummy
* entries that the guest should never use. Wait for the old


@ -161,6 +161,11 @@ int blk_mig_active(void)
return !QSIMPLEQ_EMPTY(&block_mig_state.bmds_list);
}
int blk_mig_bulk_active(void)
{
return blk_mig_active() && !block_mig_state.bulk_completed;
}
uint64_t blk_mig_bytes_transferred(void)
{
BlkMigDevState *bmds;


@ -16,6 +16,7 @@
#ifdef CONFIG_LIVE_BLOCK_MIGRATION
int blk_mig_active(void);
int blk_mig_bulk_active(void);
uint64_t blk_mig_bytes_transferred(void);
uint64_t blk_mig_bytes_remaining(void);
uint64_t blk_mig_bytes_total(void);
@ -25,6 +26,12 @@ static inline int blk_mig_active(void)
{
return false;
}
static inline int blk_mig_bulk_active(void)
{
return false;
}
static inline uint64_t blk_mig_bytes_transferred(void)
{
return 0;


@ -46,6 +46,7 @@
#include "exec/ram_addr.h"
#include "qemu/rcu_queue.h"
#include "migration/colo.h"
#include "migration/block.h"
/***********************************************************/
/* ram save/restore */
@ -623,7 +624,10 @@ static void migration_bitmap_sync(RAMState *rs)
/ (end_time - rs->time_last_bitmap_sync);
bytes_xfer_now = ram_counters.transferred;
if (migrate_auto_converge()) {
/* During block migration the auto-converge logic incorrectly detects
* that ram migration makes no progress. Avoid this by disabling the
* throttling logic during the bulk phase of block migration. */
if (migrate_auto_converge() && !blk_mig_bulk_active()) {
/* The following detection logic can be refined later. For now:
Check to see if the dirtied bytes is 50% more than the approx.
amount of bytes that just got transferred since the last time we


@ -111,12 +111,12 @@ static int nbd_send_option_request(QIOChannel *ioc, uint32_t opt,
stl_be_p(&req.length, len);
if (nbd_write(ioc, &req, sizeof(req), errp) < 0) {
error_prepend(errp, "Failed to send option request header");
error_prepend(errp, "Failed to send option request header: ");
return -1;
}
if (len && nbd_write(ioc, (char *) data, len, errp) < 0) {
error_prepend(errp, "Failed to send option request data");
error_prepend(errp, "Failed to send option request data: ");
return -1;
}
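Every hunk in this file appends a trailing ": " because error_prepend() pastes its message directly in front of the existing error text, with no separator of its own. Illustration (the inner error text is hypothetical):

    /* error_setg(errp, "Unable to write to socket");
     * error_prepend(errp, "Failed to send option request header");
     *   -> "Failed to send option request headerUnable to write to socket"
     * error_prepend(errp, "Failed to send option request header: ");
     *   -> "Failed to send option request header: Unable to write to socket"
     */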
@ -145,7 +145,7 @@ static int nbd_receive_option_reply(QIOChannel *ioc, uint32_t opt,
{
QEMU_BUILD_BUG_ON(sizeof(*reply) != 20);
if (nbd_read(ioc, reply, sizeof(*reply), errp) < 0) {
error_prepend(errp, "failed to read option reply");
error_prepend(errp, "failed to read option reply: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -198,7 +198,7 @@ static int nbd_handle_reply_err(QIOChannel *ioc, nbd_opt_reply *reply,
msg = g_malloc(reply->length + 1);
if (nbd_read(ioc, msg, reply->length, errp) < 0) {
error_prepend(errp, "failed to read option error 0x%" PRIx32
" (%s) message",
" (%s) message: ",
reply->type, nbd_rep_lookup(reply->type));
goto cleanup;
}
@ -309,7 +309,7 @@ static int nbd_receive_list(QIOChannel *ioc, const char *want, bool *match,
return -1;
}
if (nbd_read(ioc, &namelen, sizeof(namelen), errp) < 0) {
error_prepend(errp, "failed to read option name length");
error_prepend(errp, "failed to read option name length: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -322,7 +322,8 @@ static int nbd_receive_list(QIOChannel *ioc, const char *want, bool *match,
}
if (namelen != strlen(want)) {
if (nbd_drop(ioc, len, errp) < 0) {
error_prepend(errp, "failed to skip export name with wrong length");
error_prepend(errp,
"failed to skip export name with wrong length: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -331,14 +332,14 @@ static int nbd_receive_list(QIOChannel *ioc, const char *want, bool *match,
assert(namelen < sizeof(name));
if (nbd_read(ioc, name, namelen, errp) < 0) {
error_prepend(errp, "failed to read export name");
error_prepend(errp, "failed to read export name: ");
nbd_send_opt_abort(ioc);
return -1;
}
name[namelen] = '\0';
len -= namelen;
if (nbd_drop(ioc, len, errp) < 0) {
error_prepend(errp, "failed to read export description");
error_prepend(errp, "failed to read export description: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -424,7 +425,7 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
return -1;
}
if (nbd_read(ioc, &type, sizeof(type), errp) < 0) {
error_prepend(errp, "failed to read info type");
error_prepend(errp, "failed to read info type: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -439,13 +440,13 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
return -1;
}
if (nbd_read(ioc, &info->size, sizeof(info->size), errp) < 0) {
error_prepend(errp, "failed to read info size");
error_prepend(errp, "failed to read info size: ");
nbd_send_opt_abort(ioc);
return -1;
}
be64_to_cpus(&info->size);
if (nbd_read(ioc, &info->flags, sizeof(info->flags), errp) < 0) {
error_prepend(errp, "failed to read info flags");
error_prepend(errp, "failed to read info flags: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -462,7 +463,7 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
}
if (nbd_read(ioc, &info->min_block, sizeof(info->min_block),
errp) < 0) {
error_prepend(errp, "failed to read info minimum block size");
error_prepend(errp, "failed to read info minimum block size: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -475,7 +476,8 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
}
if (nbd_read(ioc, &info->opt_block, sizeof(info->opt_block),
errp) < 0) {
error_prepend(errp, "failed to read info preferred block size");
error_prepend(errp,
"failed to read info preferred block size: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -489,7 +491,7 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
}
if (nbd_read(ioc, &info->max_block, sizeof(info->max_block),
errp) < 0) {
error_prepend(errp, "failed to read info maximum block size");
error_prepend(errp, "failed to read info maximum block size: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -501,7 +503,7 @@ static int nbd_opt_go(QIOChannel *ioc, const char *wantname,
default:
trace_nbd_opt_go_info_unknown(type, nbd_info_lookup(type));
if (nbd_drop(ioc, len, errp) < 0) {
error_prepend(errp, "Failed to read info payload");
error_prepend(errp, "Failed to read info payload: ");
nbd_send_opt_abort(ioc);
return -1;
}
@ -624,7 +626,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
}
if (nbd_read(ioc, buf, 8, errp) < 0) {
error_prepend(errp, "Failed to read data");
error_prepend(errp, "Failed to read data: ");
goto fail;
}
@ -643,7 +645,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
}
if (nbd_read(ioc, &magic, sizeof(magic), errp) < 0) {
error_prepend(errp, "Failed to read magic");
error_prepend(errp, "Failed to read magic: ");
goto fail;
}
magic = be64_to_cpu(magic);
@ -655,7 +657,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
bool fixedNewStyle = false;
if (nbd_read(ioc, &globalflags, sizeof(globalflags), errp) < 0) {
error_prepend(errp, "Failed to read server flags");
error_prepend(errp, "Failed to read server flags: ");
goto fail;
}
globalflags = be16_to_cpu(globalflags);
@ -671,7 +673,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
/* client requested flags */
clientflags = cpu_to_be32(clientflags);
if (nbd_write(ioc, &clientflags, sizeof(clientflags), errp) < 0) {
error_prepend(errp, "Failed to send clientflags field");
error_prepend(errp, "Failed to send clientflags field: ");
goto fail;
}
if (tlscreds) {
@ -723,13 +725,13 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
/* Read the response */
if (nbd_read(ioc, &info->size, sizeof(info->size), errp) < 0) {
error_prepend(errp, "Failed to read export length");
error_prepend(errp, "Failed to read export length: ");
goto fail;
}
be64_to_cpus(&info->size);
if (nbd_read(ioc, &info->flags, sizeof(info->flags), errp) < 0) {
error_prepend(errp, "Failed to read export flags");
error_prepend(errp, "Failed to read export flags: ");
goto fail;
}
be16_to_cpus(&info->flags);
@ -746,13 +748,13 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
}
if (nbd_read(ioc, &info->size, sizeof(info->size), errp) < 0) {
error_prepend(errp, "Failed to read export length");
error_prepend(errp, "Failed to read export length: ");
goto fail;
}
be64_to_cpus(&info->size);
if (nbd_read(ioc, &oldflags, sizeof(oldflags), errp) < 0) {
error_prepend(errp, "Failed to read export flags");
error_prepend(errp, "Failed to read export flags: ");
goto fail;
}
be32_to_cpus(&oldflags);
@ -768,7 +770,7 @@ int nbd_receive_negotiate(QIOChannel *ioc, const char *name,
trace_nbd_receive_negotiate_size_flags(info->size, info->flags);
if (zeroes && nbd_drop(ioc, 124, errp) < 0) {
error_prepend(errp, "Failed to read reserved block");
error_prepend(errp, "Failed to read reserved block: ");
goto fail;
}
rc = 0;
@ -943,12 +945,6 @@ ssize_t nbd_receive_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
reply->handle = ldq_be_p(buf + 8);
reply->error = nbd_errno_to_system_errno(reply->error);
if (reply->error == ESHUTDOWN) {
/* This works even on mingw which lacks a native ESHUTDOWN */
error_setg(errp, "server shutting down");
return -EINVAL;
}
trace_nbd_receive_reply(magic, reply->error, reply->handle);
if (magic != NBD_REPLY_MAGIC) {


@ -393,6 +393,10 @@ static int nbd_negotiate_handle_info(NBDClient *client, uint32_t length,
msg = "name length is incorrect";
goto invalid;
}
if (namelen >= sizeof(name)) {
msg = "name too long for qemu";
goto invalid;
}
if (nbd_read(client->ioc, name, namelen, errp) < 0) {
return -EIO;
}
@ -430,6 +434,7 @@ static int nbd_negotiate_handle_info(NBDClient *client, uint32_t length,
break;
}
}
assert(length == 0);
exp = nbd_export_find(name);
if (!exp) {
@ -440,7 +445,7 @@ static int nbd_negotiate_handle_info(NBDClient *client, uint32_t length,
/* Don't bother sending NBD_INFO_NAME unless client requested it */
if (sendname) {
rc = nbd_negotiate_send_info(client, opt, NBD_INFO_NAME, length, name,
rc = nbd_negotiate_send_info(client, opt, NBD_INFO_NAME, namelen, name,
errp);
if (rc < 0) {
return rc;
@ -661,6 +666,12 @@ static int nbd_negotiate_options(NBDClient *client, uint16_t myflags,
}
length = be32_to_cpu(length);
if (length > NBD_MAX_BUFFER_SIZE) {
error_setg(errp, "len (%" PRIu32" ) is larger than max len (%u)",
length, NBD_MAX_BUFFER_SIZE);
return -EINVAL;
}
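Capping the client-supplied option length is a denial-of-service fix: length arrives as a raw 32-bit value from the wire, and without the bound a client could make the server buffer or skip up to 4 GiB per option. Worked bound (hypothetical request):

    /* Client sends length = 0xffffffff (~4 GiB):
     *   before: the server attempts to process the whole payload -> DoS
     *   after:  4294967295 > NBD_MAX_BUFFER_SIZE -> error out and drop
     */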
trace_nbd_negotiate_options_check_option(option,
nbd_opt_lookup(option));
if (client->tlscreds &&


@ -369,7 +369,7 @@ static NetSocketState *net_socket_fd_init_dgram(NetClientState *peer,
net_socket_read_poll(s, true);
/* mcast: save bound address as dst */
if (is_connected) {
if (is_connected && mcast != NULL) {
s->dgram_dst = saddr;
snprintf(nc->info_str, sizeof(nc->info_str),
"socket: fd=%d (cloned mcast=%s:%d)",
@ -674,8 +674,8 @@ int net_init_socket(const Netdev *netdev, const char *name,
assert(netdev->type == NET_CLIENT_DRIVER_SOCKET);
sock = &netdev->u.socket;
if (sock->has_listen + sock->has_connect + sock->has_mcast +
sock->has_udp > 1) {
if (sock->has_fd + sock->has_listen + sock->has_connect + sock->has_mcast +
sock->has_udp != 1) {
error_report("exactly one of listen=, connect=, mcast= or udp="
" is required");
return -1;
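The rewritten condition leans on each has_* flag being 0 or 1, so the sum counts how many of the mutually exclusive options were supplied; demanding exactly one (rather than at most one) also rejects the previously unchecked case of none, and fd= now participates in the count:

    /* sum == 0 -> rejected now (silently accepted before this fix)
     * sum == 1 -> exactly one transport mode chosen, OK
     * sum  > 1 -> conflicting modes, rejected (as before)
     */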


@ -133,7 +133,7 @@ struct ccw1 {
__u8 flags;
__u16 count;
__u32 cda;
} __attribute__ ((packed));
} __attribute__ ((packed, aligned(8)));
#define CCW_FLAG_DC 0x80
#define CCW_FLAG_CC 0x40
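packed alone lets the compiler place the structure on any byte boundary, while the s390 channel subsystem requires channel command words on doubleword boundaries, hence the added aligned(8). A hedged compile-time check (the leading command-code byte is assumed; the hunk only shows the tail of the struct):

    struct ccw1_sketch {
        unsigned char  cmd_code; /* assumed field, not visible in the hunk */
        unsigned char  flags;
        unsigned short count;
        unsigned int   cda;
    } __attribute__((packed, aligned(8)));

    _Static_assert(_Alignof(struct ccw1_sketch) == 8,
                   "CCWs must stay doubleword-aligned");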


@ -187,7 +187,6 @@ ERROR_WHITELIST = [
{'log':r"Device [\w.,-]+ can not be dynamically instantiated"},
{'log':r"Platform Bus: Can not fit MMIO region of size "},
# other more specific errors we will ignore:
{'device':'allwinner-a10', 'log':"Unsupported NIC model:"},
{'device':'.*-spapr-cpu-core', 'log':r"CPU core type should be"},
{'log':r"MSI(-X)? is not supported by interrupt controller"},
{'log':r"pxb-pcie? devices cannot reside on a PCIe? bus"},


@ -20,6 +20,10 @@ git checkout "v${version}"
git submodule update --init
(cd roms/seabios && git describe --tags --long --dirty > .version)
rm -rf .git roms/*/.git dtc/.git pixman/.git
# FIXME: The following line is a workaround for avoiding filename collisions
# when unpacking u-boot sources on case-insensitive filesystems. Once we
# update to something with u-boot commit 610eec7f0 we can drop this line.
tar cfj roms/u-boot.tar.bz2 -C roms u-boot && rm -rf roms/u-boot
popd
tar cfj ${destination}.tar.bz2 ${destination}
rm -rf ${destination}


@ -59,6 +59,27 @@ socreate(Slirp *slirp)
return(so);
}
/*
* Remove references to so from the given message queue.
*/
static void
soqfree(struct socket *so, struct quehead *qh)
{
struct mbuf *ifq;
for (ifq = (struct mbuf *) qh->qh_link;
(struct quehead *) ifq != qh;
ifq = ifq->ifq_next) {
if (ifq->ifq_so == so) {
struct mbuf *ifm;
ifq->ifq_so = NULL;
for (ifm = ifq->ifs_next; ifm != ifq; ifm = ifm->ifs_next) {
ifm->ifq_so = NULL;
}
}
}
}
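Beyond deduplicating the two queue walks, soqfree() fixes a gap: the old loops cleared ifq_so only on the head mbuf of each queue entry, leaving packets further down the per-session chain pointing at the freed socket:

    /* Layout being scrubbed (one queue entry per session):
     *   if_fastq / if_batchq --ifq_next--> head mbuf --ifs_next--> mbuf ...
     * The inner ifs_next loop now NULLs the back-pointer on every mbuf
     * in the session ring, not just the head.
     */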
/*
* remque and free a socket, clobber cache
*/
@ -66,23 +87,9 @@ void
sofree(struct socket *so)
{
Slirp *slirp = so->slirp;
struct mbuf *ifm;
for (ifm = (struct mbuf *) slirp->if_fastq.qh_link;
(struct quehead *) ifm != &slirp->if_fastq;
ifm = ifm->ifq_next) {
if (ifm->ifq_so == so) {
ifm->ifq_so = NULL;
}
}
for (ifm = (struct mbuf *) slirp->if_batchq.qh_link;
(struct quehead *) ifm != &slirp->if_batchq;
ifm = ifm->ifq_next) {
if (ifm->ifq_so == so) {
ifm->ifq_so = NULL;
}
}
soqfree(so, &slirp->if_fastq);
soqfree(so, &slirp->if_batchq);
if (so->so_emu==EMU_RSH && so->extra) {
sofree(so->extra);


@ -40,3 +40,4 @@ stub-obj-y += pc_madt_cpu_entry.o
stub-obj-y += vmgenid.o
stub-obj-y += xen-common.o
stub-obj-y += xen-hvm.o
stub-obj-y += pci-host-piix.o


@ -0,0 +1,6 @@
#include "qemu/osdep.h"
#include "hw/i386/pc.h"
PCIBus *find_i440fx(void)
{
return NULL;
}


@ -650,6 +650,9 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
CPUARMState *env = &cpu->env;
int pagebits;
Error *local_err = NULL;
#ifndef CONFIG_USER_ONLY
AddressSpace *as;
#endif
cpu_exec_realizefn(cs, &local_err);
if (local_err != NULL) {
@ -835,19 +838,17 @@ static void arm_cpu_realizefn(DeviceState *dev, Error **errp)
}
if (cpu->has_el3) {
AddressSpace *as;
as = g_new0(AddressSpace, 1);
if (!cpu->secure_memory) {
cpu->secure_memory = cs->memory;
}
as = address_space_init_shareable(cpu->secure_memory,
"cpu-secure-memory");
address_space_init(as, cpu->secure_memory, "cpu-secure-memory");
cpu_address_space_init(cs, as, ARMASIdx_S);
}
cpu_address_space_init(cs,
address_space_init_shareable(cs->memory,
"cpu-memory"),
ARMASIdx_NS);
as = g_new0(AddressSpace, 1);
address_space_init(as, cs->memory, "cpu-memory");
cpu_address_space_init(cs, as, ARMASIdx_NS);
#endif
qemu_init_vcpu(cs);
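This pattern change follows from the removal of address_space_init_shareable() earlier in this compare: sharing no longer happens at the AddressSpace layer by refcounting, so each CPU simply owns its address space outright:

    /* old (API deleted in this compare):
     *   cpu_address_space_init(cs,
     *       address_space_init_shareable(cs->memory, "cpu-memory"),
     *       ARMASIdx_NS);
     * new: one private AddressSpace per CPU; deduplication now happens a
     * layer down, where identical topologies share a cached FlatView.
     */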


@ -2217,29 +2217,34 @@ static void disas_ldst_pair(DisasContext *s, uint32_t insn)
} else {
do_fp_st(s, rt, tcg_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
if (is_load) {
do_gpr_ld(s, tcg_rt, tcg_addr, size, is_signed, false,
false, 0, false, false);
} else {
do_gpr_st(s, tcg_rt, tcg_addr, size,
false, 0, false, false);
}
}
tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
if (is_vector) {
tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
if (is_load) {
do_fp_ld(s, rt2, tcg_addr, size);
} else {
do_fp_st(s, rt2, tcg_addr, size);
}
} else {
TCGv_i64 tcg_rt = cpu_reg(s, rt);
TCGv_i64 tcg_rt2 = cpu_reg(s, rt2);
if (is_load) {
TCGv_i64 tmp = tcg_temp_new_i64();
/* Do not modify tcg_rt before recognizing any exception
* from the second load.
*/
do_gpr_ld(s, tmp, tcg_addr, size, is_signed, false,
false, 0, false, false);
tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
do_gpr_ld(s, tcg_rt2, tcg_addr, size, is_signed, false,
false, 0, false, false);
tcg_gen_mov_i64(tcg_rt, tmp);
tcg_temp_free_i64(tmp);
} else {
do_gpr_st(s, tcg_rt, tcg_addr, size,
false, 0, false, false);
tcg_gen_addi_i64(tcg_addr, tcg_addr, 1 << size);
do_gpr_st(s, tcg_rt2, tcg_addr, size,
false, 0, false, false);
}


@ -7858,9 +7858,27 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
TCGv_i32 tmp2 = tcg_temp_new_i32();
TCGv_i64 t64 = tcg_temp_new_i64();
gen_aa32_ld_i64(s, t64, addr, get_mem_index(s), opc);
/* For AArch32, architecturally the 32-bit word at the lowest
* address is always Rt and the one at addr+4 is Rt2, even if
* the CPU is big-endian. That means we don't want to do a
* gen_aa32_ld_i64(), which invokes gen_aa32_frob64() as if
* for an architecturally 64-bit access, but instead do a
* 64-bit access using MO_BE if appropriate and then split
* the two halves.
* This only makes a difference for BE32 user-mode, where
* frob64() must not flip the two halves of the 64-bit data
* but this code must treat BE32 user-mode like BE32 system.
*/
TCGv taddr = gen_aa32_addr(s, addr, opc);
tcg_gen_qemu_ld_i64(t64, taddr, get_mem_index(s), opc);
tcg_temp_free(taddr);
tcg_gen_mov_i64(cpu_exclusive_val, t64);
tcg_gen_extr_i64_i32(tmp, tmp2, t64);
if (s->be_data == MO_BE) {
tcg_gen_extr_i64_i32(tmp2, tmp, t64);
} else {
tcg_gen_extr_i64_i32(tmp, tmp2, t64);
}
tcg_temp_free_i64(t64);
store_reg(s, rt2, tmp2);
@ -7909,15 +7927,26 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
TCGv_i64 n64 = tcg_temp_new_i64();
t2 = load_reg(s, rt2);
tcg_gen_concat_i32_i64(n64, t1, t2);
/* For AArch32, architecturally the 32-bit word at the lowest
* address is always Rt and the one at addr+4 is Rt2, even if
* the CPU is big-endian. Since we're going to treat this as a
* single 64-bit BE store, we need to put the two halves in the
* opposite order for BE to LE, so that they end up in the right
* places.
* We don't want gen_aa32_frob64() because that does the wrong
* thing for BE32 usermode.
*/
if (s->be_data == MO_BE) {
tcg_gen_concat_i32_i64(n64, t2, t1);
} else {
tcg_gen_concat_i32_i64(n64, t1, t2);
}
tcg_temp_free_i32(t2);
gen_aa32_frob64(s, n64);
tcg_gen_atomic_cmpxchg_i64(o64, taddr, cpu_exclusive_val, n64,
get_mem_index(s), opc);
tcg_temp_free_i64(n64);
gen_aa32_frob64(s, o64);
tcg_gen_setcond_i64(TCG_COND_NE, o64, o64, cpu_exclusive_val);
tcg_gen_extrl_i64_i32(t0, o64);


@ -3702,10 +3702,11 @@ static void x86_cpu_realizefn(DeviceState *dev, Error **errp)
#ifndef CONFIG_USER_ONLY
if (tcg_enabled()) {
AddressSpace *as_normal = address_space_init_shareable(cs->memory,
"cpu-memory");
AddressSpace *as_normal = g_new0(AddressSpace, 1);
AddressSpace *as_smm = g_new(AddressSpace, 1);
address_space_init(as_normal, cs->memory, "cpu-memory");
cpu->cpu_as_mem = g_new(MemoryRegion, 1);
cpu->cpu_as_root = g_new(MemoryRegion, 1);


@ -942,6 +942,7 @@ void nios2_tcg_init(void)
int i;
cpu_env = tcg_global_reg_new_ptr(TCG_AREG0, "env");
tcg_ctx.tcg_env = cpu_env;
for (i = 0; i < NUM_CORE_REGS; i++) {
cpu_R[i] = tcg_global_mem_new(cpu_env,


@ -141,7 +141,7 @@ void ppc_set_compat(PowerPCCPU *cpu, uint32_t compat_pvr, Error **errp)
cpu_synchronize_state(CPU(cpu));
if (kvm_enabled() && cpu->compat_pvr != compat_pvr) {
int ret = kvmppc_set_compat(cpu, cpu->compat_pvr);
int ret = kvmppc_set_compat(cpu, compat_pvr);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Unable to set CPU compatibility mode in KVM");

View File

@ -527,7 +527,6 @@ static uint16_t default_GEN13_GA1[] = {
#define default_GEN13_GA2 EmptyFeat
static uint16_t default_GEN14_GA1[] = {
S390_FEAT_ADAPTER_INT_SUPPRESSION,
S390_FEAT_INSTRUCTION_EXEC_PROT,
S390_FEAT_GUARDED_STORAGE,
S390_FEAT_VECTOR_PACKED_DECIMAL,


@ -308,8 +308,13 @@ int kvm_arch_init(MachineState *ms, KVMState *s)
}
}
/* Try to enable AIS facility */
kvm_vm_enable_cap(s, KVM_CAP_S390_AIS, 0);
/*
* The migration interface for ais was introduced with kernel 4.13
* but the capability itself had been active since 4.12. As migration
* support is considered necessary let's disable ais in the 2.10
* machine.
*/
/* kvm_vm_enable_cap(s, KVM_CAP_S390_AIS, 0); */
qemu_mutex_init(&qemu_sigp_mutex);


@ -117,15 +117,15 @@ _export_nbd_snapshot sn1
echo
echo "== verifying the exported snapshot with patterns, method 1 =="
$QEMU_IO_NBD -c 'read -P 0xa 0x1000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -c 'read -P 0xb 0x2000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -r -c 'read -P 0xa 0x1000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -r -c 'read -P 0xb 0x2000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
_export_nbd_snapshot1 sn1
echo
echo "== verifying the exported snapshot with patterns, method 2 =="
$QEMU_IO_NBD -c 'read -P 0xa 0x1000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -c 'read -P 0xb 0x2000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -r -c 'read -P 0xa 0x1000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IO_NBD -r -c 'read -P 0xb 0x2000 0x1000' "$nbd_snapshot_img" | _filter_qemu_io
$QEMU_IMG convert "$TEST_IMG" -l sn1 -O qcow2 "$converted_image"


@ -69,13 +69,15 @@ fi
# in B
CREATION_SIZE=$((2 * 1024 * 1024 - 48 * 1024))
# 512 is the actual test -- but it's good to test 64k as well, just to be sure.
for cluster_size in 512 64k; do
# in kB
for GROWTH_SIZE in 16 48 80; do
for create_mode in off metadata falloc full; do
for growth_mode in off metadata falloc full; do
echo "--- growth_size=$GROWTH_SIZE create_mode=$create_mode growth_mode=$growth_mode ---"
echo "--- cluster_size=$cluster_size growth_size=$GROWTH_SIZE create_mode=$create_mode growth_mode=$growth_mode ---"
IMGOPTS="preallocation=$create_mode,cluster_size=512" _make_test_img ${CREATION_SIZE}
IMGOPTS="preallocation=$create_mode,cluster_size=$cluster_size" _make_test_img ${CREATION_SIZE}
$QEMU_IMG resize -f "$IMGFMT" --preallocation=$growth_mode "$TEST_IMG" +${GROWTH_SIZE}K
host_size_0=$(get_image_size_on_host)
@ -123,6 +125,7 @@ for GROWTH_SIZE in 16 48 80; do
done
done
done
done
# success, all done
echo '*** done'


@ -1,5 +1,5 @@
QA output created by 125
--- growth_size=16 create_mode=off growth_mode=off ---
--- cluster_size=512 growth_size=16 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -7,7 +7,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=off growth_mode=metadata ---
--- cluster_size=512 growth_size=16 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -15,7 +15,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=off growth_mode=falloc ---
--- cluster_size=512 growth_size=16 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -23,7 +23,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=off growth_mode=full ---
--- cluster_size=512 growth_size=16 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -31,7 +31,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=metadata growth_mode=off ---
--- cluster_size=512 growth_size=16 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -39,7 +39,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=metadata growth_mode=metadata ---
--- cluster_size=512 growth_size=16 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -47,7 +47,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=metadata growth_mode=falloc ---
--- cluster_size=512 growth_size=16 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -55,7 +55,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=metadata growth_mode=full ---
--- cluster_size=512 growth_size=16 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -63,7 +63,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=falloc growth_mode=off ---
--- cluster_size=512 growth_size=16 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -71,7 +71,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=falloc growth_mode=metadata ---
--- cluster_size=512 growth_size=16 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -79,7 +79,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=falloc growth_mode=falloc ---
--- cluster_size=512 growth_size=16 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -87,7 +87,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=falloc growth_mode=full ---
--- cluster_size=512 growth_size=16 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -95,7 +95,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=full growth_mode=off ---
--- cluster_size=512 growth_size=16 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -103,7 +103,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=full growth_mode=metadata ---
--- cluster_size=512 growth_size=16 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -111,7 +111,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=full growth_mode=falloc ---
--- cluster_size=512 growth_size=16 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -119,7 +119,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=16 create_mode=full growth_mode=full ---
--- cluster_size=512 growth_size=16 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -127,7 +127,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=off growth_mode=off ---
--- cluster_size=512 growth_size=48 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -135,7 +135,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=off growth_mode=metadata ---
--- cluster_size=512 growth_size=48 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -143,7 +143,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=off growth_mode=falloc ---
--- cluster_size=512 growth_size=48 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -151,7 +151,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=off growth_mode=full ---
--- cluster_size=512 growth_size=48 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -159,7 +159,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=metadata growth_mode=off ---
--- cluster_size=512 growth_size=48 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -167,7 +167,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=metadata growth_mode=metadata ---
--- cluster_size=512 growth_size=48 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -175,7 +175,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=metadata growth_mode=falloc ---
--- cluster_size=512 growth_size=48 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -183,7 +183,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=metadata growth_mode=full ---
--- cluster_size=512 growth_size=48 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -191,7 +191,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=falloc growth_mode=off ---
--- cluster_size=512 growth_size=48 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -199,7 +199,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=falloc growth_mode=metadata ---
--- cluster_size=512 growth_size=48 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -207,7 +207,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=falloc growth_mode=falloc ---
--- cluster_size=512 growth_size=48 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -215,7 +215,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=falloc growth_mode=full ---
--- cluster_size=512 growth_size=48 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -223,7 +223,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=full growth_mode=off ---
--- cluster_size=512 growth_size=48 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -231,7 +231,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=full growth_mode=metadata ---
--- cluster_size=512 growth_size=48 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -239,7 +239,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=full growth_mode=falloc ---
--- cluster_size=512 growth_size=48 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -247,7 +247,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=48 create_mode=full growth_mode=full ---
--- cluster_size=512 growth_size=48 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -255,7 +255,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=off growth_mode=off ---
--- cluster_size=512 growth_size=80 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -263,7 +263,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=off growth_mode=metadata ---
--- cluster_size=512 growth_size=80 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -271,7 +271,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=off growth_mode=falloc ---
--- cluster_size=512 growth_size=80 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -279,7 +279,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=off growth_mode=full ---
--- cluster_size=512 growth_size=80 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
@ -287,7 +287,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=metadata growth_mode=off ---
--- cluster_size=512 growth_size=80 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -295,7 +295,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=metadata growth_mode=metadata ---
--- cluster_size=512 growth_size=80 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -303,7 +303,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=metadata growth_mode=falloc ---
--- cluster_size=512 growth_size=80 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -311,7 +311,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=metadata growth_mode=full ---
--- cluster_size=512 growth_size=80 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -319,7 +319,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=falloc growth_mode=off ---
--- cluster_size=512 growth_size=80 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -327,7 +327,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=falloc growth_mode=metadata ---
--- cluster_size=512 growth_size=80 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -335,7 +335,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=falloc growth_mode=falloc ---
--- cluster_size=512 growth_size=80 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -343,7 +343,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=falloc growth_mode=full ---
--- cluster_size=512 growth_size=80 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -351,7 +351,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=full growth_mode=off ---
--- cluster_size=512 growth_size=80 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -359,7 +359,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=full growth_mode=metadata ---
--- cluster_size=512 growth_size=80 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -367,7 +367,7 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=full growth_mode=falloc ---
--- cluster_size=512 growth_size=80 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
@@ -375,7 +375,391 @@ wrote 2048000/2048000 bytes at offset 0
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- growth_size=80 create_mode=full growth_mode=full ---
--- cluster_size=512 growth_size=80 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=16 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 16384/16384 bytes at offset 2048000
16 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=48 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 49152/49152 bytes at offset 2048000
48 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=off growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=off growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=off growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=off growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=off
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=metadata growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=metadata growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=metadata growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=metadata growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=metadata
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=falloc growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=falloc growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=falloc growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=falloc growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=falloc
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=full growth_mode=off ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=full growth_mode=metadata ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=full growth_mode=falloc ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0
1.953 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
wrote 81920/81920 bytes at offset 2048000
80 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- cluster_size=64k growth_size=80 create_mode=full growth_mode=full ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=2048000 preallocation=full
Image resized.
wrote 2048000/2048000 bytes at offset 0

View File

@@ -78,7 +78,7 @@ _send_qemu_cmd $QEMU_HANDLE \
'arguments': { 'device': 'drv' }}" \
'return'
$QEMU_IO_PROG -f raw -c 'read -P 42 0 64k' \
$QEMU_IO_PROG -f raw -r -c 'read -P 42 0 64k' \
"nbd+unix:///drv?socket=$TEST_DIR/nbd" 2>&1 \
| _filter_qemu_io | _filter_nbd
@@ -87,7 +87,7 @@ _send_qemu_cmd $QEMU_HANDLE \
'arguments': { 'device': 'drv' }}" \
'return'
$QEMU_IO_PROG -f raw -c close \
$QEMU_IO_PROG -f raw -r -c close \
"nbd+unix:///drv?socket=$TEST_DIR/nbd" 2>&1 \
| _filter_qemu_io | _filter_nbd

View File

@@ -43,6 +43,7 @@ class NBDBlockdevAddBase(iotests.QMPTestCase):
'driver': 'raw',
'file': {
'driver': 'nbd',
'read-only': True,
'server': address
} }
if export is not None:

View File

@@ -466,11 +466,18 @@ vubr_panic(VuDev *dev, const char *msg)
vubr->quit = 1;
}
static bool
vubr_queue_is_processed_in_order(VuDev *dev, int qidx)
{
return true;
}
static const VuDevIface vuiface = {
.get_features = vubr_get_features,
.set_features = vubr_set_features,
.process_msg = vubr_process_msg,
.queue_set_started = vubr_queue_set_started,
.queue_is_processed_in_order = vubr_queue_is_processed_in_order,
};
static void
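
The hunk above makes the vhost-user bridge implement the new queue_is_processed_in_order callback, telling libvhost-user that requests on every queue complete in submission order. A minimal sketch of the same contract for a hypothetical backend, assuming the VuDev/VuDevIface types from contrib/libvhost-user that the hunk itself uses (the remaining handlers are elided):

#include <stdbool.h>
#include "libvhost-user.h"   /* VuDev, VuDevIface */

/* Declare that every queue of this backend completes requests in the
 * order they were submitted, so the library may rely on ordered
 * completion when it manages the used ring. */
static bool
my_queue_is_processed_in_order(VuDev *dev, int qidx)
{
    (void)dev;
    (void)qidx;
    return true;
}

static const VuDevIface my_iface = {
    /* .get_features, .set_features, .process_msg, ... omitted */
    .queue_is_processed_in_order = my_queue_is_processed_in_order,
};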

View File

@@ -64,6 +64,9 @@ memory_region_tb_read(int cpu_index, uint64_t addr, uint64_t value, unsigned siz
memory_region_tb_write(int cpu_index, uint64_t addr, uint64_t value, unsigned size) "cpu %d addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
flatview_new(FlatView *view, MemoryRegion *root) "%p (root %p)"
flatview_destroy(FlatView *view, MemoryRegion *root) "%p (root %p)"
flatview_destroy_rcu(FlatView *view, MemoryRegion *root) "%p (root %p)"
### Guest events, keep at bottom
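
Each trace-events entry generates a matching trace_<name>() helper, so the three lines added above give memory.c trace points covering a FlatView's creation, immediate destruction, and RCU-deferred destruction. An illustrative sketch of firing such a point, assuming it lives in memory.c where the FlatView type is defined (the allocation here is a hypothetical stand-in, not the real constructor):

#include "qemu/osdep.h"
#include "trace.h"   /* trace_flatview_*() generated from trace-events */

/* Hypothetical helper: allocate a view and record its birth together
 * with the root memory region it flattens. */
static FlatView *flatview_new_traced(MemoryRegion *root)
{
    FlatView *view = g_new0(FlatView, 1);
    trace_flatview_new(view, root);
    return view;
}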

View File

@@ -1540,7 +1540,7 @@ void dpy_gfx_replace_surface(QemuConsole *con,
DisplaySurface *old_surface = con->surface;
DisplayChangeListener *dcl;
assert(old_surface != surface);
assert(old_surface != surface || surface == NULL);
con->surface = surface;
QLIST_FOREACH(dcl, &s->listeners, next) {

View File

@@ -91,7 +91,7 @@ bool stat64_min_slow(Stat64 *s, uint64_t value)
low = atomic_read(&s->low);
orig = ((uint64_t)high << 32) | low;
if (orig < value) {
if (value < orig) {
/* We have to set low before high, just like stat64_min reads
* high before low. The value may become higher temporarily, but
* stat64_get does not notice (it takes the lock) and the only ill
@@ -120,7 +120,7 @@ bool stat64_max_slow(Stat64 *s, uint64_t value)
low = atomic_read(&s->low);
orig = ((uint64_t)high << 32) | low;
if (orig > value) {
if (value > orig) {
/* We have to set low before high, just like stat64_max reads
* high before low. The value may become lower temporarily, but
* stat64_get does not notice (it takes the lock) and the only ill
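
Both hunks fix the same inverted comparison: the old code updated the stored minimum when the new value was larger and left it untouched when it was smaller, with the mirror-image bug in the maximum path. A minimal sketch of the corrected tests with the locking, atomics and memory barriers elided (hypothetical helpers, not the actual util/stats64.c code):

#include <stdint.h>

/* The 64-bit statistic is split into two 32-bit halves; writers store
 * the low half first while readers load high before low, so a torn
 * read only errs on the harmless side, as the comments above explain. */
struct split64 {
    uint32_t low;
    uint32_t high;
};

static void split64_min(struct split64 *s, uint64_t value)
{
    uint64_t orig = ((uint64_t)s->high << 32) | s->low;

    if (value < orig) {           /* fixed: was 'orig < value' */
        s->low = (uint32_t)value; /* low half first... */
        s->high = value >> 32;    /* ...then high */
    }
}

static void split64_max(struct split64 *s, uint64_t value)
{
    uint64_t orig = ((uint64_t)s->high << 32) | s->low;

    if (value > orig) {           /* fixed: was 'orig > value' */
        s->low = (uint32_t)value;
        s->high = value >> 32;
    }
}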

vl.c
View File

@@ -3557,7 +3557,7 @@ int main(int argc, char **argv, char **envp)
case QEMU_OPTION_virtfs: {
QemuOpts *fsdev;
QemuOpts *device;
const char *writeout, *sock_fd, *socket;
const char *writeout, *sock_fd, *socket, *path, *security_model;
olist = qemu_find_opts("virtfs");
if (!olist) {
@@ -3596,11 +3596,15 @@
}
qemu_opt_set(fsdev, "fsdriver",
qemu_opt_get(opts, "fsdriver"), &error_abort);
qemu_opt_set(fsdev, "path", qemu_opt_get(opts, "path"),
&error_abort);
qemu_opt_set(fsdev, "security_model",
qemu_opt_get(opts, "security_model"),
&error_abort);
path = qemu_opt_get(opts, "path");
if (path) {
qemu_opt_set(fsdev, "path", path, &error_abort);
}
security_model = qemu_opt_get(opts, "security_model");
if (security_model) {
qemu_opt_set(fsdev, "security_model", security_model,
&error_abort);
}
socket = qemu_opt_get(opts, "socket");
if (socket) {
qemu_opt_set(fsdev, "socket", socket, &error_abort);
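
This second vl.c hunk stops forwarding the 'path' and 'security_model' suboptions unconditionally: qemu_opt_get() returns NULL for a suboption the user did not give, and the guard keeps that NULL from reaching qemu_opt_set(..., &error_abort). The same pattern as a small sketch (hypothetical helper name; QemuOpts API as used in the hunk):

#include "qemu/osdep.h"
#include "qemu/option.h"   /* QemuOpts, qemu_opt_get(), qemu_opt_set() */
#include "qapi/error.h"    /* error_abort */

/* Copy a suboption from src to dst only if the user actually set it;
 * absent suboptions come back as NULL and must not be forwarded. */
static void copy_opt_if_set(QemuOpts *dst, QemuOpts *src, const char *name)
{
    const char *value = qemu_opt_get(src, name);

    if (value) {
        qemu_opt_set(dst, name, value, &error_abort);
    }
}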