Compare commits

...

74 Commits

Author SHA1 Message Date
Michael Roth 99c5874a9b Update version for 4.1.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-14 12:04:03 -06:00
Kevin Wolf e092a17d38 mirror: Keep mirror_top_bs drained after dropping permissions
mirror_top_bs is currently implicitly drained through its connection to
the source or the target node. However, the drain section for target_bs
ends early after moving mirror_top_bs from src to target_bs, so that
requests can already be restarted while mirror_top_bs is still present
in the chain, but has dropped all permissions and therefore runs into an
assertion failure like this:

    qemu-system-x86_64: block/io.c:1634: bdrv_co_write_req_prepare:
    Assertion `child->perm & BLK_PERM_WRITE' failed.

Keep mirror_top_bs drained until all graph changes have completed.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit d2da5e288a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 16:31:49 -06:00
Philippe Mathieu-Daudé 088f1e8fd9 block/create: Do not abort if a block driver is not available
The 'blockdev-create' QMP command was introduced as an experimental
feature in commit b0292b851b, using the assert() debug call.
It got promoted to a 'stable' command in 3fb588a0f2, but the
assert call was not removed.

Some block drivers are optional, and bdrv_find_format() might
return a NULL value, triggering the assertion.

Stable code is not expected to abort, so return an error instead.
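
A hedged sketch of the pattern (QEMU-style fragment; 'driver_name' is an
illustrative stand-in, and this is not necessarily the exact patch):

    BlockDriver *drv = bdrv_find_format(driver_name);
    if (!drv) {
        /* Optional driver not compiled in: report an error, don't assert() */
        error_setg(errp, "Block driver '%s' not found or not supported",
                   driver_name);
        return;
    }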

This is easily reproducible when libnfs is not installed:

  ./configure
  [...]
  module support    no
  Block whitelist (rw)
  Block whitelist (ro)
  libiscsi support  yes
  libnfs support    no
  [...]

Start QEMU:

  $ qemu-system-x86_64 -S -qmp unix:/tmp/qemu.qmp,server,nowait

Send the 'blockdev-create' with the 'nfs' driver:

  $ ( cat << 'EOF'
  {'execute': 'qmp_capabilities'}
  {'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
  EOF
  ) | socat STDIO UNIX:/tmp/qemu.qmp
  {"QMP": {"version": {"qemu": {"micro": 50, "minor": 1, "major": 4}, "package": "v4.1.0-733-g89ea03a7dc"}, "capabilities": ["oob"]}}
  {"return": {}}

QEMU crashes:

  $ gdb qemu-system-x86_64 core
  Program received signal SIGSEGV, Segmentation fault.
  (gdb) bt
  #0  0x00007ffff510957f in raise () at /lib64/libc.so.6
  #1  0x00007ffff50f3895 in abort () at /lib64/libc.so.6
  #2  0x00007ffff50f3769 in _nl_load_domain.cold.0 () at /lib64/libc.so.6
  #3  0x00007ffff5101a26 in .annobin_assert.c_end () at /lib64/libc.so.6
  #4  0x0000555555d7e1f1 in qmp_blockdev_create (job_id=0x555556baee40 "x", options=0x555557666610, errp=0x7fffffffc770) at block/create.c:69
  #5  0x0000555555c96b52 in qmp_marshal_blockdev_create (args=0x7fffdc003830, ret=0x7fffffffc7f8, errp=0x7fffffffc7f0) at qapi/qapi-commands-block-core.c:1314
  #6  0x0000555555deb0a0 in do_qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false, errp=0x7fffffffc898) at qapi/qmp-dispatch.c:131
  #7  0x0000555555deb2a1 in qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false) at qapi/qmp-dispatch.c:174

With this patch applied, QEMU returns a QMP error:

  {'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
  {"id": "x", "error": {"class": "GenericError", "desc": "Block driver 'nfs' not found or not supported"}}

Cc: qemu-stable@nongnu.org
Reported-by: Xu Tian <xutian@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d90d5cae2b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 16:31:38 -06:00
Dr. David Alan Gilbert 145b562990 vhost: Fix memory region section comparison
Using memcmp to compare structures wasn't safe,
as I found out on ARM when I was getting false miscompares.

Use the helper function for comparing the MRSs.
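
The padding problem is generic C behaviour; a minimal standalone sketch
with an illustrative struct rather than QEMU's actual MemoryRegionSection:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative struct: the compiler typically inserts 7 bytes of
     * padding after 'flag', and nothing guarantees what those padding
     * bytes contain. */
    typedef struct {
        uint8_t  flag;
        uint64_t offset;
        uint64_t size;
    } Section;

    /* Unsafe: memcmp also compares the padding, so two logically equal
     * values can miscompare. */
    static bool section_eq_memcmp(const Section *a, const Section *b)
    {
        return memcmp(a, b, sizeof(*a)) == 0;
    }

    /* Safe: compare every field explicitly. */
    static bool section_eq(const Section *a, const Section *b)
    {
        return a->flag == b->flag &&
               a->offset == b->offset &&
               a->size == b->size;
    }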

Fixes: ade6d081fc ("vhost: Regenerate region list from changed sections list")
Cc: qemu-stable@nongnu.org
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190814175535.2023-4-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3fc4a64cba)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 16:23:54 -06:00
Dr. David Alan Gilbert 42b6571357 memory: Provide an equality function for MemoryRegionSections
Provide a comparison function that checks all the fields are the same.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-3-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9366cf02e4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 16:23:54 -06:00
Dr. David Alan Gilbert c0aca9352d memory: Align MemoryRegionSections fields
MemoryRegionSection includes an Int128 'size' field;
on some platforms the compiler aligns it to a 128-bit
boundary, leaving 8 bytes of dead space.
This dead space can be filled with junk.

Move the size field to the top to avoid the unnecessary padding.
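
A minimal sketch of the alignment hole (illustrative types and fields,
GCC-style alignment attribute):

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for Int128 with 16-byte alignment, as on some platforms. */
    typedef struct { uint64_t lo, hi; } __attribute__((aligned(16))) Int128ish;

    struct Before {     /* 8 bytes of padding (possibly junk) after 'mr' */
        void     *mr;
        Int128ish size;
        uint64_t  offset;
    };

    struct After {      /* no hole: the 16-byte-aligned field comes first */
        Int128ish size;
        void     *mr;
        uint64_t  offset;
    };

    int main(void)
    {
        /* Typically prints before=48 after=32 on a 64-bit target. */
        printf("before=%zu after=%zu\n",
               sizeof(struct Before), sizeof(struct After));
        return 0;
    }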

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-2-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 44f85d3276)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 16:23:49 -06:00
Daniel P. Berrangé 54c130493c tests: make filemonitor test more robust to event ordering
The ordering of events that are emitted during the rmdir
test has changed with kernel >= 5.3. Semantically both the
new and old orderings are correct, so we must be able to
cope with either.

To cope with this, when we see an unexpected event, we
push it back onto the queue and look at the subsequent
event to see if that matches instead.

Tested-by: Peter Xu <peterx@redhat.com>
Tested-by: Wei Yang <richardw.yang@linux.intel.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
(cherry picked from commit bf9e0313c2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Nir Soffer 3d018ff3bd block: posix: Always allocate the first block
When creating an image with preallocation "off" or "falloc", the first
block of the image is typically not allocated. When using Gluster
storage backed by an XFS filesystem, reading this block using direct I/O
succeeds regardless of request length, fooling alignment detection.

In this case we fall back to a safe value (4096) instead of the optimal
value (512), which may lead to unneeded data copying when aligning
requests.  Allocating the first block avoids the fallback.

Since we allocate the first block even with preallocation=off, we no
longer create images with zero disk size:

    $ ./qemu-img create -f raw test.raw 1g
    Formatting 'test.raw', fmt=raw size=1073741824

    $ ls -lhs test.raw
    4.0K -rw-r--r--. 1 nsoffer nsoffer 1.0G Aug 16 23:48 test.raw

And converting the image requires an additional cluster:

    $ ./qemu-img measure -f raw -O qcow2 test.raw
    required size: 458752
    fully allocated size: 1074135040

When using a format like vmdk with multiple files per image, we allocate
one block per file:

    $ ./qemu-img create -f vmdk -o subformat=twoGbMaxExtentFlat test.vmdk 4g
    Formatting 'test.vmdk', fmt=vmdk size=4294967296 compat6=off hwversion=undefined subformat=twoGbMaxExtentFlat

    $ ls -lhs test*.vmdk
    4.0K -rw-r--r--. 1 nsoffer nsoffer 2.0G Aug 27 03:23 test-f001.vmdk
    4.0K -rw-r--r--. 1 nsoffer nsoffer 2.0G Aug 27 03:23 test-f002.vmdk
    4.0K -rw-r--r--. 1 nsoffer nsoffer  353 Aug 27 03:23 test.vmdk

I did a quick performance test, copying disks with qemu-img convert to a
new raw target image on Gluster storage with a sector size of 512 bytes:

    for i in $(seq 10); do
        rm -f dst.raw
        sleep 10
        time ./qemu-img convert -f raw -O raw -t none -T none src.raw dst.raw
    done

Here is a table comparing the total time spent:

Type    Before(s)   After(s)    Diff(%)
---------------------------------------
real      530.028    469.123      -11.4
user       17.204     10.768      -37.4
sys        17.881      7.011      -60.7

We can see a very clear improvement in CPU usage.

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Message-id: 20190827010528.8818-2-nsoffer@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>

(cherry picked from commit 3a20013fbb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Nir Soffer f0d3fa265d file-posix: Handle undetectable alignment
In some cases buf_align or request_alignment cannot be detected:

1. With Gluster, buf_align cannot be detected since the actual I/O is
   done on the Gluster server, and qemu buffer alignment does not matter.
   Since we don't have an alignment requirement, buf_align=1 is the best
   value.

2. With a local XFS filesystem, buf_align cannot be detected if reading
   from an unallocated area. In this case we must align the buffer, but
   we don't know the correct size. Using the wrong alignment results in
   an I/O error.

3. With Gluster backed by XFS, request_alignment cannot be detected if
   reading from an unallocated area. In this case we need to use the
   correct alignment, and failing to do so results in I/O errors.

4. With NFS, the server does not use direct I/O, so neither buf_align
   nor request_alignment can be detected. In this case we don't need any
   alignment, so we can use buf_align=1 and request_alignment=1.

These cases seem to work when the storage sector size is 512 bytes,
because the current code starts checking with align=512. If the check
succeeds because alignment cannot be detected, we use 512. But this does
not work for storage with a 4k sector size.

To determine whether we can detect the alignment, we probe first with
align=1. If probing succeeds, either there are no alignment requirements
(cases 1, 4) or we are probing an unallocated area (cases 2, 3). Since we
don't have any way to tell, we treat this as undetectable alignment. If
probing with align=1 fails with EINVAL, but probing with one of the
expected alignments succeeds, we know that we found a working alignment.

Practically, the alignment requirements are the same for buffer
alignment, buffer length, and offset in file. So in case we cannot
detect buf_align, we can use the request alignment. If we cannot detect
the request alignment, we can fall back to a safe value. To use this
logic, we probe request alignment first instead of buf_align.
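
A hedged sketch of that probing order (plain POSIX code, not the actual
QEMU implementation; it assumes 'fd' was opened with O_DIRECT):

    #include <stdlib.h>
    #include <unistd.h>

    /* An O_DIRECT read that violates the required alignment fails with
     * EINVAL, so the first length that stops failing is taken as the
     * request alignment.  If even align=1 succeeds, nothing can be
     * detected and the caller falls back to a safe default. */
    static size_t probe_request_alignment(int fd)
    {
        static const size_t aligns[] = { 1, 512, 1024, 2048, 4096 };
        void *buf;
        size_t result = 0;

        if (posix_memalign(&buf, 4096, 4096)) {
            return 0;
        }
        if (pread(fd, buf, aligns[0], 0) >= 0) {
            goto out;                   /* undetectable: use a safe default */
        }
        for (size_t i = 1; i < sizeof(aligns) / sizeof(aligns[0]); i++) {
            if (pread(fd, buf, aligns[i], 0) >= 0) {
                result = aligns[i];     /* first working alignment */
                break;
            }
        }
    out:
        free(buf);
        return result;
    }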

Here is a table showing the behaviour with the current code (the value
in parentheses is the optimal value).

Case    Sector    buf_align (opt)   request_alignment (opt)     result
======================================================================
1       512       512   (1)          512   (512)                 OK
1       4096      512   (1)          4096  (4096)                FAIL
----------------------------------------------------------------------
2       512       512   (512)        512   (512)                 OK
2       4096      512   (4096)       4096  (4096)                FAIL
----------------------------------------------------------------------
3       512       512   (1)          512   (512)                 OK
3       4096      512   (1)          512   (4096)                FAIL
----------------------------------------------------------------------
4       512       512   (1)          512   (1)                   OK
4       4096      512   (1)          512   (1)                   OK

Same cases with this change:

Case    Sector    buf_align (opt)   request_alignment (opt)     result
======================================================================
1       512       512   (1)          512   (512)                 OK
1       4096      4096  (1)          4096  (4096)                OK
----------------------------------------------------------------------
2       512       512   (512)        512   (512)                 OK
2       4096      4096  (4096)       4096  (4096)                OK
----------------------------------------------------------------------
3       512       4096  (1)          4096  (512)                 OK
3       4096      4096  (1)          4096  (4096)                OK
----------------------------------------------------------------------
4       512       4096  (1)          4096  (1)                   OK
4       4096      4096  (1)          4096  (1)                   OK

I tested that provisioning VMs and copying disks on local XFS and
Gluster with a 4k sector size works now, resolving bugs [1],[2].
I also tested on XFS, NFS, and Gluster with a 512-byte sector size.

[1] https://bugzilla.redhat.com/1737256
[2] https://bugzilla.redhat.com/1738657

Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

(cherry picked from commit a6b257a08e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Max Reitz 7db05c8a73 block/file-posix: Let post-EOF fallocate serialize
The XFS kernel driver has a bug that may cause data corruption for qcow2
images as of qemu commit c8bb23cbdb.  We can work around it by
treating post-EOF fallocates as serializing up until infinity (INT64_MAX
in practice).

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191101152510.11719-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 292d06b925)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Max Reitz d9b88f7e0d block: Add bdrv_co_get_self_request()
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191101152510.11719-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit c28107e9e5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Max Reitz 590cff8230 block: Make wait/mark serialising requests public
Make both bdrv_mark_request_serialising() and
bdrv_wait_serialising_requests() public so they can be used from block
drivers.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191101152510.11719-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 304d9d7f03)
 Conflicts:
	block/io.c
*drop context dependency on 1acc3466a2
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Vladimir Sementsov-Ogievskiy 2e2ad02f2c block/io: refactor padding
We have similar padding code in bdrv_co_pwritev,
bdrv_co_do_pwrite_zeroes and bdrv_co_preadv. Let's combine and unify
it.

[Squashed in Vladimir's qemu-iotests 077 fix
--Stefan]

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-4-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-4-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 7a3f542fbd)
*prereq for 292d06b9
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Vladimir Sementsov-Ogievskiy b3b76fc643 util/iov: improve qemu_iovec_is_zero
We'll need to check a part of qiov soon, so implement it now.

The optimization of aligning the length down to 4 * sizeof(long) is
dropped because:
1. It is strange: it aligns the length of the buffer, but there is no
   guarantee that the buffer pointer itself is aligned.
2. buffer_is_zero() is a better place for such optimizations, and it
   already has them.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-3-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-3-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit f76889e7b9)
*prereq for 292d06b9
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Vladimir Sementsov-Ogievskiy cff024fe85 util/iov: introduce qemu_iovec_init_extended
Introduce a new initialization API to create requests with padding. It
will be used in the following patch. The new API uses qemu_iovec_init_buf
if the resulting I/O vector has only one element, to avoid extra
allocations. So, we need to update qemu_iovec_destroy to support
destroying such QIOVs.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20190604161514.262241-2-vsementsov@virtuozzo.com
Message-Id: <20190604161514.262241-2-vsementsov@virtuozzo.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit d953169d48)
*prereq for 292d06b9
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Tuguoyi 40df4a1bf7 qcow2-bitmap: Fix uint64_t left-shift overflow
There are two issues in check_constraints_on_bitmap():
1) The sanity check on the granularity causes a uint64_t
left-shift overflow when cluster_size is 2M and the
granularity is BIGGER than 32K.
2) The way the image size that the maximum supported bitmap
can map to is calculated is a bit incorrect.
This patch fixes it by adding a helper function to calculate the
number of bytes needed by a normal bitmap in the image and comparing
it to the maximum bitmap bytes supported by qemu.
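
A hedged sketch of the "compute bytes needed and compare" direction, with
an illustrative limit constant (QEMU's real constant and value may differ):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative limit on bitmap size in bytes. */
    #define MAX_BITMAP_BYTES (1ULL << 32)

    /* Bytes needed by a bitmap with one bit per 'granularity' bytes. */
    static uint64_t bitmap_bytes_needed(uint64_t image_size, uint64_t granularity)
    {
        uint64_t bits = (image_size + granularity - 1) / granularity;
        return (bits + 7) / 8;
    }

    /* Dividing the image size down cannot overflow, unlike shifting a
     * maximum size up by a large amount. */
    static bool bitmap_fits(uint64_t image_size, uint64_t granularity)
    {
        return bitmap_bytes_needed(image_size, granularity) <= MAX_BITMAP_BYTES;
    }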

Fixes: 5f72826e7f
Signed-off-by: Guoyi Tu <tu.guoyi@h3c.com>
Message-id: 4ba40cd1e7ee4a708b40899952e49f22@h3c.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 570542ecb1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:58 -06:00
Max Reitz b156178553 iotests: Add peek_file* functions
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-16-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit fc8ba423ca)
*prereq for 570542ec
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:59:33 -06:00
Max Reitz 15f5e8c367 iotests: Add test for 4G+ compressed qcow2 write
Test what qemu-img check says about an image after one has written
compressed data to an offset above 4 GB.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191028161841.1198-3-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit b7cd2c11f7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-12 11:35:07 -06:00
Max Reitz 405deba14f qcow2: Fix QCOW2_COMPRESSED_SECTOR_MASK
Masks for L2 table entries should be 64 bits wide.
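
The underlying trap is that a 32-bit mask constant, once widened to 64
bits, clears everything above 4 GiB. A standalone sketch with an
illustrative unsigned constant:

    #include <stdint.h>
    #include <stdio.h>

    #define SECTOR_SIZE 512U    /* unsigned 32-bit constant (illustrative) */

    int main(void)
    {
        uint64_t offset = 0x1234567890ULL;       /* an offset above 4 GiB */

        /* ~(512U - 1) is a 32-bit value (0xfffffe00); widening it to 64
         * bits zero-extends, so the mask also clears bits 32..63. */
        uint64_t bad  = offset & ~(SECTOR_SIZE - 1);
        /* Computing the complement in 64 bits keeps the upper half set
         * and only clears the sub-sector bits, as intended. */
        uint64_t good = offset & ~((uint64_t)SECTOR_SIZE - 1);

        /* Prints bad=0x34567800 good=0x1234567800 */
        printf("bad=0x%llx good=0x%llx\n",
               (unsigned long long)bad, (unsigned long long)good);
        return 0;
    }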

Fixes: b6c246942b
Buglink: https://bugs.launchpad.net/qemu/+bug/1850000
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191028161841.1198-2-mreitz@redhat.com
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 24552feb6a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-11 15:40:36 -06:00
Philippe Mathieu-Daudé 01be50603b virtio-blk: Cancel the pending BH when the dataplane is reset
When 'system_reset' is called, the main loop clears the memory
region cache before the BH has a chance to execute. Later, when
the deferred function is called, some assumptions that were
made when scheduling it are no longer true when it actually
executes.

This is what happens using a virtio-blk device (fresh RHEL7.8 install):

 $ (sleep 12.3; echo system_reset; sleep 12.3; echo system_reset; sleep 1; echo q) \
   | qemu-system-x86_64 -m 4G -smp 8 -boot menu=on \
     -device virtio-blk-pci,id=image1,drive=drive_image1 \
     -drive file=/var/lib/libvirt/images/rhel78.qcow2,if=none,id=drive_image1,format=qcow2,cache=none \
     -device virtio-net-pci,netdev=net0,id=nic0,mac=52:54:00:c4:e7:84 \
     -netdev tap,id=net0,script=/bin/true,downscript=/bin/true,vhost=on \
     -monitor stdio -serial null -nographic
  (qemu) system_reset
  (qemu) system_reset
  (qemu) qemu-system-x86_64: hw/virtio/virtio.c:225: vring_get_region_caches: Assertion `caches != NULL' failed.
  Aborted

  (gdb) bt
  Thread 1 (Thread 0x7f109c17b680 (LWP 10939)):
  #0  0x00005604083296d1 in vring_get_region_caches (vq=0x56040a24bdd0) at hw/virtio/virtio.c:227
  #1  0x000056040832972b in vring_avail_flags (vq=0x56040a24bdd0) at hw/virtio/virtio.c:235
  #2  0x000056040832d13d in virtio_should_notify (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1648
  #3  0x000056040832d1f8 in virtio_notify_irqfd (vdev=0x56040a240630, vq=0x56040a24bdd0) at hw/virtio/virtio.c:1662
  #4  0x00005604082d213d in notify_guest_bh (opaque=0x56040a243ec0) at hw/block/dataplane/virtio-blk.c:75
  #5  0x000056040883dc35 in aio_bh_call (bh=0x56040a243f10) at util/async.c:90
  #6  0x000056040883dccd in aio_bh_poll (ctx=0x560409161980) at util/async.c:118
  #7  0x0000560408842af7 in aio_dispatch (ctx=0x560409161980) at util/aio-posix.c:460
  #8  0x000056040883e068 in aio_ctx_dispatch (source=0x560409161980, callback=0x0, user_data=0x0) at util/async.c:261
  #9  0x00007f10a8fca06d in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
  #10 0x0000560408841445 in glib_pollfds_poll () at util/main-loop.c:215
  #11 0x00005604088414bf in os_host_main_loop_wait (timeout=0) at util/main-loop.c:238
  #12 0x00005604088415c4 in main_loop_wait (nonblocking=0) at util/main-loop.c:514
  #13 0x0000560408416b1e in main_loop () at vl.c:1923
  #14 0x000056040841e0e8 in main (argc=20, argv=0x7ffc2c3f9c58, envp=0x7ffc2c3f9d00) at vl.c:4578

Fix this by cancelling the BH when the virtio dataplane is stopped.

[This version of the patch was modified as discussed with Philippe on
the mailing list thread.
--Stefan]

Reported-by: Yihuang Yu <yihyu@redhat.com>
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Fixes: https://bugs.launchpad.net/qemu/+bug/1839428
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190816171503.24761-1-philmd@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ebb6ff25cd)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-05 13:43:41 -06:00
Paolo Bonzini 051c9b3cbc scsi: lsi: exit infinite loop while executing script (CVE-2019-12068)
When executing script in lsi_execute_script(), the LSI scsi adapter
emulator advances 's->dsp' index to read next opcode. This can lead
to an infinite loop if the next opcode is empty. Move the existing
loop exit after 10k iterations so that it covers no-op opcodes as
well.

Reported-by: Bugs SysSec <bugs-syssec@rub.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit de594e4765)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-05 13:39:31 -06:00
Max Filippov b387531323 target/xtensa: regenerate and re-import test_mmuhifi_c3 core
Overlay part of the test_mmuhifi_c3 core has GPL3 copyright headers in
it. Fix that by regenerating test_mmuhifi_c3 core overlay and
re-importing it.

Fixes: d848ea7767 ("target/xtensa: add test_mmuhifi_c3 core")
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
(cherry picked from commit d5eaec84e5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-05 12:11:47 -06:00
Christophe Lyon cdc6896659 target/arm: Allow reading flags from FPSCR for M-profile
rt==15 is a special case when reading the flags: it means the
destination is APSR. This patch avoids rejecting
vmrs apsr_nzcv, fpscr
as an illegal instruction.

Cc: qemu-stable@nongnu.org
Signed-off-by: Christophe Lyon <christophe.lyon@linaro.org>
Message-id: 20191025095711.10853-1-christophe.lyon@linaro.org
[PMM: updated the comment]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 2529ab43b8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:34:45 -06:00
Vladimir Sementsov-Ogievskiy c0b35d87de hbitmap: handle set/reset with zero length
Passing a zero length to these functions leads to unpredictable results.
A zero-length set/reset may occur in active-mirror, on a zero-length
write (which is unlikely, but not guaranteed to never happen).

Let's just do nothing on a zero-length request.
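
A minimal sketch of the early-return guard, on an illustrative bitmap
helper rather than the real hbitmap code:

    #include <stdint.h>

    /* A zero-length set is a no-op, so the arithmetic below (which assumes
     * count >= 1) never runs on an empty range.  'bits' must have room for
     * the bits [start, start + count). */
    static void bitmap_set_range(uint64_t *bits, uint64_t start, uint64_t count)
    {
        if (count == 0) {
            return;                          /* nothing to do */
        }
        uint64_t last = start + count - 1;   /* would wrap if count were 0 */
        for (uint64_t i = start; i <= last; i++) {
            bits[i / 64] |= 1ULL << (i % 64);
        }
    }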

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191011090711.19940-2-vsementsov@virtuozzo.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit fed33bd175)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:31:55 -06:00
Vladimir Sementsov-Ogievskiy fcd7cba6ac util/hbitmap: strict hbitmap_reset
hbitmap_reset has an unobvious property: it rounds the requested region
up. It may provoke bugs, like in the recently fixed write-blocking mode
of mirror: the user calls reset on an unaligned region, not keeping in
mind that there may be unrelated dirty bytes covered by the rounded-up
region, and the information about this unrelated "dirtiness" will be
lost.

Make hbitmap_reset strict: assert that the arguments are aligned,
allowing only one exception, when @start + @count == hb->orig_size. This
is needed to accommodate users of hbitmap_next_dirty_area, which care
about hb->orig_size.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20190806152611.280389-1-vsementsov@virtuozzo.com>
[Maintainer edit: Max's suggestions from on-list. --js]
[Maintainer edit: Eric's suggestion for aligned macro. --js]
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 48557b1383)
*prereq for fed33bd175
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:31:40 -06:00
Fan Yang aea18ef938 COLO-compare: Fix incorrect `if` logic
'colo_mark_tcp_pkt' should return 'true' when packets are the same, and
'false' otherwise.  However, it returns 'true' when
'colo_compare_packet_payload' returns non-zero while
'colo_compare_packet_payload' is just a 'memcmp'.  The result is that
COLO-compare reports inconsistent TCP packets when they are actually
the same.
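
A standalone sketch of the inverted convention (illustrative helpers;
colo_compare_packet_payload itself follows the memcmp rule that 0 means
equal):

    #include <stdbool.h>
    #include <string.h>

    /* memcmp-style comparison: returns 0 when the two payloads are
     * identical (illustrative stand-in for colo_compare_packet_payload). */
    static int compare_payload(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len);
    }

    /* The predicate must invert the memcmp convention:
     * "packets are the same" exactly when the comparison returns 0. */
    static bool packets_identical(const void *a, const void *b, size_t len)
    {
        return compare_payload(a, b, len) == 0;
    }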

Fixes: f449c9e549 ("colo: compare the packet based on the tcp sequence number")
Cc: qemu-stable@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Fan Yang <Fan_Yang@sjtu.edu.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 1e907a32b7)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:25:01 -06:00
Mikhail Sennikovsky 4887acf574 virtio-net: prevent offloads reset on migration
Currently, offloads disabled by the guest via the VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET
command are not preserved on VM migration.
Instead, all offloads reported by the guest features (via VIRTIO_PCI_GUEST_FEATURES)
get enabled.
What happens is: first the VirtIONet::curr_guest_offloads gets restored and the
offloads are set correctly:

 #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=0, tso6=0, ecn=0, ufo=0) at net/net.c:474
 #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
 #2  virtio_net_post_load_device (opaque=0x555557701ca0, version_id=11) at hw/net/virtio-net.c:2334
 #3  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577c80 <vmstate_virtio_net_device>, opaque=0x555557701ca0, version_id=11)
     at migration/vmstate.c:168
 #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2197
 #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
 #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
 #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
 #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
 #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
 #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
 #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

However, later on the features are restored, and the offloads get reset to
everything supported by the features:

 #0  qemu_set_offload (nc=0x555556a11400, csum=1, tso4=1, tso6=1, ecn=0, ufo=0) at net/net.c:474
 #1  virtio_net_apply_guest_offloads (n=0x555557701ca0) at hw/net/virtio-net.c:720
 #2  virtio_net_set_features (vdev=0x555557701ca0, features=5104441767) at hw/net/virtio-net.c:773
 #3  virtio_set_features_nocheck (vdev=0x555557701ca0, val=5104441767) at hw/virtio/virtio.c:2052
 #4  virtio_load (vdev=0x555557701ca0, f=0x5555569dc010, version_id=11) at hw/virtio/virtio.c:2220
 #5  virtio_device_get (f=0x5555569dc010, opaque=0x555557701ca0, size=0, field=0x55555668cd00 <__compound_literal.5>) at hw/virtio/virtio.c:2036
 #6  vmstate_load_state (f=0x5555569dc010, vmsd=0x555556577ce0 <vmstate_virtio_net>, opaque=0x555557701ca0, version_id=11) at migration/vmstate.c:143
 #7  vmstate_load (f=0x5555569dc010, se=0x5555578189e0) at migration/savevm.c:829
 #8  qemu_loadvm_section_start_full (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2211
 #9  qemu_loadvm_state_main (f=0x5555569dc010, mis=0x5555569eee20) at migration/savevm.c:2395
 #10 qemu_loadvm_state (f=0x5555569dc010) at migration/savevm.c:2467
 #11 process_incoming_migration_co (opaque=0x0) at migration/migration.c:449

Fix this by preserving the state in saved_guest_offloads field and
pushing out offload initialization to the new post load hook.

Cc: qemu-stable@nongnu.org
Signed-off-by: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 7788c3f2e2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:24:34 -06:00
Michael S. Tsirkin 8010d3fce0 virtio: new post_load hook
The post load hook in the virtio vmsd is called early, while the device is
being processed and the VirtIODevice core isn't fully initialized.  Most
device-specific code isn't ready to deal with a device in such a state, and
behaves weirdly.

Add a new post_load hook in a device class instead.  Devices should use
this unless they specifically want to verify the migration stream as
it's processed, e.g. for bounds checking.

Cc: qemu-stable@nongnu.org
Suggested-by: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Mikhail Sennikovsky <mikhail.sennikovskii@cloud.ionos.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 1dd713837c)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:23:53 -06:00
Hikaru Nishida 6705b9344f ui: Fix hanging up Cocoa display on macOS 10.15 (Catalina)
macOS API documentation says that no events will be processed before
applicationDidFinishLaunching is called. However, in macOS Catalina some
events are fired before it is called. This causes a deadlock on
iothread_lock in handleEvent, as that lock is only released after the
app_started_sem is posted.
This patch avoids processing events before the app_started_sem is
posted to prevent this deadlock.

Buglink: https://bugs.launchpad.net/qemu/+bug/1847906
Signed-off-by: Hikaru Nishida <hikarupsp@gmail.com>
Message-id: 20191015010734.85229-1-hikarupsp@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit dff742ad27)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:19:19 -06:00
Max Reitz c0e2fbf124 mirror: Do not dereference invalid pointers
mirror_exit_common() may be called twice (if it is called from
mirror_prepare() and fails, it will be called from mirror_abort()
again).

In such a case, many of the pointers in the MirrorBlockJob object will
already be freed.  This can be seen most reliably for s->target, which
is set to NULL (and then dereferenced by blk_bs()).

Cc: qemu-stable@nongnu.org
Fixes: 737efc1eda
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191014153931.20699-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit f93c3add3a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:18:06 -06:00
Max Reitz b077ac637d iotests: Test large write request to qcow2 file
Without HEAD^, the following happens when you attempt a large write
request to a qcow2 file such that the number of bytes covered by all
clusters involved in a single allocation will exceed INT_MAX:

(A) handle_alloc_space() decides to fill the whole area with zeroes and
    fails because bdrv_co_pwrite_zeroes() fails (the request is too
    large).

(B) If handle_alloc_space() does not do anything, but merge_cow()
    decides that the requests can be merged, it will create a too long
    IOV that later cannot be written.

(C) Otherwise, all parts will be written separately, so those requests
    will work.

In either B or C, though, qcow2_alloc_cluster_link_l2() will have an
overflow: We use an int (i) to iterate over nb_clusters, and then
calculate the L2 entry based on "i << s->cluster_bits" -- which will
overflow if the range covers more than INT_MAX bytes.  This then leads
to image corruption because the L2 entry will be wrong (it will be
recognized as a compressed cluster).

Even if that were not the case, the .cow_end area would be empty
(because handle_alloc() will cap avail_bytes and nb_bytes at INT_MAX, so
their difference (which is the .cow_end size) will be 0).

So this test checks that on such large requests, the image will not be
corrupted.  Unfortunately, we cannot check whether COW will be handled
correctly, because that data is discarded when it is written to null-co
(but we have to use null-co, because writing 2 GB of data in a test is
not quite reasonable).

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a1406a9262)
 Conflicts:
	tests/qemu-iotests/group
*drop context dep. on tests not in 4.1
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:16:16 -06:00
Max Reitz 9e51c5306c qcow2: Limit total allocation range to INT_MAX
When the COW areas are included, the size of an allocation can exceed
INT_MAX.  This is kind of limited by handle_alloc() in that it already
caps avail_bytes at INT_MAX, but the number of clusters still reflects
the original length.

This can have all sorts of effects, ranging from the storage layer write
call failing to image corruption.  (If there were no image corruption,
then I suppose there would be data loss because the .cow_end area is
forced to be empty, even though there might be something we need to
COW.)

Fix all of it by limiting nb_clusters so the equivalent number of bytes
will not exceed INT_MAX.
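
A hedged sketch of such a cap (illustrative helper, not necessarily the
exact patch):

    #include <limits.h>
    #include <stdint.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Cap a cluster count so that the byte count it represents
     * (nb_clusters << cluster_bits) still fits in a signed int. */
    static uint64_t cap_nb_clusters(uint64_t nb_clusters, unsigned cluster_bits)
    {
        return MIN(nb_clusters, (uint64_t)INT_MAX >> cluster_bits);
    }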

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d1b9d19f99)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:15:25 -06:00
Thomas Huth aae0faa5d3 hw/core/loader: Fix possible crash in rom_copy()
Both, "rom->addr" and "addr" are derived from the binary image
that can be loaded with the "-kernel" paramer. The code in
rom_copy() then calculates:

    d = dest + (rom->addr - addr);

and uses "d" as destination in a memcpy() some lines later. Now with
bad kernel images, it is possible that rom->addr is smaller than addr,
thus "rom->addr - addr" gets negative and the memcpy() then tries to
copy contents from the image to a bad memory location. This could
maybe be used to inject code from a kernel image into the QEMU binary,
so we better fix it with an additional sanity check here.
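
A standalone sketch of the kind of bounds check meant here (illustrative
function and parameters, not the actual rom_copy() patch):

    #include <stdint.h>
    #include <string.h>

    /* Copy a ROM blob of 'len' bytes that the guest expects at address
     * 'rom_addr' into a buffer 'dest' of 'dest_len' bytes mapping guest
     * address 'addr'.  Reject ROMs outside the window: if rom_addr < addr,
     * the unsigned subtraction would wrap around and the memcpy() would
     * write far outside 'dest'. */
    static int rom_copy_one(uint8_t *dest, size_t dest_len, uint64_t addr,
                            uint64_t rom_addr, const uint8_t *data, size_t len)
    {
        if (rom_addr < addr ||
            rom_addr - addr > dest_len ||
            len > dest_len - (rom_addr - addr)) {
            return -1;                   /* ROM does not fit into the window */
        }
        memcpy(dest + (rom_addr - addr), data, len);
        return 0;
    }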

Cc: qemu-stable@nongnu.org
Reported-by: Guangming Liu
Buglink: https://bugs.launchpad.net/qemu/+bug/1844635
Message-Id: <20190925130331.27825-1-thuth@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
(cherry picked from commit e423455c4f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:10:48 -06:00
Adrian Moreno 7b404cae7f vhost-user: save features if the char dev is closed
That way the state can be correctly restored when the device is opened
again. This might happen if the backend is restarted.

Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1738768
Reported-by: Pei Zhang <pezhang@redhat.com>
Fixes: 6ab79a20af ("do not call vhost_net_cleanup() on running net from char user event")
Cc: ddstreet@canonical.com
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Message-Id: <20190924162044.11414-1-amorenoz@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit c6beefd674)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:09:49 -06:00
Kevin Wolf d868d30db6 iotests: Test internal snapshots with -blockdev
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Peter Krempa <pkrempa@redhat.com>
(cherry picked from commit 92b22e7b17)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:07:08 -06:00
Kevin Wolf 7a8aa6c734 block/snapshot: Restrict set of snapshot nodes
Nodes involved in internal snapshots were those that were returned by
bdrv_next(), inserted and not read-only. bdrv_next() in turn returns all
nodes that are either the root node of a BlockBackend or monitor-owned
nodes.

With the typical -drive use, this worked well enough. However, in the
typical -blockdev case, the user defines one node per option, making all
nodes monitor-owned nodes. This includes protocol nodes etc. which often
are not snapshottable, so "savevm" only returns an error.

Change the conditions so that internal snapshots still include all nodes
that have a BlockBackend attached (we definitely want to snapshot
anything attached to a guest device and probably also the built-in NBD
server; snapshotting block job BlockBackends is more of an accident, but
a preexisting one), but other monitor-owned nodes are only included if
they have no parents.
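
A standalone sketch of that selection rule (illustrative types; the real
QEMU predicate may differ in detail):

    #include <stdbool.h>

    /* Illustrative view of a block node for the purpose of this rule. */
    typedef struct Node {
        bool inserted;
        bool read_only;
        bool has_block_backend;  /* guest device, built-in NBD server, job */
        int  num_parents;        /* 0 for a parentless monitor-owned node  */
    } Node;

    /* A node takes part in an internal snapshot if it is writable and
     * either backs a BlockBackend or is a monitor-owned node with no
     * parents. */
    static bool include_in_snapshot(const Node *n)
    {
        if (!n->inserted || n->read_only) {
            return false;
        }
        return n->has_block_backend || n->num_parents == 0;
    }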

This makes internal snapshots usable again with typical -blockdev
configurations.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Tested-by: Peter Krempa <pkrempa@redhat.com>
(cherry picked from commit 05f4aced65)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-11-04 08:06:36 -06:00
Matthew Rosato 331c08d300 s390: PCI: fix IOMMU region init
The fix in dbe9cf606c shrinks the IOMMU memory region to a size
that seems reasonable on the surface, but it is actually too
small, as it is based on a 0-mapped address space.  This
causes breakage with small guests, as they can overrun the IOMMU window.

Let's go back to the prior method of initializing the iommu for now.

Fixes: dbe9cf606c ("s390x/pci: Set the iommu region size mpcifc request")
Cc: qemu-stable@nongnu.org
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reported-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Tested-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reported-by: Stefan Zimmerman <stzi@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Message-Id: <1569507036-15314-1-git-send-email-mjrosato@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
(cherry picked from commit 7df1dac5f1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:42:13 -05:00
Michael Roth fc5afb1a92 roms/Makefile.edk2: don't pull in submodules when building from tarball
Currently the `make efi` target pulls submodules nested under the
roms/edk2 submodule as dependencies. However, when we attempt to build
from a tarball this fails since we are no longer in a git tree.

A preceding patch will pre-populate these submodules in the tarball,
so assume this build dependency is only needed when building from a
git tree.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Bruce Rogers <brogers@suse.com>
Cc: qemu-stable@nongnu.org # v4.1.0
Reported-by: Bruce Rogers <brogers@suse.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-Id: <20190912231202.12327-3-mdroth@linux.vnet.ibm.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
(cherry picked from commit f3e330e3c3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:39:09 -05:00
Michael Roth c5c9b1362d make-release: pull in edk2 submodules so we can build it from tarballs
The `make efi` target added by 536d2173 is built from the roms/edk2
submodule, which in turn relies on additional submodules nested under
roms/edk2.

The make-release script currently only pulls in top-level submodules,
so these nested submodules are missing in the resulting tarball.

We could try to address this situation more generally by recursively
pulling in all submodules, but this doesn't necessarily ensure the
end-result will build properly (this case also required other changes).

Additionally, due to the nature of submodules, we may not always have
control over how these sorts of things are dealt with, so for now we
continue to handle it on a case-by-case in the make-release script.

Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Bruce Rogers <brogers@suse.com>
Cc: qemu-stable@nongnu.org # v4.1.0
Reported-by: Bruce Rogers <brogers@suse.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Message-Id: <20190912231202.12327-2-mdroth@linux.vnet.ibm.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
(cherry picked from commit 45c61c6c23)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:38:57 -05:00
Peter Maydell 220816989c hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
If we're booting a Linux kernel directly into Non-Secure
state on a CPU which has Secure state, then make sure we
set the NSACR CP11 and CP10 bits, so that Non-Secure is allowed
to access the FPU. Otherwise an AArch32 kernel will UNDEF as
soon as it tries to use the FPU.

It used to not matter that we didn't do this until commit
fc1120a7f5, where we implemented actually honouring
these NSACR bits.

The problem only exists for CPUs where EL3 is AArch32; the
equivalent AArch64 trap bits are in CPTR_EL3 and are "0 to
not trap, 1 to trap", so the reset value of the register
permits NS access, unlike NSACR.

Fixes: fc1120a7f5
Fixes: https://bugs.launchpad.net/qemu/+bug/1844597
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190920174039.3916-1-peter.maydell@linaro.org
(cherry picked from commit ece628fcf6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:37:26 -05:00
Vladimir Sementsov-Ogievskiy 783e7eb52c block/backup: fix backup_cow_with_offload for last cluster
We shouldn't try to copy bytes beyond EOF. Fix it.

Fixes: 9ded4a0114
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190920142056.12778-3-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 1048ddf0a3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Vladimir Sementsov-Ogievskiy e01ed1a1ae block/backup: fix max_transfer handling for copy_range
Of course, QEMU_ALIGN_UP is a typo; it should be QEMU_ALIGN_DOWN, as we
are trying to find an aligned size which satisfies both source and target.
Also, don't ignore a too-small max_transfer; in that case it seems safer
to disable copy_range.
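
The difference is plain integer arithmetic; a sketch with macros that
behave like QEMU's QEMU_ALIGN_UP/QEMU_ALIGN_DOWN (illustrative sizes):

    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

    int main(void)
    {
        uint64_t max_transfer = 1 * 1024 * 1024 + 4096;  /* illustrative limit */
        uint64_t cluster      = 64 * 1024;               /* illustrative size  */

        /* Rounding UP exceeds the limit; rounding DOWN stays within it,
         * which is what a size that must satisfy both source and target
         * needs. */
        printf("up=%llu down=%llu\n",
               (unsigned long long)ALIGN_UP(max_transfer, cluster),
               (unsigned long long)ALIGN_DOWN(max_transfer, cluster));
        return 0;
    }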

Fixes: 9ded4a0114
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190920142056.12778-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 981fb5810a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Kevin Wolf 416a692e51 qcow2: Fix corruption bug in qcow2_detect_metadata_preallocation()
qcow2_detect_metadata_preallocation() calls qcow2_get_refcount() which
requires s->lock to be taken to protect its accesses to the refcount
table and refcount blocks. However, nothing in this code path actually
took the lock. This could cause the same cache entry to be used by two
requests at the same time, for different tables at different offsets,
resulting in image corruption.

As it would be preferable to base the detection on consistent data (even
though it's just heuristics), let's take the lock not only around the
qcow2_get_refcount() calls, but around the whole function.

This patch takes the lock in qcow2_co_block_status() earlier and asserts
in qcow2_detect_metadata_preallocation() that we hold the lock.

Fixes: 69f47505ee
Cc: qemu-stable@nongnu.org
Reported-by: Michael Weiser <michael.weiser@gmx.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 5e97855052)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Kevin Wolf e9bb3d942e coroutine: Add qemu_co_mutex_assert_locked()
Some functions require that the caller holds a certain CoMutex for them
to operate correctly. Add a function so that they can assert the lock is
really held.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Tested-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Michael Weiser <michael.weiser@gmx.de>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 944f3d5dd2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Maxim Levitsky 84f22c7285 block/qcow2: Fix corruption introduced by commit 8ac0f15f33
This fixes subtle corruption introduced by luks threaded encryption
in commit 8ac0f15f33

Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1745922

The corruption happens when we do a write that
   * writes to two or more unallocated clusters at once
   * doesn't fully cover the first sector
   * doesn't fully cover the last sector
   * uses luks encryption

In this case, when allocating the new clusters we COW both areas
prior to the write and after the write, and we encrypt them.

The above mentioned commit accidentally made it so we encrypt the
second COW area using the physical cluster offset of the first area.

The problem is that offset_in_cluster in do_perform_cow_encrypt
can be larger than the cluster size, thus cluster_offset
will no longer point to the start of the cluster at which the
encrypted area starts.

Next patch in this series will refactor the code to avoid all these
assumptions.

In the bug report, this was triggered by rebasing a luks image onto a
new, zero-filled base image, which does a lot of such writes and causes
some files with zero areas to contain garbage there instead.
But as described above, it can happen elsewhere as well.

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190915203655.21638-2-mlevitsk@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 38e7d54bdc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Sergio Lopez 86b0f4022b blockjob: update nodes head while removing all bdrv
block_job_remove_all_bdrv() iterates through job->nodes, calling
bdrv_root_unref_child() for each entry. The call to the latter may
reach child_job_[can_]set_aio_ctx(), which will also attempt to
traverse job->nodes, potentially finding entries that were freed
on previous iterations.

To avoid this situation, update job->nodes head on each iteration to
ensure that already freed entries are no longer linked to the list.
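
A standalone sketch of that "unlink before releasing" pattern on a GLib
GSList (illustrative types; not the actual block_job_remove_all_bdrv()
code):

    #include <glib.h>

    typedef struct Child { int dummy; } Child;

    static void child_unref(Child *c)    /* illustrative stand-in */
    {
        g_free(c);
    }

    /* Pop entries off the head one at a time, so that any callback
     * triggered by releasing a child and re-walking the list only ever
     * sees live entries. */
    static void remove_all(GSList **nodes)
    {
        while (*nodes) {
            GSList *l = *nodes;
            Child *c = l->data;

            *nodes = l->next;            /* unlink the entry first        */
            g_slist_free_1(l);           /* ... then free the list cell   */
            child_unref(c);              /* ... and only then drop the child */
        }
    }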

RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1746631
Signed-off-by: Sergio Lopez <slp@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190911100316.32282-1-mreitz@redhat.com
Reviewed-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit d876bf676f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 2d86df1f78 curl: Handle success in multi_check_completion
Background: As of cURL 7.59.0, it verifies that several functions are
not called from within a callback.  Among these functions is
curl_multi_add_handle().

curl_read_cb() is a callback from cURL and not a coroutine.  Waking up
acb->co will lead to entering it then and there, which means the current
request will settle and the caller (if it runs in the same coroutine)
may then issue the next request.  In such a case, we will enter
curl_setup_preadv() effectively from within curl_read_cb().

Calling curl_multi_add_handle() will then fail and the new request will
not be processed.

Fix this by not letting curl_read_cb() wake up acb->co.  Instead, leave
the whole business of settling the AIOCB objects to
curl_multi_check_completion() (which is called from our timer callback
and our FD handler, so not from any cURL callbacks).

Reported-by: Natalie Gavrielov <ngavrilo@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1740193
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-7-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit bfb23b480a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 18e1b71937 curl: Report only ready sockets
Instead of reporting all sockets to cURL, only report the one that has
caused curl_multi_do_locked() to be called.  This lets us get rid of the
QLIST_FOREACH_SAFE() list, which was actually wrong: SAFE foreaches are
only safe when the current element is removed in each iteration.  If it
possible for the list to be concurrently modified, we cannot guarantee
that only the current element will be removed.  Therefore, we must not
use QLIST_FOREACH_SAFE() here.
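
A standalone sketch of why the SAFE variant only tolerates removal of the
current element (plain C list, illustrative):

    #include <stdlib.h>

    typedef struct Node {
        struct Node *next;
    } Node;

    /* FOREACH_SAFE-style iteration: the next pointer is cached before the
     * body runs, so freeing the CURRENT element is fine. */
    static void free_all(Node **head)
    {
        Node *n = *head, *next;
        while (n) {
            next = n->next;   /* cached; still valid if we free only 'n' */
            free(n);
            n = next;
        }
        *head = NULL;
    }
    /* If the body instead called out to code that may unlink and free
     * OTHER elements (for example the cached 'next'), that saved pointer
     * would dangle and the following iteration would be a use-after-free,
     * which is exactly the concurrent-modification case described above. */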

Fixes: ff5ca1664a
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-6-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 9abaf9fc47)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 0888ddac8e curl: Pass CURLSocket to curl_multi_do()
curl_multi_do_locked() currently marks all sockets as ready.  That is
not only inefficient, but in fact unsafe (the loop over them is).  A
follow-up patch will change that, but to do so, curl_multi_do_locked()
needs to
know exactly which socket is ready; and that is accomplished by this
patch here.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-5-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 9dbad87d25)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 4be97ef966 curl: Check completion in curl_multi_do()
While it is more likely that transfers complete after some file
descriptor has data ready to read, we probably should not rely on it.
Better be safe than sorry and call curl_multi_check_completion() in
curl_multi_do(), too, just like it is done in curl_multi_read().

With this change, curl_multi_do() and curl_multi_read() are actually the
same, so drop curl_multi_read() and use curl_multi_do() as the sole FD
handler.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-4-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 948403bcb1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 78ea94e389 curl: Keep *socket until the end of curl_sock_cb()
This does not really change anything, but it makes the code a bit easier
to follow once we use @socket as the opaque pointer for
aio_set_fd_handler().

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-3-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 007f339b10)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Max Reitz 3648493495 curl: Keep pointer to the CURLState in CURLSocket
A follow-up patch will make curl_multi_do() and curl_multi_read() take a
CURLSocket instead of the CURLState.  They still need the latter,
though, so add a pointer to it to the former.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190910124136.10565-2-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 0487861685)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-30 11:34:26 -05:00
Peter Lieven 0694c489cd block/nfs: tear down aio before nfs_close
nfs_close is a sync call from libnfs and has its own event
handler polling on the nfs FD. Avoid having both QEMU and libnfs
interfering here.

CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 601dc65597)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-28 00:03:19 -05:00
Alberto Garcia c9ffb12754 qcow2: Fix the calculation of the maximum L2 cache size
The size of the qcow2 L2 cache defaults to 32 MB, which can be easily
larger than the maximum amount of L2 metadata that the image can have.
For example: with 64 KB clusters the user would need a qcow2 image
with a virtual size of 256 GB in order to have 32 MB of L2 metadata.

Because of that, since commit b749562d98
we forbid the L2 cache to become larger than the maximum amount of L2
metadata for the image, calculated using this formula:

    uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);

The problem with this formula is that the result should be rounded up
to the cluster size because an L2 table on disk always takes one full
cluster.

For example, a 1280 MB qcow2 image with 64 KB clusters needs exactly
160 KB of L2 metadata, but we need 192 KB on disk (3 clusters) even if
the last 32 KB of those are not going to be used.

However QEMU rounds the numbers down and only creates 2 cache tables
(128 KB), which is not enough for the image.

A quick test doing 4KB random writes on a 1280 MB image gives me
around 500 IOPS, while with the correct cache size I get 16K IOPS.
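
A standalone sketch of the corrected arithmetic for the 1280 MB / 64 KB
example above (ALIGN_UP written out inline; illustrative, not the actual
QEMU code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN_UP(x, a)  (((x) + (a) - 1) / (a) * (a))

    int main(void)
    {
        uint64_t cluster_size      = 64 * 1024;
        uint64_t virtual_disk_size = 1280ULL * 1024 * 1024;

        /* One 8-byte L2 entry per cluster, i.e. cluster_size / 8 bytes of
         * guest data per byte of L2 metadata. */
        uint64_t exact   = virtual_disk_size / (cluster_size / 8);  /* 160 KB */
        uint64_t rounded = ALIGN_UP(exact, cluster_size);           /* 192 KB */

        printf("exact=%" PRIu64 " rounded=%" PRIu64 "\n", exact, rounded);
        return 0;
    }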

Cc: qemu-stable@nongnu.org
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b70d08205b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-28 00:00:03 -05:00
Johannes Berg 28a9a3558a libvhost-user: fix SLAVE_SEND_FD handling
It doesn't look like this could possibly work properly, since
VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD is defined as 10, but
dev->protocol_features holds a bitmap. I suppose the peer this
was tested with also supported VHOST_USER_PROTOCOL_F_LOG_SHMFD,
in which case the test would always be false, but nevertheless
the code seems wrong.

Use has_feature() to fix this.
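
For illustration, the problem is treating the bit number as a bit mask; a minimal self-contained sketch, assuming a has_feature() helper equivalent to libvhost-user's:

    #include <stdbool.h>
    #include <stdint.h>

    #define VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD 10

    static bool has_feature(uint64_t features, unsigned int fbit)
    {
        return !!(features & (1ULL << fbit));
    }

    /* Buggy: 10 == 0b1010 used as a mask really tests bits 1 and 3. */
    static bool slave_fd_buggy(uint64_t protocol_features)
    {
        return protocol_features & VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD;
    }

    /* Fixed: test bit 10 of the feature bitmap. */
    static bool slave_fd_fixed(uint64_t protocol_features)
    {
        return has_feature(protocol_features,
                           VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD);
    }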

Fixes: d84599f56c ("libvhost-user: support host notifier")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Message-Id: <20190903200422.11693-1-johannes@sipsolutions.net>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 8726b70b44)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:59:27 -05:00
Peter Maydell 9027d3fba6 target/arm: Don't abort on M-profile exception return in linux-user mode
An attempt to do an exception-return (branch to one of the magic
addresses) in linux-user mode for M-profile should behave like
a normal branch, because linux-user mode is always going to be
in 'handler' mode. This used to work, but we broke it when we added
support for the M-profile security extension in commit d02a8698d7.

In that commit we allowed even handler-mode calls to magic return
values to be checked for and dealt with by causing an
EXCP_EXCEPTION_EXIT exception to be taken, because this is
needed for the FNC_RETURN return-from-non-secure-function-call
handling. For system mode we added a check in do_v7m_exception_exit()
to make any spurious calls from Handler mode behave correctly, but
forgot that linux-user mode would also be affected.

How an attempted return-from-non-secure-function-call in linux-user
mode should be handled is not clear -- on real hardware it would
result in return to secure code (not to the Linux kernel) which
could then handle the error in any way it chose. For QEMU we take
the simple approach of treating this erroneous return the same way
it would be handled on a CPU without the security extensions --
treat it as a normal branch.

The upshot of all this is that for linux-user mode we should never
do any of the bx_excret magic, so the code change is simple.

This ought to be a weird corner case that only affects broken guest
code (because Linux user processes should never be attempting to do
exception returns or NS function returns), except that the code that
assigns addresses in RAM for the process and stack in our linux-user
code does not attempt to avoid this magic address range, so
legitimate code attempting to return to a trampoline routine on the
stack can fall into this case. This change fixes those programs,
but we should also look at restricting the range of memory we
use for M-profile linux-user guests to the area that would be
real RAM in hardware.

Cc: qemu-stable@nongnu.org
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190822131534.16602-1-peter.maydell@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1840922
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 5e5584c89f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:58:01 -05:00
Peter Maydell 38fb634853 target/arm: Free TCG temps in trans_VMOV_64_sp()
The function neon_store_reg32() doesn't free the TCG temp that it
is passed, so the caller must do that. We got this right in most
places but forgot to free the TCG temps in trans_VMOV_64_sp().
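
For illustration, the ownership rule the fix restores looks like this (a sketch, not the literal translator code):

    TCGv_i32 tmp = tcg_temp_new_i32();
    /* ... load the value into tmp ... */
    neon_store_reg32(tmp, a->vm);   /* stores a copy; does not free tmp */
    tcg_temp_free_i32(tmp);         /* so the caller must free it itself */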

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190827121931.26836-1-peter.maydell@linaro.org
(cherry picked from commit 342d27581b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:57:16 -05:00
Max Reitz ad95e0573e iotests: Test blockdev-create for vpc
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit cb73747e1a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:55:44 -05:00
Max Reitz 593beeaf81 iotests: Restrict nbd Python tests to nbd
We have two Python unittest-style tests that test NBD.  As such, they
should specify supported_protocols=['nbd'] so they are skipped when the
user wants to test some other protocol.

Furthermore, we should restrict their choice of formats to 'raw'.  The
idea of a protocol/format combination is to use some format over some
protocol; but we always use the raw format over NBD.  It does not really
matter what the NBD server uses on its end, and it is not a useful test
of the respective format driver anyway.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 7c932a1d69)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:55:39 -05:00
Max Reitz eee776fbc0 iotests: Restrict file Python tests to file
Most of our Python unittest-style tests only support the file protocol.
You can run them with any other protocol, but the test will simply
ignore your choice and use file anyway.

We should let them signal that they require the file protocol so they
are skipped when you want to test some other protocol.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 103cbc771e)
 Conflicts:
	tests/qemu-iotests/257
*drop changes for tests not in 4.1.0
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:55:10 -05:00
Max Reitz 819ba23575 iotests: Add supported protocols to execute_test()
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 88d2aa533a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:53:52 -05:00
John Snow 4d9bdd3149 iotests: add testing shim for script-style python tests
Because the new-style python tests don't use the iotests.main() test
launcher, we don't turn on the debugger logging for these scripts
when invoked via ./check -d.

Refactor the launcher shim into new and old style shims so that they
share environmental configuration.

Two cleanup notes: debug was not actually used as a global, and there
was no reason to create a class in an inner scope just to achieve
default variables; we can simply create an instance of the runner with
the values we want instead.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-14-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 456a2d5ac7)
*prereq for 88d2aa533a
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:53:35 -05:00
Max Reitz 97c478c355 vpc: Return 0 from vpc_co_create() on success
blockdev_create_run() directly uses .bdrv_co_create()'s return value as
the job's return value.  Jobs must return 0 on success, not just any
nonnegative value.  Therefore, using blockdev-create for VPC images may
currently fail as the vpc driver may return a positive integer.

Because there is no point in returning a positive integer anywhere in
the block layer (all non-negative integers are generally treated as
complete success), we probably do not want to add more such cases.
Therefore, fix this problem by making the vpc driver always return 0 in
case of success.
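
For illustration, the change boils down to not forwarding a positive value as the function's result (a sketch mirroring the vpc diff at the end of this series, assuming blk_pwrite() returned a positive byte count on success at the time):

    ret = blk_pwrite(blk, offset, buf, size, 0);
    if (ret < 0) {
        return ret;   /* propagate real errors */
    }
    return 0;         /* success: never return the positive byte count */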

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 1a37e31244)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:50:55 -05:00
Igor Mammedov 725dfa851f x86: do not advertise die-id in query-hotpluggable-cpus if '-smp dies' is not set
Commit 176d2cda0 (i386/cpu: Consolidate die-id validity in smp context) added a
new 'die-id' topology property to CPUs and exposed it via the QMP command
query-hotpluggable-cpus, which broke -device/device_add cpu-foo for existing
users that do not support die-id/dies yet. That would be fine if it only
happened to new machine types, but it also happened to old machine types,
which breaks migration from old QEMU to the new one, for example the following CLI:

  OLD-QEMU -M pc-i440fx-4.0 -smp 1,max_cpus=2 \
           -device qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id
is not able to start with new QEMU, complaining about invalid die-id.

After discovering regression, the patch
   "pc: Don't make die-id mandatory unless necessary"
makes die-id optional so old CLI would work.

However, it's not enough, as new QEMU still exposes die-id via the query-hotpluggable-cpus
QMP command, so users that started an old machine type on new QEMU, using all
properties (including die-id) received from the QMP command (as required), won't be
able to start old QEMU with the same properties, since it doesn't support die-id.

Fix it by hiding die-id in query-hotpluggable-cpus for all machine types in case
'-smp dies' is not provided on the CLI or '-smp dies=1' is used, in which case smp_dies == 1
and the APIC ID is calculated in the default way (as it was before DIE support), so we won't
need compat code, as in both cases the topology provided to the guest via CPUID is the same.
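
For illustration only, the shape of the fix is to mark die-id as present in the possible-CPUs list only when more than one die was configured; a rough sketch with assumed field names (this hunk is not part of the diffs shown below):

    /* In the x86 possible_cpu_arch_ids() hook; names assumed for illustration. */
    props->has_die_id = smp_dies > 1;   /* hide die-id when '-smp dies' is 1/unset */
    if (props->has_die_id) {
        props->die_id = die_id;
    }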

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20190902120222.6179-1-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit c6c1bb89fb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:50:01 -05:00
Markus Armbruster 57fdf4a13f pr-manager: Fix invalid g_free() crash bug
pr_manager_worker() passes its @opaque argument to g_free().  Wrong;
it points to pr_manager_worker()'s automatic @data.  Broken when
commit 2f3a7ab39b converted @data from heap- to stack-allocated.  Fix
by deleting the g_free().
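
For illustration, the bug pattern is handing a pointer to a stack variable to g_free(); a minimal generic sketch (not the actual pr-manager code):

    typedef struct { int dummy; } PRData;   /* stand-in for the worker's data */

    static void worker(void *opaque)
    {
        PRData *data = opaque;
        /* ... perform the request ... */
        g_free(data);             /* only valid while data is heap-allocated */
    }

    static void caller(void)
    {
        PRData data = { 0 };      /* automatic (stack) storage since 2f3a7ab39b */
        worker(&data);            /* g_free(&data) is undefined behaviour */
    }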

Fixes: 2f3a7ab39b
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6b9d62c2a9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:47:31 -05:00
Max Reitz 3361d03ff0 iotests: Test reverse sub-cluster qcow2 writes
This exercises the regression introduced in commit
50ba5b2d99.  On my machine, it has close
to a 50 % false-negative rate, but that should still be sufficient to
test the fix.

Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: Stefano Garzarella <sgarzare@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ae6ef01909)
 Conflicts:
	tests/qemu-iotests/group
*fix context deps on tests not in 4.1.0
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:46:09 -05:00
Max Reitz 6f1a94035b block/file-posix: Reduce xfsctl() use
This patch removes xfs_write_zeroes() and xfs_discard().  Both functions
were added just before the same feature became available through
fallocate():

- fallocate() has supported PUNCH_HOLE for XFS since Linux 2.6.38 (March
  2011); xfs_discard() was added in December 2010.

- fallocate() has supported ZERO_RANGE for XFS since Linux 3.15 (June
  2014); xfs_write_zeroes() was added in November 2013.

Nowadays, all systems that qemu runs on should support both fallocate()
features (RHEL 7's kernel does).

xfsctl() is still useful for getting the request alignment for O_DIRECT,
so this patch does not remove our dependency on it completely.

Note that xfs_write_zeroes() had a bug: It calls ftruncate() when the
file is shorter than the specified range (because ZERO_RANGE does not
increase the file length).  ftruncate() may yield and then discard data
that parallel write requests have written past the EOF in the meantime.
Dropping the function altogether fixes the bug.
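
For illustration, the fallocate() equivalents that replace the two xfsctl() helpers look roughly like this (a sketch; the QEMU code goes through its do_fallocate() wrapper, visible in the file-posix diff further below):

    /* Zero a range in place (replaces xfs_write_zeroes()). */
    ret = fallocate(fd, FALLOC_FL_ZERO_RANGE, offset, bytes);

    /* Punch a hole so the range reads back as zeroes (replaces xfs_discard()). */
    ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, offset, bytes);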

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Fixes: 50ba5b2d99
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: Stefano Garzarella <sgarzare@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b2c6f23f4a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:45:18 -05:00
Paul Durrant c12adfd8f6 xen-bus: check whether the frontend is active during device reset...
...not the backend

Commit cb323146 "xen-bus: Fix backend state transition on device reset"
contained a subtle mistake. The hunk

@@ -539,11 +556,11 @@ static void xen_device_backend_changed(void *opaque)

     /*
      * If the toolstack (or unplug request callback) has set the backend
-     * state to Closing, but there is no active frontend (i.e. the
-     * state is not Connected) then set the backend state to Closed.
+     * state to Closing, but there is no active frontend then set the
+     * backend state to Closed.
      */
     if (xendev->backend_state == XenbusStateClosing &&
-        xendev->frontend_state != XenbusStateConnected) {
+        !xen_device_state_is_active(state)) {
         xen_device_backend_set_state(xendev, XenbusStateClosed);
     }

mistakenly replaced the check of 'xendev->frontend_state' with a check
(now in a helper function) of 'state', which actually equates to
'xendev->backend_state'.

This patch fixes the mistake.

Fixes: cb32314607
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Message-Id: <20190910171753.3775-1-paul.durrant@citrix.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
(cherry picked from commit df6180bb56)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:43:19 -05:00
Anthony PERARD b6cedc911e xen-bus: Fix backend state transition on device reset
When a frontend wants to reset its own state and the backend's, it
starts by setting "Closing", then waits for the backend (QEMU) to do
the same.

But when QEMU sets its own state to "Closing", it triggers an event
(xenstore watch) that re-executes xen_device_backend_changed() and sets
the backend state to "Closed". QEMU should wait for the frontend to
set "Closed" before doing the same.

Before setting "Closed" to the backend_state, we are also going to
check if there is a frontend. If that the case, when the backend state
is set to "Closing" the frontend should react and sets its state to
"Closing" then "Closed". The backend should wait for that to happen.

Fixes: b6af8926fb
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Message-Id: <20190823101534.465-2-anthony.perard@citrix.com>
(cherry picked from commit cb32314607)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:42:16 -05:00
Eduardo Habkost 7ebcd375ad pc: Don't make die-id mandatory unless necessary
We have this issue reported when using libvirt to hotplug CPUs:
https://bugzilla.redhat.com/show_bug.cgi?id=1741451

Basically, libvirt is not copying die-id from
query-hotpluggable-cpus, but die-id is now mandatory.

We could blame libvirt and say it is not following the documented
interface, because we have this buried in the QAPI schema
documentation:

> Note: currently there are 5 properties that could be present
> but management should be prepared to pass through other
> properties with device_add command to allow for future
> interface extension. This also requires the field names to be kept in
> sync with the properties passed to -device/device_add.

But I don't think this would be reasonable of us.  We can just
make QEMU more flexible and let die-id be omitted when there's
no ambiguity.  This will allow us to keep compatibility with
existing libvirt versions.

Test case included to ensure we don't break this again.

Fixes: commit 176d2cda0d ("i386/cpu: Consolidate die-id validity in smp context")
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20190816170750.23910-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit fea374e7c8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:41:00 -05:00
Aurelien Jarno 4bfd496be3 target/alpha: fix tlb_fill trap_arg2 value for instruction fetch
Commit e41c945297 ("target/alpha: Convert to CPUClass::tlb_fill")
slightly changed the way the trap_arg2 value is computed in case of TLB
fill. The type of the variable used in the ternary operator has been
changed from an int to an enum. This causes the -1 value to not be
sign-extended to 64-bit in case of an instruction fetch. The trap_arg2
ends up with 0xffffffff instead of 0xffffffffffffffff. Fix that by
changing the -1 into -1LL.

This fixes the execution of user space processes in qemu-system-alpha.
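
For illustration, a self-contained sketch of the promotion pitfall (not the actual target/alpha code): when the compiler gives the enum an unsigned underlying type, the ternary's common type is unsigned int, so -1 is truncated to 32 bits before it is widened into the 64-bit destination.

    #include <stdint.h>
    #include <stdio.h>

    typedef enum { MMU_DATA_LOAD, MMU_DATA_STORE, MMU_INST_FETCH } MMUAccessType;

    int main(void)
    {
        MMUAccessType access_type = MMU_INST_FETCH;

        /* Buggy: with an unsigned enum type, -1 becomes 0xffffffff here. */
        uint64_t arg2_bug = access_type == MMU_INST_FETCH ? -1 : access_type;

        /* Fixed: -1LL forces a signed 64-bit common type, so the value ends
         * up as 0xffffffffffffffff in the uint64_t destination. */
        uint64_t arg2_fix = access_type == MMU_INST_FETCH ? -1LL : access_type;

        printf("%llx\n%llx\n", (unsigned long long)arg2_bug,
               (unsigned long long)arg2_fix);
        return 0;
    }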

Fixes: e41c945297
Cc: qemu-stable@nongnu.org
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[rth: Test MMU_DATA_LOAD and MMU_DATA_STORE instead of implying them.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit cb1de55a83)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:31:56 -05:00
David Hildenbrand 499a5d6bb4 s390x/tcg: Fix VERIM with 32/64 bit elements
Wrong order of operands. The constant always comes last. Makes QEMU crash
reliably on specific git fetch invocations.

Reported-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190814151242.27199-1-david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Fixes: 5c4b0ab460 ("s390x/tcg: Implement VECTOR ELEMENT ROTATE AND INSERT UNDER MASK")
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
(cherry picked from commit 25bcb45d1b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:31:08 -05:00
John Snow 73a5bf4729 Revert "ide/ahci: Check for -ECANCELED in aio callbacks"
This reverts commit 0d910cfeaf.

It's not correct to just ignore an error code in a callback; we need to
handle that error and possibly report failure to the guest so that it
doesn't wait indefinitely for an operation that will now never finish.

This ought to help cases reported by Nutanix where iSCSI returns a
legitimate -ECANCELED for certain operations which should be propagated
normally.

Reported-by: Shaju Abraham <shaju.abraham@nutanix.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20190729223605.7163-1-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 8ec41c4265)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:28:24 -05:00
Paolo Bonzini fbde196c30 dma-helpers: ensure AIO callback is invoked after cancellation
dma_aio_cancel unschedules the BH if there is one, which corresponds
to the reschedule_dma case of dma_blk_cb.  This can stall the DMA
permanently, because dma_complete will never get invoked and therefore
nobody will ever invoke the original AIO callback in dbs->common.cb.

Fix this by invoking the callback (which is ensured to happen after
a bdrv_aio_cancel_async, or done manually in the dbs->bh case), and
add assertions to check that the DMA state machine is indeed waiting
for dma_complete or reschedule_dma, but never both.

Reported-by: John Snow <jsnow@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 20190729213416.1972-1-pbonzini@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 539343c0a4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2019-10-27 23:28:10 -05:00
105 changed files with 5618 additions and 4000 deletions

View File

@ -1 +1 @@
4.1.0
4.1.1

View File

@ -167,7 +167,7 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
assert(QEMU_IS_ALIGNED(start, job->cluster_size));
nbytes = MIN(job->copy_range_size, end - start);
nbytes = MIN(job->copy_range_size, MIN(end, job->len) - start);
nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
hbitmap_reset(job->copy_bitmap, start, job->cluster_size * nr_clusters);
ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
@ -657,12 +657,19 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
job->cluster_size = cluster_size;
job->copy_bitmap = copy_bitmap;
copy_bitmap = NULL;
job->use_copy_range = !compress; /* compression isn't supported for it */
job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
blk_get_max_transfer(job->target));
job->copy_range_size = MAX(job->cluster_size,
QEMU_ALIGN_UP(job->copy_range_size,
job->cluster_size));
job->copy_range_size = QEMU_ALIGN_DOWN(job->copy_range_size,
job->cluster_size);
/*
* Set use_copy_range, consider the following:
* 1. Compression is not supported for copy_range.
* 2. copy_range does not respect max_transfer (it's a TODO), so we factor
* that in here. If max_transfer is smaller than the job->cluster_size,
* we do not use copy_range (in that case it's zero after aligning down
* above).
*/
job->use_copy_range = !compress && job->copy_range_size > 0;
/* Required permissions are already taken with target's blk_new() */
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,

View File

@ -63,9 +63,13 @@ void qmp_blockdev_create(const char *job_id, BlockdevCreateOptions *options,
const char *fmt = BlockdevDriver_str(options->driver);
BlockDriver *drv = bdrv_find_format(fmt);
if (!drv) {
error_setg(errp, "Block driver '%s' not found or not supported", fmt);
return;
}
/* If the driver is in the schema, we know that it exists. But it may not
* be whitelisted. */
assert(drv);
if (bdrv_uses_whitelist() && !bdrv_is_whitelisted(drv, false)) {
error_setg(errp, "Driver is not whitelisted");
return;

View File

@ -80,6 +80,7 @@ static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
#define CURL_BLOCK_OPT_TIMEOUT_DEFAULT 5
struct BDRVCURLState;
struct CURLState;
static bool libcurl_initialized;
@ -97,6 +98,7 @@ typedef struct CURLAIOCB {
typedef struct CURLSocket {
int fd;
struct CURLState *state;
QLIST_ENTRY(CURLSocket) next;
} CURLSocket;
@ -137,7 +139,6 @@ typedef struct BDRVCURLState {
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
/* Called from curl_multi_do_locked, with s->mutex held. */
@ -170,33 +171,29 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
QLIST_FOREACH(socket, &state->sockets, next) {
if (socket->fd == fd) {
if (action == CURL_POLL_REMOVE) {
QLIST_REMOVE(socket, next);
g_free(socket);
}
break;
}
}
if (!socket) {
socket = g_new0(CURLSocket, 1);
socket->fd = fd;
socket->state = state;
QLIST_INSERT_HEAD(&state->sockets, socket, next);
}
socket = NULL;
trace_curl_sock_cb(action, (int)fd);
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, NULL, NULL, state);
curl_multi_do, NULL, NULL, socket);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, false,
NULL, curl_multi_do, NULL, state);
NULL, curl_multi_do, NULL, socket);
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, false,
curl_multi_read, curl_multi_do, NULL, state);
curl_multi_do, curl_multi_do, NULL, socket);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, false,
@ -204,6 +201,11 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
break;
}
if (action == CURL_POLL_REMOVE) {
QLIST_REMOVE(socket, next);
g_free(socket);
}
return 0;
}
@ -227,7 +229,6 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
{
CURLState *s = ((CURLState*)opaque);
size_t realsize = size * nmemb;
int i;
trace_curl_read_cb(realsize);
@ -243,32 +244,6 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
memcpy(s->orig_buf + s->buf_off, ptr, realsize);
s->buf_off += realsize;
for(i=0; i<CURL_NUM_ACB; i++) {
CURLAIOCB *acb = s->acb[i];
if (!acb)
continue;
if ((s->buf_off >= acb->end)) {
size_t request_length = acb->bytes;
qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
acb->end - acb->start);
if (acb->end - acb->start < request_length) {
size_t offset = acb->end - acb->start;
qemu_iovec_memset(acb->qiov, offset, 0,
request_length - offset);
}
acb->ret = 0;
s->acb[i] = NULL;
qemu_mutex_unlock(&s->s->mutex);
aio_co_wake(acb->co);
qemu_mutex_lock(&s->s->mutex);
}
}
read_end:
/* curl will error out if we do not return this value */
return size * nmemb;
@ -349,13 +324,14 @@ static void curl_multi_check_completion(BDRVCURLState *s)
break;
if (msg->msg == CURLMSG_DONE) {
int i;
CURLState *state = NULL;
bool error = msg->data.result != CURLE_OK;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
if (error) {
static int errcount = 100;
/* Don't lose the original error message from curl, since
@ -367,20 +343,35 @@ static void curl_multi_check_completion(BDRVCURLState *s)
error_report("curl: further errors suppressed");
}
}
}
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->ret = -EIO;
state->acb[i] = NULL;
qemu_mutex_unlock(&s->mutex);
aio_co_wake(acb->co);
qemu_mutex_lock(&s->mutex);
if (acb == NULL) {
continue;
}
if (!error) {
/* Assert that we have read all data */
assert(state->buf_off >= acb->end);
qemu_iovec_from_buf(acb->qiov, 0,
state->orig_buf + acb->start,
acb->end - acb->start);
if (acb->end - acb->start < acb->bytes) {
size_t offset = acb->end - acb->start;
qemu_iovec_memset(acb->qiov, offset, 0,
acb->bytes - offset);
}
}
acb->ret = error ? -EIO : 0;
state->acb[i] = NULL;
qemu_mutex_unlock(&s->mutex);
aio_co_wake(acb->co);
qemu_mutex_lock(&s->mutex);
}
curl_clean_state(state);
@ -390,42 +381,30 @@ static void curl_multi_check_completion(BDRVCURLState *s)
}
/* Called with s->mutex held. */
static void curl_multi_do_locked(CURLState *s)
static void curl_multi_do_locked(CURLSocket *socket)
{
CURLSocket *socket, *next_socket;
BDRVCURLState *s = socket->state->s;
int running;
int r;
if (!s->s->multi) {
if (!s->multi) {
return;
}
/* Need to use _SAFE because curl_multi_socket_action() may trigger
* curl_sock_cb() which might modify this list */
QLIST_FOREACH_SAFE(socket, &s->sockets, next, next_socket) {
do {
r = curl_multi_socket_action(s->s->multi, socket->fd, 0, &running);
} while (r == CURLM_CALL_MULTI_PERFORM);
}
do {
r = curl_multi_socket_action(s->multi, socket->fd, 0, &running);
} while (r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
CURLSocket *socket = arg;
BDRVCURLState *s = socket->state->s;
qemu_mutex_lock(&s->s->mutex);
curl_multi_do_locked(s);
qemu_mutex_unlock(&s->s->mutex);
}
static void curl_multi_read(void *arg)
{
CURLState *s = (CURLState *)arg;
qemu_mutex_lock(&s->s->mutex);
curl_multi_do_locked(s);
curl_multi_check_completion(s->s);
qemu_mutex_unlock(&s->s->mutex);
qemu_mutex_lock(&s->mutex);
curl_multi_do_locked(socket);
curl_multi_check_completion(s);
qemu_mutex_unlock(&s->mutex);
}
static void curl_multi_timeout_do(void *arg)

View File

@ -323,6 +323,7 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
BDRVRawState *s = bs->opaque;
char *buf;
size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
size_t alignments[] = {1, 512, 1024, 2048, 4096};
/* For SCSI generic devices the alignment is not really used.
With buffered I/O, we don't have any restrictions. */
@ -349,25 +350,38 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
}
#endif
/* If we could not get the sizes so far, we can only guess them */
if (!s->buf_align) {
/*
* If we could not get the sizes so far, we can only guess them. First try
* to detect request alignment, since it is more likely to succeed. Then
* try to detect buf_align, which cannot be detected in some cases (e.g.
* Gluster). If buf_align cannot be detected, we fallback to the value of
* request_alignment.
*/
if (!bs->bl.request_alignment) {
int i;
size_t align;
buf = qemu_memalign(max_align, 2 * max_align);
for (align = 512; align <= max_align; align <<= 1) {
if (raw_is_io_aligned(fd, buf + align, max_align)) {
s->buf_align = align;
buf = qemu_memalign(max_align, max_align);
for (i = 0; i < ARRAY_SIZE(alignments); i++) {
align = alignments[i];
if (raw_is_io_aligned(fd, buf, align)) {
/* Fallback to safe value. */
bs->bl.request_alignment = (align != 1) ? align : max_align;
break;
}
}
qemu_vfree(buf);
}
if (!bs->bl.request_alignment) {
if (!s->buf_align) {
int i;
size_t align;
buf = qemu_memalign(s->buf_align, max_align);
for (align = 512; align <= max_align; align <<= 1) {
if (raw_is_io_aligned(fd, buf, align)) {
bs->bl.request_alignment = align;
buf = qemu_memalign(max_align, 2 * max_align);
for (i = 0; i < ARRAY_SIZE(alignments); i++) {
align = alignments[i];
if (raw_is_io_aligned(fd, buf + align, max_align)) {
/* Fallback to request_aligment. */
s->buf_align = (align != 1) ? align : bs->bl.request_alignment;
break;
}
}
@ -1445,59 +1459,6 @@ out:
}
}
#ifdef CONFIG_XFS
static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
int64_t len;
struct xfs_flock64 fl;
int err;
len = lseek(s->fd, 0, SEEK_END);
if (len < 0) {
return -errno;
}
if (offset + bytes > len) {
/* XFS_IOC_ZERO_RANGE does not increase the file length */
if (ftruncate(s->fd, offset + bytes) < 0) {
return -errno;
}
}
memset(&fl, 0, sizeof(fl));
fl.l_whence = SEEK_SET;
fl.l_start = offset;
fl.l_len = bytes;
if (xfsctl(NULL, s->fd, XFS_IOC_ZERO_RANGE, &fl) < 0) {
err = errno;
trace_file_xfs_write_zeroes(strerror(errno));
return -err;
}
return 0;
}
static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
{
struct xfs_flock64 fl;
int err;
memset(&fl, 0, sizeof(fl));
fl.l_whence = SEEK_SET;
fl.l_start = offset;
fl.l_len = bytes;
if (xfsctl(NULL, s->fd, XFS_IOC_UNRESVSP64, &fl) < 0) {
err = errno;
trace_file_xfs_discard(strerror(errno));
return -err;
}
return 0;
}
#endif
static int translate_err(int err)
{
if (err == -ENODEV || err == -ENOSYS || err == -EOPNOTSUPP ||
@ -1553,10 +1514,8 @@ static ssize_t handle_aiocb_write_zeroes_block(RawPosixAIOData *aiocb)
static int handle_aiocb_write_zeroes(void *opaque)
{
RawPosixAIOData *aiocb = opaque;
#if defined(CONFIG_FALLOCATE) || defined(CONFIG_XFS)
BDRVRawState *s = aiocb->bs->opaque;
#endif
#ifdef CONFIG_FALLOCATE
BDRVRawState *s = aiocb->bs->opaque;
int64_t len;
#endif
@ -1564,12 +1523,6 @@ static int handle_aiocb_write_zeroes(void *opaque)
return handle_aiocb_write_zeroes_block(aiocb);
}
#ifdef CONFIG_XFS
if (s->is_xfs) {
return xfs_write_zeroes(s, aiocb->aio_offset, aiocb->aio_nbytes);
}
#endif
#ifdef CONFIG_FALLOCATE_ZERO_RANGE
if (s->has_write_zeroes) {
int ret = do_fallocate(s->fd, FALLOC_FL_ZERO_RANGE,
@ -1632,14 +1585,6 @@ static int handle_aiocb_write_zeroes_unmap(void *opaque)
}
#endif
#ifdef CONFIG_XFS
if (s->is_xfs) {
/* xfs_discard() guarantees that the discarded area reads as all-zero
* afterwards, so we can use it here. */
return xfs_discard(s, aiocb->aio_offset, aiocb->aio_nbytes);
}
#endif
/* If we couldn't manage to unmap while guaranteed that the area reads as
* all-zero afterwards, just write zeroes without unmapping */
ret = handle_aiocb_write_zeroes(aiocb);
@ -1716,12 +1661,6 @@ static int handle_aiocb_discard(void *opaque)
ret = -errno;
#endif
} else {
#ifdef CONFIG_XFS
if (s->is_xfs) {
return xfs_discard(s, aiocb->aio_offset, aiocb->aio_nbytes);
}
#endif
#ifdef CONFIG_FALLOCATE_PUNCH_HOLE
ret = do_fallocate(s->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
aiocb->aio_offset, aiocb->aio_nbytes);
@ -1735,6 +1674,43 @@ static int handle_aiocb_discard(void *opaque)
return ret;
}
/*
* Help alignment probing by allocating the first block.
*
* When reading with direct I/O from unallocated area on Gluster backed by XFS,
* reading succeeds regardless of request length. In this case we fallback to
* safe alignment which is not optimal. Allocating the first block avoids this
* fallback.
*
* fd may be opened with O_DIRECT, but we don't know the buffer alignment or
* request alignment, so we use safe values.
*
* Returns: 0 on success, -errno on failure. Since this is an optimization,
* caller may ignore failures.
*/
static int allocate_first_block(int fd, size_t max_size)
{
size_t write_size = (max_size < MAX_BLOCKSIZE)
? BDRV_SECTOR_SIZE
: MAX_BLOCKSIZE;
size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
void *buf;
ssize_t n;
int ret;
buf = qemu_memalign(max_align, write_size);
memset(buf, 0, write_size);
do {
n = pwrite(fd, buf, write_size, 0);
} while (n == -1 && errno == EINTR);
ret = (n == -1) ? -errno : 0;
qemu_vfree(buf);
return ret;
}
static int handle_aiocb_truncate(void *opaque)
{
RawPosixAIOData *aiocb = opaque;
@ -1774,6 +1750,17 @@ static int handle_aiocb_truncate(void *opaque)
/* posix_fallocate() doesn't set errno. */
error_setg_errno(errp, -result,
"Could not preallocate new data");
} else if (current_length == 0) {
/*
* posix_fallocate() uses fallocate() if the filesystem
* supports it, or fallback to manually writing zeroes. If
* fallocate() was used, unaligned reads from the fallocated
* area in raw_probe_alignment() will succeed, hence we need to
* allocate the first block.
*
* Optimize future alignment probing; ignore failures.
*/
allocate_first_block(fd, offset);
}
} else {
result = 0;
@ -1835,6 +1822,9 @@ static int handle_aiocb_truncate(void *opaque)
if (ftruncate(fd, offset) != 0) {
result = -errno;
error_setg_errno(errp, -result, "Could not resize file");
} else if (current_length == 0 && offset > current_length) {
/* Optimize future alignment probing; ignore failures. */
allocate_first_block(fd, offset);
}
return result;
default:
@ -2698,6 +2688,42 @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int bytes,
RawPosixAIOData acb;
ThreadPoolFunc *handler;
#ifdef CONFIG_FALLOCATE
if (offset + bytes > bs->total_sectors * BDRV_SECTOR_SIZE) {
BdrvTrackedRequest *req;
uint64_t end;
/*
* This is a workaround for a bug in the Linux XFS driver,
* where writes submitted through the AIO interface will be
* discarded if they happen beyond a concurrently running
* fallocate() that increases the file length (i.e., both the
* write and the fallocate() happen beyond the EOF).
*
* To work around it, we extend the tracked request for this
* zero write until INT64_MAX (effectively infinity), and mark
* it as serializing.
*
* We have to enable this workaround for all filesystems and
* AIO modes (not just XFS with aio=native), because for
* remote filesystems we do not know the host configuration.
*/
req = bdrv_co_get_self_request(bs);
assert(req);
assert(req->type == BDRV_TRACKED_WRITE);
assert(req->offset <= offset);
assert(req->offset + req->bytes >= offset + bytes);
end = INT64_MAX & -(uint64_t)bs->bl.request_alignment;
req->bytes = end - req->offset;
req->overlap_bytes = req->bytes;
bdrv_mark_request_serialising(req, bs->bl.request_alignment);
bdrv_wait_serialising_requests(req);
}
#endif
acb = (RawPosixAIOData) {
.bs = bs,
.aio_fildes = s->fd,

View File

@ -694,7 +694,7 @@ static void tracked_request_begin(BdrvTrackedRequest *req,
qemu_co_mutex_unlock(&bs->reqs_lock);
}
static void mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
void bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align)
{
int64_t overlap_offset = req->offset & ~(align - 1);
uint64_t overlap_bytes = ROUND_UP(req->offset + req->bytes, align)
@ -721,6 +721,24 @@ static bool is_request_serialising_and_aligned(BdrvTrackedRequest *req)
(req->bytes == req->overlap_bytes);
}
/**
* Return the tracked request on @bs for the current coroutine, or
* NULL if there is none.
*/
BdrvTrackedRequest *coroutine_fn bdrv_co_get_self_request(BlockDriverState *bs)
{
BdrvTrackedRequest *req;
Coroutine *self = qemu_coroutine_self();
QLIST_FOREACH(req, &bs->tracked_requests, list) {
if (req->co == self) {
return req;
}
}
return NULL;
}
/**
* Round a region to cluster boundaries
*/
@ -784,7 +802,7 @@ void bdrv_dec_in_flight(BlockDriverState *bs)
bdrv_wakeup(bs);
}
static bool coroutine_fn wait_serialising_requests(BdrvTrackedRequest *self)
bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self)
{
BlockDriverState *bs = self->bs;
BdrvTrackedRequest *req;
@ -1340,14 +1358,14 @@ static int coroutine_fn bdrv_aligned_preadv(BdrvChild *child,
* with each other for the same cluster. For example, in copy-on-read
* it ensures that the CoR read and write operations are atomic and
* guest writes cannot interleave between them. */
mark_request_serialising(req, bdrv_get_cluster_size(bs));
bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
}
/* BDRV_REQ_SERIALISING is only for write operation */
assert(!(flags & BDRV_REQ_SERIALISING));
if (!(flags & BDRV_REQ_NO_SERIALISING)) {
wait_serialising_requests(req);
bdrv_wait_serialising_requests(req);
}
if (flags & BDRV_REQ_COPY_ON_READ) {
@ -1408,28 +1426,177 @@ out:
}
/*
* Handle a read request in coroutine context
* Request padding
*
* |<---- align ----->| |<----- align ---->|
* |<- head ->|<------------- bytes ------------->|<-- tail -->|
* | | | | | |
* -*----------$-------*-------- ... --------*-----$------------*---
* | | | | | |
* | offset | | end |
* ALIGN_DOWN(offset) ALIGN_UP(offset) ALIGN_DOWN(end) ALIGN_UP(end)
* [buf ... ) [tail_buf )
*
* @buf is an aligned allocation needed to store @head and @tail paddings. @head
* is placed at the beginning of @buf and @tail at the @end.
*
* @tail_buf is a pointer to sub-buffer, corresponding to align-sized chunk
* around tail, if tail exists.
*
* @merge_reads is true for small requests,
* if @buf_len == @head + bytes + @tail. In this case it is possible that both
* head and tail exist but @buf_len == align and @tail_buf == @buf.
*/
typedef struct BdrvRequestPadding {
uint8_t *buf;
size_t buf_len;
uint8_t *tail_buf;
size_t head;
size_t tail;
bool merge_reads;
QEMUIOVector local_qiov;
} BdrvRequestPadding;
static bool bdrv_init_padding(BlockDriverState *bs,
int64_t offset, int64_t bytes,
BdrvRequestPadding *pad)
{
uint64_t align = bs->bl.request_alignment;
size_t sum;
memset(pad, 0, sizeof(*pad));
pad->head = offset & (align - 1);
pad->tail = ((offset + bytes) & (align - 1));
if (pad->tail) {
pad->tail = align - pad->tail;
}
if ((!pad->head && !pad->tail) || !bytes) {
return false;
}
sum = pad->head + bytes + pad->tail;
pad->buf_len = (sum > align && pad->head && pad->tail) ? 2 * align : align;
pad->buf = qemu_blockalign(bs, pad->buf_len);
pad->merge_reads = sum == pad->buf_len;
if (pad->tail) {
pad->tail_buf = pad->buf + pad->buf_len - align;
}
return true;
}
static int bdrv_padding_rmw_read(BdrvChild *child,
BdrvTrackedRequest *req,
BdrvRequestPadding *pad,
bool zero_middle)
{
QEMUIOVector local_qiov;
BlockDriverState *bs = child->bs;
uint64_t align = bs->bl.request_alignment;
int ret;
assert(req->serialising && pad->buf);
if (pad->head || pad->merge_reads) {
uint64_t bytes = pad->merge_reads ? pad->buf_len : align;
qemu_iovec_init_buf(&local_qiov, pad->buf, bytes);
if (pad->head) {
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_HEAD);
}
if (pad->merge_reads && pad->tail) {
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_TAIL);
}
ret = bdrv_aligned_preadv(child, req, req->overlap_offset, bytes,
align, &local_qiov, 0);
if (ret < 0) {
return ret;
}
if (pad->head) {
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_HEAD);
}
if (pad->merge_reads && pad->tail) {
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_TAIL);
}
if (pad->merge_reads) {
goto zero_mem;
}
}
if (pad->tail) {
qemu_iovec_init_buf(&local_qiov, pad->tail_buf, align);
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_TAIL);
ret = bdrv_aligned_preadv(
child, req,
req->overlap_offset + req->overlap_bytes - align,
align, align, &local_qiov, 0);
if (ret < 0) {
return ret;
}
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_TAIL);
}
zero_mem:
if (zero_middle) {
memset(pad->buf + pad->head, 0, pad->buf_len - pad->head - pad->tail);
}
return 0;
}
static void bdrv_padding_destroy(BdrvRequestPadding *pad)
{
if (pad->buf) {
qemu_vfree(pad->buf);
qemu_iovec_destroy(&pad->local_qiov);
}
}
/*
* bdrv_pad_request
*
* Exchange request parameters with padded request if needed. Don't include RMW
* read of padding, bdrv_padding_rmw_read() should be called separately if
* needed.
*
* All parameters except @bs are in-out: they represent original request at
* function call and padded (if padding needed) at function finish.
*
* Function always succeeds.
*/
static bool bdrv_pad_request(BlockDriverState *bs, QEMUIOVector **qiov,
int64_t *offset, unsigned int *bytes,
BdrvRequestPadding *pad)
{
if (!bdrv_init_padding(bs, *offset, *bytes, pad)) {
return false;
}
qemu_iovec_init_extended(&pad->local_qiov, pad->buf, pad->head,
*qiov, 0, *bytes,
pad->buf + pad->buf_len - pad->tail, pad->tail);
*bytes += pad->head + pad->tail;
*offset -= pad->head;
*qiov = &pad->local_qiov;
return true;
}
int coroutine_fn bdrv_co_preadv(BdrvChild *child,
int64_t offset, unsigned int bytes, QEMUIOVector *qiov,
BdrvRequestFlags flags)
{
BlockDriverState *bs = child->bs;
BlockDriver *drv = bs->drv;
BdrvTrackedRequest req;
uint64_t align = bs->bl.request_alignment;
uint8_t *head_buf = NULL;
uint8_t *tail_buf = NULL;
QEMUIOVector local_qiov;
bool use_local_qiov = false;
BdrvRequestPadding pad;
int ret;
trace_bdrv_co_preadv(child->bs, offset, bytes, flags);
if (!drv) {
return -ENOMEDIUM;
}
trace_bdrv_co_preadv(bs, offset, bytes, flags);
ret = bdrv_check_byte_request(bs, offset, bytes);
if (ret < 0) {
@ -1443,43 +1610,16 @@ int coroutine_fn bdrv_co_preadv(BdrvChild *child,
flags |= BDRV_REQ_COPY_ON_READ;
}
/* Align read if necessary by padding qiov */
if (offset & (align - 1)) {
head_buf = qemu_blockalign(bs, align);
qemu_iovec_init(&local_qiov, qiov->niov + 2);
qemu_iovec_add(&local_qiov, head_buf, offset & (align - 1));
qemu_iovec_concat(&local_qiov, qiov, 0, qiov->size);
use_local_qiov = true;
bytes += offset & (align - 1);
offset = offset & ~(align - 1);
}
if ((offset + bytes) & (align - 1)) {
if (!use_local_qiov) {
qemu_iovec_init(&local_qiov, qiov->niov + 1);
qemu_iovec_concat(&local_qiov, qiov, 0, qiov->size);
use_local_qiov = true;
}
tail_buf = qemu_blockalign(bs, align);
qemu_iovec_add(&local_qiov, tail_buf,
align - ((offset + bytes) & (align - 1)));
bytes = ROUND_UP(bytes, align);
}
bdrv_pad_request(bs, &qiov, &offset, &bytes, &pad);
tracked_request_begin(&req, bs, offset, bytes, BDRV_TRACKED_READ);
ret = bdrv_aligned_preadv(child, &req, offset, bytes, align,
use_local_qiov ? &local_qiov : qiov,
flags);
ret = bdrv_aligned_preadv(child, &req, offset, bytes,
bs->bl.request_alignment,
qiov, flags);
tracked_request_end(&req);
bdrv_dec_in_flight(bs);
if (use_local_qiov) {
qemu_iovec_destroy(&local_qiov);
qemu_vfree(head_buf);
qemu_vfree(tail_buf);
}
bdrv_padding_destroy(&pad);
return ret;
}
@ -1614,10 +1754,10 @@ bdrv_co_write_req_prepare(BdrvChild *child, int64_t offset, uint64_t bytes,
assert(!(flags & ~BDRV_REQ_MASK));
if (flags & BDRV_REQ_SERIALISING) {
mark_request_serialising(req, bdrv_get_cluster_size(bs));
bdrv_mark_request_serialising(req, bdrv_get_cluster_size(bs));
}
waited = wait_serialising_requests(req);
waited = bdrv_wait_serialising_requests(req);
assert(!waited || !req->serialising ||
is_request_serialising_and_aligned(req));
@ -1715,7 +1855,7 @@ static int coroutine_fn bdrv_aligned_pwritev(BdrvChild *child,
if (!ret && bs->detect_zeroes != BLOCKDEV_DETECT_ZEROES_OPTIONS_OFF &&
!(flags & BDRV_REQ_ZERO_WRITE) && drv->bdrv_co_pwrite_zeroes &&
qemu_iovec_is_zero(qiov)) {
qemu_iovec_is_zero(qiov, 0, qiov->size)) {
flags |= BDRV_REQ_ZERO_WRITE;
if (bs->detect_zeroes == BLOCKDEV_DETECT_ZEROES_OPTIONS_UNMAP) {
flags |= BDRV_REQ_MAY_UNMAP;
@ -1775,44 +1915,34 @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,
BdrvTrackedRequest *req)
{
BlockDriverState *bs = child->bs;
uint8_t *buf = NULL;
QEMUIOVector local_qiov;
uint64_t align = bs->bl.request_alignment;
unsigned int head_padding_bytes, tail_padding_bytes;
int ret = 0;
bool padding;
BdrvRequestPadding pad;
head_padding_bytes = offset & (align - 1);
tail_padding_bytes = (align - (offset + bytes)) & (align - 1);
padding = bdrv_init_padding(bs, offset, bytes, &pad);
if (padding) {
bdrv_mark_request_serialising(req, align);
bdrv_wait_serialising_requests(req);
bdrv_padding_rmw_read(child, req, &pad, true);
assert(flags & BDRV_REQ_ZERO_WRITE);
if (head_padding_bytes || tail_padding_bytes) {
buf = qemu_blockalign(bs, align);
qemu_iovec_init_buf(&local_qiov, buf, align);
}
if (head_padding_bytes) {
uint64_t zero_bytes = MIN(bytes, align - head_padding_bytes);
if (pad.head || pad.merge_reads) {
int64_t aligned_offset = offset & ~(align - 1);
int64_t write_bytes = pad.merge_reads ? pad.buf_len : align;
/* RMW the unaligned part before head. */
mark_request_serialising(req, align);
wait_serialising_requests(req);
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_HEAD);
ret = bdrv_aligned_preadv(child, req, offset & ~(align - 1), align,
align, &local_qiov, 0);
if (ret < 0) {
goto fail;
qemu_iovec_init_buf(&local_qiov, pad.buf, write_bytes);
ret = bdrv_aligned_pwritev(child, req, aligned_offset, write_bytes,
align, &local_qiov,
flags & ~BDRV_REQ_ZERO_WRITE);
if (ret < 0 || pad.merge_reads) {
/* Error or all work is done */
goto out;
}
offset += write_bytes - pad.head;
bytes -= write_bytes - pad.head;
}
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_HEAD);
memset(buf + head_padding_bytes, 0, zero_bytes);
ret = bdrv_aligned_pwritev(child, req, offset & ~(align - 1), align,
align, &local_qiov,
flags & ~BDRV_REQ_ZERO_WRITE);
if (ret < 0) {
goto fail;
}
offset += zero_bytes;
bytes -= zero_bytes;
}
assert(!bytes || (offset & (align - 1)) == 0);
@ -1822,7 +1952,7 @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,
ret = bdrv_aligned_pwritev(child, req, offset, aligned_bytes, align,
NULL, flags);
if (ret < 0) {
goto fail;
goto out;
}
bytes -= aligned_bytes;
offset += aligned_bytes;
@ -1830,26 +1960,17 @@ static int coroutine_fn bdrv_co_do_zero_pwritev(BdrvChild *child,
assert(!bytes || (offset & (align - 1)) == 0);
if (bytes) {
assert(align == tail_padding_bytes + bytes);
/* RMW the unaligned part after tail. */
mark_request_serialising(req, align);
wait_serialising_requests(req);
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_TAIL);
ret = bdrv_aligned_preadv(child, req, offset, align,
align, &local_qiov, 0);
if (ret < 0) {
goto fail;
}
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_TAIL);
assert(align == pad.tail + bytes);
memset(buf, 0, bytes);
qemu_iovec_init_buf(&local_qiov, pad.tail_buf, align);
ret = bdrv_aligned_pwritev(child, req, offset, align, align,
&local_qiov, flags & ~BDRV_REQ_ZERO_WRITE);
}
fail:
qemu_vfree(buf);
return ret;
out:
bdrv_padding_destroy(&pad);
return ret;
}
/*
@ -1862,10 +1983,7 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
BlockDriverState *bs = child->bs;
BdrvTrackedRequest req;
uint64_t align = bs->bl.request_alignment;
uint8_t *head_buf = NULL;
uint8_t *tail_buf = NULL;
QEMUIOVector local_qiov;
bool use_local_qiov = false;
BdrvRequestPadding pad;
int ret;
trace_bdrv_co_pwritev(child->bs, offset, bytes, flags);
@ -1892,86 +2010,21 @@ int coroutine_fn bdrv_co_pwritev(BdrvChild *child,
goto out;
}
if (offset & (align - 1)) {
QEMUIOVector head_qiov;
mark_request_serialising(&req, align);
wait_serialising_requests(&req);
head_buf = qemu_blockalign(bs, align);
qemu_iovec_init_buf(&head_qiov, head_buf, align);
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_HEAD);
ret = bdrv_aligned_preadv(child, &req, offset & ~(align - 1), align,
align, &head_qiov, 0);
if (ret < 0) {
goto fail;
}
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_HEAD);
qemu_iovec_init(&local_qiov, qiov->niov + 2);
qemu_iovec_add(&local_qiov, head_buf, offset & (align - 1));
qemu_iovec_concat(&local_qiov, qiov, 0, qiov->size);
use_local_qiov = true;
bytes += offset & (align - 1);
offset = offset & ~(align - 1);
/* We have read the tail already if the request is smaller
* than one aligned block.
*/
if (bytes < align) {
qemu_iovec_add(&local_qiov, head_buf + bytes, align - bytes);
bytes = align;
}
}
if ((offset + bytes) & (align - 1)) {
QEMUIOVector tail_qiov;
size_t tail_bytes;
bool waited;
mark_request_serialising(&req, align);
waited = wait_serialising_requests(&req);
assert(!waited || !use_local_qiov);
tail_buf = qemu_blockalign(bs, align);
qemu_iovec_init_buf(&tail_qiov, tail_buf, align);
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_TAIL);
ret = bdrv_aligned_preadv(child, &req, (offset + bytes) & ~(align - 1),
align, align, &tail_qiov, 0);
if (ret < 0) {
goto fail;
}
bdrv_debug_event(bs, BLKDBG_PWRITEV_RMW_AFTER_TAIL);
if (!use_local_qiov) {
qemu_iovec_init(&local_qiov, qiov->niov + 1);
qemu_iovec_concat(&local_qiov, qiov, 0, qiov->size);
use_local_qiov = true;
}
tail_bytes = (offset + bytes) & (align - 1);
qemu_iovec_add(&local_qiov, tail_buf + tail_bytes, align - tail_bytes);
bytes = ROUND_UP(bytes, align);
if (bdrv_pad_request(bs, &qiov, &offset, &bytes, &pad)) {
bdrv_mark_request_serialising(&req, align);
bdrv_wait_serialising_requests(&req);
bdrv_padding_rmw_read(child, &req, &pad, false);
}
ret = bdrv_aligned_pwritev(child, &req, offset, bytes, align,
use_local_qiov ? &local_qiov : qiov,
flags);
qiov, flags);
fail:
bdrv_padding_destroy(&pad);
if (use_local_qiov) {
qemu_iovec_destroy(&local_qiov);
}
qemu_vfree(head_buf);
qemu_vfree(tail_buf);
out:
tracked_request_end(&req);
bdrv_dec_in_flight(bs);
return ret;
}
@ -3043,7 +3096,7 @@ static int coroutine_fn bdrv_co_copy_range_internal(
/* BDRV_REQ_SERIALISING is only for write operation */
assert(!(read_flags & BDRV_REQ_SERIALISING));
if (!(read_flags & BDRV_REQ_NO_SERIALISING)) {
wait_serialising_requests(&req);
bdrv_wait_serialising_requests(&req);
}
ret = src->bs->drv->bdrv_co_copy_range_from(src->bs,
@ -3170,7 +3223,7 @@ int coroutine_fn bdrv_co_truncate(BdrvChild *child, int64_t offset,
* new area, we need to make sure that no write requests are made to it
* concurrently or they might be overwritten by preallocation. */
if (new_bytes) {
mark_request_serialising(&req, 1);
bdrv_mark_request_serialising(&req, 1);
}
if (bs->read_only) {
error_setg(errp, "Image is read-only");

View File

@ -618,11 +618,11 @@ static int mirror_exit_common(Job *job)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
BlockJob *bjob = &s->common;
MirrorBDSOpaque *bs_opaque = s->mirror_top_bs->opaque;
MirrorBDSOpaque *bs_opaque;
AioContext *replace_aio_context = NULL;
BlockDriverState *src = s->mirror_top_bs->backing->bs;
BlockDriverState *target_bs = blk_bs(s->target);
BlockDriverState *mirror_top_bs = s->mirror_top_bs;
BlockDriverState *src;
BlockDriverState *target_bs;
BlockDriverState *mirror_top_bs;
Error *local_err = NULL;
bool abort = job->ret < 0;
int ret = 0;
@ -632,6 +632,11 @@ static int mirror_exit_common(Job *job)
}
s->prepared = true;
mirror_top_bs = s->mirror_top_bs;
bs_opaque = mirror_top_bs->opaque;
src = mirror_top_bs->backing->bs;
target_bs = blk_bs(s->target);
if (bdrv_chain_contains(src, target_bs)) {
bdrv_unfreeze_backing_chain(mirror_top_bs, target_bs);
}
@ -656,7 +661,10 @@ static int mirror_exit_common(Job *job)
s->target = NULL;
/* We don't access the source any more. Dropping any WRITE/RESIZE is
* required before it could become a backing file of target_bs. */
* required before it could become a backing file of target_bs. Not having
* these permissions any more means that we can't allow any new requests on
* mirror_top_bs from now on, so keep it drained. */
bdrv_drained_begin(mirror_top_bs);
bs_opaque->stop = true;
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort);
@ -724,6 +732,7 @@ static int mirror_exit_common(Job *job)
bs_opaque->job = NULL;
bdrv_drained_end(src);
bdrv_drained_end(mirror_top_bs);
s->in_drain = false;
bdrv_unref(mirror_top_bs);
bdrv_unref(src);

View File

@ -390,12 +390,14 @@ static void nfs_attach_aio_context(BlockDriverState *bs,
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
qemu_mutex_lock(&client->mutex);
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL, NULL);
qemu_mutex_unlock(&client->mutex);
if (client->fh) {
nfs_close(client->context, client->fh);
client->fh = NULL;
}
aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
false, NULL, NULL, NULL, NULL);
nfs_destroy_context(client->context);
client->context = NULL;
}

View File

@ -142,6 +142,13 @@ static int check_table_entry(uint64_t entry, int cluster_size)
return 0;
}
static int64_t get_bitmap_bytes_needed(int64_t len, uint32_t granularity)
{
int64_t num_bits = DIV_ROUND_UP(len, granularity);
return DIV_ROUND_UP(num_bits, 8);
}
static int check_constraints_on_bitmap(BlockDriverState *bs,
const char *name,
uint32_t granularity,
@ -150,6 +157,7 @@ static int check_constraints_on_bitmap(BlockDriverState *bs,
BDRVQcow2State *s = bs->opaque;
int granularity_bits = ctz32(granularity);
int64_t len = bdrv_getlength(bs);
int64_t bitmap_bytes;
assert(granularity > 0);
assert((granularity & (granularity - 1)) == 0);
@ -171,9 +179,9 @@ static int check_constraints_on_bitmap(BlockDriverState *bs,
return -EINVAL;
}
if ((len > (uint64_t)BME_MAX_PHYS_SIZE << granularity_bits) ||
(len > (uint64_t)BME_MAX_TABLE_SIZE * s->cluster_size <<
granularity_bits))
bitmap_bytes = get_bitmap_bytes_needed(len, granularity);
if ((bitmap_bytes > (uint64_t)BME_MAX_PHYS_SIZE) ||
(bitmap_bytes > (uint64_t)BME_MAX_TABLE_SIZE * s->cluster_size))
{
error_setg(errp, "Too much space will be occupied by the bitmap. "
"Use larger granularity");

View File

@ -473,9 +473,10 @@ static bool coroutine_fn do_perform_cow_encrypt(BlockDriverState *bs,
assert((offset_in_cluster & ~BDRV_SECTOR_MASK) == 0);
assert((bytes & ~BDRV_SECTOR_MASK) == 0);
assert(s->crypto);
if (qcow2_co_encrypt(bs, cluster_offset,
src_cluster_offset + offset_in_cluster,
buffer, bytes) < 0) {
if (qcow2_co_encrypt(bs,
start_of_cluster(s, cluster_offset + offset_in_cluster),
src_cluster_offset + offset_in_cluster,
buffer, bytes) < 0) {
return false;
}
}
@ -1340,6 +1341,9 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
nb_clusters = MIN(nb_clusters, s->l2_slice_size - l2_index);
assert(nb_clusters <= INT_MAX);
/* Limit total allocation byte count to INT_MAX */
nb_clusters = MIN(nb_clusters, INT_MAX >> s->cluster_bits);
/* Find L2 entry for the first involved cluster */
ret = get_cluster_table(bs, guest_offset, &l2_slice, &l2_index);
if (ret < 0) {
@ -1428,7 +1432,7 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
* request actually writes to (excluding COW at the end)
*/
uint64_t requested_bytes = *bytes + offset_into_cluster(s, guest_offset);
int avail_bytes = MIN(INT_MAX, nb_clusters << s->cluster_bits);
int avail_bytes = nb_clusters << s->cluster_bits;
int nb_bytes = MIN(requested_bytes, avail_bytes);
QCowL2Meta *old_m = *m;

View File

@ -3455,6 +3455,8 @@ int qcow2_detect_metadata_preallocation(BlockDriverState *bs)
int64_t i, end_cluster, cluster_count = 0, threshold;
int64_t file_length, real_allocation, real_clusters;
qemu_co_mutex_assert_locked(&s->lock);
file_length = bdrv_getlength(bs->file->bs);
if (file_length < 0) {
return file_length;

View File

@ -826,7 +826,11 @@ static void read_cache_sizes(BlockDriverState *bs, QemuOpts *opts,
bool l2_cache_entry_size_set;
int min_refcount_cache = MIN_REFCOUNT_CACHE_SIZE * s->cluster_size;
uint64_t virtual_disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);
uint64_t max_l2_entries = DIV_ROUND_UP(virtual_disk_size, s->cluster_size);
/* An L2 table is always one cluster in size so the max cache size
* should be a multiple of the cluster size. */
uint64_t max_l2_cache = ROUND_UP(max_l2_entries * sizeof(uint64_t),
s->cluster_size);
combined_cache_size_set = qemu_opt_get(opts, QCOW2_OPT_CACHE_SIZE);
l2_cache_size_set = qemu_opt_get(opts, QCOW2_OPT_L2_CACHE_SIZE);
@ -1895,6 +1899,8 @@ static int coroutine_fn qcow2_co_block_status(BlockDriverState *bs,
unsigned int bytes;
int status = 0;
qemu_co_mutex_lock(&s->lock);
if (!s->metadata_preallocation_checked) {
ret = qcow2_detect_metadata_preallocation(bs);
s->metadata_preallocation = (ret == 1);
@ -1902,7 +1908,6 @@ static int coroutine_fn qcow2_co_block_status(BlockDriverState *bs,
}
bytes = MIN(INT_MAX, count);
qemu_co_mutex_lock(&s->lock);
ret = qcow2_get_cluster_offset(bs, offset, &bytes, &cluster_offset);
qemu_co_mutex_unlock(&s->lock);
if (ret < 0) {

View File

@ -77,7 +77,7 @@
/* Defined in the qcow2 spec (compressed cluster descriptor) */
#define QCOW2_COMPRESSED_SECTOR_SIZE 512U
#define QCOW2_COMPRESSED_SECTOR_MASK (~(QCOW2_COMPRESSED_SECTOR_SIZE - 1))
#define QCOW2_COMPRESSED_SECTOR_MASK (~(QCOW2_COMPRESSED_SECTOR_SIZE - 1ULL))
/* Must be at least 2 to cover COW */
#define MIN_L2_CACHE_SIZE 2 /* cache entries */

View File

@ -31,6 +31,7 @@
#include "qapi/qmp/qerror.h"
#include "qapi/qmp/qstring.h"
#include "qemu/option.h"
#include "sysemu/block-backend.h"
QemuOptsList internal_snapshot_opts = {
.name = "snapshot",
@ -384,6 +385,16 @@ int bdrv_snapshot_load_tmp_by_id_or_name(BlockDriverState *bs,
return ret;
}
static bool bdrv_all_snapshots_includes_bs(BlockDriverState *bs)
{
if (!bdrv_is_inserted(bs) || bdrv_is_read_only(bs)) {
return false;
}
/* Include all nodes that are either in use by a BlockBackend, or that
* aren't attached to any parent, but are owned by the monitor. */

return bdrv_has_blk(bs) || QLIST_EMPTY(&bs->parents);
}
/* Group operations. All block drivers are involved.
* These functions will properly handle dataplane (take aio_context_acquire
@ -399,7 +410,7 @@ bool bdrv_all_can_snapshot(BlockDriverState **first_bad_bs)
AioContext *ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
if (bdrv_is_inserted(bs) && !bdrv_is_read_only(bs)) {
if (bdrv_all_snapshots_includes_bs(bs)) {
ok = bdrv_can_snapshot(bs);
}
aio_context_release(ctx);
@ -426,8 +437,9 @@ int bdrv_all_delete_snapshot(const char *name, BlockDriverState **first_bad_bs,
AioContext *ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
if (bdrv_can_snapshot(bs) &&
bdrv_snapshot_find(bs, snapshot, name) >= 0) {
if (bdrv_all_snapshots_includes_bs(bs) &&
bdrv_snapshot_find(bs, snapshot, name) >= 0)
{
ret = bdrv_snapshot_delete(bs, snapshot->id_str,
snapshot->name, err);
}
@ -455,7 +467,7 @@ int bdrv_all_goto_snapshot(const char *name, BlockDriverState **first_bad_bs,
AioContext *ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
if (bdrv_can_snapshot(bs)) {
if (bdrv_all_snapshots_includes_bs(bs)) {
ret = bdrv_snapshot_goto(bs, name, errp);
}
aio_context_release(ctx);
@ -481,7 +493,7 @@ int bdrv_all_find_snapshot(const char *name, BlockDriverState **first_bad_bs)
AioContext *ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
if (bdrv_can_snapshot(bs)) {
if (bdrv_all_snapshots_includes_bs(bs)) {
err = bdrv_snapshot_find(bs, &sn, name);
}
aio_context_release(ctx);
@ -512,7 +524,7 @@ int bdrv_all_create_snapshot(QEMUSnapshotInfo *sn,
if (bs == vm_state_bs) {
sn->vm_state_size = vm_state_size;
err = bdrv_snapshot_create(bs, sn);
} else if (bdrv_can_snapshot(bs)) {
} else if (bdrv_all_snapshots_includes_bs(bs)) {
sn->vm_state_size = 0;
err = bdrv_snapshot_create(bs, sn);
}
@ -538,7 +550,7 @@ BlockDriverState *bdrv_all_find_vmstate_bs(void)
bool found;
aio_context_acquire(ctx);
found = bdrv_can_snapshot(bs);
found = bdrv_all_snapshots_includes_bs(bs) && bdrv_can_snapshot(bs);
aio_context_release(ctx);
if (found) {

View File

@ -885,6 +885,7 @@ static int create_dynamic_disk(BlockBackend *blk, uint8_t *buf,
goto fail;
}
ret = 0;
fail:
return ret;
}
@ -908,7 +909,7 @@ static int create_fixed_disk(BlockBackend *blk, uint8_t *buf,
return ret;
}
return ret;
return 0;
}
static int calculate_rounded_image_size(BlockdevCreateOptionsVpc *vpc_opts,

View File

@ -186,14 +186,23 @@ static const BdrvChildRole child_job = {
void block_job_remove_all_bdrv(BlockJob *job)
{
GSList *l;
for (l = job->nodes; l; l = l->next) {
/*
* bdrv_root_unref_child() may reach child_job_[can_]set_aio_ctx(),
* which will also traverse job->nodes, so consume the list one by
* one to make sure that such a concurrent access does not attempt
* to process an already freed BdrvChild.
*/
while (job->nodes) {
GSList *l = job->nodes;
BdrvChild *c = l->data;
job->nodes = l->next;
bdrv_op_unblock_all(c->bs, job->blocker);
bdrv_root_unref_child(c);
g_slist_free_1(l);
}
g_slist_free(job->nodes);
job->nodes = NULL;
}
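The loop above consumes job->nodes one element at a time so that a re-entrant traversal never sees a freed BdrvChild. A minimal standalone sketch of that pattern using GLib's GSList (the unref() callback and node names are illustrative, not QEMU code):

#include <glib.h>
#include <stdio.h>

static GSList *nodes;

/* Stands in for bdrv_root_unref_child(): it may walk 'nodes' again, which is
 * safe only because the caller already unlinked the element being released. */
static void unref(gpointer data)
{
    printf("unref %s (remaining: %u)\n",
           (const char *)data, g_slist_length(nodes));
}

int main(void)
{
    nodes = g_slist_append(nodes, "child-a");
    nodes = g_slist_append(nodes, "child-b");

    while (nodes) {
        GSList *l = nodes;
        gpointer c = l->data;

        nodes = l->next;   /* unlink before the callback can re-traverse */
        unref(c);
        g_slist_free_1(l);
    }
    return 0;
}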
bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs)

View File

@ -1097,7 +1097,8 @@ bool vu_set_queue_host_notifier(VuDev *dev, VuVirtq *vq, int fd,
vmsg.fd_num = fd_num;
if ((dev->protocol_features & VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) == 0) {
if (!has_feature(dev->protocol_features,
VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD)) {
return false;
}

View File

@ -90,6 +90,7 @@ static void reschedule_dma(void *opaque)
{
DMAAIOCB *dbs = (DMAAIOCB *)opaque;
assert(!dbs->acb && dbs->bh);
qemu_bh_delete(dbs->bh);
dbs->bh = NULL;
dma_blk_cb(dbs, 0);
@ -111,15 +112,12 @@ static void dma_complete(DMAAIOCB *dbs, int ret)
{
trace_dma_complete(dbs, ret, dbs->common.cb);
assert(!dbs->acb && !dbs->bh);
dma_blk_unmap(dbs);
if (dbs->common.cb) {
dbs->common.cb(dbs->common.opaque, ret);
}
qemu_iovec_destroy(&dbs->iov);
if (dbs->bh) {
qemu_bh_delete(dbs->bh);
dbs->bh = NULL;
}
qemu_aio_unref(dbs);
}
@ -179,14 +177,21 @@ static void dma_aio_cancel(BlockAIOCB *acb)
trace_dma_aio_cancel(dbs);
assert(!(dbs->acb && dbs->bh));
if (dbs->acb) {
/* This will invoke dma_blk_cb. */
blk_aio_cancel_async(dbs->acb);
return;
}
if (dbs->bh) {
cpu_unregister_map_client(dbs->bh);
qemu_bh_delete(dbs->bh);
dbs->bh = NULL;
}
if (dbs->common.cb) {
dbs->common.cb(dbs->common.opaque, -ECANCELED);
}
}
static AioContext *dma_get_aio_context(BlockAIOCB *acb)

View File

@ -754,6 +754,8 @@ static void do_cpu_reset(void *opaque)
(cs != first_cpu || !info->secure_board_setup)) {
/* Linux expects non-secure state */
env->cp15.scr_el3 |= SCR_NS;
/* Set NSACR.{CP11,CP10} so NS can access the FPU */
env->cp15.nsacr |= 3 << 10;
}
}

View File

@ -297,6 +297,9 @@ void virtio_blk_data_plane_stop(VirtIODevice *vdev)
virtio_bus_cleanup_host_notifier(VIRTIO_BUS(qbus), i);
}
qemu_bh_cancel(s->bh);
notify_guest_bh(s); /* final chance to notify guest */
/* Clean up guest notifier (irq) */
k->set_guest_notifiers(qbus->parent, nvqs, false);

View File

@ -1242,7 +1242,7 @@ int rom_copy(uint8_t *dest, hwaddr addr, size_t size)
if (rom->addr + rom->romsize < addr) {
continue;
}
if (rom->addr > end) {
if (rom->addr > end || rom->addr < addr) {
break;
}

View File

@ -2403,6 +2403,14 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
int max_socket = (ms->smp.max_cpus - 1) /
smp_threads / smp_cores / pcms->smp_dies;
/*
* die-id was optional in QEMU 4.0 and older, so keep it optional
* if there's only one die per socket.
*/
if (cpu->die_id < 0 && pcms->smp_dies == 1) {
cpu->die_id = 0;
}
if (cpu->socket_id < 0) {
error_setg(errp, "CPU socket-id is not set");
return;
@ -2879,8 +2887,10 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
ms->smp.threads, &topo);
ms->possible_cpus->cpus[i].props.has_socket_id = true;
ms->possible_cpus->cpus[i].props.socket_id = topo.pkg_id;
ms->possible_cpus->cpus[i].props.has_die_id = true;
ms->possible_cpus->cpus[i].props.die_id = topo.die_id;
if (pcms->smp_dies > 1) {
ms->possible_cpus->cpus[i].props.has_die_id = true;
ms->possible_cpus->cpus[i].props.die_id = topo.die_id;
}
ms->possible_cpus->cpus[i].props.has_core_id = true;
ms->possible_cpus->cpus[i].props.core_id = topo.core_id;
ms->possible_cpus->cpus[i].props.has_thread_id = true;

View File

@ -1023,9 +1023,6 @@ static void ncq_cb(void *opaque, int ret)
IDEState *ide_state = &ncq_tfs->drive->port.ifs[0];
ncq_tfs->aiocb = NULL;
if (ret == -ECANCELED) {
return;
}
if (ret < 0) {
bool is_read = ncq_tfs->cmd == READ_FPDMA_QUEUED;

View File

@ -722,9 +722,6 @@ static void ide_sector_read_cb(void *opaque, int ret)
s->pio_aiocb = NULL;
s->status &= ~BUSY_STAT;
if (ret == -ECANCELED) {
return;
}
if (ret != 0) {
if (ide_handle_rw_error(s, -ret, IDE_RETRY_PIO |
IDE_RETRY_READ)) {
@ -840,10 +837,6 @@ static void ide_dma_cb(void *opaque, int ret)
uint64_t offset;
bool stay_active = false;
if (ret == -ECANCELED) {
return;
}
if (ret == -EINVAL) {
ide_dma_error(s);
return;
@ -975,10 +968,6 @@ static void ide_sector_write_cb(void *opaque, int ret)
IDEState *s = opaque;
int n;
if (ret == -ECANCELED) {
return;
}
s->pio_aiocb = NULL;
s->status &= ~BUSY_STAT;
@ -1058,9 +1047,6 @@ static void ide_flush_cb(void *opaque, int ret)
s->pio_aiocb = NULL;
if (ret == -ECANCELED) {
return;
}
if (ret < 0) {
/* XXX: What sector number to set here? */
if (ide_handle_rw_error(s, -ret, IDE_RETRY_FLUSH)) {

View File

@ -2330,9 +2330,13 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
n->curr_guest_offloads = virtio_net_supported_guest_offloads(n);
}
if (peer_has_vnet_hdr(n)) {
virtio_net_apply_guest_offloads(n);
}
/*
* curr_guest_offloads will later be overwritten by the
* virtio_set_features_nocheck call done from virtio_load.
* Make sure it is preserved here and restored accordingly
* in the virtio_net_post_load_virtio callback.
*/
n->saved_guest_offloads = n->curr_guest_offloads;
virtio_net_set_queues(n);
@ -2367,6 +2371,22 @@ static int virtio_net_post_load_device(void *opaque, int version_id)
return 0;
}
static int virtio_net_post_load_virtio(VirtIODevice *vdev)
{
VirtIONet *n = VIRTIO_NET(vdev);
/*
* The state we actually need is now in saved_guest_offloads;
* see virtio_net_post_load_device for details.
* Restore it and apply the desired offloads.
*/
n->curr_guest_offloads = n->saved_guest_offloads;
if (peer_has_vnet_hdr(n)) {
virtio_net_apply_guest_offloads(n);
}
return 0;
}
/* tx_waiting field of a VirtIONetQueue */
static const VMStateDescription vmstate_virtio_net_queue_tx_waiting = {
.name = "virtio-net-queue-tx_waiting",
@ -2909,6 +2929,7 @@ static void virtio_net_class_init(ObjectClass *klass, void *data)
vdc->guest_notifier_mask = virtio_net_guest_notifier_mask;
vdc->guest_notifier_pending = virtio_net_guest_notifier_pending;
vdc->legacy_features |= (0x1 << VIRTIO_NET_F_GSO);
vdc->post_load = virtio_net_post_load_virtio;
vdc->vmsd = &vmstate_virtio_net_device;
}

View File

@ -694,10 +694,15 @@ static const MemoryRegionOps s390_msi_ctrl_ops = {
void s390_pci_iommu_enable(S390PCIIOMMU *iommu)
{
/*
* The iommu region is initialized against a 0-mapped address space,
* so the smallest IOMMU region we can define runs from 0 to the end
* of the PCI address space.
*/
char *name = g_strdup_printf("iommu-s390-%04x", iommu->pbdev->uid);
memory_region_init_iommu(&iommu->iommu_mr, sizeof(iommu->iommu_mr),
TYPE_S390_IOMMU_MEMORY_REGION, OBJECT(&iommu->mr),
name, iommu->pal - iommu->pba + 1);
name, iommu->pal + 1);
iommu->enabled = true;
memory_region_add_subregion(&iommu->mr, 0, MEMORY_REGION(&iommu->iommu_mr));
g_free(name);
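The size change above follows from the new comment: the IOMMU region is mapped at guest address 0, so to cover DMA addresses up to pal it must be pal + 1 bytes long, not pal - pba + 1. A worked example with made-up aperture values:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative values only: aperture base (pba) and last address (pal). */
    uint64_t pba = 0x0000000040000000ULL;
    uint64_t pal = 0x000000007fffffffULL;

    uint64_t old_size = pal - pba + 1;  /* covers only [pba, pal]            */
    uint64_t new_size = pal + 1;        /* covers [0, pal], as required when
                                           the region is mapped at offset 0 */

    printf("old: 0x%llx bytes, new: 0x%llx bytes\n",
           (unsigned long long)old_size, (unsigned long long)new_size);
    return 0;
}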

View File

@ -185,6 +185,9 @@ static const char *names[] = {
/* Flag set if this is a tagged command. */
#define LSI_TAG_VALID (1 << 16)
/* Maximum instructions to process. */
#define LSI_MAX_INSN 10000
typedef struct lsi_request {
SCSIRequest *req;
uint32_t tag;
@ -1132,7 +1135,21 @@ static void lsi_execute_script(LSIState *s)
s->istat1 |= LSI_ISTAT1_SRUN;
again:
insn_processed++;
if (++insn_processed > LSI_MAX_INSN) {
/* Some Windows drivers make the device spin waiting for a memory
location to change. If we have executed a lot of instructions, assume
this is the case and force an unexpected device disconnect.
This is apparently sufficient to beat the drivers into submission.
*/
if (!(s->sien0 & LSI_SIST0_UDC)) {
qemu_log_mask(LOG_GUEST_ERROR,
"lsi_scsi: inf. loop with UDC masked");
}
lsi_script_scsi_interrupt(s, LSI_SIST0_UDC, 0);
lsi_disconnect(s);
trace_lsi_execute_script_stop();
return;
}
insn = read_dword(s, s->dsp);
if (!insn) {
/* If we receive an empty opcode increment the DSP by 4 bytes
@ -1569,19 +1586,7 @@ again:
}
}
}
if (insn_processed > 10000 && s->waiting == LSI_NOWAIT) {
/* Some windows drivers make the device spin waiting for a memory
location to change. If we have been executed a lot of code then
assume this is the case and force an unexpected device disconnect.
This is apparently sufficient to beat the drivers into submission.
*/
if (!(s->sien0 & LSI_SIST0_UDC)) {
qemu_log_mask(LOG_GUEST_ERROR,
"lsi_scsi: inf. loop with UDC masked");
}
lsi_script_scsi_interrupt(s, LSI_SIST0_UDC, 0);
lsi_disconnect(s);
} else if (s->istat1 & LSI_ISTAT1_SRUN && s->waiting == LSI_NOWAIT) {
if (s->istat1 & LSI_ISTAT1_SRUN && s->waiting == LSI_NOWAIT) {
if (s->dcntl & LSI_DCNTL_SSM) {
lsi_script_dma_interrupt(s, LSI_DSTAT_SSI);
} else {
@ -1969,6 +1974,10 @@ static void lsi_reg_writeb(LSIState *s, int offset, uint8_t val)
case 0x2f: /* DSP[24:31] */
s->dsp &= 0x00ffffff;
s->dsp |= val << 24;
/*
* FIXME: if s->waiting != LSI_NOWAIT, this will only execute one
* instruction. Is this correct?
*/
if ((s->dmode & LSI_DMODE_MAN) == 0
&& (s->istat1 & LSI_ISTAT1_SRUN) == 0)
lsi_execute_script(s);
@ -1987,6 +1996,10 @@ static void lsi_reg_writeb(LSIState *s, int offset, uint8_t val)
break;
case 0x3b: /* DCNTL */
s->dcntl = val & ~(LSI_DCNTL_PFF | LSI_DCNTL_STD);
/*
* FIXME: if s->waiting != LSI_NOWAIT, this will only execute one
* instruction. Is this correct?
*/
if ((val & LSI_DCNTL_STD) && (s->istat1 & LSI_ISTAT1_SRUN) == 0)
lsi_execute_script(s);
break;
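Referring back to the LSI_MAX_INSN change at the top of this file's hunks: the script interpreter now bails out before fetching the next instruction once the budget is exhausted. A stripped-down sketch of that bounded-loop pattern (constants and control flow simplified; this is not the device model):

#include <stdio.h>

#define MAX_INSN 10000   /* same idea as LSI_MAX_INSN: bound script execution */

int main(void)
{
    int insn_processed = 0;

    for (;;) {
        if (++insn_processed > MAX_INSN) {
            /* Analogue of the forced "unexpected disconnect": stop instead of
             * spinning forever on a guest-controlled busy loop. */
            printf("stopping after %d instructions\n", insn_processed - 1);
            break;
        }
        /* ... fetch and execute one script instruction here ... */
    }
    return 0;
}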

View File

@ -451,8 +451,13 @@ static void vhost_commit(MemoryListener *listener)
changed = true;
} else {
/* Same size, let's check the contents */
changed = n_old_sections && memcmp(dev->mem_sections, old_sections,
n_old_sections * sizeof(old_sections[0])) != 0;
for (int i = 0; i < n_old_sections; i++) {
if (!MemoryRegionSection_eq(&old_sections[i],
&dev->mem_sections[i])) {
changed = true;
break;
}
}
}
trace_vhost_commit(dev->started, changed);
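The memcmp() over the section array is replaced here with a field-by-field helper (MemoryRegionSection_eq, added in the memory.h hunk further below). The commit message is not shown in this view, but one plausible reason a byte-wise compare of such structs is fragile is struct padding; a minimal standalone illustration with an invented struct:

#include <stdio.h>
#include <string.h>

/* A struct with a hole between 'flag' and 'addr' on typical 64-bit ABIs. */
struct section {
    char flag;
    /* usually 7 bytes of padding here */
    unsigned long long addr;
};

int main(void)
{
    struct section a, b;

    /* Deliberately leave the padding bytes different. */
    memset(&a, 0xAA, sizeof(a));
    memset(&b, 0x55, sizeof(b));
    a.flag = b.flag = 1;
    a.addr = b.addr = 0x1000;

    printf("memcmp says %s\n",
           memcmp(&a, &b, sizeof(a)) == 0 ? "equal" : "different");
    printf("field compare says %s\n",
           (a.flag == b.flag && a.addr == b.addr) ? "equal" : "different");
    return 0;
}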

View File

@ -2287,6 +2287,13 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f, int version_id)
}
rcu_read_unlock();
if (vdc->post_load) {
ret = vdc->post_load(vdev);
if (ret) {
return ret;
}
}
return 0;
}

View File

@ -516,6 +516,23 @@ static void xen_device_backend_set_online(XenDevice *xendev, bool online)
xen_device_backend_printf(xendev, "online", "%u", online);
}
/*
* Tell from the state whether the frontend is likely alive,
* i.e. it will react to a change of state of the backend.
*/
static bool xen_device_state_is_active(enum xenbus_state state)
{
switch (state) {
case XenbusStateInitWait:
case XenbusStateInitialised:
case XenbusStateConnected:
case XenbusStateClosing:
return true;
default:
return false;
}
}
static void xen_device_backend_changed(void *opaque)
{
XenDevice *xendev = opaque;
@ -539,11 +556,11 @@ static void xen_device_backend_changed(void *opaque)
/*
* If the toolstack (or unplug request callback) has set the backend
* state to Closing, but there is no active frontend (i.e. the
* state is not Connected) then set the backend state to Closed.
* state to Closing, but there is no active frontend then set the
* backend state to Closed.
*/
if (xendev->backend_state == XenbusStateClosing &&
xendev->frontend_state != XenbusStateConnected) {
!xen_device_state_is_active(xendev->frontend_state)) {
xen_device_backend_set_state(xendev, XenbusStateClosed);
}

View File

@ -962,6 +962,10 @@ extern unsigned int bdrv_drain_all_count;
void bdrv_apply_subtree_drain(BdrvChild *child, BlockDriverState *new_parent);
void bdrv_unapply_subtree_drain(BdrvChild *child, BlockDriverState *old_parent);
bool coroutine_fn bdrv_wait_serialising_requests(BdrvTrackedRequest *self);
void bdrv_mark_request_serialising(BdrvTrackedRequest *req, uint64_t align);
BdrvTrackedRequest *coroutine_fn bdrv_co_get_self_request(BlockDriverState *bs);
int get_tmp_filename(char *filename, int size);
BlockDriver *bdrv_probe_all(const uint8_t *buf, int buf_size,
const char *filename);

View File

@ -487,15 +487,27 @@ static inline FlatView *address_space_to_flatview(AddressSpace *as)
* @nonvolatile: this section is non-volatile
*/
struct MemoryRegionSection {
Int128 size;
MemoryRegion *mr;
FlatView *fv;
hwaddr offset_within_region;
Int128 size;
hwaddr offset_within_address_space;
bool readonly;
bool nonvolatile;
};
static inline bool MemoryRegionSection_eq(MemoryRegionSection *a,
MemoryRegionSection *b)
{
return a->mr == b->mr &&
a->fv == b->fv &&
a->offset_within_region == b->offset_within_region &&
a->offset_within_address_space == b->offset_within_address_space &&
int128_eq(a->size, b->size) &&
a->readonly == b->readonly &&
a->nonvolatile == b->nonvolatile;
}
/**
* memory_region_init: Initialize a memory region
*

View File

@ -182,6 +182,8 @@ struct VirtIONet {
char *netclient_name;
char *netclient_type;
uint64_t curr_guest_offloads;
/* used on saved state restore phase to preserve the curr_guest_offloads */
uint64_t saved_guest_offloads;
AnnounceTimer announce_timer;
bool needs_vnet_hdr_swap;
bool mtu_bypass_backend;

View File

@ -158,6 +158,12 @@ typedef struct VirtioDeviceClass {
*/
void (*save)(VirtIODevice *vdev, QEMUFile *f);
int (*load)(VirtIODevice *vdev, QEMUFile *f, int version_id);
/* The post_load hook in vmsd is called early, while the device is still being
* processed and the VirtIODevice isn't fully initialized. Devices should use
* this hook instead, unless they specifically want to verify the migration
* stream as it is processed, e.g. for bounds checking.
*/
int (*post_load)(VirtIODevice *vdev);
const VMStateDescription *vmsd;
} VirtioDeviceClass;
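A minimal standalone sketch of the optional post-load-hook pattern this new VirtioDeviceClass::post_load field introduces (the struct and function names below are invented; the real hook is invoked by virtio_load(), as shown in the virtio.c hunk above):

#include <stdio.h>

typedef struct DeviceClass {
    int (*post_load)(void *dev);   /* may be NULL */
} DeviceClass;

static int generic_load(DeviceClass *dc, void *dev)
{
    /* ... restore the common state first ... */
    puts("common state restored");

    /* Only afterwards, once the device is fully set up, run the hook. */
    if (dc->post_load) {
        return dc->post_load(dev);
    }
    return 0;
}

static int my_post_load(void *dev)
{
    puts("device-specific fixups applied");
    return 0;
}

int main(void)
{
    DeviceClass dc = { .post_load = my_post_load };
    return generic_load(&dc, NULL);
}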

View File

@ -167,6 +167,21 @@ void coroutine_fn qemu_co_mutex_lock(CoMutex *mutex);
*/
void coroutine_fn qemu_co_mutex_unlock(CoMutex *mutex);
/**
* Assert that the current coroutine holds @mutex.
*/
static inline coroutine_fn void qemu_co_mutex_assert_locked(CoMutex *mutex)
{
/*
* mutex->holder doesn't need any synchronisation if the assertion holds
* true because the mutex protects it. If it doesn't hold true, we still
* don't mind if another thread takes or releases mutex behind our back,
* because the condition will be false no matter whether we read NULL or
* the pointer for any other coroutine.
*/
assert(atomic_read(&mutex->locked) &&
mutex->holder == qemu_coroutine_self());
}
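The assertion above lets functions that require a CoMutex document and check that requirement cheaply (qcow2_detect_metadata_preallocation in the qcow2 hunks above now does exactly that). The same idea outside QEMU, sketched with a plain pthread mutex and an explicit owner field (all names here are invented):

#include <assert.h>
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_t holder;    /* only meaningful while 'held' is set */
    int held;
} OwnedMutex;

static void owned_mutex_lock(OwnedMutex *m)
{
    pthread_mutex_lock(&m->lock);
    m->holder = pthread_self();
    m->held = 1;
}

static void owned_mutex_unlock(OwnedMutex *m)
{
    m->held = 0;
    pthread_mutex_unlock(&m->lock);
}

/* Analogue of qemu_co_mutex_assert_locked(): a cheap sanity check that the
 * caller really holds the lock it claims to hold. */
static void owned_mutex_assert_locked(OwnedMutex *m)
{
    assert(m->held && pthread_equal(m->holder, pthread_self()));
}

static void needs_lock_held(OwnedMutex *m)
{
    owned_mutex_assert_locked(m);
    puts("doing work under the lock");
}

int main(void)
{
    OwnedMutex m = { .lock = PTHREAD_MUTEX_INITIALIZER };

    owned_mutex_lock(&m);
    needs_lock_held(&m);
    owned_mutex_unlock(&m);
    return 0;
}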
/**
* CoQueues are a mechanism to queue coroutines in order to continue executing

View File

@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
* @count: Number of bits to reset.
*
* Reset a consecutive range of bits in an HBitmap.
* @start and @count must be aligned to the bitmap granularity. The only
* exception is resetting the tail of the bitmap: @count may be equal to
* hb->orig_size - @start, in which case @count need not be aligned. The sum
* @start + @count is allowed to be greater than hb->orig_size, but only if
* @start < hb->orig_size and @start + @count = ALIGN_UP(hb->orig_size, granularity).
*/
void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
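To make the documented contract above concrete, here is a small standalone checker that mirrors it with made-up numbers (this only restates the documentation; it is not hbitmap's implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ALIGN_UP(n, a) (((n) + (a) - 1) / (a) * (a))

/* Mirrors the documented hbitmap_reset() contract; granularity is given in
 * the same units as start/count, and the values below are purely illustrative. */
static bool reset_range_is_valid(uint64_t orig_size, uint64_t granularity,
                                 uint64_t start, uint64_t count)
{
    if (start % granularity == 0 && count % granularity == 0) {
        return true;                              /* fully aligned range */
    }
    /* Unaligned tail: must end exactly at orig_size or at its round-up. */
    return start < orig_size &&
           (start + count == orig_size ||
            start + count == ALIGN_UP(orig_size, granularity));
}

int main(void)
{
    uint64_t size = 100, gran = 8;

    printf("%d\n", reset_range_is_valid(size, gran, 0, 8));   /* 1: aligned      */
    printf("%d\n", reset_range_is_valid(size, gran, 96, 4));  /* 1: tail         */
    printf("%d\n", reset_range_is_valid(size, gran, 96, 8));  /* 1: to round-up  */
    printf("%d\n", reset_range_is_valid(size, gran, 40, 3));  /* 0: invalid      */
    return 0;
}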

View File

@ -199,13 +199,20 @@ static inline void *qemu_iovec_buf(QEMUIOVector *qiov)
void qemu_iovec_init(QEMUIOVector *qiov, int alloc_hint);
void qemu_iovec_init_external(QEMUIOVector *qiov, struct iovec *iov, int niov);
void qemu_iovec_init_extended(
QEMUIOVector *qiov,
void *head_buf, size_t head_len,
QEMUIOVector *mid_qiov, size_t mid_offset, size_t mid_len,
void *tail_buf, size_t tail_len);
void qemu_iovec_init_slice(QEMUIOVector *qiov, QEMUIOVector *source,
size_t offset, size_t len);
void qemu_iovec_add(QEMUIOVector *qiov, void *base, size_t len);
void qemu_iovec_concat(QEMUIOVector *dst,
QEMUIOVector *src, size_t soffset, size_t sbytes);
size_t qemu_iovec_concat_iov(QEMUIOVector *dst,
struct iovec *src_iov, unsigned int src_cnt,
size_t soffset, size_t sbytes);
bool qemu_iovec_is_zero(QEMUIOVector *qiov);
bool qemu_iovec_is_zero(QEMUIOVector *qiov, size_t qiov_offset, size_t bytes);
void qemu_iovec_destroy(QEMUIOVector *qiov);
void qemu_iovec_reset(QEMUIOVector *qiov);
size_t qemu_iovec_to_buf(QEMUIOVector *qiov, size_t offset,
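qemu_iovec_init_extended() composes a new vector out of an optional head buffer, a slice of an existing vector, and an optional tail buffer. The same idea expressed with plain POSIX iovecs, purely as an illustration (not QEMU's implementation):

#include <stdio.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
    char head[] = "head|";
    char mid[]  = "....middle....";
    char tail[] = "|tail\n";

    /* Head buffer + a slice of the middle buffer + tail buffer, analogous to
     * what qemu_iovec_init_extended() builds from an existing QEMUIOVector. */
    struct iovec v[3] = {
        { .iov_base = head,    .iov_len = strlen(head) },
        { .iov_base = mid + 4, .iov_len = 6            },   /* "middle" */
        { .iov_base = tail,    .iov_len = strlen(tail) },
    };

    if (writev(STDOUT_FILENO, v, 3) < 0) {
        perror("writev");
        return 1;
    }
    return 0;
}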

View File

@ -319,7 +319,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
*mark = 0;
if (ppkt->tcp_seq == spkt->tcp_seq && ppkt->seq_end == spkt->seq_end) {
if (colo_compare_packet_payload(ppkt, spkt,
if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size, spkt->header_size,
ppkt->payload_size)) {
*mark = COLO_COMPARE_FREE_SECONDARY | COLO_COMPARE_FREE_PRIMARY;
@ -329,7 +329,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
/* one part of secondary packet payload still need to be compared */
if (!after(ppkt->seq_end, spkt->seq_end)) {
if (colo_compare_packet_payload(ppkt, spkt,
if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size + ppkt->offset,
spkt->header_size + spkt->offset,
ppkt->payload_size - ppkt->offset)) {
@ -348,7 +348,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
/* primary packet is longer than secondary packet, compare
* the same part and mark the primary packet offset
*/
if (colo_compare_packet_payload(ppkt, spkt,
if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size + ppkt->offset,
spkt->header_size + spkt->offset,
spkt->payload_size - spkt->offset)) {

View File

@ -235,6 +235,10 @@ static void chr_closed_bh(void *opaque)
s = DO_UPCAST(NetVhostUserState, nc, ncs[0]);
if (s->vhost_net) {
s->acked_features = vhost_net_get_acked_features(s->vhost_net);
}
qmp_set_link(name, false, &err);
qemu_chr_fe_set_handlers(&s->chr, NULL, NULL, net_vhost_user_event,

View File

@ -46,8 +46,13 @@ all: $(foreach flashdev,$(flashdevs),../pc-bios/edk2-$(flashdev).fd.bz2) \
# files.
.INTERMEDIATE: $(foreach flashdev,$(flashdevs),../pc-bios/edk2-$(flashdev).fd)
# Fetch edk2 submodule's submodules. If it is not in a git tree, assume
# we're building from a tarball and that they've already been fetched by
# make-release/tarball scripts.
submodules:
cd edk2 && git submodule update --init --force
if test -d edk2/.git; then \
cd edk2 && git submodule update --init --force; \
fi
# See notes on the ".NOTPARALLEL" target and the "+" indicator in
# "tests/uefi-test-tools/Makefile".

View File

@ -20,6 +20,14 @@ git checkout "v${version}"
git submodule update --init
(cd roms/seabios && git describe --tags --long --dirty > .version)
(cd roms/skiboot && ./make_version.sh > .version)
# Fetch edk2 submodule's submodules, since it won't have access to them via
# the tarball later.
#
# A more uniform way to handle this sort of situation would be nice, but we
# don't necessarily have much control over how a submodule handles its
# submodule dependencies, so we continue to handle these on a case-by-case
# basis for now.
(cd roms/edk2 && git submodule update --init)
popd
tar --exclude=.git -cjf ${destination}.tar.bz2 ${destination}
rm -rf ${destination}

View File

@ -39,7 +39,6 @@ static int pr_manager_worker(void *opaque)
int fd = data->fd;
int r;
g_free(data);
trace_pr_manager_run(fd, hdr->cmdp[0], hdr->cmdp[1]);
/* The reference was taken in pr_manager_execute. */

View File

@ -283,7 +283,9 @@ bool alpha_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
cs->exception_index = EXCP_MMFAULT;
env->trap_arg0 = addr;
env->trap_arg1 = fail;
env->trap_arg2 = (access_type == MMU_INST_FETCH ? -1 : access_type);
env->trap_arg2 = (access_type == MMU_DATA_LOAD ? 0ull :
access_type == MMU_DATA_STORE ? 1ull :
/* access_type == MMU_INST_FETCH */ -1ull);
cpu_loop_exit_restore(cs, retaddr);
}

View File

@ -704,9 +704,10 @@ static bool trans_VMSR_VMRS(DisasContext *s, arg_VMSR_VMRS *a)
if (arm_dc_feature(s, ARM_FEATURE_M)) {
/*
* The only M-profile VFP vmrs/vmsr sysreg is FPSCR.
* Writes to R15 are UNPREDICTABLE; we choose to undef.
* Accesses to R15 are UNPREDICTABLE; we choose to undef.
* (FPSCR -> r15 is a special case which writes to the PSR flags.)
*/
if (a->rt == 15 || a->reg != ARM_VFP_FPSCR) {
if (a->rt == 15 && (!a->l || a->reg != ARM_VFP_FPSCR)) {
return false;
}
}
@ -881,8 +882,10 @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
/* gpreg to fpreg */
tmp = load_reg(s, a->rt);
neon_store_reg32(tmp, a->vm);
tcg_temp_free_i32(tmp);
tmp = load_reg(s, a->rt2);
neon_store_reg32(tmp, a->vm + 1);
tcg_temp_free_i32(tmp);
}
return true;

View File

@ -952,10 +952,27 @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
store_cpu_field(var, thumb);
}
/* Set PC and Thumb state from var. var is marked as dead.
/*
* Set PC and Thumb state from var. var is marked as dead.
* For M-profile CPUs, include logic to detect exception-return
* branches and handle them. This is needed for Thumb POP/LDM to PC, LDR to PC,
* and BX reg, and no others, and happens only for code in Handler mode.
* The Security Extension also requires us to check for the FNC_RETURN
* which signals a function return from non-secure state; this can happen
* in both Handler and Thread mode.
* To avoid having to do multiple comparisons in inline generated code,
* we make the check we do here loose, so it will match for EXC_RETURN
* in Thread mode. For system emulation do_v7m_exception_exit() checks
* for these spurious cases and returns without doing anything (giving
* the same behaviour as for a branch to a non-magic address).
*
* In linux-user mode it is unclear what the right behaviour for an
* attempted FNC_RETURN should be, because in real hardware this will go
* directly to Secure code (ie not the Linux kernel) which will then treat
* the error in any way it chooses. For QEMU we opt to make the FNC_RETURN
* attempt behave the way it would on a CPU without the security extension,
* which is to say "like a normal branch". That means we can simply treat
* all branches as normal with no magic address behaviour.
*/
static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
{
@ -963,10 +980,12 @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
* s->base.is_jmp that we need to do the rest of the work later.
*/
gen_bx(s, var);
#ifndef CONFIG_USER_ONLY
if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) ||
(s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) {
s->base.is_jmp = DISAS_BX_EXCRET;
}
#endif
}
static inline void gen_bx_excret_final_code(DisasContext *s)

View File

@ -213,7 +213,7 @@ static void get_vec_element_ptr_i64(TCGv_ptr ptr, uint8_t reg, TCGv_i64 enr,
vec_full_reg_offset(v3), ptr, 16, 16, data, fn)
#define gen_gvec_3i(v1, v2, v3, c, gen) \
tcg_gen_gvec_3i(vec_full_reg_offset(v1), vec_full_reg_offset(v2), \
vec_full_reg_offset(v3), c, 16, 16, gen)
vec_full_reg_offset(v3), 16, 16, c, gen)
#define gen_gvec_4(v1, v2, v3, v4, gen) \
tcg_gen_gvec_4(vec_full_reg_offset(v1), vec_full_reg_offset(v2), \
vec_full_reg_offset(v3), vec_full_reg_offset(v4), \

View File

@ -27,8 +27,8 @@
#include "qemu/osdep.h"
#include "cpu.h"
#include "exec/exec-all.h"
#include "exec/gdbstub.h"
#include "qemu-common.h"
#include "qemu/host-utils.h"
#include "core-test_mmuhifi_c3/core-isa.h"
@ -39,7 +39,6 @@
static XtensaConfig test_mmuhifi_c3 __attribute__((unused)) = {
.name = "test_mmuhifi_c3",
.options = XTENSA_OPTIONS,
.gdb_regmap = {
.reg = {
#include "core-test_mmuhifi_c3/gdb-config.inc.c"

View File

@ -1,15 +1,37 @@
/*
* Xtensa processor core configuration information.
* xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
* processor CORE configuration
*
* This file is subject to the terms and conditions of version 2.1 of the GNU
* Lesser General Public License as published by the Free Software Foundation.
*
* Copyright (c) 1999-2009 Tensilica Inc.
* See <xtensa/config/core.h>, which includes this file, for more details.
*/
/* Xtensa processor core configuration information.
Copyright (c) 1999-2019 Tensilica Inc.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
#ifndef XTENSA_CORE_TEST_MMUHIFI_C3_CORE_ISA_H
#define XTENSA_CORE_TEST_MMUHIFI_C3_CORE_ISA_H
/****************************************************************************
Parameters Useful for Any Code, USER or PRIVILEGED
****************************************************************************/
@ -32,6 +54,7 @@
#define XCHAL_HAVE_DEBUG 1 /* debug option */
#define XCHAL_HAVE_DENSITY 1 /* 16-bit instructions */
#define XCHAL_HAVE_LOOPS 1 /* zero-overhead loops */
#define XCHAL_LOOP_BUFFER_SIZE 0 /* zero-ov. loop instr buffer size */
#define XCHAL_HAVE_NSA 1 /* NSA/NSAU instructions */
#define XCHAL_HAVE_MINMAX 1 /* MIN/MAX instructions */
#define XCHAL_HAVE_SEXT 1 /* SEXT instruction */
@ -59,44 +82,73 @@
#define XCHAL_HAVE_TAP_MASTER 0 /* JTAG TAP control instr's */
#define XCHAL_HAVE_PRID 1 /* processor ID register */
#define XCHAL_HAVE_EXTERN_REGS 1 /* WER/RER instructions */
#define XCHAL_HAVE_MX 1 /* MX core (Tensilica internal) */
#define XCHAL_HAVE_MP_INTERRUPTS 1 /* interrupt distributor port */
#define XCHAL_HAVE_MP_RUNSTALL 1 /* core RunStall control port */
#define XCHAL_HAVE_PSO 0 /* Power Shut-Off */
#define XCHAL_HAVE_PSO_CDM 0 /* core/debug/mem pwr domains */
#define XCHAL_HAVE_PSO_FULL_RETENTION 0 /* all regs preserved on PSO */
#define XCHAL_HAVE_THREADPTR 1 /* THREADPTR register */
#define XCHAL_HAVE_BOOLEANS 1 /* boolean registers */
#define XCHAL_HAVE_CP 1 /* CPENABLE reg (coprocessor) */
#define XCHAL_CP_MAXCFG 2 /* max allowed cp id plus one */
#define XCHAL_HAVE_MAC16 0 /* MAC16 package */
#define XCHAL_HAVE_VECTORFPU2005 0 /* vector floating-point pkg */
#define XCHAL_HAVE_FP 0 /* floating point pkg */
#define XCHAL_HAVE_FP 0 /* single prec floating point */
#define XCHAL_HAVE_FP_DIV 0 /* FP with DIV instructions */
#define XCHAL_HAVE_FP_RECIP 0 /* FP with RECIP instructions */
#define XCHAL_HAVE_FP_SQRT 0 /* FP with SQRT instructions */
#define XCHAL_HAVE_FP_RSQRT 0 /* FP with RSQRT instructions */
#define XCHAL_HAVE_DFP 0 /* double precision FP pkg */
#define XCHAL_HAVE_DFP_DIV 0 /* DFP with DIV instructions */
#define XCHAL_HAVE_DFP_RECIP 0 /* DFP with RECIP instructions*/
#define XCHAL_HAVE_DFP_SQRT 0 /* DFP with SQRT instructions */
#define XCHAL_HAVE_DFP_RSQRT 0 /* DFP with RSQRT instructions*/
#define XCHAL_HAVE_DFP_accel 0 /* double precision FP acceleration pkg */
#define XCHAL_HAVE_VECTRA1 0 /* Vectra I pkg */
#define XCHAL_HAVE_VECTRALX 0 /* Vectra LX pkg */
#define XCHAL_HAVE_HIFIPRO 0 /* HiFiPro Audio Engine pkg */
#define XCHAL_HAVE_HIFI3 0 /* HiFi3 Audio Engine pkg */
#define XCHAL_HAVE_HIFI2 1 /* HiFi2 Audio Engine pkg */
#define XCHAL_HAVE_HIFI2EP 0 /* HiFi2EP */
#define XCHAL_HAVE_HIFI_MINI 0
#define XCHAL_HAVE_CONNXD2 0 /* ConnX D2 pkg */
#define XCHAL_HAVE_BBE16 0 /* ConnX BBE16 pkg */
#define XCHAL_HAVE_BBE16_RSQRT 0 /* BBE16 & vector recip sqrt */
#define XCHAL_HAVE_BBE16_VECDIV 0 /* BBE16 & vector divide */
#define XCHAL_HAVE_BBE16_DESPREAD 0 /* BBE16 & despread */
#define XCHAL_HAVE_BBENEP 0 /* ConnX BBENEP pkgs */
#define XCHAL_HAVE_BSP3 0 /* ConnX BSP3 pkg */
#define XCHAL_HAVE_BSP3_TRANSPOSE 0 /* BSP3 & transpose32x32 */
#define XCHAL_HAVE_SSP16 0 /* ConnX SSP16 pkg */
#define XCHAL_HAVE_SSP16_VITERBI 0 /* SSP16 & viterbi */
#define XCHAL_HAVE_TURBO16 0 /* ConnX Turbo16 pkg */
#define XCHAL_HAVE_BBP16 0 /* ConnX BBP16 pkg */
#define XCHAL_HAVE_FLIX3 0 /* basic 3-way FLIX option */
/*----------------------------------------------------------------------
MISC
----------------------------------------------------------------------*/
#define XCHAL_NUM_LOADSTORE_UNITS 1 /* load/store units */
#define XCHAL_NUM_WRITEBUFFER_ENTRIES 8 /* size of write buffer */
#define XCHAL_INST_FETCH_WIDTH 8 /* instr-fetch width in bytes */
#define XCHAL_DATA_WIDTH 8 /* data width in bytes */
#define XCHAL_DATA_PIPE_DELAY 1 /* d-side pipeline delay
(1 = 5-stage, 2 = 7-stage) */
/* In T1050, applies to selected core load and store instructions (see ISA): */
#define XCHAL_UNALIGNED_LOAD_EXCEPTION 1 /* unaligned loads cause exc. */
#define XCHAL_UNALIGNED_STORE_EXCEPTION 1 /* unaligned stores cause exc.*/
#define XCHAL_UNALIGNED_LOAD_HW 0 /* unaligned loads work in hw */
#define XCHAL_UNALIGNED_STORE_HW 0 /* unaligned stores work in hw*/
#define XCHAL_SW_VERSION 800000 /* sw version of this header */
#define XCHAL_SW_VERSION 1000006 /* sw version of this header */
#define XCHAL_CORE_ID "test_mmuhifi_c3" /* alphanum core name
(CoreID) set in the Xtensa
Processor Generator */
#define XCHAL_CORE_DESCRIPTION "test_mmuhifi_c3"
#define XCHAL_BUILD_UNIQUE_ID 0x00005A6A /* 22-bit sw build ID */
/*
@ -136,6 +188,10 @@
#define XCHAL_DCACHE_IS_WRITEBACK 1 /* writeback feature */
#define XCHAL_DCACHE_IS_COHERENT 1 /* MP coherence feature */
#define XCHAL_HAVE_PREFETCH 0 /* PREFCTL register */
#define XCHAL_HAVE_PREFETCH_L1 0 /* prefetch to L1 dcache */
#define XCHAL_PREFETCH_CASTOUT_LINES 0 /* dcache pref. castout bufsz */
@ -172,6 +228,8 @@
#define XCHAL_ICACHE_ACCESS_SIZE 8
#define XCHAL_DCACHE_ACCESS_SIZE 8
#define XCHAL_DCACHE_BANKS 1 /* number of banks */
/* Number of encoded cache attr bits (see <xtensa/hal.h> for decoded bits): */
#define XCHAL_CA_BITS 4
@ -187,6 +245,8 @@
#define XCHAL_NUM_URAM 0 /* number of core unified RAMs*/
#define XCHAL_NUM_XLMI 0 /* number of core XLMI ports */
#define XCHAL_HAVE_IMEM_LOADSTORE 1 /* can load/store to IROM/IRAM*/
/*----------------------------------------------------------------------
INTERRUPTS and TIMERS
@ -261,6 +321,7 @@
#define XCHAL_INTTYPE_MASK_TIMER 0x00000140
#define XCHAL_INTTYPE_MASK_NMI 0x00000000
#define XCHAL_INTTYPE_MASK_WRITE_ERROR 0x00000000
#define XCHAL_INTTYPE_MASK_PROFILING 0x00000000
/* Interrupt numbers assigned to specific interrupt sources: */
#define XCHAL_TIMER0_INTERRUPT 6 /* CCOMPARE0 */
@ -273,7 +334,7 @@
/*
* External interrupt vectors/levels.
* External interrupt mapping.
* These macros describe how Xtensa processor interrupt numbers
* (as numbered internally, eg. in INTERRUPT and INTENABLE registers)
* map to external BInterrupt<n> pins, for those interrupts
@ -281,7 +342,7 @@
* See the Xtensa processor databook for more details.
*/
/* Core interrupt numbers mapped to each EXTERNAL interrupt number: */
/* Core interrupt numbers mapped to each EXTERNAL BInterrupt pin number: */
#define XCHAL_EXTINT0_NUM 0 /* (intlevel 1) */
#define XCHAL_EXTINT1_NUM 1 /* (intlevel 1) */
#define XCHAL_EXTINT2_NUM 2 /* (intlevel 1) */
@ -291,6 +352,16 @@
#define XCHAL_EXTINT6_NUM 9 /* (intlevel 1) */
#define XCHAL_EXTINT7_NUM 10 /* (intlevel 1) */
#define XCHAL_EXTINT8_NUM 11 /* (intlevel 1) */
/* EXTERNAL BInterrupt pin numbers mapped to each core interrupt number: */
#define XCHAL_INT0_EXTNUM 0 /* (intlevel 1) */
#define XCHAL_INT1_EXTNUM 1 /* (intlevel 1) */
#define XCHAL_INT2_EXTNUM 2 /* (intlevel 1) */
#define XCHAL_INT3_EXTNUM 3 /* (intlevel 1) */
#define XCHAL_INT4_EXTNUM 4 /* (intlevel 1) */
#define XCHAL_INT5_EXTNUM 5 /* (intlevel 1) */
#define XCHAL_INT9_EXTNUM 6 /* (intlevel 1) */
#define XCHAL_INT10_EXTNUM 7 /* (intlevel 1) */
#define XCHAL_INT11_EXTNUM 8 /* (intlevel 1) */
/*----------------------------------------------------------------------
@ -300,11 +371,13 @@
#define XCHAL_XEA_VERSION 2 /* Xtensa Exception Architecture
number: 1 == XEA1 (old)
2 == XEA2 (new)
0 == XEAX (extern) */
0 == XEAX (extern) or TX */
#define XCHAL_HAVE_XEA1 0 /* Exception Architecture 1 */
#define XCHAL_HAVE_XEA2 1 /* Exception Architecture 2 */
#define XCHAL_HAVE_XEAX 0 /* External Exception Arch. */
#define XCHAL_HAVE_EXCEPTIONS 1 /* exception option */
#define XCHAL_HAVE_HALT 0 /* halt architecture option */
#define XCHAL_HAVE_BOOTLOADER 0 /* boot loader (for TX) */
#define XCHAL_HAVE_MEM_ECC_PARITY 0 /* local memory ECC/parity */
#define XCHAL_HAVE_VECTOR_SELECT 1 /* relocatable vectors */
#define XCHAL_HAVE_VECBASE 1 /* relocatable vectors */
@ -344,13 +417,30 @@
/*----------------------------------------------------------------------
DEBUG
DEBUG MODULE
----------------------------------------------------------------------*/
/* Misc */
#define XCHAL_HAVE_DEBUG_ERI 0 /* ERI to debug module */
#define XCHAL_HAVE_DEBUG_APB 0 /* APB to debug module */
#define XCHAL_HAVE_DEBUG_JTAG 0 /* JTAG to debug module */
/* On-Chip Debug (OCD) */
#define XCHAL_HAVE_OCD 1 /* OnChipDebug option */
#define XCHAL_NUM_IBREAK 0 /* number of IBREAKn regs */
#define XCHAL_NUM_DBREAK 0 /* number of DBREAKn regs */
#define XCHAL_HAVE_OCD_DIR_ARRAY 0 /* faster OCD option */
#define XCHAL_HAVE_OCD_DIR_ARRAY 0 /* faster OCD option (to LX4) */
#define XCHAL_HAVE_OCD_LS32DDR 0 /* L32DDR/S32DDR (faster OCD) */
/* TRAX (in core) */
#define XCHAL_HAVE_TRAX 0 /* TRAX in debug module */
#define XCHAL_TRAX_MEM_SIZE 0 /* TRAX memory size in bytes */
#define XCHAL_TRAX_MEM_SHAREABLE 0 /* start/end regs; ready sig. */
#define XCHAL_TRAX_ATB_WIDTH 0 /* ATB width (bits), 0=no ATB */
#define XCHAL_TRAX_TIME_WIDTH 0 /* timestamp bitwidth, 0=none */
/* Perf counters */
#define XCHAL_NUM_PERF_COUNTERS 0 /* performance counters */
/*----------------------------------------------------------------------

View File

@ -1,23 +1,25 @@
/* Configuration for the Xtensa architecture for GDB, the GNU debugger.
Copyright (C) 2003, 2005, 2006, 2007, 2008 Free Software Foundation, Inc.
Copyright (c) 2003-2019 Tensilica Inc.
This file is part of GDB.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 3 of the License, or
(at your option) any later version.
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>. */
/* idx ofs bi sz al targno flags cp typ group name */
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */
XTREG( 0, 0,32, 4, 4,0x0020,0x0006,-2, 9,0x0100,pc, 0,0,0,0,0,0)
XTREG( 1, 4,32, 4, 4,0x0100,0x0006,-2, 1,0x0002,ar0, 0,0,0,0,0,0)
XTREG( 2, 8,32, 4, 4,0x0101,0x0006,-2, 1,0x0002,ar1, 0,0,0,0,0,0)
@ -58,8 +60,8 @@
XTREG( 37,148,32, 4, 4,0x0205,0x0006,-2, 2,0x1100,litbase, 0,0,0,0,0,0)
XTREG( 38,152, 3, 4, 4,0x0248,0x0006,-2, 2,0x1002,windowbase, 0,0,0,0,0,0)
XTREG( 39,156, 8, 4, 4,0x0249,0x0006,-2, 2,0x1002,windowstart, 0,0,0,0,0,0)
XTREG( 40,160,32, 4, 4,0x02b0,0x0002,-2, 2,0x1000,sr176, 0,0,0,0,0,0)
XTREG( 41,164,32, 4, 4,0x02d0,0x0002,-2, 2,0x1000,sr208, 0,0,0,0,0,0)
XTREG( 40,160,32, 4, 4,0x02b0,0x0002,-2, 2,0x1000,configid0, 0,0,0,0,0,0)
XTREG( 41,164,32, 4, 4,0x02d0,0x0002,-2, 2,0x1000,configid1, 0,0,0,0,0,0)
XTREG( 42,168,19, 4, 4,0x02e6,0x0006,-2, 2,0x1100,ps, 0,0,0,0,0,0)
XTREG( 43,172,32, 4, 4,0x03e7,0x0006,-2, 3,0x0110,threadptr, 0,0,0,0,0,0)
XTREG( 44,176,16, 4, 4,0x0204,0x0006,-1, 2,0x1100,br, 0,0,0,0,0,0)
@ -137,4 +139,82 @@
XTREG(104,464,32, 4, 4,0x000d,0x0006,-2, 8,0x0100,a13, 0,0,0,0,0,0)
XTREG(105,468,32, 4, 4,0x000e,0x0006,-2, 8,0x0100,a14, 0,0,0,0,0,0)
XTREG(106,472,32, 4, 4,0x000f,0x0006,-2, 8,0x0100,a15, 0,0,0,0,0,0)
XTREG(107,476, 1, 1, 1,0x0010,0x0006,-2, 6,0x1010,b0,
0,0,&xtensa_mask0,0,0,0)
XTREG(108,477, 1, 1, 1,0x0011,0x0006,-2, 6,0x1010,b1,
0,0,&xtensa_mask1,0,0,0)
XTREG(109,478, 1, 1, 1,0x0012,0x0006,-2, 6,0x1010,b2,
0,0,&xtensa_mask2,0,0,0)
XTREG(110,479, 1, 1, 1,0x0013,0x0006,-2, 6,0x1010,b3,
0,0,&xtensa_mask3,0,0,0)
XTREG(111,480, 1, 1, 1,0x0014,0x0006,-2, 6,0x1010,b4,
0,0,&xtensa_mask4,0,0,0)
XTREG(112,481, 1, 1, 1,0x0015,0x0006,-2, 6,0x1010,b5,
0,0,&xtensa_mask5,0,0,0)
XTREG(113,482, 1, 1, 1,0x0016,0x0006,-2, 6,0x1010,b6,
0,0,&xtensa_mask6,0,0,0)
XTREG(114,483, 1, 1, 1,0x0017,0x0006,-2, 6,0x1010,b7,
0,0,&xtensa_mask7,0,0,0)
XTREG(115,484, 1, 1, 1,0x0018,0x0006,-2, 6,0x1010,b8,
0,0,&xtensa_mask8,0,0,0)
XTREG(116,485, 1, 1, 1,0x0019,0x0006,-2, 6,0x1010,b9,
0,0,&xtensa_mask9,0,0,0)
XTREG(117,486, 1, 1, 1,0x001a,0x0006,-2, 6,0x1010,b10,
0,0,&xtensa_mask10,0,0,0)
XTREG(118,487, 1, 1, 1,0x001b,0x0006,-2, 6,0x1010,b11,
0,0,&xtensa_mask11,0,0,0)
XTREG(119,488, 1, 1, 1,0x001c,0x0006,-2, 6,0x1010,b12,
0,0,&xtensa_mask12,0,0,0)
XTREG(120,489, 1, 1, 1,0x001d,0x0006,-2, 6,0x1010,b13,
0,0,&xtensa_mask13,0,0,0)
XTREG(121,490, 1, 1, 1,0x001e,0x0006,-2, 6,0x1010,b14,
0,0,&xtensa_mask14,0,0,0)
XTREG(122,491, 1, 1, 1,0x001f,0x0006,-2, 6,0x1010,b15,
0,0,&xtensa_mask15,0,0,0)
XTREG(123,492, 4, 4, 4,0x2003,0x0006,-2, 6,0x1010,psintlevel,
0,0,&xtensa_mask16,0,0,0)
XTREG(124,496, 1, 4, 4,0x2004,0x0006,-2, 6,0x1010,psum,
0,0,&xtensa_mask17,0,0,0)
XTREG(125,500, 1, 4, 4,0x2005,0x0006,-2, 6,0x1010,pswoe,
0,0,&xtensa_mask18,0,0,0)
XTREG(126,504, 2, 4, 4,0x2006,0x0006,-2, 6,0x1010,psring,
0,0,&xtensa_mask19,0,0,0)
XTREG(127,508, 1, 4, 4,0x2007,0x0006,-2, 6,0x1010,psexcm,
0,0,&xtensa_mask20,0,0,0)
XTREG(128,512, 2, 4, 4,0x2008,0x0006,-2, 6,0x1010,pscallinc,
0,0,&xtensa_mask21,0,0,0)
XTREG(129,516, 4, 4, 4,0x2009,0x0006,-2, 6,0x1010,psowb,
0,0,&xtensa_mask22,0,0,0)
XTREG(130,520,20, 4, 4,0x200a,0x0006,-2, 6,0x1010,litbaddr,
0,0,&xtensa_mask23,0,0,0)
XTREG(131,524, 1, 4, 4,0x200b,0x0006,-2, 6,0x1010,litben,
0,0,&xtensa_mask24,0,0,0)
XTREG(132,528, 4, 4, 4,0x200e,0x0006,-2, 6,0x1010,dbnum,
0,0,&xtensa_mask25,0,0,0)
XTREG(133,532, 8, 4, 4,0x200f,0x0006,-2, 6,0x1010,asid3,
0,0,&xtensa_mask26,0,0,0)
XTREG(134,536, 8, 4, 4,0x2010,0x0006,-2, 6,0x1010,asid2,
0,0,&xtensa_mask27,0,0,0)
XTREG(135,540, 8, 4, 4,0x2011,0x0006,-2, 6,0x1010,asid1,
0,0,&xtensa_mask28,0,0,0)
XTREG(136,544, 2, 4, 4,0x2012,0x0006,-2, 6,0x1010,instpgszid4,
0,0,&xtensa_mask29,0,0,0)
XTREG(137,548, 2, 4, 4,0x2013,0x0006,-2, 6,0x1010,datapgszid4,
0,0,&xtensa_mask30,0,0,0)
XTREG(138,552,10, 4, 4,0x2014,0x0006,-2, 6,0x1010,ptbase,
0,0,&xtensa_mask31,0,0,0)
XTREG(139,556, 1, 4, 4,0x201a,0x0006, 1, 5,0x1010,ae_overflow,
0,0,&xtensa_mask32,0,0,0)
XTREG(140,560, 6, 4, 4,0x201b,0x0006, 1, 5,0x1010,ae_sar,
0,0,&xtensa_mask33,0,0,0)
XTREG(141,564, 4, 4, 4,0x201c,0x0006, 1, 5,0x1010,ae_bitptr,
0,0,&xtensa_mask34,0,0,0)
XTREG(142,568, 4, 4, 4,0x201d,0x0006, 1, 5,0x1010,ae_bitsused,
0,0,&xtensa_mask35,0,0,0)
XTREG(143,572, 4, 4, 4,0x201e,0x0006, 1, 5,0x1010,ae_tablesize,
0,0,&xtensa_mask36,0,0,0)
XTREG(144,576, 4, 4, 4,0x201f,0x0006, 1, 5,0x1010,ae_first_ts,
0,0,&xtensa_mask37,0,0,0)
XTREG(145,580,27, 4, 4,0x2020,0x0006, 1, 5,0x1010,ae_nextoffset,
0,0,&xtensa_mask38,0,0,0)
XTREG_END

File diff suppressed because it is too large

View File

@ -0,0 +1,35 @@
#
# Ensure CPU die-id can be omitted on -device
#
# Copyright (c) 2019 Red Hat Inc
#
# Author:
# Eduardo Habkost <ehabkost@redhat.com>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, see <http://www.gnu.org/licenses/>.
#
from avocado_qemu import Test
class OmittedCPUProps(Test):
"""
:avocado: tags=arch:x86_64
"""
def test_no_die_id(self):
self.vm.add_args('-nodefaults', '-S')
self.vm.add_args('-smp', '1,sockets=2,cores=2,threads=2,maxcpus=8')
self.vm.add_args('-cpu', 'qemu64')
self.vm.add_args('-device', 'qemu64-x86_64-cpu,socket-id=1,core-id=0,thread-id=0')
self.vm.launch()
self.assertEquals(len(self.vm.command('query-cpus')), 2)

View File

@ -957,4 +957,5 @@ class TestSetSpeed(iotests.QMPTestCase):
self.cancel_and_wait(resume=True)
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2', 'qed'])
iotests.main(supported_fmts=['qcow2', 'qed'],
supported_protocols=['file'])

View File

@ -433,4 +433,5 @@ class TestReopenOverlay(ImageCommitTestCase):
self.run_commit_test(self.img1, self.img0)
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2', 'qed'])
iotests.main(supported_fmts=['qcow2', 'qed'],
supported_protocols=['file'])

View File

@ -1068,4 +1068,5 @@ class TestOrphanedSource(iotests.QMPTestCase):
self.assert_qmp(result, 'error/class', 'GenericError')
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2', 'qed'])
iotests.main(supported_fmts=['qcow2', 'qed'],
supported_protocols=['file'])

View File

@ -118,4 +118,5 @@ class TestRefcountTableGrowth(iotests.QMPTestCase):
pass
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -175,4 +175,5 @@ class TestSCMFd(iotests.QMPTestCase):
"File descriptor named '%s' not found" % fdname)
if __name__ == '__main__':
iotests.main(supported_fmts=['raw'])
iotests.main(supported_fmts=['raw'],
supported_protocols=['file'])

View File

@ -563,4 +563,5 @@ class TestDriveCompression(iotests.QMPTestCase):
target='drive1')
if __name__ == '__main__':
iotests.main(supported_fmts=['raw', 'qcow2'])
iotests.main(supported_fmts=['raw', 'qcow2'],
supported_protocols=['file'])

View File

@ -335,4 +335,5 @@ class BackupTest(iotests.QMPTestCase):
self.dismissal_failure(True)
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2', 'qed'])
iotests.main(supported_fmts=['qcow2', 'qed'],
supported_protocols=['file'])

View File

@ -256,4 +256,5 @@ class TestSnapshotDelete(ImageSnapshotTestCase):
self.assert_qmp(result, 'error/class', 'GenericError')
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -27,7 +27,7 @@ Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1073741824000 subformat=twoGbMax
image: TEST_DIR/t.vmdk
file format: vmdk
virtual size: 0.977 TiB (1073741824000 bytes)
disk size: 16 KiB
disk size: 1.97 MiB
Format specific information:
cid: XXXXXXXX
parent cid: XXXXXXXX

View File

@ -129,4 +129,5 @@ TestQemuImgInfo = None
TestQMP = None
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -67,4 +67,5 @@ class TestLiveSnapshot(iotests.QMPTestCase):
self.checkConfig('target')
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -707,4 +707,5 @@ if __name__ == '__main__':
iotests.qemu_default_machine)
# Need to support image creation
iotests.main(supported_fmts=['vpc', 'parallels', 'qcow', 'vdi', 'qcow2',
'vmdk', 'raw', 'vhdx', 'qed'])
'vmdk', 'raw', 'vhdx', 'qed'],
supported_protocols=['file'])

View File

@ -779,4 +779,5 @@ class TestIncrementalBackupBlkdebug(TestIncrementalBackupBase):
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -83,4 +83,5 @@ class TestStopWithBlockJob(iotests.QMPTestCase):
self.do_test_stop("block-commit", device="drive0")
if __name__ == '__main__':
iotests.main(supported_fmts=["qcow2"])
iotests.main(supported_fmts=["qcow2"],
supported_protocols=["file"])

View File

@ -56,4 +56,5 @@ class TestSingleDrive(iotests.QMPTestCase):
'target image does not match source after mirroring')
if __name__ == '__main__':
iotests.main(supported_fmts=['raw', 'qcow2'])
iotests.main(supported_fmts=['raw', 'qcow2'],
supported_protocols=['file'])

View File

@ -361,4 +361,5 @@ class TestBlockdevDel(iotests.QMPTestCase):
if __name__ == '__main__':
iotests.main(supported_fmts=["qcow2"])
iotests.main(supported_fmts=["qcow2"],
supported_protocols=["file"])

View File

@ -287,6 +287,5 @@ class BuiltinNBD(NBDBlockdevAddBase):
if __name__ == '__main__':
# Need to support image creation
iotests.main(supported_fmts=['vpc', 'parallels', 'qcow', 'vdi', 'qcow2',
'vmdk', 'raw', 'vhdx', 'qed'])
iotests.main(supported_fmts=['raw'],
supported_protocols=['nbd'])

View File

@ -137,4 +137,5 @@ class TestFifoQuorumEvents(TestQuorumEvents):
if __name__ == '__main__':
iotests.verify_quorum()
iotests.main(supported_fmts=["raw"])
iotests.main(supported_fmts=["raw"],
supported_protocols=["file"])

View File

@ -0,0 +1,12 @@
QA output created by 150
=== Mapping sparse conversion ===
Offset Length File
0 0x1000 TEST_DIR/t.IMGFMT
=== Mapping non-sparse conversion ===
Offset Length File
0 0x100000 TEST_DIR/t.IMGFMT
*** done

View File

@ -142,4 +142,5 @@ class TestActiveMirror(iotests.QMPTestCase):
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2', 'raw'])
iotests.main(supported_fmts=['qcow2', 'raw'],
supported_protocols=['file'])

View File

@ -59,4 +59,5 @@ class TestUnaligned(iotests.QMPTestCase):
if __name__ == '__main__':
iotests.main(supported_fmts=['raw', 'qcow2'])
iotests.main(supported_fmts=['raw', 'qcow2'],
supported_protocols=['file'])

View File

@ -258,4 +258,5 @@ BaseClass = None
MirrorBaseClass = None
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -170,4 +170,5 @@ class TestShrink1M(ShrinkBaseClass):
ShrinkBaseClass = None
if __name__ == '__main__':
iotests.main(supported_fmts=['raw', 'qcow2'])
iotests.main(supported_fmts=['raw', 'qcow2'],
supported_protocols=['file'])

View File

@ -103,4 +103,5 @@ class TestPersistentDirtyBitmap(iotests.QMPTestCase):
self.vm.shutdown()
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -227,4 +227,5 @@ for cmb in list(itertools.product((True, False), repeat=2)):
'do_test_migration_resume_source', *list(cmb))
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -37,14 +37,16 @@ trap "_cleanup; exit \$status" 0 1 2 3 15
# the file size. This function hides the resulting difference in the
# stat -c '%b' output.
# Parameter 1: Number of blocks an empty file occupies
# Parameter 2: Image size in bytes
# Parameter 2: Minimal number of blocks in an image
# Parameter 3: Image size in bytes
_filter_blocks()
{
extra_blocks=$1
img_size=$2
min_blocks=$2
img_size=$3
sed -e "s/blocks=$extra_blocks\\(\$\\|[^0-9]\\)/nothing allocated/" \
-e "s/blocks=$((extra_blocks + img_size / 512))\\(\$\\|[^0-9]\\)/everything allocated/"
sed -e "s/blocks=$min_blocks\\(\$\\|[^0-9]\\)/min allocation/" \
-e "s/blocks=$((extra_blocks + img_size / 512))\\(\$\\|[^0-9]\\)/max allocation/"
}
# get standard environment, filters and checks
@ -60,16 +62,21 @@ size=$((1 * 1024 * 1024))
touch "$TEST_DIR/empty"
extra_blocks=$(stat -c '%b' "$TEST_DIR/empty")
# We always write the first byte; check how many blocks this filesystem
# allocates, to match the minimal image allocation.
printf "\0" > "$TEST_DIR/empty"
min_blocks=$(stat -c '%b' "$TEST_DIR/empty")
echo
echo "== creating image with default preallocation =="
_make_test_img $size | _filter_imgfmt
stat -c "size=%s, blocks=%b" $TEST_IMG | _filter_blocks $extra_blocks $size
stat -c "size=%s, blocks=%b" $TEST_IMG | _filter_blocks $extra_blocks $min_blocks $size
for mode in off full falloc; do
echo
echo "== creating image with preallocation $mode =="
IMGOPTS=preallocation=$mode _make_test_img $size | _filter_imgfmt
stat -c "size=%s, blocks=%b" $TEST_IMG | _filter_blocks $extra_blocks $size
stat -c "size=%s, blocks=%b" $TEST_IMG | _filter_blocks $extra_blocks $min_blocks $size
done
# success, all done

View File

@ -2,17 +2,17 @@ QA output created by 175
== creating image with default preallocation ==
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576
size=1048576, nothing allocated
size=1048576, min allocation
== creating image with preallocation off ==
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 preallocation=off
size=1048576, nothing allocated
size=1048576, min allocation
== creating image with preallocation full ==
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 preallocation=full
size=1048576, everything allocated
size=1048576, max allocation
== creating image with preallocation falloc ==
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576 preallocation=falloc
size=1048576, everything allocated
size=1048576, max allocation
*** done

View File

@ -101,7 +101,7 @@ converted image file size in bytes: 196608
== raw input image with data (human) ==
Formatting 'TEST_DIR/t.qcow2', fmt=IMGFMT size=1073741824
required size: 393216
required size: 458752
fully allocated size: 1074135040
wrote 512/512 bytes at offset 512
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
@ -257,7 +257,7 @@ converted image file size in bytes: 196608
Formatting 'TEST_DIR/t.qcow2', fmt=IMGFMT size=1073741824
{
"required": 393216,
"required": 458752,
"fully-allocated": 1074135040
}
wrote 512/512 bytes at offset 512

View File

@ -63,4 +63,5 @@ class TestInvalidateAutoclear(iotests.QMPTestCase):
self.assertEqual(f.read(1), b'\x00')
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'])
iotests.main(supported_fmts=['qcow2'],
supported_protocols=['file'])

View File

@ -115,4 +115,5 @@ class TestDirtyBitmapPostcopyMigration(iotests.QMPTestCase):
self.assert_qmp(result, 'return/sha256', sha256);
if __name__ == '__main__':
iotests.main(supported_fmts=['qcow2'], supported_cache_modes=['none'])
iotests.main(supported_fmts=['qcow2'], supported_cache_modes=['none'],
supported_protocols=['file'])

View File

@ -153,4 +153,5 @@ class TestNbdServerRemove(iotests.QMPTestCase):
if __name__ == '__main__':
iotests.main(supported_fmts=['generic'])
iotests.main(supported_fmts=['raw'],
supported_protocols=['nbd'])

View File

@ -3,14 +3,18 @@ QA output created by 221
=== Check mapping of unaligned raw image ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=65537
[{ "start": 0, "length": 66048, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 66048, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 4096, "length": 61952, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 4096, "length": 61952, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
wrote 1/1 bytes at offset 65536
1 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
[{ "start": 0, "length": 65536, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 4096, "length": 61440, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 65536, "length": 1, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 65537, "length": 511, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
[{ "start": 0, "length": 65536, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 4096, "length": 61440, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 65536, "length": 1, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
{ "start": 65537, "length": 511, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
*** done


@@ -1000,4 +1000,5 @@ class TestBlockdevReopen(iotests.QMPTestCase):
self.reopen(opts, {'backing': 'hd2'})
if __name__ == '__main__':
-iotests.main(supported_fmts=["qcow2"])
+iotests.main(supported_fmts=["qcow2"],
+             supported_protocols=["file"])


@@ -3,12 +3,16 @@ QA output created by 253
=== Check mapping of unaligned raw image ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048575
-[{ "start": 0, "length": 1048576, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
-[{ "start": 0, "length": 1048576, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
+[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
+{ "start": 4096, "length": 1044480, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
+[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
+{ "start": 4096, "length": 1044480, "depth": 0, "zero": true, "data": false, "offset": OFFSET}]
wrote 65535/65535 bytes at offset 983040
63.999 KiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
-[{ "start": 0, "length": 983040, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
+[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
+{ "start": 4096, "length": 978944, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 983040, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": OFFSET}]
-[{ "start": 0, "length": 983040, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
+[{ "start": 0, "length": 4096, "depth": 0, "zero": false, "data": true, "offset": OFFSET},
+{ "start": 4096, "length": 978944, "depth": 0, "zero": true, "data": false, "offset": OFFSET},
{ "start": 983040, "length": 65536, "depth": 0, "zero": false, "data": true, "offset": OFFSET}]
*** done


@@ -0,0 +1,67 @@
#!/usr/bin/env bash
#
# Test reverse-ordered qcow2 writes on a sub-cluster level
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
seq=$(basename $0)
echo "QA output created by $seq"
status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
# qcow2-specific test
_supported_fmt qcow2
_supported_proto file
_supported_os Linux
echo '--- Writing to the image ---'
# Reduce cluster size so we get more and quicker I/O
IMGOPTS='cluster_size=4096' _make_test_img 1M
(for ((kb = 1024 - 4; kb >= 0; kb -= 4)); do \
echo "aio_write -P 42 $((kb + 1))k 2k"; \
done) \
| $QEMU_IO "$TEST_IMG" > /dev/null
echo '--- Verifying its content ---'
(for ((kb = 0; kb < 1024; kb += 4)); do \
echo "read -P 0 ${kb}k 1k"; \
echo "read -P 42 $((kb + 1))k 2k"; \
echo "read -P 0 $((kb + 3))k 1k"; \
done) \
| $QEMU_IO "$TEST_IMG" | _filter_qemu_io | grep 'verification'
# Status of qemu-io
if [ ${PIPESTATUS[1]} = 0 ]; then
echo 'Content verified.'
fi
# success, all done
echo "*** done"
rm -f $seq.full
status=0
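
For reference, each aio_write generated by the loop above touches only the middle 2 KB of one 4 KB cluster, and the clusters are visited back to front; the expanded qemu-io input looks roughly like this (offsets derived from the loop bounds, shown only as a sketch):

  aio_write -P 42 1021k 2k    # kb = 1020: last cluster first
  aio_write -P 42 1017k 2k    # kb = 1016
  ...
  aio_write -P 42 1k 2k       # kb = 0: first cluster last

The verification loop then expects each 4 KB cluster to read back as 1 KB of zeroes, 2 KB of pattern 42 and another 1 KB of zeroes.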


@@ -0,0 +1,6 @@
QA output created by 265
--- Writing to the image ---
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=1048576
--- Verifying its content ---
Content verified.
*** done


@@ -0,0 +1,153 @@
#!/usr/bin/env python
#
# Test VPC and file image creation
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import iotests
from iotests import imgfmt
def blockdev_create(vm, options):
result = vm.qmp_log('blockdev-create', job_id='job0', options=options,
filters=[iotests.filter_qmp_testfiles])
if 'return' in result:
assert result['return'] == {}
vm.run_job('job0')
# Successful image creation (defaults)
def implicit_defaults(vm, file_path):
iotests.log("=== Successful image creation (defaults) ===")
iotests.log("")
# 8 heads, 964 cyls/head, 17 secs/cyl
# (Close to 64 MB)
size = 8 * 964 * 17 * 512
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': size })
# Successful image creation (explicit defaults)
def explicit_defaults(vm, file_path):
iotests.log("=== Successful image creation (explicit defaults) ===")
iotests.log("")
# 16 heads, 964 cyls/head, 17 secs/cyl
# (Close to 128 MB)
size = 16 * 964 * 17 * 512
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': size,
'subformat': 'dynamic',
'force-size': False })
# Successful image creation (non-default options)
def non_defaults(vm, file_path):
iotests.log("=== Successful image creation (non-default options) ===")
iotests.log("")
# Not representable in CHS (fine with force-size=True)
size = 1048576
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': size,
'subformat': 'fixed',
'force-size': True })
# Size not representable in CHS with force-size=False
def non_chs_size_without_force(vm, file_path):
iotests.log("=== Size not representable in CHS ===")
iotests.log("")
# Not representable in CHS (will not work with force-size=False)
size = 1048576
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': size,
'force-size': False })
# Zero size
def zero_size(vm, file_path):
iotests.log("=== Zero size===")
iotests.log("")
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': 0 })
# Maximum CHS size
def maximum_chs_size(vm, file_path):
iotests.log("=== Maximum CHS size===")
iotests.log("")
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': 16 * 65535 * 255 * 512 })
# Actual maximum size
def maximum_size(vm, file_path):
iotests.log("=== Actual maximum size===")
iotests.log("")
blockdev_create(vm, { 'driver': imgfmt,
'file': 'protocol-node',
'size': 0xff000000 * 512,
'force-size': True })
def main():
for test_func in [implicit_defaults, explicit_defaults, non_defaults,
non_chs_size_without_force, zero_size, maximum_chs_size,
maximum_size]:
with iotests.FilePath('t.vpc') as file_path, \
iotests.VM() as vm:
vm.launch()
iotests.log('--- Creating empty file ---')
blockdev_create(vm, { 'driver': 'file',
'filename': file_path,
'size': 0 })
vm.qmp_log('blockdev-add', driver='file', filename=file_path,
node_name='protocol-node',
filters=[iotests.filter_qmp_testfiles])
iotests.log('')
print_info = test_func(vm, file_path)
iotests.log('')
vm.shutdown()
iotests.img_info_log(file_path)
iotests.script_main(main,
supported_fmts=['vpc'],
supported_protocols=['file'])
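
For reference, the sizes used above follow directly from the CHS geometry given in the comments (heads * cylinders/head * sectors/cylinder * 512-byte sectors); a quick shell check reproduces the values that appear in the QMP log below:

  echo $((  8 * 964 *   17 * 512 ))   # 67125248      (~64 MiB, implicit defaults)
  echo $(( 16 * 964 *   17 * 512 ))   # 134250496     (~128 MiB, explicit defaults)
  echo $(( 16 * 65535 * 255 * 512 ))  # 136899993600  (maximum CHS size, 127 GiB)
  echo $(( 0xff000000 * 512 ))        # 2190433320960 (actual maximum, ~1.99 TiB)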


@@ -0,0 +1,137 @@
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Successful image creation (defaults) ===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "size": 67125248}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 64 MiB (67125248 bytes)
cluster_size: 2097152
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Successful image creation (explicit defaults) ===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "force-size": false, "size": 134250496, "subformat": "dynamic"}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 128 MiB (134250496 bytes)
cluster_size: 2097152
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Successful image creation (non-default options) ===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "force-size": true, "size": 1048576, "subformat": "fixed"}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 1 MiB (1048576 bytes)
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Size not representable in CHS ===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "force-size": false, "size": 1048576}}}
{"return": {}}
Job failed: The requested image size cannot be represented in CHS geometry
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
qemu-img: Could not open 'TEST_IMG': File too small for a VHD header
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Zero size===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 0 B (0 bytes)
cluster_size: 2097152
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Maximum CHS size===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "size": 136899993600}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 127 GiB (136899993600 bytes)
cluster_size: 2097152
--- Creating empty file ---
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "size": 0}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
{"execute": "blockdev-add", "arguments": {"driver": "file", "filename": "TEST_DIR/PID-t.vpc", "node-name": "protocol-node"}}
{"return": {}}
=== Actual maximum size===
{"execute": "blockdev-create", "arguments": {"job-id": "job0", "options": {"driver": "vpc", "file": "protocol-node", "force-size": true, "size": 2190433320960}}}
{"return": {}}
{"execute": "job-dismiss", "arguments": {"id": "job0"}}
{"return": {}}
image: TEST_IMG
file format: IMGFMT
virtual size: 1.99 TiB (2190433320960 bytes)
cluster_size: 2097152


@@ -0,0 +1,168 @@
#!/usr/bin/env bash
#
# Test which nodes are involved in internal snapshots
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# creator
owner=kwolf@redhat.com
seq=`basename $0`
echo "QA output created by $seq"
status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
rm -f "$TEST_DIR/nbd"
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
_supported_fmt qcow2
_supported_proto file
_supported_os Linux
# Internal snapshots are (currently) impossible with refcount_bits=1
_unsupported_imgopts 'refcount_bits=1[^0-9]'
do_run_qemu()
{
echo Testing: "$@"
(
if ! test -t 0; then
while read cmd; do
echo $cmd
done
fi
echo quit
) | $QEMU -nographic -monitor stdio -nodefaults "$@"
echo
}
run_qemu()
{
do_run_qemu "$@" 2>&1 | _filter_testdir | _filter_qemu | _filter_hmp |
_filter_generated_node_ids | _filter_imgfmt | _filter_vmstate_size
}
size=128M
run_test()
{
_make_test_img $size
printf "savevm snap0\ninfo snapshots\nloadvm snap0\n" | run_qemu "$@" | _filter_date
}
echo
echo "=== No block devices at all ==="
echo
run_test
echo
echo "=== -drive if=none ==="
echo
run_test -drive driver=file,file="$TEST_IMG",if=none
run_test -drive driver=$IMGFMT,file="$TEST_IMG",if=none
run_test -drive driver=$IMGFMT,file="$TEST_IMG",if=none -device virtio-blk,drive=none0
echo
echo "=== -drive if=virtio ==="
echo
run_test -drive driver=file,file="$TEST_IMG",if=virtio
run_test -drive driver=$IMGFMT,file="$TEST_IMG",if=virtio
echo
echo "=== Simple -blockdev ==="
echo
run_test -blockdev driver=file,filename="$TEST_IMG",node-name=file
run_test -blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=$IMGFMT,file=file,node-name=fmt
run_test -blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=raw,file=file,node-name=raw \
-blockdev driver=$IMGFMT,file=raw,node-name=fmt
echo
echo "=== -blockdev with a filter on top ==="
echo
run_test -blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=$IMGFMT,file=file,node-name=fmt \
-blockdev driver=copy-on-read,file=fmt,node-name=filter
echo
echo "=== -blockdev with a backing file ==="
echo
TEST_IMG="$TEST_IMG.base" _make_test_img $size
IMGOPTS="backing_file=$TEST_IMG.base" \
run_test -blockdev driver=file,filename="$TEST_IMG.base",node-name=backing-file \
-blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=$IMGFMT,file=file,backing=backing-file,node-name=fmt
IMGOPTS="backing_file=$TEST_IMG.base" \
run_test -blockdev driver=file,filename="$TEST_IMG.base",node-name=backing-file \
-blockdev driver=$IMGFMT,file=backing-file,node-name=backing-fmt \
-blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=$IMGFMT,file=file,backing=backing-fmt,node-name=fmt
# A snapshot should be present on the overlay, but not the backing file
echo Internal snapshots on overlay:
$QEMU_IMG snapshot -l "$TEST_IMG" | _filter_date | _filter_vmstate_size
echo Internal snapshots on backing file:
$QEMU_IMG snapshot -l "$TEST_IMG.base" | _filter_date | _filter_vmstate_size
echo
echo "=== -blockdev with NBD server on the backing file ==="
echo
IMGOPTS="backing_file=$TEST_IMG.base" _make_test_img $size
cat <<EOF |
nbd_server_start unix:$TEST_DIR/nbd
nbd_server_add -w backing-fmt
savevm snap0
info snapshots
loadvm snap0
EOF
run_qemu -blockdev driver=file,filename="$TEST_IMG.base",node-name=backing-file \
-blockdev driver=$IMGFMT,file=backing-file,node-name=backing-fmt \
-blockdev driver=file,filename="$TEST_IMG",node-name=file \
-blockdev driver=$IMGFMT,file=file,backing=backing-fmt,node-name=fmt |
_filter_date
# This time, a snapshot should be created on both files
echo Internal snapshots on overlay:
$QEMU_IMG snapshot -l "$TEST_IMG" | _filter_date | _filter_vmstate_size
echo Internal snapshots on backing file:
$QEMU_IMG snapshot -l "$TEST_IMG.base" | _filter_date | _filter_vmstate_size
# success, all done
echo "*** done"
rm -f $seq.full
status=0


@@ -0,0 +1,182 @@
QA output created by 267
=== No block devices at all ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing:
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
Error: No block device can accept snapshots
(qemu) info snapshots
No available block device supports snapshots
(qemu) loadvm snap0
Error: No block device supports snapshots
(qemu) quit
=== -drive if=none ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -drive driver=file,file=TEST_DIR/t.IMGFMT,if=none
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
Error: Device 'none0' is writable but does not support snapshots
(qemu) info snapshots
No available block device supports snapshots
(qemu) loadvm snap0
Error: Device 'none0' is writable but does not support snapshots
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -drive driver=IMGFMT,file=TEST_DIR/t.IMGFMT,if=none
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -drive driver=IMGFMT,file=TEST_DIR/t.IMGFMT,if=none -device virtio-blk,drive=none0
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
=== -drive if=virtio ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -drive driver=file,file=TEST_DIR/t.IMGFMT,if=virtio
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
Error: Device 'virtio0' is writable but does not support snapshots
(qemu) info snapshots
No available block device supports snapshots
(qemu) loadvm snap0
Error: Device 'virtio0' is writable but does not support snapshots
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -drive driver=IMGFMT,file=TEST_DIR/t.IMGFMT,if=virtio
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
=== Simple -blockdev ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
Error: Device '' is writable but does not support snapshots
(qemu) info snapshots
No available block device supports snapshots
(qemu) loadvm snap0
Error: Device '' is writable but does not support snapshots
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=IMGFMT,file=file,node-name=fmt
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=raw,file=file,node-name=raw -blockdev driver=IMGFMT,file=raw,node-name=fmt
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
=== -blockdev with a filter on top ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=IMGFMT,file=file,node-name=fmt -blockdev driver=copy-on-read,file=fmt,node-name=filter
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
=== -blockdev with a backing file ===
Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=134217728
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/t.IMGFMT.base
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT.base,node-name=backing-file -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=IMGFMT,file=file,backing=backing-file,node-name=fmt
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/t.IMGFMT.base
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT.base,node-name=backing-file -blockdev driver=IMGFMT,file=backing-file,node-name=backing-fmt -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=IMGFMT,file=file,backing=backing-fmt,node-name=fmt
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
Internal snapshots on overlay:
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
Internal snapshots on backing file:
=== -blockdev with NBD server on the backing file ===
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=134217728 backing_file=TEST_DIR/t.IMGFMT.base
Testing: -blockdev driver=file,filename=TEST_DIR/t.IMGFMT.base,node-name=backing-file -blockdev driver=IMGFMT,file=backing-file,node-name=backing-fmt -blockdev driver=file,filename=TEST_DIR/t.IMGFMT,node-name=file -blockdev driver=IMGFMT,file=file,backing=backing-fmt,node-name=fmt
QEMU X.Y.Z monitor - type 'help' for more information
(qemu) nbd_server_start unix:TEST_DIR/nbd
(qemu) nbd_server_add -w backing-fmt
(qemu) savevm snap0
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
(qemu) loadvm snap0
(qemu) quit
Internal snapshots on overlay:
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
Internal snapshots on backing file:
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 snap0 SIZE yyyy-mm-dd hh:mm:ss 00:00:00.000
*** done


@@ -0,0 +1,83 @@
#!/usr/bin/env bash
#
# Test large write to a qcow2 image
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
seq=$(basename "$0")
echo "QA output created by $seq"
status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
# This is a qcow2 regression test
_supported_fmt qcow2
_supported_proto file
_supported_os Linux
# We use our own external data file and our own cluster size, and we
# require v3 images
_unsupported_imgopts data_file cluster_size 'compat=0.10'
# We need a backing file so that handle_alloc_space() will not do
# anything. (If it were to do anything, it would simply fail its
# write-zeroes request because the request range is too large.)
TEST_IMG="$TEST_IMG.base" _make_test_img 4G
$QEMU_IO -c 'write 0 512' "$TEST_IMG.base" | _filter_qemu_io
# (Use .orig because _cleanup_test_img will remove that file)
# We need a large cluster size, see below for why (above the $QEMU_IO
# invocation)
_make_test_img -o cluster_size=2M,data_file="$TEST_IMG.orig" \
-b "$TEST_IMG.base" 4G
# We want a null-co as the data file, because it allows us to quickly
# "write" 2G of data without using any space.
# (qemu-img create does not like it, though, because null-co does not
# support image creation.)
$QEMU_IMG amend -o data_file="json:{'driver':'null-co',,'size':'4294967296'}" \
"$TEST_IMG"
# This gives us a range of:
# 2^31 - 512 + 768 - 1 = 2^31 + 255 > 2^31
# until the beginning of the end COW block. (The total allocation
# size depends on the cluster size, but all that is important is that
# it exceeds INT_MAX.)
#
# 2^31 - 512 is the maximum request size. We want this to result in a
# single allocation, and because the qcow2 driver splits allocations
# on L2 boundaries, we need large L2 tables; hence the cluster size of
# 2 MB. (Anything from 256 kB should work, though, because then one L2
# table covers 8 GB.)
$QEMU_IO -c "write 768 $((2 ** 31 - 512))" "$TEST_IMG" | _filter_qemu_io
_check_test_img
# success, all done
echo "*** done"
rm -f $seq.full
status=0
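
The arithmetic in the comment above can be reproduced with shell arithmetic (a sanity check only, not part of the test): the request itself is capped at 2^31 - 512 bytes, and starting it at offset 768 means the range up to the beginning of the end COW block ends past 2^31:

  echo $(( 2**31 - 512 ))            # 2147483136: maximum request size
  echo $(( 2**31 - 512 + 768 - 1 ))  # 2147483903 = 2^31 + 255, i.e. > INT_MAX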


@@ -0,0 +1,9 @@
QA output created by 270
Formatting 'TEST_DIR/t.IMGFMT.base', fmt=IMGFMT size=4294967296
wrote 512/512 bytes at offset 0
512 bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=4294967296 backing_file=TEST_DIR/t.IMGFMT.base data_file=TEST_DIR/t.IMGFMT.orig
wrote 2147483136/2147483136 bytes at offset 768
2 GiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
No errors were found on the image.
*** done


@@ -0,0 +1,79 @@
#!/usr/bin/env bash
#
# Test compressed write to a qcow2 image at an offset above 4 GB
#
# Copyright (C) 2019 Red Hat, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
seq=$(basename "$0")
echo "QA output created by $seq"
status=1 # failure is the default!
_cleanup()
{
_cleanup_test_img
}
trap "_cleanup; exit \$status" 0 1 2 3 15
# get standard environment, filters and checks
. ./common.rc
. ./common.filter
# This is a qcow2 regression test
_supported_fmt qcow2
_supported_proto file
# External data files do not support compression;
# We need an exact cluster size (2M) and refcount width (2) so we can
# get this test quickly over with; and this in turn require
# compat=1.1
_unsupported_imgopts data_file cluster_size refcount_bits 'compat=0.10'
# The idea is: Create an empty file, mark the first 4 GB as used, then
# do a compressed write that thus must be put beyond 4 GB.
# (This used to fail because the compressed sector mask was just a
# 32 bit mask, so qemu-img check will count a cluster before 4 GB as
# referenced twice.)
# We would like to use refcount_bits=1 here, but then qemu-img check
# will throw an error when trying to count a cluster as referenced
# twice.
_make_test_img -o cluster_size=2M,refcount_bits=2 64M
reft_offs=$(peek_file_be "$TEST_IMG" 48 8)
refb_offs=$(peek_file_be "$TEST_IMG" $reft_offs 8)
# We want to cover 4 GB, those are 2048 clusters, equivalent to
# 4096 bit = 512 B.
truncate -s 4G "$TEST_IMG"
for ((in_refb_offs = 0; in_refb_offs < 512; in_refb_offs += 8)); do
poke_file "$TEST_IMG" $((refb_offs + in_refb_offs)) \
'\x55\x55\x55\x55\x55\x55\x55\x55'
done
$QEMU_IO -c 'write -c -P 42 0 2M' "$TEST_IMG" | _filter_qemu_io
echo
echo '--- Check ---'
# This should only print the leaked clusters in the first 4 GB
_check_test_img | grep -v '^Leaked cluster '
# success, all done
echo "*** done"
rm -f $seq.full
status=0
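
The numbers behind the poke_file loop above work out as follows: with 2 MiB clusters the first 4 GiB are 2048 clusters, with refcount_bits=2 each refcount takes 2 bits, so only 512 bytes of the first refcount block need to be filled, and the 0x55 pattern (binary 01010101) sets four consecutive 2-bit refcounts to 1 per byte. A quick shell check:

  echo $(( 4 * 1024 * 1024 * 1024 / (2 * 1024 * 1024) ))  # 2048 clusters in 4 GiB
  echo $(( 2048 * 2 / 8 ))                                 # 512 bytes of refcounts
  echo $(( 2#01010101 ))                                   # 85 = 0x55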


@@ -0,0 +1,10 @@
QA output created by 272
Formatting 'TEST_DIR/t.IMGFMT', fmt=IMGFMT size=67108864
wrote 2097152/2097152 bytes at offset 0
2 MiB, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
--- Check ---
2044 leaked clusters were found on the image.
This means waste of disk space, but no harm to data.
*** done


@@ -19,12 +19,15 @@
# standard filters
#
# ctime(3) dates
#
_filter_date()
{
-    $SED \
-        -e 's/[A-Z][a-z][a-z] [A-z][a-z][a-z] *[0-9][0-9]* [0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9][0-9][0-9]$/DATE/'
+    $SED -re 's/[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}/yyyy-mm-dd hh:mm:ss/'
}
+_filter_vmstate_size()
+{
+    $SED -r -e 's/[0-9. ]{5} [KMGT]iB/ SIZE/' \
+            -e 's/[0-9. ]{5} B/ SIZE/'
+}
_filter_generated_node_ids()
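
The reworked _filter_date now matches the ISO-style timestamps printed by 'info snapshots' and 'qemu-img snapshot -l' rather than ctime(3)-style dates. Roughly, with an illustrative input line and assuming the test environment where $SED is set:

  echo 'snap0  2019-11-14 12:04:03  00:00:00.000' | _filter_date
  # prints: snap0  yyyy-mm-dd hh:mm:ss  00:00:00.000

_filter_vmstate_size likewise replaces the VM SIZE column with a fixed SIZE token, which is what allows the 267 reference output above to use SIZE and yyyy-mm-dd hh:mm:ss as stable placeholders.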


@@ -53,6 +53,26 @@ poke_file()
printf "$3" | dd "of=$1" bs=1 "seek=$2" conv=notrunc &>/dev/null
}
# peek_file_le 'test.img' 512 2 => 65534
peek_file_le()
{
# Wrap in echo $() to strip spaces
echo $(od -j"$2" -N"$3" --endian=little -An -vtu"$3" "$1")
}
# peek_file_be 'test.img' 512 2 => 65279
peek_file_be()
{
# Wrap in echo $() to strip spaces
echo $(od -j"$2" -N"$3" --endian=big -An -vtu"$3" "$1")
}
# peek_file_raw 'test.img' 512 2 => '\xff\xfe'
peek_file_raw()
{
dd if="$1" bs=1 skip="$2" count="$3" status=none
}
if ! . ./common.config
then
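
Test 272 above already shows the intended use of the big-endian helper: the 64-bit field at byte 48 of a qcow2 header is the refcount table offset, and the first 8 bytes of that table point at the first refcount block:

  reft_offs=$(peek_file_be "$TEST_IMG" 48 8)          # qcow2 refcount_table_offset
  refb_offs=$(peek_file_be "$TEST_IMG" $reft_offs 8)  # first refcount block offset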


@@ -271,3 +271,8 @@
254 rw backing quick
255 rw quick
256 rw quick
+265 rw auto quick
+266 rw quick
+267 rw auto quick snapshot
+270 rw backing quick
+272 rw


@@ -61,7 +61,6 @@ cachemode = os.environ.get('CACHEMODE')
qemu_default_machine = os.environ.get('QEMU_DEFAULT_MACHINE')
socket_scm_helper = os.environ.get('SOCKET_SCM_HELPER', 'socket_scm_helper')
debug = False
luks_default_secret_object = 'secret,id=keysec0,data=' + \
os.environ.get('IMGKEYSECRET', '')
@@ -842,11 +841,23 @@ def skip_if_unsupported(required_formats=[], read_only=False):
return func_wrapper
return skip_test_decorator
def main(supported_fmts=[], supported_oses=['linux'], supported_cache_modes=[],
unsupported_fmts=[]):
'''Run tests'''
def execute_unittest(output, verbosity, debug):
runner = unittest.TextTestRunner(stream=output, descriptions=True,
verbosity=verbosity)
try:
# unittest.main() will use sys.exit(); so expect a SystemExit
# exception
unittest.main(testRunner=runner)
finally:
if not debug:
sys.stderr.write(re.sub(r'Ran (\d+) tests? in [\d.]+s',
r'Ran \1 tests', output.getvalue()))
global debug
def execute_test(test_function=None,
supported_fmts=[], supported_oses=['linux'],
supported_cache_modes=[], unsupported_fmts=[],
supported_protocols=[], unsupported_protocols=[]):
"""Run either unittest or script-style tests."""
# We are using TEST_DIR and QEMU_DEFAULT_MACHINE as proxies to
# indicate that we're not being run via "check". There may be
@@ -859,6 +870,7 @@ def main(supported_fmts=[], supported_oses=['linux'], supported_cache_modes=[],
debug = '-d' in sys.argv
verbosity = 1
verify_image_format(supported_fmts, unsupported_fmts)
verify_protocol(supported_protocols, unsupported_protocols)
verify_platform(supported_oses)
verify_cache_mode(supported_cache_modes)
@@ -878,13 +890,15 @@ def main(supported_fmts=[], supported_oses=['linux'], supported_cache_modes=[],
logging.basicConfig(level=(logging.DEBUG if debug else logging.WARN))
class MyTestRunner(unittest.TextTestRunner):
def __init__(self, stream=output, descriptions=True, verbosity=verbosity):
unittest.TextTestRunner.__init__(self, stream, descriptions, verbosity)
if not test_function:
execute_unittest(output, verbosity, debug)
else:
test_function()
# unittest.main() will use sys.exit() so expect a SystemExit exception
try:
unittest.main(testRunner=MyTestRunner)
finally:
if not debug:
sys.stderr.write(re.sub(r'Ran (\d+) tests? in [\d.]+s', r'Ran \1 tests', output.getvalue()))
def script_main(test_function, *args, **kwargs):
"""Run script-style tests outside of the unittest framework"""
execute_test(test_function, *args, **kwargs)
def main(*args, **kwargs):
"""Run tests using the unittest framework"""
execute_test(None, *args, **kwargs)

Some files were not shown because too many files have changed in this diff.