Commit Graph

47907 Commits (stable-2.7)

Author SHA1 Message Date
Michael Roth 0d83fccb4f Update version for 2.7.1 release
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-23 09:38:06 -06:00
Ashijeet Acharya 4dde694191 ide: Fix memory leak in ide_register_restart_cb()
Fix a memory leak in ide_register_restart_cb() in hw/ide/core.c and add
idebus_unrealize() in hw/ide/qdev.c to have calls to
qemu_del_vm_change_state_handler() to deal with the dangling change
state handler during hot-unplugging ide devices which might lead to a
crash.

Signed-off-by: Ashijeet Acharya <ashijeetacharya@gmail.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1474995212-10580-1-git-send-email-ashijeetacharya@gmail.com
[Minor whitespace fix --js]
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit ca44141d5f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 10:49:48 -06:00
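
A sketch of the shape of this fix, assuming IDEBus gains a vmstate field to hold the handle returned by qemu_add_vm_change_state_handler(); the real unrealize hook receives the qdev/bus object and casts it, boilerplate that is elided here:

/* hw/ide/core.c (sketch): keep the handle instead of discarding it */
void ide_register_restart_cb(IDEBus *bus)
{
    if (bus->dma->ops->restart_dma) {
        bus->vmstate = qemu_add_vm_change_state_handler(ide_restart_cb, bus);
    }
}

/* hw/ide/qdev.c (sketch): unregister the handler on hot-unplug */
static void idebus_unrealize(IDEBus *bus)
{
    if (bus->vmstate) {
        qemu_del_vm_change_state_handler(bus->vmstate);
    }
}
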
Marc-André Lureau 7d17d68971 portio: keep references on portio
The isa_register_portio_list() function allocates ioports
data/state. Let's keep the reference to this data on some owner.  This
isn't enough to fix leaks, but at least, ASAN stops complaining of
direct leaks. Further cleanup would require calling
portio_list_del/destroy().

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e305a16510)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 10:08:19 -06:00
John Snow 345f1cd9f6 block-backend: Always notify on blk_eject
blk_eject is only used by scsi-disk and atapi, and in both cases we
only attempt to invoke blk_eject if we have a bona-fide change in
tray state.

The "issue" here is that the tray state does not generate a QMP event
unless there is a medium/BDS attached to the device, so if libvirt et al
are waiting for a tray event to occur from an empty-but-closed drive,
software opening that drive will not emit an event and libvirt will
wait forever.

Change this by modifying blk_eject to always emit an event, instead of
conditionally on a "real" backend eject.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1373264

Reported-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1478553214-497-2-git-send-email-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit c47ee043dc)

* dropped functional dependency on 2d76e724

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 09:59:57 -06:00
Mark Cave-Ayland 8d5f2a7570 dma-helpers: explicitly pass alignment into DMA helpers
The hard-coded default alignment is BDRV_SECTOR_SIZE; however, this is not
necessarily the case for all platforms. Use this as the default alignment for
all current callers.

Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Eric Blake <eblake@redhat.com>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 1476445266-27503-2-git-send-email-mark.cave-ayland@ilande.co.uk
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit 99868af3d0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-21 09:27:59 -06:00
John Snow 5f20161cf3 atapi: classify read_cd as conditionally returning data
For the purposes of byte_count_limit verification, add a new flag that
identifies read_cd as sometimes returning data, then check the BCL in
its command handler after we know that it will indeed return data.

Reported-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1477970211-25754-2-git-send-email-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
(cherry picked from commit e7bd708ec8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-20 19:33:05 -06:00
Stefan Hajnoczi 05838b4688 ui/gtk: fix "Copy" menu item segfault
The "Copy" menu item copies VTE terminal text to the clipboard.  This
only works with VTE terminals, not with graphics consoles.

Disable the menu item when the current notebook page isn't a VTE
terminal.

This patch fixes a segfault.  Reproducer: Start QEMU and click the Copy
menu item when the guest display is visible.

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20161214142518.10504-1-stefanha@redhat.com
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit a08156321a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-20 19:31:47 -06:00
Thorsten Kohfeldt 223d1a2da1 vfio/pci: Fix vfio_rtl8168_quirk_data_read address offset
Introductory comment for rtl8168 VFIO MSI-X quirk states:
    At BAR2 offset 0x70 there is a dword data register,
    offset 0x74 is a dword address register.
    vfio: vfio_bar_read(0000:05:00.0:BAR2+0x70, 4) = 0xfee00398 // read data

Thus, the correct offset for the data read is 0x70, but
vfio_rtl8168_quirk_data_read() wrongly uses offset 0x74.

Signed-off-by: Thorsten Kohfeldt <thorsten.kohfeldt@gmx.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
(cherry picked from commit 31e6a7b17b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-14 16:34:09 -06:00
Lin Ma 7f7ac2141e msmouse: Fix segfault caused by free the chr before chardev cleanup.
Segfault happens when leaving qemu with msmouse backend:

 #0  0x00007fa8526ac975 in raise () at /lib64/libc.so.6
 #1  0x00007fa8526add8a in abort () at /lib64/libc.so.6
 #2  0x0000558be78846ab in error_exit (err=16, msg=0x558be799da10 ...
 #3  0x0000558be7884717 in qemu_mutex_destroy (mutex=0x558be93be750) at ...
 #4  0x0000558be7549951 in qemu_chr_free_common (chr=0x558be93be750) at ...
 #5  0x0000558be754999c in qemu_chr_free (chr=0x558be93be750) at ...
 #6  0x0000558be7549a20 in qemu_chr_delete (chr=0x558be93be750) at ...
 #7  0x0000558be754a8ef in qemu_chr_cleanup () at qemu-char.c:4643
 #8  0x0000558be755843e in main (argc=5, argv=0x7ffe925d7118, ...

The chr was freed by the msmouse close callback before chardev cleanup,
then qemu_mutex_destroy triggered raise().

Because freeing chr is handled by qemu_chr_free_common, remove the free from
msmouse_chr_close to avoid a double free.

Fixes: c1111a24a3
Cc: qemu-stable@nongnu.org
Signed-off-by: Lin Ma <lma@suse.com>
Message-Id: <20160915143158.4796-1-lma@suse.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 9e14037f05)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-14 16:34:09 -06:00
Paolo Bonzini db1604cd60 Revert "megasas: remove useless check for cmd->frame"
This reverts commit 8cc46787b5.
It turns out that cmd->frame can be NULL and thus the commit
can cause a SIGSEGV.

Reported-by: Holger Schranz <holger@fam-schranz.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 421cc3e7e8)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:15:33 -06:00
Eduardo Habkost cc1fd25295 vl: Delay initialization of memory backends
Initialization of memory backends may take a while when
prealloc=yes is used, depending on their size. Initializing
memory backends before chardevs may delay the creation of monitor
sockets, and trigger timeouts on management software that waits
until the monitor socket is created by QEMU. See, for example,
the bug report at:
https://bugzilla.redhat.com/show_bug.cgi?id=1371211

In addition to that, allocating memory before calling
configure_accelerator() breaks the tcg_enabled() checks at
memory_region_init_*().

This patch fixes those problems by adding "memory-backend-*"
classes to the delayed-initialization list.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit 6546d0dba6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:13:21 -06:00
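
A sketch of the delayed-initialization check described above (simplified; the function in vl.c also special-cases a few other object types):

/* vl.c (sketch): memory backends are no longer created in the early pass */
static bool object_create_initial(const char *type)
{
    if (g_str_has_prefix(type, "memory-backend-")) {
        return false;   /* created later, after chardevs and the accelerator */
    }
    /* ... other early/delayed decisions unchanged ... */
    return true;
}
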
Eduardo Habkost ee99e42be4 vhost-user-test: Use libqos instead of pxe-virtio.rom
vhost-user-test relies on iPXE just to initialize the virtio-net
device, and doesn't do any actual packet tx/rx testing.

In addition to that, the test relies on TCG, which is
incompatible with vhost. The test only worked by accident: a bug in
the memory backend initialization made memory regions not have
the DIRTY_MEMORY_CODE bit set in dirty_log_mask.

This changes vhost-user-test to initialize the virtio-net device
using libqos, and not use TCG nor pxe-virtio.rom.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit cdafe92961)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:13:02 -06:00
Peter Xu 0ef167c907 intel_iommu: fix incorrect device invalidate
"mask" needs to be inverted before use.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 6cb99acc28)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:06:43 -06:00
Adrian Bunk 248a780fd3 rules.mak: Use -r instead of -Wl, -r to fix building when PIE is default
Building qemu fails in distributions where gcc enables PIE by default
(e.g. Debian unstable) with:

/usr/bin/ld: -r and -pie may not be used together

Use -r instead of -Wl,-r to avoid gcc passing -pie to the linker
when PIE is enabled and a relocatable object is passed.

Signed-off-by: Adrian Bunk <bunk@stusta.de>
Message-Id: <20161127162817.15144-1-bunk@stusta.de>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit c96f0ee6a6)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:05:55 -06:00
Peter Xu 80f630be21 pci-assign: sync MSI/MSI-X cap and table with PCIDevice
Since commit e1d4fb2d ("kvm-irqchip: x86: add msi route notify fn"),
kvm_irqchip_add_msi_route() starts to use pci_get_msi_message() to fetch
MSI info. This requires that we setup MSI related fields in PCIDevice.
For most devices, that won't be a problem, as long as we are using
general interfaces like msi_init()/msix_init().

However, for pci-assign devices, MSI/MSI-X is treated differently: pci-assign
devices maintain their own MSI table and capability information in the
AssignedDevice struct, and that is not synced up with PCIDevice's
fields. That leads to pci_get_msi_message() failing to find the correct
MSI capability, even with a NULL msix_table.

A quick fix is to sync up the two places: both the capability bits and
table address for MSI/MSI-X.

Reported-by: Changlimin <changlimin@h3c.com>
Tested-by: Changlimin <changlimin@h3c.com>
Cc: qemu-stable@nongnu.org
Fixes: e1d4fb2d ("kvm-irqchip: x86: add msi route notify fn")
Signed-off-by: Peter Xu <peterx@redhat.com>

Message-Id: <1480042522-16551-1-git-send-email-peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 64e184e260)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 18:04:19 -06:00
Zhuang Yanying 353801cde4 ivshmem: Fix 64 bit memory bar configuration
Device ivshmem property use64=0 is designed to make the device
expose a 32 bit shared memory BAR instead of 64 bit one.  The
default is a 64 bit BAR, except pc-1.2 and older retain a 32 bit
BAR.  A 32 bit BAR can support only up to 1 GiB of shared memory.

This worked as designed until commit 5400c02 accidentally flipped
its sense: since then, we misinterpret use64=0 as use64=1 and vice
versa.  Worse, the default got flipped as well.  Devices
ivshmem-plain and ivshmem-doorbell are not affected.

Fix by restoring the test of IVShmemState member not_legacy_32bit
that got messed up in commit 5400c02.  Also update its
initialization for devices ivshmem-plain and ivshmem-doorbell.
Without that, they'd regress to 32 bit BARs.

Cc: qemu-stable@nongnu.org
Signed-off-by: Zhuang Yanying <ann.zhuangyanying@huawei.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit be4e0d7375)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:49:41 -06:00
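
A simplified sketch of the restored test; the BAR registration call and the BAR field name are illustrative, not the verbatim hunk:

/* hw/misc/ivshmem.c (sketch): honour not_legacy_32bit again for BAR2 */
uint8_t attr = PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_PREFETCH;

if (s->not_legacy_32bit) {
    attr |= PCI_BASE_ADDRESS_MEM_TYPE_64;
}
pci_register_bar(PCI_DEVICE(s), 2, attr, &s->ivshmem_bar2);
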
Greg Kurz c8a3159df4 vhost: drop legacy vring layout bits
The legacy vring layout is not used anymore as we use the separate
mappings even for legacy devices.
This patch simply removes it.

This also fixes a bug with virtio 1 devices when the vring descriptor table
is mapped at a higher address than the used vring because the following
function may return an insanely large value:

hwaddr virtio_queue_get_ring_size(VirtIODevice *vdev, int n)
{
    return vdev->vq[n].vring.used - vdev->vq[n].vring.desc +
           virtio_queue_get_used_size(vdev, n);
}

and the mapping fails.

Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 1cdce7c54d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:49:41 -06:00
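
A standalone illustration (not QEMU code) of why the old formula misbehaves: hwaddr is unsigned, so when the descriptor table is mapped above the used ring the subtraction wraps around:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hwaddr;            /* hwaddr is an unsigned 64-bit type */

int main(void)
{
    hwaddr desc = 0x200000;         /* descriptor table mapped higher ... */
    hwaddr used = 0x100000;         /* ... than the used ring (legal in virtio 1) */
    hwaddr used_size = 0x1000;

    /* old formula: used - desc + used_size wraps to an enormous value */
    printf("ring size = 0x%" PRIx64 "\n", used - desc + used_size);
    return 0;
}
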
Max Reitz 48fdfebab6 block/curl: Do not wait for data beyond EOF
libcurl will only give us as much data as there is, not more. The block
layer will deny requests beyond the end of file for us; but since this
block driver is still using a sector-based interface, we can still get
in trouble if the file size is not a multiple of 512.

While we have already made sure not to attempt transfers beyond the end
of the file, we are currently still trying to receive data from there if
the original request exceeds the file size. This patch fixes this issue
and invokes qemu_iovec_memset() on the iovec's tail.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-5-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 4e504535c1)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
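
A sketch of the tail handling, with illustrative variable names (request_end and file_size are not from the patch; qemu_iovec_memset() is the existing helper):

/* Zero the part of the request that lies beyond EOF instead of waiting
 * for libcurl to deliver data that will never arrive. */
if (request_end > file_size) {
    size_t tail = request_end - file_size;

    qemu_iovec_memset(acb->qiov, acb->qiov->size - tail, 0, tail);
}
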
Max Reitz 6eb194ddd7 block/curl: Remember all sockets
For some connection types (like FTP, generally), more than one socket
may be used (in FTP's case: control vs. data stream). As of commit
838ef60249 ("curl: Eliminate unnecessary
use of curl_multi_socket_all"), we have to remember all of the sockets
used by libcurl, but in fact we only did that for a single one. Since
one libcurl connection may use multiple sockets, however, we have to
remember them all.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20161025025431.24714-4-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit ff5ca1664a)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Max Reitz 7e278ef797 block/curl: Fix return value from curl_read_cb
While commit 38bbc0a580 is correct in that
the callback is supposed to return the number of bytes handled, what it
does not mention is that libcurl will throw an error if the callback did
not "handle" all of the data passed to it.

Therefore, if the callback receives some data that it cannot handle
(either because the receive buffer has not been set up yet or because it
would not fit into the receive buffer) and we have to ignore it, we
still have to report that the data has been handled.

Obviously, this should not happen normally. But it does happen at least
for FTP connections where some data (that we do not expect) may be
generated when the connection is established.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-3-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 4e7676571b)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
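
A self-contained sketch of the callback contract; the state struct and field names are made up for illustration, but the key point matches the patch: always report the data as handled:

#include <stddef.h>
#include <string.h>

typedef struct ReadState {
    char *buf;                      /* receive buffer, may not exist yet */
    size_t buf_len;
} ReadState;

static size_t curl_read_cb(char *ptr, size_t size, size_t nmemb, void *opaque)
{
    ReadState *s = opaque;
    size_t realsize = size * nmemb;

    if (!s->buf || realsize > s->buf_len) {
        /* Nowhere to put it (e.g. stray FTP data at connection setup), but
         * still claim it as handled or libcurl aborts the whole transfer. */
        return realsize;
    }
    memcpy(s->buf, ptr, realsize);
    return realsize;
}
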
Max Reitz 6a5ea68e9e block/curl: Use BDRV_SECTOR_SIZE
Currently, curl defines its own constant SECTOR_SIZE. There is no
advantage over using the global BDRV_SECTOR_SIZE, so drop it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20161025025431.24714-2-mreitz@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 9054d9f6b0)

Conflicts:
	block/curl.c

* drop context dep on fffb6e1 / aio_bh_schedule_oneshot

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
Eric Blake 31454ebde5 block: Pass unaligned discard requests to drivers
Discard is advisory, so rounding the requests to alignment
boundaries is never semantically wrong from the data that
the guest sees.  But at least the Dell Equallogic iSCSI SAN
has an interesting property: its advertised discard
alignment is 15M, yet it documents that discarding a sequence
of 1M slices will eventually result in the 15M page being
marked as discarded, and it is possible to observe which
pages have been discarded.

Between commits 9f1963b and b8d0a980, we converted the block
layer to a byte-based interface that ultimately ignores any
unaligned head or tail based on the driver's advertised
discard granularity, which means that qemu 2.7 refuses to
pass any discard request smaller than 15M down to the Dell
Equallogic hardware.  This is a slight regression in behavior
compared to earlier qemu, where a guest executing discards
in power-of-2 chunks used to be able to get every page
discarded, but is now left with various pages still allocated
because the guest requests did not align with the hardware's
15M pages.

Since the SCSI specification says nothing about a minimum
discard granularity, and only documents the preferred
alignment, it is best if the block layer gives the driver
every bit of information about discard requests, rather than
rounding it to alignment boundaries early.

Rework the block layer discard algorithm to mirror the write
zero algorithm: always peel off any unaligned head or tail
and manage that in isolation, then do the bulk of the request
on an aligned boundary.  The fallback when the driver returns
-ENOTSUP for an unaligned request is to silently ignore that
portion of the discard request; but for devices that can pass
the partial request all the way down to hardware, this can
result in the hardware coalescing requests and discarding
aligned pages after all.

Reported-by: Peter Lieven <pl@kamp.de>
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

(cherry picked from commit 3482b9bc41)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:44 -06:00
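
A standalone illustration (not QEMU code) of the head/core/tail split the reworked algorithm performs; the alignment and request sizes are made up so that all three cases show up:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t align  = 4ULL << 20;   /* pretend the device advertises 4M */
    uint64_t offset = 1ULL << 20;   /* unaligned 1M start ... */
    uint64_t count  = 10ULL << 20;  /* ... for a 10M discard */

    while (count > 0) {
        uint64_t head = offset % align;
        uint64_t num = count;

        if (head) {
            /* peel off the unaligned head up to the next boundary */
            num = (align - head) < count ? (align - head) : count;
        } else if (count >= align) {
            /* aligned core: a whole number of alignment-sized pages */
            num = (count / align) * align;
        }
        /* otherwise this is the unaligned tail; pass it down as-is */

        printf("discard offset=%" PRIu64 "M bytes=%" PRIu64 "M\n",
               offset >> 20, num >> 20);
        offset += num;
        count -= num;
    }
    return 0;
}
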
Eric Blake 5e4eb851dd block: Return -ENOTSUP rather than assert on unaligned discards
Right now, the block layer rounds discard requests, so that
individual drivers are able to assert that discard requests
will never be unaligned.  But there are some ISCSI devices
that track and coalesce multiple unaligned requests, turning it
into an actual discard if the requests eventually cover an
entire page, which implies that it is better to always pass
discard requests as low down the stack as possible.

In isolation, this patch has no semantic effect, since the
block layer currently never passes an unaligned request through.
But the block layer already has code that silently ignores
drivers that return -ENOTSUP for a discard request that cannot
be honored (as well as drivers that return 0 even when nothing
was done).  But the next patch will update the block layer to
fragment discard requests, so that clients are guaranteed that
they are either dealing with an unaligned head or tail, or an
aligned core, making it similar to the block layer semantics of
write zero fragmentation.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 49228d1e95)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:43 -06:00
Eric Blake dd11d33deb block: Let write zeroes fallback work even with small max_transfer
Commit 443668ca rewrote the write_zeroes logic to guarantee that
an unaligned request never crosses a cluster boundary.  But
in the rewrite, the new code assumed that at most one iteration
would be needed to get to an alignment boundary.

However, it is easy to trigger an assertion failure: the Linux
kernel limits loopback devices to advertise a max_transfer of
only 64k.  Any operation that requires falling back to writes
rather than more efficient zeroing must obey max_transfer during
that fallback, which means an unaligned head may require multiple
iterations of the write fallbacks before reaching the aligned
boundaries, when layering a format with clusters larger than 64k
atop the protocol of file access to a loopback device.

Test case:

$ qemu-img create -f qcow2 -o cluster_size=1M file 10M
$ losetup /dev/loop2 /path/to/file
$ qemu-io -f qcow2 /dev/loop2
qemu-io> w 7m 1k
qemu-io> w -z 8003584 2093056

In fairness to Denis (as the original listed author of the culprit
commit), the faulty logic for at most one iteration is probably all
my fault in reworking his idea.  But the solution is to restore what
was in place prior to that commit: when dealing with an unaligned
head or tail, iterate as many times as necessary while fragmenting
the operation at max_transfer boundaries.

Reported-by: Ed Swierk <eswierk@skyportsystems.com>
CC: qemu-stable@nongnu.org
CC: Denis V. Lunev <den@openvz.org>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b2f95feec5)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:30 -06:00
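
Working the numbers from the test case above (an illustration only): the 1k write at 7m allocates the 1M cluster containing the unaligned head, so the zero operation must fall back to real writes there, and those writes have to be fragmented at max_transfer:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t cluster      = 1 << 20;    /* qcow2 cluster_size=1M */
    uint64_t max_transfer = 64 << 10;   /* loopback devices advertise 64k */
    uint64_t offset       = 8003584;    /* w -z 8003584 2093056 */

    uint64_t head = (cluster - offset % cluster) % cluster;
    uint64_t iters = (head + max_transfer - 1) / max_transfer;

    /* prints: head 385024 bytes -> 6 fallback writes, not just one */
    printf("head %" PRIu64 " bytes -> %" PRIu64 " fallback writes\n",
           head, iters);
    return 0;
}
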
Eric Blake c4bf37e0b0 qcow2: Inform block layer about discard boundaries
At the qcow2 layer, discard is only possible on a per-cluster
basis; at the moment, qcow2 silently rounds any unaligned
requests to this granularity.  However, an upcoming patch will
fix a regression in the block layer ignoring too much of an
unaligned discard request, by changing the block layer to
break up a discard request at alignment boundaries; for that
to work, the block layer must know about our limits.

However, we can't go one step further by changing
qcow2_discard_clusters() to assert that requests are always
aligned, since that helper function is reached on paths
outside of the block layer.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ecdbead659)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-12 17:48:19 -06:00
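
The advertisement itself is essentially a one-liner; a sketch of the limit set in qcow2's refresh-limits path (simplified):

/* block/qcow2.c (sketch): discards can only be honoured per cluster */
bs->bl.pdiscard_alignment = s->cluster_size;
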
Samuel Thibault e9dbd28e2a slirp: Fix access to freed memory
if_start() goes through the slirp->if_fastq and slirp->if_batchq
list of pending messages, and accesses ifm->ifq_so->so_nqueued of its
elements if ifm->ifq_so != NULL.  When freeing a socket, we thus need
to make sure that any pending message for this socket does not refer
to the socket any more.

Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Tested-by: Brian Candler <b.candler@pobox.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit ea64d5f088)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:56:26 -06:00
Greg Kurz 92230a5963 vhost: adapt vhost_verify_ring_mappings() to virtio 1 ring layout
With virtio 1, the vring layout is split in 3 separate regions of
contiguous memory for the descriptor table, the available ring and the
used ring, as opposed with legacy virtio which uses a single region.

In case of memory re-mapping, the code ensures it doesn't affect the
vring mapping. This is done in vhost_verify_ring_mappings() which assumes
the device is legacy.

This patch changes vhost_verify_ring_mappings() to check the mappings of
each part of the vring separately.

This works for legacy mappings as well.

Cc: qemu-stable@nongnu.org
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit f1f9e6c596)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:53:11 -06:00
Kevin Wolf 48b3aa20ae block: Don't mark node clean after failed flush
Commit 3ff2f67a changed bdrv_co_flush() so that no flush is issued if
the image hasn't been dirtied since the last flush. This is not quite
correct: The condition should be that the image hasn't been dirtied
since the last _successful_ flush. This patch changes the logic
accordingly.

Without this fix, subsequent bdrv_co_flush() calls would return success
without actually doing anything even though the image is still dirty.
The difference is visible in some blkdebug test cases where error
messages incorrectly disappeared after commit 3ff2f67a.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1478300595-10090-1-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e6af1e0854)

Conflicts:
	block/io.c

* remove context dep on 9972354

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
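
A toy model (not QEMU code) of the generation logic; with the bug, flushed_gen would be updated even when the flush failed, so the retry would be skipped as "already clean":

#include <stdbool.h>
#include <stdio.h>

static unsigned write_gen = 1;      /* the image has been dirtied once */
static unsigned flushed_gen;
static bool flush_fails = true;     /* first flush attempt hits an error */

static int bdrv_co_flush_model(void)
{
    if (write_gen == flushed_gen) {
        return 0;                   /* nothing dirtied since the last flush */
    }
    if (flush_fails) {
        return -5;                  /* -EIO: must NOT update flushed_gen */
    }
    flushed_gen = write_gen;        /* only a successful flush counts */
    return 0;
}

int main(void)
{
    printf("first flush:  %d\n", bdrv_co_flush_model());   /* -5 */
    flush_fails = false;
    printf("second flush: %d\n", bdrv_co_flush_model());   /* 0, really flushed */
    return 0;
}
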
Michael S. Tsirkin f1372d6e14 virtio-net: mark VIRTIO_NET_F_GSO as legacy
virtio 1.0 spec says this is a legacy feature bit,
hide it from guests in modern mode.

Note: for cross-version migration compatibility,
we keep the bit set in host_features.
The result will be that a guest migrating cross-version
will see host features change under it.
As guests only seem to read it once, this should
not be an issue. Meanwhile, we will work to fix guests to
ignore this bit in virtio1 mode, too.

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 2a083ffd2e)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
Michael S. Tsirkin 63087cd74b virtio: allow per-device-class legacy features
Legacy features are those that transitional devices only
expose on the legacy interface.
Allow different ones per device class.

Cc: qemu-stable@nongnu.org # dependency for the next patch
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
(cherry picked from commit 9b706dbbbb)

Conflicts:
	hw/virtio/virtio.c

* drop context dep on ff4c07df
* resolve func dep on ff4c07df creating vdc variable in
  virtio_device_class_init()

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-08 13:50:54 -06:00
David Gibson 9df69dcd14 target-ppc: Fix CPU migration from qemu-2.6 <-> later versions
When migration for target-ppc was converted to vmstate, several
VMSTATE_EQUAL() checks were foolishly included of things that really
should be internal state.  Specifically we verified equality of the
insns_flags and insns_flags2 fields, which are used within TCG to
determine which groups of instructions are available on this cpu
model.  Between qemu-2.6 and qemu-2.7 we made some changes to these
classes which broke migration.

This patch fixes migration both forwards and backwards.  On migration
from 2.6 to later versions we import the fields into temporary
variables, which we then ignore.  In migration backwards, we populate
the temporary fields from the runtime fields, but mask out the bits
which were added after qemu-2.6, allowing the VMSTATE_EQUAL in
qemu-2.6 to accept the stream.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 16a2497bd4)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-12-02 09:38:26 -06:00
Daniel P. Berrange 95a0638043 net: fix sending of data with -net socket, listen backend
The use of -net socket,listen was broken in the following
commit

  commit 16a3df403b
  Author: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
  Date:   Fri May 13 15:35:19 2016 +0800

    net/net: Add SocketReadState for reuse codes

    This function is from net/socket.c, move it to net.c and net.h.
    Add SocketReadState to make others reuse net_fill_rstate().
    suggestion from jason.

This refactored the state out of NetSocketState into a
separate SocketReadState. This refactoring requires
that a callback is provided to be triggered upon
completion of a packet receive from the guest.

The patch only registered this callback in the codepaths
hit by -net socket,connect, not -net socket,listen. So
as a result packets sent by the guest in the latter case
get dropped on the floor.

This bug is hidden because net_fill_rstate() silently
does nothing if the callback is not set.

This patch adds in the missing callback registration
and also adds an assert so that QEMU aborts if there
are any other codepaths hit which are missing the
callback.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit e79cd40680)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-18 19:06:19 -06:00
Corey Minyard 1790a9d77d acpi/ipmi: Initialize the fwinfo before fetching it
The initialization was missed before, resulting in some
bad data in the smbus case.

Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 698ae42b91)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-18 19:05:21 -06:00
Alex Williamson 1b16ded6a5 memory: Don't use memcpy for ram_device regions
With a vfio assigned device we lay down a base MemoryRegion registered
as an IO region, giving us read & write accessors.  If the region
supports mmap, we lay down a higher priority sub-region MemoryRegion
on top of the base layer initialized as a RAM device pointer to the
mmap.  Finally, if we have any quirks for the device (ie. address
ranges that need additional virtualization support), we put another IO
sub-region on top of the mmap MemoryRegion.  When this is flattened,
we now potentially have sub-page mmap MemoryRegions exposed which
cannot be directly mapped through KVM.

This is as expected, but a subtle detail of this is that we end up
with two different access mechanisms through QEMU.  If we disable the
mmap MemoryRegion, we make use of the IO MemoryRegion and service
accesses using pread and pwrite to the vfio device file descriptor.
If the mmap MemoryRegion is enabled and results in one of these
sub-page gaps, QEMU handles the access as RAM, using memcpy to the
mmap.  Using either pread/pwrite or the mmap directly should be
correct, but using memcpy causes us problems.  I expect that not only
does memcpy not necessarily honor the original width and alignment in
performing a copy, but it potentially also uses processor instructions
not intended for MMIO spaces.  It turns out that this has been a
problem for Realtek NIC assignment, which has such a quirk that
creates a sub-page mmap MemoryRegion access.

To resolve this, we disable memory_access_is_direct() for ram_device
regions since QEMU assumes that it can use memcpy for those regions.
Instead we access through MemoryRegionOps, which replaces the memcpy
with simple de-references of standard sizes to the host memory.

With this patch we attempt to provide unrestricted access to the RAM
device, allowing byte through qword access as well as unaligned
access.  The assumption here is that accesses initiated by the VM are
driven by a device specific driver, which knows the device
capabilities.  If unaligned accesses are not supported by the device,
we don't want them to work in a VM by performing multiple aligned
accesses to compose the unaligned access.  A downside of this
philosophy is that the xp command from the monitor attempts to use
the largest available access width, unaware of the underlying
device.  Using memcpy had this same restriction, but at least now an
operator can dump individual registers, even if blocks of device
memory may result in access widths beyond the capabilities of a
given device (RTL NICs only support up to dword).

Reported-by: Thorsten Kohfeldt <thorsten.kohfeldt@gmx.de>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 4a2e242bbb)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
Alex Williamson ca83f87a66 memory: Replace skip_dump flag with "ram_device"
Setting skip_dump on a MemoryRegion allows us to modify one specific
code path, but the restriction we're trying to address encompasses
more than that.  If we have a RAM MemoryRegion backed by a physical
device, it not only restricts our ability to dump that region, but
also affects how we should manipulate it.  Here we recognize that
MemoryRegions do not change to sometimes allow dumps and other times
not, so we replace setting the skip_dump flag with a new initializer
so that we know exactly the type of region to which we're applying
this behavior.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 21e00fa55f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
Prasad J Pandit 2817466c55 net: rtl8139: limit processing of ring descriptors
RTL8139 ethernet controller in C+ mode supports multiple
descriptor rings, each with maximum of 64 descriptors. While
processing transmit descriptor ring in 'rtl8139_cplus_transmit',
it does not limit the descriptor count and runs forever. Add
check to avoid it.

Reported-by: Andrew Henderson <hendersa@icculus.org>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit c7c3591669)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:01 -05:00
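
A sketch of the added bound; rtl8139_cplus_transmit_one() is the existing per-descriptor helper, and 64 matches the ring's maximum descriptor count:

/* hw/net/rtl8139.c (sketch): never walk more than one full ring */
int txcount = 0;

while (txcount < 64 && rtl8139_cplus_transmit_one(s)) {
    txcount++;
}
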
Alberto Garcia e389e44a35 qemu-iotests: Test I/O in a single drive from a throttling group
iotest 093 contains a test that creates a throttling group with
several drives and performs I/O in all of them. This patch adds a new
test that creates a similar setup but only performs I/O in one of the
drives at the same time.

This is useful to test that the round robin algorithm is behaving
properly in these scenarios, and is specifically written using the
regression introduced in 27ccdd5259 as an example.

Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a26ddb4396)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Alberto Garcia b1fdc94193 throttle: Correct access to wrong BlockBackendPublic structures
In 27ccdd5259 the throttling fields were
moved from BlockDriverState to BlockBackend. However in a few cases
the code started using throttling fields from the active BlockBackend
instead of the round-robin token, making the algorithm behave
incorrectly.

This can cause starvation if there's a throttling group with several
drives but only one of them has I/O.

Cc: qemu-stable@nongnu.org
Reported-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6bf77e1c2d)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Thomas Huth 61781984db ppc/kvm: Mark 64kB page size support as disabled if not available
QEMU currently refuses to start with KVM-PR and only prints out

	qemu: fatal: Unknown MMU model 851972

when being started there. This is because commit 4322e8ced5
("ppc: Fix 64K pages support in full emulation") introduced a new
POWERPC_MMU_64K bit to indicate support for this page size, but
it never gets cleared on KVM-PR if the host kernel does not support
this. Thus we've got to turn off this bit in the mmu_model for KVM-PR.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 0d594f5565)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Paolo Bonzini 857efecf91 rbd: shift byte count as a 64-bit value
Otherwise, reads of more than 2GB fail.  Until commit
7bbca9e290, reads of 2^41
bytes succeeded at least theoretically.

In fact, pdiscard ought to receive a 64-bit integer as the
count for the same reason.

Reported by Coverity.

Fixes: 7bbca9e290
Cc: qemu-stable@nongnu.org
Cc: kwolf@redhat.com
Cc: eblake@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e948f663e9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
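
A standalone demonstration (not the rbd code) of why the count must be widened before shifting; BDRV_SECTOR_BITS is 9:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int nb_sectors = 5 << 20;                               /* 5M sectors = 2.5 GB */
    int32_t bad  = (int32_t)((uint32_t)nb_sectors << 9);    /* wraps in 32 bits */
    int64_t good = (int64_t)nb_sectors << 9;                /* shifted as 64-bit */

    printf("32-bit: %" PRId32 "\n64-bit: %" PRId64 "\n", bad, good);
    return 0;
}
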
Markus Armbruster 99837b0d36 tests/test-qmp-input-strict: Cover missing struct members
These tests would have caught the bug fixed by the previous commit.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1475594630-24758-1-git-send-email-armbru@redhat.com>
(cherry picked from commit bce3035a44)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:19:00 -05:00
Marc-André Lureau b34c7bd463 qapi: Fix crash when 'any' or 'null' parameter is missing
Unlike the other visit methods, visit_type_any() and visit_type_null()
neglect to check whether qmp_input_get_object() succeeded.  They crash
when it fails.  Reproducer:

{ "execute": "qom-set",
  "arguments": { "path": "/machine", "property": "rtc-time" } }

Will crash with:

qapi/qapi-visit-core.c:277: visit_type_any: Assertion `!err != !*obj'
failed

Broken in commit 5c678ee.  Fix by adding the missing error checks.

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20160922203927.28241-3-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message rephrased]
Signed-off-by: Markus Armbruster <armbru@redhat.com>

(cherry picked from commit c489780203)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 18:18:54 -05:00
Marc-André Lureau f3467f56c1 qmp: fix object-add assert() without props
Since commit ad739706bb, user_creatable_add_type() expects to be
given a qdict. However, if object-add is called without props, you reach
the assert: "qemu/qom/object_interfaces.c:115: user_creatable_add_type:
Assertion `qdict' failed.", because the qdict isn't created in this
case (it's optional).

Furthermore, qmp_input_visitor_new() is not meant to be called without a
dict, and a further commit will assert in this situation.

If none is given, create an empty qdict in qmp to avoid the
user_creatable_add_type() assert(qdict).

Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20160922203927.28241-2-marcandre.lureau@redhat.com>
Tested-by: Xiao Long Jiang <zxiaol@linux.vnet.ibm.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
(cherry picked from commit e64c75a975)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:24:25 -05:00
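
A sketch of the guard in qmp_object_add(), simplified (error handling for a non-dict props value is omitted):

/* qmp.c (sketch): props is optional, but the visitor needs a qdict */
if (props) {
    pdict = qobject_to_qdict(props);
    QINCREF(pdict);
} else {
    pdict = qdict_new();      /* synthesize an empty dict */
}
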
Daniel P. Berrange 5be5335661 char: fix missing return in error path for chardev TLS init
If the qio_channel_tls_new_(server|client) methods fail,
we disconnect the client. Unfortunately a missing return
means we then go on to try and run the TLS handshake on
a NULL I/O channel. This gives predictably segfaulty
results.

The main way to trigger this is to request a bogus TLS
priority string for the TLS credentials. e.g.

  -object tls-creds-x509,id=tls0,priority=wibble,...

Most other ways appear impossible to trigger except
perhaps if OOM conditions cause gnutls initialization
to fail.

Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
(cherry picked from commit 660a2d83e0)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:16:58 -05:00
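
A sketch of the server-side path (the client path is analogous); the point of the fix is the early return after disconnecting:

/* qemu-char.c (sketch): don't start a handshake on a NULL channel */
tioc = qio_channel_tls_new_server(s->ioc, s->tls_creds, NULL, &err);
if (!tioc) {
    error_free(err);
    tcp_chr_disconnect(chr);
    return;               /* this return was the missing piece */
}
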
Emilio G. Cota af29bd3193 qht: fix unlock-after-free segfault upon resizing
The old map's bucket locks are being unlocked *after*
that same old map has been passed to RCU for destruction.
This is a bug that can cause a segfault, since there's
no guarantee that the deletion will be deferred (e.g.
there may be no concurrent readers).

The segfault is easily triggered in RHEL6/CentOS6 with qht-test,
particularly on a single-core system or by pinning qht-test
to a single core.

Fix it by unlocking the map's bucket locks right after having
published the new map, and (crucially) before marking the map
for deletion via call_rcu().

While at it, expand qht_do_resize() to atomically do (1) a reset,
(2) a resize, or (3) a reset+resize. This simplifies the calling
code, since the new function (qht_do_resize_reset()) acquires
and releases the buckets' locks.

Note that no qht_do_reset inline is provided, since it would have
no users--qht_reset() already performs a reset without taking
ht->lock.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reported-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1475706880-10667-3-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 76b553b308)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:15:30 -05:00
Emilio G. Cota f72ca1ac6f qht: simplify qht_reset_size
Sometimes gcc doesn't pick up the fact that 'new' is properly
set if 'resize == true', which may generate an unnecessary
build warning.

Fix it by removing 'resize' and directly checking that 'new'
is non-NULL.

Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1475706880-10667-2-git-send-email-cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit f555a9d0b3)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:14:57 -05:00
Eric Blake 4d45fe11d1 migrate: Fix cpu-throttle-increment regression in HMP
Commit 69ef1f3 accidentally broke migrate_set_parameter's ability
to set the cpu-throttle-increment to anything other than the
default, because it forgot to parse the user's string into an
integer.

CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit bb2b777cf9)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:10:51 -05:00
John Snow 4a25ab2a04 block-backend: remove blk_flush_all
We can teach Xen to drain and flush each device as it needs to, instead
of trying to flush ALL devices. This removes the last user of
blk_flush_all.

The function is therefore removed under the premise that any new uses
of blk_flush_all would be the wrong paradigm: either flush the single
device that requires flushing, or use an appropriate flush_all mechanism
from outside of the BlkBackend layer.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 49137bf684)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:06:27 -05:00
John Snow 95200ebb50 qemu: use bdrv_flush_all for vm_stop et al
Reimplement bdrv_flush_all for vm_stop. In contrast to blk_flush_all,
bdrv_flush_all does not have device model restrictions. This allows
us to flush and halt unconditionally without error.

This allows us to do things like migrate when we have a device with
an open tray, but has a node that may need to be flushed, or nodes
that aren't currently attached to any device and need to be flushed.

Specifically, this allows us to migrate when we have a CDROM with
an open tray.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 22af08eacf)
Conflicts:
	cpus.c

* drop context dependency on 6d0ceb80

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:04:56 -05:00
John Snow 8e94512568 block: reintroduce bdrv_flush_all
Commit fe1a9cbc moved the flush_all routine from the bdrv layer to the
block-backend layer. In doing so, however, the semantics of the routine
changed slightly such that flush_all now used blk_flush instead of
bdrv_flush.

blk_flush can fail if the attached device model reports that it is not
"available," (i.e. the tray is open.) This changed the semantics of
flush_all such that it can now fail for e.g. open CDROM drives.

Reintroduce bdrv_flush_all to regain the old semantics without having to
alter the behavior of blk_flush or blk_flush_all, which are already
'doing the right thing.'

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Acked-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 4085f5c7a2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2016-11-02 17:01:44 -05:00