Commit graph

Paolo Bonzini 0d931d7062 serial: clean up THRE/TEMT handling
- assert TEMT is cleared before sending a character; we'll get one from
TSR if tsr_retry > 0, from the FIFO or THR otherwise

- assert THRE cleared and FIFO not empty (if enabled) before fetching a
character to send.  This effectively reverts dffacd46, but the check
makes no sense and commit f702e62 (serial: change retry logic to avoid
concurrency, 2014-07-11) must have made it unnecessary.  The commit
message for f702e62 talks about multiple calls to qemu_chr_fe_add_watch
triggering s->tsr_retry >= MAX_XMIT_RETRY, but other failures were
possible.  For example, if you have multiple calls, the subsequent ones
will see s->tsr_retry == 0 and will find THRE and/or TEMT on entry.

- for clarity, raise THRI immediately after the code sets THRE

- check THRE to see if another character has to be sent.  This makes
the assertions more obvious and also means TEMT has to be set as soon as
the loop ends.  It makes the loop send both TSR and THR if flow-control
happens in non-FIFO mode.  Previously, THR would be lost.

- clear TEMT together with THRE even in the non-FIFO case

The last two items are bugfixes, but they were just found by inspection
and do not squash known bugs.
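
Read together, the bullets amount to a transmit loop along the lines of the
sketch below (a simplified, non-FIFO stand-in; SerialSketch and the function
name are invented here, only the LSR bit values match the real serial.c):

    #include <assert.h>
    #include <stdint.h>

    #define LSR_THRE 0x20   /* transmit holding register empty (bit 5) */
    #define LSR_TEMT 0x40   /* transmitter empty: THR and TSR (bit 6) */

    /* simplified, non-FIFO stand-in for the real SerialState */
    typedef struct {
        uint8_t lsr, thr, tsr;
        int tsr_retry;
        int thr_ipending;
    } SerialSketch;

    static void xmit_sketch(SerialSketch *s)
    {
        do {
            assert(!(s->lsr & LSR_TEMT));        /* something is left to send */
            if (s->tsr_retry <= 0) {
                assert(!(s->lsr & LSR_THRE));    /* THR still holds a character */
                s->tsr = s->thr;                 /* move it to the shift register */
                s->lsr |= LSR_THRE;
                s->thr_ipending = 1;             /* raise THRI right after THRE */
            }
            /* try to transmit s->tsr here; on flow control, increment
             * tsr_retry and return without touching THRE/TEMT */
            s->tsr_retry = 0;
        } while (!(s->lsr & LSR_THRE));          /* more to send? keep looping */
        s->lsr |= LSR_TEMT;                      /* THR and TSR are both empty */
    }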

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 17:33:37 +01:00
Paolo Bonzini 4e02b0fcf5 serial: reset thri_pending on IER writes with THRI=0
This is responsible for failure of migration from 2.2 to 2.1, because
thr_ipending is always one in practice.

serial.c is setting thr_ipending unconditionally.  However, thr_ipending
is not used at all if THRI=0, and it will be overwritten again the next
time THRE or THRI changes.  For that reason, we can set thr_ipending to
zero every time THRI is reset.
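
Sketched in C, the reset looks like this (the struct and function are invented
for illustration; only the IER bit value and the thr_ipending/THRI names
follow serial.c):

    #include <stdint.h>

    #define UART_IER_THRI 0x02   /* IER bit 1: THR-empty interrupt enable */

    struct ier_sketch { uint8_t ier; int thr_ipending; };   /* stand-in state */

    static void ier_write_sketch(struct ier_sketch *s, uint8_t val)
    {
        s->ier = val & 0x0f;
        if (!(s->ier & UART_IER_THRI)) {
            s->thr_ipending = 0;   /* unused while THRI is off, so clear it */
        }
    }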

There is disagreement on whether LSR.THRE should be resampled when IER.THRI
goes from 0 to 1.  This patch does not touch the code, leaving that for
QEMU 2.3+.

This has no semantic change and is enough to fix migration in the common
case where the interrupt is not pending or is reported in IIR.  It does not
change the migration format, so 2.2.0 -> 2.1 will remain broken but we
can fix 2.2.1 -> 2.1 without breaking 2.2.1 <-> 2.2.0.

The case that remains broken (the one in which the subsection is strictly
necessary) is when THRE=1, the THRI interrupt has *not* been acknowledged
yet, and a higher-priority interrupt comes.  In this case, you need the
subsection to tell the source that the lower-priority THRI interrupt is
pending.  The subsection's breakage of migration, in this case, prevents
continuing the VM on the destination with an invalid state.

Cc: qemu-stable@nongnu.org
Reported-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 14:35:53 +01:00
Paolo Bonzini 269e235849 linuxboot: fix loading old kernels
Old kernels that used high memory only allowed the initrd to be in the
first 896MB of memory.  If you load the initrd above that limit, they complain
that "initrd extends beyond end of memory".

In order to fix this, while not breaking machines with small amounts
of memory fixed by cdebec5 (linuxboot: compute initrd loading address,
2014-10-06), we need to distinguish two cases.  If pc.c placed the
initrd at end of memory, use the new algorithm based on the e801
memory map.  If instead pc.c placed the initrd at the maximum address
specified by the bzImage, leave it there.

The only interesting part is that the low-memory info block is now
loaded very early, in real mode, and thus the 32-bit address has
to be converted into a real mode segment.  The initrd address is
also patched in the info block before entering real mode, it is
simpler that way.
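
For reference, the conversion mentioned here is the usual real-mode
arithmetic; a generic sketch, not the actual linuxboot code:

    #include <stdint.h>

    /* split a flat 32-bit address below 1 MB into a real-mode segment:offset */
    static void flat_to_seg_off(uint32_t addr, uint16_t *seg, uint16_t *off)
    {
        *seg = (uint16_t)(addr >> 4);
        *off = (uint16_t)(addr & 0xf);
    }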

This fixes booting the RHEL4.8 32-bit installation image with 1GB
of RAM.

Cc: qemu-stable@nongnu.org
Cc: mst@redhat.com
Cc: jsnow@redhat.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:03 +01:00
Paolo Bonzini 575a6f4082 kvm/apic: fix 2.2->2.1 migration
The wait_for_sipi field is set back to 1 after an INIT, so it was not
effective to reset it in kvm_apic_realize.  Introduce a reset callback
and reset wait_for_sipi there.

Reported-by: Igor Mammedov <imammedo@redhat.com>
Cc: qemu-stable@nongnu.org
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Paolo Bonzini 2f9ac42acf target-i386: add Ivy Bridge CPU model
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Paolo Bonzini 78a611f193 target-i386: add f16c and rdrand to Haswell and Broadwell
Both were added in Ivy Bridge (for which we do not have a CPU model
yet!).

Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Paolo Bonzini b3a4f0b1a0 target-i386: add VME to all CPUs
vm86 mode extensions date back to the 486.  All models should have
them.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Paolo Bonzini 64bbd372f2 pc: add 2.3 machine types
The next patch will differentiate them.

Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 5b9efc39ae i386: do not cross the pages boundaries in replay mode
This patch prevents instructions from crossing a page boundary in replay
mode, because such a crossing can cause an exception. The crossing is
allowed only when the boundary is crossed by the first instruction in the
block: if the current instruction has already crossed the boundary, that is
fine, because an exception has not stopped this code.
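
The condition being tested amounts to the following check (generic sketch,
4 KiB pages assumed; the real code uses the target's page mask):

    #include <stdbool.h>
    #include <stdint.h>

    /* does an instruction at pc with length len straddle a 4 KiB page? */
    static bool crosses_page(uint64_t pc, unsigned len)
    {
        return (pc & ~0xfffULL) != ((pc + len - 1) & ~0xfffULL);
    }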

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk bf2a7ddb0a cpus: make icount warp behave well with respect to stop/cont
This patch makes icount warp use the new QEMU_CLOCK_VIRTUAL_RT clock.
This way, icount's QEMU_CLOCK_VIRTUAL will never count time during which
the virtual machine is stopped.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 4e7fa73ec2 timer: introduce new QEMU_CLOCK_VIRTUAL_RT clock
This patch introduces a new clock, QEMU_CLOCK_VIRTUAL_RT, which
should be used for icount warping.  In the next patch, it
will be used to avoid a huge icount warp when a virtual
machine is stopped for a long time.
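
Callers read the new clock through the existing qemu_clock_get_ns() accessor;
a one-line sketch, assuming "qemu/timer.h" is included:

    /* nanoseconds of real time, excluding periods when the VM was stopped */
    int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL_RT);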

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk d8a499f17e cpu-exec: invalidate nocache translation if they are interrupted
In this case, QEMU might longjmp out of cpu-exec.c and miss the final
cleanup in cpu_exec_nocache.  Do this manually through a new compile
flag.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 2a62914bd8 icount: introduce cpu_get_icount_raw
Separate access to the instruction counter from the compensation for
speed and halting that is introduced by qemu_icount_bias.  This
introduces new infrastructure used by the record/replay patches.
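
A self-contained sketch of the split; the variable names and the shift-based
scaling are simplifications, not the actual cpus.c code:

    #include <stdint.h>

    static int64_t instructions_executed;   /* assumed raw counter */
    static int64_t qemu_icount_bias;        /* compensation for speed/halting */
    static int icount_time_shift = 10;      /* ns per instruction, power of two */

    /* raw value: instructions executed, nothing else */
    static int64_t icount_raw_sketch(void)
    {
        return instructions_executed;
    }

    /* virtual time: raw count scaled to nanoseconds, plus the bias */
    static int64_t icount_ns_sketch(void)
    {
        return qemu_icount_bias + (icount_raw_sketch() << icount_time_shift);
    }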

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk 626cf8f4c6 icount: set can_do_io outside TB execution
This patch sets the can_do_io flag so that icount can be read
within cpu-exec, but outside TB execution.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk e511b4d783 cpu-exec: reset exception_index correctly
The exception index is reset at every entry into the cpu_exec() function.
This may cause exceptions to be missed while replaying them.
This patch moves the exception_index reset to the locations where the
exceptions are processed.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Pavel Dovgalyuk b4ac20b4df cpu-exec: fix cpu_exec_nocache
In icount mode the cpu_exec_nocache function is used to execute part of an
existing TB. At the end of cpu_exec_nocache the newly created TB is deleted.
Sometimes the io_read function needs to recompile the current TB and restart
TB lookup and execution. After that, tb_find_fast finds the old (bigger)
TB again. This TB cannot be executed (because icount is not big enough)
and cpu_exec_nocache is called again. Such a loop continues over and over.
This patch deletes the old TB and avoids finding it in the TB cache.

Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Paolo Bonzini f8e1f53334 scsi-disk: provide maximum transfer length
The QEMU block layer has a limit of INT_MAX bytes per transfer.

Expose it in the block limits VPD page for both regular transfers
and WRITE SAME.
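
The advertised limit is simply INT_MAX converted to logical blocks; a hedged
sketch (the helper name is illustrative; the field itself lives in the block
limits VPD page and is expressed in blocks):

    #include <limits.h>
    #include <stdint.h>

    /* largest single transfer, in logical blocks, that stays within the
     * block layer's INT_MAX-bytes-per-request limit */
    static uint32_t max_transfer_blocks(uint32_t logical_block_size)
    {
        return (uint32_t)(INT_MAX / logical_block_size);   /* 4194303 for 512 */
    }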

Reported-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster 3c55fe2a13 scsi: Use g_new() & friends where that makes obvious sense
g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
for two reasons.  One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.

This commit only touches allocations with size arguments of the form
sizeof(T).
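
For example (Foo here is a placeholder type, not one of the touched SCSI
structures):

    #include <glib.h>

    typedef struct { int x; } Foo;   /* placeholder type */

    static Foo *alloc_foos(gsize n)
    {
        /* g_malloc(sizeof(Foo) * n) would also work, but the multiplication
         * is unchecked and the result is void *; g_new checks for overflow
         * and returns Foo *, so the compiler can catch type mismatches */
        return g_new(Foo, n);
    }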

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster 0bd0adbe5b scsi: Fuse g_malloc(); memset() into g_malloc0()
Coccinelle semantic patch:

    @@
    expression LHS, SZ;
    @@
    -       LHS = g_malloc(SZ);
    -       memset(LHS, 0, SZ);
    +       LHS = g_malloc0(SZ);
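
Instantiated in C, the transformation reads as follows (Req is a placeholder
type, not one of the touched SCSI structures):

    #include <glib.h>

    typedef struct { int tag; char data[64]; } Req;   /* placeholder type */

    static Req *alloc_req(void)
    {
        /* replaces the two-step g_malloc(sizeof(Req)); memset(r, 0, sizeof(Req));
         * with a single allocation that is already zero-filled */
        return g_malloc0(sizeof(Req));
    }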

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster 1c3381af32 scsi: Drop superfluous conditionals around g_free()
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster e42a92ae64 x86: Drop some superfluous casts from void *
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster ab3ad07f89 x86: Use g_new() & friends where that makes obvious sense
g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
for two reasons.  One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.

This commit only touches allocations with size arguments of the form
sizeof(T).

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster 4be34d1e21 x86: Fuse g_malloc(); memset() into g_malloc0()
Coccinelle semantic patch:

    @@
    expression LHS, SZ;
    @@
    -       LHS = g_malloc(SZ);
    -       memset(LHS, 0, SZ);
    +       LHS = g_malloc0(SZ);

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Markus Armbruster 18fc805534 x86: Drop superfluous conditionals around g_free()
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:02 +01:00
Wanpeng Li 18cd2c17b5 target-i386: get/set/migrate XSAVES state
Add the XSAVES-related definitions, along with the corresponding handling
in kvm_get/put and in the vmstate.

Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Paolo Bonzini 906b53a2de target-mips: kvm: do not use get_clock()
Use the external qemu-timer API instead.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Paolo Bonzini 0bb0b2d2fe target-i386: add feature flags for CPUID[EAX=0xd,ECX=1]
These represent xsave-related capabilities of the processor, and KVM may
or may not support them.

Add feature bits so that they are considered by "-cpu ...,enforce", and use
the new feature words instead of calling kvm_arch_get_supported_cpuid.

Bit 3 (XSAVES) is not migratable because it requires saving MSR_IA32_XSS.
Neither KVM nor any commonly available hardware supports it anyway.
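
These flags sit in CPUID leaf 0xD, sub-leaf 1, register EAX.  A quick
host-side check as a sketch (GCC's cpuid.h macro assumed; bit 3 being XSAVES
is stated above, the XSAVEOPT/XSAVEC/XGETBV1 positions 0-2 follow the SDM):

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        __cpuid_count(0xd, 1, eax, ebx, ecx, edx);   /* leaf 0xD, sub-leaf 1 */
        printf("XSAVEOPT=%u XSAVEC=%u XGETBV1=%u XSAVES=%u\n",
               eax & 1, (eax >> 1) & 1, (eax >> 2) & 1, (eax >> 3) & 1);
        return 0;
    }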

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger e9af2fef24 coverity/s390x: avoid false positive in kvm_irqchip_add_adapter_route
Paolo Bonzini reported that Coverity flags an uninitialized pad value.
Let's use a designated initializer for kvm_irq_routing_entry to avoid
this false positive. This is similar to kvm_irqchip_add_msi_route and
other users of kvm_irq_routing_entry.
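
The pattern itself, sketched with the gsi and type members of
kvm_irq_routing_entry (the helper function is invented for illustration):

    #include <linux/kvm.h>

    static struct kvm_irq_routing_entry make_route(__u32 gsi, __u32 type)
    {
        /* every member not named below, including the pad field, is
         * zero-filled; an uninitialized local plus field assignments would
         * leave the padding undefined and trip Coverity/valgrind */
        struct kvm_irq_routing_entry e = {
            .gsi  = gsi,
            .type = type,
        };
        return e;
    }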

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger e6eef7c221 valgrind/s390x: avoid false positives on KVM_SET_FPU ioctl
struct kvm_fpu contains alignment padding on s390x. Let's use a
designated initializer to avoid false positives from valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger 076796f8fd valgrind/i386: avoid false positives on KVM_SET_VCPU_EVENTS ioctl
struct kvm_vcpu_events contains reserved fields. Let's use a
designated initializer to avoid false positives in valgrind.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger d19ae73e98 valgrind/i386: avoid false positives on KVM_GET_MSRS ioctl
struct kvm_msrs contains a pad field. Let's use a designated
initializer on the info part to avoid false positives from
valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger c7fe4b1298 valgrind/i386: avoid false positives on KVM_SET_MSRS ioctl
struct kvm_msrs contains padding bytes. Let's use a designated
initializer on the info part to avoid false positives from
valgrind/memcheck. Do the same for generic MSRS, the TSC and
feature control.

We also need to zero out the reserved fields in the entries.
We do this in kvm_msr_entry_set as suggested by Paolo. This
avoids a big memset that a designated initializer on the
full structure would do.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger bdfc8480c5 valgrind/i386: avoid false positives on KVM_SET_XCRS ioctl
struct kvm_xcrs contains padding bytes. Let's use a designated
initializer to avoid false positives from valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger b0a0551283 valgrind/i386: avoid false positives on KVM_SET_PIT ioctl
struct kvm_pit_state2 contains pad fields. Let's use a designated
initializer to avoid false positives from valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger 5e0b7d8869 valgrind/i386: avoid false positives on KVM_SET_CLOCK ioctl
kvm_clock_data contains pad fields. Let's use a designated
initializer to avoid false positives from valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Christian Borntraeger d229b985b5 valgrind: avoid false positives in KVM_GET_DIRTY_LOG ioctl
struct kvm_dirty_log contains padding fields that trigger false
positives in valgrind. Let's use a designated initializer to avoid
false positives from valgrind/memcheck.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Eric Auger 9fc0e2d8ac vfio: use kvm_resamplefds_enabled()
Use the kvm_resamplefds_enabled() function.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Eric Auger f41389ae3c KVM_CAP_IRQFD and KVM_CAP_IRQFD_RESAMPLE checks
Compute kvm_irqfds_allowed by checking the KVM_CAP_IRQFD extension.
Remove direct settings in architecture specific files.

Add a new kvm_resamplefds_allowed variable, initialized by
checking the KVM_CAP_IRQFD_RESAMPLE extension. Add a corresponding
kvm_resamplefds_enabled() function.
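
The probes themselves are one-liners (fragment only; s is assumed to be the
KVMState being initialized and kvm_check_extension() is the existing helper):

    kvm_irqfds_allowed = kvm_check_extension(s, KVM_CAP_IRQFD) > 0;
    kvm_resamplefds_allowed = kvm_check_extension(s, KVM_CAP_IRQFD_RESAMPLE) > 0;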

A special notice for s390, where KVM_CAP_IRQFD was not immediately
advertised when the irqfd capability was introduced in the kernel;
KVM_CAP_IRQ_ROUTING was advertised instead.

This was fixed in "KVM: s390: announce irqfd capability",
ebc3226202d5956a5963185222982d435378b899 whereas irqfd support
was brought in 84223598778ba08041f4297fda485df83414d57e,
"KVM: s390: irq routing for adapter interrupts".  Both commits
first appear in 3.15 so there should not be any kernel
version impacted by this QEMU modification.

Signed-off-by: Eric Auger <eric.auger@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Aurelien Jarno 9551ea6991 target-i386: simplify AES emulation
This patch simplifies the AES code, by directly accessing the newly added
S-Box, InvS-Box and InvMixColumns tables instead of recreating them by
using the AES_Te and AES_Td tables.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Drew DeVault 5eba5a6632 Add bootloader name to multiboot implementation
The name is set to "qemu".

Signed-off-by: Drew DeVault <sir@cmpwn.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Drew DeVault <sircmpwn@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2014-12-15 12:21:01 +01:00
Peter Maydell 54600752a1 Collected x86 patches
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v1
 
 iQEcBAABAgAGBQJUjhUkAAoJEK0ScMxN0Cebe+IIAKTLD3oU9wLJ+PKIOcbRGoGq
 w+T9j16z/N5DlAT9WOJSPFqc4BYOjncm8CbV9OiTcqVkRYkX1B4Ne+Ao5oEiv3D3
 leABNxnM3h7ncvw89jb5C/ObWkmA6HmKZQDnUX5HwVaek3tTllmLzFqznDOnjGfr
 AajMEh4QkWdTufi+Vmn46MlhkJ5Z0GyhxYRKtjNcZ2sNh8o1gbcKA2uyUaQeUbi6
 WjYq66Gm7qWszKHSXmC0RTcF+uzlOVSaEAqDNvb1MvNyBvNQ3e6gHVazKQ1IVy3l
 +Smvc5pGbyS2qQSiLe2qEXhagzNARVPY7NNFhct++hpWW3O3it0z/aH4FqL2C/k=
 =32Pn
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/rth/tags/x86-next-20141214' into staging

Collected x86 patches

# gpg: Signature made Sun 14 Dec 2014 22:54:28 GMT using RSA key ID 4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg:                 aka "Richard Henderson <rth@redhat.com>"
# gpg:                 aka "Richard Henderson <rth@twiddle.net>"

* remotes/rth/tags/x86-next-20141214:
  target-i386: fix icount processing for repz instructions
  target-i386: fbld instruction doesn't set minus sign
  target-i386: Wrong conversion infinity from float80 to int32/int64

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-12-15 11:11:52 +00:00
Pavel Dovgalyuk c4d4525c38 target-i386: fix icount processing for repz instructions
TCG generates optimized code for i386 repz instructions in single step mode.
It means that when ecx becomes 0, execution of the string instruction breaks
immediately without an additional iteration for ecx==0 (which will only check
ecx and set the flags). Omitting this iteration leads to different
instruction counts in singlestep mode and in normal execution.
This patch disables the optimization of this last iteration in icount mode,
which should be deterministic.

v2: inverted the condition and formatted the comment

Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-12-14 16:48:38 -06:00
Dmitry Poletaev 18b41f95d2 target-i386: fbld instruction doesn't set minus sign
Signed-off-by: Dmitry Poletaev <poletaev-qemu@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-12-14 16:34:29 -06:00
Dmitry Poletaev ea32aaf1a7 target-i386: Wrong conversion infinity from float80 to int32/int64
Signed-off-by: Dmitry Poletaev <poletaev-qemu@yandex.ru>
Signed-off-by: Richard Henderson <rth@twiddle.net>
2014-12-14 16:34:29 -06:00
Peter Maydell e0d3795654 -----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
 
 iQEcBAABAgAGBQJUiyFkAAoJEJykq7OBq3PIOxwH/iNwgJiag12KGBZbwOGJmc/L
 6Z1XuvLmSbp+bIrbUfxIx+Izss8b7S16f3QvbMwOVGGpFRWvfGVljerLgiKagarl
 cj2aU+TX+VQjKrWTE1KgG64Ai4GfD6cCzr3fXCBLHetDhmA1h0qUxaZk580zvP/6
 1h2WACNh+KeFGyh1gJHjj/aYGZbPq7hiN04/HMiT0E5swKYVuygrUX/blaffWk6O
 LnW2aRFQM4I5wZXdUOyL5BOHH4wnQUN8i4xUpSKvILUuAWx16At58FbORmtrgk4k
 hJmDREBMg+nbCWWh7ehkcQEhxoZPww8MVolqnJfkMYr8UHZHrBYh8UBuznaeGbU=
 =YeTr
 -----END PGP SIGNATURE-----

Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging

# gpg: Signature made Fri 12 Dec 2014 17:09:56 GMT using RSA key ID 81AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>"

* remotes/stefanha/tags/block-pull-request:
  linux-aio: simplify removal of completed iocbs from the list
  linux-aio: drop return code from laio_io_unplug and ioq_submit
  linux-aio: rename LaioQueue idx field to "n"
  linux-aio: track whether the queue is blocked
  linux-aio: queue requests that cannot be submitted
  block: drop unused bdrv_clear_incoming_migration_all() prototype
  block: Don't add trailing space in "Formating..." message
  qemu-iotests: Remove traling whitespaces in *.out
  block: vhdx - set .bdrv_has_zero_init to bdrv_has_zero_init_1
  iotests: Fix test 039
  iotests: Filter for "Killed" in qemu-io output
  qemu-io: Add sigraise command
  block: vhdx - change .vhdx_create default block state to ZERO
  block: vhdx - update PAYLOAD_BLOCK_UNMAPPED value to match 1.00 spec
  block: vhdx - remove redundant comments
  block/rbd: fix memory leak
  iotests: Add test for vmdk JSON file names
  vmdk: Fix error for JSON descriptor file names
  block migration: fix return value

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-12-12 17:10:44 +00:00
Paolo Bonzini 82595da8de linux-aio: simplify removal of completed iocbs from the list
There is no need to do another O(n) pass on the list; the iocb to
split the list at is already available through the array we passed to
io_submit.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-6-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00
Paolo Bonzini de35464461 linux-aio: drop return code from laio_io_unplug and ioq_submit
These are unused.

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-5-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00
Paolo Bonzini 8455ce053a linux-aio: rename LaioQueue idx field to "n"
It does not identify an index in an array anymore.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-4-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00
Paolo Bonzini 43f2376e09 linux-aio: track whether the queue is blocked
Avoid having unplug submit requests when io_submit reported that it
couldn't accept more; at the same time, try more io_submit calls if it
could handle the whole set of requests that were passed, so that the
"blocked" flag is reset as soon as possible.

After the previous patch, laio_submit already tried to avoid submitting
requests to a blocked queue, by comparing s->io_q.idx with "==" instead
of the more natural ">=".  Switch to the simpler expression now that we
have the "blocked" flag.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-3-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00
Paolo Bonzini 28b240877b linux-aio: queue requests that cannot be submitted
Keep a queue of requests that were not submitted; pass them to
the kernel when a completion is reported, unless the queue is
plugged.

The array of iocbs is rebuilt every time from scratch.  This
avoids keeping the iocbs array and list synchronized.
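
A minimal sketch of the idea, using libaio's io_submit() directly (the queue
layout, the fixed-size array and all names are simplifications, not the
actual linux-aio.c structures):

    #include <libaio.h>
    #include <string.h>

    #define MAX_QUEUED 128

    struct ioq_sketch {
        io_context_t ctx;
        struct iocb *queue[MAX_QUEUED]; /* requests the kernel has not taken */
        int n;                          /* number of queued requests */
        int plugged;                    /* caller is batching; hold submission */
    };

    /* hand the queued iocbs to the kernel; called when a request is queued
     * and again whenever a completion is reaped */
    static void ioq_submit_sketch(struct ioq_sketch *q)
    {
        if (q->plugged || q->n == 0) {
            return;
        }
        int ret = io_submit(q->ctx, q->n, q->queue);
        if (ret < 0) {
            ret = 0;                    /* nothing accepted; keep it all queued */
        }
        /* keep whatever the kernel did not take for the next attempt */
        memmove(q->queue, &q->queue[ret], (q->n - ret) * sizeof(q->queue[0]));
        q->n -= ret;
    }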

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1418305950-30924-2-git-send-email-pbonzini@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2014-12-12 16:57:55 +00:00