Compare commits

...

109 Commits

Author SHA1 Message Date
Michael Roth 4ce5bc2dd1 update VERSION for 1.1.2
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:38:39 -05:00
Ian Campbell 113f4cd9e9 console: bounds check whenever changing the cursor due to an escape code
This is XSA-17 / CVE-2012-3515

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
2012-09-05 10:38:39 -05:00
Paolo Bonzini f965d237b5 qemu-timer: properly arm alarm timer for timers set by device initialization
QEMU will hang when fed the following command-line

  qemu-system-mips -kernel vmlinux-2.6.32-5-4kc-malta -append "console=ttyS0" -nographic -net none

The -net none is important; otherwise it seems some events are generated
that cause things to work. When it doesn't work, the guest hangs while
measuring the CPU frequency, after the following line:

  [    0.000000] NR_IRQS:256

Pressing a key on the serial port unblocks it, hinting that the problem
is due to the recent elimination of the 1 second timeout in the main
loop.

The problem is that because init_timer_alarm sets the timer's pending
flag to true, the alarm timer is never armed until after the first time
through the main loop.  Thus the bug started when QEMU started testing
the pending flag in qemu_mod_timer (commit 1828be3, more alarm timer
cleanup, 2010-03-10).

But actually, it isn't true at all that a timer is pending when the
alarm timer is created, and the real bug has been latent forever: the
fix is to remove the bogus setting of the pending flag.
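
A minimal stand-alone illustration of the failure mode (stand-in types and
names, not the actual qemu-timer code):

    #include <stdbool.h>
    #include <stdio.h>

    /* A timer whose "pending" flag suppresses rearming the host alarm timer. */
    struct toy_timer { bool pending; long long expire; };
    static bool alarm_armed;

    static void mod_timer(struct toy_timer *t, long long when)
    {
        t->expire = when;
        if (!t->pending) {      /* since commit 1828be3: only arm if not pending */
            alarm_armed = true;
        }
    }

    int main(void)
    {
        struct toy_timer t = { .pending = true };  /* the bogus initial state */
        mod_timer(&t, 100);                        /* timer set by device init */
        printf("alarm armed: %d\n", alarm_armed);  /* 0: nothing wakes the main loop */

        t.pending = false;                         /* the fix: not pending at creation */
        mod_timer(&t, 100);
        printf("alarm armed: %d\n", alarm_armed);  /* 1 */
        return 0;
    }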

Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Jan Kiszka <jan.kiszka@siemens.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit de188751da)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:38:39 -05:00
Max Filippov 5a16dd9bc8 target-xtensa: return ENOSYS for unimplemented simcalls
This prevents the guest from proceeding with uninitialised garbage returned
from unimplemented simcalls.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit e7eee62a90)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:38:39 -05:00
Max Filippov c7580c1034 target-xtensa: fix big-endian BBS/BBC implementation
Quote from ISA, 2.1:

For most Xtensa instructions, bit numbering is irrelevant; only the BBC
and BBS instructions assign bit numbers to values on which the processor
operates. The BBC/BBS instructions use big-endian bit ordering (0 is the
most-significant bit) on a big-endian processor configuration.
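
A plain-C illustration of the two numbering conventions (not the translator
code itself):

    #include <stdint.h>
    #include <stdio.h>

    static int bit_le(uint32_t v, int n) { return (v >> n) & 1; }        /* 0 = LSB */
    static int bit_be(uint32_t v, int n) { return (v >> (31 - n)) & 1; } /* 0 = MSB */

    int main(void)
    {
        uint32_t v = 0x80000000;
        /* Bit 0 is clear in little-endian numbering but set in big-endian
         * numbering, which is what BBC/BBS use on a big-endian configuration. */
        printf("le bit0=%d  be bit0=%d\n", bit_le(v, 0), bit_be(v, 0));
        return 0;
    }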

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 7ff7563fc1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:38:39 -05:00
Hans de Goede a8cd6f7ddf ehci: Fix NULL ptr deref when unplugging an USB dev with an iso stream active
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
(cherry picked from commit 7ce86aa1aa)

Conflicts:

	hw/usb/hcd-ehci.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:30:38 -05:00
Michael S. Tsirkin ce4fc986e5 msix: make [un]use vectors on reset/load optional
The facility to use/unuse vectors dynamically is helpful
for virtio but little else: everyone just seems to use
vectors in their init function.

Avoid clearing msix vector use info on reset and load.
For virtio, clear it explicitly.
This should fix regressions reported with ivshmem - though
I didn't test this, I verified that virtio keeps
working like it did.

Tested-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 3cac001e5a)

Conflicts:

	hw/msix.c
	hw/virtio-pci.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:30:38 -05:00
Gleb Natapov 219a7482ab reset PMBA and PMREGMISC PIIX4 registers.
The bug causes Windows + OVMF to hang after reboot, since OVMF
checks PMREGMISC to see if IO space is enabled and skips
configuration if it is.

Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 4d09d37c6a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:30:38 -05:00
Stefano Stabellini 28846ad3b5 qemu_rearm_alarm_timer: do not call rearm if the next deadline is INT64_MAX
qemu_rearm_alarm_timer partially duplicates the code in
qemu_next_alarm_deadline to figure out if it needs to rearm the timer.
If it calls qemu_next_alarm_deadline, it always rearms the timer even if
the next deadline is INT64_MAX.

This patch simplifies the behavior of qemu_rearm_alarm_timer and removes
the duplicated code, always calling qemu_next_alarm_deadline and only
rearming the timer if the deadline is less than INT64_MAX.
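
A sketch of the resulting logic (simplified names, not the exact
qemu-timer.c code):

    #include <stdint.h>

    typedef void (*rearm_fn)(int64_t delta_ns);

    /* Rearm the host alarm timer only when some timer actually has a
     * finite deadline; INT64_MAX means "no timer pending". */
    static void rearm_alarm_timer(int64_t next_deadline, rearm_fn rearm)
    {
        if (next_deadline < INT64_MAX) {
            rearm(next_deadline);
        }
        /* otherwise leave the alarm timer unarmed */
    }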

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
Tested-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 8227421e04)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-09-05 10:30:38 -05:00
Stefan Weil cccb5446a6 qemu-ga: Fix null pointer passed to unlink in failure branch
Clang reports this warning:

Null pointer passed as an argument to a 'nonnull' parameter

Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 4bdb1a3059)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-30 14:54:16 -05:00
Jan Kiszka 25c0807e3c memory: Fix copy&paste mistake in memory_region_iorange_write
The last argument of find_portio is "write", so this must be true here.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7e2a62d82a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-30 14:54:16 -05:00
Cam Macdonell 57fa9fb4ef ivshmem: remove redundant ioeventfd configuration
setup_ioeventfds() is unnecessary and actually causes a segfault when
ioeventfd=on is used on the command line.  Since ioeventfds are handled within
the memory API, it can be removed.

Signed-off-by: Cam Macdonell <cam@cs.ualberta.ca>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7e7de876ae)

Conflicts:

	hw/ivshmem.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-30 14:54:16 -05:00
Peter Maydell 450ead742a hw/arm_gic.c: Define .class_size in arm_gic_info TypeInfo
Add the missing .class_size definition to the arm_gic_info TypeInfo.
This fixes the memory corruption and possible segfault that otherwise
results when the class struct is allocated at too small a size and
the class init function writes off the end of it.

Reported-by: Adam Lackorzynski <adam@os.inf.tu-dresden.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 998a74bcda)

 - ARMGICClass isn't in 1.1, set class size to SysBusDeviceClass instead

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-30 14:54:16 -05:00
Aurelien Jarno 69c67eca97 tcg/mips: fix broken CONFIG_TCG_PASS_AREG0 code
The CONFIG_TCG_PASS_AREG0 code for calling ld/st helpers was
broken in that it did not respect the ABI requirement that 64
bit values were passed in even-odd register pairs. The simplest
way to fix this is to implement some new utility functions
for marshalling function arguments into the correct registers
and stack, so that the code which sets up the address and
data arguments does not need to care whether there has been
a preceding env argument.

Based on commit 9716ef3b for ARM by Peter Maydell.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 18fec301cd)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 14:38:44 -05:00
munkyu.im 256c899432 audio/winwave: previous audio buffer should be flushed
The Winwave audio backend has a problem with pausing and restarting audio out.
Unlike other backends, the Winwave pausing API does not flush the audio buffer.
As a result, the previous audio data is played before the sound the user
expects when audio is restarted.
So change the call to waveOutReset().

Signed-off-by: Munkyu Im <munkyu.im@samsung.com>
Signed-off-by: malc <av1474@comtv.ru>
(cherry picked from commit 13ef70f64e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 14:38:44 -05:00
Eric Johnson 849c865155 target-mips: allow microMIPS SWP and SDP to have RD equal to BASE
The microMIPS SWP and SDP instructions do not modify GPRs.  So their
behavior is well defined when RD equals BASE.  The MIPS Architecture
Verification Programs (AVPs) check that they work as expected.  This
is required for AVPs to pass.

Signed-off-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 36c6711bbe)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 14:38:44 -05:00
Eric Johnson 57708c532f target-mips: add privilege level check to several Cop0 instructions
The MIPS Architecture Verification Programs (AVPs) check privileged
instructions for the required privilege level.  These changes are needed
to pass the AVP suite.

Signed-off-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 2e15497c5b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 14:38:44 -05:00
Richard Henderson 8d45ae8352 mips-linux-user: Always support rdhwr.
The kernel will emulate this instruction if it's not supported
natively.  This insn is used for TLS, among other things, and
so is required by modern glibc.

Signed-off-by: Richard Henderson <rth@twiddle.net>
Cc: Riku Voipio <riku.voipio@iki.fi>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit b316728836)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:03 -05:00
Richard Henderson 2f0f684cce target-mips: Streamline indexed cp1 memory addressing.
We've already eliminated both base and index being zero.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 0516867450)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:03 -05:00
Richard Sandiford bc4321e754 Fix order of CVT.PS.S operands
The FS input to CVT.PS.S is the high half and FT is the low half.
tcg_gen_concat_i32_i64 takes the low half first, so the operands
were in the wrong order.
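
A plain-C model of the helper's argument order (illustrative only): the
first source becomes the low half of the 64-bit result, so FT must be
passed first and FS second.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t concat_i32_i64(uint32_t low, uint32_t high)
    {
        return ((uint64_t)high << 32) | low;
    }

    int main(void)
    {
        uint32_t fs = 0xAAAAAAAA;   /* high half of the paired-single value */
        uint32_t ft = 0xBBBBBBBB;   /* low half */
        printf("wrong: %016llx\n", (unsigned long long)concat_i32_i64(fs, ft));
        printf("right: %016llx\n", (unsigned long long)concat_i32_i64(ft, fs));
        return 0;
    }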

Signed-off-by: Richard Sandiford <rdsandiford@googlemail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 13d24f4972)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:03 -05:00
Richard Sandiford 9a32fb2824 Fix operands of RECIP2.S and RECIP2.PS
Read the second input operand of RECIP2.S and RECIP2.PS from FT rather
than FD.  RECIP2.D is already correct.

Signed-off-by: Richard Sandiford <rdsandiford@googlemail.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit d22d728987)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:03 -05:00
Aurelien Jarno 50462f2ca8 tcg/ia64: fix and optimize ld/st slow path
Store slow path has been broken in e141ab52d:
- the arguments are shifted before the last one (mem_index) is written.
- the shift is done for both slow and fast paths.

Fix that. Also optimize a bit by bundling the move together. This still
can be optimized, but it's better to wait for a decision to be taken on
the arguments order.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit d03c98d80f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:03 -05:00
Aurelien Jarno ec16f35e4e tcg/ia64: fix prologue/epilogue
Prologue and epilogue code has been broken in cea5f9a28.

Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 18d445b443)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Peter Maydell 0ba3d50242 tcg/arm: Fix broken CONFIG_TCG_PASS_AREG0 code
The CONFIG_TCG_PASS_AREG0 code for calling ld/st helpers was
broken in that it did not respect the ABI requirement that 64
bit values were passed in even-odd register pairs. The simplest
way to fix this is to implement some new utility functions
for marshalling function arguments into the correct registers
and stack, so that the code which sets up the address and
data arguments does not need to care whether there has been
a preceding env argument.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 9716ef3b1b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Matthew Ogilvie 0214b6b4de target-i386/translate.c: mov to/from crN/drN: ignore mod bits
> This instruction is always treated as a register-to-register (MOD = 11)
> instruction, regardless of the encoding of the MOD field in the MODR/M
> byte.

Also, Microport UNIX System V/386 v 2.1 (ca 1987) runs fine on
real Intel 386 and 486 CPUs (at least), but does not run in qemu without
this patch.

Signed-off-by: Matthew Ogilvie <mmogilvi_qemu@miniinfo.net>
Signed-off-by: malc <av1474@comtv.ru>
(cherry picked from commit 5c73b757e3)

Conflicts:

	target-i386/translate.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Paolo Bonzini 63f7166a80 ivshmem: fix memory_region_del_eventfd assertion failure
We do not register ioeventfds unless the IVSHMEM_IOEVENTFD feature
is set.  The same feature must be checked before releasing the eventfds.
Regression introduced by commit 563027c (ivshmem: use EventNotifier and
memory API, 2012-07-05).

Reported-by: Cam Macdonnell <cam@cs.ualberta.ca>
Tested-by: Cam Macdonnell <cam@cs.ualberta.ca>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 98609cd8fc)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Paolo Bonzini 1d34dff02f qom: object_delete should unparent the object first
object_deinit is only called when the reference count goes to zero,
and yet tries to do an object_unparent.  Now, object_unparent
either does nothing or it will decrease the reference count.
Because we know the reference count is zero, the object_unparent
call in object_deinit is useless.

Instead, we need to disconnect the object from its parent just
before we remove the last reference apart from the parent's.  This
happens in object_delete.  Once we do this, all calls to
object_unparent peppered through QEMU can go away.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit da5a44e8b0)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Anthony Liguori 5330a894ed monitor: don't try to initialize json parser when monitor is HMP
Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 26efaca377)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Stefan Weil cdcf2aa41c target-mips: Fix some helper functions (VR54xx multiplication)
Commits b5dc7732e1 and
be24bb4f30 optimized the code
and removed the correct setting of t0. Fix this.

gcc-4.7 detected this bug because parameter arg1 was unused
but set in set_HIT0_LO and set_HI_LOT0.

Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 6fc97fafce)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Meador Inge 5ea33884f6 target-mips: Enable access to required RDHWR hardware registers
While running in the usermode emulator all of the required*
MIPS32r2 RDHWR hardware registers should be accessible (the
Linux kernel enables access to these same registers).  Note
that these registers are still enabled when the MIPS ISA is
not release 2.  This is OK since the Linux kernel emulates
access to them when they are not available in hardware.

* There is also the ULR register which is only recommended
  for full release 2 compliance.  Incidentally, accessing
  this register in the current implementation works fine
  without flipping its access bit.

Signed-off-by: Meador Inge <meadori@codesourcery.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 94159135cb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Anthony Liguori 50d0184cb7 monitor: move json init from OPEN event to init
At some point in the past, the OPEN event was changed to be issued from a
bottom half.  This creates a small window where a data callback registered in
init may be invoked before the OPEN event has been issued.

This is reproducible with:

 echo "{'execute': 'qmp_capabilities'}" | qemu-system-x86_64 -M none -qmp stdio

We can fix this for the monitor by moving the parser initialization to init.

The remaining state that is set in OPEN appears harmless.

Reported-by: Daniel Berrange <berrange@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 58617a795c)

Conflicts:

	monitor.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jim Meyering c068d37020 softmmu-semi: fix lock_user* functions not to deref NULL upon OOM
Return NULL upon malloc failure.

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 15d9e3bc6a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jim Meyering cc5caf7df4 arm-semi: don't leak 1KB user string lock buffer upon TARGET_SYS_OPEN
Always call unlock_user before returning.

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 396bef4b38)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jim Meyering b68e45c686 sheepdog: don't leak socket file descriptor upon connection failure
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit a7e47d4bfc)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jim Meyering df60f451b3 linux-user: do_msgrcv: don't leak host_mb upon TARGET_EFAULT failure
Also, use g_malloc to avoid NULL-deref upon OOM.

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 0d07fe47d4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jim Meyering 1bc6332461 qemu-ga: don't leak a file descriptor upon failed lockf
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 4144f122b4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Dongxiao Xu 07548727b3 xen-all.c: fix multiply issue for int and uint types
If the two multiply operands are of int and uint types respectively,
the int operand will be converted to uint first, which is not the
intent in our code. The fix is to add an (int64_t) cast for the uint
operand before the multiply.
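
A self-contained demonstration of the conversion rule (illustrative values,
not the xen-all.c code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int      delta = -1;    /* signed operand */
        unsigned scale = 10;    /* unsigned operand */

        /* The usual arithmetic conversions turn the int into unsigned first,
         * so the product becomes a huge positive number instead of -10. */
        int64_t wrong = delta * scale;
        int64_t right = (int64_t)delta * scale;   /* widen before multiplying */

        printf("wrong=%lld right=%lld\n", (long long)wrong, (long long)right);
        return 0;
    }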

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 14d4018372)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Frediano Ziglio c7e6d6b115 Fix invalidate if memory requested was not bucket aligned
When memory is mapped in qemu_map_cache with lock != 0, a reverse mapping
is created pointing to the virtual address of the location requested.
The cached mapped entry is saved in last_address_vaddr with the memory
location of the base virtual address (without the bucket offset).
However, when this entry is invalidated, the virtual address saved in the
reverse mapping is used. This causes the mapping to be freed while
last_address_vaddr is not reset.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 27b7652ef5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:02 -05:00
Jan Kiszka 886c62a3ff i82378: Remove bogus MMIO coalescing
This MMIO area is an entry gate to legacy PC ISA devices, addressed via
PIO over there. Quite a few of the PIO ports have side effects on access
like starting/stopping timers that must be executed properly ordered
with respect to the CPU. So we have to remove the coalescing mark.

Acked-by: Hervé Poussineau <hpoussin@reactos.org>
Acked-by: Andreas Färber <andreas.faerber@web.de>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 0ec64507a5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Alexey Kardashevskiy b598cb2214 eventfd: making it thread safe
QEMU uses IO handlers to run select() in the main loop.
The handlers list is managed by the qemu_set_fd_handler() helper,
which works fine when called from the main thread as it is
called while select() is not waiting.

However, the IO handler list can be changed in a thread other than
the main one doing os_host_main_loop_wait(), for example as a result
of a hypercall which changes PCI config space (VFIO on POWER is the case)
and enables/disables MSI/MSIX, which closes/creates eventfd handles.
As the main loop should be waiting on the newly created eventfds,
it has to be restarted.

The patch adds the qemu_notify_event() call to interrupt select()
to make main_loop() restart select() with the updated IO handlers
list.
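
A sketch of the mechanism in plain C (self-pipe style; in QEMU the added
call is qemu_notify_event(), and the names below are illustrative):

    #include <unistd.h>

    static int notify_fds[2];   /* notify_fds[0] is part of select()'s read set */

    static void notify_main_loop(void)
    {
        char byte = 0;
        /* a single write interrupts the blocking select() in the main loop */
        (void)!write(notify_fds[1], &byte, 1);
    }

    /* Called from a non-main thread, e.g. after enabling/disabling MSI-X: */
    static void set_fd_handler_threadsafe(void)
    {
        /* ... update the IO handler list ... */
        notify_main_loop();     /* so select() restarts with the new list */
    }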

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 55ce75faf2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Paolo Bonzini 799c27e124 iscsi: fix races between task completion and abort
This patch fixes the following issues with block/iscsi.c:

1) iscsi_task_mgmt_abort_task_async calls iscsi_scsi_task_cancel which
was also directly called in iscsi_aio_cancel

2) a race between task completion and task abortion could happen because
scsi_free_scsi_task was done before iscsi_schedule_bh had finished.
To fix this, all the freeing of IscsiTasks and releasing of the AIOCBs
is centralized in iscsi_bh_cb, independent of whether the SCSI command
has completed or was cancelled.

3) iscsi_aio_cancel was not synchronously waiting for the end of the
command.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 1bd075f29e)

Conflicts:

	block/iscsi.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Paolo Bonzini b90d717b64 iscsi: simplify iscsi_schedule_bh
It is always used with the same callback, so remove the argument.  And
since its return value is never used, assume allocation succeeds.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit cfb3f5064a)

Conflicts:

	block/iscsi.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Paolo Bonzini a410be59b5 iscsi: move iscsi_schedule_bh and iscsi_readv_writev_bh_cb
Put these functions at the beginning, to avoid forward references
in the next patches.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 27cbd828c6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Kevin Wolf f946f010f5 Documentation: Warn against qemu-img on active image
People have repeatedly expected that you can do things like snapshotting
an image with qemu-img while a qemu instance is running. Maybe we need
to consider locking the files while they are in use, but having a
warning in the qemu-img manpage is doable for 1.2 and can't hurt anyway.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 48467328c6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Kevin Wolf d438650fa5 vmdk: Read footer for streamOptimized images
The footer takes precedence over the header when it exists. It contains
the real grain directory offset that is missing in the header. Without
this patch, streamOptimized images with a footer cannot be read.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 65bd155c73)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Kevin Wolf 07ab4fc1ef vmdk: Fix header structure
Commit bb45ded9 swapped gd_offset and rgd_offset. This is wrong.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 7a736bfa4e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-28 01:50:01 -05:00
Markus Armbruster fd21cc14f5 vl: Round argument of -m up to multiple of 8KiB
Partial pages make little sense and don't work.  Ensure the RAM size
is a multiple of any possible target's page size.

Fixes

    $ qemu-system-x86_64 -nodefaults -S -vnc :0 -m 0.8
    qemu-system-x86_64: /work/armbru/qemu/exec.c:2255: register_subpage: Assertion `existing->mr->subpage || existing->mr == &io_mem_unassigned' failed.
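
An illustration of the rounding (the exact expression used in vl.c may
differ):

    #include <stdint.h>

    /* Round the -m value up to the next multiple of 8 KiB so that no
     * target architecture is left with a partial page. */
    static uint64_t round_up_8k(uint64_t ram_size)
    {
        return (ram_size + 8191) & ~UINT64_C(8191);
    }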

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit ff96101552)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:42:32 -05:00
Markus Armbruster cfeb9958c8 pc: Fix RTC CMOS info on RAM for ram_size < 1MiB
pc_cmos_init() always claims 640KiB base memory, and ram_size - 1MiB
extended memory.  The latter can underflow to "lots of extended
memory".  Fix both, and clean up some.

Note: SeaBIOS currently requires 1MiB of RAM, and doesn't check
whether it got enough.
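
An illustration of the underflow being fixed (simplified, not the actual
pc_cmos_init() code):

    #include <stdint.h>

    static uint64_t extended_memory_kb(uint64_t ram_size)
    {
        const uint64_t one_mib = 1024 * 1024;
        /* The old unsigned "ram_size - 1 MiB" wraps around when
         * ram_size < 1 MiB; clamp it to zero instead. */
        return ram_size > one_mib ? (ram_size - one_mib) / 1024 : 0;
    }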

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit e89001f72e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:42:32 -05:00
Jan Kiszka ffc7565c81 kvm: i8254: Finish time conversion fix
0cdd3d1444 fixed reading back the counter load time from the kernel
while assuming the kernel would always update its load time on writing
the state. That is only true for channel 1, and so pit_get_channel_info
returned wrong output pin states for high counter values.

Fix this by applying the offset also on kvm_pit_put. Now we also need to
update the offset when we write the state while the VM is stopped as it
keeps on changing in that state.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
(cherry picked from commit 050a46065d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:42:32 -05:00
Jan Kiszka 393d4c9214 kvm: i8254: Cache kernel clock offset in KVMPITState
To prepare the final fix for clock calibration issues with the in-kernel
PIT, we want to cache the offset between vmclock and the clock used by
the in-kernel PIT. So far, we only need to update it when the VM state
changes between running and stopped because we only read the in-kernel
PIT state while the VM is running.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
(cherry picked from commit 205df4d1a8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:42:32 -05:00
Jason Baron 5ae1c7ca0a ahci: Fix ahci cdrom read corruptions for reads > 128k
While testing q35, which has its cdrom attached to the ahci controller, I found
that the Fedora 17 install would panic on boot. The panic occurs while
squashfs is trying to read from the cdrom. The errors are:

[    8.622711] SQUASHFS error: xz_dec_run error, data probably corrupt
[    8.625180] SQUASHFS error: squashfs_read_data failed to read block
0x20be48a

I was also able to produce corrupt data reads using an installed piix based
qemu machine, using 'dd'. I found that the corruptions were only occurring when
the read size was greater than 128k. For example, the following command
results in corrupted reads:

dd if=/dev/sr0 of=/tmp/blah bs=256k iflag=direct

The > 128k size reads exercise a different code path than 128k and below. In
ide_atapi_cmd_read_dma_cb() s->io_buffer_size is capped at 128k. Thus,
ide_atapi_cmd_read_dma_cb() is called a second time when the read is > 128k.
However, ahci_dma_rw_buf() restarts the read from offset 0 instead of at 128k,
thus resulting in a corrupted read.

To fix this, I've introduced an 'io_buffer_offset' field in IDEState to keep
track of the offset. I've also modified ahci_populate_sglist() to take a new
3rd offset argument, so that the sglist is properly initialized.

I've tested this patch using 'dd' testing, and Fedora 17 now correctly boots
and installs on q35 with the cdrom ahci controller.
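
A simplified model of the offset handling (field and helper names follow
the commit message; the real code lives in hw/ide/ahci.c):

    #include <stdint.h>
    #include <stddef.h>

    #define CHUNK_MAX (128 * 1024)

    struct ide_model { uint64_t io_buffer_offset; };

    static void populate_sglist_model(struct ide_model *s, size_t len)
    {
        /* map 'len' bytes of the guest buffer starting at io_buffer_offset;
         * the pre-fix code effectively always started at offset 0 */
        s->io_buffer_offset += len;
    }

    static void read_in_chunks(struct ide_model *s, uint64_t total)
    {
        s->io_buffer_offset = 0;
        while (total > 0) {
            size_t chunk = total > CHUNK_MAX ? CHUNK_MAX : (size_t)total;
            populate_sglist_model(s, chunk);   /* second chunk starts at 128k, not 0 */
            total -= chunk;
        }
    }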

Signed-off-by: Jason Baron <jbaron@redhat.com>
Tested-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 61f52e06f0)

Conflicts:

	hw/ide/ahci.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:59 -05:00
Jason Baron c094d3d818 ahci: Fix sglist memleak in ahci_dma_rw_buf()
I noticed that in hw/ide/ahci:ahci_dma_rw_buf() we do not free the sglist. Thus,
I've added a call to qemu_sglist_destroy() to fix this memory leak.

In addition, I've added a call in qemu_sglist_destroy() to zero all of the sglist
fields, in case there is some other codepath that tries to free the sglist.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit ea8d82a1ed)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:40 -05:00
Jan Kiszka a473439574 apic: Defer interrupt updates to VCPU thread
KVM performs TPR raising asynchronously to QEMU, specifically outside
QEMU's global lock. When an interrupt is injected into the APIC and TPR
is checked to decide if this can be delivered, a stale TPR value may be
used, causing spurious interrupts in the end.

Fix this by deferring apic_update_irq to the context of the target VCPU.
We introduce a new interrupt flag for this, CPU_INTERRUPT_POLL. When it
is set, the VCPU calls apic_poll_irq before checking for further pending
interrupts. To avoid special-casing KVM, we also implement this logic
for TCG mode.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 5d62c43a17)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Jan Kiszka 2a93a89400 apic: Reevaluate pending interrupts on LVT_LINT0 changes
When the guest modifies the LVT_LINT0 register, we need to check if some
pending PIC interrupt can now be delivered.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit a94820ddc3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Jan Kiszka c337db8cc9 apic: Resolve potential endless loop around apic_update_irq
Commit d96e173769 refactored the reinjection of pending PIC interrupts.
However, it missed the potential loop of apic_update_irq ->
apic_deliver_pic_intr -> apic_local_deliver -> apic_set_irq ->
apic_update_irq that /could/ occur if LINT0 is injected as APIC_DM_FIXED
and that vector is currently blocked via TPR.

Resolve this by reinjecting only where it matters: inside
apic_get_interrupt. This function may clear a vector while a
PIC-originated reason still exists.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 3db3659bf6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Jan Kiszka 44a42fd2b6 slirp: Improve error reporting of inaccessible smb directories
Instead of guessing, print the error code returned by access.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
(cherry picked from commit 22a61f365d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Dunrong Huang 1a933e4801 slirp: Ensure smbd and shared directory exist when enable smb
Users may pass the following parameters to qemu:
    $ qemu-kvm -net nic -net user,smb= ...
    $ qemu-kvm -net nic -net user,smb ...
    $ qemu-kvm -net nic -net user,smb=bad_directory ...

In these cases, qemu starts successfully while the samba server
fails to start. Users will be confused since the samba server
fails silently without any indication of what went wrong.

To avoid this, we check whether the shared directory exists and
whether users have permission to access this directory when QEMU's
"built-in" SMB server is enabled.

Signed-off-by: Dunrong Huang <riegamaths@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
(cherry picked from commit 927d811b28)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Jan Kiszka 55e64b6ddd slirp: Enforce host-side user of smb share
Windows 7 (and possibly other versions) cannot connect to the samba
share if the exported host directory is not world-readable. This can be
resolved by forcing the username used for access checks to the one
under which QEMU and smbd are running.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
(cherry picked from commit 1cb1c5d10b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Michael Roth b83883e2db check-qjson: add test for large JSON objects
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7109edfeb6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Michael Roth 9c7e3b98aa json-parser: don't replicate tokens at each level of recursion
Currently, when parsing a stream of tokens we make a copy of the token
list at the beginning of each level of recursion so that we do not
modify the original list in cases where we need to fall back to an
earlier state.

In the worst case, we will only read 1 or 2 tokens off the list before
recursing again, which means an upper bound of roughly N^2 token allocations.

For a "reasonably" sized QMP request (in this a QMP representation of
cirrus_vga's device state, generated via QIDL, being passed in via
qom-set), this caused my 16GB's of memory to be exhausted before any
noticeable progress was made by the parser.

This patch works around the issue by using a single copy of the token list
in the form of an indexable array so that we can save/restore state by
manipulating indices.

A subsequent commit adds a "large_dict" test case which exhibits the
same behavior as above. With this patch applied the test case successfully
completes in under a second.

Tested with valgrind, make check, and QMP.
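
A sketch of the approach (names illustrative, not the actual json-parser
code): keep one array of tokens and back-track by restoring a saved index
rather than copying the list at each level of recursion.

    #include <stddef.h>

    struct token;                /* opaque here */
    struct token_stream { struct token **tokens; size_t pos, count; };

    static size_t save_pos(struct token_stream *ts)              { return ts->pos; }
    static void   restore_pos(struct token_stream *ts, size_t p) { ts->pos = p;    }

    static struct token *next_token(struct token_stream *ts)
    {
        return ts->pos < ts->count ? ts->tokens[ts->pos++] : NULL;
    }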

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 65c0f1e955)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Michael Roth 8a869baa08 qlist: add qlist_size()
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit a86a4c2f7b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Hans de Goede dbeb6c22d7 usb-ehci: Fix an assert whenever isoc transfers are used
hcd-ehci.c is missing a usb_packet_init() call for the ipacket UsbPacket
it uses for isoc transfers, triggering an assert (taking the entire vm down)
in usb_packet_setup as soon as any isoc transfers are done by a high speed
USB device.

Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 7341ea075c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Hans de Goede b00201d402 usb-redir: Correctly handle the usb_redir_babble usbredir status
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit adae502c0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Gerd Hoffmann a1a17b1d5a usb: restore USBDevice->attached on vmload
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 495d544798)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Gerd Hoffmann f49853a4bd uhci: fix uhci_async_cancel_all
We update the QTAILQ in the loop, thus we must use the SAFE version
to make sure we don't touch the queue struct after freeing it.

https://bugzilla.novell.com/show_bug.cgi?id=766310

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 77fa9aee38)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Gerd Hoffmann b10daa61f9 ehci: don't flush cache on doorbell rings.
Commit 4be23939ab makes ehci instantly
zap any unlinked queue heads when the guest rings the doorbell.

While hacking up uas support this turned out to be a problem.  The linux
kernel can unlink and instantly relink the very same queue head, thereby
killing any async packets in flight. That alone isn't an issue yet; the
packet will be canceled and resubmitted and everything is fine. We'll run
into trouble though in case the async packet is completed already, so we
can't cancel it any more.  The transaction is simply lost then.

usb_ehci_qh_ptrs q (nil) - QH @ 39c4f000: next 39c4f122 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_qh_ptrs q 0x7f95feba90a0 - QH @ 39c4f000: next 39c4f122 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_qh_ptrs q 0x7f95fe515210 - QH @ 39c4f120: next 39c4f0c2 qtds 29dbce40,29dbc4e0,00000009
usb_ehci_qh_fields QH @ 39c4f120 - rl 4, mplen 512, eps 2, ep 1, dev 2
usb_ehci_packet_action q 0x7f95fe515210 p 0x7f95fdec32a0: alloc
usb_packet_state_change bus 0, port 2, ep 1, packet 0x7f95fdec32e0, state undef -> setup
usb_ehci_packet_action q 0x7f95fe515210 p 0x7f95fdec32a0: process
usb_uas_command dev 2, tag 0x2, lun 0, lun64 00000000-00000000
scsi_req_parsed target 0 lun 0 tag 2 command 42 dir 2 length 16384
scsi_req_parsed_lba target 0 lun 0 tag 2 command 42 lba 5933312
scsi_req_alloc target 0 lun 0 tag 2
scsi_req_continue target 0 lun 0 tag 2
scsi_req_data target 0 lun 0 tag 2 len 16384
usb_uas_scsi_data dev 2, tag 0x2, bytes 16384
usb_uas_write_ready dev 2, tag 0x2
usb_packet_state_change bus 0, port 2, ep 1, packet 0x7f95fdec32e0, state setup -> complete
usb_ehci_packet_action q 0x7f95fe515210 p 0x7f95fdec32a0: free
usb_ehci_qh_ptrs q 0x7f95fdec3210 - QH @ 39c4f0c0: next 39c4f002 qtds 29dbce40,00000001,00000009
usb_ehci_qh_fields QH @ 39c4f0c0 - rl 4, mplen 512, eps 2, ep 2, dev 2
usb_ehci_queue_action q 0x7f95fe5152a0: free
usb_packet_state_change bus 0, port 2, ep 2, packet 0x7f95feba9170, state async -> complete
^^^ async packets completes.
usb_ehci_packet_action q 0x7f95fdec3210 p 0x7f95feba9130: wakeup

usb_ehci_qh_ptrs q (nil) - QH @ 39c4f000: next 39c4f122 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_qh_ptrs q 0x7f95feba90a0 - QH @ 39c4f000: next 39c4f122 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_qh_ptrs q 0x7f95fe515210 - QH @ 39c4f120: next 39c4f002 qtds 29dbc4e0,29dbc8a0,00000009
usb_ehci_qh_fields QH @ 39c4f120 - rl 4, mplen 512, eps 2, ep 1, dev 2
usb_ehci_queue_action q 0x7f95fdec3210: free
usb_ehci_packet_action q 0x7f95fdec3210 p 0x7f95feba9130: free
^^^ endpoint #2 queue head removed from schedule, doorbell makes ehci zap the queue,
    the (completed) usb packet is freed too and gets lost.

usb_ehci_qh_ptrs q (nil) - QH @ 39c4f000: next 39c4f0c2 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_qh_ptrs q 0x7f95feba90a0 - QH @ 39c4f000: next 39c4f0c2 qtds 00000000,00000001,39c50000
usb_ehci_qh_fields QH @ 39c4f000 - rl 0, mplen 0, eps 0, ep 0, dev 0
usb_ehci_queue_action q 0x7f9600dff570: alloc
usb_ehci_qh_ptrs q 0x7f9600dff570 - QH @ 39c4f0c0: next 39c4f122 qtds 29dbce40,00000001,00000009
usb_ehci_qh_fields QH @ 39c4f0c0 - rl 4, mplen 512, eps 2, ep 2, dev 2
usb_ehci_packet_action q 0x7f9600dff570 p 0x7f95feba9130: alloc
usb_packet_state_change bus 0, port 2, ep 2, packet 0x7f95feba9170, state undef -> setup
usb_ehci_packet_action q 0x7f9600dff570 p 0x7f95feba9130: process
usb_packet_state_change bus 0, port 2, ep 2, packet 0x7f95feba9170, state setup -> async
usb_ehci_packet_action q 0x7f9600dff570 p 0x7f95feba9130: async
^^^ linux kernel relinked the queue head, ehci creates a new usb packet,
    but we should have delivered the completed one instead.
usb_ehci_qh_ptrs q 0x7f95fe515210 - QH @ 39c4f120: next 39c4f002 qtds 29dbc4e0,29dbc8a0,00000009
usb_ehci_qh_fields QH @ 39c4f120 - rl 4, mplen 512, eps 2, ep 1, dev 2

So instead of instantly zapping the queue we'll set a flag that the
queue needs revalidation in case we'll see it again in the schedule.
ehci then checks that the queue head fields addressing / describing the
endpoint and the qtd pointer match the cached content before reusing it.

Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 9bc3a3a216)

Conflicts:

	hw/usb/hcd-ehci.c

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Gerd Hoffmann 34b41ed15d ehci: fix reset
Check for the reset bit first when processing USBCMD register writes.
Also break out of the switch, there is no need to check the other bits.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 7046530c36)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Avi Kivity 3b38972743 virtio-blk: fix use-after-free while handling scsi commands
The scsi passthrough handler falls through after completing a
request into the failure path, resulting in a use after free.

Reproducible by running a guest with aio=native on a block device.

Reported-by: Stefan Priebe <s.priebe@profihost.ag>
Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 730a9c53b4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Anthony Liguori 36ed337845 qdev: fix use-after-free in the error path of qdev_init_nofail
From Markus:

Before:

    $ qemu-system-x86_64 -display none -drive if=ide
    qemu-system-x86_64: Device needs media, but drive is empty
    qemu-system-x86_64: Initialization of device ide-hd failed
    [Exit 1 ]

After:

    $ qemu-system-x86_64 -display none -drive if=ide
    qemu-system-x86_64: Device needs media, but drive is empty
    Segmentation fault (core dumped)
    [Exit 139 (SIGSEGV)]

This error always existed as qdev_init() frees the object.  But QOM
goes a bit further and purposefully sets the class pointer to NULL to
help find use-after-free.  It worked :-)

Cc: Andreas Faerber <afaerber@suse.de>
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7de3abe505)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:35 -05:00
Jan Kiszka 5a1800cf1c kvmvapic: Disable if there is insufficient memory
We need at least 1M of RAM to map the option ROM. Otherwise, we will
corrupt host memory or even crash:

    $ qemu-system-x86_64 -nodefaults --enable-kvm -vnc :0 -m 640k
    Segmentation fault (core dumped)

Reported-and-tested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
(cherry picked from commit a9605e0317)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:34 -05:00
Christian Borntraeger 97ac3b1af5 s390: Fix error handling and condition code of service call
Invalid sccb addresses will cause a specification or addressing exception.
Let's add those checks. Furthermore, the good case (cc=0) was incorrect
for KVM: we did not set the CC at all. We now use return codes < 0
as program checks and return codes > 0 as condition code values.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 9abf567d95)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:34 -05:00
David Gibson 08e642c00e ppc: Fix bug in handling of PAPR hypercall exits
Currently for powerpc, kvm_arch_handle_exit() always returns 1, meaning
that its caller - kvm_cpu_exec() - will always exit immediately afterwards
to the loop in qemu_kvm_cpu_thread_fn().

There's no need to do this.  Once we've handled the hypercall there's no
reason we can't go straight around and KVM_RUN again, which is what ret = 0
will signal.  The only exception might be for hypercalls which affect the
state of cpu_can_run(), however the only one that might do this is H_CEDE
and for kvm that is always handled in the kernel, not qemu.

Furthermore, setting ret = 0 means that when exit_requested is set from a
hypercall, we will enter KVM_RUN once more with a signal which lets the
kernel do its internal logic to complete the hypercall without
actually executing any more guest code.  This is important if our hypercall
also triggered a reset, which previously would re-initialize everything
without completing the hypercall.  This caused the kernel to get confused
because it thought the guest was still in the middle of a hypercall when
it has actually been reset.

This patch therefore changes to ret = 0, which is both a bugfix and a small
optimization.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Alexander Graf <agraf@suse.de>
(cherry picked from commit 78e8fde26c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:34 -05:00
Peter Maydell 7940c766d4 configure: Don't override user's --cpu on MacOS and Solaris
Both MacOS and Solaris have special case handling for the CPU
type, because the check_define probes will return i386 even if
the hardware is 64 bit and x86_64 would be preferable. Move
these checks earlier in the configure probing so that we can
do them only if the user didn't specify a CPU with --cpu. This
fixes a bug where the user's command line argument was being
ignored.

Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit bbea405080)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:34 -05:00
Anthony Liguori f408f49165 qtest: fix infinite loop when QEMU aborts abruptly
From Markus:

Makes "make check" hang:

    QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 gtester -k --verbose -m=quick tests/crash-test tests/rtc-test
    TEST: tests/crash-test... (pid=972)
    qemu-system-x86_64: Device needs media, but drive is empty
[Nothing happens, wait a while, then hit ^C]
    make: *** [check-qtest-x86_64] Interrupt

This was due to the fact that we weren't checking for errors when
reading from the QMP socket.  This patch adds appropriate error
checking.

Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 039380a8e1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-08-21 15:36:34 -05:00
Michael Roth 785adb09b9 update VERSION for v1.1.1
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-07-12 13:36:53 -05:00
Michael Roth f52d0d639e Merge remote-tracking branch 'agraf/s390-for-upstream-1.1' into HEAD 2012-07-10 14:08:37 -05:00
Alexander Graf 4082e889ee s390x: fix s390 virtio aliases
Some of the virtio devices have the same frontend name, but actually
implement different devices behind the scenes through aliases.

The indicator which device type to use is the architecture. On s390, we
want s390 virtio devices. On everything else, we want PCI devices.

Reflect this in the alias selection code. This way we fix commands like
-device virtio-blk on s390x which with this patch applied select the
correct virtio-blk-s390 device rather than virtio-blk-pci.

Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2012-07-10 18:29:29 +02:00
Jason Wang b7093f294c rtl8139: validate rx ring before receiving packets
Commit ff71f2e8ca prevents a possible
crash during initialization of the Linux driver by checking the operating
mode. This seems too strict as:

- the real card could still work in modes other than normal
- some buggy drivers do not set the correct opmode after eeprom
  access

So, considering that the rx ring address is reset to zero (which can be
safely treated as an address not intended for DMA), in order both to
let old guests work and to prevent unexpected DMA to the guest, we can
forbid packet receiving while the rx ring address is zero.

Tested-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit fcce6fd25f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-29 10:22:30 -05:00
Daniel Verkamp cd63a77e99 ahci: SATA FIS is 20 bytes, not 0x20
As in the SATA and AHCI specifications, a FIS is 5 Dwords of 4 bytes
each, which comes to 20 bytes (decimal), not 0x20.
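In other words, 5 Dwords × 4 bytes = 20 bytes = 0x14, whereas 0x20 would be
32 bytes.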

Signed-off-by: Daniel Verkamp <daniel@drv.nu>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 4bb9c939a5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 10:58:24 -05:00
Stefan Hajnoczi 8456852657 qemu-img: document qed format on qemu-img man page
The qemu-img.1 man page is missing the qed format from its list of
supported formats.  Document the image creation options for qed.

Suggested-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit f085800e24)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:04:33 -05:00
Stefan Weil 7d440f20bd virtio: Fix compiler warning for non Linux hosts
The local variables ret, i are only used if __linux__ is defined.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 47ce9ef7f8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:04:04 -05:00
MORITA Kazutaka feba8ae20b sheepdog: fix return value of do_load_save_vm_state
bdrv_save_vmstate and bdrv_load_vmstate should return the vmstate size
on success, and -errno on error.

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 6f3c714eb7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:03:33 -05:00
Jan Beulich c9c2479289 qemu/xendisk: set maximum number of grants to be used
Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver specific default limit.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 64c27e5b1f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 09:00:42 -05:00
Bruce Rogers 4c45bf61d3 build: install qmp-commands.txt
File is targeted for install, but is never installed.

Signed-off-by: Bruce Rogers <brogers@suse.com>
Signed-off-by: Luiz Capitulino <lcapitulino@redhat.com>
(cherry picked from commit 0cd23fcc0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:50 -05:00
Pavel Hrdina 70d582074f fdc: fix implied seek while there is no media in drive
Windows uses the 'READ' command at the start of an installation
without checking the 'dir' register. We have to abort the transfer
with an abnormal termination if there is no media in the drive.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit c52acf60b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:44 -05:00
Stefan Hajnoczi 0da4c07322 qcow2: fix autoclear image header update
The autoclear feature bits can be used for qcow2 file format features
that are safe to "drop" by old programs that do not understand the
feature.  Upon opening the image file unknown autoclear feature bits are
cleared and the image file header is rewritten, but this was happening
too early in the code when critical header fields were not yet loaded.

Process autoclear feature bits after all necessary header information
has been loaded.

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit af7b708db2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:38 -05:00
Pavel Dovgaluk ee7735fa63 Prevent disk data loss when closing qemu
Prevent disk data loss when closing qemu console window
under Windows 7.

v3. Comment for Sleep() parameter was updated.

Signed-off-by: Pavel Dovgalyuk<pavel.dovgaluk@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b75a02829d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:33 -05:00
Zhi Yong Wu 02fe741375 qcow2: fix endianness conversion
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 87267753a3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:25 -05:00
Jason Baron dbe4ac16bb pci_bridge_dev: fix error path in pci_bridge_dev_initfn()
Currently, we do not properly clean up if pci_bridge_dev_initfn
fails to initialize properly. Make sure to call pci_bridge_exitfn()
in the error path.

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 80aa796bf3)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:19 -05:00
Jason Baron f63e60327b qdev: release parent properties on dc->init failure
While looking into hot-plugging bridges, I can create a qemu segfault via:

$ device_add pci-bridge

Bridge chassis not specified. Each bridge is required to be assigned a unique chassis id > 0.
**
ERROR:qom/object.c:389:object_delete: assertion failed: (obj->ref == 0)

I'm proposing to fix this by adding a call to 'object_unparent()', before the
call to qdev_free(). I see there is already a precedent for this usage pattern as
seen in qdev_simple_unplug_cb():

/* can be used as ->unplug() callback for the simple cases */
int qdev_simple_unplug_cb(DeviceState *dev)
{
    /* just zap it */
    object_unparent(OBJECT(dev));
    qdev_free(dev);
    return 0;
}

Signed-off-by: Jason Baron <jbaron@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 266ca11a04)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:12 -05:00
Jan Kiszka 0ec3907571 intel-hda: Fix reset of MSI function
Call msi_reset on device reset as still required by the core.

CC: Gerd Hoffmann <kraxel@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 8e729e3b52)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:06 -05:00
Jan Kiszka 1658e3cd89 ahci: Fix reset of MSI function
Call msi_reset on device reset as still required by the core.

CC: Alexander Graf <agraf@suse.de>
CC: qemu-stable@nongnu.org
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 868a1a5226)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:55:00 -05:00
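
Both MSI fixes above follow the same pattern: wrap the existing reset handler
so that msi_reset() runs alongside the device reset. For the AHCI case, as in
the ich9-ahci hunk further down:

static void pci_ich9_reset(void *opaque)
{
    struct AHCIPCIState *d = opaque;

    /* the MSI core no longer resets its state implicitly, so the
     * device reset handler has to do it explicitly */
    msi_reset(&d->card);
    ahci_reset(opaque);
}

/* registered in pci_ich9_ahci_init() instead of the bare ahci_reset */
qemu_register_reset(pci_ich9_reset, d);

intel-hda does the equivalent in its existing reset handler, calling
msi_reset(&d->pci) when MSI is enabled.
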
Fernando Luis Vazquez Cao 065436479b rtl8139: honor RxOverflow flag in can_receive method
Some drivers (Linux' 8139too among them) rely on the NIC
injecting an interrupt in the event of a receive buffer overflow
and, accordingly, set the RxOverflow bit in the interrupt
mask. Unfortunately rtl8139's can_receive method ignores the
RxOverflow flag, which may lead to a situation where rtl8139
stops receiving packets (can_receive returns 0) when the receive
buffer becomes full.

If the driver eventually read from the receive buffer or reset
the card, the emulator could recover from this situation. However,
some implementations only do this upon receiving an interrupt
with either RxOK or RxOverflow set in the ISR; an interrupt that
will never come, because QEMU's flow control mechanisms would
prevent rtl8139 from receiving any packets.

Letting packets go through when the overflow interrupt is enabled
makes the QEMU emulator compliant with the spec and solves the
problem.

This patch should fix a relatively common (in our experience)
network stall observed when running enterprise distros with
rtl8139 as the NIC; in some cases the 8139too device driver gets
loaded and when under heavy load the network eventually stops
working.

Reported-by: Hayato Kakuta <kakuta.hayato@oss.ntt.co.jp>
Tested-by: Hayato Kakuta <kakuta.hayato@oss.ntt.co.jp>
Acked-by: Igor Kovalenko <igor.v.kovalenko@gmail.com>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit fee9d348ff)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:54:55 -05:00
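
The essential change in rtl8139_can_receive() for legacy C mode, as it appears
in the rtl8139 hunk further down: a full ring no longer blocks reception when
the guest has unmasked the overflow interrupt.

avail = MOD2(s->RxBufferSize + s->RxBufPtr - s->RxBufAddr,
             s->RxBufferSize);
/* accept the packet even with a full buffer if the driver asked to be
 * told about overflows: delivering it raises RxOverflow and lets the
 * driver recover */
return (avail == 0 || avail >= 1514 || (s->IntrMask & RxOverflow));
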
Stefan Weil f6db26e4f8 configure: Fix build for some versions of glibc (9pfs)
Some versions declare open_by_handle_at, but don't define AT_EMPTY_PATH.
Extend the check in configure to test both preconditions.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Serge Hallyn <serge.hallyn@ubuntu.com>
(cherry picked from commit acc55ba8b1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:54:44 -05:00
Stefan Weil c49dd1bf64 monitor: Fix memory leak with readline completion
Each string which is shown during readline completion in the QEMU monitor
is allocated dynamically but currently never deallocated.

Add the missing loop which calls g_free for the allocated strings.

Signed-off-by: Stefan Weil <sw@weilnetz.de>
Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit fc9fa4bd0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:25 -05:00
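
The missing cleanup is a simple loop over the completion array. A sketch,
assuming the ReadLineState field names used in QEMU's readline code
(completions[], nb_completions); the surrounding function is not shown:

for (i = 0; i < rs->nb_completions; i++) {
    /* each entry was duplicated when it was added to the list, so it
     * has to be released once the list has been displayed */
    g_free(rs->completions[i]);
}
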
Kevin Wolf b4fcb4b499 qcow2: Silence false warning
Some gcc versions seem not to be able to figure out that the switch
statement covers all possible values and that c is therefore always
initialised. Add a default branch for them.

Reported-by: malc <av1474@comtv.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: malc <av1474@comtv.ru>
(cherry picked from commit 1417d7e40e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:20 -05:00
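
The same idiom in a standalone form (the enum and its values are hypothetical),
showing why the extra branch silences the "may be used uninitialised" warning
without changing behaviour:

#include <stdlib.h>

enum cluster_kind { KIND_A, KIND_B };

static int pick(enum cluster_kind k)
{
    int c;

    switch (k) {
    case KIND_A:
        c = 1;
        break;
    case KIND_B:
        c = 2;
        break;
    default:
        /* unreachable, since every enum value is handled above, but the
         * explicit branch stops older gcc from warning that 'c' may be
         * used uninitialised below */
        abort();
    }
    return c;
}

int main(void)
{
    return pick(KIND_A) == 1 ? 0 : 1;
}
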
Jan Kiszka 7672b714b2 kvm: i8254: Fix conversion of in-kernel to userspace state
Due to an offset between the clock used to generate the in-kernel
count_load_time (CLOCK_MONOTONIC) and the clock used for processing this
in userspace (vm_clock), reading back the output of PIT channel 2 via
port 0x61 was broken. One use case that suffered from this was the CPU
frequency calibration of SeaBIOS, which also affected IDE/AHCI timeouts.

This fixes it by calibrating the offset between both clocks on
kvm_pit_get and adjusting the kernel value before saving it in the
userspace state. As the calibration only works while the vm_clock is
running, we cache the in-kernel state across stopped phases.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 0cdd3d1444)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:13 -05:00
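
A standalone illustration of the sampling idea, using CLOCK_REALTIME in place
of vm_clock: take a handful of back-to-back deltas and keep the one with the
smallest magnitude, so scheduling noise between the two reads is filtered out.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define CALIBRATION_ROUNDS 3

static int64_t clock_ns(clockid_t id)
{
    struct timespec ts;

    clock_gettime(id, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static int64_t abs64(int64_t v)
{
    return v < 0 ? -v : v;
}

int main(void)
{
    int64_t best = INT64_MAX;
    int i;

    for (i = 0; i < CALIBRATION_ROUNDS; i++) {
        int64_t off = clock_ns(CLOCK_REALTIME) - clock_ns(CLOCK_MONOTONIC);

        /* a scheduling hiccup between the two reads only ever inflates
         * the delta, so the smallest sample is the most trustworthy */
        if (abs64(off) < abs64(best)) {
            best = off;
        }
    }
    printf("offset: %" PRId64 " ns\n", best);
    return 0;
}
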
Jim Meyering ca09717e8e kvm/apic: correct short memset
kvm_put_apic_state's attempt to clear *kapic before setting its
bits cleared sizeof(void*) bytes (no more than 8) rather than the
intended 1024 (KVM_APIC_REG_SIZE) bytes. Spotted by coverity.

Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
(cherry picked from commit 0614cb82ca)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:53:05 -05:00
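
The pitfall in isolation (the struct name is a stand-in; per the message the
real register block, KVM_APIC_REG_SIZE, is 1024 bytes):

#include <stdio.h>
#include <string.h>

struct lapic_regs {                 /* stand-in for struct kvm_lapic_state */
    char regs[1024];
};

static void clear_regs(struct lapic_regs *kapic)
{
    /* memset(kapic, 0, sizeof(kapic)) would clear only sizeof(void *)
     * bytes, the size of the pointer, not of the structure */
    memset(kapic, 0, sizeof(*kapic));
}

int main(void)
{
    struct lapic_regs r;

    clear_regs(&r);
    printf("pointer: %zu bytes, struct: %zu bytes\n", sizeof(&r), sizeof(r));
    return 0;
}
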
Harsh Prateek Bora 0cc21de484 configure: report missing libraries for virtfs
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 263ddcc81b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:52:57 -05:00
Harsh Prateek Bora 08375616a0 trace/simple.c: fix deprecated glib2 interface
Signed-off-by: Harsh Prateek Bora <harsh@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
(cherry picked from commit 0d665005c7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:52:44 -05:00
Max Filippov b993b863e7 target-xtensa: fix CCOUNT for conditional branches
Taken conditional branches fail to update the CCOUNT register because
the accumulated ccount_delta is reset during translation of the non-taken
branch. To fix this, update CCOUNT only once per conditional branch
instruction translation.

This fixes guest linux freeze on LTP waitpid06 test.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit d865f30739)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:56 -05:00
Max Filippov 07ff37597b exec: fix TB invalidation after breakpoint insertion/deletion
tb_invalidate_phys_addr has to be called with the exact physical address of
the breakpoint we add/remove, not just the page's base address.
Otherwise we easily fail to flush the right TB.

This breakage was introduced by the commit f3705d5329 "memory: make
phys_page_find() return an unadjusted".

This appeared to work for some guest architectures because their
cpu_get_phys_page_debug implementation returns the full translated physical
address, not just the base of the TARGET_PAGE_SIZE-sized page.

Reported-by: TeLeMan <geleman@gmail.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 9d70c4b7b8)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:46 -05:00
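
The whole fix is the OR with the in-page offset, as shown in the exec.c hunk
further down:

static void breakpoint_invalidate(CPUArchState *env, target_ulong pc)
{
    /* cpu_get_phys_page_debug() may only return the page-aligned part of
     * the translation, so add the offset within the page back before
     * invalidating, or the wrong TB may survive */
    tb_invalidate_phys_addr(cpu_get_phys_page_debug(env, pc) |
                            (pc & ~TARGET_PAGE_MASK));
}
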
Max Filippov e77326d99c target-xtensa: add MMU pagewalking tests
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit c305e32f43)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:38 -05:00
Max Filippov 8b3ac66120 target-xtensa: control page table lookup explicitly
Hardware page table walking may not be nested. Stop guessing and pass an
explicit flag to the get_physical_addr_mmu function that controls page
table lookup.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 57705a676c)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:31 -05:00
Max Filippov 2eb4d314ce target-xtensa: update autorefill TLB entries conditionally
This is to avoid interference of internal QEMU helpers
(cpu_get_phys_page_debug, tb_invalidate_virtual_addr) with guest-visible
TLB state.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit ae4e7982e6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:23 -05:00
Max Filippov adda59173c target-xtensa: extract TLB entry setting method
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 16bde77a29)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:13 -05:00
Max Filippov b696aeab6a target-xtensa: update EXCVADDR in case of page table lookup
According to the ISA, section 4.4.2.6, EXCVADDR may be changed by any TLB
miss, even if the miss is handled entirely by processor hardware.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 39e7d37f0f)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:48:06 -05:00
Max Filippov 6514fe5047 target-xtensa: flush TLB page for new MMU mapping
Both the old and the new mapping need flushing because their VPNs may
differ in the MMU case.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit e323bdeff2)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:47:58 -05:00
Christian Borntraeger c63c453889 virtio-blk: Fix geometry sector calculation
Currently the sector value for the geometry is masked, even if the
user uses a command line parameter that explicitly gives a number.
This breaks dasd devices on s390. A dasd device can have
a physical block size of 4096 (the same as its logical block size)
and a typical geometry of 15 heads and 12 sectors per cylinder.
The ibm partition detection relies on a correct geometry being
reported by the device. Unfortunately the current code changes
12 to 8. This would be necessary if the total size were
not a multiple of the logical sector size, but for dasd this
is not the case.

This patch checks the device size and only applies the sector
mask if necessary.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
CC: Christoph Hellwig <hch@lst.de>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit 136be99e6e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2012-06-25 08:47:47 -05:00
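
The heart of the change in virtio_blk_update_config(), as in the virtio-blk
hunk further down: only mask the sector count when the capacity really does
not line up with the requested geometry.

if (bdrv_getlength(s->bs) / heads / secs % blk_size) {
    /* capacity is not cyls * heads * secs * blk_size: fall back to the
     * masked value so the reported geometry stays consistent */
    blkcfg.sectors = secs & ~s->sector_mask;
} else {
    /* geometry already matches the capacity: keep the exact sector
     * value (e.g. 12 for a dasd) that the partition code relies on */
    blkcfg.sectors = secs;
}
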
82 changed files with 1533 additions and 683 deletions


@ -271,6 +271,7 @@ endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) QMP/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 qemu-img.1 "$(DESTDIR)$(mandir)/man1"


@ -1 +1 @@
1.1.0
1.1.2


@ -194,18 +194,19 @@ uint32_t do_arm_semihosting(CPUARMState *env)
if (!(s = lock_user_string(ARG(0))))
/* FIXME - should this error code be -TARGET_EFAULT ? */
return (uint32_t)-1;
if (ARG(1) >= 12)
if (ARG(1) >= 12) {
unlock_user(s, ARG(0), 0);
return (uint32_t)-1;
}
if (strcmp(s, ":tt") == 0) {
if (ARG(1) < 4)
return STDIN_FILENO;
else
return STDOUT_FILENO;
int result_fileno = ARG(1) < 4 ? STDIN_FILENO : STDOUT_FILENO;
unlock_user(s, ARG(0), 0);
return result_fileno;
}
if (use_gdb_syscalls()) {
gdb_do_syscall(arm_semi_cb, "open,%s,%x,1a4", ARG(0),
(int)ARG(2)+1, gdb_open_modeflags[ARG(1)]);
return env->regs[0];
ret = env->regs[0];
} else {
ret = set_swi_errno(ts, open(s, open_modeflags[ARG(1)], 0644));
}


@ -349,21 +349,15 @@ static int winwave_ctl_out (HWVoiceOut *hw, int cmd, ...)
else {
hw->poll_mode = 0;
}
if (wave->paused) {
mr = waveOutRestart (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutRestart");
}
wave->paused = 0;
}
wave->paused = 0;
}
return 0;
case VOICE_DISABLE:
if (!wave->paused) {
mr = waveOutPause (wave->hwo);
mr = waveOutReset (wave->hwo);
if (mr != MMSYSERR_NOERROR) {
winwave_logerr (mr, "waveOutPause");
winwave_logerr (mr, "waveOutReset");
}
else {
wave->paused = 1;


@ -65,10 +65,44 @@ struct IscsiTask {
int complete;
};
static void
iscsi_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
if (acb->task != NULL) {
scsi_free_scsi_task(acb->task);
acb->task = NULL;
}
qemu_aio_release(acb);
}
static void
iscsi_schedule_bh(IscsiAIOCB *acb)
{
if (acb->bh) {
return;
}
acb->bh = qemu_bh_new(iscsi_bh_cb, acb);
qemu_bh_schedule(acb->bh);
}
static void
iscsi_abort_task_cb(struct iscsi_context *iscsi, int status, void *command_data,
void *private_data)
{
IscsiAIOCB *acb = private_data;
acb->status = -ECANCELED;
iscsi_schedule_bh(acb);
}
static void
@ -77,15 +111,19 @@ iscsi_aio_cancel(BlockDriverAIOCB *blockacb)
IscsiAIOCB *acb = (IscsiAIOCB *)blockacb;
IscsiLun *iscsilun = acb->iscsilun;
acb->common.cb(acb->common.opaque, -ECANCELED);
if (acb->status != -EINPROGRESS) {
return;
}
acb->canceled = 1;
/* send a task mgmt call to the target to cancel the task on the target */
iscsi_task_mgmt_abort_task_async(iscsilun->iscsi, acb->task,
iscsi_abort_task_cb, NULL);
iscsi_abort_task_cb, acb);
/* then also cancel the task locally in libiscsi */
iscsi_scsi_task_cancel(iscsilun->iscsi, acb->task);
while (acb->status == -EINPROGRESS) {
qemu_aio_wait();
}
}
static AIOPool iscsi_aio_pool = {
@ -152,34 +190,6 @@ iscsi_process_write(void *arg)
}
static int
iscsi_schedule_bh(QEMUBHFunc *cb, IscsiAIOCB *acb)
{
acb->bh = qemu_bh_new(cb, acb);
if (!acb->bh) {
error_report("oom: could not create iscsi bh");
return -EIO;
}
qemu_bh_schedule(acb->bh);
return 0;
}
static void
iscsi_readv_writev_bh_cb(void *p)
{
IscsiAIOCB *acb = p;
qemu_bh_delete(acb->bh);
if (acb->canceled == 0) {
acb->common.cb(acb->common.opaque, acb->status);
}
qemu_aio_release(acb);
}
static void
iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
void *command_data, void *opaque)
@ -191,9 +201,6 @@ iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
g_free(acb->buf);
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@ -204,9 +211,7 @@ iscsi_aio_write16_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
iscsi_schedule_bh(acb);
}
static int64_t sector_qemu2lun(int64_t sector, IscsiLun *iscsilun)
@ -235,6 +240,8 @@ iscsi_aio_writev(BlockDriverState *bs, int64_t sector_num,
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
/* XXX we should pass the iovec to write16 to avoid the extra copy */
/* this will allow us to get rid of 'buf' completely */
@ -293,9 +300,6 @@ iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
trace_iscsi_aio_read16_cb(iscsi, status, acb, acb->canceled);
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@ -306,9 +310,7 @@ iscsi_aio_read16_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
@ -334,6 +336,8 @@ iscsi_aio_readv(BlockDriverState *bs, int64_t sector_num,
acb->qiov = qiov;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->read_size = qemu_read_size;
acb->buf = NULL;
@ -409,9 +413,6 @@ iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@ -422,9 +423,7 @@ iscsi_synccache10_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
@ -439,6 +438,8 @@ iscsi_aio_flush(BlockDriverState *bs,
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
acb->task = iscsi_synchronizecache10_task(iscsi, iscsilun->lun,
0, 0, 0, 0,
@ -463,9 +464,6 @@ iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
IscsiAIOCB *acb = opaque;
if (acb->canceled != 0) {
qemu_aio_release(acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
return;
}
@ -476,9 +474,7 @@ iscsi_unmap_cb(struct iscsi_context *iscsi, int status,
acb->status = -EIO;
}
iscsi_schedule_bh(iscsi_readv_writev_bh_cb, acb);
scsi_free_scsi_task(acb->task);
acb->task = NULL;
iscsi_schedule_bh(acb);
}
static BlockDriverAIOCB *
@ -495,6 +491,8 @@ iscsi_aio_discard(BlockDriverState *bs,
acb->iscsilun = iscsilun;
acb->canceled = 0;
acb->bh = NULL;
acb->status = -EINPROGRESS;
list[0].lba = sector_qemu2lun(sector_num, iscsilun);
list[0].num = nb_sectors * BDRV_SECTOR_SIZE / iscsilun->block_size;


@ -471,6 +471,8 @@ int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
QCOW_OFLAG_COMPRESSED | QCOW_OFLAG_ZERO);
*cluster_offset &= L2E_OFFSET_MASK;
break;
default:
abort();
}
qcow2_cache_put(bs, s->l2_table_cache, (void**) &l2_table);


@ -367,7 +367,7 @@ static int alloc_refcount_block(BlockDriverState *bs,
}
for(i = 0; i < table_size; i++) {
cpu_to_be64s(&new_table[i]);
be64_to_cpus(&new_table[i]);
}
/* Hook up the new refcount table in the qcow2 header */


@ -298,14 +298,6 @@ static int qcow2_open(BlockDriverState *bs, int flags)
goto fail;
}
if (!bs->read_only && s->autoclear_features != 0) {
s->autoclear_features = 0;
ret = qcow2_update_header(bs);
if (ret < 0) {
goto fail;
}
}
/* Check support for various header values */
if (header.refcount_order != 4) {
report_unsupported(bs, "%d bit reference counts",
@ -411,6 +403,15 @@ static int qcow2_open(BlockDriverState *bs, int flags)
goto fail;
}
/* Clear unknown autoclear feature bits */
if (!bs->read_only && s->autoclear_features != 0) {
s->autoclear_features = 0;
ret = qcow2_update_header(bs);
if (ret < 0) {
goto fail;
}
}
/* Initialise locks */
qemu_co_mutex_init(&s->lock);


@ -489,6 +489,7 @@ static int connect_to_sdog(const char *addr, const char *port)
if (errno == EINTR) {
goto reconnect;
}
close(fd);
break;
}
@ -1957,7 +1958,7 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
int64_t pos, int size, int load)
{
int fd, create;
int ret = 0;
int ret = 0, remaining = size;
unsigned int data_len;
uint64_t vmstate_oid;
uint32_t vdi_index;
@ -1968,11 +1969,11 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
return fd;
}
while (size) {
while (remaining) {
vdi_index = pos / SD_DATA_OBJ_SIZE;
offset = pos % SD_DATA_OBJ_SIZE;
data_len = MIN(size, SD_DATA_OBJ_SIZE);
data_len = MIN(remaining, SD_DATA_OBJ_SIZE);
vmstate_oid = vid_to_vmstate_oid(s->inode.vdi_id, vdi_index);
@ -1993,9 +1994,9 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
}
pos += data_len;
size -= data_len;
ret += data_len;
remaining -= data_len;
}
ret = size;
cleanup:
closesocket(fd);
return ret;


@ -35,6 +35,7 @@
#define VMDK4_FLAG_RGD (1 << 1)
#define VMDK4_FLAG_COMPRESS (1 << 16)
#define VMDK4_FLAG_MARKER (1 << 17)
#define VMDK4_GD_AT_END 0xffffffffffffffffULL
typedef struct {
uint32_t version;
@ -57,8 +58,8 @@ typedef struct {
int64_t desc_offset;
int64_t desc_size;
int32_t num_gtes_per_gte;
int64_t gd_offset;
int64_t rgd_offset;
int64_t gd_offset;
int64_t grain_offset;
char filler[1];
char check_bytes[4];
@ -115,6 +116,13 @@ typedef struct VmdkGrainMarker {
uint8_t data[0];
} VmdkGrainMarker;
enum {
MARKER_END_OF_STREAM = 0,
MARKER_GRAIN_TABLE = 1,
MARKER_GRAIN_DIRECTORY = 2,
MARKER_FOOTER = 3,
};
static int vmdk_probe(const uint8_t *buf, int buf_size, const char *filename)
{
uint32_t magic;
@ -451,6 +459,54 @@ static int vmdk_open_vmdk4(BlockDriverState *bs,
if (header.capacity == 0 && header.desc_offset) {
return vmdk_open_desc_file(bs, flags, header.desc_offset << 9);
}
if (le64_to_cpu(header.gd_offset) == VMDK4_GD_AT_END) {
/*
* The footer takes precedence over the header, so read it in. The
* footer starts at offset -1024 from the end: One sector for the
* footer, and another one for the end-of-stream marker.
*/
struct {
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED footer_marker;
uint32_t magic;
VMDK4Header header;
uint8_t pad[512 - 4 - sizeof(VMDK4Header)];
struct {
uint64_t val;
uint32_t size;
uint32_t type;
uint8_t pad[512 - 16];
} QEMU_PACKED eos_marker;
} QEMU_PACKED footer;
ret = bdrv_pread(file,
bs->file->total_sectors * 512 - 1536,
&footer, sizeof(footer));
if (ret < 0) {
return ret;
}
/* Some sanity checks for the footer */
if (be32_to_cpu(footer.magic) != VMDK4_MAGIC ||
le32_to_cpu(footer.footer_marker.size) != 0 ||
le32_to_cpu(footer.footer_marker.type) != MARKER_FOOTER ||
le64_to_cpu(footer.eos_marker.val) != 0 ||
le32_to_cpu(footer.eos_marker.size) != 0 ||
le32_to_cpu(footer.eos_marker.type) != MARKER_END_OF_STREAM)
{
return -EINVAL;
}
header = footer.header;
}
l1_entry_sectors = le32_to_cpu(header.num_gtes_per_gte)
* le64_to_cpu(header.granularity);
if (l1_entry_sectors == 0) {

configure

@ -275,6 +275,41 @@ EOF
compile_object
}
if check_define __linux__ ; then
targetos="Linux"
elif check_define _WIN32 ; then
targetos='MINGW32'
elif check_define __OpenBSD__ ; then
targetos='OpenBSD'
elif check_define __sun__ ; then
targetos='SunOS'
elif check_define __HAIKU__ ; then
targetos='Haiku'
else
targetos=`uname -s`
fi
# Some host OSes need non-standard checks for which CPU to use.
# Note that these checks are broken for cross-compilation: if you're
# cross-compiling to one of these OSes then you'll need to specify
# the correct CPU with the --cpu option.
case $targetos in
Darwin)
# on Leopard most of the system is 32-bit, so we have to ask the kernel if we can
# run 64-bit userspace code.
# If the user didn't specify a CPU explicitly and the kernel says this is
# 64 bit hw, then assume x86_64. Otherwise fall through to the usual detection code.
if test -z "$cpu" && test "$(sysctl -n hw.optional.x86_64)" = "1"; then
cpu="x86_64"
fi
;;
SunOS)
# `uname -m` returns i86pc even on an x86_64 box, so default based on isainfo
if test -z "$cpu" && test "$(isainfo -k)" = "amd64"; then
cpu="x86_64"
fi
esac
if test ! -z "$cpu" ; then
# command line argument
:
@ -349,19 +384,6 @@ if test -z "$ARCH"; then
fi
# OS specific
if check_define __linux__ ; then
targetos="Linux"
elif check_define _WIN32 ; then
targetos='MINGW32'
elif check_define __OpenBSD__ ; then
targetos='OpenBSD'
elif check_define __sun__ ; then
targetos='SunOS'
elif check_define __HAIKU__ ; then
targetos='Haiku'
else
targetos=`uname -s`
fi
case $targetos in
CYGWIN*)
@ -411,12 +433,6 @@ OpenBSD)
Darwin)
bsd="yes"
darwin="yes"
# on Leopard most of the system is 32-bit, so we have to ask the kernel it if we can
# run 64-bit userspace code
if [ "$cpu" = "i386" ] ; then
is_x86_64=`sysctl -n hw.optional.x86_64`
[ "$is_x86_64" = "1" ] && cpu=x86_64
fi
if [ "$cpu" = "x86_64" ] ; then
QEMU_CFLAGS="-arch x86_64 $QEMU_CFLAGS"
LDFLAGS="-arch x86_64 $LDFLAGS"
@ -437,12 +453,6 @@ SunOS)
smbd="${SMBD-/usr/sfw/sbin/smbd}"
needs_libsunmath="no"
solarisrev=`uname -r | cut -f2 -d.`
# have to select again, because `uname -m` returns i86pc
# even on an x86_64 box.
solariscpu=`isainfo -k`
if test "${solariscpu}" = "amd64" ; then
cpu="x86_64"
fi
if [ "$cpu" = "i386" -o "$cpu" = "x86_64" ] ; then
if test "$solarisrev" -le 9 ; then
if test -f /opt/SUNWspro/prod/lib/libsunmath.so.1; then
@ -2811,7 +2821,11 @@ fi
open_by_hande_at=no
cat > $TMPC << EOF
#include <fcntl.h>
#if !defined(AT_EMPTY_PATH)
# error missing definition
#else
int main(void) { struct file_handle fh; return open_by_handle_at(0, &fh, 0); }
#endif
EOF
if compile_prog "" "" ; then
open_by_handle_at=yes
@ -2915,7 +2929,8 @@ if test "$softmmu" = yes ; then
tools="$tools fsdev/virtfs-proxy-helper\$(EXESUF)"
else
if test "$virtfs" = yes; then
feature_not_found "virtfs"
echo "VirtFS is supported only on Linux and requires libcap-devel and libattr-devel"
exit 1
fi
virtfs=no
fi


@ -847,6 +847,26 @@ static void console_clear_xy(TextConsole *s, int x, int y)
update_xy(s, x, y);
}
/* set cursor, checking bounds */
static void set_cursor(TextConsole *s, int x, int y)
{
if (x < 0) {
x = 0;
}
if (y < 0) {
y = 0;
}
if (y >= s->height) {
y = s->height - 1;
}
if (x >= s->width) {
x = s->width - 1;
}
s->x = x;
s->y = y;
}
static void console_putchar(TextConsole *s, int ch)
{
TextCell *c;
@ -918,7 +938,8 @@ static void console_putchar(TextConsole *s, int ch)
s->esc_params[s->nb_esc_params] * 10 + ch - '0';
}
} else {
s->nb_esc_params++;
if (s->nb_esc_params < MAX_ESC_PARAMS)
s->nb_esc_params++;
if (ch == ';')
break;
#ifdef DEBUG_CONSOLE
@ -932,59 +953,37 @@ static void console_putchar(TextConsole *s, int ch)
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
s->y -= s->esc_params[0];
if (s->y < 0) {
s->y = 0;
}
set_cursor(s, s->x, s->y - s->esc_params[0]);
break;
case 'B':
/* move cursor down */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
s->y += s->esc_params[0];
if (s->y >= s->height) {
s->y = s->height - 1;
}
set_cursor(s, s->x, s->y + s->esc_params[0]);
break;
case 'C':
/* move cursor right */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
s->x += s->esc_params[0];
if (s->x >= s->width) {
s->x = s->width - 1;
}
set_cursor(s, s->x + s->esc_params[0], s->y);
break;
case 'D':
/* move cursor left */
if (s->esc_params[0] == 0) {
s->esc_params[0] = 1;
}
s->x -= s->esc_params[0];
if (s->x < 0) {
s->x = 0;
}
set_cursor(s, s->x - s->esc_params[0], s->y);
break;
case 'G':
/* move cursor to column */
s->x = s->esc_params[0] - 1;
if (s->x < 0) {
s->x = 0;
}
set_cursor(s, s->esc_params[0] - 1, s->y);
break;
case 'f':
case 'H':
/* move cursor to row, column */
s->x = s->esc_params[1] - 1;
if (s->x < 0) {
s->x = 0;
}
s->y = s->esc_params[0] - 1;
if (s->y < 0) {
s->y = 0;
}
set_cursor(s, s->esc_params[1] - 1, s->esc_params[0] - 1);
break;
case 'J':
switch (s->esc_params[0]) {


@ -285,6 +285,12 @@ int cpu_exec(CPUArchState *env)
}
#endif
#if defined(TARGET_I386)
#if !defined(CONFIG_USER_ONLY)
if (interrupt_request & CPU_INTERRUPT_POLL) {
env->interrupt_request &= ~CPU_INTERRUPT_POLL;
apic_poll_irq(env->apic_state);
}
#endif
if (interrupt_request & CPU_INTERRUPT_INIT) {
svm_check_intercept(env, SVM_EXIT_INIT);
do_cpu_init(env);


@ -33,6 +33,7 @@ void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len)
void qemu_sglist_destroy(QEMUSGList *qsg)
{
g_free(qsg->sg);
memset(qsg, 0, sizeof(*qsg));
}
typedef struct {

exec.c

@ -1492,7 +1492,8 @@ void tb_invalidate_phys_addr(target_phys_addr_t addr)
static void breakpoint_invalidate(CPUArchState *env, target_ulong pc)
{
tb_invalidate_phys_addr(cpu_get_phys_page_debug(env, pc));
tb_invalidate_phys_addr(cpu_get_phys_page_debug(env, pc) |
(pc & ~TARGET_PAGE_MASK));
}
#endif
#endif /* TARGET_HAS_ICE */


@ -299,7 +299,6 @@ static void acpi_piix_eject_slot(PIIX4PMState *s, unsigned slots)
if (pc->no_hotplug) {
slot_free = false;
} else {
object_unparent(OBJECT(dev));
qdev_free(qdev);
}
}
@ -346,6 +345,9 @@ static void piix4_reset(void *opaque)
pci_conf[0x5a] = 0;
pci_conf[0x5b] = 0;
pci_conf[0x40] = 0x01; /* PM io base read only bit */
pci_conf[0x80] = 0;
if (s->kvm_enabled) {
/* Mark SMM as already inited (until KVM supports SMM). */
pci_conf[0x5B] = 0x02;
@ -385,8 +387,6 @@ static int piix4_pm_initfn(PCIDevice *dev)
pci_conf[0x09] = 0x00;
pci_conf[0x3d] = 0x01; // interrupt pin 1
pci_conf[0x40] = 0x01; /* PM io base read only bit */
/* APM */
apm_init(&s->apm, apm_ctrl_changed, s);


@ -16,6 +16,7 @@
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>
*/
#include "qemu-thread.h"
#include "apic_internal.h"
#include "apic.h"
#include "ioapic.h"
@ -369,11 +370,10 @@ static void apic_update_irq(APICCommonState *s)
if (!(s->spurious_vec & APIC_SV_ENABLE)) {
return;
}
if (apic_irq_pending(s) > 0) {
if (!qemu_cpu_is_self(s->cpu_env)) {
cpu_interrupt(s->cpu_env, CPU_INTERRUPT_POLL);
} else if (apic_irq_pending(s) > 0) {
cpu_interrupt(s->cpu_env, CPU_INTERRUPT_HARD);
} else if (apic_accept_pic_intr(&s->busdev.qdev) &&
pic_get_output(isa_pic)) {
apic_deliver_pic_intr(&s->busdev.qdev, 1);
}
}
@ -543,6 +543,15 @@ static void apic_deliver(DeviceState *d, uint8_t dest, uint8_t dest_mode,
apic_bus_deliver(deliver_bitmask, delivery_mode, vector_num, trigger_mode);
}
static bool apic_check_pic(APICCommonState *s)
{
if (!apic_accept_pic_intr(&s->busdev.qdev) || !pic_get_output(isa_pic)) {
return false;
}
apic_deliver_pic_intr(&s->busdev.qdev, 1);
return true;
}
int apic_get_interrupt(DeviceState *d)
{
APICCommonState *s = DO_UPCAST(APICCommonState, busdev.qdev, d);
@ -568,7 +577,12 @@ int apic_get_interrupt(DeviceState *d)
reset_bit(s->irr, intno);
set_bit(s->isr, intno);
apic_sync_vapic(s, SYNC_TO_VAPIC);
/* re-inject if there is still a pending PIC interrupt */
apic_check_pic(s);
apic_update_irq(s);
return intno;
}
@ -808,8 +822,11 @@ static void apic_mem_writel(void *opaque, target_phys_addr_t addr, uint32_t val)
{
int n = index - 0x32;
s->lvt[n] = val;
if (n == APIC_LVT_TIMER)
if (n == APIC_LVT_TIMER) {
apic_timer_update(s, qemu_get_clock_ns(vm_clock));
} else if (n == APIC_LVT_LINT0 && apic_check_pic(s)) {
apic_update_irq(s);
}
}
break;
case 0x38:


@ -20,6 +20,7 @@ void apic_init_reset(DeviceState *s);
void apic_sipi(DeviceState *s);
void apic_handle_tpr_access_report(DeviceState *d, target_ulong ip,
TPRAccess access);
void apic_poll_irq(DeviceState *d);
/* pc.c */
int cpu_is_bsp(CPUX86State *env);


@ -289,7 +289,9 @@ static int apic_init_common(SysBusDevice *dev)
sysbus_init_mmio(dev, &s->io_memory);
if (!vapic && s->vapic_control & VAPIC_ENABLE_MASK) {
/* Note: We need at least 1M to map the VAPIC option ROM */
if (!vapic && s->vapic_control & VAPIC_ENABLE_MASK &&
ram_size >= 1024 * 1024) {
vapic = sysbus_create_simple("kvmvapic", -1, NULL);
}
s->vapic = vapic;


@ -141,7 +141,6 @@ void apic_report_irq_delivered(int delivered);
bool apic_next_timer(APICCommonState *s, int64_t current_time);
void apic_enable_tpr_access_reporting(DeviceState *d, bool enable);
void apic_enable_vapic(DeviceState *d, target_phys_addr_t paddr);
void apic_poll_irq(DeviceState *d);
void vapic_report_tpr_access(DeviceState *dev, void *cpu, target_ulong ip,
TPRAccess access);


@ -955,6 +955,7 @@ static TypeInfo arm_gic_info = {
.parent = TYPE_SYS_BUS_DEVICE,
.instance_size = sizeof(gic_state),
.class_init = arm_gic_class_init,
.class_size = sizeof(SysBusDeviceClass),
};
static void arm_gic_register_types(void)


@ -159,6 +159,10 @@ static int fd_seek(FDrive *drv, uint8_t head, uint8_t track, uint8_t sect,
drv->sect = sect;
}
if (drv->bs == NULL || !bdrv_is_inserted(drv->bs)) {
ret = 2;
}
return ret;
}


@ -225,7 +225,6 @@ static int pci_i82378_init(PCIDevice *dev)
pci_register_bar(dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->io);
memory_region_init_io(&s->mem, &i82378_mem_ops, s, "i82378-mem", 0x01000000);
memory_region_set_coalescing(&s->mem);
pci_register_bar(dev, 1, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->mem);
/* Make I/O address read only */


@ -462,7 +462,7 @@ static void ahci_check_cmd_bh(void *opaque)
static void ahci_init_d2h(AHCIDevice *ad)
{
uint8_t init_fis[0x20];
uint8_t init_fis[20];
IDEState *ide_state = &ad->port.ifs[0];
memset(init_fis, 0, sizeof(init_fis));
@ -619,7 +619,7 @@ static void ahci_write_fis_d2h(AHCIDevice *ad, uint8_t *cmd_fis)
d2h_fis[11] = cmd_fis[11];
d2h_fis[12] = cmd_fis[12];
d2h_fis[13] = cmd_fis[13];
for (i = 14; i < 0x20; i++) {
for (i = 14; i < 20; i++) {
d2h_fis[i] = 0;
}
@ -634,7 +634,7 @@ static void ahci_write_fis_d2h(AHCIDevice *ad, uint8_t *cmd_fis)
}
}
static int ahci_populate_sglist(AHCIDevice *ad, QEMUSGList *sglist)
static int ahci_populate_sglist(AHCIDevice *ad, QEMUSGList *sglist, int offset)
{
AHCICmdHdr *cmd = ad->cur_cmd;
uint32_t opts = le32_to_cpu(cmd->opts);
@ -645,6 +645,10 @@ static int ahci_populate_sglist(AHCIDevice *ad, QEMUSGList *sglist)
uint8_t *prdt;
int i;
int r = 0;
int sum = 0;
int off_idx = -1;
int off_pos = -1;
int tbl_entry_size;
if (!sglist_alloc_hint) {
DPRINTF(ad->port_no, "no sg list given by guest: 0x%08x\n", opts);
@ -666,9 +670,30 @@ static int ahci_populate_sglist(AHCIDevice *ad, QEMUSGList *sglist)
/* Get entries in the PRDT, init a qemu sglist accordingly */
if (sglist_alloc_hint > 0) {
AHCI_SG *tbl = (AHCI_SG *)prdt;
qemu_sglist_init(sglist, sglist_alloc_hint);
sum = 0;
for (i = 0; i < sglist_alloc_hint; i++) {
/* flags_size is zero-based */
tbl_entry_size = (le32_to_cpu(tbl[i].flags_size) + 1);
if (offset <= (sum + tbl_entry_size)) {
off_idx = i;
off_pos = offset - sum;
break;
}
sum += tbl_entry_size;
}
if ((off_idx == -1) || (off_pos < 0) || (off_pos > tbl_entry_size)) {
DPRINTF(ad->port_no, "%s: Incorrect offset! "
"off_idx: %d, off_pos: %d\n",
__func__, off_idx, off_pos);
r = -1;
goto out;
}
qemu_sglist_init(sglist, (sglist_alloc_hint - off_idx));
qemu_sglist_add(sglist, le64_to_cpu(tbl[off_idx].addr + off_pos),
le32_to_cpu(tbl[off_idx].flags_size) + 1 - off_pos);
for (i = off_idx + 1; i < sglist_alloc_hint; i++) {
/* flags_size is zero-based */
qemu_sglist_add(sglist, le64_to_cpu(tbl[i].addr),
le32_to_cpu(tbl[i].flags_size) + 1);
@ -741,7 +766,7 @@ static void process_ncq_command(AHCIState *s, int port, uint8_t *cmd_fis,
ncq_tfs->lba, ncq_tfs->lba + ncq_tfs->sector_count - 2,
s->dev[port].port.ifs[0].nb_sectors - 1);
ahci_populate_sglist(&s->dev[port], &ncq_tfs->sglist);
ahci_populate_sglist(&s->dev[port], &ncq_tfs->sglist, 0);
ncq_tfs->tag = tag;
switch(ncq_fis->command) {
@ -964,7 +989,7 @@ static int ahci_start_transfer(IDEDMA *dma)
goto out;
}
if (!ahci_populate_sglist(ad, &s->sg)) {
if (!ahci_populate_sglist(ad, &s->sg, 0)) {
has_sglist = 1;
}
@ -1009,6 +1034,7 @@ static void ahci_start_dma(IDEDMA *dma, IDEState *s,
DPRINTF(ad->port_no, "\n");
ad->dma_cb = dma_cb;
ad->dma_status |= BM_STATUS_DMAING;
s->io_buffer_offset = 0;
dma_cb(s, 0);
}
@ -1017,7 +1043,7 @@ static int ahci_dma_prepare_buf(IDEDMA *dma, int is_write)
AHCIDevice *ad = DO_UPCAST(AHCIDevice, dma, dma);
IDEState *s = &ad->port.ifs[0];
ahci_populate_sglist(ad, &s->sg);
ahci_populate_sglist(ad, &s->sg, 0);
s->io_buffer_size = s->sg.size;
DPRINTF(ad->port_no, "len=%#x\n", s->io_buffer_size);
@ -1031,7 +1057,7 @@ static int ahci_dma_rw_buf(IDEDMA *dma, int is_write)
uint8_t *p = s->io_buffer + s->io_buffer_index;
int l = s->io_buffer_size - s->io_buffer_index;
if (ahci_populate_sglist(ad, &s->sg)) {
if (ahci_populate_sglist(ad, &s->sg, s->io_buffer_offset)) {
return 0;
}
@ -1041,9 +1067,13 @@ static int ahci_dma_rw_buf(IDEDMA *dma, int is_write)
dma_buf_write(p, l, &s->sg);
}
/* free sglist that was created in ahci_populate_sglist() */
qemu_sglist_destroy(&s->sg);
/* update number of transferred bytes */
ad->cur_cmd->status = cpu_to_le32(le32_to_cpu(ad->cur_cmd->status) + l);
s->io_buffer_index += l;
s->io_buffer_offset += l;
DPRINTF(ad->port_no, "len=%#x\n", l);


@ -84,6 +84,14 @@ static const VMStateDescription vmstate_ahci = {
.unmigratable = 1,
};
static void pci_ich9_reset(void *opaque)
{
struct AHCIPCIState *d = opaque;
msi_reset(&d->card);
ahci_reset(opaque);
}
static int pci_ich9_ahci_init(PCIDevice *dev)
{
struct AHCIPCIState *d;
@ -102,7 +110,7 @@ static int pci_ich9_ahci_init(PCIDevice *dev)
/* XXX Software should program this register */
d->card.config[0x90] = 1 << 6; /* Address Map Register - AHCI mode */
qemu_register_reset(ahci_reset, d);
qemu_register_reset(pci_ich9_reset, d);
msi_init(dev, 0x50, 1, true, false);
d->ahci.irq = d->card.irq[0];
@ -133,7 +141,7 @@ static int pci_ich9_uninit(PCIDevice *dev)
d = DO_UPCAST(struct AHCIPCIState, card, dev);
msi_uninit(dev);
qemu_unregister_reset(ahci_reset, d);
qemu_unregister_reset(pci_ich9_reset, d);
ahci_uninit(&d->ahci);
return 0;


@ -389,6 +389,7 @@ struct IDEState {
struct iovec iov;
QEMUIOVector qiov;
/* ATA DMA state */
int io_buffer_offset;
int io_buffer_size;
QEMUSGList sg;
/* PIO transfer handling */


@ -1107,6 +1107,9 @@ static void intel_hda_reset(DeviceState *dev)
DeviceState *qdev;
HDACodecDevice *cdev;
if (d->msi) {
msi_reset(&d->pci);
}
intel_hda_regs_reset(d);
d->wall_base_ns = qemu_get_clock_ns(vm_clock);


@ -351,6 +351,10 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
{
int i, guest_curr_max;
if (!ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
return;
}
guest_curr_max = s->peers[posn].nb_eventfds;
for (i = 0; i < guest_curr_max; i++) {
@ -363,22 +367,6 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
s->peers[posn].nb_eventfds = 0;
}
static void setup_ioeventfds(IVShmemState *s) {
int i, j;
for (i = 0; i <= s->max_peer; i++) {
for (j = 0; j < s->peers[i].nb_eventfds; j++) {
memory_region_add_eventfd(&s->ivshmem_mmio,
DOORBELL,
4,
true,
(i << 16) | j,
s->peers[i].eventfds[j]);
}
}
}
/* this function increase the dynamic storage need to store data about other
* guests */
static void increase_dynamic_storage(IVShmemState *s, int new_min_size) {
@ -685,10 +673,6 @@ static int pci_ivshmem_init(PCIDevice *dev)
memory_region_init_io(&s->ivshmem_mmio, &ivshmem_mmio_ops, s,
"ivshmem-mmio", IVSHMEM_REG_BAR_SIZE);
if (ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
setup_ioeventfds(s);
}
/* region for registers*/
pci_register_bar(&s->dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY,
&s->ivshmem_mmio);


@ -29,7 +29,7 @@ void kvm_put_apic_state(DeviceState *d, struct kvm_lapic_state *kapic)
APICCommonState *s = DO_UPCAST(APICCommonState, busdev.qdev, d);
int i;
memset(kapic, 0, sizeof(kapic));
memset(kapic, 0, sizeof(*kapic));
kvm_apic_set_reg(kapic, 0x2, s->id << 24);
kvm_apic_set_reg(kapic, 0x8, s->tpr);
kvm_apic_set_reg(kapic, 0xd, s->log_dest << 24);


@ -23,31 +23,71 @@
* THE SOFTWARE.
*/
#include "qemu-timer.h"
#include "sysemu.h"
#include "hw/i8254.h"
#include "hw/i8254_internal.h"
#include "kvm.h"
#define KVM_PIT_REINJECT_BIT 0
#define CALIBRATION_ROUNDS 3
typedef struct KVMPITState {
PITCommonState pit;
LostTickPolicy lost_tick_policy;
bool vm_stopped;
int64_t kernel_clock_offset;
} KVMPITState;
static void kvm_pit_get(PITCommonState *s)
static int64_t abs64(int64_t v)
{
return v < 0 ? -v : v;
}
static void kvm_pit_update_clock_offset(KVMPITState *s)
{
int64_t offset, clock_offset;
struct timespec ts;
int i;
/*
* Measure the delta between CLOCK_MONOTONIC, the base used for
* kvm_pit_channel_state::count_load_time, and vm_clock. Take the
* minimum of several samples to filter out scheduling noise.
*/
clock_offset = INT64_MAX;
for (i = 0; i < CALIBRATION_ROUNDS; i++) {
offset = qemu_get_clock_ns(vm_clock);
clock_gettime(CLOCK_MONOTONIC, &ts);
offset -= ts.tv_nsec;
offset -= (int64_t)ts.tv_sec * 1000000000;
if (abs64(offset) < abs64(clock_offset)) {
clock_offset = offset;
}
}
s->kernel_clock_offset = clock_offset;
}
static void kvm_pit_get(PITCommonState *pit)
{
KVMPITState *s = DO_UPCAST(KVMPITState, pit, pit);
struct kvm_pit_state2 kpit;
struct kvm_pit_channel_state *kchan;
struct PITChannelState *sc;
int i, ret;
/* No need to re-read the state if VM is stopped. */
if (s->vm_stopped) {
return;
}
if (kvm_has_pit_state2()) {
ret = kvm_vm_ioctl(kvm_state, KVM_GET_PIT2, &kpit);
if (ret < 0) {
fprintf(stderr, "KVM_GET_PIT2 failed: %s\n", strerror(ret));
abort();
}
s->channels[0].irq_disabled = kpit.flags & KVM_PIT_FLAGS_HPET_LEGACY;
pit->channels[0].irq_disabled = kpit.flags & KVM_PIT_FLAGS_HPET_LEGACY;
} else {
/*
* kvm_pit_state2 is superset of kvm_pit_state struct,
@ -61,7 +101,7 @@ static void kvm_pit_get(PITCommonState *s)
}
for (i = 0; i < 3; i++) {
kchan = &kpit.channels[i];
sc = &s->channels[i];
sc = &pit->channels[i];
sc->count = kchan->count;
sc->latched_count = kchan->latched_count;
sc->count_latched = kchan->count_latched;
@ -74,25 +114,31 @@ static void kvm_pit_get(PITCommonState *s)
sc->mode = kchan->mode;
sc->bcd = kchan->bcd;
sc->gate = kchan->gate;
sc->count_load_time = kchan->count_load_time;
sc->count_load_time = kchan->count_load_time + s->kernel_clock_offset;
}
sc = &s->channels[0];
sc = &pit->channels[0];
sc->next_transition_time =
pit_get_next_transition_time(sc, sc->count_load_time);
}
static void kvm_pit_put(PITCommonState *s)
static void kvm_pit_put(PITCommonState *pit)
{
KVMPITState *s = DO_UPCAST(KVMPITState, pit, pit);
struct kvm_pit_state2 kpit;
struct kvm_pit_channel_state *kchan;
struct PITChannelState *sc;
int i, ret;
kpit.flags = s->channels[0].irq_disabled ? KVM_PIT_FLAGS_HPET_LEGACY : 0;
/* The offset keeps changing as long as the VM is stopped. */
if (s->vm_stopped) {
kvm_pit_update_clock_offset(s);
}
kpit.flags = pit->channels[0].irq_disabled ? KVM_PIT_FLAGS_HPET_LEGACY : 0;
for (i = 0; i < 3; i++) {
kchan = &kpit.channels[i];
sc = &s->channels[i];
sc = &pit->channels[i];
kchan->count = sc->count;
kchan->latched_count = sc->latched_count;
kchan->count_latched = sc->count_latched;
@ -105,7 +151,7 @@ static void kvm_pit_put(PITCommonState *s)
kchan->mode = sc->mode;
kchan->bcd = sc->bcd;
kchan->gate = sc->gate;
kchan->count_load_time = sc->count_load_time;
kchan->count_load_time = sc->count_load_time - s->kernel_clock_offset;
}
ret = kvm_vm_ioctl(kvm_state,
@ -173,6 +219,21 @@ static void kvm_pit_irq_control(void *opaque, int n, int enable)
kvm_pit_put(pit);
}
static void kvm_pit_vm_state_change(void *opaque, int running,
RunState state)
{
KVMPITState *s = opaque;
if (running) {
kvm_pit_update_clock_offset(s);
s->vm_stopped = false;
} else {
kvm_pit_update_clock_offset(s);
kvm_pit_get(&s->pit);
s->vm_stopped = true;
}
}
static int kvm_pit_initfn(PITCommonState *pit)
{
KVMPITState *s = DO_UPCAST(KVMPITState, pit, pit);
@ -215,6 +276,8 @@ static int kvm_pit_initfn(PITCommonState *pit)
qdev_init_gpio_in(&pit->dev.qdev, kvm_pit_irq_control, 1);
qemu_add_vm_change_state_handler(kvm_pit_vm_state_change, s);
return 0;
}


@ -282,6 +282,15 @@ static void msix_free_irq_entries(PCIDevice *dev)
}
}
static void msix_clear_all_vectors(PCIDevice *dev)
{
int vector;
for (vector = 0; vector < dev->msix_entries_nr; ++vector) {
msix_clr_pending(dev, vector);
}
}
/* Clean up resources for the device. */
int msix_uninit(PCIDevice *dev, MemoryRegion *bar)
{
@ -322,7 +331,7 @@ void msix_load(PCIDevice *dev, QEMUFile *f)
return;
}
msix_free_irq_entries(dev);
msix_clear_all_vectors(dev);
qemu_get_buffer(f, dev->msix_table_page, n * PCI_MSIX_ENTRY_SIZE);
qemu_get_buffer(f, dev->msix_table_page + MSIX_PAGE_PENDING, (n + 7) / 8);
msix_update_function_masked(dev);
@ -372,7 +381,7 @@ void msix_reset(PCIDevice *dev)
{
if (!(dev->cap_present & QEMU_PCI_CAP_MSIX))
return;
msix_free_irq_entries(dev);
msix_clear_all_vectors(dev);
dev->config[dev->msix_cap + MSIX_CONTROL_OFFSET] &=
~dev->wmask[dev->msix_cap + MSIX_CONTROL_OFFSET];
memset(dev->msix_table_page, 0, MSIX_PAGE_SIZE);

hw/pc.c

@ -344,32 +344,37 @@ void pc_cmos_init(ram_addr_t ram_size, ram_addr_t above_4g_mem_size,
/* various important CMOS locations needed by PC/Bochs bios */
/* memory size */
val = 640; /* base memory in K */
/* base memory (first MiB) */
val = MIN(ram_size / 1024, 640);
rtc_set_memory(s, 0x15, val);
rtc_set_memory(s, 0x16, val >> 8);
val = (ram_size / 1024) - 1024;
/* extended memory (next 64MiB) */
if (ram_size > 1024 * 1024) {
val = (ram_size - 1024 * 1024) / 1024;
} else {
val = 0;
}
if (val > 65535)
val = 65535;
rtc_set_memory(s, 0x17, val);
rtc_set_memory(s, 0x18, val >> 8);
rtc_set_memory(s, 0x30, val);
rtc_set_memory(s, 0x31, val >> 8);
if (above_4g_mem_size) {
rtc_set_memory(s, 0x5b, (unsigned int)above_4g_mem_size >> 16);
rtc_set_memory(s, 0x5c, (unsigned int)above_4g_mem_size >> 24);
rtc_set_memory(s, 0x5d, (uint64_t)above_4g_mem_size >> 32);
}
if (ram_size > (16 * 1024 * 1024))
val = (ram_size / 65536) - ((16 * 1024 * 1024) / 65536);
else
/* memory between 16MiB and 4GiB */
if (ram_size > 16 * 1024 * 1024) {
val = (ram_size - 16 * 1024 * 1024) / 65536;
} else {
val = 0;
}
if (val > 65535)
val = 65535;
rtc_set_memory(s, 0x34, val);
rtc_set_memory(s, 0x35, val >> 8);
/* memory above 4GiB */
val = above_4g_mem_size / 65536;
rtc_set_memory(s, 0x5b, val);
rtc_set_memory(s, 0x5c, val >> 8);
rtc_set_memory(s, 0x5d, val >> 16);
/* set the number of CPU */
rtc_set_memory(s, 0x5f, smp_cpus - 1);


@ -52,7 +52,7 @@ static int pci_bridge_dev_initfn(PCIDevice *dev)
{
PCIBridge *br = DO_UPCAST(PCIBridge, dev, dev);
PCIBridgeDev *bridge_dev = DO_UPCAST(PCIBridgeDev, bridge, br);
int err;
int err, ret;
pci_bridge_map_irq(br, NULL, pci_bridge_dev_map_irq_fn);
err = pci_bridge_initfn(dev);
if (err) {
@ -86,6 +86,8 @@ slotid_error:
shpc_cleanup(dev, &bridge_dev->bar);
shpc_error:
memory_region_destroy(&bridge_dev->bar);
ret = pci_bridge_exitfn(dev);
assert(!ret);
bridge_error:
return err;
}


@ -20,6 +20,7 @@
#include "qdev.h"
#include "monitor.h"
#include "qmp-commands.h"
#include "arch_init.h"
/*
* Aliases were a bad idea from the start. Let's keep them
@ -29,16 +30,18 @@ typedef struct QDevAlias
{
const char *typename;
const char *alias;
uint32_t arch_mask;
} QDevAlias;
static const QDevAlias qdev_alias_table[] = {
{ "virtio-blk-pci", "virtio-blk" },
{ "virtio-net-pci", "virtio-net" },
{ "virtio-serial-pci", "virtio-serial" },
{ "virtio-balloon-pci", "virtio-balloon" },
{ "virtio-blk-s390", "virtio-blk" },
{ "virtio-net-s390", "virtio-net" },
{ "virtio-serial-s390", "virtio-serial" },
{ "virtio-blk-pci", "virtio-blk", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
{ "virtio-net-pci", "virtio-net", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
{ "virtio-serial-pci", "virtio-serial", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
{ "virtio-balloon-pci", "virtio-balloon",
QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
{ "virtio-blk-s390", "virtio-blk", QEMU_ARCH_S390X },
{ "virtio-net-s390", "virtio-net", QEMU_ARCH_S390X },
{ "virtio-serial-s390", "virtio-serial", QEMU_ARCH_S390X },
{ "lsi53c895a", "lsi" },
{ "ich9-ahci", "ahci" },
{ }
@ -50,6 +53,11 @@ static const char *qdev_class_get_alias(DeviceClass *dc)
int i;
for (i = 0; qdev_alias_table[i].typename; i++) {
if (qdev_alias_table[i].arch_mask &&
!(qdev_alias_table[i].arch_mask & arch_type)) {
continue;
}
if (strcmp(qdev_alias_table[i].typename, typename) == 0) {
return qdev_alias_table[i].alias;
}
@ -110,6 +118,11 @@ static const char *find_typename_by_alias(const char *alias)
int i;
for (i = 0; qdev_alias_table[i].alias; i++) {
if (qdev_alias_table[i].arch_mask &&
!(qdev_alias_table[i].arch_mask & arch_type)) {
continue;
}
if (strcmp(qdev_alias_table[i].alias, alias) == 0) {
return qdev_alias_table[i].typename;
}


@ -240,7 +240,6 @@ void qbus_reset_all_fn(void *opaque)
int qdev_simple_unplug_cb(DeviceState *dev)
{
/* just zap it */
object_unparent(OBJECT(dev));
qdev_free(dev);
return 0;
}
@ -255,9 +254,10 @@ int qdev_simple_unplug_cb(DeviceState *dev)
way is somewhat unclean, and best avoided. */
void qdev_init_nofail(DeviceState *dev)
{
const char *typename = object_get_typename(OBJECT(dev));
if (qdev_init(dev) < 0) {
error_report("Initialization of device %s failed",
object_get_typename(OBJECT(dev)));
error_report("Initialization of device %s failed", typename);
exit(1);
}
}


@ -781,6 +781,13 @@ static inline dma_addr_t rtl8139_addr64(uint32_t low, uint32_t high)
#endif
}
/* Workaround for buggy guest driver such as linux who allocates rx
* rings after the receiver were enabled. */
static bool rtl8139_cp_rx_valid(RTL8139State *s)
{
return !(s->RxRingAddrLO == 0 && s->RxRingAddrHI == 0);
}
static int rtl8139_can_receive(VLANClientState *nc)
{
RTL8139State *s = DO_UPCAST(NICState, nc, nc)->opaque;
@ -791,18 +798,15 @@ static int rtl8139_can_receive(VLANClientState *nc)
return 1;
if (!rtl8139_receiver_enabled(s))
return 1;
/* network/host communication happens only in normal mode */
if ((s->Cfg9346 & Chip9346_op_mask) != Cfg9346_Normal)
return 0;
if (rtl8139_cp_receiver_enabled(s)) {
if (rtl8139_cp_receiver_enabled(s) && rtl8139_cp_rx_valid(s)) {
/* ??? Flow control not implemented in c+ mode.
This is a hack to work around slirp deficiencies anyway. */
return 1;
} else {
avail = MOD2(s->RxBufferSize + s->RxBufPtr - s->RxBufAddr,
s->RxBufferSize);
return (avail == 0 || avail >= 1514);
return (avail == 0 || avail >= 1514 || (s->IntrMask & RxOverflow));
}
}
@ -836,12 +840,6 @@ static ssize_t rtl8139_do_receive(VLANClientState *nc, const uint8_t *buf, size_
return -1;
}
/* check whether we are in normal mode */
if ((s->Cfg9346 & Chip9346_op_mask) != Cfg9346_Normal) {
DPRINTF("not in normal op mode\n");
return -1;
}
/* XXX: check this */
if (s->RxConfig & AcceptAllPhys) {
/* promiscuous: receive all */
@ -946,6 +944,10 @@ static ssize_t rtl8139_do_receive(VLANClientState *nc, const uint8_t *buf, size_
if (rtl8139_cp_receiver_enabled(s))
{
if (!rtl8139_cp_rx_valid(s)) {
return size;
}
DPRINTF("in C+ Rx mode ================\n");
/* begin C+ receiver mode */


@ -27,10 +27,23 @@ static struct BusInfo usb_bus_info = {
static int next_usb_bus = 0;
static QTAILQ_HEAD(, USBBus) busses = QTAILQ_HEAD_INITIALIZER(busses);
static int usb_device_post_load(void *opaque, int version_id)
{
USBDevice *dev = opaque;
if (dev->state == USB_STATE_NOTATTACHED) {
dev->attached = 0;
} else {
dev->attached = 1;
}
return 0;
}
const VMStateDescription vmstate_usb_device = {
.name = "USBDevice",
.version_id = 1,
.minimum_version_id = 1,
.post_load = usb_device_post_load,
.fields = (VMStateField []) {
VMSTATE_UINT8(addr, USBDevice),
VMSTATE_INT32(state, USBDevice),


@ -348,6 +348,7 @@ struct EHCIQueue {
QTAILQ_ENTRY(EHCIQueue) next;
uint32_t seen;
uint64_t ts;
int revalidate;
/* cached data from guest - needs to be flushed
* when guest removes an entry (doorbell, handshake sequence)
@ -695,7 +696,18 @@ static EHCIQueue *ehci_find_queue_by_qh(EHCIState *ehci, uint32_t addr,
return NULL;
}
static void ehci_queues_rip_unused(EHCIState *ehci, int async, int flush)
static void ehci_queues_tag_unused_async(EHCIState *ehci)
{
EHCIQueue *q;
QTAILQ_FOREACH(q, &ehci->aqueues, next) {
if (!q->seen) {
q->revalidate = 1;
}
}
}
static void ehci_queues_rip_unused(EHCIState *ehci, int async)
{
EHCIQueueHead *head = async ? &ehci->aqueues : &ehci->pqueues;
EHCIQueue *q, *tmp;
@ -706,7 +718,7 @@ static void ehci_queues_rip_unused(EHCIState *ehci, int async, int flush)
q->ts = ehci->last_run_ns;
continue;
}
if (!flush && ehci->last_run_ns < q->ts + 250000000) {
if (ehci->last_run_ns < q->ts + 250000000) {
/* allow 0.25 sec idle */
continue;
}
@ -1064,6 +1076,12 @@ static void ehci_mem_writel(void *ptr, target_phys_addr_t addr, uint32_t val)
/* Do any register specific pre-write processing here. */
switch(addr) {
case USBCMD:
if (val & USBCMD_HCRESET) {
ehci_reset(s);
val = s->usbcmd;
break;
}
if ((val & USBCMD_RUNSTOP) && !(s->usbcmd & USBCMD_RUNSTOP)) {
qemu_mod_timer(s->frame_timer, qemu_get_clock_ns(vm_clock));
SET_LAST_RUN_CLOCK(s);
@ -1077,10 +1095,6 @@ static void ehci_mem_writel(void *ptr, target_phys_addr_t addr, uint32_t val)
ehci_set_usbsts(s, USBSTS_HALT);
}
if (val & USBCMD_HCRESET) {
ehci_reset(s);
val = s->usbcmd;
}
/* not supporting dynamic frame list size at the moment */
if ((val & USBCMD_FLS) && !(s->usbcmd & USBCMD_FLS)) {
@ -1451,7 +1465,7 @@ static int ehci_process_itd(EHCIState *ehci,
dev = ehci_find_device(ehci, devaddr);
ep = usb_ep_get(dev, pid, endp);
if (ep->type == USB_ENDPOINT_XFER_ISOC) {
if (ep && ep->type == USB_ENDPOINT_XFER_ISOC) {
usb_packet_setup(&ehci->ipacket, pid, ep);
usb_packet_map(&ehci->ipacket, &ehci->isgl);
ret = usb_handle_packet(dev, &ehci->ipacket);
@ -1519,7 +1533,7 @@ static int ehci_state_waitlisthead(EHCIState *ehci, int async)
ehci_set_usbsts(ehci, USBSTS_REC);
}
ehci_queues_rip_unused(ehci, async, 0);
ehci_queues_rip_unused(ehci, async);
/* Find the head of the list (4.9.1.1) */
for(i = 0; i < MAX_QH; i++) {
@ -1603,6 +1617,7 @@ static EHCIQueue *ehci_state_fetchqh(EHCIState *ehci, int async)
{
uint32_t entry;
EHCIQueue *q;
EHCIqh qh;
entry = ehci_get_fetch_addr(ehci, async);
q = ehci_find_queue_by_qh(ehci, entry, async);
@ -1620,7 +1635,16 @@ static EHCIQueue *ehci_state_fetchqh(EHCIState *ehci, int async)
}
get_dwords(ehci, NLPTR_GET(q->qhaddr),
(uint32_t *) &q->qh, sizeof(EHCIqh) >> 2);
(uint32_t *) &qh, sizeof(EHCIqh) >> 2);
if (q->revalidate && (q->qh.epchar != qh.epchar ||
q->qh.epcap != qh.epcap ||
q->qh.current_qtd != qh.current_qtd)) {
ehci_free_queue(q, async);
q = ehci_alloc_queue(ehci, async);
q->seen++;
}
q->qh = qh;
q->revalidate = 0;
ehci_trace_qh(q, NLPTR_GET(q->qhaddr), &q->qh);
if (q->async == EHCI_ASYNC_INFLIGHT) {
@ -2045,7 +2069,7 @@ static void ehci_advance_async_state(EHCIState *ehci)
*/
if (ehci->usbcmd & USBCMD_IAAD) {
/* Remove all unseen qhs from the async qhs queue */
ehci_queues_rip_unused(ehci, async, 1);
ehci_queues_tag_unused_async(ehci);
DPRINTF("ASYNC: doorbell request acknowledged\n");
ehci->usbcmd &= ~USBCMD_IAAD;
ehci_set_interrupt(ehci, USBSTS_IAA);
@ -2100,7 +2124,7 @@ static void ehci_advance_periodic_state(EHCIState *ehci)
ehci_set_fetch_addr(ehci, async,entry);
ehci_set_state(ehci, async, EST_FETCHENTRY);
ehci_advance_state(ehci, async);
ehci_queues_rip_unused(ehci, async, 0);
ehci_queues_rip_unused(ehci, async);
break;
default:
@ -2300,6 +2324,7 @@ static int usb_ehci_initfn(PCIDevice *dev)
s->frame_timer = qemu_new_timer_ns(vm_clock, ehci_frame_timer, s);
QTAILQ_INIT(&s->aqueues);
QTAILQ_INIT(&s->pqueues);
usb_packet_init(&s->ipacket);
qemu_register_reset(ehci_reset, s);


@ -288,10 +288,10 @@ static void uhci_async_cancel_device(UHCIState *s, USBDevice *dev)
static void uhci_async_cancel_all(UHCIState *s)
{
UHCIQueue *queue;
UHCIQueue *queue, *nq;
UHCIAsync *curr, *n;
QTAILQ_FOREACH(queue, &s->queues, next) {
QTAILQ_FOREACH_SAFE(queue, &s->queues, next, nq) {
QTAILQ_FOREACH_SAFE(curr, &queue->asyncs, next, n) {
uhci_async_unlink(curr);
uhci_async_cancel(curr);


@ -1031,6 +1031,8 @@ static int usbredir_handle_status(USBRedirDevice *dev,
case usb_redir_inval:
WARNING("got invalid param error from usb-host?\n");
return USB_RET_NAK;
case usb_redir_babble:
return USB_RET_BABBLE;
case usb_redir_ioerror:
case usb_redir_timeout:
default:


@ -147,9 +147,11 @@ static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s)
static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
{
#ifdef __linux__
int ret;
int status = VIRTIO_BLK_S_OK;
int i;
#endif
int status = VIRTIO_BLK_S_OK;
/*
* We require at least one output segment each for the virtio_blk_outhdr
@ -251,6 +253,7 @@ static void virtio_blk_handle_scsi(VirtIOBlockReq *req)
virtio_blk_req_complete(req, status);
g_free(req);
return;
#else
abort();
#endif
@ -489,7 +492,22 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
stw_raw(&blkcfg.min_io_size, s->conf->min_io_size / blk_size);
stw_raw(&blkcfg.opt_io_size, s->conf->opt_io_size / blk_size);
blkcfg.heads = heads;
blkcfg.sectors = secs & ~s->sector_mask;
/*
* We must ensure that the block device capacity is a multiple of
* the logical block size. If that is not the case, lets use
* sector_mask to adopt the geometry to have a correct picture.
* For those devices where the capacity is ok for the given geometry
* we dont touch the sector value of the geometry, since some devices
* (like s390 dasd) need a specific value. Here the capacity is already
* cyls*heads*secs*blk_size and the sector value is not block size
* divided by 512 - instead it is the amount of blk_size blocks
* per track (cylinder).
*/
if (bdrv_getlength(s->bs) / heads / secs % blk_size) {
blkcfg.sectors = secs & ~s->sector_mask;
} else {
blkcfg.sectors = secs;
}
blkcfg.size_max = 0;
blkcfg.physical_block_exp = get_physical_block_exp(s->conf);
blkcfg.alignment_offset = 0;


@ -130,6 +130,7 @@ static int virtio_pci_load_config(void * opaque, QEMUFile *f)
if (ret) {
return ret;
}
msix_unuse_all_vectors(&proxy->pci_dev);
msix_load(&proxy->pci_dev, f);
if (msix_present(&proxy->pci_dev)) {
qemu_get_be16s(f, &proxy->vdev->config_vector);
@ -278,6 +279,7 @@ void virtio_pci_reset(DeviceState *d)
virtio_pci_stop_ioeventfd(proxy);
virtio_reset(proxy->vdev);
msix_reset(&proxy->pci_dev);
msix_unuse_all_vectors(&proxy->pci_dev);
proxy->flags &= ~VIRTIO_PCI_FLAG_BUS_MASTER_BUG;
}


@ -537,6 +537,15 @@ static void blk_bh(void *opaque)
blk_handle_requests(blkdev);
}
/*
* We need to account for the grant allocations requiring contiguous
* chunks; the worst case number would be
* max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1,
* but in order to keep things simple just use
* 2 * max_req * max_seg.
*/
#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))
static void blk_alloc(struct XenDevice *xendev)
{
struct XenBlkDev *blkdev = container_of(xendev, struct XenBlkDev, xendev);
@ -548,6 +557,11 @@ static void blk_alloc(struct XenDevice *xendev)
if (xen_mode != XEN_EMULATE) {
batch_maps = 1;
}
if (xc_gnttab_set_max_grants(xendev->gnttabdev,
MAX_GRANTS(max_requests, BLKIF_MAX_SEGMENTS_PER_REQUEST)) < 0) {
xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
strerror(errno));
}
}
static int blk_init(struct XenDevice *xendev)
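For a quick sanity check of the MAX_GRANTS comment above, a tiny sketch comparing the exact worst case against the simplified 2 * max_req * max_seg bound; 32 requests of 11 segments each are example numbers only, not necessarily what this backend uses:

#include <stdio.h>

#define MAX_GRANTS(max_req, max_seg) (2 * (max_req) * (max_seg))

int main(void)
{
    int max_req = 32, max_seg = 11;   /* example values only */
    int exact = max_req * max_seg + (max_req - 1) * (max_seg - 1) + 1;

    printf("exact worst case: %d, simplified bound: %d\n",
           exact, MAX_GRANTS(max_req, max_seg));   /* 663 vs 704 */
    return 0;
}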


@ -87,9 +87,6 @@ static void unplug_nic(PCIBus *b, PCIDevice *d)
{
if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
PCI_CLASS_NETWORK_ETHERNET) {
/* Until qdev_free includes a call to object_unparent, we call it here
*/
object_unparent(&d->qdev.parent_obj);
qdev_free(&d->qdev);
}
}


@ -77,6 +77,7 @@ int qemu_set_fd_handler2(int fd,
ioh->fd_write = fd_write;
ioh->opaque = opaque;
ioh->deleted = 0;
qemu_notify_event();
}
return 0;
}


@ -27,6 +27,11 @@
typedef struct JSONParserContext
{
Error *err;
struct {
QObject **buf;
size_t pos;
size_t count;
} tokens;
} JSONParserContext;
#define BUG_ON(cond) assert(!(cond))
@ -40,7 +45,7 @@ typedef struct JSONParserContext
* 4) deal with premature EOI
*/
static QObject *parse_value(JSONParserContext *ctxt, QList **tokens, va_list *ap);
static QObject *parse_value(JSONParserContext *ctxt, va_list *ap);
/**
* Token manipulators
@ -270,27 +275,111 @@ out:
return NULL;
}
static QObject *parser_context_pop_token(JSONParserContext *ctxt)
{
QObject *token;
g_assert(ctxt->tokens.pos < ctxt->tokens.count);
token = ctxt->tokens.buf[ctxt->tokens.pos];
ctxt->tokens.pos++;
return token;
}
/* Note: parser_context_{peek|pop}_token do not increment the
* token object's refcount. In both cases the references will continue
* to be tracked and cleaned up in parser_context_free(), so do not
* attempt to free the token object.
*/
static QObject *parser_context_peek_token(JSONParserContext *ctxt)
{
QObject *token;
g_assert(ctxt->tokens.pos < ctxt->tokens.count);
token = ctxt->tokens.buf[ctxt->tokens.pos];
return token;
}
static JSONParserContext parser_context_save(JSONParserContext *ctxt)
{
JSONParserContext saved_ctxt = {0};
saved_ctxt.tokens.pos = ctxt->tokens.pos;
saved_ctxt.tokens.count = ctxt->tokens.count;
saved_ctxt.tokens.buf = ctxt->tokens.buf;
return saved_ctxt;
}
static void parser_context_restore(JSONParserContext *ctxt,
JSONParserContext saved_ctxt)
{
ctxt->tokens.pos = saved_ctxt.tokens.pos;
ctxt->tokens.count = saved_ctxt.tokens.count;
ctxt->tokens.buf = saved_ctxt.tokens.buf;
}
static void tokens_append_from_iter(QObject *obj, void *opaque)
{
JSONParserContext *ctxt = opaque;
g_assert(ctxt->tokens.pos < ctxt->tokens.count);
ctxt->tokens.buf[ctxt->tokens.pos++] = obj;
qobject_incref(obj);
}
static JSONParserContext *parser_context_new(QList *tokens)
{
JSONParserContext *ctxt;
size_t count;
if (!tokens) {
return NULL;
}
count = qlist_size(tokens);
if (count == 0) {
return NULL;
}
ctxt = g_malloc0(sizeof(JSONParserContext));
ctxt->tokens.pos = 0;
ctxt->tokens.count = count;
ctxt->tokens.buf = g_malloc(count * sizeof(QObject *));
qlist_iter(tokens, tokens_append_from_iter, ctxt);
ctxt->tokens.pos = 0;
return ctxt;
}
/* to support error propagation, ctxt->err must be freed separately */
static void parser_context_free(JSONParserContext *ctxt)
{
int i;
if (ctxt) {
for (i = 0; i < ctxt->tokens.count; i++) {
qobject_decref(ctxt->tokens.buf[i]);
}
g_free(ctxt->tokens.buf);
g_free(ctxt);
}
}
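The save/restore pair added here boils down to a position cursor over the flattened token array, so backtracking no longer copies token lists. A minimal sketch of that pattern, with plain C strings standing in for the QObject tokens:

#include <stdio.h>
#include <stddef.h>

typedef struct {
    const char **buf;   /* flattened token array */
    size_t pos;
    size_t count;
} Cursor;

static const char *pop_token(Cursor *c)
{
    return c->pos < c->count ? c->buf[c->pos++] : NULL;
}

int main(void)
{
    const char *tokens[] = { "{", "\"key\"", ":", "1", "}" };
    Cursor c = { tokens, 0, 5 };

    Cursor saved = c;            /* save: just copy the position */
    if (pop_token(&c) && pop_token(&c)) {
        /* pretend this parse attempt failed ... */
        c = saved;               /* restore: rewind, nothing to free */
    }
    printf("next token after rewind: %s\n", pop_token(&c));   /* "{" */
    return 0;
}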
/**
* Parsing rules
*/
static int parse_pair(JSONParserContext *ctxt, QDict *dict, QList **tokens, va_list *ap)
static int parse_pair(JSONParserContext *ctxt, QDict *dict, va_list *ap)
{
QObject *key = NULL, *token = NULL, *value, *peek;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
peek = qlist_peek(working);
peek = parser_context_peek_token(ctxt);
if (peek == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
}
key = parse_value(ctxt, &working, ap);
key = parse_value(ctxt, ap);
if (!key || qobject_type(key) != QTYPE_QSTRING) {
parse_error(ctxt, peek, "key is not a string in object");
goto out;
}
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
@ -301,7 +390,7 @@ static int parse_pair(JSONParserContext *ctxt, QDict *dict, QList **tokens, va_l
goto out;
}
value = parse_value(ctxt, &working, ap);
value = parse_value(ctxt, ap);
if (value == NULL) {
parse_error(ctxt, token, "Missing value in dict");
goto out;
@ -309,28 +398,24 @@ static int parse_pair(JSONParserContext *ctxt, QDict *dict, QList **tokens, va_l
qdict_put_obj(dict, qstring_get_str(qobject_to_qstring(key)), value);
qobject_decref(token);
qobject_decref(key);
QDECREF(*tokens);
*tokens = working;
return 0;
out:
qobject_decref(token);
parser_context_restore(ctxt, saved_ctxt);
qobject_decref(key);
QDECREF(working);
return -1;
}
static QObject *parse_object(JSONParserContext *ctxt, QList **tokens, va_list *ap)
static QObject *parse_object(JSONParserContext *ctxt, va_list *ap)
{
QDict *dict = NULL;
QObject *token, *peek;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
goto out;
}
@ -338,23 +423,22 @@ static QObject *parse_object(JSONParserContext *ctxt, QList **tokens, va_list *a
if (!token_is_operator(token, '{')) {
goto out;
}
qobject_decref(token);
token = NULL;
dict = qdict_new();
peek = qlist_peek(working);
peek = parser_context_peek_token(ctxt);
if (peek == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
}
if (!token_is_operator(peek, '}')) {
if (parse_pair(ctxt, dict, &working, ap) == -1) {
if (parse_pair(ctxt, dict, ap) == -1) {
goto out;
}
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
@ -365,59 +449,52 @@ static QObject *parse_object(JSONParserContext *ctxt, QList **tokens, va_list *a
parse_error(ctxt, token, "expected separator in dict");
goto out;
}
qobject_decref(token);
token = NULL;
if (parse_pair(ctxt, dict, &working, ap) == -1) {
if (parse_pair(ctxt, dict, ap) == -1) {
goto out;
}
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
}
}
qobject_decref(token);
token = NULL;
} else {
token = qlist_pop(working);
qobject_decref(token);
token = parser_context_pop_token(ctxt);
token = NULL;
}
QDECREF(*tokens);
*tokens = working;
return QOBJECT(dict);
out:
qobject_decref(token);
QDECREF(working);
parser_context_restore(ctxt, saved_ctxt);
QDECREF(dict);
return NULL;
}
static QObject *parse_array(JSONParserContext *ctxt, QList **tokens, va_list *ap)
static QObject *parse_array(JSONParserContext *ctxt, va_list *ap)
{
QList *list = NULL;
QObject *token, *peek;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
goto out;
}
if (!token_is_operator(token, '[')) {
token = NULL;
goto out;
}
qobject_decref(token);
token = NULL;
list = qlist_new();
peek = qlist_peek(working);
peek = parser_context_peek_token(ctxt);
if (peek == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
@ -426,7 +503,7 @@ static QObject *parse_array(JSONParserContext *ctxt, QList **tokens, va_list *ap
if (!token_is_operator(peek, ']')) {
QObject *obj;
obj = parse_value(ctxt, &working, ap);
obj = parse_value(ctxt, ap);
if (obj == NULL) {
parse_error(ctxt, token, "expecting value");
goto out;
@ -434,7 +511,7 @@ static QObject *parse_array(JSONParserContext *ctxt, QList **tokens, va_list *ap
qlist_append_obj(list, obj);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
@ -446,10 +523,9 @@ static QObject *parse_array(JSONParserContext *ctxt, QList **tokens, va_list *ap
goto out;
}
qobject_decref(token);
token = NULL;
obj = parse_value(ctxt, &working, ap);
obj = parse_value(ctxt, ap);
if (obj == NULL) {
parse_error(ctxt, token, "expecting value");
goto out;
@ -457,39 +533,33 @@ static QObject *parse_array(JSONParserContext *ctxt, QList **tokens, va_list *ap
qlist_append_obj(list, obj);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
parse_error(ctxt, NULL, "premature EOI");
goto out;
}
}
qobject_decref(token);
token = NULL;
} else {
token = qlist_pop(working);
qobject_decref(token);
token = parser_context_pop_token(ctxt);
token = NULL;
}
QDECREF(*tokens);
*tokens = working;
return QOBJECT(list);
out:
qobject_decref(token);
QDECREF(working);
parser_context_restore(ctxt, saved_ctxt);
QDECREF(list);
return NULL;
}
static QObject *parse_keyword(JSONParserContext *ctxt, QList **tokens)
static QObject *parse_keyword(JSONParserContext *ctxt)
{
QObject *token, *ret;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
goto out;
}
@ -507,29 +577,24 @@ static QObject *parse_keyword(JSONParserContext *ctxt, QList **tokens)
goto out;
}
qobject_decref(token);
QDECREF(*tokens);
*tokens = working;
return ret;
out:
qobject_decref(token);
QDECREF(working);
parser_context_restore(ctxt, saved_ctxt);
return NULL;
}
static QObject *parse_escape(JSONParserContext *ctxt, QList **tokens, va_list *ap)
static QObject *parse_escape(JSONParserContext *ctxt, va_list *ap)
{
QObject *token = NULL, *obj;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
if (ap == NULL) {
goto out;
}
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
goto out;
}
@ -553,25 +618,20 @@ static QObject *parse_escape(JSONParserContext *ctxt, QList **tokens, va_list *a
goto out;
}
qobject_decref(token);
QDECREF(*tokens);
*tokens = working;
return obj;
out:
qobject_decref(token);
QDECREF(working);
parser_context_restore(ctxt, saved_ctxt);
return NULL;
}
static QObject *parse_literal(JSONParserContext *ctxt, QList **tokens)
static QObject *parse_literal(JSONParserContext *ctxt)
{
QObject *token, *obj;
QList *working = qlist_copy(*tokens);
JSONParserContext saved_ctxt = parser_context_save(ctxt);
token = qlist_pop(working);
token = parser_context_pop_token(ctxt);
if (token == NULL) {
goto out;
}
@ -591,35 +651,30 @@ static QObject *parse_literal(JSONParserContext *ctxt, QList **tokens)
goto out;
}
qobject_decref(token);
QDECREF(*tokens);
*tokens = working;
return obj;
out:
qobject_decref(token);
QDECREF(working);
parser_context_restore(ctxt, saved_ctxt);
return NULL;
}
static QObject *parse_value(JSONParserContext *ctxt, QList **tokens, va_list *ap)
static QObject *parse_value(JSONParserContext *ctxt, va_list *ap)
{
QObject *obj;
obj = parse_object(ctxt, tokens, ap);
obj = parse_object(ctxt, ap);
if (obj == NULL) {
obj = parse_array(ctxt, tokens, ap);
obj = parse_array(ctxt, ap);
}
if (obj == NULL) {
obj = parse_escape(ctxt, tokens, ap);
obj = parse_escape(ctxt, ap);
}
if (obj == NULL) {
obj = parse_keyword(ctxt, tokens);
obj = parse_keyword(ctxt);
}
if (obj == NULL) {
obj = parse_literal(ctxt, tokens);
obj = parse_literal(ctxt);
}
return obj;
@ -632,19 +687,18 @@ QObject *json_parser_parse(QList *tokens, va_list *ap)
QObject *json_parser_parse_err(QList *tokens, va_list *ap, Error **errp)
{
JSONParserContext ctxt = {};
QList *working;
JSONParserContext *ctxt = parser_context_new(tokens);
QObject *result;
if (!tokens) {
if (!ctxt) {
return NULL;
}
working = qlist_copy(tokens);
result = parse_value(&ctxt, &working, ap);
QDECREF(working);
result = parse_value(ctxt, ap);
error_propagate(errp, ctxt.err);
error_propagate(errp, ctxt->err);
parser_context_free(ctxt);
return result;
}


@ -2794,7 +2794,7 @@ static inline abi_long do_msgrcv(int msqid, abi_long msgp,
if (!lock_user_struct(VERIFY_WRITE, target_mb, msgp, 0))
return -TARGET_EFAULT;
host_mb = malloc(msgsz+sizeof(long));
host_mb = g_malloc(msgsz+sizeof(long));
ret = get_errno(msgrcv(msqid, host_mb, msgsz, tswapal(msgtyp), msgflg));
if (ret > 0) {
@ -2809,11 +2809,11 @@ static inline abi_long do_msgrcv(int msqid, abi_long msgp,
}
target_mb->mtype = tswapal(host_mb->mtype);
free(host_mb);
end:
if (target_mb)
unlock_user_struct(target_mb, msgp, 1);
g_free(host_mb);
return ret;
}


@ -426,7 +426,7 @@ static void memory_region_iorange_write(IORange *iorange,
if (mrp) {
mrp->write(mr->opaque, offset, data);
} else if (width == 2) {
mrp = find_portio(mr, offset - mrio->offset, 1, false);
mrp = find_portio(mr, offset - mrio->offset, 1, true);
assert(mrp);
mrp->write(mr->opaque, offset, data & 0xff);
mrp->write(mr->opaque, offset + 1, data >> 8);


@ -4501,13 +4501,13 @@ static void monitor_control_event(void *opaque, int event)
switch (event) {
case CHR_EVENT_OPENED:
mon->mc->command_mode = 0;
json_message_parser_init(&mon->mc->parser, handle_qmp_command);
data = get_qmp_greeting();
monitor_json_emitter(mon, data);
qobject_decref(data);
break;
case CHR_EVENT_CLOSED:
json_message_parser_destroy(&mon->mc->parser);
json_message_parser_init(&mon->mc->parser, handle_qmp_command);
break;
}
}
@ -4605,6 +4605,8 @@ void monitor_init(CharDriverState *chr, int flags)
qemu_chr_add_handlers(chr, monitor_can_read, monitor_control_read,
monitor_control_event, mon);
qemu_chr_fe_set_echo(chr, true);
json_message_parser_init(&mon->mc->parser, handle_qmp_command);
} else {
qemu_chr_add_handlers(chr, monitor_can_read, monitor_read,
monitor_event, mon);


@ -26,6 +26,7 @@
#include "config-host.h"
#ifndef _WIN32
#include <pwd.h>
#include <sys/wait.h>
#endif
#include "net.h"
@ -487,8 +488,27 @@ static int slirp_smb(SlirpState* s, const char *exported_dir,
static int instance;
char smb_conf[128];
char smb_cmdline[128];
struct passwd *passwd;
FILE *f;
passwd = getpwuid(geteuid());
if (!passwd) {
error_report("failed to retrieve user name");
return -1;
}
if (access(CONFIG_SMBD_COMMAND, F_OK)) {
error_report("could not find '%s', please install it",
CONFIG_SMBD_COMMAND);
return -1;
}
if (access(exported_dir, R_OK | X_OK)) {
error_report("error accessing shared directory '%s': %s",
exported_dir, strerror(errno));
return -1;
}
snprintf(s->smb_dir, sizeof(s->smb_dir), "/tmp/qemu-smb.%ld-%d",
(long)getpid(), instance++);
if (mkdir(s->smb_dir, 0700) < 0) {
@ -517,14 +537,16 @@ static int slirp_smb(SlirpState* s, const char *exported_dir,
"[qemu]\n"
"path=%s\n"
"read only=no\n"
"guest ok=yes\n",
"guest ok=yes\n"
"force user=%s\n",
s->smb_dir,
s->smb_dir,
s->smb_dir,
s->smb_dir,
s->smb_dir,
s->smb_dir,
exported_dir
exported_dir,
passwd->pw_name
);
fclose(f);
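With this change the share section of the generated per-instance smb.conf ends up looking roughly like the following; the path and user name are placeholders, and the global section built from the earlier s->smb_dir substitutions is omitted:

[qemu]
# path is the exported_dir argument (placeholder below)
path=/home/alice/shared
read only=no
guest ok=yes
# force user is passwd->pw_name of the user running QEMU
force user=alice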


@ -57,7 +57,13 @@ int setenv(const char *name, const char *value, int overwrite)
static BOOL WINAPI qemu_ctrl_handler(DWORD type)
{
exit(STATUS_CONTROL_C_EXIT);
qemu_system_shutdown_request();
/* Windows 7 kills the application when this handler returns.
Sleep here to give QEMU a chance to shut down cleanly.
The sleep period is 10000 ms because Windows kills the program
after 10 seconds anyway. */
Sleep(10000);
return TRUE;
}


@ -248,6 +248,9 @@ static bool ga_open_pidfile(const char *pidfile)
pidfd = open(pidfile, O_CREAT|O_WRONLY, S_IRUSR|S_IWUSR);
if (pidfd == -1 || lockf(pidfd, F_TLOCK, 0)) {
g_critical("Cannot lock pid file, %s", strerror(errno));
if (pidfd != -1) {
close(pidfd);
}
return false;
}
@ -436,7 +439,9 @@ static void become_daemon(const char *pidfile)
return;
fail:
unlink(pidfile);
if (pidfile) {
unlink(pidfile);
}
g_critical("failed to daemonize");
exit(EXIT_FAILURE);
#endif


@ -4,6 +4,16 @@ usage: qemu-img command [command options]
@c man end
@end example
@c man begin DESCRIPTION
qemu-img allows you to create, convert and modify images offline. It can handle
all image formats supported by QEMU.
@b{Warning:} Never use qemu-img to modify images in use by a running virtual
machine or any other process; this may destroy the image. Also, be aware that
querying an image that is being modified by another process may encounter an
inconsistent state.
@c man end
@c man begin OPTIONS
The following commands are supported:
@ -232,6 +242,29 @@ to grow.
@end table
@item qed
Image format with support for backing files and compact image files (when your
filesystem or transport medium does not support holes). Good performance due
to having less metadata than the more featureful qcow2 format, especially with
cache=writethrough or cache=directsync. Consider using qcow2, which will soon
have a similar optimization and is the most actively developed format.
Supported options:
@table @code
@item backing_file
File name of a base image (see @option{create} subcommand).
@item backing_fmt
Image file format of backing file (optional). Useful if the format cannot be
autodetected because it has no header, like some vhd/vpc files.
@item cluster_size
Changes the cluster size (must be a power of 2 between 4K and 64K). Smaller
cluster sizes can improve the image file size whereas larger cluster sizes
generally provide better performance.
@item table_size
Changes the number of clusters per L1/L2 table (must be a power of 2 between 1
and 16). There is normally no need to change this value but this option can be
used for performance benchmarking.
@end table
@item qcow
Old QEMU image format. Left for compatibility.


@ -112,14 +112,10 @@ static int64_t qemu_next_alarm_deadline(void)
static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
{
int64_t nearest_delta_ns;
if (!rt_clock->active_timers &&
!vm_clock->active_timers &&
!host_clock->active_timers) {
return;
int64_t nearest_delta_ns = qemu_next_alarm_deadline();
if (nearest_delta_ns < INT64_MAX) {
t->rearm(t, nearest_delta_ns);
}
nearest_delta_ns = qemu_next_alarm_deadline();
t->rearm(t, nearest_delta_ns);
}
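The rewritten helper leans on qemu_next_alarm_deadline() returning INT64_MAX when no timer is armed; a small sketch of that contract, with a stand-in deadline function rather than the real one:

#include <stdint.h>
#include <stdio.h>

/* stand-in: nearest deadline in ns, or INT64_MAX if nothing is armed */
static int64_t next_deadline(int have_timers)
{
    return have_timers ? 1000000 : INT64_MAX;
}

static void rearm(int have_timers)
{
    int64_t delta = next_deadline(have_timers);
    if (delta < INT64_MAX) {
        printf("arming alarm in %lld ns\n", (long long)delta);
    } else {
        printf("no active timers, alarm left unarmed\n");
    }
}

int main(void)
{
    rearm(1);
    rearm(0);
    return 0;
}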
/* TODO: MIN_TIMER_REARM_NS should be optimized */
@ -763,11 +759,8 @@ int init_timer_alarm(void)
goto fail;
}
/* first event is at time 0 */
atexit(quit_timers);
t->pending = true;
alarm_timer = t;
return 0;
fail:

qlist.c

@ -124,6 +124,19 @@ int qlist_empty(const QList *qlist)
return QTAILQ_EMPTY(&qlist->head);
}
static void qlist_size_iter(QObject *obj, void *opaque)
{
size_t *count = opaque;
(*count)++;
}
size_t qlist_size(const QList *qlist)
{
size_t count = 0;
qlist_iter(qlist, qlist_size_iter, &count);
return count;
}
/**
* qobject_to_qlist(): Convert a QObject into a QList
*/


@ -49,6 +49,7 @@ void qlist_iter(const QList *qlist,
QObject *qlist_pop(QList *qlist);
QObject *qlist_peek(QList *qlist);
int qlist_empty(const QList *qlist);
size_t qlist_size(const QList *qlist);
QList *qobject_to_qlist(const QObject *obj);
static inline const QListEntry *qlist_first(const QList *qlist)


@ -347,8 +347,6 @@ static void object_deinit(Object *obj, TypeImpl *type)
if (type_has_parent(type)) {
object_deinit(obj, type_get_parent(type));
}
object_unparent(obj);
}
void object_finalize(void *data)
@ -385,8 +383,9 @@ Object *object_new(const char *typename)
void object_delete(Object *obj)
{
object_unparent(obj);
g_assert(obj->ref == 1);
object_unref(obj);
g_assert(obj->ref == 0);
g_free(obj);
}


@ -337,6 +337,9 @@ static void readline_completion(ReadLineState *rs)
}
readline_show_prompt(rs);
}
for (i = 0; i < rs->nb_completions; i++) {
g_free(rs->completions[i]);
}
}
/* return true if command handled */


@ -40,7 +40,7 @@ static void *softmmu_lock_user(CPUArchState *env, uint32_t addr, uint32_t len,
uint8_t *p;
/* TODO: Make this something that isn't fixed size. */
p = malloc(len);
if (copy)
if (p && copy)
cpu_memory_rw_debug(env, addr, p, len, 0);
return p;
}
@ -52,6 +52,9 @@ static char *softmmu_lock_user_string(CPUArchState *env, uint32_t addr)
uint8_t c;
/* TODO: Make this something that isn't fixed size. */
s = p = malloc(1024);
if (!s) {
return NULL;
}
do {
cpu_memory_rw_debug(env, addr, &c, 1, 0);
addr++;


@ -477,6 +477,7 @@
for syscall instruction */
/* i386-specific interrupt pending bits. */
#define CPU_INTERRUPT_POLL CPU_INTERRUPT_TGT_EXT_1
#define CPU_INTERRUPT_SMI CPU_INTERRUPT_TGT_EXT_2
#define CPU_INTERRUPT_NMI CPU_INTERRUPT_TGT_EXT_3
#define CPU_INTERRUPT_MCE CPU_INTERRUPT_TGT_EXT_4
@ -1029,7 +1030,8 @@ static inline void cpu_clone_regs(CPUX86State *env, target_ulong newsp)
static inline bool cpu_has_work(CPUX86State *env)
{
return ((env->interrupt_request & CPU_INTERRUPT_HARD) &&
return ((env->interrupt_request & (CPU_INTERRUPT_HARD |
CPU_INTERRUPT_POLL)) &&
(env->eflags & IF_MASK)) ||
(env->interrupt_request & (CPU_INTERRUPT_NMI |
CPU_INTERRUPT_INIT |


@ -1725,6 +1725,10 @@ int kvm_arch_process_async_events(CPUX86State *env)
return 0;
}
if (env->interrupt_request & CPU_INTERRUPT_POLL) {
env->interrupt_request &= ~CPU_INTERRUPT_POLL;
apic_poll_irq(env->apic_state);
}
if (((env->interrupt_request & CPU_INTERRUPT_HARD) &&
(env->eflags & IF_MASK)) ||
(env->interrupt_request & CPU_INTERRUPT_NMI)) {


@ -7409,8 +7409,11 @@ static target_ulong disas_insn(DisasContext *s, target_ulong pc_start)
gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
} else {
modrm = ldub_code(s->pc++);
if ((modrm & 0xc0) != 0xc0)
goto illegal_op;
/* Ignore the mod bits (assume (modrm&0xc0)==0xc0).
* AMD documentation (24594.pdf) and testing of
* Intel 386 and 486 processors all show that the mod bits
* are assumed to be 1's, regardless of actual values.
*/
rm = (modrm & 7) | REX_B(s);
reg = ((modrm >> 3) & 7) | rex_r;
if (CODE64(s))
@ -7451,8 +7454,11 @@ static target_ulong disas_insn(DisasContext *s, target_ulong pc_start)
gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
} else {
modrm = ldub_code(s->pc++);
if ((modrm & 0xc0) != 0xc0)
goto illegal_op;
/* Ignore the mod bits (assume (modrm&0xc0)==0xc0).
* AMD documentation (24594.pdf) and testing of
* Intel 386 and 486 processors all show that the mod bits
* are assumed to be 1's, regardless of actual values.
*/
rm = (modrm & 7) | REX_B(s);
reg = ((modrm >> 3) & 7) | rex_r;
if (CODE64(s))
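For reference, the ModRM fields that the two hunks above decode; a tiny sketch of the field extraction, where the mod bits are read but simply assumed to be 3 rather than enforced (REX prefix handling is left out):

#include <stdio.h>

int main(void)
{
    unsigned char modrm = 0x45;          /* example byte with mod bits != 11 */
    int mod = (modrm >> 6) & 3;          /* ignored by mov to/from CRn/DRn   */
    int reg = (modrm >> 3) & 7;          /* control/debug register number    */
    int rm  = modrm & 7;                 /* general-purpose register         */

    printf("mod=%d reg=%d rm=%d (mod is treated as if it were 3)\n",
           mod, reg, rm);
    return 0;
}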


@ -192,115 +192,98 @@ static inline uint64_t get_HILO (void)
return ((uint64_t)(env->active_tc.HI[0]) << 32) | (uint32_t)env->active_tc.LO[0];
}
static inline void set_HIT0_LO (target_ulong arg1, uint64_t HILO)
static inline target_ulong set_HIT0_LO(uint64_t HILO)
{
target_ulong tmp;
env->active_tc.LO[0] = (int32_t)(HILO & 0xFFFFFFFF);
arg1 = env->active_tc.HI[0] = (int32_t)(HILO >> 32);
tmp = env->active_tc.HI[0] = (int32_t)(HILO >> 32);
return tmp;
}
static inline void set_HI_LOT0 (target_ulong arg1, uint64_t HILO)
static inline target_ulong set_HI_LOT0(uint64_t HILO)
{
arg1 = env->active_tc.LO[0] = (int32_t)(HILO & 0xFFFFFFFF);
target_ulong tmp = env->active_tc.LO[0] = (int32_t)(HILO & 0xFFFFFFFF);
env->active_tc.HI[0] = (int32_t)(HILO >> 32);
return tmp;
}
/* Multiplication variants of the vr54xx. */
target_ulong helper_muls (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, 0 - ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HI_LOT0(0 - ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
}
target_ulong helper_mulsu (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, 0 - ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HI_LOT0(0 - (uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2);
}
target_ulong helper_macc (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, ((int64_t)get_HILO()) + ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HI_LOT0((int64_t)get_HILO() + (int64_t)(int32_t)arg1 *
(int64_t)(int32_t)arg2);
}
target_ulong helper_macchi (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, ((int64_t)get_HILO()) + ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HIT0_LO((int64_t)get_HILO() + (int64_t)(int32_t)arg1 *
(int64_t)(int32_t)arg2);
}
target_ulong helper_maccu (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, ((uint64_t)get_HILO()) + ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HI_LOT0((uint64_t)get_HILO() + (uint64_t)(uint32_t)arg1 *
(uint64_t)(uint32_t)arg2);
}
target_ulong helper_macchiu (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, ((uint64_t)get_HILO()) + ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HIT0_LO((uint64_t)get_HILO() + (uint64_t)(uint32_t)arg1 *
(uint64_t)(uint32_t)arg2);
}
target_ulong helper_msac (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, ((int64_t)get_HILO()) - ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HI_LOT0((int64_t)get_HILO() - (int64_t)(int32_t)arg1 *
(int64_t)(int32_t)arg2);
}
target_ulong helper_msachi (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, ((int64_t)get_HILO()) - ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HIT0_LO((int64_t)get_HILO() - (int64_t)(int32_t)arg1 *
(int64_t)(int32_t)arg2);
}
target_ulong helper_msacu (target_ulong arg1, target_ulong arg2)
{
set_HI_LOT0(arg1, ((uint64_t)get_HILO()) - ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HI_LOT0((uint64_t)get_HILO() - (uint64_t)(uint32_t)arg1 *
(uint64_t)(uint32_t)arg2);
}
target_ulong helper_msachiu (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, ((uint64_t)get_HILO()) - ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HIT0_LO((uint64_t)get_HILO() - (uint64_t)(uint32_t)arg1 *
(uint64_t)(uint32_t)arg2);
}
target_ulong helper_mulhi (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, (int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2);
return arg1;
return set_HIT0_LO((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2);
}
target_ulong helper_mulhiu (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, (uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2);
return arg1;
return set_HIT0_LO((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2);
}
target_ulong helper_mulshi (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, 0 - ((int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2));
return arg1;
return set_HIT0_LO(0 - (int64_t)(int32_t)arg1 * (int64_t)(int32_t)arg2);
}
target_ulong helper_mulshiu (target_ulong arg1, target_ulong arg2)
{
set_HIT0_LO(arg1, 0 - ((uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2));
return arg1;
return set_HIT0_LO(0 - (uint64_t)(uint32_t)arg1 * (uint64_t)(uint32_t)arg2);
}
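The refactoring above replaces an unused by-value out-parameter with a return value: each helper splits the 64-bit HI:LO result and hands back whichever half the instruction produces. A standalone sketch, with plain variables in place of env->active_tc:

#include <stdint.h>
#include <stdio.h>

static int32_t HI, LO;

/* returns LO, as set_HI_LOT0() now does */
static int32_t set_hi_lot0(uint64_t hilo)
{
    LO = (int32_t)(hilo & 0xFFFFFFFF);
    HI = (int32_t)(hilo >> 32);
    return LO;
}

/* returns HI, as set_HIT0_LO() now does */
static int32_t set_hit0_lo(uint64_t hilo)
{
    LO = (int32_t)(hilo & 0xFFFFFFFF);
    HI = (int32_t)(hilo >> 32);
    return HI;
}

int main(void)
{
    uint64_t prod = (uint64_t)((int64_t)(int32_t)0x12345 * (int64_t)(int32_t)0x10000);

    printf("muls-style result (LO): %d\n", (int)set_hi_lot0(prod));
    printf("mulhi-style result (HI): %d\n", (int)set_hit0_lo(prod));
    return 0;
}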
#ifdef TARGET_MIPS64


@ -5933,6 +5933,7 @@ static void gen_cp0 (CPUMIPSState *env, DisasContext *ctx, uint32_t opc, int rt,
{
const char *opn = "ldst";
check_cp0_enabled(ctx);
switch (opc) {
case OPC_MFC0:
if (rt == 0) {
@ -6805,7 +6806,7 @@ static void gen_farith (DisasContext *ctx, enum fopcode op1,
TCGv_i32 fp1 = tcg_temp_new_i32();
gen_load_fpr32(fp0, fs);
gen_load_fpr32(fp1, fd);
gen_load_fpr32(fp1, ft);
gen_helper_float_recip2_s(fp0, fp0, fp1);
tcg_temp_free_i32(fp1);
gen_store_fpr32(fp0, fd);
@ -6900,7 +6901,7 @@ static void gen_farith (DisasContext *ctx, enum fopcode op1,
gen_load_fpr32(fp32_0, fs);
gen_load_fpr32(fp32_1, ft);
tcg_gen_concat_i32_i64(fp64, fp32_0, fp32_1);
tcg_gen_concat_i32_i64(fp64, fp32_1, fp32_0);
tcg_temp_free_i32(fp32_1);
tcg_temp_free_i32(fp32_0);
gen_store_fpr64(ctx, fp64, fd);
@ -7543,7 +7544,7 @@ static void gen_farith (DisasContext *ctx, enum fopcode op1,
TCGv_i64 fp1 = tcg_temp_new_i64();
gen_load_fpr64(ctx, fp0, fs);
gen_load_fpr64(ctx, fp1, fd);
gen_load_fpr64(ctx, fp1, ft);
gen_helper_float_recip2_ps(fp0, fp0, fp1);
tcg_temp_free_i64(fp1);
gen_store_fpr64(ctx, fp0, fd);
@ -7742,8 +7743,7 @@ static void gen_flt3_ldst (DisasContext *ctx, uint32_t opc,
} else if (index == 0) {
gen_load_gpr(t0, base);
} else {
gen_load_gpr(t0, index);
gen_op_addr_add(ctx, t0, cpu_gpr[base], t0);
gen_op_addr_add(ctx, t0, cpu_gpr[base], cpu_gpr[index]);
}
/* Don't do NOP if destination is zero: we must perform the actual
memory access. */
@ -8112,7 +8112,11 @@ gen_rdhwr (CPUMIPSState *env, DisasContext *ctx, int rt, int rd)
{
TCGv t0;
#if !defined(CONFIG_USER_ONLY)
/* The Linux kernel will emulate rdhwr if it's not supported natively.
Therefore only check the ISA in system mode. */
check_insn(env, ctx, ISA_MIPS32R2);
#endif
t0 = tcg_temp_new();
switch (rd) {
@ -10027,7 +10031,7 @@ static void gen_ldst_pair (DisasContext *ctx, uint32_t opc, int rd,
const char *opn = "ldst_pair";
TCGv t0, t1;
if (ctx->hflags & MIPS_HFLAG_BMASK || rd == 31 || rd == base) {
if (ctx->hflags & MIPS_HFLAG_BMASK || rd == 31) {
generate_exception(ctx, EXCP_RI);
return;
}
@ -10039,6 +10043,10 @@ static void gen_ldst_pair (DisasContext *ctx, uint32_t opc, int rd,
switch (opc) {
case LWP:
if (rd == base) {
generate_exception(ctx, EXCP_RI);
return;
}
save_cpu_state(ctx, 0);
op_ld_lw(t1, t0, ctx);
gen_store_gpr(t1, rd);
@ -10060,6 +10068,10 @@ static void gen_ldst_pair (DisasContext *ctx, uint32_t opc, int rd,
break;
#ifdef TARGET_MIPS64
case LDP:
if (rd == base) {
generate_exception(ctx, EXCP_RI);
return;
}
save_cpu_state(ctx, 0);
op_ld_ld(t1, t0, ctx);
gen_store_gpr(t1, rd);
@ -10118,6 +10130,7 @@ static void gen_pool32axf (CPUMIPSState *env, DisasContext *ctx, int rt, int rs,
#ifndef CONFIG_USER_ONLY
case MFC0:
case MFC0 + 32:
check_cp0_enabled(ctx);
if (rt == 0) {
/* Treat as NOP. */
break;
@ -10126,6 +10139,7 @@ static void gen_pool32axf (CPUMIPSState *env, DisasContext *ctx, int rt, int rs,
break;
case MTC0:
case MTC0 + 32:
check_cp0_enabled(ctx);
{
TCGv t0 = tcg_temp_new();
@ -10222,10 +10236,12 @@ static void gen_pool32axf (CPUMIPSState *env, DisasContext *ctx, int rt, int rs,
case 0x05:
switch (minor) {
case RDPGPR:
check_cp0_enabled(ctx);
check_insn(env, ctx, ISA_MIPS32R2);
gen_load_srsgpr(rt, rs);
break;
case WRPGPR:
check_cp0_enabled(ctx);
check_insn(env, ctx, ISA_MIPS32R2);
gen_store_srsgpr(rt, rs);
break;
@ -10266,6 +10282,7 @@ static void gen_pool32axf (CPUMIPSState *env, DisasContext *ctx, int rt, int rs,
case 0x1d:
switch (minor) {
case DI:
check_cp0_enabled(ctx);
{
TCGv t0 = tcg_temp_new();
@ -10278,6 +10295,7 @@ static void gen_pool32axf (CPUMIPSState *env, DisasContext *ctx, int rt, int rs,
}
break;
case EI:
check_cp0_enabled(ctx);
{
TCGv t0 = tcg_temp_new();
@ -10758,6 +10776,7 @@ static void decode_micromips32_opc (CPUMIPSState *env, DisasContext *ctx,
minor = (ctx->opcode >> 12) & 0xf;
switch (minor) {
case CACHE:
check_cp0_enabled(ctx);
/* Treat as no-op. */
break;
case LWC2:
@ -12208,6 +12227,7 @@ static void decode_opc (CPUMIPSState *env, DisasContext *ctx, int *is_branch)
gen_st_cond(ctx, op, rt, rs, imm);
break;
case OPC_CACHE:
check_cp0_enabled(ctx);
check_insn(env, ctx, ISA_MIPS3 | ISA_MIPS32);
/* Treat as NOP. */
break;
@ -12767,8 +12787,9 @@ void cpu_state_reset(CPUMIPSState *env)
#if defined(CONFIG_USER_ONLY)
env->hflags = MIPS_HFLAG_UM;
/* Enable access to the SYNCI_Step register. */
env->CP0_HWREna |= (1 << 1);
/* Enable access to the CPUNum, SYNCI_Step, CC, and CCRes RDHWR
hardware registers. */
env->CP0_HWREna |= 0x0000000F;
if (env->CP0_Config1 & (1 << CP0C1_FP)) {
env->hflags |= MIPS_HFLAG_FPU;
}


@ -558,7 +558,7 @@ int kvm_arch_handle_exit(CPUPPCState *env, struct kvm_run *run)
dprintf("handle PAPR hypercall\n");
run->papr_hcall.ret = spapr_hypercall(env, run->papr_hcall.nr,
run->papr_hcall.args);
ret = 1;
ret = 0;
break;
#endif
default:


@ -238,9 +238,10 @@ static int kvm_sclp_service_call(CPUS390XState *env, struct kvm_run *run,
code = env->regs[(ipbh0 & 0xf0) >> 4];
r = sclp_service_call(env, sccb, code);
if (r) {
setcc(env, 3);
if (r < 0) {
enter_pgmcheck(env, -r);
}
setcc(env, r);
return 0;
}


@ -19,6 +19,8 @@
*/
#include "cpu.h"
#include "memory.h"
#include "cputlb.h"
#include "dyngen-exec.h"
#include "host-utils.h"
#include "helper.h"
@ -2366,6 +2368,9 @@ static void ext_interrupt(CPUS390XState *env, int type, uint32_t param,
cpu_inject_ext(env, type, param, param64);
}
/*
* ret < 0 indicates program check, ret = 0,1,2,3 -> cc
*/
int sclp_service_call(CPUS390XState *env, uint32_t sccb, uint64_t code)
{
int r = 0;
@ -2375,10 +2380,12 @@ int sclp_service_call(CPUS390XState *env, uint32_t sccb, uint64_t code)
printf("sclp(0x%x, 0x%" PRIx64 ")\n", sccb, code);
#endif
/* basic checks */
if (!memory_region_is_ram(phys_page_find(sccb >> TARGET_PAGE_BITS)->mr)) {
return -PGM_ADDRESSING;
}
if (sccb & ~0x7ffffff8ul) {
fprintf(stderr, "KVM: invalid sccb address 0x%x\n", sccb);
r = -1;
goto out;
return -PGM_SPECIFICATION;
}
switch(code) {
@ -2405,22 +2412,24 @@ int sclp_service_call(CPUS390XState *env, uint32_t sccb, uint64_t code)
#ifdef DEBUG_HELPER
printf("KVM: invalid sclp call 0x%x / 0x%" PRIx64 "x\n", sccb, code);
#endif
r = -1;
r = 3;
break;
}
out:
return r;
}
/* SCLP service call */
uint32_t HELPER(servc)(uint32_t r1, uint64_t r2)
{
if (sclp_service_call(env, r1, r2)) {
return 3;
}
int r;
return 0;
r = sclp_service_call(env, r1, r2);
if (r < 0) {
program_interrupt(env, -r, 4);
return 0;
}
return r;
}
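The convention introduced above (negative values are program-check codes, 0..3 are condition codes) is easiest to see from the caller's side; a small sketch with stub functions standing in for program_interrupt()/setcc(), and the specification-exception code of 6 assumed for illustration:

#include <stdio.h>

static void program_interrupt(int code) { printf("program check %d\n", code); }
static void setcc(int cc)               { printf("cc = %d\n", cc); }

/* r < 0: negated program-check code; r in 0..3: condition code */
static void handle_sclp_result(int r)
{
    if (r < 0) {
        program_interrupt(-r);
    } else {
        setcc(r);
    }
}

int main(void)
{
    handle_sclp_result(0);    /* success, cc 0                         */
    handle_sclp_result(3);    /* invalid command, cc 3                 */
    handle_sclp_result(-6);   /* specification exception, assumed code */
    return 0;
}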
/* DIAG */


@ -370,9 +370,12 @@ void split_tlb_entry_spec_way(const CPUXtensaState *env, uint32_t v, bool dtlb,
uint32_t *vpn, uint32_t wi, uint32_t *ei);
int xtensa_tlb_lookup(const CPUXtensaState *env, uint32_t addr, bool dtlb,
uint32_t *pwi, uint32_t *pei, uint8_t *pring);
void xtensa_tlb_set_entry_mmu(const CPUXtensaState *env,
xtensa_tlb_entry *entry, bool dtlb,
unsigned wi, unsigned ei, uint32_t vpn, uint32_t pte);
void xtensa_tlb_set_entry(CPUXtensaState *env, bool dtlb,
unsigned wi, unsigned ei, uint32_t vpn, uint32_t pte);
int xtensa_get_physical_addr(CPUXtensaState *env,
int xtensa_get_physical_addr(CPUXtensaState *env, bool update_tlb,
uint32_t vaddr, int is_write, int mmu_idx,
uint32_t *paddr, uint32_t *page_size, unsigned *access);
void reset_mmu(CPUXtensaState *env);


@ -135,11 +135,11 @@ target_phys_addr_t cpu_get_phys_page_debug(CPUXtensaState *env, target_ulong add
uint32_t page_size;
unsigned access;
if (xtensa_get_physical_addr(env, addr, 0, 0,
if (xtensa_get_physical_addr(env, false, addr, 0, 0,
&paddr, &page_size, &access) == 0) {
return paddr;
}
if (xtensa_get_physical_addr(env, addr, 2, 0,
if (xtensa_get_physical_addr(env, false, addr, 2, 0,
&paddr, &page_size, &access) == 0) {
return paddr;
}
@ -448,30 +448,48 @@ static bool is_access_granted(unsigned access, int is_write)
}
}
static int autorefill_mmu(CPUXtensaState *env, uint32_t vaddr, bool dtlb,
uint32_t *wi, uint32_t *ei, uint8_t *ring);
static int get_pte(CPUXtensaState *env, uint32_t vaddr, uint32_t *pte);
static int get_physical_addr_mmu(CPUXtensaState *env,
static int get_physical_addr_mmu(CPUXtensaState *env, bool update_tlb,
uint32_t vaddr, int is_write, int mmu_idx,
uint32_t *paddr, uint32_t *page_size, unsigned *access)
uint32_t *paddr, uint32_t *page_size, unsigned *access,
bool may_lookup_pt)
{
bool dtlb = is_write != 2;
uint32_t wi;
uint32_t ei;
uint8_t ring;
uint32_t vpn;
uint32_t pte;
const xtensa_tlb_entry *entry = NULL;
xtensa_tlb_entry tmp_entry;
int ret = xtensa_tlb_lookup(env, vaddr, dtlb, &wi, &ei, &ring);
if ((ret == INST_TLB_MISS_CAUSE || ret == LOAD_STORE_TLB_MISS_CAUSE) &&
(mmu_idx != 0 || ((vaddr ^ env->sregs[PTEVADDR]) & 0xffc00000)) &&
autorefill_mmu(env, vaddr, dtlb, &wi, &ei, &ring) == 0) {
may_lookup_pt && get_pte(env, vaddr, &pte) == 0) {
ring = (pte >> 4) & 0x3;
wi = 0;
split_tlb_entry_spec_way(env, vaddr, dtlb, &vpn, wi, &ei);
if (update_tlb) {
wi = ++env->autorefill_idx & 0x3;
xtensa_tlb_set_entry(env, dtlb, wi, ei, vpn, pte);
env->sregs[EXCVADDR] = vaddr;
qemu_log("%s: autorefill(%08x): %08x -> %08x\n",
__func__, vaddr, vpn, pte);
} else {
xtensa_tlb_set_entry_mmu(env, &tmp_entry, dtlb, wi, ei, vpn, pte);
entry = &tmp_entry;
}
ret = 0;
}
if (ret != 0) {
return ret;
}
const xtensa_tlb_entry *entry =
xtensa_tlb_get_entry(env, dtlb, wi, ei);
if (entry == NULL) {
entry = xtensa_tlb_get_entry(env, dtlb, wi, ei);
}
if (ring < mmu_idx) {
return dtlb ?
@ -494,30 +512,21 @@ static int get_physical_addr_mmu(CPUXtensaState *env,
return 0;
}
static int autorefill_mmu(CPUXtensaState *env, uint32_t vaddr, bool dtlb,
uint32_t *wi, uint32_t *ei, uint8_t *ring)
static int get_pte(CPUXtensaState *env, uint32_t vaddr, uint32_t *pte)
{
uint32_t paddr;
uint32_t page_size;
unsigned access;
uint32_t pt_vaddr =
(env->sregs[PTEVADDR] | (vaddr >> 10)) & 0xfffffffc;
int ret = get_physical_addr_mmu(env, pt_vaddr, 0, 0,
&paddr, &page_size, &access);
int ret = get_physical_addr_mmu(env, false, pt_vaddr, 0, 0,
&paddr, &page_size, &access, false);
qemu_log("%s: trying autorefill(%08x) -> %08x\n", __func__,
vaddr, ret ? ~0 : paddr);
if (ret == 0) {
uint32_t vpn;
uint32_t pte = ldl_phys(paddr);
*ring = (pte >> 4) & 0x3;
*wi = (++env->autorefill_idx) & 0x3;
split_tlb_entry_spec_way(env, vaddr, dtlb, &vpn, *wi, ei);
xtensa_tlb_set_entry(env, dtlb, *wi, *ei, vpn, pte);
qemu_log("%s: autorefill(%08x): %08x -> %08x\n",
__func__, vaddr, vpn, pte);
*pte = ldl_phys(paddr);
}
return ret;
}
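For illustration only, the page-table address that get_pte() computes for an invented PTEVADDR/vaddr pair, assuming the usual 4 KB pages with 4-byte PTEs:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t ptevaddr = 0x80000000;   /* example PTEVADDR              */
    uint32_t vaddr    = 0x40001234;   /* example faulting address      */

    /* one 4-byte PTE per 4 KB page: index = vaddr >> 12, scaled by 4 */
    uint32_t pt_vaddr = (ptevaddr | (vaddr >> 10)) & 0xfffffffc;

    printf("PTE is read from 0x%08x\n", pt_vaddr);   /* 0x80100004 */
    return 0;
}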
@ -553,13 +562,13 @@ static int get_physical_addr_region(CPUXtensaState *env,
*
* \return 0 if ok, exception cause code otherwise
*/
int xtensa_get_physical_addr(CPUXtensaState *env,
int xtensa_get_physical_addr(CPUXtensaState *env, bool update_tlb,
uint32_t vaddr, int is_write, int mmu_idx,
uint32_t *paddr, uint32_t *page_size, unsigned *access)
{
if (xtensa_option_enabled(env->config, XTENSA_OPTION_MMU)) {
return get_physical_addr_mmu(env, vaddr, is_write, mmu_idx,
paddr, page_size, access);
return get_physical_addr_mmu(env, update_tlb,
vaddr, is_write, mmu_idx, paddr, page_size, access, true);
} else if (xtensa_option_bits_enabled(env->config,
XTENSA_OPTION_BIT(XTENSA_OPTION_REGION_PROTECTION) |
XTENSA_OPTION_BIT(XTENSA_OPTION_REGION_TRANSLATION))) {


@ -79,7 +79,7 @@ void tlb_fill(CPUXtensaState *env1, target_ulong vaddr, int is_write, int mmu_id
uint32_t paddr;
uint32_t page_size;
unsigned access;
int ret = xtensa_get_physical_addr(env, vaddr, is_write, mmu_idx,
int ret = xtensa_get_physical_addr(env, true, vaddr, is_write, mmu_idx,
&paddr, &page_size, &access);
qemu_log("%s(%08x, %d, %d) -> %08x, ret = %d\n", __func__,
@ -103,7 +103,7 @@ static void tb_invalidate_virtual_addr(CPUXtensaState *env, uint32_t vaddr)
uint32_t paddr;
uint32_t page_size;
unsigned access;
int ret = xtensa_get_physical_addr(env, vaddr, 2, 0,
int ret = xtensa_get_physical_addr(env, false, vaddr, 2, 0,
&paddr, &page_size, &access);
if (ret == 0) {
tb_invalidate_phys_addr(paddr);
@ -655,6 +655,16 @@ uint32_t HELPER(ptlb)(uint32_t v, uint32_t dtlb)
}
}
void xtensa_tlb_set_entry_mmu(const CPUXtensaState *env,
xtensa_tlb_entry *entry, bool dtlb,
unsigned wi, unsigned ei, uint32_t vpn, uint32_t pte)
{
entry->vaddr = vpn;
entry->paddr = pte & xtensa_tlb_get_addr_mask(env, dtlb, wi);
entry->asid = (env->sregs[RASID] >> ((pte >> 1) & 0x18)) & 0xff;
entry->attr = pte & 0xf;
}
void xtensa_tlb_set_entry(CPUXtensaState *env, bool dtlb,
unsigned wi, unsigned ei, uint32_t vpn, uint32_t pte)
{
@ -665,10 +675,8 @@ void xtensa_tlb_set_entry(CPUXtensaState *env, bool dtlb,
if (entry->asid) {
tlb_flush_page(env, entry->vaddr);
}
entry->vaddr = vpn;
entry->paddr = pte & xtensa_tlb_get_addr_mask(env, dtlb, wi);
entry->asid = (env->sregs[RASID] >> ((pte >> 1) & 0x18)) & 0xff;
entry->attr = pte & 0xf;
xtensa_tlb_set_entry_mmu(env, entry, dtlb, wi, ei, vpn, pte);
tlb_flush_page(env, entry->vaddr);
} else {
qemu_log("%s %d, %d, %d trying to set immutable entry\n",
__func__, dtlb, wi, ei);


@ -388,6 +388,7 @@ static bool gen_check_loop_end(DisasContext *dc, int slot)
dc->next_pc == dc->lend) {
int label = gen_new_label();
gen_advance_ccount(dc);
tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_SR[LCOUNT], 0, label);
tcg_gen_subi_i32(cpu_SR[LCOUNT], cpu_SR[LCOUNT], 1);
gen_jumpi(dc, dc->lbeg, slot);
@ -410,6 +411,7 @@ static void gen_brcond(DisasContext *dc, TCGCond cond,
{
int label = gen_new_label();
gen_advance_ccount(dc);
tcg_gen_brcond_i32(cond, t0, t1, label);
gen_jumpi_check_loop_end(dc, 0);
gen_set_label(label);
@ -2360,10 +2362,18 @@ static void disas_xtensa_insn(DisasContext *dc)
case 5: /*BBC*/ /*BBS*/
gen_window_check2(dc, RRI8_S, RRI8_T);
{
TCGv_i32 bit = tcg_const_i32(1);
#ifdef TARGET_WORDS_BIGENDIAN
TCGv_i32 bit = tcg_const_i32(0x80000000);
#else
TCGv_i32 bit = tcg_const_i32(0x00000001);
#endif
TCGv_i32 tmp = tcg_temp_new_i32();
tcg_gen_andi_i32(tmp, cpu_R[RRI8_T], 0x1f);
#ifdef TARGET_WORDS_BIGENDIAN
tcg_gen_shr_i32(bit, bit, tmp);
#else
tcg_gen_shl_i32(bit, bit, tmp);
#endif
tcg_gen_and_i32(tmp, cpu_R[RRI8_S], bit);
gen_brcondi(dc, eq_ne, tmp, 0, 4 + RRI8_IMM8_SE);
tcg_temp_free(tmp);
@ -2377,7 +2387,11 @@ static void disas_xtensa_insn(DisasContext *dc)
{
TCGv_i32 tmp = tcg_temp_new_i32();
tcg_gen_andi_i32(tmp, cpu_R[RRI8_S],
1 << (((RRI8_R & 1) << 4) | RRI8_T));
#ifdef TARGET_WORDS_BIGENDIAN
0x80000000 >> (((RRI8_R & 1) << 4) | RRI8_T));
#else
0x00000001 << (((RRI8_R & 1) << 4) | RRI8_T));
#endif
gen_brcondi(dc, eq_ne, tmp, 0, 4 + RRI8_IMM8_SE);
tcg_temp_free(tmp);
}
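The two mask expressions above capture the whole fix: bit n counts from the most-significant end on a big-endian configuration and from the least-significant end otherwise. A tiny sketch:

#include <stdint.h>
#include <stdio.h>

static uint32_t bit_mask(unsigned n, int big_endian_bitorder)
{
    /* n is masked to 0..31, as the translator does with the 0x1f and */
    n &= 0x1f;
    return big_endian_bitorder ? 0x80000000u >> n : 0x00000001u << n;
}

int main(void)
{
    printf("bit 0: LE mask %08x, BE mask %08x\n", bit_mask(0, 0), bit_mask(0, 1));
    printf("bit 5: LE mask %08x, BE mask %08x\n", bit_mask(5, 0), bit_mask(5, 1));
    return 0;
}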


@ -176,6 +176,13 @@ static int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str)
so don't use these. */
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R0);
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R1);
#if defined(CONFIG_TCG_PASS_AREG0) && (TARGET_LONG_BITS == 64)
/* If we're passing env to the helper as r0 and need a regpair
* for the address then r2 will be overwritten as we're setting
* up the args to the helper.
*/
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R2);
#endif
#endif
break;
case 'L':
@ -197,6 +204,12 @@ static int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str)
use these. */
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R0);
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R1);
#if defined(CONFIG_SOFTMMU) && \
defined(CONFIG_TCG_PASS_AREG0) && (TARGET_LONG_BITS == 64)
/* Avoid clashes with registers being used for helper args */
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R2);
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R3);
#endif
break;
/* qemu_st64 data_reg2 */
case 'S':
@ -210,6 +223,10 @@ static int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str)
#ifdef CONFIG_SOFTMMU
/* r2 is still needed to load data_reg, so don't use it. */
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R2);
#if defined(CONFIG_TCG_PASS_AREG0) && (TARGET_LONG_BITS == 64)
/* Avoid clashes with registers being used for helper args */
tcg_regset_reset_reg(ct->u.regs, TCG_REG_R3);
#endif
#endif
break;
@ -388,6 +405,14 @@ static inline void tcg_out_dat_reg(TCGContext *s,
(rn << 16) | (rd << 12) | shift | rm);
}
static inline void tcg_out_mov_reg(TCGContext *s, int cond, int rd, int rm)
{
/* Simple reg-reg move, optimising out the 'do nothing' case */
if (rd != rm) {
tcg_out_dat_reg(s, cond, ARITH_MOV, rd, 0, rm, SHIFT_IMM_LSL(0));
}
}
static inline void tcg_out_dat_reg2(TCGContext *s,
int cond, int opc0, int opc1, int rd0, int rd1,
int rn0, int rn1, int rm0, int rm1, int shift)
@ -966,6 +991,90 @@ static void *qemu_st_helpers[4] = {
__stq_mmu,
};
#endif
/* Helper routines for marshalling helper function arguments into
* the correct registers and stack.
* argreg is where we want to put this argument, arg is the argument itself.
* Return value is the updated argreg ready for the next call.
* Note that argregs 0..3 are real registers, while 4 and up go on the stack.
* When we reach the first stacked argument, we allocate space for it
* and the following stacked arguments using "str r8, [sp, #-0x10]!".
* Following arguments are filled in with "str r8, [sp, #0xNN]".
* For more than 4 stacked arguments we'd need to know how much
* space to allocate when we pushed the first stacked argument.
* We don't need this, so don't implement it (and will assert if you try it).
*
* We provide routines for arguments which are: immediate, 32 bit
* value in register, 16 and 8 bit values in register (which must be zero
* extended before use) and 64 bit value in a lo:hi register pair.
*/
#define DEFINE_TCG_OUT_ARG(NAME, ARGPARAM) \
static TCGReg NAME(TCGContext *s, TCGReg argreg, ARGPARAM) \
{ \
if (argreg < 4) { \
TCG_OUT_ARG_GET_ARG(argreg); \
} else if (argreg == 4) { \
TCG_OUT_ARG_GET_ARG(TCG_REG_R8); \
tcg_out32(s, (COND_AL << 28) | 0x052d8010); \
} else { \
assert(argreg < 8); \
TCG_OUT_ARG_GET_ARG(TCG_REG_R8); \
tcg_out32(s, (COND_AL << 28) | 0x058d8000 | (argreg - 4) * 4); \
} \
return argreg + 1; \
}
#define TCG_OUT_ARG_GET_ARG(A) tcg_out_dat_imm(s, COND_AL, ARITH_MOV, A, 0, arg)
DEFINE_TCG_OUT_ARG(tcg_out_arg_imm32, uint32_t arg)
#undef TCG_OUT_ARG_GET_ARG
#define TCG_OUT_ARG_GET_ARG(A) tcg_out_ext8u(s, COND_AL, A, arg)
DEFINE_TCG_OUT_ARG(tcg_out_arg_reg8, TCGReg arg)
#undef TCG_OUT_ARG_GET_ARG
#define TCG_OUT_ARG_GET_ARG(A) tcg_out_ext16u(s, COND_AL, A, arg)
DEFINE_TCG_OUT_ARG(tcg_out_arg_reg16, TCGReg arg)
#undef TCG_OUT_ARG_GET_ARG
/* We don't use the macro for this one to avoid an unnecessary reg-reg
* move when storing to the stack.
*/
static TCGReg tcg_out_arg_reg32(TCGContext *s, TCGReg argreg, TCGReg arg)
{
if (argreg < 4) {
tcg_out_mov_reg(s, COND_AL, argreg, arg);
} else if (argreg == 4) {
/* str arg, [sp, #-0x10]! */
tcg_out32(s, (COND_AL << 28) | 0x052d0010 | (arg << 12));
} else {
assert(argreg < 8);
/* str arg, [sp, #0xNN] */
tcg_out32(s, (COND_AL << 28) | 0x058d0000 |
(arg << 12) | (argreg - 4) * 4);
}
return argreg + 1;
}
static inline TCGReg tcg_out_arg_reg64(TCGContext *s, TCGReg argreg,
TCGReg arglo, TCGReg arghi)
{
/* 64 bit arguments must go in even/odd register pairs
* and in 8-byte-aligned stack slots.
*/
if (argreg & 1) {
argreg++;
}
argreg = tcg_out_arg_reg32(s, argreg, arglo);
argreg = tcg_out_arg_reg32(s, argreg, arghi);
return argreg;
}
static inline void tcg_out_arg_stacktidy(TCGContext *s, TCGReg argreg)
{
/* Output any necessary post-call cleanup of the stack */
if (argreg > 4) {
tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_R13, TCG_REG_R13, 0x10);
}
}
#endif
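Setting the instruction encoding aside, the placement rules these helpers implement are: the first four 32-bit arguments go in r0-r3, a 64-bit argument takes an even/odd pair, and anything after that spills to the stack, with the first spill also allocating the 16-byte scratch area. A sketch of just that bookkeeping, with no real code emission:

#include <stdio.h>

/* returns the next free argument slot; slots 0-3 are r0-r3, 4+ are stack */
static int place_arg32(int argreg, const char *name)
{
    if (argreg < 4) {
        printf("%s -> r%d\n", name, argreg);
    } else {
        printf("%s -> [sp, #%d]%s\n", name, (argreg - 4) * 4,
               argreg == 4 ? "  (allocates the stack area)" : "");
    }
    return argreg + 1;
}

static int place_arg64(int argreg, const char *lo, const char *hi)
{
    if (argreg & 1) {          /* 64-bit args need an even/odd pair */
        argreg++;
    }
    argreg = place_arg32(argreg, lo);
    return place_arg32(argreg, hi);
}

int main(void)
{
    /* e.g. a 64-bit-guest qemu_ld slow path: env, addr_lo:addr_hi, mem_index */
    int argreg = 0;
    argreg = place_arg32(argreg, "env");
    argreg = place_arg64(argreg, "addr_lo", "addr_hi");
    argreg = place_arg32(argreg, "mem_index");
    return 0;
}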
#define TLB_SHIFT (CPU_TLB_ENTRY_BITS + CPU_TLB_BITS)
@ -975,6 +1084,7 @@ static inline void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
int addr_reg, data_reg, data_reg2, bswap;
#ifdef CONFIG_SOFTMMU
int mem_index, s_bits;
TCGReg argreg;
# if TARGET_LONG_BITS == 64
int addr_reg2;
# endif
@ -1088,31 +1198,22 @@ static inline void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
tcg_out_b_noaddr(s, COND_EQ);
/* TODO: move this code to where the constants pool will be */
if (addr_reg != TCG_REG_R0) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R0, 0, addr_reg, SHIFT_IMM_LSL(0));
}
# if TARGET_LONG_BITS == 32
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R1, 0, mem_index);
# else
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R1, 0, addr_reg2, SHIFT_IMM_LSL(0));
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
# endif
/* Note that this code relies on the constraints we set in arm_op_defs[]
* to ensure that later arguments are not passed to us in registers we
* trash by moving the earlier arguments into them.
*/
argreg = TCG_REG_R0;
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal and incorrect for 64 bit */
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[2], 0,
tcg_target_call_iarg_regs[1], SHIFT_IMM_LSL(0));
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[1], 0,
tcg_target_call_iarg_regs[0], SHIFT_IMM_LSL(0));
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[0], 0, TCG_AREG0,
SHIFT_IMM_LSL(0));
argreg = tcg_out_arg_reg32(s, argreg, TCG_AREG0);
#endif
#if TARGET_LONG_BITS == 64
argreg = tcg_out_arg_reg64(s, argreg, addr_reg, addr_reg2);
#else
argreg = tcg_out_arg_reg32(s, argreg, addr_reg);
#endif
argreg = tcg_out_arg_imm32(s, argreg, mem_index);
tcg_out_call(s, (tcg_target_long) qemu_ld_helpers[s_bits]);
tcg_out_arg_stacktidy(s, argreg);
switch (opc) {
case 0 | 4:
@ -1211,6 +1312,7 @@ static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
int addr_reg, data_reg, data_reg2, bswap;
#ifdef CONFIG_SOFTMMU
int mem_index, s_bits;
TCGReg argreg;
# if TARGET_LONG_BITS == 64
int addr_reg2;
# endif
@ -1314,89 +1416,38 @@ static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
tcg_out_b_noaddr(s, COND_EQ);
/* TODO: move this code to where the constants pool will be */
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R0, 0, addr_reg, SHIFT_IMM_LSL(0));
# if TARGET_LONG_BITS == 32
switch (opc) {
case 0:
tcg_out_ext8u(s, COND_AL, TCG_REG_R1, data_reg);
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
break;
case 1:
tcg_out_ext16u(s, COND_AL, TCG_REG_R1, data_reg);
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
break;
case 2:
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R1, 0, data_reg, SHIFT_IMM_LSL(0));
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R2, 0, mem_index);
break;
case 3:
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R8, 0, mem_index);
tcg_out32(s, (COND_AL << 28) | 0x052d8010); /* str r8, [sp, #-0x10]! */
if (data_reg != TCG_REG_R2) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(0));
}
if (data_reg2 != TCG_REG_R3) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R3, 0, data_reg2, SHIFT_IMM_LSL(0));
}
break;
}
# else
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R1, 0, addr_reg2, SHIFT_IMM_LSL(0));
switch (opc) {
case 0:
tcg_out_ext8u(s, COND_AL, TCG_REG_R2, data_reg);
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
break;
case 1:
tcg_out_ext16u(s, COND_AL, TCG_REG_R2, data_reg);
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
break;
case 2:
if (data_reg != TCG_REG_R2) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(0));
}
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R3, 0, mem_index);
break;
case 3:
tcg_out_dat_imm(s, COND_AL, ARITH_MOV, TCG_REG_R8, 0, mem_index);
tcg_out32(s, (COND_AL << 28) | 0x052d8010); /* str r8, [sp, #-0x10]! */
if (data_reg != TCG_REG_R2) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R2, 0, data_reg, SHIFT_IMM_LSL(0));
}
if (data_reg2 != TCG_REG_R3) {
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
TCG_REG_R3, 0, data_reg2, SHIFT_IMM_LSL(0));
}
break;
}
# endif
/* Note that this code relies on the constraints we set in arm_op_defs[]
* to ensure that later arguments are not passed to us in registers we
* trash by moving the earlier arguments into them.
*/
argreg = TCG_REG_R0;
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal and incorrect for 64 bit */
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[3], 0,
tcg_target_call_iarg_regs[2], SHIFT_IMM_LSL(0));
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[2], 0,
tcg_target_call_iarg_regs[1], SHIFT_IMM_LSL(0));
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[1], 0,
tcg_target_call_iarg_regs[0], SHIFT_IMM_LSL(0));
tcg_out_dat_reg(s, COND_AL, ARITH_MOV,
tcg_target_call_iarg_regs[0], 0, TCG_AREG0,
SHIFT_IMM_LSL(0));
argreg = tcg_out_arg_reg32(s, argreg, TCG_AREG0);
#endif
#if TARGET_LONG_BITS == 64
argreg = tcg_out_arg_reg64(s, argreg, addr_reg, addr_reg2);
#else
argreg = tcg_out_arg_reg32(s, argreg, addr_reg);
#endif
switch (opc) {
case 0:
argreg = tcg_out_arg_reg8(s, argreg, data_reg);
break;
case 1:
argreg = tcg_out_arg_reg16(s, argreg, data_reg);
break;
case 2:
argreg = tcg_out_arg_reg32(s, argreg, data_reg);
break;
case 3:
argreg = tcg_out_arg_reg64(s, argreg, data_reg, data_reg2);
break;
}
argreg = tcg_out_arg_imm32(s, argreg, mem_index);
tcg_out_call(s, (tcg_target_long) qemu_st_helpers[s_bits]);
if (opc == 3)
tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_R13, TCG_REG_R13, 0x10);
tcg_out_arg_stacktidy(s, argreg);
reloc_pc24(label_ptr, (tcg_target_long)s->code_ptr);
#else /* !CONFIG_SOFTMMU */


@ -107,7 +107,7 @@ enum {
};
static const int tcg_target_reg_alloc_order[] = {
TCG_REG_R34,
TCG_REG_R33,
TCG_REG_R35,
TCG_REG_R36,
TCG_REG_R37,
@ -1532,12 +1532,13 @@ static inline void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
}
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal */
tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[2],
tcg_target_call_iarg_regs[1]);
tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
tcg_target_call_iarg_regs[0]);
tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0],
TCG_AREG0);
tcg_out_bundle(s, mII,
tcg_opc_a5 (TCG_REG_P7, OPC_ADDL_A5, TCG_REG_R58,
mem_index, TCG_REG_R0),
tcg_opc_a4 (TCG_REG_P7, OPC_ADDS_A4,
TCG_REG_R57, 0, TCG_REG_R56),
tcg_opc_a4 (TCG_REG_P7, OPC_ADDS_A4,
TCG_REG_R56, 0, TCG_AREG0));
#endif
if (!bswap || s_bits == 0) {
tcg_out_bundle(s, miB,
@ -1659,15 +1660,21 @@ static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal */
tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3],
tcg_target_call_iarg_regs[2]);
tcg_out_mov(s, TCG_TYPE_I64, tcg_target_call_iarg_regs[2],
tcg_target_call_iarg_regs[1]);
tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
tcg_target_call_iarg_regs[0]);
tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0],
TCG_AREG0);
#endif
tcg_out_bundle(s, mII,
tcg_opc_a5 (TCG_REG_P7, OPC_ADDL_A5, TCG_REG_R59,
mem_index, TCG_REG_R0),
tcg_opc_a4 (TCG_REG_P7, OPC_ADDS_A4,
TCG_REG_R58, 0, TCG_REG_R57),
tcg_opc_a4 (TCG_REG_P7, OPC_ADDS_A4,
TCG_REG_R57, 0, TCG_REG_R56));
tcg_out_bundle(s, miB,
tcg_opc_m4 (TCG_REG_P6, opc_st_m4[opc],
data_reg, TCG_REG_R3),
tcg_opc_a4 (TCG_REG_P7, OPC_ADDS_A4,
TCG_REG_R56, 0, TCG_AREG0),
tcg_opc_b5 (TCG_REG_P7, OPC_BR_CALL_SPTK_MANY_B5,
TCG_REG_B0, TCG_REG_B6));
#else
tcg_out_bundle(s, miB,
tcg_opc_m4 (TCG_REG_P6, opc_st_m4[opc],
data_reg, TCG_REG_R3),
@ -1675,6 +1682,7 @@ static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
mem_index, TCG_REG_R0),
tcg_opc_b5 (TCG_REG_P7, OPC_BR_CALL_SPTK_MANY_B5,
TCG_REG_B0, TCG_REG_B6));
#endif
}
#else /* !CONFIG_SOFTMMU */
@ -2314,13 +2322,13 @@ static void tcg_target_qemu_prologue(TCGContext *s)
s->code_ptr += 16; /* skip GP */
/* prologue */
tcg_out_bundle(s, mII,
tcg_out_bundle(s, miI,
tcg_opc_m34(TCG_REG_P0, OPC_ALLOC_M34,
TCG_REG_R33, 32, 24, 0),
TCG_REG_R34, 32, 24, 0),
tcg_opc_a4 (TCG_REG_P0, OPC_ADDS_A4,
TCG_AREG0, 0, TCG_REG_R32),
tcg_opc_i21(TCG_REG_P0, OPC_MOV_I21,
TCG_REG_B6, TCG_REG_R33, 0),
tcg_opc_i22(TCG_REG_P0, OPC_MOV_I22,
TCG_REG_R32, TCG_REG_B0));
TCG_REG_B6, TCG_REG_R33, 0));
/* ??? If GUEST_BASE < 0x200000, we could load the register via
an ADDL in the M slot of the next bundle. */
@ -2334,10 +2342,10 @@ static void tcg_target_qemu_prologue(TCGContext *s)
}
tcg_out_bundle(s, miB,
tcg_opc_a4 (TCG_REG_P0, OPC_ADDS_A4,
TCG_AREG0, 0, TCG_REG_R32),
tcg_opc_a4 (TCG_REG_P0, OPC_ADDS_A4,
TCG_REG_R12, -frame_size, TCG_REG_R12),
tcg_opc_i22(TCG_REG_P0, OPC_MOV_I22,
TCG_REG_R32, TCG_REG_B0),
tcg_opc_b4 (TCG_REG_P0, OPC_BR_SPTK_MANY_B4, TCG_REG_B6));
/* epilogue */
@ -2351,7 +2359,7 @@ static void tcg_target_qemu_prologue(TCGContext *s)
tcg_out_bundle(s, miB,
tcg_opc_m48(TCG_REG_P0, OPC_NOP_M48, 0),
tcg_opc_i26(TCG_REG_P0, OPC_MOV_I_I26,
TCG_REG_PFS, TCG_REG_R33),
TCG_REG_PFS, TCG_REG_R34),
tcg_opc_b4 (TCG_REG_P0, OPC_BR_RET_SPTK_MANY_B4,
TCG_REG_B0));
}
@ -2403,7 +2411,7 @@ static void tcg_target_init(TCGContext *s)
tcg_regset_set_reg(s->reserved_regs, TCG_REG_R12); /* stack pointer */
tcg_regset_set_reg(s->reserved_regs, TCG_REG_R13); /* thread pointer */
tcg_regset_set_reg(s->reserved_regs, TCG_REG_R32); /* return address */
tcg_regset_set_reg(s->reserved_regs, TCG_REG_R33); /* PFS */
tcg_regset_set_reg(s->reserved_regs, TCG_REG_R34); /* PFS */
/* The following 3 are not in use, are call-saved, but *not* saved
by the prologue. Therefore we cannot use them without modifying


@ -217,6 +217,9 @@ static int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str)
tcg_regset_set(ct->u.regs, 0xffffffff);
#if defined(CONFIG_SOFTMMU)
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A0);
# if defined(CONFIG_TCG_PASS_AREG0) && (TARGET_LONG_BITS == 64)
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A2);
# endif
#endif
break;
case 'S': /* qemu_st constraint */
@ -224,10 +227,14 @@ static int target_parse_constraint(TCGArgConstraint *ct, const char **pct_str)
tcg_regset_set(ct->u.regs, 0xffffffff);
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A0);
#if defined(CONFIG_SOFTMMU)
# if TARGET_LONG_BITS == 64
# if (defined(CONFIG_TCG_PASS_AREG0) && TARGET_LONG_BITS == 32) || \
(!defined(CONFIG_TCG_PASS_AREG0) && TARGET_LONG_BITS == 64)
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A1);
# endif
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A2);
# if defined(CONFIG_TCG_PASS_AREG0) && TARGET_LONG_BITS == 64
tcg_regset_reset_reg(ct->u.regs, TCG_REG_A3);
# endif
#endif
break;
case 'I':
@ -382,7 +389,10 @@ static inline void tcg_out_nop(TCGContext *s)
static inline void tcg_out_mov(TCGContext *s, TCGType type,
TCGReg ret, TCGReg arg)
{
tcg_out_opc_reg(s, OPC_ADDU, ret, arg, TCG_REG_ZERO);
/* Simple reg-reg move, optimising out the 'do nothing' case */
if (ret != arg) {
tcg_out_opc_reg(s, OPC_ADDU, ret, arg, TCG_REG_ZERO);
}
}
static inline void tcg_out_movi(TCGContext *s, TCGType type,
@ -503,6 +513,67 @@ static inline void tcg_out_addi(TCGContext *s, int reg, tcg_target_long val)
}
}
/* Helper routines for marshalling helper function arguments into
* the correct registers and stack.
* arg_num is where we want to put this argument, and is updated to be ready
* for the next call. arg is the argument itself. Note that arg_num values
* 0..3 are real registers, while 4 and up go on the stack.
*
* We provide routines for arguments which are: immediate, 32 bit
* value in register, 16 and 8 bit values in register (which must be zero
* extended before use) and 64 bit value in a lo:hi register pair.
*/
#define DEFINE_TCG_OUT_CALL_IARG(NAME, ARGPARAM) \
static inline void NAME(TCGContext *s, int *arg_num, ARGPARAM) \
{ \
if (*arg_num < 4) { \
DEFINE_TCG_OUT_CALL_IARG_GET_ARG(tcg_target_call_iarg_regs[*arg_num]); \
} else { \
DEFINE_TCG_OUT_CALL_IARG_GET_ARG(TCG_REG_AT); \
tcg_out_st(s, TCG_TYPE_I32, TCG_REG_AT, TCG_REG_SP, 4 * (*arg_num)); \
} \
(*arg_num)++; \
}
#define DEFINE_TCG_OUT_CALL_IARG_GET_ARG(A) \
tcg_out_opc_imm(s, OPC_ANDI, A, arg, 0xff);
DEFINE_TCG_OUT_CALL_IARG(tcg_out_call_iarg_reg8, TCGReg arg)
#undef DEFINE_TCG_OUT_CALL_IARG_GET_ARG
#define DEFINE_TCG_OUT_CALL_IARG_GET_ARG(A) \
tcg_out_opc_imm(s, OPC_ANDI, A, arg, 0xffff);
DEFINE_TCG_OUT_CALL_IARG(tcg_out_call_iarg_reg16, TCGReg arg)
#undef DEFINE_TCG_OUT_CALL_IARG_GET_ARG
#define DEFINE_TCG_OUT_CALL_IARG_GET_ARG(A) \
tcg_out_movi(s, TCG_TYPE_I32, A, arg);
DEFINE_TCG_OUT_CALL_IARG(tcg_out_call_iarg_imm32, uint32_t arg)
#undef DEFINE_TCG_OUT_CALL_IARG_GET_ARG
/* We don't use the macro for this one to avoid an unnecessary reg-reg
move when storing to the stack. */
static inline void tcg_out_call_iarg_reg32(TCGContext *s, int *arg_num,
TCGReg arg)
{
if (*arg_num < 4) {
tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[*arg_num], arg);
} else {
tcg_out_st(s, TCG_TYPE_I32, arg, TCG_REG_SP, 4 * (*arg_num));
}
(*arg_num)++;
}
static inline void tcg_out_call_iarg_reg64(TCGContext *s, int *arg_num,
TCGReg arg_low, TCGReg arg_high)
{
(*arg_num) = (*arg_num + 1) & ~1;
#if defined(TCG_TARGET_WORDS_BIGENDIAN)
tcg_out_call_iarg_reg32(s, arg_num, arg_high);
tcg_out_call_iarg_reg32(s, arg_num, arg_low);
#else
tcg_out_call_iarg_reg32(s, arg_num, arg_low);
tcg_out_call_iarg_reg32(s, arg_num, arg_high);
#endif
}
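
The slot logic these helpers implement can be sketched as a small standalone model (illustrative only, not QEMU code; the function names and output format below are assumptions): the first four argument slots map to $a0..$a3, later slots spill to the stack at 4*slot bytes, and a 64-bit value is first rounded up to an even slot, as tcg_out_call_iarg_reg64() does above.

/* Standalone o32-style argument-placement model (illustrative only, not
 * QEMU code); it mirrors the slot logic of the tcg_out_call_iarg_* helpers. */
#include <stdio.h>

static const char *arg_regs[4] = { "a0", "a1", "a2", "a3" };

static void place_arg32(int *arg_num, const char *what)
{
    if (*arg_num < 4) {
        printf("%-10s -> $%s\n", what, arg_regs[*arg_num]);
    } else {
        printf("%-10s -> sp+%d\n", what, 4 * *arg_num);
    }
    (*arg_num)++;
}

static void place_arg64(int *arg_num, const char *lo, const char *hi)
{
    *arg_num = (*arg_num + 1) & ~1;  /* 64-bit args start on an even slot */
    /* a big-endian target would place the high word first */
    place_arg32(arg_num, lo);
    place_arg32(arg_num, hi);
}

int main(void)
{
    int arg_num = 0;
    place_arg32(&arg_num, "env");                 /* CONFIG_TCG_PASS_AREG0 */
    place_arg64(&arg_num, "addr_lo", "addr_hi");  /* 64-bit guest address */
    place_arg32(&arg_num, "mem_index");
    /* env -> $a0, addr_lo -> $a2, addr_hi -> $a3, mem_index -> sp+16 */
    return 0;
}

With CONFIG_TCG_PASS_AREG0 and a 64-bit guest address this is roughly why mem_index ends up spilled to the stack in the qemu_ld/qemu_st slow paths below.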
static void tcg_out_brcond(TCGContext *s, TCGCond cond, int arg1,
int arg2, int label_index)
{
@@ -792,18 +863,18 @@ static void *qemu_st_helpers[4] = {
static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args,
int opc)
{
int addr_regl, addr_reg1, addr_meml;
int addr_regl, addr_meml;
int data_regl, data_regh, data_reg1, data_reg2;
int mem_index, s_bits;
#if defined(CONFIG_SOFTMMU)
void *label1_ptr, *label2_ptr;
int sp_args;
int arg_num;
#endif
#if TARGET_LONG_BITS == 64
# if defined(CONFIG_SOFTMMU)
uint8_t *label3_ptr;
# endif
int addr_regh, addr_reg2, addr_memh;
int addr_regh, addr_memh;
#endif
data_regl = *args++;
if (opc == 3)
@@ -831,18 +902,13 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args,
}
#if TARGET_LONG_BITS == 64
# if defined(TCG_TARGET_WORDS_BIGENDIAN)
addr_reg1 = addr_regh;
addr_reg2 = addr_regl;
addr_memh = 0;
addr_meml = 4;
# else
addr_reg1 = addr_regl;
addr_reg2 = addr_regh;
addr_memh = 4;
addr_meml = 0;
# endif
#else
addr_reg1 = addr_regl;
addr_meml = 0;
#endif
@@ -875,22 +941,17 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args,
# endif
/* slow path */
sp_args = TCG_REG_A0;
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, addr_reg1);
# if TARGET_LONG_BITS == 64
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, addr_reg2);
arg_num = 0;
# ifdef CONFIG_TCG_PASS_AREG0
tcg_out_call_iarg_reg32(s, &arg_num, TCG_AREG0);
# endif
tcg_out_movi(s, TCG_TYPE_I32, sp_args++, mem_index);
# if TARGET_LONG_BITS == 64
tcg_out_call_iarg_reg64(s, &arg_num, addr_regl, addr_regh);
# else
tcg_out_call_iarg_reg32(s, &arg_num, addr_regl);
# endif
tcg_out_call_iarg_imm32(s, &arg_num, mem_index);
tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_T9, (tcg_target_long)qemu_ld_helpers[s_bits]);
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal and incorrect for 64 on 32 bit */
tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[2],
tcg_target_call_iarg_regs[1]);
tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
tcg_target_call_iarg_regs[0]);
tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0],
TCG_AREG0);
#endif
tcg_out_opc_reg(s, OPC_JALR, TCG_REG_RA, TCG_REG_T9, 0);
tcg_out_nop(s);
@@ -991,18 +1052,18 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args,
static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args,
int opc)
{
int addr_regl, addr_reg1, addr_meml;
int addr_regl, addr_meml;
int data_regl, data_regh, data_reg1, data_reg2;
int mem_index, s_bits;
#if defined(CONFIG_SOFTMMU)
uint8_t *label1_ptr, *label2_ptr;
int sp_args;
int arg_num;
#endif
#if TARGET_LONG_BITS == 64
# if defined(CONFIG_SOFTMMU)
uint8_t *label3_ptr;
# endif
int addr_regh, addr_reg2, addr_memh;
int addr_regh, addr_memh;
#endif
data_regl = *args++;
@@ -1024,18 +1085,13 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args,
#if TARGET_LONG_BITS == 64
addr_regh = *args++;
# if defined(TCG_TARGET_WORDS_BIGENDIAN)
addr_reg1 = addr_regh;
addr_reg2 = addr_regl;
addr_memh = 0;
addr_meml = 4;
# else
addr_reg1 = addr_regl;
addr_reg2 = addr_regh;
addr_memh = 4;
addr_meml = 0;
# endif
#else
addr_reg1 = addr_regl;
addr_meml = 0;
#endif
mem_index = *args;
@@ -1070,49 +1126,33 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args,
# endif
/* slow path */
sp_args = TCG_REG_A0;
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, addr_reg1);
arg_num = 0;
# ifdef CONFIG_TCG_PASS_AREG0
tcg_out_call_iarg_reg32(s, &arg_num, TCG_AREG0);
# endif
# if TARGET_LONG_BITS == 64
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, addr_reg2);
tcg_out_call_iarg_reg64(s, &arg_num, addr_regl, addr_regh);
# else
tcg_out_call_iarg_reg32(s, &arg_num, addr_regl);
# endif
switch(opc) {
case 0:
tcg_out_opc_imm(s, OPC_ANDI, sp_args++, data_reg1, 0xff);
tcg_out_call_iarg_reg8(s, &arg_num, data_regl);
break;
case 1:
tcg_out_opc_imm(s, OPC_ANDI, sp_args++, data_reg1, 0xffff);
tcg_out_call_iarg_reg16(s, &arg_num, data_regl);
break;
case 2:
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, data_reg1);
tcg_out_call_iarg_reg32(s, &arg_num, data_regl);
break;
case 3:
sp_args = (sp_args + 1) & ~1;
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, data_reg1);
tcg_out_mov(s, TCG_TYPE_I32, sp_args++, data_reg2);
tcg_out_call_iarg_reg64(s, &arg_num, data_regl, data_regh);
break;
default:
tcg_abort();
}
if (sp_args > TCG_REG_A3) {
/* Push mem_index on the stack */
tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_AT, mem_index);
tcg_out_st(s, TCG_TYPE_I32, TCG_REG_AT, TCG_REG_SP, 16);
} else {
tcg_out_movi(s, TCG_TYPE_I32, sp_args, mem_index);
}
tcg_out_call_iarg_imm32(s, &arg_num, mem_index);
tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_T9, (tcg_target_long)qemu_st_helpers[s_bits]);
#ifdef CONFIG_TCG_PASS_AREG0
/* XXX/FIXME: suboptimal and incorrect for 64 on 32 bit */
tcg_out_mov(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3],
tcg_target_call_iarg_regs[2]);
tcg_out_mov(s, TCG_TYPE_I64, tcg_target_call_iarg_regs[2],
tcg_target_call_iarg_regs[1]);
tcg_out_mov(s, TCG_TYPE_TL, tcg_target_call_iarg_regs[1],
tcg_target_call_iarg_regs[0]);
tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0],
TCG_AREG0);
#endif
tcg_out_opc_reg(s, OPC_JALR, TCG_REG_RA, TCG_REG_T9, 0);
tcg_out_nop(s);


@@ -466,6 +466,58 @@ static void simple_dict(void)
}
}
/*
* This generates JSON of the form:
* a(0,m) = [0, 1, ..., m-1]
* a(n,m) = {
* 'key0': a(0,m),
* 'key1': a(1,m),
* ...
* 'key(n-1)': a(n-1,m)
* }
*/
static void gen_test_json(GString *gstr, int nest_level_max,
int elem_count)
{
int i;
g_assert(gstr);
if (nest_level_max == 0) {
g_string_append(gstr, "[");
for (i = 0; i < elem_count; i++) {
g_string_append_printf(gstr, "%d", i);
if (i < elem_count - 1) {
g_string_append_printf(gstr, ", ");
}
}
g_string_append(gstr, "]");
return;
}
g_string_append(gstr, "{");
for (i = 0; i < nest_level_max; i++) {
g_string_append_printf(gstr, "'key%d': ", i);
gen_test_json(gstr, i, elem_count);
if (i < nest_level_max - 1) {
g_string_append(gstr, ",");
}
}
g_string_append(gstr, "}");
}
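
To make the recurrence in the comment above concrete, here is a hypothetical caller (the depth/width values 2 and 3 are illustrative; the test itself uses 10 and 100) together with the string it builds:

/* Illustrative fragment only -- assumes the glib environment of this file. */
GString *gstr = g_string_new("");
gen_test_json(gstr, 2, 3);
/* gstr->str is now:
 *   {'key0': [0, 1, 2],'key1': {'key0': [0, 1, 2]}}
 * i.e. a(2,3) contains a(0,3) under 'key0' and a(1,3) under 'key1'. */
g_string_free(gstr, true);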
static void large_dict(void)
{
GString *gstr = g_string_new("");
QObject *obj;
gen_test_json(gstr, 10, 100);
obj = qobject_from_json(gstr->str);
g_assert(obj != NULL);
qobject_decref(obj);
g_string_free(gstr, true);
}
static void simple_list(void)
{
int i;
@@ -706,6 +758,7 @@ int main(int argc, char **argv)
g_test_add_func("/literals/keyword", keyword_literal);
g_test_add_func("/dicts/simple_dict", simple_dict);
g_test_add_func("/dicts/large_dict", large_dict);
g_test_add_func("/lists/simple_list", simple_list);
g_test_add_func("/whitespace/simple_whitespace", simple_whitespace);


@@ -290,6 +290,11 @@ void qtest_qmp(QTestState *s, const char *fmt, ...)
continue;
}
if (len == -1 || len == 0) {
fprintf(stderr, "Broken pipe\n");
exit(1);
}
switch (c) {
case '{':
nesting++;


@@ -293,26 +293,219 @@ test store_prohibited
assert eq, a2, a3
test_end
test dtlb_autoload
set_vector kernel, 0
movi a2, 0xd4000000
/* Set up page table entry vaddr->paddr, ring=pte_ring, attr=pte_attr
* and DTLB way 7 to cover this PTE, ring=pt_ring, attr=pt_attr
*/
.macro pt_setup pt_ring, pt_attr, pte_ring, vaddr, paddr, pte_attr
movi a2, 0x80000000
wsr a2, ptevaddr
movi a3, 0x00001013
s32i a3, a2, 4
movi a3, 0x80000007 | (((\vaddr) >> 10) & 0xfffff000) /* way 7 */
movi a4, 0x04000003 | ((\pt_ring) << 4) /* PADDR 64M */
wdtlb a4, a3
isync
movi a3, ((\paddr) & 0xfffff000) | ((\pte_ring) << 4) | (\pte_attr)
movi a1, ((\vaddr) >> 12) << 2
add a2, a1, a2
s32i a3, a2, 0
movi a3, 0x80000007 | (((\vaddr) >> 10) & 0xfffff000) /* way 7 */
movi a4, 0x04000000 | ((\pt_ring) << 4) | (\pt_attr) /* PADDR 64M */
wdtlb a4, a3
isync
movi a3, (\vaddr)
.endm
/* out: PS.RING=ring, PS.EXCM=excm, a3=vaddr */
.macro go_ring ring, excm, vaddr
movi a3, 10f
pitlb a3, a3
ritlb1 a2, a3
movi a1, 0x10
or a2, a2, a1
movi a1, 0x000ff000
and a3, a3, a1
movi a1, 4
or a3, a3, a1
witlb a2, a3
movi a3, 10f
movi a1, 0x000fffff
and a1, a3, a1
movi a2, 0
wsr a2, excvaddr
movi a3, \vaddr
movi a2, 0x4000f | ((\ring) << 6) | ((\excm) << 4)
jx a1
10:
wsr a2, ps
isync
.endm
/* in: a3 -- virtual address to test */
.macro assert_auto_tlb
movi a2, 0x4000f
wsr a2, ps
isync
pdtlb a2, a3
movi a1, 0xfffff01f
and a2, a2, a1
movi a1, 0xfffff000
and a1, a1, a3
xor a1, a1, a2
assert gei, a1, 0x10
movi a2, 0x14
assert lt, a1, a2
.endm
/* in: a3 -- virtual address to test */
.macro assert_no_auto_tlb
movi a2, 0x4000f
wsr a2, ps
isync
pdtlb a2, a3
movi a1, 0x10
and a1, a1, a2
assert eqi, a1, 0
.endm
.macro assert_sr sr, v
rsr a2, \sr
movi a1, (\v)
assert eq, a1, a2
.endm
.macro assert_epc1_1m vaddr
movi a2, (\vaddr)
movi a1, 0xfffff
and a1, a1, a2
rsr a2, epc1
assert eq, a1, a2
.endm
test dtlb_autoload
set_vector kernel, 0
pt_setup 0, 3, 1, 0x1000, 0x1000, 3
assert_no_auto_tlb
l8ui a1, a3, 0
pdtlb a2, a3
movi a1, 0xfffff010
and a1, a1, a2
movi a3, 0x00001010
assert eq, a1, a3
movi a1, 0xf
and a1, a1, a2
assert lti, a1, 4
rsr a2, excvaddr
assert eq, a2, a3
assert_auto_tlb
test_end
test autoload_load_store_privilege
set_vector kernel, 0
set_vector double, 2f
pt_setup 0, 3, 0, 0x2000, 0x2000, 3
movi a3, 0x2004
assert_no_auto_tlb
movi a2, 0x4005f /* ring 1 + excm => cring == 0 */
wsr a2, ps
isync
1:
l32e a2, a3, -4 /* ring used */
test_fail
2:
rsr a2, excvaddr
addi a1, a3, -4
assert eq, a1, a2
assert_auto_tlb
assert_sr depc, 1b
assert_sr exccause, 26
test_end
test autoload_pte_load_prohibited
set_vector kernel, 2f
pt_setup 0, 3, 0, 0x3000, 0, 0xc
assert_no_auto_tlb
1:
l32i a2, a3, 0
test_fail
2:
rsr a2, excvaddr
assert eq, a2, a3
assert_auto_tlb
assert_sr epc1, 1b
assert_sr exccause, 28
test_end
test autoload_pt_load_prohibited
set_vector kernel, 2f
pt_setup 0, 0xc, 0, 0x4000, 0x4000, 3
assert_no_auto_tlb
1:
l32i a2, a3, 0
test_fail
2:
rsr a2, excvaddr
assert eq, a2, a3
assert_no_auto_tlb
assert_sr epc1, 1b
assert_sr exccause, 24
test_end
test autoload_pt_privilege
set_vector kernel, 2f
pt_setup 0, 3, 1, 0x5000, 0, 3
go_ring 1, 0, 0x5001
l8ui a2, a3, 0
1:
syscall
2:
rsr a2, excvaddr
assert eq, a2, a3
assert_auto_tlb
assert_epc1_1m 1b
assert_sr exccause, 1
test_end
test autoload_pte_privilege
set_vector kernel, 2f
pt_setup 0, 3, 0, 0x6000, 0, 3
go_ring 1, 0, 0x6001
1:
l8ui a2, a3, 0
syscall
2:
rsr a2, excvaddr
assert eq, a2, a3
assert_auto_tlb
assert_epc1_1m 1b
assert_sr exccause, 26
test_end
test autoload_3_level_pt
set_vector kernel, 2f
pt_setup 1, 3, 1, 0x00400000, 0, 3
pt_setup 1, 3, 1, 0x80001000, 0x2000000, 3
go_ring 1, 0, 0x00400001
1:
l8ui a2, a3, 0
syscall
2:
rsr a2, excvaddr
assert eq, a2, a3
assert_no_auto_tlb
assert_epc1_1m 1b
assert_sr exccause, 24
test_end
test_suite_end


@@ -161,8 +161,11 @@ static void trace(TraceEventID event, uint64_t x1, uint64_t x2, uint64_t x3,
}
timestamp = get_clock();
#if GLIB_CHECK_VERSION(2, 30, 0)
idx = g_atomic_int_add((gint *)&trace_idx, 1) % TRACE_BUF_LEN;
#else
idx = g_atomic_int_exchange_and_add((gint *)&trace_idx, 1) % TRACE_BUF_LEN;
#endif
trace_buf[idx] = (TraceRecord){
.event = event,
.timestamp_ns = timestamp,

vl.c

@@ -2659,6 +2659,7 @@ int main(int argc, char **argv, char **envp)
break;
case QEMU_OPTION_m: {
int64_t value;
uint64_t sz;
char *end;
value = strtosz(optarg, &end);
@@ -2666,12 +2667,12 @@ int main(int argc, char **argv, char **envp)
fprintf(stderr, "qemu: invalid ram size: %s\n", optarg);
exit(1);
}
if (value != (uint64_t)(ram_addr_t)value) {
sz = QEMU_ALIGN_UP((uint64_t)value, 8192);
ram_size = sz;
if (ram_size != sz) {
fprintf(stderr, "qemu: ram size too large\n");
exit(1);
}
ram_size = value;
break;
}
case QEMU_OPTION_mempath:


@@ -710,7 +710,8 @@ static void cpu_ioreq_pio(ioreq_t *req)
for (i = 0; i < req->count; i++) {
tmp = do_inp(req->addr, req->size);
cpu_physical_memory_write(req->data + (sign * i * req->size),
cpu_physical_memory_write(
req->data + (sign * i * (int64_t)req->size),
(uint8_t *) &tmp, req->size);
}
}
@@ -721,7 +722,8 @@ static void cpu_ioreq_pio(ioreq_t *req)
for (i = 0; i < req->count; i++) {
uint32_t tmp = 0;
cpu_physical_memory_read(req->data + (sign * i * req->size),
cpu_physical_memory_read(
req->data + (sign * i * (int64_t)req->size),
(uint8_t*) &tmp, req->size);
do_outp(req->addr, req->size, tmp);
}
@@ -738,12 +740,14 @@ static void cpu_ioreq_move(ioreq_t *req)
if (!req->data_is_ptr) {
if (req->dir == IOREQ_READ) {
for (i = 0; i < req->count; i++) {
cpu_physical_memory_read(req->addr + (sign * i * req->size),
cpu_physical_memory_read(
req->addr + (sign * i * (int64_t)req->size),
(uint8_t *) &req->data, req->size);
}
} else if (req->dir == IOREQ_WRITE) {
for (i = 0; i < req->count; i++) {
cpu_physical_memory_write(req->addr + (sign * i * req->size),
cpu_physical_memory_write(
req->addr + (sign * i * (int64_t)req->size),
(uint8_t *) &req->data, req->size);
}
}
@@ -752,16 +756,20 @@ static void cpu_ioreq_move(ioreq_t *req)
if (req->dir == IOREQ_READ) {
for (i = 0; i < req->count; i++) {
cpu_physical_memory_read(req->addr + (sign * i * req->size),
cpu_physical_memory_read(
req->addr + (sign * i * (int64_t)req->size),
(uint8_t*) &tmp, req->size);
cpu_physical_memory_write(req->data + (sign * i * req->size),
cpu_physical_memory_write(
req->data + (sign * i * (int64_t)req->size),
(uint8_t*) &tmp, req->size);
}
} else if (req->dir == IOREQ_WRITE) {
for (i = 0; i < req->count; i++) {
cpu_physical_memory_read(req->data + (sign * i * req->size),
cpu_physical_memory_read(
req->data + (sign * i * (int64_t)req->size),
(uint8_t*) &tmp, req->size);
cpu_physical_memory_write(req->addr + (sign * i * req->size),
cpu_physical_memory_write(
req->addr + (sign * i * (int64_t)req->size),
(uint8_t*) &tmp, req->size);
}
}


@@ -320,10 +320,6 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
target_phys_addr_t size;
int found = 0;
if (mapcache->last_address_vaddr == buffer) {
mapcache->last_address_index = -1;
}
QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
if (reventry->vaddr_req == buffer) {
paddr_index = reventry->paddr_index;
@@ -342,6 +338,11 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
QTAILQ_REMOVE(&mapcache->locked_entries, reventry, next);
g_free(reventry);
if (mapcache->last_address_index == paddr_index) {
mapcache->last_address_index = -1;
mapcache->last_address_vaddr = NULL;
}
entry = &mapcache->entry[paddr_index % mapcache->nr_buckets];
while (entry && (entry->paddr_index != paddr_index || entry->size != size)) {
pentry = entry;


@@ -219,6 +219,8 @@ void HELPER(simcall)(CPUXtensaState *env)
default:
qemu_log("%s(%d): not implemented\n", __func__, regs[2]);
regs[2] = -1;
regs[3] = ENOSYS;
break;
}
}