1. 06 Sep, 2015 1 commit
  2. 30 Jul, 2015 1 commit
  3. 29 Jul, 2015 1 commit
  4. 23 Jul, 2015 2 commits
  5. 10 Jul, 2015 1 commit
  6. 05 Jun, 2015 2 commits
    • KVM: implement multiple address spaces · f481b069
      Paolo Bonzini authored
      
      
      Only two ioctls have to be modified; the address space id is
      placed in the higher 16 bits of their slot id argument.
      
      As of this patch, no architecture defines more than one
      address space; x86 will be the first.
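      The encoding described above can be sketched as follows; the helper
      names here are illustrative, not KVM's actual macros:

```c
#include <stdint.h>

/* Hypothetical sketch: the address space id occupies the upper 16 bits
 * of the 32-bit slot id argument passed to the ioctl, leaving the lower
 * 16 bits for the memslot index itself. */
static inline uint32_t pack_slot(uint16_t as_id, uint16_t slot_id)
{
    return ((uint32_t)as_id << 16) | slot_id;
}

static inline uint16_t slot_as_id(uint32_t packed) { return packed >> 16; }
static inline uint16_t slot_index(uint32_t packed) { return packed & 0xffffu; }
```

      With this scheme an architecture that defines only one address space
      keeps passing plain slot ids unchanged, since as_id 0 packs to the
      same value.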
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: add vcpu-specific functions to read/write/translate GFNs · 8e73485c
      Paolo Bonzini authored
      
      
      We need to hide SMRAM from guests not running in SMM.  Therefore, all
      uses of kvm_read_guest* and kvm_write_guest* must be changed to use
      different address spaces, depending on whether the VCPU is in system
      management mode.  We need to introduce a new family of functions for
      this purpose.
      
      For now, the VCPU-based functions behave the same as the existing
      per-VM ones; they just accept a different type for the
      first argument.  Later, however, they will be changed to use one of many
      "struct kvm_memslots" stored in struct kvm, through an architecture hook.
      VM-based functions will unconditionally use the first memslots pointer.
      
      Whenever possible, this patch introduces slot-based functions with an
      __ prefix, with two wrappers for generic and vcpu-based actions.
      The exceptions are kvm_read_guest and kvm_write_guest, which are copied
      into the new functions kvm_vcpu_read_guest and kvm_vcpu_write_guest.
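      The wrapper pattern described above can be sketched roughly like this;
      all names and signatures are simplified stand-ins, not the real KVM
      prototypes:

```c
#include <stddef.h>

/* Illustrative sketch: a slot-based __-prefixed core plus VM-based and
 * VCPU-based wrappers, mirroring the structure described above. */
struct kvm_memslots { int id; };
struct kvm      { struct kvm_memslots *memslots; };
struct kvm_vcpu { struct kvm *kvm; };

/* Architecture-hook stand-in: later this may select one of several
 * memslots (e.g. the SMM address space); for now it matches the
 * per-VM behavior exactly. */
static struct kvm_memslots *vcpu_memslots(struct kvm_vcpu *vcpu)
{
    return vcpu->kvm->memslots;
}

/* Slot-based core: does the actual work against one memslots array. */
static int __read_guest(struct kvm_memslots *slots, unsigned long gfn)
{
    (void)gfn;
    return slots ? slots->id : -1;    /* placeholder for the real copy */
}

/* VM-based wrapper: unconditionally uses the first memslots pointer. */
static int kvm_read_guest_sketch(struct kvm *kvm, unsigned long gfn)
{
    return __read_guest(kvm->memslots, gfn);
}

/* VCPU-based wrapper: goes through the architecture hook. */
static int kvm_vcpu_read_guest_sketch(struct kvm_vcpu *vcpu, unsigned long gfn)
{
    return __read_guest(vcpu_memslots(vcpu), gfn);
}
```

      The design point is that only the hook has to change once multiple
      address spaces exist; both wrappers keep their signatures.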
      Reviewed-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 04 Jun, 2015 1 commit
  8. 28 May, 2015 2 commits
  9. 26 May, 2015 2 commits
  10. 19 May, 2015 1 commit
    • KVM: export __gfn_to_pfn_memslot, drop gfn_to_pfn_async · 3520469d
      Paolo Bonzini authored
      
      
      gfn_to_pfn_async is used in just one place, and because of x86-specific
      treatment that place will need to look at the memory slot.  Hence inline
      it into try_async_pf and export __gfn_to_pfn_memslot.
      
      The patch also switches the subsequent call to gfn_to_pfn_prot to use
      __gfn_to_pfn_memslot.  This is a small optimization.  Finally, remove
      the now-unused async argument of __gfn_to_pfn.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  11. 07 May, 2015 2 commits
    • kvm,x86: load guest FPU context more eagerly · 653f52c3
      Rik van Riel authored
      
      
      Currently KVM will clear the FPU bits in CR0.TS in the VMCS, and trap to
      re-load them every time the guest accesses the FPU after a switch back into
      the guest from the host.
      
      This patch copies the x86 task switch semantics for FPU loading, with the
      FPU loaded eagerly after first use if the system uses eager fpu mode,
      or if the guest uses the FPU frequently.
      
      In the latter case, after the FPU has been loaded 255 times, the
      fpu_counter will roll over, and we revert to loading the FPU on demand
      until it has been established that the guest is still actively using it.
      
      This mirrors the x86 task switch policy, which seems to work.
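      The rollover policy above can be illustrated with a small sketch; the
      names are hypothetical, standing in for the vcpu's real fpu_counter
      field:

```c
#include <stdint.h>

/* Sketch of the counter policy described above: an 8-bit counter is
 * bumped on each FPU activation, and once it wraps back to zero we
 * fall back to on-demand loading. */
static uint8_t fpu_counter;

static void note_fpu_activation(void)
{
    fpu_counter++;                    /* wraps after 256 increments */
}

static int keep_fpu_eager(void)
{
    return fpu_counter != 0;          /* nonzero => guest actively uses FPU */
}
```

      An 8-bit counter is exactly why the threshold in the text is 255: the
      wrap-around itself is the "revert to on demand" signal, so no extra
      comparison or reset is needed.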
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: provide irq_unsafe kvm_guest_{enter|exit} · 0097d12e
      Christian Borntraeger authored
      
      
      Several KVM architectures disable interrupts before kvm_guest_enter,
      which then uses local_irq_save/restore to disable interrupts again or
      for the first time. Let's provide underscore versions of
      kvm_guest_{enter|exit} that assume they are called with interrupts
      already disabled. kvm_guest_enter now disables interrupts for the whole
      function, so we can remove the check for preemptible.
      
      This patch also adapts s390/kvm to call these new functions, using
      local_irq_disable/enable calls, which are slightly cheaper than
      local_irq_save/restore.
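      The locked/unlocked split above follows a common kernel pattern, which
      can be sketched in user space with a mock flag standing in for the real
      interrupt state (none of these names are the actual kernel symbols):

```c
/* Mock irq state: 1 = interrupts disabled. Stands in for the real
 * local_irq_save/restore machinery, which is not available here. */
static int irqs_off;
static int entered_with_irqs_off;

/* Underscore variant: the caller guarantees interrupts are already
 * disabled, so no save/restore is done here. */
static void __guest_enter_sketch(void)
{
    entered_with_irqs_off = irqs_off;   /* record the precondition */
}

/* Irq-safe variant: disables interrupts itself around the core,
 * as kvm_guest_enter did for callers that had not done so. */
static void guest_enter_sketch(void)
{
    irqs_off = 1;                       /* stands in for local_irq_save() */
    __guest_enter_sketch();
    irqs_off = 0;                       /* stands in for local_irq_restore() */
}
```

      Architectures that already run with interrupts disabled call the
      underscore variant directly and skip the redundant save/restore.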
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  12. 08 Apr, 2015 1 commit
    • KVM: x86: BSP in MSR_IA32_APICBASE is writable · 58d269d8
      Nadav Amit authored
      
      
      After reset, the CPU can change the BSP, which will be used upon INIT.
      Reset should return the BSP that QEMU asked for, and it should therefore
      be handled accordingly.
      
      To quote: "If the MP protocol has completed and a BSP is chosen, subsequent
      INITs (either to a specific processor or system wide) do not cause the MP
      protocol to be repeated."
      [Intel SDM 8.4.2: MP Initialization Protocol Requirements and Restrictions]
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Message-Id: <1427933438-12782-3-git-send-email-namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  13. 26 Mar, 2015 1 commit
  14. 12 Mar, 2015 1 commit
  15. 10 Mar, 2015 1 commit
  16. 09 Mar, 2015 1 commit
    • kvm,rcu,nohz: use RCU extended quiescent state when running KVM guest · 126a6a54
      Rik van Riel authored
      
      
      The host kernel is not doing anything while the CPU is executing
      a KVM guest VCPU, so it can be marked as being in an extended
      quiescent state, identical to that used when running user space
      code.
      
      The only exception to that rule is when the host handles an
      interrupt, which is already handled by the irq code, which
      calls rcu_irq_enter and rcu_irq_exit.
      
      The guest_enter and guest_exit functions already switch vtime
      accounting independent of context tracking. Leave those calls
      where they are, instead of moving them into the context tracking
      code.
      Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  17. 12 Feb, 2015 1 commit
  18. 05 Feb, 2015 1 commit
  19. 29 Jan, 2015 1 commit
  20. 23 Jan, 2015 1 commit
  21. 20 Jan, 2015 2 commits
  22. 16 Jan, 2015 1 commit
  23. 04 Dec, 2014 2 commits
    • kvm: optimize GFN to memslot lookup with large slots amount · 9c1a5d38
      Igor Mammedov authored
      
      
      The current linear search doesn't scale well when a large
      number of memslots is used and the looked-up slot is not
      at the beginning of the memslots array.
      Taking into account that memslots don't overlap, it's
      possible to switch the sorting order of the memslots array from
      'npages' to 'base_gfn' and use binary search for
      memslot lookup by GFN.
      
      As a result of switching to binary search, lookup times
      are reduced with a large number of memslots.
      
      The following table shows search_memslot() cycles during a
      WS2008R2 guest boot.
      
                                    max      boot, mostly same    boot + ~10 min of use,
                                    cycles   slot lookup,         randomized lookup,
                                             average cycles       average cycles
      
      13 slots, linear search   :   1450     28                   30
      13 slots, binary search   :   1400     30                   40
      117 slots, linear search  :   13000    30                   460
      117 slots, binary search  :   2000     35                   180
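      The lookup described above can be sketched as a lower-bound binary
      search over an array sorted by base_gfn in descending order; field and
      function names here are simplified, not the exact kernel code:

```c
#include <stddef.h>

/* Sketch: memslots don't overlap and are sorted by base_gfn in
 * descending order, so the slot covering a GFN (the one with the
 * largest base_gfn <= gfn) is found by binary search. */
struct memslot { unsigned long base_gfn, npages; };

static struct memslot *gfn_to_memslot_sketch(struct memslot *slots, int n,
                                             unsigned long gfn)
{
    int lo = 0, hi = n;               /* search range [lo, hi) */

    while (lo < hi) {
        int mid = lo + (hi - lo) / 2;
        if (gfn >= slots[mid].base_gfn)
            hi = mid;                 /* candidate is at mid or before it */
        else
            lo = mid + 1;             /* base_gfn too large, skip forward */
    }
    if (lo < n && gfn >= slots[lo].base_gfn &&
        gfn < slots[lo].base_gfn + slots[lo].npages)
        return &slots[lo];
    return NULL;                      /* gfn not backed by any slot */
}
```

      The final range check is what turns "largest base_gfn not above gfn"
      into "slot actually containing gfn": a GFN falling in a hole between
      slots lands on the nearest lower slot and is rejected by the npages
      bound.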
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm: search_memslots: add simple LRU memslot caching · d4ae84a0
      Igor Mammedov authored
      
      
      In a typical guest boot workload only 2-3 memslots are used
      extensively, and even then it is mostly the same memslot
      being looked up each time.
      
      Adding an LRU cache improves the average lookup time from
      46 to 28 cycles (~40%) for this workload.
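      The caching idea can be sketched as a single remembered index checked
      before the full search; the field names are simplified relative to the
      kernel's structures:

```c
#include <stddef.h>

/* Sketch of a last-used-slot cache: one cached index is tried first,
 * and refreshed whenever the slow path finds a different slot. */
struct memslot  { unsigned long base_gfn, npages; };
struct memslots {
    struct memslot slot[8];
    int used;                         /* number of valid entries */
    int lru_slot;                     /* index of the last hit */
};

static int covers(const struct memslot *s, unsigned long gfn)
{
    return gfn >= s->base_gfn && gfn < s->base_gfn + s->npages;
}

static struct memslot *search_memslots_sketch(struct memslots *ms,
                                              unsigned long gfn)
{
    int i = ms->lru_slot;

    /* Fast path: boot workloads mostly repeat the same lookup. */
    if (i >= 0 && i < ms->used && covers(&ms->slot[i], gfn))
        return &ms->slot[i];

    for (i = 0; i < ms->used; i++)    /* slow path, then refresh cache */
        if (covers(&ms->slot[i], gfn)) {
            ms->lru_slot = i;
            return &ms->slot[i];
        }
    return NULL;
}
```

      A single cached index is enough here precisely because the workload
      measurement above showed most lookups hit the same slot; a deeper
      cache would add bookkeeping with little gain.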
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  24. 26 Nov, 2014 1 commit
    • kvm: fix kvm_is_mmio_pfn() and rename to kvm_is_reserved_pfn() · d3fccc7e
      Ard Biesheuvel authored
      This reverts commit 85c8555f ("KVM: check for !is_zero_pfn() in
      kvm_is_mmio_pfn()") and renames the function to kvm_is_reserved_pfn.
      
      The problem being addressed by the patch above was that some ARM code
      based the memory mapping attributes of a pfn on the return value of
      kvm_is_mmio_pfn(), whose name indeed suggests that such pfns should
      be mapped as device memory.
      
      However, kvm_is_mmio_pfn() doesn't do quite what it says on the tin,
      and the existing non-ARM users were already using it in a way which
      suggests that its name should probably have been 'kvm_is_reserved_pfn'
      from the beginning, e.g., whether or not to call get_page/put_page on
      it etc. This means that returning false for the zero page is a mistake
      and the patch above should be reverted.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  25. 25 Nov, 2014 2 commits
  26. 24 Nov, 2014 1 commit
  27. 23 Nov, 2014 1 commit
  28. 21 Nov, 2014 1 commit
  29. 24 Oct, 2014 1 commit
    • kvm: vfio: fix unregister kvm_device_ops of vfio · 571ee1b6
      Wanpeng Li authored
      After commit 80ce1639 (KVM: VFIO: register kvm_device_ops dynamically),
      kvm_device_ops of vfio can be registered dynamically. Commit 3c3c29fd
      (kvm-vfio: do not use module_init) moved the dynamic registration so that
      it is invoked by kvm_init, in order to fix broken unloading of the kvm
      module. However, kvm_device_ops of vfio is not unregistered after rmmod
      of the kvm-intel module, which leads to a device type collision detection
      warning when the kvm-intel module is insmod'ed again.
      
          WARNING: CPU: 1 PID: 10358 at /root/cathy/kvm/arch/x86/kvm/../../../virt/kvm/kvm_main.c:3289 kvm_init+0x234/0x282 [kvm]()
          Modules linked in: kvm_intel(O+) kvm(O) nfsv3 nfs_acl auth_rpcgss oid_registry nfsv4 dns_resolver nfs fscache lockd sunrpc pci_stub bridge stp llc autofs4 8021q cpufreq_ondemand ipv6 joydev microcode pcspkr igb i2c_algo_bit ehci_pci ehci_hcd e1000e i2c_i801 ixgbe ptp pps_core hwmon mdio tpm_tis tpm ipmi_si ipmi_msghandler acpi_cpufreq isci libsas scsi_transport_sas button dm_mirror dm_region_hash dm_log dm_mod [last unloaded: kvm_intel]
          CPU: 1 PID: 10358 Comm: insmod Tainted: G        W  O   3.17.0-rc1 #2
          Hardware name: Intel Corporation S2600CP/S2600CP, BIOS RMLSDP.86I.00.29.D696.1311111329 11/11/2013
           0000000000000cd9 ffff880ff08cfd18 ffffffff814a61d9 0000000000000cd9
           0000000000000000 ffff880ff08cfd58 ffffffff810417b7 ffff880ff08cfd48
           ffffffffa045bcac ffffffffa049c420 0000000000000040 00000000000000ff
          Call Trace:
           [<ffffffff814a61d9>] dump_stack+0x49/0x60
           [<ffffffff810417b7>] warn_slowpath_common+0x7c/0x96
           [<ffffffffa045bcac>] ? kvm_init+0x234/0x282 [kvm]
           [<ffffffff810417e6>] warn_slowpath_null+0x15/0x17
           [<ffffffffa045bcac>] kvm_init+0x234/0x282 [kvm]
           [<ffffffffa016e995>] vmx_init+0x1bf/0x42a [kvm_intel]
           [<ffffffffa016e7d6>] ? vmx_check_processor_compat+0x64/0x64 [kvm_intel]
           [<ffffffff810002ab>] do_one_initcall+0xe3/0x170
           [<ffffffff811168a9>] ? __vunmap+0xad/0xb8
           [<ffffffff8109c58f>] do_init_module+0x2b/0x174
           [<ffffffff8109d414>] load_module+0x43e/0x569
           [<ffffffff8109c6d8>] ? do_init_module+0x174/0x174
           [<ffffffff8109c75a>] ? copy_module_from_user+0x39/0x82
           [<ffffffff8109b7dd>] ? module_sect_show+0x20/0x20
           [<ffffffff8109d65f>] SyS_init_module+0x54/0x81
           [<ffffffff814a9a12>] system_call_fastpath+0x16/0x1b
          ---[ end trace 0626f4a3ddea56f3 ]---
      
      The bug can be reproduced by:
      
          rmmod kvm_intel.ko
          insmod kvm_intel.ko
      
      without rmmod/insmod of kvm.ko.
      
      This patch fixes the bug by unregistering kvm_device_ops of vfio when
      the kvm-intel module is removed.
      Reported-by: Liu Rongrong <rongrongx.liu@intel.com>
      Fixes: 3c3c29fd
      
      Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      571ee1b6
  30. 24 Sep, 2014 3 commits