1. 21 Dec, 2018 1 commit
  2. 02 Nov, 2017 1 commit
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Greg Kroah-Hartman authored
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
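
      For illustration (this example is not part of the patch text itself),
      the identifier is a one-line comment at the very top of a file, e.g.
      for a C source file:

        // SPDX-License-Identifier: GPL-2.0

      Header files use the equivalent /* SPDX-License-Identifier: GPL-2.0 */
      comment form.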
      
      This patch is based on work done by Thomas Gleixner, Kate Stewart, and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard...
      b2441318
  3. 24 Feb, 2016 1 commit
    • x86/locking: Create stack frame in PV unlock · 16df4ff8
      Josh Poimboeuf authored
      
      
      The assembly PV_UNLOCK function is a callable non-leaf function that
      doesn't honor CONFIG_FRAME_POINTER, which can result in bad stack
      traces.
      
      Create a stack frame when CONFIG_FRAME_POINTER is enabled.
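
      For context (a hedged sketch, not the patch itself): CONFIG_FRAME_POINTER
      expects every non-leaf function to set up the conventional %rbp frame,
      and the kernel provides FRAME_BEGIN/FRAME_END helpers in <asm/frame.h>
      for exactly this. In plain x86-64 terms the fix amounts to something
      like the following; the symbol name is made up for the example and the
      fastpath body is elided:

        /* Sketch only: a hand-written callable function that cooperates with
         * frame-pointer based unwinding by building a normal %rbp frame. */
        asm(".pushsection .text;"
            ".globl pv_unlock_sketch;"
            "pv_unlock_sketch:"
            "  push %rbp;"              /* FRAME_BEGIN: save caller's %rbp    */
            "  mov  %rsp, %rbp;"        /* ...and start this function's frame */
            /* ... cmpxchg fastpath / call to the slowpath goes here ... */
            "  pop  %rbp;"              /* FRAME_END: tear the frame down     */
            "  ret;"
            ".popsection");
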
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <Waiman.Long@hpe.com>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/6685a72ddbbd0ad3694337cca0af4b4ea09f5f40.1453405861.git.jpoimboe@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      16df4ff8
  4. 23 Nov, 2015 1 commit
    • locking/pvqspinlock, x86: Optimize the PV unlock code path · d7804530
      Waiman Long authored
      
      
      The unlock function in queued spinlocks was optimized for better
      performance on bare metal systems at the expense of virtualized guests.
      
      For x86-64 systems, the unlock call needs to go through a
      PV_CALLEE_SAVE_REGS_THUNK() which saves and restores 8 64-bit
      registers before calling the real __pv_queued_spin_unlock()
      function. The thunk code may also be in a separate cacheline from
      __pv_queued_spin_unlock().
      
      This patch optimizes the PV unlock code path by:
      
       1) Moving the unlock slowpath code from the fastpath into a separate
          __pv_queued_spin_unlock_slowpath() function to make the fastpath
          as simple as possible (a C sketch of this split follows this list).
      
       2) For x86-64, hand-coding an assembly function that combines the
          register-saving thunk code with the fastpath code. Only registers
          that are used in the fastpath are saved and restored. If the
          fastpath fails, the slowpath function is called via another
          PV_CALLEE_SAVE_REGS_THUNK(). For 32-bit, it falls back to the C
          __pv_queued_spin_unlock() code, as the thunk saves and restores
          only one 32-bit register.
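
      A rough C sketch of the split described in step 1 (field and helper
      names follow the qspinlock code, but treat this as an illustration
      rather than a verbatim copy of the patch):

        __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
        {
                /* Fastpath: release an uncontended lock with one cmpxchg. */
                u8 locked = cmpxchg(&lock->locked, _Q_LOCKED_VAL, 0);

                if (likely(locked == _Q_LOCKED_VAL))
                        return;

                /* A waiter changed the lock byte: do the expensive
                 * vCPU-kicking work out of line. */
                __pv_queued_spin_unlock_slowpath(lock, locked);
        }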
      
      With a microbenchmark of a 5M lock-unlock loop, the table below shows
      the execution times before and after the patch with different numbers
      of threads in a VM running on a 32-core Westmere-EX box with x86-64
      4.2-rc1 based kernels:
      
        Threads	Before patch	After patch	% Change
        -------	------------	-----------	--------
           1		   134.1 ms	  119.3 ms	  -11%
           2		   1286  ms	   953  ms	  -26%
           3		   3715  ms	  3480  ms	  -6.3%
           4		   4092  ms	  3764  ms	  -8.0%
      Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Douglas Hatch <doug.hatch@hpe.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Scott J Norton <scott.norton@hpe.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1447114167-47185-5-git-send-email-Waiman.Long@hpe.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d7804530
  5. 08 May, 2015 1 commit
    • locking/pvqspinlock, x86: Implement the paravirt qspinlock call patching · f233f7f1
      Peter Zijlstra (Intel) authored
      
      
      We use the regular paravirt call patching to switch between:
      
        native_queued_spin_lock_slowpath()	__pv_queued_spin_lock_slowpath()
        native_queued_spin_unlock()		__pv_queued_spin_unlock()
      
      We use a callee-saved call for the unlock function, which reduces the
      i-cache footprint and allows 'inlining' of SPIN_UNLOCK functions
      again.
      
      We further optimize the unlock path by patching the direct call with a
      "movb $0,%arg1" if we are indeed using the native unlock code. This
      makes the unlock code almost as fast as the !PARAVIRT case.
      
      This significantly lowers the overhead of having
      CONFIG_PARAVIRT_SPINLOCKS enabled, even for native code.
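
      To make the "movb $0,%arg1" patching concrete (a sketch under the
      assumption of the x86-64 calling convention, where the first argument
      lives in %rdi): the native unlock is just a release store of zero to
      the lock byte, so the paravirt patcher can overwrite the call with that
      one instruction when running on bare metal:

        /* Native unlock: compiles down to "movb $0, (%rdi)" on x86-64. */
        static __always_inline void native_queued_spin_unlock(struct qspinlock *lock)
        {
                smp_store_release(&lock->locked, 0);
        }

        /* Bytes the patcher drops over the call site on native hardware,
         * i.e. the encoding of "movb $0, (%rdi)"; remaining call-site bytes
         * are padded with NOPs.  Sketch only. */
        static const unsigned char native_unlock_insn[] = { 0xc6, 0x07, 0x00 };
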
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Waiman Long <Waiman.Long@hp.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Daniel J Blueman <daniel@numascale.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Douglas Hatch <doug.hatch@hp.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paolo Bonzini <paolo.bonzini@gmail.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Scott J Norton <scott.norton@hp.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: virtualization@lists.linux-foundation.org
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/1429901803-29771-10-git-send-email-Waiman.Long@hp.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f233f7f1