1. 08 Nov, 2019 13 commits
  2. 06 Nov, 2019 1 commit
  3. 15 Jun, 2019 1 commit
    • ARM: prevent tracing IPI_CPU_BACKTRACE · e0c3fc1f
      Arnd Bergmann authored
      [ Upstream commit be167862 ]
      
      Patch series "compiler: allow all arches to enable
      CONFIG_OPTIMIZE_INLINING", v3.
      
      This patch (of 11):
      
      When function tracing for IPIs is enabled, we get a warning for an
      overflow of the ipi_types array with the IPI_CPU_BACKTRACE type as
      triggered by raise_nmi():
      
        arch/arm/kernel/smp.c: In function 'raise_nmi':
        arch/arm/kernel/smp.c:489:2: error: array subscript is above array bounds [-Werror=array-bounds]
          trace_ipi_raise(target, ipi_types[ipinr]);
      
      This is a correct warning as we actually overflow the array here.
      
      This patch changes raise_nmi() to call __smp_cross_call() instead of
      smp_cross_call(), to avoid calling into ftrace.  For clarification, I'm
      also adding two new code comments describing how this one is special.
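
      A minimal sketch of the resulting raise_nmi() (illustrative, not the
      verbatim diff):

        /*
         * Sketch: raise_nmi() bypasses the traced wrapper so the
         * out-of-range IPI_CPU_BACKTRACE type never reaches the
         * trace_ipi_raise() tracepoint and its ipi_types[] lookup.
         */
        static void raise_nmi(cpumask_t *mask)
        {
                __smp_cross_call(mask, IPI_CPU_BACKTRACE);
        }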
      
      The warning appears to have shown up after commit e7273ff4 ("ARM:
      8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI"), which changed the
      number assignment from '15' to '8', but as far as I can tell has existed
      since the IPI tracepoints were first introduced.  If we decide to
      backport this patch to stable kernels, we probably need to backport
      e7273ff4 as well.
      
      [yamada.masahiro@socionext.com: rebase on v5.1-rc1]
      Link: http://lkml.kernel.org/r/20190423034959.13525-2-yamada.masahiro@socionext.com
      Fixes: e7273ff4 ("ARM: 8488/1: Make IPI_CPU_BACKTRACE a "non-secure" SGI")
      Fixes: 365ec7b1 ("ARM: add IPI tracepoints") # v3.17
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Christophe Leroy <christophe.leroy@c-s.fr>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Stefan Agner <stefan@agner.ch>
      Cc: Boris Brezillon <bbrezillon@kernel.org>
      Cc: Miquel Raynal <miquel.raynal@bootlin.com>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Brian Norris <computersforpeace@gmail.com>
      Cc: Marek Vasut <marek.vasut@gmail.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      e0c3fc1f
  4. 04 Jun, 2019 1 commit
    • jump_label: move 'asm goto' support test to Kconfig · 0276ebf1
      Masahiro Yamada authored
      commit e9666d10 upstream.
      
      Currently, CONFIG_JUMP_LABEL just means "I _want_ to use jump label".
      
      The jump label is controlled by HAVE_JUMP_LABEL, which is defined
      like this:
      
        #if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_JUMP_LABEL)
        # define HAVE_JUMP_LABEL
        #endif
      
      We can improve this by testing 'asm goto' support in Kconfig, then
      make JUMP_LABEL depend on CC_HAS_ASM_GOTO.
      
      The ugly #ifdef HAVE_JUMP_LABEL will go away, and CONFIG_JUMP_LABEL will
      match the real kernel capability.
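
      As a rough illustration of the effect on C code (not the verbatim
      diff): since JUMP_LABEL now depends on CC_HAS_ASM_GOTO in Kconfig, the
      intermediate macro is unnecessary and code can test the CONFIG symbol
      directly:

        /* before */
        #ifdef HAVE_JUMP_LABEL
        /* static-key fast path built with 'asm goto' */
        #endif

        /* after */
        #ifdef CONFIG_JUMP_LABEL
        /* static-key fast path built with 'asm goto' */
        #endif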
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
      [nc: Fix trivial conflicts in 4.19
           arch/xtensa/kernel/jump_label.c doesn't exist yet
           Ensured CC_HAVE_ASM_GOTO and HAVE_JUMP_LABEL were sufficiently
           eliminated]
      Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregk...
      0276ebf1
  5. 16 May, 2019 1 commit
  6. 20 Apr, 2019 1 commit
  7. 05 Apr, 2019 3 commits
    • ARM: avoid Cortex-A9 livelock on tight dmb loops · 30d503ba
      Russell King authored
      [ Upstream commit 5388a5b8 ]
      
      machine_crash_nonpanic_core() does this:
      
      	while (1)
      		cpu_relax();
      
      because the kernel has crashed, and we have no known safe way to deal
      with the CPU.  So, we place the CPU into an infinite loop which we
      expect it to never exit - at least not until the system as a whole is
      reset by some method.
      
      In the absence of erratum 754327, this code assembles to:
      
      	b	.
      
      In other words, an infinite loop.  When erratum 754327 is enabled,
      this becomes:
      
      1:	dmb
      	b	1b
      
      It has been observed on some systems (eg, OMAP4) that, if a crash is
      triggered, the system tries to kexec into the panic kernel but fails
      after taking the secondary CPU down - placing it into one of these
      loops.  This causes the system to livelock, and the most noticeable
      effect is that the system stops after issuing:
      
      	Loading crashdump kernel...
      
      to the system console.
      
      The solution I came up with, and tested as working, was to add wfe() to
      these infinite loops like so:
      
      	while (1) {
      		cpu_relax();
      		wfe();
      	}
      
      which, without 754327, builds to:
      
      1:	wfe
      	b	1b
      
      or, with 754327 enabled, to:
      
      1:	dmb
      	wfe
      	b	1b
      
      Adding "wfe" does two things depending on the environment we're running
      under:
      - where we're running on bare metal, and the processor implements
        "wfe", it stops us spinning endlessly in a loop where we're never
        going to do any useful work.
      - if we're running in a VM, it allows the CPU to be given back to the
        hypervisor and rescheduled for other purposes (maybe a different VM)
        rather than wasting CPU cycles inside a crashed VM.
      
      However, in light of erratum 794072, Will Deacon wanted to see 10 nops
      as well - which is reasonable to cover the case where we have erratum
      754327 enabled _and_ we have a processor that doesn't implement the
      wfe hint.
      
      So, we now end up with:
      
      1:      wfe
              b       1b
      
      when erratum 754327 is disabled, or:
      
      1:      dmb
              nop
              nop
              nop
              nop
              nop
              nop
              nop
              nop
              nop
              nop
              wfe
              b       1b
      
      when erratum 754327 is enabled.  We also get the dmb + 10 nop
      sequence elsewhere in the kernel, in terminating loops.
      
      This is reasonable - it means we get the workaround for erratum
      794072 when erratum 754327 is enabled, but still relinquish the dead
      processor - either by placing it in a lower power mode when wfe is
      implemented as such, or by returning it to the hypervisor; in the
      case where wfe is a no-op, we use the workaround specified in erratum
      794072 to avoid the problem.
      
      These are two entirely orthogonal problems - the 10 nops address
      erratum 794072, and the wfe is an optimisation that makes the system
      more efficient when crashed, either in terms of power consumption or
      by allowing the host/other VMs to make use of the CPU.
      
      I don't see any reason not to use kexec() inside a VM - it has the
      potential to provide automated recovery from a failure of the VM's
      kernel with the opportunity for saving a crashdump of the failure.
      A panic() with a reboot timeout won't do that, and reading the
      libvirt documentation, setting on_reboot to "preserve" won't either
      (the documentation states "The preserve action for an on_reboot event
      is treated as a destroy".)  Surely it has to be a good thing to
      avoid having CPUs spinning inside a VM that is doing no useful
      work.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      30d503ba
    • ARM: 8830/1: NOMMU: Toggle only bits in EXC_RETURN we are really care of · d8945878
      Vladimir Murzin authored
      [ Upstream commit 72cd4064 ]
      
      ARMv8M introduces support for the Security extension to the M class;
      among other things it affects exception handling, especially the
      encoding of EXC_RETURN.
      
      The following new bits have been added:
      
      Bit [6]	Secure or Non-secure stack
      Bit [5]	Default callee register stacking
      Bit [0]	Exception Secure
      
      which conflicts with the hard-coded value of EXC_RETURN.
      
      In fact, we only care about a few bits:
      
      Bit [3]	 Mode (0 - Handler, 1 - Thread)
      Bit [2]	 Stack pointer selection (0 - Main, 1 - Process)
      
      We can toggle only those bits and leave the other bits as they were on
      exception entry.
      
      That is basically what the patch does - it saves EXC_RETURN when we
      transition from Thread to Handler mode (the first svc), so the saved
      value is used later instead of EXC_RET_THREADMODE_PROCESSSTACK.
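
      A minimal sketch of the idea, with illustrative names rather than the
      exact symbols from the patch:

        /*
         * Sketch: capture EXC_RETURN on the first svc and only flip the
         * Mode (bit[3]) and SPSEL (bit[2]) bits afterwards, leaving the
         * ARMv8-M Security-extension bits [6], [5] and [0] untouched.
         */
        #define EXC_RETURN_MODE_THREAD   (1UL << 3)
        #define EXC_RETURN_SPSEL_PROCESS (1UL << 2)

        static unsigned long saved_exc_return;  /* saved on the first svc */

        static unsigned long exc_return_to_thread(void)
        {
                return saved_exc_return | EXC_RETURN_MODE_THREAD |
                       EXC_RETURN_SPSEL_PROCESS;
        }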
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      d8945878
    • ARM: 8840/1: use a raw_spinlock_t in unwind · d81bdb3c
      Sebastian Andrzej Siewior authored
      [ Upstream commit 74ffe79a ]
      
      Mostly, unwind is done with irqs enabled; however, SLUB may call it with
      irqs disabled while creating a new SLUB cache.

      I had a system freeze while loading a module which called
      kmem_cache_create() on init.  That means SLUB's __slab_alloc() disabled
      interrupts and then took the following path (the fix is sketched after
      the call chain):
      
      ->new_slab_objects()
       ->new_slab()
        ->setup_object()
         ->setup_object_debug()
          ->init_tracking()
           ->set_track()
            ->save_stack_trace()
             ->save_stack_trace_tsk()
              ->walk_stackframe()
               ->unwind_frame()
                ->unwind_find_idx()
                 =>spin_lock_irqsave(&unwind_lock);
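
      A minimal sketch of the fix, assuming the names from
      arch/arm/kernel/unwind.c: make unwind_lock a raw_spinlock_t so it can
      safely be taken in this irqs-off context (and stays a spinning lock on
      PREEMPT_RT):

        static DEFINE_RAW_SPINLOCK(unwind_lock);

        static const struct unwind_idx *unwind_find_idx(unsigned long addr)
        {
                unsigned long flags;
                const struct unwind_idx *idx = NULL;

                raw_spin_lock_irqsave(&unwind_lock, flags);
                /* ... look up addr in the (module) unwind index tables ... */
                raw_spin_unlock_irqrestore(&unwind_lock, flags);
                return idx;
        }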
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      d81bdb3c
  8. 23 Mar, 2019 1 commit
  9. 20 Feb, 2019 10 commits
  10. 12 Feb, 2019 1 commit
  11. 05 Dec, 2018 1 commit
  12. 11 Oct, 2018 1 commit
  13. 23 Aug, 2018 1 commit
    • include/linux/compiler*.h: make compiler-*.h mutually exclusive · 815f0ddb
      Nick Desaulniers authored
      Commit cafa0010 ("Raise the minimum required gcc version to 4.6")
      recently exposed a brittle part of the build for supporting non-gcc
      compilers.
      
      Both Clang and ICC define __GNUC__, __GNUC_MINOR__, and
      __GNUC_PATCHLEVEL__ for quick compatibility with code bases that haven't
      added compiler specific checks for __clang__ or __INTEL_COMPILER.
      
      This is brittle, as they happened to get compatibility by posing as a
      certain version of GCC.  This broke when the minimal version of GCC
      required to build the kernel was raised to a version above what ICC and
      Clang claim to be.
      
      Rather than always including compiler-gcc.h then undefining or
      redefining macros in compiler-intel.h or compiler-clang.h, let's
      separate out the compiler specific macro definitions into mutually
      exclusive headers, do more proper compiler detection, and keep shared
      definitions in compiler_types.h.
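
      A rough sketch of the resulting detection order in
      include/linux/compiler_types.h (illustrative, not the verbatim hunk):

        /*
         * Sketch: pick exactly one compiler header, checking the more
         * specific compilers first, since Clang and ICC also define
         * __GNUC__.
         */
        #ifdef __clang__
        #include <linux/compiler-clang.h>
        #elif defined(__INTEL_COMPILER)
        #include <linux/compiler-intel.h>
        #elif defined(__GNUC__)
        /* must come last; the compilers above also define __GNUC__ */
        #include <linux/compiler-gcc.h>
        #else
        #error "Unknown compiler"
        #endif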
      
      Fixes: cafa0010 ("Raise the minimum required gcc version to 4.6")
      Reported-by...
      815f0ddb
  14. 22 Aug, 2018 1 commit
  15. 03 Aug, 2018 1 commit
    • ARM: Convert to GENERIC_IRQ_MULTI_HANDLER · 4c301f9b
      Palmer Dabbelt authored
      
      
      Converts the ARM interrupt code to use the recently added
      GENERIC_IRQ_MULTI_HANDLER, which is essentially just a copy of ARM's
      existing MULTI_IRQ_HANDLER.  The only changes are (see the sketch after
      this list):
      
      * handle_arch_irq is now defined in a generic C file instead of an
        arm-specific assembly file.
       
      * handle_arch_irq is now marked as __ro_after_init.
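
      A minimal sketch of the generic side, assuming the symbol names used by
      GENERIC_IRQ_MULTI_HANDLER (kernel/irq/handle.c):

        /* One global hook that the low-level entry code dispatches through. */
        void (*handle_arch_irq)(struct pt_regs *) __ro_after_init;

        int __init set_handle_irq(void (*handle_irq)(struct pt_regs *))
        {
                if (handle_arch_irq)
                        return -EBUSY;  /* only one root handler may register */

                handle_arch_irq = handle_irq;
                return 0;
        }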
      Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux@armlinux.org.uk
      Cc: catalin.marinas@arm.com
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: jonas@southpole.se
      Cc: stefan.kristiansson@saunalahti.fi
      Cc: shorne@gmail.com
      Cc: jason@lakedaemon.net
      Cc: marc.zyngier@arm.com
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: nicolas.pitre@linaro.org
      Cc: vladimir.murzin@arm.com
      Cc: keescook@chromium.org
      Cc: jinb.park7@gmail.com
      Cc: yamada.masahiro@socionext.com
      Cc: alexandre.belloni@bootlin.com
      Cc: pombredanne@nexb.com
      Cc: Greg KH <gregkh@linuxfoundation.org>
      Cc: kstewart@linuxfoundation.org
      Cc: jhogan@kernel.org
      Cc: mark.rutland@arm.com
      Cc: ard.biesheuvel@linaro.org
      Cc: james.morse@arm.com
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: openrisc@lists.librecores.org
      Link: https://lkml.kernel.org/r/20180622170126.6308-3-palmer@sifive.com
      4c301f9b
  16. 02 Aug, 2018 2 commits
    • ARM: oabi-compat: copy semops using __copy_from_user() · 8c8484a1
      Russell King authored
      
      
      __get_user_error() is used as a fast accessor to make copying structure
      members as efficient as possible.  However, with software PAN and the
      recent Spectre variant 1, the efficiency is reduced as these are no
      longer fast accessors.
      
      In the case of software PAN, it has to switch the domain register around
      each access, and with Spectre variant 1, it would have to repeat the
      access_ok() check for each access.
      
      Rather than using __get_user_error() to copy each semops element member,
      copy each semops element in full using __copy_from_user().
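
      A minimal sketch of the copy loop, assuming the oabi_sembuf layout from
      arch/arm/kernel/sys_oabi-compat.c (illustrative, not the verbatim diff):

        struct oabi_sembuf {
                unsigned short  sem_num;
                short           sem_op;
                short           sem_flg;
                unsigned short  __pad;
        };

        /* one bounds-checked bulk copy per element, instead of three
         * __get_user_error() accesses per element */
        for (i = 0; i < nsops; i++) {
                struct oabi_sembuf osb;

                err |= __copy_from_user(&osb, tsops, sizeof(osb));
                sops[i].sem_num = osb.sem_num;
                sops[i].sem_op  = osb.sem_op;
                sops[i].sem_flg = osb.sem_flg;
                tsops++;
        }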
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      8c8484a1
    • ARM: vfp: use __copy_from_user() when restoring VFP state · 42019fc5
      Russell King authored
      
      
      __get_user_error() is used as a fast accessor to make copying structure
      members in the signal handling path as efficient as possible.  However,
      with software PAN and the recent Spectre variant 1, the efficiency is
      reduced as these are no longer fast accessors.
      
      In the case of software PAN, it has to switch the domain register around
      each access, and with Spectre variant 1, it would have to repeat the
      access_ok() check for each access.
      
      Use __copy_from_user() rather than __get_user_err() for individual
      members when restoring VFP state.
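
      A minimal sketch of the shape of the change, assuming the vfp_sigframe
      layout used by the ARM signal code (illustrative names, not the
      verbatim diff):

        static int restore_vfp_context(struct vfp_sigframe __user *auxp)
        {
                struct vfp_sigframe frame;

                /* one access_ok()-covered bulk copy instead of many
                 * __get_user_err() accesses on individual members */
                if (__copy_from_user(&frame, auxp, sizeof(frame)))
                        return -EFAULT;

                return vfp_restore_user_hwstate(&frame.ufp, &frame.ufp_exc);
        }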
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
      42019fc5