1. 31 Mar, 2015 3 commits
  2. 16 Mar, 2015 1 commit
  3. 13 Mar, 2015 1 commit
  4. 28 Feb, 2015 1 commit
  5. 27 Feb, 2015 1 commit
  6. 20 Feb, 2015 1 commit
  7. 19 Feb, 2015 8 commits
    • x86: pte_protnone() and pmd_protnone() must check entry is not present · e3a1f6ca
      David Vrabel authored
      Since _PAGE_PROTNONE aliases _PAGE_GLOBAL it is only valid if
      _PAGE_PRESENT is clear.  Make pte_protnone() and pmd_protnone() check
      for this.
      
      This fixes a 64-bit Xen PV guest regression introduced by 8a0516ed
      ("mm: convert p[te|md]_numa users to p[te|md]_protnone_numa").  Any
      userspace process would endlessly fault.
      
      In a 64-bit PV guest, userspace page table entries have _PAGE_GLOBAL set
      by the hypervisor.  This meant that any fault on a present userspace
      entry (e.g., a write to a read-only mapping) would be misinterpreted as
      a NUMA hinting fault and the fault would not be correctly handled,
      resulting in the access endlessly faulting.
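      The check described above looks roughly like this (a sketch inferred
      from the commit text, not necessarily the exact hunk):
      
        static inline int pte_protnone(pte_t pte)
        {
                /* _PAGE_PROTNONE only means PROT_NONE if the entry is not present */
                return (pte_flags(pte) & (_PAGE_PROTNONE | _PAGE_PRESENT))
                        == _PAGE_PROTNONE;
        }
      
        static inline int pmd_protnone(pmd_t pmd)
        {
                return (pmd_flags(pmd) & (_PAGE_PROTNONE | _PAGE_PRESENT))
                        == _PAGE_PROTNONE;
        }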
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e3a1f6ca
    • x86/microcode/intel: Handle truncated microcode images more robustly · 35a9ff4e
      Quentin Casasnovas authored
      
      
      We do not check the input data bounds containing the microcode before
      copying a struct microcode_intel_header from it. A specially crafted
      microcode could cause the kernel to read invalid memory and lead to a
      denial-of-service.
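      A minimal sketch of the kind of check this adds (identifier names
      illustrative, not the exact patch):
      
        /* Refuse to parse a blob too small to contain even the header. */
        if (size < sizeof(struct microcode_intel_header))
                return -EINVAL;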
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Link: http://lkml.kernel.org/r/1422964824-22056-3-git-send-email-quentin.casasnovas@oracle.com
      
      
      [ Made error message differ from the next one and flipped comparison. ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      35a9ff4e
    • x86/microcode/intel: Guard against stack overflow in the loader · f84598bd
      Quentin Casasnovas authored
      
      
      mc_saved_tmp is a fixed-size array allocated on the stack; we need to
      make sure mc_saved_count stays within its bounds, otherwise we
      overflow the stack in _save_mc(). A specially crafted microcode header
      could lead to a kernel crash or potentially to kernel code execution.
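      The shape of the guard being added (a sketch; "mc" and the error code
      are illustrative):
      
        /* Never store more entries than the on-stack array can hold. */
        if (mc_saved_count >= ARRAY_SIZE(mc_saved_tmp))
                return -ENOSPC;
        mc_saved_tmp[mc_saved_count++] = mc;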
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Link: http://lkml.kernel.org/r/1422964824-22056-1-git-send-email-quentin.casasnovas@oracle.com
      
      Signed-off-by: Borislav Petkov <bp@suse.de>
      f84598bd
    • x86, mm/ASLR: Fix stack randomization on 64-bit systems · 4e7c22d4
      Hector Marco-Gisbert authored
      
      
      The issue is that the stack for processes is not properly randomized on
      64 bit architectures due to an integer overflow.
      
      The affected function is randomize_stack_top() in file
      "fs/binfmt_elf.c":
      
        static unsigned long randomize_stack_top(unsigned long stack_top)
        {
                 unsigned int random_variable = 0;
      
                 if ((current->flags & PF_RANDOMIZE) &&
                         !(current->personality & ADDR_NO_RANDOMIZE)) {
                         random_variable = get_random_int() & STACK_RND_MASK;
                         random_variable <<= PAGE_SHIFT;
                 }
        #ifdef CONFIG_STACK_GROWSUP
                 return PAGE_ALIGN(stack_top) + random_variable;
        #else
                 return PAGE_ALIGN(stack_top) - random_variable;
        #endif
        }
      
      Note that it declares the "random_variable" variable as "unsigned int".
      The result of shifting STACK_RND_MASK (which is 0x3fffff on x86_64,
      i.e. 22 bits) left by PAGE_SHIFT (which is 12 on x86_64):
      
      	  random_variable <<= PAGE_SHIFT;
      
      needs 34 bits, so the two leftmost bits are dropped when the result is
      stored in the 32-bit "random_variable".  The variable would need to be
      at least 34 bits wide to hold the (22+12)-bit result.
      
      These two dropped bits have an impact on the entropy of the process
      stack.  Concretely, the total stack entropy is reduced by a factor of
      four: from 2^30 to 2^28 (one fourth of the expected entropy).
      
      This patch restores the entropy by correcting the types involved
      in the operations in the functions randomize_stack_top() and
      stack_maxrandom_size().
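      With the types corrected, the function looks roughly like this (a
      sketch based on the description above, not the verbatim patch):
      
        static unsigned long randomize_stack_top(unsigned long stack_top)
        {
                 unsigned long random_variable = 0;
      
                 if ((current->flags & PF_RANDOMIZE) &&
                         !(current->personality & ADDR_NO_RANDOMIZE)) {
                         random_variable = (unsigned long) get_random_int();
                         random_variable &= STACK_RND_MASK;
                         random_variable <<= PAGE_SHIFT;
                 }
        #ifdef CONFIG_STACK_GROWSUP
                 return PAGE_ALIGN(stack_top) + random_variable;
        #else
                 return PAGE_ALIGN(stack_top) - random_variable;
        #endif
        }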
      
      The successful fix can be tested with:
      
        $ for i in `seq 1 10`; do cat /proc/self/maps | grep stack; done
        7ffeda566000-7ffeda587000 rw-p 00000000 00:00 0                          [stack]
        7fff5a332000-7fff5a353000 rw-p 00000000 00:00 0                          [stack]
        7ffcdb7a1000-7ffcdb7c2000 rw-p 00000000 00:00 0                          [stack]
        7ffd5e2c4000-7ffd5e2e5000 rw-p 00000000 00:00 0                          [stack]
        ...
      
      Once corrected, the leading bytes should be between 7ffc and 7fff,
      rather than always being 7fff.
      Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
      Signed-off-by: Ismael Ripoll <iripoll@upv.es>
      [ Rebased, fixed 80 char bugs, cleaned up commit message, added test example and CVE ]
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Cc: <stable@vger.kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Fixes: CVE-2015-1593
      Link: http://lkml.kernel.org/r/20150214173350.GA18393@www.outflux.net
      
      Signed-off-by: Borislav Petkov <bp@suse.de>
      4e7c22d4
    • x86/mm/init: Fix incorrect page size in init_memory_mapping() printks · f15e0518
      Dave Hansen authored
      
      
      With 32-bit non-PAE kernels, we have 2 page sizes available
      (at most): 4k and 4M.
      
      Enabling PAE replaces that 4M size with a 2M one (which 64-bit
      systems use too).
      
      But, when booting a 32-bit non-PAE kernel, in one of our
      early-boot printouts, we say:
      
        init_memory_mapping: [mem 0x00000000-0x000fffff]
         [mem 0x00000000-0x000fffff] page 4k
        init_memory_mapping: [mem 0x37000000-0x373fffff]
         [mem 0x37000000-0x373fffff] page 2M
        init_memory_mapping: [mem 0x00100000-0x36ffffff]
         [mem 0x00100000-0x003fffff] page 4k
         [mem 0x00400000-0x36ffffff] page 2M
        init_memory_mapping: [mem 0x37400000-0x377fdfff]
         [mem 0x37400000-0x377fdfff] page 4k
      
      Which is obviously wrong.  There is no 2M page available.  This
      is probably because of a badly-named variable in the map_range
      code: PG_LEVEL_2M.
      
      Instead of renaming all the PG_LEVEL_2M's, this patch just
      fixes the printout:
      
        init_memory_mapping: [mem 0x00000000-0x000fffff]
         [mem 0x00000000-0x000fffff] page 4k
        init_memory_mapping: [mem 0x37000000-0x373fffff]
         [mem 0x37000000-0x373fffff] page 4M
        init_memory_mapping: [mem 0x00100000-0x36ffffff]
         [mem 0x00100000-0x003fffff] page 4k
         [mem 0x00400000-0x36ffffff] page 4M
        init_memory_mapping: [mem 0x37400000-0x377fdfff]
         [mem 0x37400000-0x377fdfff] page 4k
        BRK [0x03206000, 0x03206fff] PGTABLE
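      The fix boils down to deriving the printed string from the kernel
      configuration instead of from the (misnamed) PG_LEVEL_2M constant;
      roughly (a sketch, not the verbatim patch):
      
        static const char *page_size_string(struct map_range *mr)
        {
                static const char str_1g[] = "1G";
                static const char str_2m[] = "2M";
                static const char str_4m[] = "4M";
                static const char str_4k[] = "4k";
      
                if (mr->page_size_mask & (1 << PG_LEVEL_1G))
                        return str_1g;
                /*
                 * 32-bit without PAE has a 4M large page size;
                 * PG_LEVEL_2M is misnamed here, but leave it alone.
                 */
                if (IS_ENABLED(CONFIG_X86_32) && !IS_ENABLED(CONFIG_X86_PAE) &&
                    (mr->page_size_mask & (1 << PG_LEVEL_2M)))
                        return str_4m;
      
                if (mr->page_size_mask & (1 << PG_LEVEL_2M))
                        return str_2m;
      
                return str_4k;
        }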
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Link: http://lkml.kernel.org/r/20150210212030.665EC267@viggo.jf.intel.com
      
      Signed-off-by: Borislav Petkov <bp@suse.de>
      f15e0518
    • x86/mm/ASLR: Propagate base load address calculation · f47233c2
      Jiri Kosina authored
      Commit:
      
        e2b32e67 ("x86, kaslr: randomize module base load address")
      
      makes the module base address unconditionally randomized when
      CONFIG_RANDOMIZE_BASE is defined and the "nokaslr" option isn't
      present on the command line.
      
      This is not consistent with how choose_kernel_location() decides whether
      it will randomize kernel load base.
      
      Namely, CONFIG_HIBERNATION disables kASLR (unless the "kaslr" option
      is explicitly specified on the kernel command line), which makes the
      state space larger than what the module loader is looking at. IOW,
      CONFIG_HIBERNATION && CONFIG_RANDOMIZE_BASE is a valid configuration;
      kASLR wouldn't be applied by default in that case, but the module
      loader is not aware of that.
      
      Instead of fixing the logic in module.c, this patch takes a more
      generic approach. It introduces a new bootparam setup_data type,
      SETUP_KASLR, uses it to pass down whether KASLR has been applied
      during kernel decompression, and sets a global 'kaslr_enabled'
      variable accordingly, so that any kernel code (module loading,
      livepatching, ...) can make decisions based on its value.
      
      The x86 module loader is converted to make use of this flag.
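      The consumer side then becomes straightforward; for instance, the
      module loader can key module randomization off the propagated flag
      instead of off the config option alone (a sketch of the idea, not
      the exact patch):
      
        static unsigned long module_load_offset;
      
        static unsigned long get_module_load_offset(void)
        {
                /* kaslr_enabled is derived from the SETUP_KASLR setup_data */
                if (!kaslr_enabled)
                        return 0;       /* no kASLR, no module ASLR either */
      
                /* Pick one random, page-aligned offset per boot. */
                if (module_load_offset == 0)
                        module_load_offset =
                                (get_random_int() % 1024 + 1) * PAGE_SIZE;
      
                return module_load_offset;
        }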
      Signed-off-by: Jiri Kosina <jkosina@suse.cz>
      Acked-by: Kees Cook <keescook@chromium.org>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Link: https://lkml.kernel.org/r/alpine.LNX.2.00.1502101411280.10719@pobox.suse.cz
      
      
      [ Always dump correct kaslr status when panicking ]
      Signed-off-by: Borislav Petkov <bp@suse.de>
      f47233c2
    • x86/intel/quark: Fix simple_return.cocci warnings · c11a25f4
      Fengguang Wu authored
      
      
      arch/x86/platform/intel-quark/imr.c:129:1-4: WARNING: end returns can be simpified
      
       Simplify a trivial if-return sequence.  Possibly combine with a preceding function call.
      
      Generated by: scripts/coccinelle/misc/simple_return.cocci
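      The transformation this warning asks for has the following shape
      (illustrative example, not the actual hunk in imr.c):
      
        /* before */
        ret = do_thing(dev);
        if (ret)
                return ret;
        return 0;
      
        /* after */
        return do_thing(dev);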
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Andy Shevchenko <andy.schevchenko@gmail.com>
      Cc: Ong, Boon Leong <boon.leong.ong@intel.com>
      Cc: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Darren Hart <dvhart@linux.intel.com>
      Cc: kbuild-all@01.org
      Link: http://lkml.kernel.org/r/20150219081432.GA21996@waimea
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c11a25f4
    • x86/intel/quark: Fix ptr_ret.cocci warnings · 32d39169
      Fengguang Wu authored
      
      
      arch/x86/platform/intel-quark/imr.c:280:1-3: WARNING: PTR_ERR_OR_ZERO can be used
      
       Use PTR_ERR_OR_ZERO rather than if(IS_ERR(...)) + PTR_ERR
      
      Generated by: scripts/coccinelle/api/ptr_ret.cocci
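      In other words (illustrative example, not the actual hunk):
      
        /* before */
        if (IS_ERR(ptr))
                return PTR_ERR(ptr);
        return 0;
      
        /* after */
        return PTR_ERR_OR_ZERO(ptr);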
      Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Andy Shevchenko <andy.schevchenko@gmail.com>
      Cc: Ong, Boon Leong <boon.leong.ong@intel.com>
      Cc: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Cc: Darren Hart <dvhart@linux.intel.com>
      Cc: kbuild-all@01.org
      Link: http://lkml.kernel.org/r/20150219081432.GA21983@waimea
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      32d39169
  8. 18 Feb, 2015 10 commits
    • x86/intel/quark: Add Intel Quark platform support · 8bbc2a13
      Bryan O'Donoghue authored
      
      
      Add Intel Quark platform support. Quark needs to pull down all
      unlocked IMRs to ensure agreement with the EFI memory map post
      boot.
      
      This patch adds an entry in Kconfig for Quark as a platform and
      makes IMR support mandatory if selected.
      Suggested-by: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
      Tested-by: Ong, Boon Leong <boon.leong.ong@intel.com>
      Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Reviewed-by: Andy Shevchenko <andy.schevchenko@gmail.com>
      Reviewed-by: Darren Hart <dvhart@linux.intel.com>
      Reviewed-by: Ong, Boon Leong <boon.leong.ong@intel.com>
      Cc: dvhart@infradead.org
      Link: http://lkml.kernel.org/r/1422635379-12476-3-git-send-email-pure.logic@nexus-software.ie
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8bbc2a13
    • x86/intel/quark: Add Isolated Memory Regions for Quark X1000 · 28a375df
      Bryan O'Donoghue authored
      
      
      Intel's Quark X1000 SoC contains a set of registers called
      Isolated Memory Regions. IMRs are accessed over the IOSF mailbox
      interface. IMRs are areas carved out of memory that define
      read/write access rights to the various system agents within the
      Quark system. For a given agent in the system it is possible to
      specify if that agent may read or write an area of memory
      defined by an IMR with a granularity of 1 KiB.
      
      Quark_SecureBootPRM_330234_001.pdf section 4.5 details the
      concept of IMRs; quark-x1000-datasheet.pdf section 12.7.4 details
      the implementation of IMRs in silicon.
      
      eSRAM flush, CPU Snoop write-only, CPU SMM Mode, CPU non-SMM
      mode, RMU and PCIe Virtual Channels (VC0 and VC1) can have
      individual read/write access masks applied to them for a given
      memory region in Quark X1000. This enables IMRs to treat each
      memory transaction type listed above on an individual basis and
      to filter appropriately based on the IMR access mask for the
      memory region. Quark supports eight IMRs.
      
      Since all of the DMA capable SoC components in the X1000 are
      mapped to VC0 it is possible to define sections of memory as
      invalid for DMA write operations originating from Ethernet, USB,
      SD and any other DMA capable south-cluster component on VC0.
      Similarly it is possible to mark kernel memory as non-SMM mode
      read/write only or to mark BIOS runtime memory as SMM mode
      accessible only depending on the particular memory footprint on
      a given system.
      
      On an IMR violation Quark SoC X1000 systems are configured to
      reset the system, so ensuring that the IMR memory map is
      consistent with the EFI provided memory map is critical to
      ensure no IMR violations reset the system.
      
      The API for accessing IMRs is based on MTRR code but doesn't
      provide a /proc or /sys interface to manipulate IMRs. Defining
      the size and extent of IMRs is exclusively the domain of
      in-kernel code.
      
      Quark firmware sets up a series of locked IMRs around pieces of
      memory that firmware owns such as ACPI runtime data. During boot
      a series of unlocked IMRs are placed around items in memory to
      guarantee no DMA modification of those items can take place.
      Grub also places an unlocked IMR around the kernel boot params
      data structure and compressed kernel image. It is necessary for
      the kernel to tear down all unlocked IMRs in order to ensure
      that the kernel's view of memory passed via the EFI memory map
      is consistent with the IMR memory map. Without tearing down all
      unlocked IMRs on boot, transitory IMRs such as those used to
      protect the compressed kernel image will cause IMR violations
      and system reboots.
      
      The IMR init code tears down all unlocked IMRs and sets a
      protective IMR around the kernel .text and .rodata as one
      contiguous block. This sanitizes the IMR memory map with respect
      to the EFI memory map and protects the read-only portions of the
      kernel from unwarranted DMA access.
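      The in-kernel API has roughly the following shape (prototypes
      paraphrased from the description above; treat the exact signatures
      as approximate):
      
        /* Apply an IMR with the given read/write agent masks, optionally locked. */
        int imr_add_range(phys_addr_t base, size_t size,
                          unsigned int rmask, unsigned int wmask, bool lock);
      
        /* Tear down an unlocked IMR previously covering [base, base + size). */
        int imr_remove_range(phys_addr_t base, size_t size);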
      Tested-by: Ong, Boon Leong <boon.leong.ong@intel.com>
      Signed-off-by: Bryan O'Donoghue <pure.logic@nexus-software.ie>
      Reviewed-by: Andy Shevchenko <andy.schevchenko@gmail.com>
      Reviewed-by: Darren Hart <dvhart@linux.intel.com>
      Reviewed-by: Ong, Boon Leong <boon.leong.ong@intel.com>
      Cc: andy.shevchenko@gmail.com
      Cc: dvhart@infradead.org
      Link: http://lkml.kernel.org/r/1422635379-12476-2-git-send-email-pure.logic@nexus-software.ie
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      28a375df
    • x86/apic: Fix the devicetree build in certain configs · b273c2c2
      Ricardo Ribalda Delgado authored
      
      
      Without this patch:
      
        LD      init/built-in.o
        arch/x86/built-in.o: In function `dtb_lapic_setup': kernel/devicetree.c:155:
        undefined reference to `apic_force_enable'
        Makefile:923: recipe for target 'vmlinux' failed
        make: *** [vmlinux] Error 1
      Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
      Reviewed-by: Maciej W. Rozycki <macro@linux-mips.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Link: http://lkml.kernel.org/r/1422905231-16067-1-git-send-email-ricardo.ribalda@gmail.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b273c2c2
    • kprobes/x86: Mark 2 bytes NOP as boostable · b7e37567
      Wang Nan authored
      
      
      Currently, x86 kprobes is unable to boost 2 bytes nop like:
      
        nopl 0x0(%rax,%rax,1)
      
      which is 0x0f 0x1f 0x44 0x00 0x00.
      
      Such nops have exactly 5 bytes to hold a relative jmp
      instruction. Boosting them should be obviously safe.
      
      This patch enables boosting such nops by simply updating the
      twobyte_is_boostable[] array.
      Signed-off-by: Wang Nan <wangnan0@huawei.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: <lizefan@huawei.com>
      Link: http://lkml.kernel.org/r/1423532045-41049-1-git-send-email-wangnan0@huawei.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b7e37567
    • uprobes/x86: Fix 2-byte opcode table · 5154d4f2
      Denys Vlasenko authored
      
      
      Enabled probing of lar, lsl, popcnt, lddqu, prefetch insns.
      They should be safe to probe, they throw no exceptions.
      
      Enabled probing of 3-byte opcodes 0f 38-3f xx - these are
      vector insns, so they should be safe.
      
      Enabled probing of many currently undefined 0f xx insns.
      At the rate new vector instructions are getting added,
      we don't want to constantly enable more bits.
      We want to only occasionally *disable* ones which
      for some reason can't be probed.
      This includes 0f 24,26 opcodes, which are undefined
      since Pentium. On 486, they were "mov to/from test register".
      
      Explained more fully what 0f 78,79 opcodes are.
      
      Explained what 0f ae opcode is. (It's unclear why we don't allow
      probing it, but let's not change it for now).
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1423768732-32194-3-git-send-email-dvlasenk@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5154d4f2
    • uprobes/x86: Fix 1-byte opcode tables · 67fc8092
      Denys Vlasenko authored
      
      
      This change fixes 1-byte opcode tables so that only insns
      for which we have real reasons to disallow probing are marked
      with unset bits.
      
      To that end:
      
      Set bits for all prefix bytes. Their setting is ignored anyway -
      we check the bitmap against OPCODE1(insn), not against the first
      byte. Keeping them set to 0 only confuses the reader with the
      question "why don't we support that opcode?".
      
      Thus: enable bytes c4,c5 in 64-bit mode (VEX prefixes).
      Byte 62 (EVEX prefix) is not yet enabled since insn decoder
      does not support that yet.
      
      For 32-bit mode, enable probing of opcodes 63 (arpl) and d6
      (salc). They don't require any special handling.
      
      For 64-bit mode, disable 9a and ea - these undefined opcodes
      were mistakenly left enabled.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1423768732-32194-2-git-send-email-dvlasenk@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      67fc8092
    • uprobes/x86: Add comment with insn opcodes, mnemonics and why we dont support them · 097f4e5e
      Denys Vlasenko authored
      
      
      After adding these, it's clear we have some awkward choices
      there. Some valid instructions are prohibited from uprobing
      while several invalid ones are allowed.
      
      Hopefully future edits to the good-opcode tables will fix wrong
      bits or explain why those bits are not wrong.
      
      No actual code changes.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Link: http://lkml.kernel.org/r/1423768732-32194-1-git-send-email-dvlasenk@redhat.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      097f4e5e
    • x86/irq: Check for valid irq descriptor in check_irq_vectors_for_cpu_disable() · d97eb896
      Joerg Roedel authored
      
      
      When an interrupt is migrated away from a cpu it will stay
      in its vector_irq array until smp_irq_move_cleanup_interrupt
      has succeeded. The cfg->move_in_progress flag is already
      cleared when the IPI is sent.
      
      When the interrupt is destroyed after migration its 'struct
      irq_desc' is freed and the vector_irq arrays are cleaned up.
      But since cfg->move_in_progress is already 0, the references on
      the cpus the interrupt was assigned to before the last migration
      will not be cleared. This leaves a reference to an already
      destroyed irq alive.
      
      When the cpu is taken down at this point, the
      check_irq_vectors_for_cpu_disable() function finds a valid irq
      number in the vector_irq array, but gets NULL for its
      descriptor and dereferences it, causing a kernel panic.
      
      This has been observed on real systems at shutdown. Add a
      check to check_irq_vectors_for_cpu_disable() for a valid
      'struct irq_desc' to prevent this issue.
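      The added guard is essentially a NULL check on the descriptor before
      it is dereferenced; roughly (a sketch of the logic described above):
      
        irq = __this_cpu_read(vector_irq[vector]);
        if (irq < 0)
                continue;
      
        desc = irq_to_desc(irq);
        if (!desc)      /* stale vector_irq entry, irq already destroyed */
                continue;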
      Signed-off-by: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jan Beulich <JBeulich@suse.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Cc: alnovak@suse.com
      Cc: joro@8bytes.org
      Link: http://lkml.kernel.org/r/20150204132754.GA10078@suse.de
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d97eb896
    • x86/irq: Fix regression caused by commit b568b860 · 1ea76fba
      Jiang Liu authored
      Commit b568b860 ("Treat SCI interrupt as normal GSI interrupt")
      accidentally removes support for the legacy PIC interrupt when
      fixing a regression for Xen, which causes a nasty regression on
      the HP/Compaq nc6000 where we fail to register the ACPI
      interrupt, and thus lose e.g. thermal notifications, leading to
      a potentially overheated machine.
      
      So reintroduce support of legacy PIC based ACPI SCI interrupt.
      Reported-by: Ville Syrjälä <syrjala@sci.fi>
      Tested-by: Ville Syrjälä <syrjala@sci.fi>
      Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Pavel Machek <pavel@ucw.cz>
      Cc: <stable@vger.kernel.org> # 3.19+
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: Sander Eikelenboom <linux@eikelenboom.it>
      Cc: linux-pm@vger.kernel.org
      Link: http://lkml.kernel.org/r/1424052673-22974-1-git-send-email-jiang.liu@linux.intel.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1ea76fba
    • x86/spinlocks/paravirt: Fix memory corruption on unlock · d6abfdb2
      Raghavendra K T authored
      
      
      The paravirt spinlock code clears the slowpath flag after doing the
      unlock.  As explained by Linus, currently it does:
      
                      prev = *lock;
                      add_smp(&lock->tickets.head, TICKET_LOCK_INC);
      
                      /* add_smp() is a full mb() */
      
                      if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                              __ticket_unlock_slowpath(lock, prev);
      
      which is *exactly* the kind of things you cannot do with spinlocks,
      because after you've done the "add_smp()" and released the spinlock
      for the fast-path, you can't access the spinlock any more.  Exactly
      because a fast-path lock might come in, and release the whole data
      structure.
      
      Linus suggested that we should not do any writes to lock after unlock(),
      and we can move slowpath clearing to fastpath lock.
      
      So this patch implements the fix with:
      
       1. Moving slowpath flag to head (Oleg):
          Unlocked locks don't care about the slowpath flag; therefore we can keep
          it set after the last unlock, and clear it again on the first (try)lock.
          -- this removes the write after unlock.  Note that keeping the slowpath
          flag set would result in unnecessary kicks.
          By moving the slowpath flag from the tail to the head ticket we also avoid
          the need to access both the head and tail tickets on unlock.
      
       2. use xadd to avoid read/write after unlock that checks the need for
          unlock_kick (Linus):
          We further avoid the need for a read-after-release by using xadd;
          the prev head value will include the slowpath flag and indicate if we
          need to do PV kicking of suspended spinners -- on modern chips xadd
          isn't (much) more expensive than an add + load.
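      Put together, the unlock fast path ends up looking roughly like this
      (a sketch of the approach above, not the verbatim patch):
      
        static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
        {
                if (TICKET_SLOWPATH_FLAG &&
                    static_key_false(&paravirt_ticketlocks_enabled)) {
                        __ticket_t head;
      
                        /* xadd() releases the lock and returns the old head */
                        head = xadd(&lock->tickets.head, TICKET_LOCK_INC);
      
                        /* the slowpath flag now lives in the head ticket */
                        if (unlikely(head & TICKET_SLOWPATH_FLAG)) {
                                head &= ~TICKET_SLOWPATH_FLAG;
                                __ticket_unlock_kick(lock, head + TICKET_LOCK_INC);
                        }
                } else
                        __add(&lock->tickets.head, TICKET_LOCK_INC,
                              UNLOCK_LOCK_PREFIX);
        }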
      
      Result:
       setup: 16core (32 cpu +ht sandy bridge 8GB 16vcpu guest)
       benchmark overcommit %improve
       kernbench  1x           -0.13
       kernbench  2x            0.02
       dbench     1x           -1.77
       dbench     2x           -0.63
      
      [Jeremy: Hinted missing TICKET_LOCK_INC for kick]
      [Oleg: Moved slowpath flag to head, ticket_equals idea]
      [PeterZ: Added detailed changelog]
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Andrew Jones <drjones@redhat.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Fernando Luis Vázquez Cao <fernando_b1@lab.ntt.co.jp>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ulrich Obergfell <uobergfe@redhat.com>
      Cc: Waiman Long <Waiman.Long@hp.com>
      Cc: a.ryabinin@samsung.com
      Cc: dave@stgolabs.net
      Cc: hpa@zytor.com
      Cc: jasowang@redhat.com
      Cc: jeremy@goop.org
      Cc: paul.gortmaker@windriver.com
      Cc: riel@redhat.com
      Cc: tglx@linutronix.de
      Cc: waiman.long@hp.com
      Cc: xen-devel@lists.xenproject.org
      Link: http://lkml.kernel.org/r/20150215173043.GA7471@linux.vnet.ibm.com
      
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d6abfdb2
  9. 14 Feb, 2015 6 commits
    • kasan: enable instrumentation of global variables · bebf56a1
      Andrey Ryabinin authored
      
      
      This feature lets us detect out-of-bounds accesses to global variables.
      It works both for globals in the kernel image and for globals in
      modules.  Currently it won't work for symbols in user-specified
      sections (e.g.  __init, __read_mostly, ...)
      
      The idea is simple.  The compiler grows each global variable by the
      redzone size and adds constructors invoking the
      __asan_register_globals() function.  Information about each global
      variable (address, size, size with redzone, ...) is passed to
      __asan_register_globals() so we can poison the variable's redzone.
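      The metadata handed to __asan_register_globals() looks roughly like
      this (field names follow the userspace ASan ABI; treat the details
      as approximate):
      
        struct kasan_global {
                const void *beg;                /* address of the variable  */
                size_t size;                    /* size of the variable     */
                size_t size_with_redzone;       /* size including redzone   */
                const void *name;
                const void *module_name;        /* file it is declared in   */
                unsigned long has_dynamic_init; /* dynamic initializer?     */
        };
      
        void __asan_register_globals(struct kasan_global *globals, size_t size);
        void __asan_unregister_globals(struct kasan_global *globals, size_t size);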
      
      This patch also forces module_alloc() to return an 8*PAGE_SIZE aligned
      address, making shadow memory handling
      (kasan_module_alloc()/kasan_module_free()) simpler.  Such alignment
      guarantees that each shadow page backing the modules' address space
      corresponds to only one module_alloc() allocation.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bebf56a1
    • mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() · cb9e3c29
      Andrey Ryabinin authored
      
      
      For instrumenting global variables KASan needs shadow memory backing
      the memory used for modules.  So on module load we will need to
      allocate memory for the shadow and map it at the address in the shadow
      region that corresponds to the address allocated by module_alloc().
      
      __vmalloc_node_range() could be used for this purpose, except it puts
      a guard hole after the allocated area.  A guard hole in shadow memory
      would be a problem because at some future point we might need shadow
      memory at the address occupied by the guard hole.  So we could fail to
      allocate shadow for module_alloc().
      
      Now we have the VM_NO_GUARD flag disabling the guard page, so we need
      a way to pass it into __vmalloc_node_range().  Add a new parameter
      'vm_flags' to the __vmalloc_node_range() function.
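      After the change the prototype gains the extra flags argument, roughly
      (a sketch of the resulting signature; argument order approximate):
      
        void *__vmalloc_node_range(unsigned long size, unsigned long align,
                                   unsigned long start, unsigned long end,
                                   gfp_t gfp_mask, pgprot_t prot,
                                   unsigned long vm_flags, int node,
                                   const void *caller);
      
      which lets KASan pass VM_NO_GUARD when allocating module shadow memory.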
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cb9e3c29
    • kasan: enable stack instrumentation · c420f167
      Andrey Ryabinin authored
      
      
      Stack instrumentation allows detection of out-of-bounds memory accesses
      for variables allocated on the stack.  The compiler adds redzones
      around every variable on the stack and poisons the redzones in the
      function's prologue.
      
      Such an approach significantly increases stack usage, so all in-kernel
      stack sizes were doubled.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c420f167
    • x86_64: kasan: add interceptors for memset/memmove/memcpy functions · 393f203f
      Andrey Ryabinin authored
      
      
      Recently, instrumentation of builtin function calls was removed from
      GCC 5.0.  To check the memory accessed by such functions, userspace
      asan always uses interceptors for them.
      
      So now we should do this as well.  This patch declares
      memset/memmove/memcpy as weak symbols.  In mm/kasan/kasan.c we have our
      own implementation of those functions which checks memory before
      accessing it.
      
      The default memset/memmove/memcpy now always have aliases with a '__'
      prefix.  For files that are built without kasan instrumentation (e.g.
      mm/slub.c) the original mem* functions are replaced (via #define) with
      the prefixed variants, because we don't want to check memory accesses
      there.
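      A sketch of the interceptor scheme described above (simplified; the
      real implementation lives in mm/kasan/kasan.c):
      
        #undef memcpy
        void *memcpy(void *dest, const void *src, size_t len)
        {
                /* validate both buffers against the shadow before copying */
                check_memory_region((unsigned long)src, len, false);
                check_memory_region((unsigned long)dest, len, true);
      
                return __memcpy(dest, src, len);
        }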
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      393f203f
    • x86_64: add KASan support · ef7f0d6a
      Andrey Ryabinin authored
      
      
      This patch adds arch-specific code for the kernel address sanitizer.
      
      16TB of virtual address space is used for shadow memory.  It's located
      in the range [ffffec0000000000 - fffffc0000000000] between vmemmap and
      the %esp fixup stacks.
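      KASan maps each 8 bytes of kernel address space onto 1 byte of shadow,
      so the translation that defines this window is simply (sketch; the
      offset constant comes from the kernel configuration):
      
        static inline void *kasan_mem_to_shadow(const void *addr)
        {
                /* one shadow byte tracks an 8-byte granule of real memory */
                return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET;
        }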
      
      At an early stage we map the whole shadow region with the zero page.
      Later, after pages are mapped into the direct mapping address range,
      we unmap zero pages from the corresponding shadow (see
      kasan_map_shadow()) and allocate and map real shadow memory, reusing
      the vmemmap_populate() function.
      
      Also replace __pa with __pa_nodebug before the shadow is initialized.
      With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
      (__phys_addr); __phys_addr is instrumented, so __asan_load could be
      called before the shadow area is initialized.
      Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Yuri Gribov <tetra2005@gmail.com>
      Cc: Konstantin Khlebnikov <koct9i@gmail.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jim Davis <jim.epost@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef7f0d6a
    • x86: use %*pb[l] to print bitmaps including cpumasks and nodemasks · bf58b487
      Tejun Heo authored
      
      
      printk and friends can now format bitmaps using '%*pb[l]'.  cpumask
      and nodemask also provide cpumask_pr_args() and nodemask_pr_args()
      respectively which can be used to generate the two printf arguments
      necessary to format the specified cpu/nodemask.
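      For example (illustrative call, assuming a 'struct cpumask *mask' in
      scope):
      
        printk(KERN_INFO "cpus: %*pbl\n", cpumask_pr_args(mask));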
      
      * Unnecessary buffer size calculation and condition on the length
        removed from intel_cacheinfo.c::show_shared_cpu_map_func().
      
      * uv_nmi_nr_cpus_pr() got overly smart and implemented "..."
        abbreviation if the output stretched over the predefined 1024 byte
        buffer.  Replaced with plain printk.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Mike Travis <travis@sgi.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bf58b487
  10. 13 Feb, 2015 8 commits
    • Revert "x86/apic: Only disable CPU x2apic mode when necessary" · 8329aa9f
      Linus Torvalds authored
      This reverts commit 5fcee53c.
      
      It causes the suspend to fail on at least the Chromebook Pixel, possibly
      other platforms too.
      
      Joerg Roedel points out that the logic should probably have been
      
                      if (max_physical_apicid > 255 ||
                          !(IS_ENABLED(CONFIG_HYPERVISOR_GUEST) &&
                            hypervisor_x2apic_available())) {
      
      instead, but since the code is not in any fast path, we can just live
      without that optimization and simply revert to the original code.
      Acked-by: Joerg Roedel <joro@8bytes.org>
      Acked-by: Jiang Liu <jiang.liu@linux.intel.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8329aa9f
    • x86/efi: Avoid triple faults during EFI mixed mode calls · 96738c69
      Matt Fleming authored
      
      
      Andy pointed out that if an NMI or MCE is received while we're in the
      middle of an EFI mixed mode call a triple fault will occur. This can
      happen, for example, when issuing an EFI mixed mode call while running
      perf.
      
      The reason for the triple fault is that we execute the mixed mode call
      in 32-bit mode with paging disabled but with 64-bit kernel IDT handlers
      installed throughout the call.
      
      At Andy's suggestion, stop playing the games we currently do at runtime,
      such as disabling paging and installing a 32-bit GDT for __KERNEL_CS. We
      can simply switch to the __KERNEL32_CS descriptor before invoking
      firmware services, and run in compatibility mode. This way, if an
      NMI/MCE does occur the kernel IDT handler will execute correctly, since
      it'll jump to __KERNEL_CS automatically.
      
      However, this change is only possible post-ExitBootServices(). Before
      then the firmware "owns" the machine and expects for its 32-bit IDT
      handlers to be left intact to service interrupts, etc.
      
      So, we now need to distinguish between early boot and runtime
      invocations of EFI services. During early boot, we need to restore the
      GDT that the firmware expects to be present. We can only jump to the
      __KERNEL32_CS code segment for mixed mode calls after ExitBootServices()
      has been invoked.
      
      A liberal sprinkling of comments in the thunking code should make the
      differences in early and late environments more apparent.
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Tested-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Matt Fleming <matt.fleming@intel.com>
      96738c69
    • lguest: don't look in console features to find emerg_wr. · 55c2d788
      Rusty Russell authored
      
      
      The 1.0 spec clearly states that you must set the ACKNOWLEDGE and
      DRIVER status bits before accessing the feature bits.  This is a
      problem for the early console code, which doesn't really want to
      acknowledge the device (the spec specifically excepts writing to the
      console's emerg_wr from the usual ordering constraints).
      
      Instead, we check that the *size* of the device configuration is
      sufficient to hold emerg_wr: at worst (if the device doesn't support
      the VIRTIO_CONSOLE_F_EMERG_WRITE feature), it will ignore the
      writes.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      55c2d788
    • kernel.h: remove ancient __FUNCTION__ hack · 02f1f217
      Rasmus Villemoes authored
      
      
      __FUNCTION__ hasn't been treated as a string literal since gcc 3.4, so
      this only helps people who only test-compile using 3.3 (compiler-gcc3.h
      barks at anything older than that).  Besides, there are almost no
      occurrences of __FUNCTION__ left in the tree.
      
      [akpm@linux-foundation.org: convert remaining __FUNCTION__ references]
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Michal Nazarewicz <mina86@mina86.com>
      Cc: Joe Perches <joe@perches.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      02f1f217
    • all arches, signal: move restart_block to struct task_struct · f56141e3
      Andy Lutomirski authored
      
      
      If an attacker can cause a controlled kernel stack overflow, overwriting
      the restart block is a very juicy exploit target.  This is because the
      restart_block is held in the same memory allocation as the kernel stack.
      
      Moving the restart block to struct task_struct prevents this exploit by
      making the restart_block harder to locate.
      
      Note that there are other fields in thread_info that are also easy
      targets, at least on some architectures.
      
      It's also a decent simplification, since the restart code is more or less
      identical on all architectures.
      
      [james.hogan@imgtec.com: metag: align thread_info::supervisor_stack]
      Signed-off-by: Andy Lutomirski <luto@amacapital.net>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: David Miller <davem@davemloft.net>
      Acked-by: Richard Weinberger <richard@nod.at>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Tested-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f56141e3
    • x86: mm: restore original pte_special check · c819f37e
      Mel Gorman authored
      Commit b38af472 ("x86,mm: fix pte_special versus pte_numa") adjusted
      the pte_special check to take into account that a special pte had
      SPECIAL and neither PRESENT nor PROTNONE.  Now that NUMA hinting PTEs
      are no longer modifying _PAGE_PRESENT it should be safe to restore the
      original pte_special behaviour.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c819f37e
    • mm: remove remaining references to NUMA hinting bits and helpers · 21d9ee3e
      Mel Gorman authored
      
      
      This patch removes the NUMA PTE bits and associated helpers.  As a
      side-effect it increases the maximum possible swap space on x86-64.
      
      One potential source of problems is races between the marking of PTEs
      PROT_NONE, NUMA hinting faults and migration.  It must be guaranteed
      that a PTE being protected is not faulted in parallel, seen as a
      pte_none and corrupting memory.  The base case is safe, but transhuge
      has had problems in the past due to a different migration mechanism and
      a dependence on the page lock to serialise migrations, and warrants a
      closer look.
      
      task_work hinting update			parallel fault
      ------------------------			--------------
      change_pmd_range
        change_huge_pmd
          __pmd_trans_huge_lock
            pmdp_get_and_clear
      						__handle_mm_fault
      						pmd_none
      						  do_huge_pmd_anonymous_page
      						  read? pmd_lock blocks until hinting complete, fail !pmd_none test
      						  write? __do_huge_pmd_anonymous_page acquires pmd_lock, checks pmd_none
            pmd_modify
            set_pmd_at
      
      task_work hinting update			parallel migration
      ------------------------			------------------
      change_pmd_range
        change_huge_pmd
          __pmd_trans_huge_lock
            pmdp_get_and_clear
      						__handle_mm_fault
      						  do_huge_pmd_numa_page
      						    migrate_misplaced_transhuge_page
      						    pmd_lock waits for updates to complete, recheck pmd_same
            pmd_modify
            set_pmd_at
      
      Both of those are safe and the case where a transhuge page is inserted
      during a protection update is unchanged.  The case where two processes try
      migrating at the same time is unchanged by this series so should still be
      ok.  I could not find a case where we are accidentally depending on the
      PTE not being cleared and flushed.  If one is missed, it'll manifest as
      corruption problems that start triggering shortly after this series is
      merged and only happen when NUMA balancing is enabled.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      21d9ee3e
    • mm: convert p[te|md]_numa users to p[te|md]_protnone_numa · 8a0516ed
      Mel Gorman authored
      
      
      Convert existing users of pte_numa and friends to the new helper.  Note
      that the kernel is broken after this patch is applied until the other page
      table modifiers are also altered.  This patch layout is to make review
      easier.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Acked-by: Aneesh Kumar <aneesh.kumar@linux.vnet.ibm.com>
      Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Tested-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Kirill Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8a0516ed