  1. 26 Jul, 2013 3 commits
  2. 22 Jul, 2013 5 commits
    • ARM: 7788/1: elf: fix lpae hwcap feature reporting in proc/cpuinfo · ab8d46c0
      Tetsuyuki Kobayashi authored
      Commit a469abd0 ("ARM: elf: add new hwcap for identifying atomic
      ldrd/strd instructions") added a new hwcap to identify LPAE on CPUs
      which support it. Whilst the hwcap data is correct, the string reported
      in /proc/cpuinfo actually matches on HWCAP_VFPD32, which was missing
      an entry in the string table.
      
      This patch fixes this problem by adding a "vfpd32" string at the correct
      offset, preventing us from falsely advertising LPAE on CPUs which do not
      support it.
      
      [will: added commit message]
      Acked-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Tetsuyuki Kobayashi <koba@kmckk.co.jp>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      ab8d46c0
    • ARM: 7786/1: hyp: fix macro parameterisation · b60d5db6
      Mark Rutland authored
      Currently, compare_cpu_mode_with_primary uses a mixture of macro
      arguments and hardcoded registers, and does so incorrectly, as it
      stores (__boot_cpu_mode_offset | BOOT_CPU_MODE_MISMATCH) to
      (__boot_cpu_mode + &__boot_cpu_mode_offset), which could corrupt an
      arbitrary portion of memory.
      
      This patch fixes up compare_cpu_mode_with_primary to use the macro
      arguments, correctly updating __boot_cpu_mode.
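      For readers unfamiliar with the boot-mode bookkeeping, the hedged C sketch
      below shows the behaviour the assembly macro is meant to implement.  The
      mode values are the ARM HYP/SVC encodings; the flag value is made up:

      #include <stdio.h>

      #define BOOT_CPU_MODE_MISMATCH	0x80U	/* illustrative flag, not the kernel's value */

      static unsigned int __boot_cpu_mode = 0x1a;	/* primary CPU entered in HYP mode */

      /* Intended behaviour: if this CPU's boot mode differs from the primary's,
       * record the mismatch in __boot_cpu_mode itself.  The broken macro mixed
       * up its register arguments and wrote the flag to an unrelated address. */
      static void compare_cpu_mode_with_primary(unsigned int this_cpu_mode)
      {
              if (this_cpu_mode != __boot_cpu_mode)
                      __boot_cpu_mode |= BOOT_CPU_MODE_MISMATCH;
      }

      int main(void)
      {
              compare_cpu_mode_with_primary(0x13);	/* a secondary entered in SVC mode */
              printf("__boot_cpu_mode = %#x\n", __boot_cpu_mode);
              return 0;
      }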
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Dave Martin <Dave.Martin@arm.com>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <cdall@cs.columbia.edu>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b60d5db6
    • ARM: 7785/1: mm: restrict early_alloc to section-aligned memory · c65b7e98
      Russell King authored
      When map_lowmem() runs, and processes a memory bank whose start or end
      is not section-aligned, memory must be allocated to store the 2nd-level
      page tables. Those allocations are made by calling memblock_alloc().
      
      At this point, the only memory that is free *and* mapped is memory which
      has already been mapped by map_lowmem() itself. For this reason, we must
      calculate the first point at which map_lowmem() will need to allocate
      memory, and set the memblock allocation limit to a lower address, so that
      memblock_alloc() is guaranteed to return memory that is already mapped.
      
      This patch enhances sanity_check_meminfo() to calculate that memory
      address, and pass it to memblock_set_current_limit(), rather than just
      assuming the limit is arm_lowmem_limit.
      
      The algorithm applied is:
      
      * Default memblock_limit to arm_lowmem_limit in the absence of any other
        limit; arm_lowmem_limit is the highest memory that is mapped by
        map_lowmem().
      
      * While walking the list of memblocks, if the start of a block is not
        aligned, 2nd-level page tables will need to be allocated to map the
        first few pages of the block. Hence, the memblock_limit must be before
        the start of the block.
      
      * Similarly, if the end of any block is not aligned, 2nd-level page
        tables will need to be allocated to map the last few pages of the
        block. Hence, the memblock_limit must point at the end of the block,
        rounded down to section-alignment.
      
      * The memory blocks are assumed to be sorted in address order, so the
        first unaligned block start or end is used to set the limit.
      
      With this algorithm, the start or end of almost any bank can be non-
      section-aligned. The only exception is that the start of bank 0 must
      be section-aligned, since otherwise memory would need to be allocated
      when mapping the start of bank 0, which occurs before any free memory
      is mapped.
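      The calculation can be illustrated with a small standalone program.  The bank
      layout, the SECTION_SIZE value, and the variable names below are made up for
      the example and simplified relative to sanity_check_meminfo():

      #include <stdio.h>
      #include <stdint.h>

      #define SECTION_SIZE	(1UL << 20)	/* 1 MiB sections, typical for 2-level ARM */

      typedef uint64_t phys_addr_t;

      struct bank { phys_addr_t start, size; };

      int main(void)
      {
              /* Banks assumed sorted by address; the second one ends mid-section. */
              struct bank banks[] = {
                      { 0x80000000, 0x10000000 },
                      { 0x90000000, 0x0ff80000 },
              };
              phys_addr_t arm_lowmem_limit = 0xa0000000;
              phys_addr_t memblock_limit = 0;

              for (unsigned int i = 0; i < sizeof(banks) / sizeof(banks[0]); i++) {
                      phys_addr_t start = banks[i].start;
                      phys_addr_t end = banks[i].start + banks[i].size;

                      if (start & (SECTION_SIZE - 1)) {
                              /* Unaligned start: 2nd-level tables are needed before this
                               * bank is mapped, so allocations must stay below it. */
                              memblock_limit = start;
                              break;
                      }
                      if (end & (SECTION_SIZE - 1)) {
                              /* Unaligned end: allow allocations only up to the last
                               * fully mapped section of this bank. */
                              memblock_limit = end & ~(SECTION_SIZE - 1);
                              break;
                      }
              }

              if (!memblock_limit)
                      memblock_limit = arm_lowmem_limit;	/* everything was aligned */

              printf("memblock_limit = %#llx\n", (unsigned long long)memblock_limit);
              return 0;
      }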
      
      [swarren, wrote commit description, rewrote calculation of memblock_limit]
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      c65b7e98
    • ARM: 7784/1: mm: ensure SMP alternates assemble to exactly 4 bytes with Thumb-2 · bf3f0f33
      Will Deacon authored
      Commit ae8a8b95 ("ARM: 7691/1: mm: kill unused TLB_CAN_READ_FROM_L1_CACHE
      and use ALT_SMP instead") added early function returns for page table
      cache flushing operations on ARMv7 SMP CPUs.
      
      Unfortunately, when targeting Thumb-2, these `mov pc, lr' sequences
      assemble to 2 bytes which can lead to corruption of the instruction
      stream after code patching.
      
      This patch fixes the alternates to use wide (32-bit) instructions for
      Thumb-2, therefore ensuring that the patching code works correctly.
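      As a rough host-C illustration of the failure mode (byte values hand-picked
      for the example, not taken from the kernel's patching code): the alternatives
      patcher writes a full 32-bit word over each SMP slot, so a 2-byte narrow
      encoding leaves half of the following instruction inside the patched word.

      #include <stdio.h>
      #include <string.h>
      #include <stdint.h>

      int main(void)
      {
              uint8_t text[6] = {
                      0xf7, 0x46,			/* narrow 2-byte encoding of "mov pc, lr" */
                      0x4f, 0xf0, 0x00, 0x00,		/* the following 32-bit instruction */
              };
              uint32_t up_alternative = 0xbf00bf00;	/* stand-in UP replacement word */

              /* The patcher assumes the slot is exactly 4 bytes wide... */
              memcpy(text, &up_alternative, sizeof(up_alternative));

              /* ...so bytes 2-3, which belong to the next instruction, get clobbered. */
              for (size_t i = 0; i < sizeof(text); i++)
                      printf("%02x ", text[i]);
              printf("\n");
              return 0;
      }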
      
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      bf3f0f33
    • ARM: document DEBUG_UNCOMPRESS Kconfig option · b6992fa9
      Russell King authored
      This non-user-visible option lacked any kind of documentation.  This
      is quite common for non-user-visible options; certain people can't
      understand the point of documenting such options with help text.

      However, here we have a case in point: developers don't understand the
      option either, as they were thinking that when the option is not set,
      the decompressor should produce no output whatsoever.  This is
      incorrect, as the purpose of this option is to control whether a
      multiplatform kernel uses the kernel debugging macros to produce
      output or not.

      So let's document this via help rather than commentary to prevent others
      falling into this misunderstanding.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      b6992fa9
  3. 19 Jul, 2013 18 commits
  4. 18 Jul, 2013 6 commits
  5. 17 Jul, 2013 1 commit
  6. 16 Jul, 2013 4 commits
  7. 15 Jul, 2013 1 commit
  8. 14 Jul, 2013 2 commits
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Paul Gortmaker authored
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
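      The change itself is mechanical.  The sketch below (hypothetical function
      names, with __cpuinit stubbed out so the snippet compiles on its own) shows
      the before/after shape of the edits made to the affected C files:

      #include <stdio.h>

      #define __cpuinit	/* in the kernel this placed the function in .cpuinit.text */

      /* Before the series: the CPU bring-up helper is tagged for a discardable section. */
      static int __cpuinit example_cpu_prepare_old(int cpu)
      {
              return cpu;
      }

      /* After the series: identical body, only the annotation is removed. */
      static int example_cpu_prepare_new(int cpu)
      {
              return cpu;
      }

      int main(void)
      {
              printf("%d %d\n", example_cpu_prepare_old(0), example_cpu_prepare_new(1));
              return 0;
      }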
      
      
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      148f9bb8
    • score: delete __cpuinit usage from all score files · 70e2a7bf
      Paul Gortmaker authored
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  The fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch-independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/score uses of the __cpuinit macros from
      all C files.  Currently score does not have any __CPUINIT used in
      assembly files.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      
      
      Cc: Chen Liqin <liqin.chen@sunplusct.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      70e2a7bf