1. 09 Oct, 2009 1 commit
  2. 17 Jun, 2009 1 commit
    • [IA64] Convert ia64 to use int-ll64.h · e088a4ad
      Matthew Wilcox authored

      It is generally agreed that it would be beneficial for u64 to be an
      unsigned long long on all architectures.  ia64 (in common with several
      other 64-bit architectures) currently uses unsigned long.  Migrating
      piecemeal is too painful; this giant patch fixes all compilation warnings
      and errors that come as a result of switching to use int-ll64.h.
      
      Note that userspace will still see __u64 defined as unsigned long.  This
      is important as it affects C++ name mangling.
      
      [Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
       u64 for start/end rather than unsigned long]
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
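The type change above can be illustrated in plain user-space C. This is only a sketch of the two typedef conventions, not the actual kernel headers; the names `u64_l64_style` and `u64_width` are made up for illustration:

```c
#include <stdint.h>
#include <stddef.h>

/* int-ll64.h convention: u64/s64 are (unsigned) long long on every
 * architecture, so a single printk format ("%llu"/"%lld") works on
 * both 32-bit and 64-bit builds. */
typedef unsigned long long u64;
typedef signed long long s64;

/* The old int-l64.h convention that ia64 used: unsigned long.  Even
 * where both types have the same size, they are distinct C++ types,
 * which is why __u64 stays "unsigned long" for userspace: changing
 * it would alter C++ name mangling. */
typedef unsigned long u64_l64_style;

/* Sanity helper: the ll64 typedef is 64 bits regardless of whether
 * the target's long is 32 or 64 bits. */
static size_t u64_width(void)
{
    return sizeof(u64);
}
```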
  3. 16 Mar, 2009 1 commit
  4. 17 Oct, 2008 1 commit
  5. 29 Apr, 2008 1 commit
  6. 04 Apr, 2008 2 commits
  7. 03 Apr, 2008 1 commit
  8. 19 Dec, 2007 1 commit
    • [IA64] Avoid unnecessary TLB flushes when allocating memory · aec103bf
      Christophe de Dinechin authored

      Improve performance of memory allocations on ia64 by avoiding a global
      TLB purge when evicting a single page from the file cache. Such a purge
      currently happens whenever we evict a page from the buffer cache to make
      room for some other allocation.
      
      Test case: Run 'find /usr -type f | xargs cat > /dev/null' in the
      background to fill the buffer cache, then run something that uses memory,
      e.g. 'gmake -j50 install'. Instrumentation showed that the number of
      global TLB purges went from a few million down to about 170 over a
      12-hour run of the above.
      
      The performance impact is particularly noticeable under virtualization,
      because a virtual TLB is generally both larger and slower to purge than
      a physical one.
      Signed-off-by: Christophe de Dinechin <ddd@hp.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  9. 08 Dec, 2007 1 commit
  10. 11 Jul, 2007 1 commit
  11. 08 May, 2007 1 commit
  12. 30 Jun, 2006 1 commit
  13. 27 Mar, 2006 1 commit
    • [IA64] optimize flush_tlb_range on large numa box · ce9eed5a
      Chen, Kenneth W authored

      It was reported by a field customer that the global spin lock ptcg_lock
      is causing a lot of grief for munmap performance on a large numa
      machine.  The problem appears to come from flush_tlb_range(), which
      currently calls platform_global_tlb_purge() unconditionally.  On some
      of the numa machines in existence today, this function maps to
      ia64_global_tlb_purge(), which holds the ptcg_lock spin lock while
      executing the ptc.ga instruction.
      
      Here is a patch that attempts to avoid the global tlb purge whenever
      possible, using a local tlb purge instead, though the conditions for
      using the local purge are pretty restrictive.  One side effect of
      having a flush-tlb-range instruction on ia64 is that the kernel never
      gets a chance to clear out cpu_vm_mask.  On ia64 this mask is sticky,
      and it accumulates as a process bounces around between cpus, thus
      diminishing the opportunities to use ptc.l.  Thoughts?
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Jack Steiner <steiner@sgi.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
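The condition this patch exploits can be modeled in user-space C: a global ptc.ga purge is only needed when the mm may be live on other cpus, while a mask containing only the current cpu allows the cheap local ptc.l. This is an illustrative bitmask sketch, not the kernel code; the names `can_use_local_purge` and `note_cpu` are made up, and only `cpu_vm_mask` and `ptcg_lock` come from the commit message:

```c
#include <stdbool.h>

/* Sketch: cpu_vm_mask as a bitmask of cpus that have ever run this
 * mm.  As the commit notes, on ia64 the mask is sticky: bits
 * accumulate as the process bounces between cpus and are never
 * cleared, which shrinks the cases where the local purge applies. */
typedef unsigned long cpumask_t;

/* A local ptc.l purge is safe only when the current cpu is the sole
 * cpu in the mm's mask; otherwise fall back to the global ptc.ga
 * path guarded by ptcg_lock. */
static bool can_use_local_purge(cpumask_t cpu_vm_mask, int this_cpu)
{
    return cpu_vm_mask == (1UL << this_cpu);
}

/* Sticky accumulation: scheduling the mm on a cpu sets its bit, but
 * nothing ever clears it. */
static cpumask_t note_cpu(cpumask_t cpu_vm_mask, int cpu)
{
    return cpu_vm_mask | (1UL << cpu);
}
```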
  14. 13 Jan, 2006 1 commit
    • [IA64] Hole in IA64 TLB flushing from system threads · cfbb1426
      Jack Steiner authored

      I originally thought this was a bug only in the SN code, but I think I
      also see a hole in the generic IA64 tlb code.  (A separate patch was
      sent for the SN problem.)
      
      It looks like there is a bug in the TLB flushing code.  During a context
      switch, kernel threads (kswapd, for example) inherit the mm of the task
      that was previously running on the cpu.  Normally this is ok, because the
      previous context is still loaded into the RR registers.  However, if the
      owner of the mm migrates to another cpu, changes its context number, and
      references a page before kswapd issues a tlb_purge for that same page,
      the purge will be done with a stale context number (and RR registers).
      Signed-off-by: Tony Luck <tony.luck@intel.com>
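The race described above can be sketched abstractly in user-space C: a purge issued with a context number snapshotted before the mm's owner migrated is stale. The struct and function names below are made up for illustration and model only the context-number part of the bug, not the RR registers themselves:

```c
/* Sketch of the hole: a kernel thread inherits the previous task's
 * mm; if the real owner then migrates and is assigned a fresh
 * context number, any purge the kernel thread issues with the old
 * snapshot targets the wrong context. */
struct mm_sketch {
    unsigned long context;   /* context number backing the RR registers */
};

/* Owner migrates to another cpu and gets a new context number. */
static void reassign_context(struct mm_sketch *mm, unsigned long new_ctx)
{
    mm->context = new_ctx;
}

/* A purge is only valid if issued with the mm's current context;
 * the bug is purging with a snapshot taken before the migration. */
static int purge_is_stale(unsigned long snapshot_ctx,
                          const struct mm_sketch *mm)
{
    return snapshot_ctx != mm->context;
}
```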
  15. 03 Nov, 2005 1 commit
  16. 31 Oct, 2005 1 commit
  17. 30 Oct, 2005 1 commit
    • [PATCH] mm: flush_tlb_range outside ptlock · 663b97f7
      Hugh Dickins authored

      There was one small but very significant change in the previous patch:
      mprotect's flush_tlb_range fell outside the page_table_lock.  That
      matches 2.4, but it doesn't prove the change safe in 2.6.
      
      On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which
      has always been called from outside page_table_lock in dup_mmap, and is so
      proved safe.  Others required a deeper audit: I could find no reliance on
      page_table_lock in any; but in ia64 and parisc found some code which looks a
      bit as if it might want preemption disabled.  That won't do any actual harm,
      so pending a decision from the maintainers, disable preemption there.
      
      Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
      flush_tlb_page entries in cachetlb.txt: they were rather misleading (what
      generic code does is different from what usually happens), the rules are now
      changing, and it's not yet clear where we'll end up (will the generic
      tlb_flush_mmu happen always under lock?  never under lock?  or sometimes under
      and sometimes not?).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  18. 27 Oct, 2005 1 commit
    • [IA64] - Avoid slow TLB purges on SGI Altix systems · c1902aae
      Dean Roe authored

      flush_tlb_all() can be a scaling issue on large SGI Altix systems
      since it uses the global call_lock and always executes on all cpus.
      When a process enters flush_tlb_range() to purge TLBs for another
      process, it is possible to avoid flush_tlb_all() and instead allow
      sn2_global_tlb_purge() to purge TLBs only where necessary.
      
      This patch modifies flush_tlb_range() so that this case can be handled
      by platform TLB purge functions and updates ia64_global_tlb_purge()
      accordingly.  sn2_global_tlb_purge() now calculates the region register
      value from the mm argument introduced with this patch.
      Signed-off-by: Dean Roe <roe@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  19. 25 Oct, 2005 1 commit
  20. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!