    x86/numa: Improve internode cache alignment · 901b0445
    Alex Shi authored
    
    
    Currently, cache alignment among nodes in the kernel is still 128
    bytes on x86 NUMA machines - we inherited that X86_INTERNODE_CACHE_SHIFT
    default from old P4 processors.
    
    But most modern x86 CPUs now use the same line size, 64 bytes, from
    the L1 all the way to the last-level L3 cache. So remove the
    now-incorrect setting and directly use the L1 cache size for SMP
    cache line alignment.
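
    A minimal sketch of the alignment plumbing involved, paraphrased
    from arch/x86/include/asm/cache.h and include/linux/cache.h (exact
    definitions vary by kernel version; struct node_counter is a
    made-up consumer, not from the patch):

    	/* Paraphrased from arch/x86/include/asm/cache.h. */
    	#define L1_CACHE_SHIFT        (CONFIG_X86_L1_CACHE_SHIFT)  /* 6 -> 64 bytes */
    	#define L1_CACHE_BYTES        (1 << L1_CACHE_SHIFT)

    	/*
    	 * After this patch, CONFIG_X86_INTERNODE_CACHE_SHIFT on NUMA
    	 * builds falls back to X86_L1_CACHE_SHIFT instead of the old
    	 * P4-era 7 (128 bytes).
    	 */
    	#define INTERNODE_CACHE_SHIFT (CONFIG_X86_INTERNODE_CACHE_SHIFT)
    	#define INTERNODE_CACHE_BYTES (1 << INTERNODE_CACHE_SHIFT)

    	/* From include/linux/cache.h (CONFIG_SMP case). */
    	#define ____cacheline_internodealigned_in_smp \
    		__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))

    	/* Made-up consumer: each instance now pads to 64 bytes, not 128. */
    	struct node_counter {
    		unsigned long events;
    	} ____cacheline_internodealigned_in_smp;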
    
    This patch saves memory in kernel data, since each internode-aligned
    object now carries up to 64 bytes less padding, and it also improves
    the cache locality of kernel data.
    
    The System.map is quite different with/without this change:
    
    	before patch			after patch
      ...
      000000000000b000 d tlb_vector_|  000000000000b000 d tlb_vector
      000000000000b080 d cpu_loops_p|  000000000000b040 d cpu_loops_
      ...
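
    A toy userspace illustration (not kernel code) of where deltas such
    as 0xb080 -> 0xb040 above come from - the same one-word object under
    the old 128-byte and the new 64-byte alignment:

    	#include <stdio.h>

    	/* Old NUMA default: 1 << 7 = 128-byte internode alignment. */
    	struct counter_old { unsigned long v; } __attribute__((aligned(128)));

    	/* New default: the L1 line size, 1 << 6 = 64 bytes. */
    	struct counter_new { unsigned long v; } __attribute__((aligned(64)));

    	int main(void)
    	{
    		/* Prints 128 and 64: each such object sheds 64 bytes of padding. */
    		printf("old: %zu bytes, new: %zu bytes\n",
    		       sizeof(struct counter_old), sizeof(struct counter_new));
    		return 0;
    	}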
    
    Signed-off-by: Alex Shi <alex.shi@intel.com>
    Cc: asit.k.mallick@intel.com
    Link: http://lkml.kernel.org/r/1330774047-18597-1-git-send-email-alex.shi@intel.com
    
    
    Signed-off-by: Ingo Molnar <mingo@elte.hu>