Commit 901b0445 authored by Alex Shi's avatar Alex Shi Committed by Ingo Molnar

x86/numa: Improve internode cache alignment

Currently cache alignment among nodes in the kernel is still 128
bytes on x86 NUMA machines - we inherited that X86_INTERNODE_CACHE_SHIFT
default from old P4 processors.

But most modern x86 CPUs now use the same cache-line size, 64 bytes, from
the L1 up to the last-level L3, so let's remove the stale setting and
directly use the L1 cache size for SMP cache-line alignment.

This patch saves some memory space on kernel data, and it also
improves the cache locality of kernel data.

The System.map is quite different with/without this change:

	before patch			after patch
  000000000000b000 d tlb_vector_|  000000000000b000 d tlb_vector
  000000000000b080 d cpu_loops_p|  000000000000b040 d cpu_loops_
Signed-off-by: Alex Shi <>
Signed-off-by: Ingo Molnar <>
parent e24b90b2
@@ -303,7 +303,6 @@ config X86_GENERIC
 	int
 	default "12" if X86_VSMP
-	default "7" if NUMA
 	default X86_L1_CACHE_SHIFT
 
 config X86_CMPXCHG