Commit 5ead97c8 authored by Jeremy Fitzhardinge, committed by Jeremy Fitzhardinge

xen: Core Xen implementation

This patch is a rollup of all the core pieces of the Xen
implementation, including:
 - booting and setup
 - pagetable setup
 - privileged instructions
 - segmentation
 - interrupt flags
 - upcalls
 - multicall batching


The vmlinux image is decorated with ELF notes which tell the Xen
domain builder what the kernel's requirements are; the domain builder
then constructs the address space accordingly and starts the kernel.

Xen has its own entrypoint for the kernel (contained in an ELF note).
The ELF notes are set up by xen-head.S, which is included into head.S.
In principle it could be linked separately, but it seems to provoke
lots of binutils bugs.

Because the domain builder starts the kernel in a fairly sane state
(32-bit protected mode, paging enabled, flat segments set up), there's
not a lot of setup needed before starting the kernel proper.  The main
steps are:
  1. Install the Xen paravirt_ops, which is simply a matter of a
     structure assignment.
  2. Set init_mm to use the Xen-supplied pagetables (analogous to the
     head.S generated pagetables in a native boot).
  3. Reserve address space for Xen, since it takes a chunk at the top
     of the address space for its own use.
  4. Call start_kernel()
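
For illustration, the C half of that entrypoint amounts to something
like the sketch below.  This is not the verbatim function: the real
xen_start_kernel() in enlighten.c does considerably more bookkeeping,
and the reserve_top_address() argument shown here is an assumption.

asmlinkage void __init xen_start_kernel(void)
{
        if (!xen_start_info)
                return;

        /* 1: install the Xen paravirt_ops by structure assignment */
        paravirt_ops = xen_paravirt_ops;

        /* 2: adopt the pagetables the domain builder constructed */
        init_mm.pgd = (pgd_t *)xen_start_info->pt_base;

        /* 3: keep the top of the virtual address space free for Xen
           (illustrative reservation based on HYPERVISOR_VIRT_START) */
        reserve_top_address(-HYPERVISOR_VIRT_START + 2 * PAGE_SIZE);

        /* 4: enter the normal boot sequence */
        start_kernel();
}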


Once we hit the main kernel boot sequence, it will end up calling back
via paravirt_ops to set up various pieces of Xen specific state.  One
of the critical things which requires a bit of extra care is the
construction of the initial init_mm pagetable.  Because Xen places
tight constraints on pagetables (an active pagetable must always be
valid, and must always be mapped read-only to the guest domain), we
need to be careful when constructing the new pagetable to keep these
constraints in mind.  It turns out that the easiest way to do this is
to use the initial Xen-provided pagetable as a template, and then just
insert new mappings for memory where a mapping doesn't already exist.

This means that during pagetable setup, the kernel uses a special
version of xen_set_pte which ignores any attempt to remap a read-only
page as read-write (since Xen will map its own initial pagetable as
RO), but lets other changes to the ptes happen, so that things like NX
are set correctly.
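
A minimal sketch of that boot-time pte setter, assuming the usual pte
helpers (the name xen_set_pte_init is illustrative):

static void xen_set_pte_init(pte_t *ptep, pte_t pte)
{
        /* If Xen already maps this page read-only (it is part of the
           initial pagetable), silently keep it read-only rather than
           letting it be remapped read-write. */
        if (pte_present(*ptep) && !pte_write(*ptep))
                pte = pte_wrprotect(pte);

        /* Everything else -- new mappings, attribute bits like NX --
           goes through unchanged. */
        *ptep = pte;
}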


When the kernel runs under Xen, it runs in ring 1 rather than ring 0.
This means that it is more privileged than user-mode in ring 3, but it
still can't run privileged instructions directly.  Non-performance-critical
instructions are dealt with by taking a privilege exception and trapping
into the hypervisor, which emulates the instruction; more
performance-critical instructions have their own specific paravirt_ops.
In many cases we can avoid having to do any hypercalls
for these instructions, or the Xen implementation is quite different
from the normal native version.

The privileged instructions fall into the broad classes of:
  Segmentation: setting up the GDT and the GDT entries, LDT,
     TLS and so on.  Xen doesn't allow the GDT to be directly
     modified; all GDT updates are done via hypercalls where the new
     entries can be validated.  This is important because Xen uses
     segment limits to prevent the guest kernel from damaging the
     hypervisor itself.
  Traps and exceptions: Xen uses a special format for trap entrypoints,
     so when the kernel wants to set an IDT entry, it needs to be
     converted to the form Xen expects.  Xen sets int 0x80 up specially
     so that the trap goes straight from userspace into the guest kernel
     without going via the hypervisor.  sysenter isn't supported.
  Kernel stack: The esp0 entry is extracted from the tss and provided
     to Xen via a stack-switch hypercall.
  TLB operations: the various TLB calls are mapped into corresponding
     Xen hypercalls.
  Control registers: all the control registers are privileged.  The most
     important is cr3, which points to the base of the current pagetable,
     and we handle it specially.

Another instruction we treat specially is CPUID, even though it's not
privileged.  We want to control what CPU features are visible to the
rest of the kernel, and so CPUID ends up going into a paravirt_op.
Xen implements this mainly to disable the ACPI and APIC subsystems.


Xen maintains its own separate flag for masking events, which is
contained within the per-cpu vcpu_info structure.  Because the guest
kernel runs in ring 1 and not 0, the IF flag in EFLAGS is completely
ignored (and must be, because even if a guest domain disables
interrupts for itself, it can't disable them overall).

(A note on terminology: "events" and interrupts are effectively
synonymous.  However, rather than using an "enable flag", Xen uses a
"mask flag", which blocks event delivery when it is non-zero.)

There are paravirt_ops for each of cli/sti/save_fl/restore_fl, which
are implemented to manage the Xen event mask state.  The only thing
worth noting is that when events are unmasked, we need to explicitly
see if there's a pending event and call into the hypervisor to make
sure it gets delivered.


Xen needs a couple of upcall (or callback) functions to be implemented
by each guest.  One is the event upcall, which is how events
(interrupts, effectively) are delivered to the guest.  The other is
the failsafe callback, which is used to report faults that occur either
while reloading a segment register or while executing iret.  These are
implemented in i386/kernel/entry.S so they can jump into the normal
iret_exc path when necessary.


Xen provides a multicall mechanism, which allows multiple hypercalls
to be issued at once in order to mitigate the cost of trapping into
the hypervisor.  This is particularly useful for context switches,
since the 4-5 hypercalls they would normally need (reload cr3, update
TLS, maybe update LDT) can be reduced to one.  This patch implements a
generic batching mechanism for hypercalls, which gets used in many
places in the Xen code.
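
As a rough sketch of the batching API, assuming illustrative buffer
sizes MC_BATCH and MC_ARGS (the real implementation lives in
multicalls.c):

struct mc_buffer {
        struct multicall_entry entries[MC_BATCH];
        unsigned char args[MC_ARGS];
        unsigned mcidx, argidx;
};
static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);

/* Reserve one batch slot plus 'args' bytes of argument space,
   flushing the pending batch first if there's no room.  The caller
   fills in the multicall_entry; xen_mc_flush() later issues the whole
   batch with a single HYPERVISOR_multicall() trap. */
struct multicall_space xen_mc_entry(size_t args)
{
        struct mc_buffer *b = &__get_cpu_var(mc_buffer);
        struct multicall_space ret;

        if (b->mcidx == MC_BATCH || b->argidx + args > MC_ARGS)
                xen_mc_flush();

        ret.mc = &b->entries[b->mcidx++];
        ret.args = &b->args[b->argidx];
        b->argidx += args;
        return ret;
}

A context switch can then queue its stack switch, TLS updates and cr3
change back-to-back and pay for only one trap when the batch is
flushed.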
Signed-off-by: Jeremy Fitzhardinge <>
Signed-off-by: Chris Wright <>
Cc: Ian Pratt <>
Cc: Christian Limpach <>
Cc: Adrian Bunk <>
parent a42089dd
arch/i386/Makefile
@@ -93,6 +93,9 @@ mflags-$(CONFIG_X86_ES7000) := -Iinclude/asm-i386/mach-es7000
mcore-$(CONFIG_X86_ES7000) := mach-default
core-$(CONFIG_X86_ES7000) := arch/i386/mach-es7000/
# Xen paravirtualization support
core-$(CONFIG_XEN) += arch/i386/xen/
# default subarch .h files
mflags-y += -Iinclude/asm-i386/mach-default
arch/i386/kernel/entry.S
@@ -1023,6 +1023,77 @@ ENTRY(kernel_thread_helper)
#ifdef CONFIG_XEN
ENTRY(xen_hypervisor_callback)
        pushl $0
        SAVE_ALL
        mov %esp, %eax
        call xen_evtchn_do_upcall
        jmp ret_from_intr
# Hypervisor uses this for application faults while it executes.
# We get here for two reasons:
# 1. Fault while reloading DS, ES, FS or GS
# 2. Fault while executing IRET
# Category 1 we fix up by reattempting the load, and zeroing the segment
# register if the load fails.
# Category 2 we fix up by jumping to do_iret_error. We cannot use the
# normal Linux return path in this case because if we use the IRET hypercall
# to pop the stack frame we end up in an infinite loop of failsafe callbacks.
# We distinguish between categories by maintaining a status value in EAX.
ENTRY(xen_failsafe_callback)
        pushl %eax
        movl $1,%eax
1:      mov 4(%esp),%ds
2:      mov 8(%esp),%es
3:      mov 12(%esp),%fs
4:      mov 16(%esp),%gs
        testl %eax,%eax
        popl %eax
        lea 16(%esp),%esp       # discard the saved segment selectors
        jz 5f
        addl $16,%esp
        jmp iret_exc            # EAX != 0 => Category 2 (Bad IRET)
5:      pushl $0                # EAX == 0 => Category 1 (Bad segment)
        SAVE_ALL
        jmp ret_from_exception
.section .fixup,"ax"
6: xorl %eax,%eax
movl %eax,4(%esp)
jmp 1b
7: xorl %eax,%eax
movl %eax,8(%esp)
jmp 2b
8: xorl %eax,%eax
movl %eax,12(%esp)
jmp 3b
9: xorl %eax,%eax
movl %eax,16(%esp)
jmp 4b
.section __ex_table,"a"
.align 4
.long 1b,6b
.long 2b,7b
.long 3b,8b
.long 4b,9b
#endif /* CONFIG_XEN */
.section .rodata,"a"
#include "syscall_table.S"
arch/i386/kernel/head.S
@@ -510,7 +510,8 @@ ENTRY(_stext)
* BSS section
.section ".bss.page_aligned","w"
.section ".bss.page_aligned","wa"
.align PAGE_SIZE_asm
.fill 1024,4,0
@@ -538,6 +539,8 @@ fault_msg:
.ascii "Int %d: CR2 %p err %p EIP %p CS %p flags %p\n"
.asciz "Stack: %p %p %p %p %p %p %p %p\n"
#include "../xen/xen-head.S"
* The IDT and GDT 'descriptors' are a strange 48-bit object
* only used by the lidt and lgdt instructions. They are not
arch/i386/kernel/vmlinux.lds.S
@@ -88,6 +88,7 @@ SECTIONS
. = ALIGN(4096);
.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
arch/i386/xen/Makefile (new file)
obj-y := enlighten.o setup.o features.o multicalls.o
arch/i386/xen/enlighten.c (new file)
/*
 * Core of Xen paravirt_ops implementation.
 *
 * This file contains the xen_paravirt_ops structure itself, and the
 * implementations for:
 * - privileged instructions
 * - interrupt flags
 * - segment operations
 * - booting and setup
 *
 * Jeremy Fitzhardinge <>, XenSource Inc, 2007
 */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/preempt.h>
#include <linux/percpu.h>
#include <linux/delay.h>
#include <linux/start_kernel.h>
#include <linux/sched.h>
#include <linux/bootmem.h>
#include <linux/module.h>
#include <xen/interface/xen.h>
#include <xen/interface/physdev.h>
#include <xen/interface/vcpu.h>
#include <xen/features.h>
#include <xen/page.h>
#include <asm/paravirt.h>
#include <asm/page.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/hypervisor.h>
#include <asm/fixmap.h>
#include <asm/processor.h>
#include <asm/setup.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
#include "xen-ops.h"
#include "multicalls.h"
DEFINE_PER_CPU(enum paravirt_lazy_mode, xen_lazy_mode);
DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu);
DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info);
DEFINE_PER_CPU(unsigned long, xen_cr3);
struct start_info *xen_start_info;
static void xen_vcpu_setup(int cpu)
{
        per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
}

static void __init xen_banner(void)
{
        printk(KERN_INFO "Booting paravirtualized kernel on %s\n",
               paravirt_ops.name);
        printk(KERN_INFO "Hypervisor signature: %s\n", xen_start_info->magic);
}
static void xen_cpuid(unsigned int *eax, unsigned int *ebx,
                      unsigned int *ecx, unsigned int *edx)
{
        unsigned maskedx = ~0;

        /*
         * Mask out inconvenient features, to try and disable as many
         * unsupported kernel subsystems as possible.
         */
        if (*eax == 1)
                maskedx = ~((1 << X86_FEATURE_APIC) |  /* disable APIC */
                            (1 << X86_FEATURE_ACPI) |  /* disable ACPI */
                            (1 << X86_FEATURE_ACC));   /* thermal monitoring */

        asm(XEN_EMULATE_PREFIX "cpuid"
                : "=a" (*eax),
                  "=b" (*ebx),
                  "=c" (*ecx),
                  "=d" (*edx)
                : "0" (*eax), "2" (*ecx));
        *edx &= maskedx;
}
static void xen_set_debugreg(int reg, unsigned long val)
{
        HYPERVISOR_set_debugreg(reg, val);
}

static unsigned long xen_get_debugreg(int reg)
{
        return HYPERVISOR_get_debugreg(reg);
}
static unsigned long xen_save_fl(void)
{
        struct vcpu_info *vcpu;
        unsigned long flags;

        vcpu = x86_read_percpu(xen_vcpu);

        /* flag has opposite sense of mask */
        flags = !vcpu->evtchn_upcall_mask;

        /* convert to IF type flag
           -0 -> 0x00000000
           -1 -> 0xffffffff
        */
        return (-flags) & X86_EFLAGS_IF;
}
static void xen_restore_fl(unsigned long flags)
{
        struct vcpu_info *vcpu;

        /* convert from IF type flag */
        flags = !(flags & X86_EFLAGS_IF);
        vcpu = x86_read_percpu(xen_vcpu);
        vcpu->evtchn_upcall_mask = flags;

        if (flags == 0) {
                /* Unmask then check (avoid races).  We're only protecting
                   against updates by this CPU, so there's no need for
                   anything stronger. */
                barrier();

                if (unlikely(vcpu->evtchn_upcall_pending))
                        force_evtchn_callback();
        } else
                mb();   /* can't interact with irq context */
}
static void xen_irq_disable(void)
{
        struct vcpu_info *vcpu;

        vcpu = x86_read_percpu(xen_vcpu);
        vcpu->evtchn_upcall_mask = 1;
}
static void xen_irq_enable(void)
{
        struct vcpu_info *vcpu;

        vcpu = x86_read_percpu(xen_vcpu);
        vcpu->evtchn_upcall_mask = 0;

        /* Unmask then check (avoid races).  We're only protecting
           against updates by this CPU, so there's no need for
           anything stronger. */
        barrier();

        if (unlikely(vcpu->evtchn_upcall_pending))
                force_evtchn_callback();
}
static void xen_safe_halt(void)
{
        /* Blocking includes an implicit local_irq_enable(). */
        if (HYPERVISOR_sched_op(SCHEDOP_block, 0) != 0)
                BUG();
}
static void xen_halt(void)
{
        if (irqs_disabled())
                HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL);
        else
                xen_safe_halt();
}
static void xen_set_lazy_mode(enum paravirt_lazy_mode mode)
{
        switch (mode) {
        case PARAVIRT_LAZY_NONE:
                BUG_ON(x86_read_percpu(xen_lazy_mode) == PARAVIRT_LAZY_NONE);
                break;

        case PARAVIRT_LAZY_MMU:
        case PARAVIRT_LAZY_CPU:
                BUG_ON(x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE);
                break;

        case PARAVIRT_LAZY_FLUSH:
                /* flush if necessary, but don't change state */
                if (x86_read_percpu(xen_lazy_mode) != PARAVIRT_LAZY_NONE)
                        xen_mc_flush();
                return;
        }

        xen_mc_flush();
        x86_write_percpu(xen_lazy_mode, mode);
}
static unsigned long xen_store_tr(void)
{
        return 0;
}
static void xen_set_ldt(const void *addr, unsigned entries)
{
        unsigned long linear_addr = (unsigned long)addr;
        struct mmuext_op *op;
        struct multicall_space mcs = xen_mc_entry(sizeof(*op));

        op = mcs.args;
        op->cmd = MMUEXT_SET_LDT;
        if (linear_addr) {
                /* ldt may be vmalloced, use arbitrary_virt_to_machine */
                xmaddr_t maddr;
                maddr = arbitrary_virt_to_machine((unsigned long)addr);
                linear_addr = (unsigned long)maddr.maddr;
        }
        op->arg1.linear_addr = linear_addr;
        op->arg2.nr_ents = entries;

        MULTI_mmuext_op(mcs.mc, op, 1, NULL, DOMID_SELF);

        xen_mc_issue(PARAVIRT_LAZY_CPU);
}
static void xen_load_gdt(const struct Xgt_desc_struct *dtr)
{
        unsigned long *frames;
        unsigned long va = dtr->address;
        unsigned int size = dtr->size + 1;
        unsigned pages = (size + PAGE_SIZE - 1) / PAGE_SIZE;
        int f;
        struct multicall_space mcs;

        /* A GDT can be up to 64k in size, which corresponds to 8192
           8-byte entries, or 16 4k pages. */
        BUG_ON(size > 65536);

        mcs = xen_mc_entry(sizeof(*frames) * pages);
        frames = mcs.args;

        for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
                frames[f] = virt_to_mfn(va);
                make_lowmem_page_readonly((void *)va);
        }

        MULTI_set_gdt(mcs.mc, frames, size / sizeof(struct desc_struct));

        xen_mc_issue(PARAVIRT_LAZY_CPU);
}
static void load_TLS_descriptor(struct thread_struct *t,
                                unsigned int cpu, unsigned int i)
{
        struct desc_struct *gdt = get_cpu_gdt_table(cpu);
        xmaddr_t maddr = virt_to_machine(&gdt[GDT_ENTRY_TLS_MIN+i]);
        struct multicall_space mc = __xen_mc_entry(0);

        MULTI_update_descriptor(mc.mc, maddr.maddr, t->tls_array[i]);
}
static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
{
        xen_mc_batch();

        load_TLS_descriptor(t, cpu, 0);
        load_TLS_descriptor(t, cpu, 1);
        load_TLS_descriptor(t, cpu, 2);

        xen_mc_issue(PARAVIRT_LAZY_CPU);
}
static void xen_write_ldt_entry(struct desc_struct *dt, int entrynum,
                                u32 low, u32 high)
{
        unsigned long lp = (unsigned long)&dt[entrynum];
        xmaddr_t mach_lp = virt_to_machine(lp);
        u64 entry = (u64)high << 32 | low;

        if (HYPERVISOR_update_descriptor(mach_lp.maddr, entry))
                BUG();
}
static int cvt_gate_to_trap(int vector, u32 low, u32 high,
                            struct trap_info *info)
{
        u8 type, dpl;

        type = (high >> 8) & 0x1f;
        dpl = (high >> 13) & 3;

        if (type != 0xf && type != 0xe)
                return 0;

        info->vector = vector;
        info->address = (high & 0xffff0000) | (low & 0x0000ffff);
        info->cs = low >> 16;
        info->flags = dpl;
        /* interrupt gates clear IF */
        if (type == 0xe)
                info->flags |= 4;

        return 1;
}
/* Locations of each CPU's IDT */
static DEFINE_PER_CPU(struct Xgt_desc_struct, idt_desc);
/* Set an IDT entry.  If the entry is part of the current IDT, then
   also update Xen. */
static void xen_write_idt_entry(struct desc_struct *dt, int entrynum,
                                u32 low, u32 high)
{
        int cpu = smp_processor_id();
        unsigned long p = (unsigned long)&dt[entrynum];
        unsigned long start = per_cpu(idt_desc, cpu).address;
        unsigned long end = start + per_cpu(idt_desc, cpu).size + 1;

        write_dt_entry(dt, entrynum, low, high);

        if (p >= start && (p + 8) <= end) {
                struct trap_info info[2];

                info[1].address = 0;

                if (cvt_gate_to_trap(entrynum, low, high, &info[0]))
                        if (HYPERVISOR_set_trap_table(info))
                                BUG();
        }
}
/* Load a new IDT into Xen.  In principle this can be per-CPU, so we
   hold a spinlock to protect the static traps[] array (static because
   it avoids allocation, and saves stack space). */
static void xen_load_idt(const struct Xgt_desc_struct *desc)
{
        static DEFINE_SPINLOCK(lock);
        static struct trap_info traps[257];

        int cpu = smp_processor_id();
        unsigned in, out, count;

        per_cpu(idt_desc, cpu) = *desc;

        spin_lock(&lock);

        count = (desc->size+1) / 8;
        BUG_ON(count > 256);

        for (in = out = 0; in < count; in++) {
                const u32 *entry = (u32 *)(desc->address + in * 8);

                if (cvt_gate_to_trap(in, entry[0], entry[1], &traps[out]))
                        out++;
        }
        traps[out].address = 0;

        if (HYPERVISOR_set_trap_table(traps))
                BUG();

        spin_unlock(&lock);
}
/* Write a GDT descriptor entry.  Ignore LDT descriptors, since
   they're handled differently. */
static void xen_write_gdt_entry(struct desc_struct *dt, int entry,
                                u32 low, u32 high)
{
        switch ((high >> 8) & 0xff) {
        case DESCTYPE_LDT:
        case DESCTYPE_TSS:
                /* ignore */
                break;

        default: {
                xmaddr_t maddr = virt_to_machine(&dt[entry]);
                u64 desc = (u64)high << 32 | low;

                if (HYPERVISOR_update_descriptor(maddr.maddr, desc))
                        BUG();
        }
        }
}
static void xen_load_esp0(struct tss_struct *tss,
                          struct thread_struct *thread)
{
        struct multicall_space mcs = xen_mc_entry(0);

        MULTI_stack_switch(mcs.mc, __KERNEL_DS, thread->esp0);

        xen_mc_issue(PARAVIRT_LAZY_CPU);
}
static void xen_set_iopl_mask(unsigned mask)
{
        struct physdev_set_iopl set_iopl;

        /* Force the change at ring 0. */
        set_iopl.iopl = (mask == 0) ? 1 : (mask >> 12) & 3;
        HYPERVISOR_physdev_op(PHYSDEVOP_set_iopl, &set_iopl);
}

static void xen_io_delay(void)
{
}
static unsigned long xen_apic_read(unsigned long reg)
{
        return 0;
}
static void xen_flush_tlb(void)
{
        struct mmuext_op op;

        op.cmd = MMUEXT_TLB_FLUSH_LOCAL;

        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
                BUG();
}

static void xen_flush_tlb_single(unsigned long addr)
{
        struct mmuext_op op;

        op.cmd = MMUEXT_INVLPG_LOCAL;
        op.arg1.linear_addr = addr & PAGE_MASK;

        if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
                BUG();
}
static unsigned long xen_read_cr2(void)
{
        return x86_read_percpu(xen_vcpu)->arch.cr2;
}

static void xen_write_cr4(unsigned long cr4)
{
        /* never allow TSC to be disabled */
        native_write_cr4(cr4 & ~X86_CR4_TSD);
}
/*
 * Page-directory addresses above 4GB do not fit into architectural %cr3.
 * When accessing %cr3, or equivalent field in vcpu_guest_context, guests
 * must use the following accessor macros to pack/unpack valid MFNs.
 *
 * Note that Xen is using the fact that the pagetable base is always
 * page-aligned, and putting the 12 MSB of the address into the 12 LSB
 * of cr3.
 */
#define xen_pfn_to_cr3(pfn) (((unsigned)(pfn) << 12) | ((unsigned)(pfn) >> 20))
#define xen_cr3_to_pfn(cr3) (((unsigned)(cr3) >> 12) | ((unsigned)(cr3) << 20))
static unsigned long xen_read_cr3(void)
{
        return x86_read_percpu(xen_cr3);
}

static void xen_write_cr3(unsigned long cr3)
{
        if (cr3 == x86_read_percpu(xen_cr3)) {
                /* just a simple tlb flush */
                xen_flush_tlb();
                return;
        }

        x86_write_percpu(xen_cr3, cr3);

        {
                struct mmuext_op *op;
                struct multicall_space mcs = xen_mc_entry(sizeof(*op));
                unsigned long mfn = pfn_to_mfn(PFN_DOWN(cr3));

                op = mcs.args;
                op->cmd = MMUEXT_NEW_BASEPTR;
                op->arg1.mfn = mfn;

                MULTI_mmuext_op(mcs.mc, op, 1, NULL, DOMID_SELF);