- 24 Jul, 2008 2 commits
-
-
Roland McGrath authored
This adds a fast path for 64-bit syscall entry and exit when TIF_SYSCALL_AUDIT is set, but no other kind of syscall tracing. This path does not need to save and restore all registers as the general case of tracing does. Avoiding the iret return path when syscall audit is enabled helps performance a lot.

Signed-off-by: Roland McGrath <roland@redhat.com>
-
Roland McGrath authored
This short-circuit path in sysret_signal looks wrong to me. AFAICT, in practice the branch is never taken -- and if it were, it would go wrong. To wit, try loading a module whose init function does set_thread_flag(TIF_IRET), and see insmod crash (presumably with a wrong user stack pointer). This is because the FIXUP_TOP_OF_STACK work hasn't been done yet when we jump around the call to ptregscall_common and get to int_with_check -- where it expects the user RSP, SS, CS and EFLAGS to have been stored by FIXUP_TOP_OF_STACK.

I don't think it's normally possible to get to sysret_signal with no _TIF_DO_NOTIFY_MASK bits set anyway, so these two instructions are already superfluous. If it ever did happen, it is harmless to call do_notify_resume with nothing for it to do.

Signed-off-by: Roland McGrath <roland@redhat.com>
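The reproducer described above can be sketched as a trivial module; this is a hypothetical, untested illustration that assumes TIF_IRET and set_thread_flag() are usable from module code of this era:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/thread_info.h>

    static int __init tif_iret_init(void)
    {
            /* Force the iret work flag; it is acted on while returning
             * from the init_module() syscall, hitting the buggy path. */
            set_thread_flag(TIF_IRET);
            return 0;
    }

    static void __exit tif_iret_exit(void) { }

    module_init(tif_iret_init);
    module_exit(tif_iret_exit);
    MODULE_LICENSE("GPL");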
-
- 16 Jul, 2008 4 commits
-
-
Roland McGrath authored
This unifies and cleans up the syscall tracing code on i386 and x86_64. Using a single function for entry and exit tracing on 32-bit made the do_syscall_trace() into some terrible spaghetti. The logic is clear and simple using separate syscall_trace_enter() and syscall_trace_leave() functions as on 64-bit.

The unification adds PTRACE_SYSEMU and PTRACE_SYSEMU_SINGLESTEP support on x86_64, for 32-bit ptrace() callers and for 64-bit ptrace() callers tracing either 32-bit or 64-bit tasks. It behaves just like 32-bit.

Changing syscall_trace_enter() to return the syscall number shortens all the assembly paths, while adding the SYSEMU feature in a simple way.

Signed-off-by: Roland McGrath <roland@redhat.com>
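A rough sketch of the enter-side shape this describes, paraphrased rather than quoted from the patch; report_syscall_entry() is a placeholder name for the real ptrace reporting hook:

    /* returns the syscall number for the asm to dispatch, or -1 to
     * skip the call entirely (PTRACE_SYSEMU) */
    long syscall_trace_enter(struct pt_regs *regs)
    {
            long ret = 0;

            if (test_thread_flag(TIF_SYSCALL_EMU))
                    ret = -1L;                      /* pretend the syscall ran */

            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    report_syscall_entry(regs);     /* may rewrite regs */

            return ret ?: regs->orig_ax;
    }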
-
Jeremy Fitzhardinge authored
Exceptions using paranoidentry need to have their exception frames adjusted explicitly.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
-
Jeremy Fitzhardinge authored
Implement the failsafe callback, so that iret and segment register load exceptions are reported to the kernel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 12 Jul, 2008 1 commit
-
-
Roland McGrath authored
On three of the several paths in entry_64.S that call do_notify_resume() on the way back to user mode, we fail to properly check again for newly-arrived work that requires another call to do_notify_resume() before going to user mode. These paths set the mask to check only _TIF_NEED_RESCHED, but this is wrong. The other paths that lead to do_notify_resume() do this correctly already, and entry_32.S does it correctly in all cases.

All paths back to user mode have to check all the _TIF_WORK_MASK flags at the last possible stage, with interrupts disabled. Otherwise, we miss any flags (TIF_SIGPENDING for example) that were set any time after we entered do_notify_resume(). More work flags can be set (or left set) synchronously inside do_notify_resume(), as TIF_SIGPENDING can be, or asynchronously by interrupts or other CPUs (which then send an asynchronous interrupt).

There are many different scenarios that could hit this bug, most of them races. The simplest one to demonstrate does not require any race: when one signal has done handler setup at the check before returning from a syscall, and there is another signal pending that should be handled. The second signal's handler should interrupt the first signal handler before it actually starts (so the interrupted PC is still at the handler's entry point). Instead, it runs away until the next kernel entry (next syscall, tick, etc).

This test behaves correctly on 32-bit kernels, and fails on 64-bit (either 32-bit or 64-bit test binary). With this fix, it works.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/ucontext.h>

    #ifndef REG_RIP
    #define REG_RIP REG_EIP
    #endif

    static sig_atomic_t hit1, hit2;

    static void
    handler (int sig, siginfo_t *info, void *ctx)
    {
      ucontext_t *uc = ctx;

      if ((void *) uc->uc_mcontext.gregs[REG_RIP] == &handler)
        {
          if (sig == SIGUSR1)
            hit1 = 1;
          else
            hit2 = 1;
        }

      printf ("%s at %#lx\n", strsignal (sig),
              uc->uc_mcontext.gregs[REG_RIP]);
    }

    int
    main (void)
    {
      struct sigaction sa;
      sigset_t set;

      sigemptyset (&sa.sa_mask);
      sa.sa_flags = SA_SIGINFO;
      sa.sa_sigaction = &handler;
      if (sigaction (SIGUSR1, &sa, NULL) || sigaction (SIGUSR2, &sa, NULL))
        return 2;

      sigemptyset (&set);
      sigaddset (&set, SIGUSR1);
      sigaddset (&set, SIGUSR2);
      if (sigprocmask (SIG_BLOCK, &set, NULL))
        return 3;

      printf ("main at %p, handler at %p\n", &main, &handler);

      raise (SIGUSR1);
      raise (SIGUSR2);

      if (sigprocmask (SIG_UNBLOCK, &set, NULL))
        return 4;

      if (hit1 + hit2 == 1)
        {
          puts ("PASS");
          return 0;
        }

      puts ("FAIL");
      return 1;
    }

Signed-off-by: Roland McGrath <roland@redhat.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 09 Jul, 2008 1 commit
-
-
Glauber Costa authored
This is for consistency with i386.

Signed-off-by: Glauber Costa <gcosta@redhat.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 08 Jul, 2008 7 commits
-
-
Jeremy Fitzhardinge authored
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
64-bit Xen pushes a couple of extra words onto an exception frame. Add a hook to deal with them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
In a 64-bit system, we need separate sysret/sysexit operations to return to a 32-bit userspace.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
There's no need to combine restoring the user rsp within the sysret pvop, so split it out. This makes the pvop's semantics closer to the machine instruction.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
Don't conflate sysret and sysexit; they're different instructions with different semantics, and may be in use at the same time (at least within the same kernel, depending on whether it's an Intel or AMD system).

sysexit - just returns to userspace; does no register restoration of any kind, and must explicitly atomically enable interrupts.

sysret - reloads flags from r11, so there is no need to explicitly enable interrupts on 64-bit; responsible for restoring usermode %gs.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Jeremy Fitzhardinge authored
This is needed when the kernel is running on RING3, such as under Xen. x86_64 has a weird feature that makes it #GP on iret when SS is a null descriptor. This needs to be tested on bare metal to make sure it doesn't cause any problems. AMD specs say SS is always ignored (except on iret?).

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: xen-devel <xen-devel@lists.xensource.com>
Cc: Stephen Tweedie <sct@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Mark McLoughlin <markmc@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Cliff Wickman authored
TLB shootdown for SGI UV.

Depends on patch (in tip/x86/irq): x86-update-macros-used-by-uv-platform.patch (Jack Steiner, May 29)

This patch provides the ability to flush TLBs in CPUs that are not on the local node. The hardware mechanism for distributing the flush messages is the UV's "broadcast assist unit". The hook to intercept TLB shootdown requests is a 2-line change to native_flush_tlb_others() (arch/x86/kernel/tlb_64.c).

This code has been tested on a hardware simulator. The real hardware is not yet available.

The shootdown statistics are provided through /proc/sgi_uv/ptc_statistics. The use of /sys was considered, but would have required the use of many /sys files. The debugfs was also considered, but these statistics should be available on an ongoing basis, not just for debugging.

Issues to be fixed later:
- The IRQ for the messaging interrupt is currently hardcoded as 200 (see UV_BAU_MESSAGE). It should be dynamically assigned in the future.
- The use of appropriate udelay()'s is untested, as they are a problem in the simulator.

Signed-off-by: Cliff Wickman <cpw@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 Jun, 2008 1 commit
-
-
Vegard Nossum authored
From the code:

    "B stepping K8s sometimes report an truncated RIP for IRET
     exceptions returning to compat mode. Check for these here too."

The code then proceeds to truncate the upper 32 bits of %rbp. This means that when do_page_fault() is finally called, its prologue,

    do_page_fault:
        push %rbp
        movl %rsp, %rbp

will put the truncated base pointer on the stack. This means that the stack tracer will not be able to follow the base-pointer changes and will see all subsequent stack frames as unreliable.

This patch changes the code to use a different register (%rcx) for the checking and leaves %rbp untouched.

Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
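To see why a truncated saved %rbp derails the tracer, here is a minimal userspace sketch (illustrative only, not kernel code) of a frame-pointer walk; build with gcc -O0 -fno-omit-frame-pointer:

    #include <stdio.h>

    struct frame {
            struct frame *next;     /* saved %rbp: the caller's frame */
            void *ret;              /* return address pushed by call */
    };

    int main(void)
    {
            struct frame *fp = __builtin_frame_address(0);
            int depth;

            for (depth = 0; fp && depth < 3; depth++) {
                    printf("frame %d: return address %p\n", depth, fp->ret);
                    /* if any saved %rbp had its upper 32 bits zeroed,
                     * this step would leave the stack, and every frame
                     * beyond it would be unreliable */
                    fp = fp->next;
            }
            return 0;
    }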
-
- 26 Jun, 2008 1 commit
-
-
Jens Axboe authored
This converts x86, x86-64, and xen to use the new helpers for smp_call_function() and friends, and adds support for smp_call_function_single().

Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
-
- 23 Jun, 2008 1 commit
-
-
Abhishek Sagar authored
Record the address of the mcount call-site. Currently all archs except sparc64 record the address of the instruction following the mcount call-site. Some general cleanups are entailed. Storing mcount addresses in rec->ip enables looking them up in the kprobe hash table later on to check if they're kprobe'd.

Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com>
Cc: davem@davemloft.net
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
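The arithmetic behind the convention change is simple; a hypothetical illustration (the helper name is invented, but 5 bytes is the size of a call rel32 on x86):

    #define MCOUNT_INSN_SIZE 5      /* e8 xx xx xx xx: call rel32 */

    /* mcount receives the address of the instruction *after* the
     * call; stepping back yields the call-site itself, which is
     * what rec->ip now stores so kprobes can look it up later. */
    static unsigned long mcount_call_site(unsigned long return_addr)
    {
            return return_addr - MCOUNT_INSN_SIZE;
    }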
-
- 19 Jun, 2008 1 commit
-
-
Jan Beulich authored
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: "Andi Kleen" <andi@firstfloor.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 25 May, 2008 1 commit
-
-
Jan Beulich authored
Remove the no longer used handlers for reserved vectors.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 23 May, 2008 2 commits
-
-
Steven Rostedt authored
This patch replaces the indirect call to the mcount function pointer with a direct call that will be patched by the dynamic ftrace routines.

On boot up, the mcount function calls the ftrace_stub function. When the dynamic ftrace code is initialized, the ftrace_stub is replaced with a call to ftrace_record_ip, which records the instruction pointers of the locations that call it. Later, the ftraced daemon will call kstop_machine and patch all the locations to nops.

When ftrace is enabled, the original calls to mcount will now be set to call ftrace_caller, which will do a direct call to the registered ftrace function. This direct call is also patched when the function that should be called is updated.

All patching is performed by a kstop_machine routine to prevent any type of race conditions that are associated with modifying code on the fly.

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Arnaldo Carvalho de Melo authored
If CONFIG_FTRACE is selected and /proc/sys/kernel/ftrace_enabled is set to a non-zero value, the ftrace routine will be called every time we enter a kernel function that is not marked with the "notrace" attribute. The ftrace routine will then call a registered function, if one happens to be registered.

[ This code has been highly hacked by Steven Rostedt and Ingo Molnar, so don't blame Arnaldo for all of this ;-) ]

Update: It is now possible to register more than one ftrace function. If only one ftrace function is registered, that will be the function that ftrace calls directly. If more than one function is registered, then ftrace will call a function that will loop through the functions to call.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
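A hedged sketch of registering a tracer with this interface; the two-argument callback matches the ftrace_func_t of this era, but the snippet is illustrative rather than taken from the patch:

    #include <linux/ftrace.h>
    #include <linux/init.h>

    /* runs on entry to every traced kernel function: ip is the traced
     * function, parent_ip its caller; the callback must be notrace */
    static void notrace my_tracer(unsigned long ip, unsigned long parent_ip)
    {
    }

    static struct ftrace_ops my_ops = {
            .func = my_tracer,
    };

    static int __init my_tracer_init(void)
    {
            return register_ftrace_function(&my_ops);
    }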
-
- 17 Apr, 2008 1 commit
-
-
Roland McGrath authored
When we're stopped at syscall entry tracing, ptrace can change the %rax value from -ENOSYS to something else. If no system call is actually made because the syscall number (now in orig_rax) is bad, then we now always reset %rax to -ENOSYS again.

This changes it to leave the return value alone after entry tracing. That way, the %rax value set by ptrace is there to be seen in user mode (or in syscall exit tracing). This is consistent with what the 32-bit kernel does.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
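A userspace illustration of the visible effect (hedged sketch; error handling trimmed, x86-64 only): the tracer rewrites %rax at the entry stop of a bogus syscall, and with this change the child sees the tracer's value instead of -ENOSYS.

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            struct user_regs_struct regs;
            pid_t pid = fork();

            if (pid == 0) {
                    ptrace(PTRACE_TRACEME, 0, 0, 0);
                    raise(SIGSTOP);                 /* sync with tracer */
                    printf("child sees %ld\n", syscall(-1));
                    _exit(0);
            }

            waitpid(pid, 0, 0);                     /* SIGSTOP arrived */
            do {                                    /* step to the bogus call's entry */
                    ptrace(PTRACE_SYSCALL, pid, 0, 0);
                    waitpid(pid, 0, 0);
                    ptrace(PTRACE_GETREGS, pid, 0, &regs);
            } while ((long)regs.orig_rax != -1);

            regs.rax = 12345;                       /* tracer-chosen value */
            ptrace(PTRACE_SETREGS, pid, 0, &regs);
            ptrace(PTRACE_CONT, pid, 0, 0);         /* child prints 12345 */
            waitpid(pid, 0, 0);
            return 0;
    }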
-
- 26 Feb, 2008 1 commit
-
-
Ingo Molnar authored
pointed out by pageexec@freemail.hu:

> what happens here is that gcc treats the argument area as owned by the
> callee, not the caller and is allowed to do certain tricks. for ssp it
> will make a copy of the struct passed by value into the local variable
> area and pass *its* address down, and it won't copy it back into the
> original instance stored in the argument area.
>
> so once sys_execve returns, the pt_regs passed by value hasn't at all
> changed and its default content will cause a nice double fault (FWIW,
> this part took me the longest to debug, being down with cold didn't
> help it either ;).

To fix this we pass in pt_regs by pointer.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
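The pitfall generalizes beyond ssp; a minimal userspace illustration (not from the patch) of why writes to a by-value struct argument are lost while writes through a pointer stick:

    #include <stdio.h>

    struct regs { long ax; };

    static void by_value(struct regs r)
    {
            r.ax = 42;      /* modifies a local copy only */
    }

    static void by_pointer(struct regs *r)
    {
            r->ax = 42;     /* modifies the caller's instance */
    }

    int main(void)
    {
            struct regs r = { 0 };

            by_value(r);
            printf("after by_value:   ax = %ld\n", r.ax);   /* 0 */

            by_pointer(&r);
            printf("after by_pointer: ax = %ld\n", r.ax);   /* 42 */
            return 0;
    }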
-
- 19 Feb, 2008 1 commit
-
-
Adrian Bunk authored
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Cc: hpa@zytor.com
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 09 Feb, 2008 1 commit
-
-
Ingo Molnar authored
Use a common irq_return entry point for all the iret places, which need the paravirt INTERRUPT return wrapper.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 06 Feb, 2008 2 commits
-
-
Roland McGrath authored
This change broke recovery of exceptions in iret:

    commit 72fe4858
    Author: Glauber de Oliveira Costa <gcosta@redhat.com>

        x86: replace privileged instructions with paravirt macros

The ENTRY(native_iret) macro adds alignment padding before the iretq instruction, so "iret_label" no longer points exactly at the instruction. It was sloppy to leave the old "iret_label" label behind when replacing its nearby use. Removing it would have revealed the other use of the label later in the file, and upon noticing that use, anyone exercising the minimum of attention to detail expected of anyone touching this subtle code would realize it needed to change as well.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Roland McGrath authored
Unify the x86-64 behavior for 32-bit processes that set bogus %cs/%ss values (the only ones that can fault in iret) to match the native i386 behavior: do not kill the task via do_exit, but generate a SIGSEGV signal instead.

[ tglx@linutronix.de: build fix ]

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 30 Jan, 2008 1 commit
-
-
Glauber de Oliveira Costa authored
The assembly code in entry_64.S issues a bunch of privileged instructions, like cli, sti, swapgs, and others. Paravirt guests are forbidden to do so, and we then replace them with macros that will do the right thing.

Signed-off-by: Glauber de Oliveira Costa <gcosta@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
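As a mental model of the substitution (a simplified, hypothetical rendering; the symbol names after "call" are invented, and the kernel's real macros emit patchable call sites rather than a plain #ifdef):

    #ifdef CONFIG_PARAVIRT
    /* indirect, patchable calls into the hypervisor-aware ops */
    #define DISABLE_INTERRUPTS(clobbers)    call *pv_irq_ops_irq_disable
    #define ENABLE_INTERRUPTS(clobbers)     call *pv_irq_ops_irq_enable
    #define SWAPGS                          call *pv_cpu_ops_swapgs
    #else
    /* native kernels keep the raw privileged instructions */
    #define DISABLE_INTERRUPTS(clobbers)    cli
    #define ENABLE_INTERRUPTS(clobbers)     sti
    #define SWAPGS                          swapgs
    #endif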
-
- 25 Jan, 2008 1 commit
-
-
Peter Zijlstra authored
Use HR-timers (when available) to deliver an accurate preemption tick. The regular scheduler tick that runs at 1/HZ can be too coarse when nice levels are used. The fairness system will still keep the CPU utilisation 'fair' by then delaying the task that got an excessive amount of CPU time, but tries to minimize this by delivering preemption points spot-on.

The average frequency of this extra interrupt is sched_latency / nr_latency. This need not be higher than 1/HZ; it's just that the distribution within the sched_latency period is important.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 17 Oct, 2007 1 commit
-
-
Andrey Mirkin authored
Right now register edi is just cleared before calling do_exit. That is wrong because the correct return value will be ignored. The value from rax should be copied to rdi instead of clearing edi.

AK: changed to 32bit move because it's strictly an int

[ tglx: arch/x86 adaptation ]

Signed-off-by: Andrey Mirkin <major@openvz.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 Oct, 2007 3 commits
-
-
Peter Zijlstra authored
Run the lockdep_sys_exit hook after all other C code on the syscall return path.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Thomas Gleixner authored
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 31 Jul, 2007 1 commit
-
-
Stephane Eranian authored
Remove unused TIF_NOTIFY_RESUME flag for all processor architectures. The flag was not used except on IA-64, where the patch replaces it with TIF_PERFMON_WORK.

Signed-off-by: stephane eranian <eranian@hpl.hp.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 22 Jul, 2007 1 commit
-
-
Tim Hockin authored
Background:
/dev/mcelog is typically polled manually. This is less than optimal for situations where accurate accounting of MCEs is important. Calling poll() on /dev/mcelog does not work.

Description:
This patch adds support for poll() to /dev/mcelog. This results in immediate wakeup of user apps whenever the poller finds MCEs. Because the exception handler can not take any locks, it can not call the wakeup itself. Instead, it uses a thread_info flag (TIF_MCE_NOTIFY) which is caught at the next return from interrupt or exit from idle, calling the mce_user_notify() routine. This patch also disables the "fake panic" path of the mce_panic(), because it results in printk()s in the exception handler and crashy systems. This patch also does some small cleanup for essentially unused variables, and moves the user notification into the body of the poller, so it is only called once per poll, rather than once per CPU.

Result:
Applications can now poll() on /dev/mcelog. When an error is logged (whether through the poller or through an exception) the applications are woken up promptly. This should not affect any previous behaviors. If no MCEs are being logged, there is no overhead.

Alternatives:
I considered simply supporting poll() through the poller and not using TIF_MCE_NOTIFY at all. However, the time between an uncorrectable error happening and the user application being notified is *the* most critical window for us. Many uncorrectable errors can be logged to the network if given a chance. I also considered doing the MCE poll directly from the idle notifier, but decided that was overkill.

Testing:
I used an error-injecting DIMM to create lots of correctable DRAM errors and verified that my user app is woken up in sync with the polling interval. I also used the northbridge to inject uncorrectable ECC errors, and verified (printk() to the rescue) that the notify routine is called and the user app does wake up. I built with PREEMPT on and off, and verified that my machine survives MCEs.

[wli@holomorphy.com: build fix]

Signed-off-by: Tim Hockin <thockin@google.com>
Signed-off-by: William Irwin <bill.irwin@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
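A small userspace sketch of the new capability (hedged; record parsing is omitted, since /dev/mcelog returns struct mce records whose layout is kernel-specific):

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096];
            struct pollfd pfd;

            pfd.fd = open("/dev/mcelog", O_RDONLY);
            if (pfd.fd < 0) {
                    perror("open /dev/mcelog");
                    return 1;
            }
            pfd.events = POLLIN;

            for (;;) {
                    if (poll(&pfd, 1, -1) < 0)      /* sleep until MCEs are logged */
                            break;
                    if (pfd.revents & POLLIN) {
                            ssize_t n = read(pfd.fd, buf, sizeof buf);
                            printf("woke up: %zd bytes of MCE records\n", n);
                    }
            }
            close(pfd.fd);
            return 0;
    }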
-
- 23 Jun, 2007 1 commit
-
-
Andi Kleen authored
Previously a program could switch to a compat mode segment and then execute SYSCALL, and it would jump to an uninitialized MSR and crash the kernel. Instead supply a dummy target for this case. Pointed out by Jan Beulich.

Cc: jbeulich@novell.com
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 May, 2007 1 commit
-
-
Jan Beulich authored
Signed-off-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Andi Kleen <ak@suse.de>
-
- 26 Feb, 2007 1 commit
-
-
Eric W. Biederman authored
The problem: After moving an interrupt, when is it safe to tear down the data structures for receiving the interrupt at the old location?

With a normal pci device it is possible to issue a read to a device to flush all posted writes. This does not work for the oldest ioapics, because they are on a 3-wire apic bus, which is a completely different data path. For some more modern ioapics, when everything is using front side bus delivery, you can flush interrupts by simply issuing a read to the ioapic. For other modern ioapics, empirical testing has shown that this does not work.

So it appears the only reliable way to know that the last of the irqs from an ioapic have been received from before the ioapic was reprogrammed is to receive the first irq from the ioapic from after it was reprogrammed. Once we know the last irq message has been received from an ioapic into a local apic, we then need to know that the irq message has been processed through the local apics.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 Dec, 2006 1 commit
-
-
Linus Torvalds authored
It has caused more problems than it ever really solved, and is apparently not getting cleaned up and fixed. We can put it back when it's stable and isn't likely to make warning or bug events worse. In the meantime, enable frame pointers for more readable stack traces.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-