- Sep 26, 2023
-
Jan Kiszka authored
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
- Apr 30, 2023
-
Jan Kiszka authored
While we might be able to handle this more gracefully under I-pipe, e.g. by adding rcu_nmi_enter/exit to I-pipe IRQ handlers, it's less risky at this stage to keep our head in the sand and ignore the issue. Dovetail has this cleanly resolved, while I-pipe suffers from more lockdep issues that are no longer worth addressing. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Hardware IRQs are enabled here, e.g. when coming from irq_finalize_oneshot()! It might happen that, right after enabling the IRQ at hardware level, the IRQ comes in again, even before we could fix up the software IRQ state flag. Because that software flag is checked before going down to the HW level, the same IRQ was not properly masked in hardware on its next arrival; the result was an IRQ storm and a system freeze. Switching to unmask_irq() has two effects: we no longer have to care about the software IRQ state flag, and the IRQ state is updated in a hard-IRQs-off section together with the hardware state, so the software flag and the hardware state should no longer diverge. Link: https://lore.kernel.org/xenomai/20220210153313.2229625-1-gunter.grau@philips.com/ Signed-off-by: Florian Bezdeka <florian.bezdeka@siemens.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
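For reference, a minimal sketch of the unmask_irq() helper the patch switches to, paraphrased from kernel/irq/chip.c (details vary by kernel version): it updates the cached mask state and the hardware mask together, under the descriptor lock.

  static void unmask_irq(struct irq_desc *desc)
  {
          if (!irqd_irq_masked(&desc->irq_data))
                  return;        /* cached state says already unmasked */

          if (desc->irq_data.chip->irq_unmask) {
                  desc->irq_data.chip->irq_unmask(&desc->irq_data);
                  irq_state_clr_masked(desc);    /* keep cache in sync */
          }
  }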
-
Handling of PCIe MSI interrupts resulted in system hangs or high latencies. The fix is to replace a missed call to generic_handle_irq() with ipipe_handle_irq(). Signed-off-by: Scott Reed <scott.reed@arcor.de> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
The following commit in the vanilla kernel introduced a check of the cached interrupt mask flag in mask_irq(): bf22ff45 ("genirq: Avoid unnecessary low level irq function calls") This means that if the flag is not maintained correctly, the real bit in the hardware interrupt controller may not be cleared or set. The __ipipe_end_level_irq() function does not follow this rule: it unmasks the bit in the hardware without updating the cached flag accordingly. So after the first level-triggered interrupt finishes, the mask cache is in a wrong state. When the next interrupt fires, mask_irq() will not really mask the interrupt in the hardware, which causes an interrupt storm after re-enabling hard IRQs. The fix now also updates the shadow flag correctly. Signed-off-by: Gunter Grau <gunter.grau@philips.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
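The counterpart on the mask side, paraphrased from bf22ff45 (again, exact code varies by version); this is the cache check that turns a stale flag into a never-masked interrupt:

  static inline void mask_irq(struct irq_desc *desc)
  {
          if (irqd_irq_masked(&desc->irq_data))
                  return;        /* trusts the cached flag! */

          if (desc->irq_data.chip->irq_mask) {
                  desc->irq_data.chip->irq_mask(&desc->irq_data);
                  irq_state_set_masked(desc);
          }
  }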
-
Jan Kiszka authored
On some archs, WARN* is implemented by triggering a fault. This can cause trouble if we are already handling a fault and try to submit too much root work. Convert to an open-coded warning and stack dump. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
cpu_buffer->current_context is supposed to be protected by IRQ disabling, just like in Dovetail. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Some GPIO or pinmux drivers referencing ipipe_handle_demuxed_irq() may be compiled as modules, which requires __ipipe_dispatch_irq() to be exported. Signed-off-by: Fino Meng <fino.meng@linux.intel.com> [Jan: fine-tune commit message] Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
To clarify naming and purpose: this is a pipeline-specific routine which further prepares an interrupt descriptor for receiving events, for the most part by swapping the interrupt flow handler (not the interrupt handler per se) for chained IRQs. Signed-off-by: Philippe Gerum <rpm@xenomai.org> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
This patch enables the Intel pinctrl/GPIO core driver to operate in a pipelined interrupt system. However, it does not yet allow chained GPIO IRQs to be handled from the head stage of such a pipeline. In other words, the chained GPIO interrupts can safely be handled from the in-band stage when CONFIG_IPIPE is turned on, but cannot be routed to a real-time application. Enabling full support will require the I-pipe core to natively handle IRQs chained from a shared parent interrupt, which is not implemented at the moment. Signed-off-by: Philippe Gerum <rpm@xenomai.org> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
The pipeline core must be given an opportunity to fixup the interrupt descriptor right before a flow handler is assigned to it. To this end, make sure the irq_set_[chip_]handler_[name_]locked() helpers also call __fixup_irq_handler(). Signed-off-by: Philippe Gerum <rpm@xenomai.org> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
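A sketch of what such a patched helper could look like; the __fixup_irq_handler() call and its third argument are assumptions here, not the verbatim patch:

  static inline void irq_set_handler_locked(struct irq_data *data,
                                            irq_flow_handler_t handler)
  {
          struct irq_desc *desc = irq_data_to_desc(data);

          /* Let the pipeline adjust the flow handler first. */
          handler = __fixup_irq_handler(desc, handler, 0);
          desc->handle_irq = handler;
  }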
-
Jan Kiszka authored
This uses the same pattern as ftrace for calling ipipe_trace_panic_dump(), resulting in one less file that needs to be patched for I-pipe. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
This is in line with what ftrace does and ensures that setups with lowered log levels will not miss this important information. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
We do not need the special handling of __DO_TRACE(..., rcuidle=1) when running over the head domain. In fact, we cannot use it because it switches to SRCU, which is incompatible with that context. It's safe to switch to normal RCU because no head domain caller of trace_*_rcuidle tracepoints should do so from RCU-problematic paths, specifically idle. Ported from the Dovetail queue. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
When a huge number of CPUs is available (e.g. CONFIG_MAXSMP/x86), we might overflow the stack with cpumask_t variables in ipipe_select_timer(). Allocate the cpumask we need there dynamically instead. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
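The usual pattern for getting a cpumask off the stack looks roughly like this (illustrative sketch; with CONFIG_CPUMASK_OFFSTACK, as implied by CONFIG_MAXSMP, cpumask_var_t is heap-backed):

  cpumask_var_t mask;    /* heap-allocated with CONFIG_CPUMASK_OFFSTACK */

  if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
          return -ENOMEM;    /* caller must be able to handle failure */

  /* ... compute the set of CPUs the selected timer must cover ... */

  free_cpumask_var(mask);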
-
When a huge number of CPUs is available (e.g. CONFIG_MAXSMP/x86), we might overflow the stack with cpumask_t variables in ipipe_critical_enter(). Instead of allocating cpumask_var_t dynamically for these, rely on the fact that we cannot reenter the code accessing them by design, so those variables may be moved to local static storage. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
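Sketch of the resulting storage (variable names are made up): since ipipe_critical_enter() cannot reenter itself by design, file-scope masks are safe and cost nothing on the stack.

  /* Serialized by design: ipipe_critical_enter() never reenters. */
  static cpumask_t __sync_map;   /* CPUs to bring into the barrier */
  static cpumask_t __done_map;   /* CPUs that have reached it */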
-
Some configurations may define more than 256K distinct interrupts (e.g. CONFIG_MAXSMP/x86), which is the limit for the current 3-level mapping used for logging IRQs. Add a 4th mapping level to support configurations up to 16M interrupts. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
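For illustration (not the actual I-pipe code): with 64-bit words, each bitmap level fans out 64-way, so three levels cover 64^3 = 256K IRQs and a fourth extends that to 64^4 = 16M. The per-level index math looks like:

  /* Illustrative index split for a 4-level, 64-way bitmap tree. */
  static inline void irq_log_indices(unsigned int irq, unsigned int *l0,
                                     unsigned int *l1, unsigned int *l2,
                                     unsigned int *l3)
  {
          *l3 = irq & (BITS_PER_LONG - 1);    /* bit within leaf word */
          *l2 = (irq / BITS_PER_LONG) & (BITS_PER_LONG - 1);
          *l1 = (irq / (BITS_PER_LONG * BITS_PER_LONG)) & (BITS_PER_LONG - 1);
          *l0 = irq / (BITS_PER_LONG * BITS_PER_LONG * BITS_PER_LONG);
  }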
-
Jan Kiszka authored
Needed on x86 at least when CONFIG_IPIPE is off. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
x86 may generate one, so change the signature. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
Likely needed since c942cee4 which split enabling and startup. This fixes unpopulated vectors in the IOAPIC on x86 at least, possibly more. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
It's time to let ipipe_enable_irq return a proper error, as it is about to gain another operation that may fail. Drop the WARN_ON_ONCE in favor of that. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
Breaks in non-debug builds otherwise, e.g. https://travis-ci.com/xenomai-ci/xenomai/jobs/212725223 Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
All callers of lockdep_hardirqs_on/off already filter out !ipipe_root_p. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
Lost in d7fc2c06 ("lockdep: ipipe: exclude the head stage from IRQ state tracing") but still needed by x86 at least. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
Since 9beae1ea, we are supposed to pass down flags, not just 0 or 1. Luckily, 1 happened to be FOLL_WRITE, so we did the right thing by chance. Moreover, get_user_pages is deprecated in favor of its locked/unlocked/fast variants. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
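Shape of the fix at an illustrative call site (addr and page are placeholders, not from the patch): since 9beae1ea, the third argument is a FOLL_* flag mask, not a write boolean.

  struct page *page;
  long ret;

  /* before: correct only because FOLL_WRITE happens to be 1 */
  ret = get_user_pages(addr, 1, 1, &page, NULL);

  /* after: pass the intent explicitly */
  ret = get_user_pages(addr, 1, FOLL_WRITE, &page, NULL);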
-
Jan Kiszka authored
If evtdev were NULL, we would crash on get_dev_mode(evtdev) further down. So this test is never false in the absence of severe bugs. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
A long time ago (probably in 2.6 times), someone converted spaces to tabs, shuffling the layout around this way and forgetting to account for the multi-domain removal. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
Since 4.9, we need to declare continued lines via KERN_CONT. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
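Illustrative example of the rule (message text is made up): without KERN_CONT, or the pr_cont() shorthand, each printk() since 4.9 starts a new message.

  printk(KERN_INFO "I-pipe: probing tick device");
  printk(KERN_CONT " ... OK\n");   /* continues the same line */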
-
Jan Kiszka authored
When a CPU is unplugged, make sure to drop all of its per-CPU I-pipe timer devices. Otherwise, we will corrupt the device list when re-registering the host timer as the CPU comes online again. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
linux/ipipe_base.h was originally designed to circumvent #include hell by exporting only a subset of core definitions with minimal dependencies on other inner headers. The latest code reorganization fixed this issue in a better way, and linux/ipipe.h is currently the only direct reader of linux/ipipe_base.h, so let's merge both headers back into linux/ipipe.h.
-
Jan Kiszka authored
At least one arch, infamous x86, has different NR_syscalls values depending on compat vs. native ABI. Account for that by introducing a function that can deliver the currently valid syscall number if an arch implements such a service. In all other cases, this change makes no functional difference. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
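A hypothetical shape of such a per-arch hook, x86-flavoured: the function name is made up, while in_ia32_syscall() and IA32_NR_syscalls are real x86 definitions.

  static inline unsigned int ipipe_get_nr_syscalls(void)
  {
  #ifdef CONFIG_IA32_EMULATION
          if (in_ia32_syscall())
                  return IA32_NR_syscalls;   /* compat ABI */
  #endif
          return NR_syscalls;                /* native ABI */
  }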
-
Jan Kiszka authored
This I-pipe hook reports the desired resumption mode to the subscriber: resume all process tasks or just single-step a particular one? The use case is to enable synchronous stopping / resuming of all head tasks of a ptraced real-time process. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Jan Kiszka authored
A little bit inspired by the kernel's user return notifier, this introduces an I-pipe hook before the kernel jumps back to a userspace context from the root domain. The hook is designed to allow a switch back to the head domain, thus it will not run through signal/preemption checks when returning from the callback over the head domain. It is guaranteed to fire on return from interrupts and exceptions but may also fire on certain syscall-return paths. The first use case for the hook is resumption of ptraced tasks over the head domain if they were stopped in that domain. This provides just the generic infrastructure; the invocation of __ipipe_notify_user_intreturn as well as the definition of TIP_USERINTRET are architecture-specific. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
__ipipe_migrate_head() should not BUG() unconditionally when failing to schedule out a thread, but rather let the real-time core handle the situation a bit more gracefully.
-