- 02 Jan, 2022 40 commits
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
__secure_computing(), called by syscall_trace_enter(), returns -1 when a syscall should be skipped. We must avoid interpreting this as EXIT_SYSCALL_OOB in the Dovetail case. This fixes, e.g., crashes of Chrome in sandbox mode.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
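The distinction the fix restores can be sketched as a small userspace model. All names and values below are illustrative, not the actual Dovetail entry code: the point is only that the seccomp "skip" verdict and the out-of-band exit path must be kept apart as distinct codes.

```c
#include <assert.h>

/* Hypothetical model of the syscall-entry decision. Before the fix, any
 * -1 coming back from syscall_trace_enter() could be mistaken for the
 * oob-exit marker; here the two outcomes get distinct codes. */
enum entry_code { ENTER_OK = 0, ENTER_SKIP = -1, ENTER_OOB = -2 };

static enum entry_code classify(int trace_ret, int oob_handled)
{
    if (oob_handled)
        return ENTER_OOB;   /* genuinely handled out-of-band */
    if (trace_ret == -1)
        return ENTER_SKIP;  /* seccomp asked to skip the syscall */
    return ENTER_OK;        /* continue normal in-band handling */
}
```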
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
These changes aim at enabling support for both the legacy and prctl-based syscall formats by the same kernel, without affecting the syscall handling logic in the companion cores which still depend on the former. CONFIG_DOVETAIL_LEGACY_SYSCALL_RANGE should be turned on by such cores in order to enable the legacy call format.

If CONFIG_DOVETAIL_LEGACY_SYSCALL_RANGE is set, we assume the companion core may not handle the new prctl-based call format, but instead expects __OOB_SYSCALL_BIT to be set directly in the syscall code register, defining its own syscall range. In this case, prctl() requests with an oob signature might be received by the oob syscall handler, but these should always be handled from the in-band stage, regardless of the call arguments. To this end, the oob syscall handler is allowed to ask for the request to be propagated to the peer in-band handler, which then decides to either handle the request locally, or pass it down in turn to the regular syscall handler. This is a rare case, arising when prctl-based syscalls are not accepted by the companion core, but some application issues prctl() calls matching the oob signature (i.e. prctl(option | __OOB_SYSCALL_BIT, ...)), denoting either a misconfiguration or a broken application.

If CONFIG_DOVETAIL_LEGACY_SYSCALL_RANGE is unset, every oob syscall must be folded into a prctl() request, with __OOB_SYSCALL_BIT set in the option argument.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
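The resulting dispatch rule can be summarized in a short userspace sketch. The value of __OOB_SYSCALL_BIT, the prctl number, and the helper name are assumptions made for illustration only:

```c
#include <stdbool.h>

/* Assumed values for illustration; check the Dovetail headers for the
 * real ones. */
#define __OOB_SYSCALL_BIT 0x10000000
#define __NR_prctl 157 /* x86-64 */

/* With the legacy range enabled, any syscall number carrying
 * __OOB_SYSCALL_BIT goes to the oob handler. Without it, only prctl()
 * calls whose option argument carries the bit do. */
static bool is_oob_call(bool legacy_range, long nr, long option)
{
    if (legacy_range)
        return (nr & __OOB_SYSCALL_BIT) != 0;
    return nr == __NR_prctl && (option & __OOB_SYSCALL_BIT) != 0;
}
```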
-
Philippe Gerum authored
Receiving in-band __NR_prctl syscalls via their oob handler may confuse some companion cores still using the legacy call form (i.e. nr | __OOB_SYSCALL_BIT). Dig a little deeper into the prctl() call arguments in order to pass oob call forms exclusively to handle_oob_syscall().
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Receiving in-band __NR_prctl syscalls via their oob handler may confuse some companion cores still using the legacy call form (i.e. nr | __OOB_SYSCALL_BIT). Dig a little deeper into the prctl() call arguments in order to pass oob call forms exclusively to handle_oob_syscall().
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
This code is already under CONFIG_SMP: the comment got it right, but the implementation did not yet. Only affects the pipeline-off case.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
-
Whenever an IRQ was handled for a vector that was NULL or in one of the error states, the interrupt was not acknowledged at the APIC. That can happen if a vector is cleaned up by one of the device drivers while there is still an IRQ in flight. This has two effects:
- If the affected vector is re-assigned later, it does not work: the IRQ never makes its way to the CPU.
- Interrupts with lower priority are no longer delivered to the CPU.
The problem was observed on a fairly big Intel XEON machine where some vectors/IRQs were temporarily used, cleaned up and re-assigned later.
Signed-off-by: Florian Bezdeka <florian.bezdeka@siemens.com>
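The shape of the fix can be modeled standalone: acknowledge the local APIC unconditionally, even when the vector turns out to be unassigned, since an un-acked in-service bit blocks lower-priority vectors. Names and types below are illustrative, not the kernel's:

```c
#include <stdbool.h>

/* Minimal model of the IRQ path. VECTOR_UNUSED stands in for the
 * NULL/error vector states described above. */
enum vector_state { VECTOR_ASSIGNED, VECTOR_UNUSED };

struct apic_model { bool acked; bool handled; };

static void handle_vector(struct apic_model *apic, enum vector_state st)
{
    /* The bug: returning early on !VECTOR_ASSIGNED skipped the ack. */
    if (st == VECTOR_ASSIGNED)
        apic->handled = true;
    apic->acked = true; /* always ack, whatever the vector state */
}
```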
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Handle both the legacy and prctl-based call forms available from Dovetail, i.e.:

syscall(sys_evl_<op> | __OOB_SYSCALL_BIT, ...)             /* old form */
syscall(__NR_prctl, sys_evl_<op> | __OOB_SYSCALL_BIT, ...) /* new form */

With this change, we gain out-of-the-box support for Valgrind, since applications can now emit EVL system calls wrapped in prctl() requests, which are readily recognized by the instrumentation framework. As a result, the ABI number is bumped to #27.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
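Both call forms encode the EVL opcode the same way; only where the encoded value is passed differs. The constants below (the bit value, the prctl number, and the sys_evl_read opcode) are assumptions for illustration, not taken from the EVL ABI:

```c
/* Assumed values; consult the EVL/Dovetail headers for the real ones. */
#define __OOB_SYSCALL_BIT 0x10000000
#define __NR_prctl 157        /* x86-64 */
#define sys_evl_read 3        /* hypothetical EVL opcode */

/* old form: syscall(legacy_nr(sys_evl_read), ...)
 * new form: syscall(__NR_prctl, prctl_option(sys_evl_read), ...) */
static long legacy_nr(long op)    { return op | __OOB_SYSCALL_BIT; }
static long prctl_option(long op) { return op | __OOB_SYSCALL_BIT; }
```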
-
Philippe Gerum authored
Updating tsk->comm requires holding the task lock, along with propagating the change to a few subsystems (e.g. trace, perf). Use set_task_comm() to update this field instead of open coding the change in a half-baked way. Also, make sure the change is forwarded to the process event connector.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Make sure the percpu timer IPI is disabled on all CPUs before release.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Several factors may cause the active watchpoint count to be less than the poll table size as a result of collecting events:
- a stale file descriptor is encountered (-EBADF)
- bad user memory is written to while copying back evl_poll_event (-EFAULT)
- the user event set has fewer entries than the ready set (> 0)
In all of these cases, we may end up watching fewer files than the total number of items in the poll table, in which case poll_context.nr is larger than the actual number of active watchpoints. For this reason, clear_wait() cannot iterate over poll_context.nr items, but should rather consider the active watchpoints only. To this end, introduce poll_context.active, which is set by collect_events() appropriately.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
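The collect_events()/clear_wait() contract can be sketched in a userspace model. The structure and function names mirror the description above, but the code is illustrative, not the EVL implementation:

```c
#include <stddef.h>

struct poll_context {
    size_t nr;      /* poll table size */
    size_t active;  /* watchpoints actually armed */
    int armed[8];
};

/* Arm watchpoints until one "fails" (modeling -EBADF/-EFAULT or a short
 * user event set), recording how many were actually armed. */
static void collect_events(struct poll_context *pc, size_t fail_at)
{
    pc->active = 0;
    for (size_t i = 0; i < pc->nr; i++) {
        if (i == fail_at)
            break;          /* stop arming at the failure point */
        pc->armed[i] = 1;
        pc->active++;
    }
}

/* Iterate over the armed watchpoints only, never the full table. */
static size_t clear_wait(struct poll_context *pc)
{
    size_t cleared = 0;
    for (size_t i = 0; i < pc->active; i++) { /* not pc->nr */
        pc->armed[i] = 0;
        cleared++;
    }
    return cleared;
}
```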
-
Philippe Gerum authored
This bug would cause the last watchpoint from the poll table to be left unexpectedly detached, which in turn would break clear_wait(). While at it, clarify some naming in evl_drop_poll_table().
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This code lays the groundwork for an out-of-band network stack. It currently provides basic AF_PACKET support (SOCK_RAW only) from the out-of-band stage, reusing the regular NIC drivers unmodified for dealing with the hardware. Therefore, the stack has NO end-to-end real-time property just yet. The next steps will address:
- a kernel API for making regular NIC drivers oob-capable.
- a mechanism for the out-of-band core to learn the routing information dynamically produced by the in-band stack.
- support for UDP/AF_INET.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
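For reference, this is the standard in-band AF_PACKET/SOCK_RAW socket creation that the new stack mirrors; how EVL marks such a socket as out-of-band is not shown here and is outside this sketch. Note that creating a raw packet socket requires CAP_NET_RAW, so the call may legitimately fail with EPERM:

```c
#include <errno.h>
#include <sys/socket.h>
#include <netinet/in.h>      /* htons() */
#include <linux/if_ether.h>  /* ETH_P_ALL */

/* Returns a raw AF_PACKET socket fd capturing all protocols, or -errno
 * on failure (e.g. -EPERM without CAP_NET_RAW). */
static int create_raw_socket(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    return fd < 0 ? -errno : fd;
}
```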
-
Philippe Gerum authored
Align on the in-band definition of a ktime value which cannot represent a sensible timeout.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Knowing which synchronization object a thread sleeps on may be quite helpful while debugging. Export this information as a thread device attribute to /sys. Add it to the relevant tracepoints as well.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Some operations on factories started from /sys may be sensitive. Add a predicate to check whether the caller is authorized to perform such type of operation on a given factory. Access is granted for CAP_SYS_ADMIN-capable users, and/or users whose euid matches the kuid of the factory's clone device.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
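The access rule stated above reduces to a one-line predicate. The function name is hypothetical and the capability check is folded into a boolean parameter for the sake of a self-contained sketch:

```c
#include <stdbool.h>
#include <sys/types.h>

/* Grant access if the caller holds CAP_SYS_ADMIN, or if its euid
 * matches the kuid owning the factory's clone device. Illustrative
 * name, not the EVL symbol. */
static bool may_access_factory(bool cap_sys_admin,
                               uid_t caller_euid, uid_t factory_kuid)
{
    return cap_sys_admin || caller_euid == factory_kuid;
}
```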
-
Philippe Gerum authored
In some circumstances, a wait operation cannot complete because we don't have enough memory to prepare the waiter for wake up, such as transferring a buffer to it. In this case, we want evl_wait_schedule() to notify the caller explicitly by a distinct wake up status (i.e. -ENOMEM), which cannot be misinterpreted as a different wake up cause.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
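The mechanism can be modeled as the waker recording a reason on the waiter, which the wait side then returns verbatim, so -ENOMEM cannot be confused with any other wake-up cause. All names are illustrative:

```c
#include <errno.h>

struct waiter { int wait_reason; };

/* The waker stores the reason it could not complete the hand-off. */
static void wake_up_with_reason(struct waiter *w, int reason)
{
    w->wait_reason = reason;
}

/* The wait side returns that reason verbatim: 0 on a normal wakeup,
 * -ENOMEM if the waker ran out of memory preparing the hand-off. */
static int wait_schedule(const struct waiter *w)
{
    return w->wait_reason;
}
```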
-
Philippe Gerum authored
So far the API would only allow us to set the reason for a mass wake up of all waiters. We also need a way to set the reason for a directed wake up of a single thread. Change evl_wake_up() appropriately.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This does not depend on CONFIG_EVL_DEBUG_CORE: we entered oob (trap_entry), so we must leave that way (trap_exit).
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We may be running an SMP kernel on a uniprocessor machine whose interrupt controller supports no IPI. We should attempt to hook IPIs only if the hardware can support multiple CPUs; otherwise it is unneeded and bound to fail.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
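The guard amounts to checking the possible-CPU count before hooking anything; sketched standalone below, with num_possible standing in for what num_possible_cpus() would report:

```c
#include <stdbool.h>

/* Hook the oob IPIs only when the hardware could actually run more
 * than one CPU; on a uniprocessor machine the hook is pointless and
 * may fail if the interrupt controller has no IPI support. */
static bool should_hook_ipis(unsigned int num_possible)
{
    return num_possible > 1;
}
```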
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We have only very few syscalls, so prefer a plain switch to a pointer indirection, which ends up being fairly costly due to exploit mitigations.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
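The trade-off is between an indirect call through a handler table, which retpoline-style mitigations make expensive, and a direct switch the compiler can turn into direct calls or a jump table. A minimal sketch with made-up syscall numbers and handlers:

```c
#include <errno.h>

enum { SYS_A = 0, SYS_B = 1 };

static long do_a(long arg) { return arg + 1; }
static long do_b(long arg) { return arg * 2; }

/* Plain-switch dispatch: no function pointers, so no indirect branch
 * for the mitigations to slow down. */
static long dispatch(int nr, long arg)
{
    switch (nr) {
    case SYS_A: return do_a(arg);
    case SYS_B: return do_b(arg);
    default:    return -ENOSYS;
    }
}
```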
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
EVL_HIGH_PERCPU_CONCURRENCY optimizes the implementation for applications with many real-time threads running concurrently on any given CPU core (typically when eight or more threads may be sharing a single CPU core). This is a combination of the scalable scheduler and rb-tree timer indexing in a single configuration switch, since both aspects are normally coupled.

If the system runs only a few EVL threads per CPU core, this option should be turned off in order to minimize the cache footprint of the queuing operations performed by the scheduler and timer subsystems. Otherwise, it should be turned on in order to get constant-time queuing operations for a large number of runnable threads and outstanding timers.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-