- 03 Jul, 2018 40 commits
-
Adding PIDs and the state of the previous task will allow tracking Xenomai task switches in kernelshark (so far via out-of-tree patches; upstream is planning for the necessary plugin concept). Moreover, reporting the current priority on context switch helps with debugging unexpected or delayed context switches. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
The code of cobalt_print_sched_params is carried into the format string in tracefs, and trace-cmd tries to make sense of it. While it can process simple statements, this code is too complex and prevents parsing. Convert it into a function. That still does not resolve trace-cmd's parsing issue, but that can be addressed by a custom plugin which can then interpret this tracepoint. That would not be possible with the broken format string. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
__print_symbolic already ensures that unknown policies are printed numerically. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
That renaming only took place in 4.5. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Catch the case where we try to obtain the PID of a not yet fully initialized thread. Signal the error by returning -1, which is specifically useful when the value is added to some debug output or trace. xnthread_host_pid is now too complex for inlining. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
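A minimal sketch of the check described above, using a simplified stand-in structure; the real xnthread_host_pid() lives in the Cobalt core and operates on struct xnthread:

```c
#include <sys/types.h>
#include <stddef.h>

/* Stand-ins for the host task descriptor and its PID accessor. */
struct host_task;
extern pid_t host_task_pid(struct host_task *task);

struct thread_sketch {
	struct host_task *host_task;	/* NULL until the shadow is fully set up */
};

pid_t thread_host_pid(struct thread_sketch *thread)
{
	/*
	 * Not yet attached to a host task: return -1 so that debug
	 * output or trace records show an obviously invalid PID
	 * instead of a bogus one.
	 */
	if (thread->host_task == NULL)
		return -1;

	return host_task_pid(thread->host_task);
}
```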
-
This allows injecting a user-defined string into the system's ftrace without leaving RT mode (as a standard write to /sys/kernel/debug/tracing/trace_marker would require). As the signature of this function differs from the existing trace syscall, create a dedicated one. For simplicity, the maximum string length that can be passed down to the kernel is limited to 255 characters (+1 for termination). We call directly into the internal __trace_puts to avoid both the unneeded strlen call of the trace_puts wrapper and the false warning that kernel code uses trace_printk. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
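A condensed sketch of what such a syscall body could look like, assuming the kernel's strncpy_from_user() and the internal __trace_puts() helper; the syscall name, the COBALT_SYSCALL plumbing and the exact error handling are illustrative, not the actual Cobalt code:

```c
#define TRACE_BUF_LEN 256	/* 255 characters + terminating NUL */

COBALT_SYSCALL(ftrace_puts, current, (const char __user *u_str))
{
	char buf[TRACE_BUF_LEN];
	ssize_t len;

	len = strncpy_from_user(buf, u_str, sizeof(buf));
	if (len < 0)
		return -EFAULT;
	if (len >= (ssize_t)sizeof(buf))
		return -EINVAL;	/* longer than the 255-character limit */

	/*
	 * Calling the internal helper directly skips the strlen() done
	 * by the trace_puts() wrapper and avoids the warning meant for
	 * kernel code left with trace_printk() calls.
	 */
	return __trace_puts(_THIS_IP_, buf, len);
}
```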
-
Philippe Gerum authored
Ptracing may cause timer overruns, as the ptraced application cannot go waiting for the current period in a timely manner when stopped on a breakpoint or single-stepped. A mechanism was introduced a long time ago for hiding those overruns from the application while ptracing is in effect. The current implementation dealing with this case has two major flaws:
- it crashes the system when single-stepping (observed on ARM i.MX6q), revealing a past regression which went unnoticed so far.
- it uses a big hammer to forward (most) timers without running their respective timeout handlers while ptracing, in order to hide this timespan from the overrun accounting code. This introduces two issues:
  * the timer forwarding code sits in the tick announcement code, which is a very hot path, even though ptracing an application is definitely not a common operation.
  * all timers are affected / blocked during ptracing, except those which have been specifically marked (XNTIMER_NOBLCK) at creation, which turns out to be impractical for the common case.
The new implementation only addresses what is at stake, i.e. hiding overrun reports due to ptracing from applications. This can be done simply by noting when a thread should disregard overruns after an exit from the ptraced mode (XNHICCUP), then discarding the pending overruns if this flag is detected by the code reporting them (xntimer_get_overrun()).
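A rough model of the new scheme, with made-up structure and flag names; only the control flow mirrors the description above (set a per-thread hiccup flag when ptracing ends, then have the overrun reporting path consume it):

```c
#define T_HICCUP  0x1	/* stand-in for the XNHICCUP info bit */

struct timer_model {
	unsigned long periodic_ticks;	/* periods actually elapsed */
	unsigned long pexpect_ticks;	/* period the waiter expects next */
};

struct thread_model {
	int info;
};

/* Called when the thread leaves the ptraced/stopped state. */
static void ptrace_resume(struct thread_model *thread)
{
	thread->info |= T_HICCUP;
}

/* Overrun reporting path, roughly what xntimer_get_overrun() does. */
static unsigned long timer_get_overrun(struct thread_model *waiter,
				       struct timer_model *timer)
{
	unsigned long overruns = timer->periodic_ticks - timer->pexpect_ticks;

	timer->pexpect_ticks = timer->periodic_ticks;

	/* Discard overruns accumulated while the thread was ptraced. */
	if (waiter->info & T_HICCUP) {
		waiter->info &= ~T_HICCUP;
		return 0;
	}

	return overruns;
}
```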
-
In contrast to #ifdef CONFIG_x, #if IS_ENABLED(x) (or our wrapper around the latter) does not update the dependency information for kbuild. So switching any config option easily left inconsistent build artifacts behind. This conversion also fixes de66d324: there neither is nor ever was a CONFIG_XENO_DEBUG_LOCKING. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
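For illustration, the conversion boils down to the following pattern; the config option name is real, the helper is made up, and the "before"/"after" comments merely restate the kbuild dependency problem described in the message above:

```c
#include <linux/kconfig.h>	/* for IS_ENABLED() */

extern void do_lock_debug_checks(void);

static inline void lock_debug_hook_before(void)
{
#if IS_ENABLED(CONFIG_XENO_OPT_DEBUG_LOCKING)	/* old form, not tracked */
	do_lock_debug_checks();
#endif
}

static inline void lock_debug_hook_after(void)
{
#ifdef CONFIG_XENO_OPT_DEBUG_LOCKING		/* converted form, tracked by kbuild */
	do_lock_debug_checks();
#endif
}
```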
-
This reveals a bug in the trylock kernel slow-path when CONFIG_XENO_OPT_DEBUG_MUTEX_SLEEP is set. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
No need to have different patterns, and the one used by mutex_timedlock is more compact. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
We missed calling set_current_owner on successful acquisition. That broke priority ceiling and could even cause a crash when lock debugging was enabled. This can easily be addressed by switching the open-coded trylock to xnsynch_try_acquire. Nice side effect: less code. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
Since 4.9.x, the interrupt pipeline implementation guarantees that the regular context switching code may be used over the head stage, including the FPU management bits. Drop the open-coded support and use mainline's implementation instead. While at it, drop the useless conditionals for handling the non-FPU case: that case does not apply to arm64.
-
Philippe Gerum authored
-
Philippe Gerum authored
While at it, stop using the obsolete flush_cache_all() routine, which cannot honor the documented semantics on arm64. Besides, calibrating the access times to the timer registers under no-cache conditions does not make sense.
-
Philippe Gerum authored
-
Philippe Gerum authored
-
At least on x86-64-compat, the missing destruction of the smokey barriers, specifically of their embedded mutexes, causes crashes of the test. The reason is likely a mismatch between the kernel's and userland's views of which objects are still active, combined with the fact that userland kept them on the volatile stack. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Users may expect this (probably the last such) sleeping service to be available under Cobalt just like sleep, nanosleep & Co. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Philippe Gerum authored
Does not impact performance and fixes the inclusion hell of pulling in the struct xnthread definition for good.
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
Specific system calls may benefit from dealing with the caller's runtime mode by themselves, depending on internal information which the generic syscall dispatcher does not have access to. To this end, a new syscall mode called "handover" is introduced. Syscalls bearing this mode bit are always entered from the current calling domain. The syscall handler may return -ENOSYS to trigger a switch to the converse domain until all domains have been visited once, at which point the syscall fails with -ENOSYS automatically.
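As a hedged illustration of how a handler could exploit this, assuming the new mode keyword can be passed through the usual COBALT_SYSCALL() declaration; the syscall itself and its helpers are made up:

```c
extern int can_handle_from_current_domain(int arg);
extern int do_frob(int arg);

COBALT_SYSCALL(frob, handover, (int arg))
{
	/*
	 * Entered from whatever domain the caller is currently running
	 * in. If this pass cannot handle the request, returning -ENOSYS
	 * asks the dispatcher to re-enter the handler from the converse
	 * domain; once both domains have been tried, -ENOSYS is finally
	 * reported to the caller.
	 */
	if (!can_handle_from_current_domain(arg))
		return -ENOSYS;

	return do_frob(arg);
}
```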
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
For outdated uClibc.
-
Implement pthread_setschedprio on top of pthread_setschedparam_ex with the help of the new __SCHED_CURRENT policy. This ensures that priority changes are applied directly to the real-time core, with just a single syscall. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
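Roughly, the libcobalt side can then be as simple as the following sketch (condensed, assuming the Cobalt-wrapped <pthread.h> exposes pthread_setschedparam_ex() and struct sched_param_ex):

```c
#include <pthread.h>

int pthread_setschedprio(pthread_t thread, int prio)
{
	struct sched_param_ex param_ex = {
		.sched_priority = prio,
	};

	/*
	 * __SCHED_CURRENT means "keep the thread's current policy, only
	 * update the priority", so the change reaches the real-time core
	 * directly with a single syscall.
	 */
	return pthread_setschedparam_ex(thread, __SCHED_CURRENT, &param_ex);
}
```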
-
Define the internal scheduling policy "current": it shall refer to the target thread's current scheduling policy. This will allow modeling pthread_setschedprio on top of pthread_setschedparam_ex with only a single syscall. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
xnsynch_release also needs to tell the caller about the potential need for a reschedule after deboosting for prio-protection. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
We currently return the next owner, but no caller of xnsynch_release evaluates this beyond a != NULL check, calling xnsched_run in that case. Simplify the API by returning a need_resched flag directly. This will also help with fixing the missing reschedule after a PP deboost. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
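The resulting caller pattern collapses to something like this sketch; the mutex field name is illustrative, xnsynch_release() and xnsched_run() are the core services named above:

```c
static void release_and_resched(struct cobalt_mutex *mutex,
				struct xnthread *curr)
{
	/* Reschedule only if the release requires it, i.e. a waiter was
	 * readied or a priority boost was removed. */
	if (xnsynch_release(&mutex->synchbase, curr))
		xnsched_run();
}
```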
-
Philippe Gerum authored
-
Philippe Gerum authored
--enable-lazy-setsched should be given to enable lazy propagation of scheduling parameters upon calls to pthread_setschedparam*() and sched_setscheduler(). Defaults to off.
-
Philippe Gerum authored
-
Philippe Gerum authored
Do not switch to secondary mode upon schedparam updates for propagating changes to the regular kernel, if the caller runs in primary mode when entering pthread_setschedparam*() or sched_setscheduler(). In such a case, the update request to the regular kernel is left pending until the target thread resumes execution in relaxed mode, at which point it is committed. CAUTION: This mechanism won't update the schedparams cached by glibc for the caller in user-space, but this is the deal: we don't relax threads which issue pthread_setschedparam[_ex]() from primary mode anymore, but then only the kernel side (Cobalt and the host kernel) will be aware of the change, and glibc might cache obsolete information. If the caller already runs in relaxed mode on entry to these services, the update request takes place immediately, via the regular (g)libc calls. In any case, the new scheduling parameters for the target thread are immediately applied by Cobalt, regardless of the update path followed for the regular kernel.
-
Philippe Gerum authored
Provide a mechanism for carrying out a lazy propagation of schedparam updates to the regular kernel, so that userland does not have to switch to secondary mode for this. When userland issues sc_cobalt_thread_setschedparam_ex for updating the scheduling parameters of a Xenomai thread, a request for propagating this change to the regular kernel is made pending. Such a request will be committed later, either:
- when the thread relaxes, if it was running in primary mode when the update request was received;
- the next time the thread calls back into the Cobalt core as a result of receiving a HOME action from a SIGSHADOW notification, which is sent if the thread was relaxed at the time of the update request.
As a result, the target thread will have propagated the schedparam update to the regular kernel as soon as it resumes (relaxed) execution in user-space.
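A toy model of that flow, with invented field and helper names; only the control flow mirrors the description above:

```c
struct shadow_model {
	int schedparam_pending;		/* Linux-side update still owed */
	int policy, prio;		/* last values requested by userland */
};

extern void propagate_to_linux(int policy, int prio);	/* invented helper */

/* sc_cobalt_thread_setschedparam_ex path: Cobalt applies the new
 * parameters right away (not shown); the Linux side is only marked. */
static void request_schedparam_update(struct shadow_model *t,
				      int policy, int prio)
{
	t->policy = policy;
	t->prio = prio;
	t->schedparam_pending = 1;
}

/* Run when the thread relaxes, or when it re-enters the core after the
 * SIGSHADOW/HOME notification, i.e. once it executes under the regular
 * kernel again. */
static void commit_schedparam_update(struct shadow_model *t)
{
	if (!t->schedparam_pending)
		return;

	t->schedparam_pending = 0;
	propagate_to_linux(t->policy, t->prio);
}
```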
-