- 02 Mar, 2016 6 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
Some timers initialized by the Cobalt core may have a valid reason to live on CPUs excluded from the real-time set. A typical example is the host timer from the scheduler slot, which relays ticks to the regular kernel. Other core timers are simply best created and left passive on those CPUs. Mark all core timers specifically, and exclude them from the LART detection code in __xntimer_init(). This fixes a spurious Cobalt debug assertion seen on SMP at boot, when the real-time CPU set is restricted to a subset of the online CPU set.
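As an illustration only, a minimal C sketch of the idea, with made-up names (DEMO_TIMER_CORE, struct demo_timer, demo_timer_init) standing in for the actual Cobalt flag and __xntimer_init() internals: the affinity assertion is applied to regular timers only, while core timers may target any online CPU.

    #include <assert.h>
    #include <stdbool.h>

    #define DEMO_TIMER_CORE 0x1  /* stand-in for the real core-timer marker */

    struct demo_timer {
            unsigned int flags;
            int cpu;             /* CPU this timer is pinned to */
    };

    static bool cpu_in_rt_set(int cpu, unsigned long rt_cpus)
    {
            return (rt_cpus >> cpu) & 1UL;
    }

    static void demo_timer_init(struct demo_timer *t, unsigned int flags,
                                int cpu, unsigned long rt_cpus)
    {
            t->flags = flags;
            t->cpu = cpu;
            /*
             * Core timers (e.g. the host-tick relay) may legitimately
             * live on a CPU excluded from the real-time set, so only
             * regular timers are subject to the sanity check.
             */
            if (!(flags & DEMO_TIMER_CORE))
                    assert(cpu_in_rt_set(cpu, rt_cpus));
    }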
-
Philippe Gerum authored
The situation below would cause a kernel crash on any earlier 3.x release, with ktask implemented in a dynamically loaded/unloaded module:

    CPU0: rtdm_task_destroy(ktask)
          ...
          rmmod(module)

    CPU1: ktask()
          ...
          ...
          __xnthread_test_cancel()
          do_exit()
          (last) schedule()
          OOPS: prev still treading on stale memory

In this case, the module would be unmapped too early, before the cancelled task can ultimately schedule away. The changes also fix a stale reference from the joiner thread to the former ->idtag field, after the joinee's TCB has been dropped.
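For illustration, a minimal module sketch of the pattern involved (assumed RTDM usage; the plain please_stop flag is a simplified stand-in for the real cancellation machinery): the point of the fix is that rtdm_task_destroy() must not return before the task has scheduled away for good, so that the subsequent rmmod cannot unmap memory the dying thread still uses.

    #include <linux/module.h>
    #include <rtdm/driver.h>

    static rtdm_task_t ktask;
    static volatile int please_stop;  /* simplified stand-in for a cancellation point */

    static void ktask_body(void *arg)
    {
            (void)arg;
            while (!please_stop)
                    rtdm_task_sleep(1000000);  /* 1 ms between work iterations */
    }

    static int __init demo_init(void)
    {
            return rtdm_task_init(&ktask, "demo", ktask_body, NULL,
                                  RTDM_TASK_LOWEST_PRIORITY, 0);
    }

    static void __exit demo_exit(void)
    {
            please_stop = 1;
            /*
             * Must block until ktask has fully exited and scheduled
             * away; only then is it safe for rmmod to unmap this
             * module (the OOPS described above).
             */
            rtdm_task_destroy(&ktask);
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");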
-
- 01 Mar, 2016 3 commits
-
-
Philippe Gerum authored
Make xnthread_join() switch the caller to secondary mode prior to waiting for the target thread's termination. The original runtime mode is restored upon return. Since the joiner was already synchronized on an event which the joinee may only send from secondary mode, this change does not drop any real-time guarantee for the joiner: there never was any in the first place. This is a preparation step toward stricter synchronization between the joiner and the joinee, especially in the SMP case.
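A hedged sketch of the resulting pattern, with hypothetical helper names (none of these are the actual Cobalt primitives): the caller's mode is sampled on entry, the wait runs in secondary mode, and the original mode is restored before returning.

    struct demo_thread;
    /* All of the helpers below are hypothetical stand-ins. */
    extern int  running_in_primary_mode(struct demo_thread *t);
    extern void switch_to_secondary_mode(struct demo_thread *t);
    extern void switch_to_primary_mode(struct demo_thread *t);
    extern int  wait_for_exit_event(struct demo_thread *joinee);

    static int join_sketch(struct demo_thread *joiner, struct demo_thread *joinee)
    {
            int was_primary = running_in_primary_mode(joiner);
            int ret;

            /* The exit event is sent from secondary mode anyway, so
               waiting from secondary mode costs no real-time guarantee. */
            switch_to_secondary_mode(joiner);
            ret = wait_for_exit_event(joinee);

            if (was_primary)
                    switch_to_primary_mode(joiner);  /* restore original mode */

            return ret;
    }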
-
Philippe Gerum authored
There is no point in switching a kthread to weak scheduling when cancelling it, since it must reach a cancellation point ASAP as part of its work loop anyway. Should it omit testing for cancellation, weak scheduling would not help enforce the exit request either.
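To illustrate the reasoning, the expected shape of a Cobalt kthread work loop (the body is a placeholder; the cancellation point is assumed here to be xnthread_test_cancel(), in line with the __xnthread_test_cancel() path shown in the earlier crash trace): termination relies on the loop reaching that point, not on any scheduling-class demotion.

    static void demo_kthread(void *arg)
    {
            (void)arg;

            for (;;) {
                    /* ... perform one unit of work ... */

                    /* Terminates the thread here if cancellation is pending. */
                    xnthread_test_cancel();
            }
    }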
-
Philippe Gerum authored
-
- 29 Feb, 2016 11 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
There is no reason to ask alloc_pages[_exact]() for physically contiguous memory for the main heap: real-time heaps are certainly no place to get DMA-suitable buffers from. Using alloc_pages*() for common Cobalt heaps is a problem:

- it raises the probability of allocation failures under memory fragmentation (such failures are seldom seen at boot time, but Cobalt-based modules using the same services could fail to allocate their heaps later on);

- it restricts the maximum heap size to MAX_ORDER (currently 4MB), which may be too small in some configurations.

Therefore, switch from alloc_pages_exact() to vmalloc().
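A simplified sketch of the switch (not the actual heap code; heap_size and the function names are illustrative): vmalloc() only needs virtually contiguous pages, so it is neither capped by MAX_ORDER nor as sensitive to physical fragmentation.

    #include <linux/errno.h>
    #include <linux/vmalloc.h>

    static void *heap_mem;
    static size_t heap_size = 32 << 20;  /* e.g. 32MB, beyond the MAX_ORDER cap */

    static int heap_alloc_sketch(void)
    {
            /* Before: physically contiguous, limited to MAX_ORDER (~4MB):
             *   heap_mem = alloc_pages_exact(heap_size, GFP_KERNEL);
             */

            /* After: virtually contiguous is enough for a software heap. */
            heap_mem = vmalloc(heap_size);
            return heap_mem ? 0 : -ENOMEM;
    }

    static void heap_free_sketch(void)
    {
            vfree(heap_mem);  /* was: free_pages_exact(heap_mem, heap_size) */
    }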
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 28 Feb, 2016 1 commit
-
-
Philippe Gerum authored
-
- 27 Feb, 2016 3 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
This is for unit testing in task context. The actor task performs simple test requests issued by userland.
-
Philippe Gerum authored
-
- 26 Feb, 2016 4 commits
-
-
xntimer_get_overruns() might be called on timers that have been stopped in the meantime, specifically by cobalt_timer_deliver(). We crash if we try to dequeue a stopped timer, and we should not restart it either. Signed-off-by:
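A minimal sketch of the guard, with hypothetical helpers (timer_is_running(), compute_overruns(), dequeue_timer(), requeue_timer() stand in for the actual Cobalt primitives): the overrun accounting path leaves a stopped timer alone instead of dequeuing or restarting it.

    struct demo_timer;
    /* Hypothetical stand-ins for the real Cobalt timer operations. */
    extern int timer_is_running(struct demo_timer *t);
    extern unsigned long compute_overruns(struct demo_timer *t);
    extern void dequeue_timer(struct demo_timer *t);
    extern void requeue_timer(struct demo_timer *t);

    static unsigned long get_overruns_sketch(struct demo_timer *t)
    {
            unsigned long overruns = compute_overruns(t);

            if (!timer_is_running(t))
                    return overruns;  /* stopped meanwhile: do not touch it */

            dequeue_timer(t);         /* only safe while the timer is queued */
            requeue_timer(t);         /* re-arm for the next period */
            return overruns;
    }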
Jan Kiszka <jan.kiszka@siemens.com>
-
Philippe Gerum authored
While at it, fix a potential UMR (uninitialized memory read) in smokey_barrier_wait().
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 24 Feb, 2016 8 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
If a timer is not running, it may still have a non-zero interval value, which has to be returned according to the standard. Signed-off-by:
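A hedged sketch of the getter semantics (struct demo_timer and demo_gettime are made-up illustrations, not the Cobalt implementation): it_interval is reported whether or not the timer is running, while only it_value depends on the armed state.

    #include <string.h>
    #include <time.h>

    struct demo_timer {
            int running;
            struct timespec value;     /* time to next expiry, when armed */
            struct timespec interval;  /* programmed period, kept while stopped */
    };

    static void demo_gettime(const struct demo_timer *t, struct itimerspec *out)
    {
            out->it_interval = t->interval;  /* always reported, per the standard */
            if (t->running)
                    out->it_value = t->value;
            else
                    memset(&out->it_value, 0, sizeof(out->it_value));  /* disarmed */
    }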
Jan Kiszka <jan.kiszka@siemens.com>
-
__cobalt_timer_getval() already checks this. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Move the call into a potential Cobalt extension to the front, so that the extension can completely control the value returned by timer_gettimeout. Signed-off-by:
Jan Kiszka <jan.kiszka@siemens.com>
-
Philippe Gerum authored
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 16 Feb, 2016 2 commits
-
-
Philippe Gerum authored
-
Philippe Gerum authored
-
- 12 Feb, 2016 1 commit
-
-
Philippe Gerum authored
-ENOMEM is confusing in this case: it does not actually reflect the error condition, and it does not match the error code commonly returned by POSIX services to denote a (temporary) lack of resources. Besides, this source of error was not even mentioned in the documentation of the affected services. All error codes must be detected anyway, and any program that specifically checked for -ENOMEM during error recovery in the affected services was already potentially confused by that misleading code, so this change does not introduce a significant ABI variation.
-
- 09 Feb, 2016 1 commit
-
-
Philippe Gerum authored
Fix up a potential race upon return from the grant/drain_wait operations, e.g. given two threads A and B:

    A: enqueue_waiter(self)
    A: monitor_wait
    A: monitor_unlock
    A: [timed] sleep
    A: wakeup on timeout/interrupt
    B: monitor_lock
    B: look_for_queued_waiter (found A, update A's state)
    B: monitor_unlock
    A: dequeue_waiter(self)
    A: return -ETIMEDOUT/-EINTR

The race may happen anytime between the moment the timeout/interrupt event is received by A and the moment A grabs the monitor lock back before unqueuing. When it happens, B can squeeze in a signal before A unqueues after resuming on error. Problem: A's internal state has been updated (e.g. some data was transferred to it), but it will receive -ETIMEDOUT/-EINTR, eventually causing it to miss the update. The fix involves filtering out the -ETIMEDOUT/-EINTR status upon return from the wait_grant/drain operations whenever the syncobj was actually signaled. This issue was detected and described at http://xenomai.org/pipermail/xenomai/2016-February/035852.html
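A hedged sketch of the waiter-side filtering (all names below are hypothetical; the actual fix lives in the Cobalt monitor/syncobj code): when the waiter resumes on timeout or interrupt but finds that the signaler already granted its request, the error status is dropped so the state update is not lost.

    struct demo_waiter {
            int signaled;  /* set by the signaler while holding the monitor lock */
    };

    /* Hypothetical stand-in for the real unqueue operation. */
    extern void dequeue_waiter(struct demo_waiter *w);

    /* Called by A with the monitor lock re-acquired after the sleep;
     * wait_status is 0, -ETIMEDOUT or -EINTR. */
    static int wait_grant_epilogue(struct demo_waiter *w, int wait_status)
    {
            dequeue_waiter(w);

            /*
             * B may have squeezed a signal in between A's wakeup on
             * timeout/interrupt and this point: the request was then
             * actually granted, so report success instead of the error.
             */
            if (wait_status && w->signaled)
                    return 0;

            return wait_status;
    }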
-