1. 22 Nov, 2017 1 commit
  2. 20 Nov, 2017 1 commit
  3. 19 Nov, 2017 5 commits
  4. 18 Nov, 2017 4 commits
  5. 17 Nov, 2017 6 commits
  6. 16 Nov, 2017 1 commit
    • cobalt/monitor: fix wait/exit race, lost wakeup signal · 3112a3f9
      Philippe Gerum authored
      __cobalt_monitor_wait() is racy with respect to handling the pended
      state flag. Given thread A, which consumes events from a monitor, and
      thread B, which produces them, the following situation may happen:
      [A] __cobalt_monitor_wait()
      [A]     release gate mutex
      [B] fast acquire gate mutex
      [B] grant/signal monitor
      [B] fast release gate mutex (!MONITOR_PENDED, syscall-less)
      [A]     raise MONITOR_PENDED flag
      [A]     sleep_on(monitor)
              ... lost wake up signal, A keeps sleeping ...
      To fix this, release the gate mutex only after the PENDED bit is
      raised in the monitor state flags.
  7. 14 Nov, 2017 1 commit
    • cobalt/debug: Fix locking · 5036abbe
      Jan Kiszka authored and Philippe Gerum committed
      None of these functions is called from interrupt context. Leaving the
      critical sections interruptible can cause premature/double-unlock
      scenarios and bug reports such as:
      [Xenomai] lock ffffffff81c56000 already unlocked on CPU #1
                last owner = kernel/xenomai/debug.c:74 (hash_symbol(), CPU #1)
       000000000000002f ffff88007dc8bb10 ffffffff8118ae8f ffffffff00000001
       0000000000000021 ffff88007f897fde ffff88007dc8bb50 ffffffff8118b266
       00000000000000f1 ffff88007dc8bd68 0000000000000006 ffff88007dc8bd40
      Call Trace:
       [<ffffffff8118ae8f>] xnlock_dbg_release+0xdf/0xf0
       [<ffffffff8118b266>] hash_symbol+0x236/0x2d0
       [<ffffffff8118b668>] xndebug_trace_relax+0x118/0x450
       [<ffffffff811b8d50>] ? CoBaLt32emu_mmap+0xf0/0xf0
       [<ffffffff811b8dd7>] CoBaLt32emu_backtrace+0x87/0xb0
       [<ffffffff8100def6>] ? fpu__clear+0xd6/0x160
       [<ffffffff817b3691>] ? _raw_spin_unlock_irq+0x11/0x30
       [<ffffffff811ab1cc>] ipipe_syscall_hook+0x11c/0x3a0
       [<ffffffff8113d9bf>] __ipipe_notify_syscall+0xbf/0x180
       [<ffffffff810cd019>] ? __set_current_blocked+0x49/0x50
       [<ffffffff8113daab>] ipipe_handle_syscall+0x2b/0xb0
       [<ffffffff81001c9d>] do_fast_syscall_32+0xbd/0x220
       [<ffffffff817b64e2>] sysenter_flags_fixed+0x8/0x12
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  8. 09 Nov, 2017 1 commit
  9. 05 Nov, 2017 1 commit
  10. 29 Oct, 2017 2 commits
    • cobalt/sched: drop obsolete, broken xnthread_migrate() call · b3fa56e6
      Philippe Gerum authored
      Migrating a Cobalt thread to a different CPU _must_ go through the
      regular set_cpus_allowed*() interface.
      xnthread_migrate() cannot work with the 3.x architecture, and only
      worked with pure kernel-based threads on 2.x.
      Let's drop the confusing routine.
    • cobalt/process: fix CPU migration handling for blocked threads · 59344943
      Philippe Gerum authored
      To maintain consistency between both Cobalt and host schedulers,
      reflecting a thread migration to another CPU into the Cobalt scheduler
      state must happen from secondary mode only, on behalf of the migrated
      thread itself once it runs on the target CPU (*).
      For this reason, handle_setaffinity_event() may NOT fix up
      thread->sched immediately using the passive migration call for a
      blocked thread.
      Failing to ensure this may lead to the following scenario, with taskA
      as the migrated thread, and taskB any other Cobalt thread:
      CPU0(cobalt): suspend(taskA, XNRELAX)
      CPU0(cobalt): suspend(taskB, ...)
      CPU0(cobalt): enter_root(), next_task := <whatever>
      CPU0(root): handle_setaffinity_event(taskA, CPU3)
            taskA->sched = xnsched_struct(CPU3)
      CPU0(root): <relax epilogue code for taskA>
      CPU0(root): resume(taskA, XNRELAX)
            enqueue(rq=CPU3), reschedule IPI to CPU3
      CPU0(root): resume(taskB, ...)
      CPU0(root): leave_root(), host_task := taskA
      CPU0(cobalt): suspend(taskA)
      CPU0(cobalt): enter_root(), next_task := host_task := taskA
      CPU0(root??): <<<taskA execution>>> BROKEN
      CPU3(cobalt): <taskA execution> via reschedule IPI
      To sum up, we would end up with the migrated task running on both CPUs
      in parallel, which would be, well, a problem.
      To resync the Cobalt scheduler information, send a SIGSHADOW signal to
      the migrated thread, asking it to switch back to primary mode from the
      handler, at which point the interrupted syscall may be restarted. This
      guarantees that check_affinity() is called, and fixups are done from
      the proper context.
      There is a cost: setting the affinity of a blocked thread may now
      induce a delay for that target thread as well, since it has to make a
      roundtrip between primary and secondary modes for handling the change
      event. However, 1) there is no other safe way to handle such an
      event, and 2)
      changing the CPU affinity of a remote real-time thread at random times
      makes absolutely no sense latency-wise, anyway.
      (*) This means that the Cobalt scheduler state regarding the
      CPU information lags behind the host scheduler state until
      the migrated thread switches back to primary mode
      (i.e. task_cpu(p) != xnsched_cpu(xnthread_from_task(p)->sched)).
      This is ok since Cobalt does not schedule such thread until then.
  11. 19 Oct, 2017 1 commit
  12. 29 Sep, 2017 2 commits
    • include/boilerplate: strip private APIs out of public headers · 1ed69a66
      Philippe Gerum authored
      Some private Xenomai APIs are known to enter namespace conflicts with
      C++ frameworks used by client real-time applications, e.g. Boost.
      Instead of polluting the namespace even more by prefixing all private
      definitions, hide them from the public scope by moving them to their
      own separate headers, which only the Xenomai C implementation should
      pull.
      This commit only partially addresses the general issue, more work
      remains to be done in this direction to fix all potential conflicts.
    • boilerplate: rename clz()/ctz() -> __clz()/__ctz() · 5ec8be8b
      Antoine Hoarau authored and Philippe Gerum committed
      Prevents conflicts with libtbb, which defines clz as an inline
      function that would otherwise be replaced by these macros. This is a
      temporary fix until this code moves into private headers.
  13. 27 Sep, 2017 1 commit
  14. 24 Sep, 2017 1 commit
  15. 15 Sep, 2017 3 commits
  16. 28 Aug, 2017 1 commit
  17. 20 Aug, 2017 2 commits
  18. 16 Aug, 2017 2 commits
  19. 15 Aug, 2017 1 commit
  20. 01 Aug, 2017 1 commit
    • cobalt/timer: fix handling of timer migration over the handler [steely] · 96a188d5
      Philippe Gerum authored
      The removed code was not only useless, but actually harmful since:
      - xntimer_migrate() does re-queue a migrating timer if it is running,
        which is always the case in the calling context.
      - switching to a remote per-cpu timer queue in the middle of the
        processing loop is definitely not a good idea.
  21. 28 Jul, 2017 1 commit
  22. 27 Jul, 2017 1 commit
    • cobalt: Do not destroy info flags on remote thread suspension · 368f1ef5
      Jan Kiszka authored and Philippe Gerum committed
      Scenario: A high-prio thread is running and a low-prio one is waiting
      for a timeout. Now the timeout occurs, and the low-prio thread is
      woken up but can't run yet. If the high-prio thread suspends the
      low-prio one, XNTIMEO will be lost due to that. Similar scenarios are
      possible with other info flags and remote suspension reasons.
      Address them by clearing info flags only if the current thread is
      suspending itself.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>