1. 10 Dec, 2017 1 commit
  2. 08 Dec, 2017 3 commits
  3. 05 Dec, 2017 3 commits
    • drivers/gpio: Add zynq-7000 rtdm gpio driver · 56f9dd3f
      Greg Gallagher authored and Philippe Gerum committed
    • drivers/gpio: Request gpio at open() · 70f9d8f1
      Greg Gallagher authored and Philippe Gerum committed
      For the zynq platform (and possibly others in the future), we need to
      modify gpio-core to request gpio pins before using them. The open()
      function now requests a gpio and fails if it is already reserved,
      making the pin request transparent to the user. The ability to request
      and release pins is also available via an ioctl message. Tested on the
      Microzed Zynq-7010 platform and the Raspberry Pi 2 board.
    • cobalt/rtdm: Split rtdm_fd_enter up · 03b70fea
      Greg Gallagher authored and Philippe Gerum committed
      Split rtdm_fd_enter() up, deferring the step that stores the fd until
      after the open() call succeeds. Previously, when open() failed, an fd
      was left in the tree even after the cleanup code ran; if that fd number
      was used again, open() kept failing until a different fd was picked.
      This patch addresses the situation by not adding the fd to the tree
      until open() has succeeded and the fd is valid.
  4. 01 Dec, 2017 1 commit
  5. 25 Nov, 2017 1 commit
    • copperplate/heapobj-tlsf: fix private heap init · 7a038dfa
      Philippe Gerum authored
      TLSF's init_memory_pool() wants heaps to be larger than 4k on 64bit
      architectures, due to the increased size of the meta-data (struct tlsf)
      compared to its 32bit counterpart.
      
      Use 8k as the minimum heap size instead of PAGE_SIZE in the pshared
      case, and make sure the main pool size amounts to at least 8k in the
      process-private case, so that init_memory_pool() never fails in
      heapobj_pkg_init_private().
  6. 22 Nov, 2017 1 commit
  7. 20 Nov, 2017 1 commit
  8. 19 Nov, 2017 5 commits
  9. 18 Nov, 2017 4 commits
  10. 17 Nov, 2017 6 commits
  11. 16 Nov, 2017 1 commit
    • cobalt/monitor: fix wait/exit race, lost wakeup signal · 3112a3f9
      Philippe Gerum authored
      __cobalt_monitor_wait() is racy wrt handling the pended state flag.
      Given thread A, which consumes events from a monitor, and thread B,
      which produces them, the following situation may happen:
      
      [A] __cobalt_monitor_wait()
      [A]     release gate mutex
      [B] fast acquire gate mutex
      [B] grant/signal monitor
      [B] fast release gate mutex (!MONITOR_PENDED, syscall-less)
      [A]     raise MONITOR_PENDED flag
      [A]     sleep_on(monitor)
              ... lost wake up signal, A keeps sleeping ...
      
      To fix this, release the gate mutex only after the PENDED bit is
      raised in the monitor state flags.
  12. 14 Nov, 2017 1 commit
    • cobalt/debug: Fix locking · 5036abbe
      Jan Kiszka authored and Philippe Gerum committed
      None of these functions are called over interrupt context. Leaving the
      critical sections interruptible can cause premature/double-unlock
      scenarios and bug reports such as
      
      [Xenomai] lock ffffffff81c56000 already unlocked on CPU #1
                last owner = kernel/xenomai/debug.c:74 (hash_symbol(), CPU #1)
       000000000000002f ffff88007dc8bb10 ffffffff8118ae8f ffffffff00000001
       0000000000000021 ffff88007f897fde ffff88007dc8bb50 ffffffff8118b266
       00000000000000f1 ffff88007dc8bd68 0000000000000006 ffff88007dc8bd40
      Call Trace:
       [<ffffffff8118ae8f>] xnlock_dbg_release+0xdf/0xf0
       [<ffffffff8118b266>] hash_symbol+0x236/0x2d0
       [<ffffffff8118b668>] xndebug_trace_relax+0x118/0x450
       [<ffffffff811b8d50>] ? CoBaLt32emu_mmap+0xf0/0xf0
       [<ffffffff811b8dd7>] CoBaLt32emu_backtrace+0x87/0xb0
       [<ffffffff8100def6>] ? fpu__clear+0xd6/0x160
       [<ffffffff817b3691>] ? _raw_spin_unlock_irq+0x11/0x30
       [<ffffffff811ab1cc>] ipipe_syscall_hook+0x11c/0x3a0
       [<ffffffff8113d9bf>] __ipipe_notify_syscall+0xbf/0x180
       [<ffffffff810cd019>] ? __set_current_blocked+0x49/0x50
       [<ffffffff8113daab>] ipipe_handle_syscall+0x2b/0xb0
       [<ffffffff81001c9d>] do_fast_syscall_32+0xbd/0x220
       [<ffffffff817b64e2>] sysenter_flags_fixed+0x8/0x12
      
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
  13. 09 Nov, 2017 1 commit
  14. 05 Nov, 2017 1 commit
  15. 29 Oct, 2017 2 commits
    • cobalt/sched: drop obsolete, broken xnthread_migrate() call · b3fa56e6
      Philippe Gerum authored
      Migrating a Cobalt thread to a different CPU _must_ go through the
      regular set_cpus_allowed*() interface.
      
      xnthread_migrate() cannot work with the 3.x architecture, and only
      worked with pure kernel-based threads on 2.x.
      
      Let's drop the confusing routine.
    • cobalt/process: fix CPU migration handling for blocked threads · 59344943
      Philippe Gerum authored
      To maintain consistency between both Cobalt and host schedulers,
      reflecting a thread migration to another CPU into the Cobalt scheduler
      state must happen from secondary mode only, on behalf of the migrated
      thread itself once it runs on the target CPU (*).
      
      For this reason, handle_setaffinity_event() may NOT fix up
      thread->sched immediately using the passive migration call for a
      blocked thread.
      
      Failing to ensure this may lead to the following scenario, with taskA
      as the migrated thread, and taskB any other Cobalt thread:
      
      CPU0(cobalt): suspend(taskA, XNRELAX)
      CPU0(cobalt): suspend(taskB, ...)
      CPU0(cobalt): enter_root(), next_task := <whatever>
      ...
      CPU0(root): handle_setaffinity_event(taskA, CPU3)
            taskA->sched = xnsched_struct(CPU3)
      CPU0(root): <relax epilogue code for taskA>
      CPU0(root): resume(taskA, XNRELAX)
            enqueue(rq=CPU3), reschedule IPI to CPU3
      CPU0(root): resume(taskB, ...)
      CPU0(root): leave_root(), host_task := taskA
      ...
      CPU0(cobalt): suspend(taskA)
      CPU0(cobalt): enter_root(), next_task := host_task := taskA
      CPU0(root??): <<<taskA execution>>> BROKEN
      CPU3(cobalt): <taskA execution> via reschedule IPI
      
      To sum up, we would end up with the migrated task running on both CPUs
      in parallel, which would be, well, a problem.
      
      To resync the Cobalt scheduler information, send a SIGSHADOW signal to
      the migrated thread, asking it to switch back to primary mode from the
      handler, at which point the interrupted syscall may be restarted. This
      guarantees that check_affinity() is called, and fixups are done from
      the proper context.
      
      There is a cost: setting the affinity of a blocked thread may now
      induce a delay for that target thread as well, since it has to make a
      roundtrip between primary and secondary modes for handling the change
      event. However, 1) there is no other safe way to handle such event, 2)
      changing the CPU affinity of a remote real-time thread at random times
      makes absolutely no sense latency-wise, anyway.
      
      (*) This means that the Cobalt scheduler state regarding the
      CPU information lags behind the host scheduler state until
      the migrated thread switches back to primary mode
      (i.e. task_cpu(p) != xnsched_cpu(xnthread_from_task(p)->sched)).
      This is ok since Cobalt does not schedule such thread until then.
  16. 19 Oct, 2017 1 commit
  17. 29 Sep, 2017 2 commits
    • include/boilerplate: strip private APIs out of public headers · 1ed69a66
      Philippe Gerum authored
      Some private Xenomai APIs are known to enter namespace conflicts with
      C++ frameworks used by client real-time applications, e.g. Boost.
      
      Instead of polluting the namespace even more by prefixing all private
      definitions, hide them from the public scope by moving them to their
      own separate headers, which only the Xenomai C implementation should
      pull.
      
      This commit only partially addresses the general issue, more work
      remains to be done in this direction to fix all potential conflicts.
    • boilerplate: rename clz()/ctz() -> __clz()/__ctz() · 5ec8be8b
      Antoine Hoarau authored and Philippe Gerum committed
      Prevents conflicts with libtbb, which defines clz() as an inline
      function that gets replaced by these macros. This is a temporary fix
      until this code moves into private headers.
  18. 27 Sep, 2017 1 commit
  19. 24 Sep, 2017 1 commit
  20. 15 Sep, 2017 3 commits