- 15 May, 2021 2 commits
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The pipelined interrupt entry code must always run the common work loop before returning to user mode on the in-band stage, including after the preempted task was demoted from oob to in-band context as a result of handling the incoming IRQ. Failing to do so may cause in-band work to be left pending in this particular case, like _TIF_RETUSER and other _TIF_WORK conditions.

This bug caused the smokey 'gdb' test to fail on x86:
https://xenomai.org/pipermail/xenomai/2021-March/044522.html

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
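For illustration, a simplified sketch of the fixed exit path; running_inband() and user_mode() are stock Dovetail/kernel helpers, while irq_exit_to_user() and run_inband_work() are placeholder names for the arch-specific code, not the actual entry routines:

    #include <linux/irq_pipeline.h>
    #include <linux/ptrace.h>

    void handle_irq_pipelined(struct pt_regs *regs); /* Dovetail entry hook,
                                                        declaration simplified */
    void run_inband_work(struct pt_regs *regs);      /* hypothetical work loop */

    static void irq_exit_to_user(struct pt_regs *regs)
    {
        /* May demote the preempted task from oob to in-band. */
        handle_irq_pipelined(regs);

        /*
         * Run the common work loop whenever we end up returning to
         * user mode on the in-band stage, including right after an
         * oob -> in-band demotion, so that _TIF_RETUSER and other
         * _TIF_WORK conditions are never left pending.
         */
        if (running_inband() && user_mode(regs))
            run_inband_work(regs);
    }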
-
- 03 May, 2021 38 commits
-
Philippe Gerum authored
Since #ae18ad28, MAX_RT_PRIO should be used instead.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
A process is now marked for COW-breaking on fork() upon the first call to dovetail_init_altsched(), and must ensure its memory is locked via a call to mlockall(MCL_CURRENT|MCL_FUTURE) as usual. As a result, force_commit_memory() became pointless and was removed from the Dovetail interface.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
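From the application side, the expected sequence boils down to this minimal userland sketch (assuming libevl's evl_attach_self(); error handling trimmed):

    #include <sys/mman.h>
    #include <unistd.h>
    #include <error.h>
    #include <errno.h>
    #include <evl/thread.h>

    static int attach_to_core(void)
    {
        /*
         * Lock current and future mappings: with COW-breaking now
         * armed by the first dovetail_init_altsched() call, this is
         * all the process has to do to keep out-of-band code from
         * faulting on its memory.
         */
        if (mlockall(MCL_CURRENT | MCL_FUTURE))
            error(1, errno, "mlockall");

        /* Attaching to the EVL core sets up the alternate
           scheduling context kernel-side. */
        return evl_attach_self("sampler:%d", getpid());
    }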
-
evl/factory.h is included more than once; remove the inclusion that isn't necessary.

Signed-off-by: Zhang Kun <zhangkun@cdjrlc.com>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
An EVL lock is now distinct from a hard lock in that it tracks and disables preemption in the core when held. Such a spinlock may be useful when only EVL threads running out-of-band can contend for the lock, to the exclusion of out-of-band IRQ handlers. In this case, disabling preemption before attempting to grab the lock may be substituted for disabling hard irqs.

There are gotchas when using this type of lock from the in-band context, see the comments in evl/lock.h.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
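A usage sketch under those rules; only evl_spin_lock()/evl_spin_unlock() and evl/lock.h come from the text above, the lock type name and the list handling are illustrative assumptions:

    #include <linux/list.h>
    #include <evl/lock.h>

    struct pending_item {
        struct list_head next;
    };

    static struct evl_spinlock pending_lock; /* type/init names assumed */
    static LIST_HEAD(pending_list);

    /*
     * Only out-of-band EVL threads may contend for this lock, never
     * out-of-band IRQ handlers, so holding it disables (and tracks)
     * core preemption instead of hard irqs.
     */
    static void queue_pending(struct pending_item *item)
    {
        evl_spin_lock(&pending_lock);    /* bumps the preemption count */
        list_add_tail(&item->next, &pending_list);
        evl_spin_unlock(&pending_lock);  /* may reschedule when the
                                            count drops back to 0 */
    }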
-
Philippe Gerum authored
Very short sections of code outside of any hot path are protected by such a lock. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
For the most part, the gate lock is nested with a wait queue hard lock - which requires hard irqs to be off - to access the protected sections. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
For the most part, a thread hard lock - which requires hard irqs to be off - is nested with the mutex lock to access the protected sections. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Sleeping voluntarily with EVL preemption disabled is a bug. Add the proper assertion to detect this.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
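A sketch of what such an assertion may look like; the preemption count accessor is a made-up name for illustration, not the core's actual API:

    #include <linux/bug.h>

    int evl_preempt_count(void); /* hypothetical accessor */

    /* Call on every voluntary sleep path: going to sleep while
       EVL preemption is disabled is a bug. */
    static inline void assert_may_sleep(void)
    {
        WARN_ON_ONCE(evl_preempt_count() > 0);
    }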
-
Philippe Gerum authored
Given the semantics of an evl_flag, disabling preemption manually around the evl_raise_flag(to_flag) -> evl_wait_flag(from_flag) sequence does not make sense.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
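In other words, the sequence now simply reads as in this sketch (header name and return convention assumed):

    #include <evl/flag.h>

    static struct evl_flag to_flag;   /* signals the peer */
    static struct evl_flag from_flag; /* awaits its answer */

    /*
     * Raising to_flag may immediately hand over the CPU to a higher
     * priority waiter; wrapping the sequence in a preemption-disabled
     * section would be pointless since we are about to sleep on
     * from_flag anyway.
     */
    static int kick_peer_and_wait(void)
    {
        evl_raise_flag(&to_flag);
        return evl_wait_flag(&from_flag);
    }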
-
Philippe Gerum authored
The subscriber lock is shared between both execution stages, but accessed from the in-band stage for the most part, which implies disabling hard irqs while holding it. Meanwhile, out-of-band IRQs and EVL threads may compete for the observable lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock in any case.

Make these hard locks to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The data protected by the inbound (oob -> in-band traffic) buffer lock is frequently accessed from the in-band stage by design, where hard irqs should be disabled. Conversely, the out-of-band sections are short enough to bear with interrupt-free execution. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts would usually compete for such lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The data protected by the file table lock is frequently accessed from the in-band stage where holding it with hard irqs off is required. Therefore we would not benefit in the common case from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Now that the inclusion hell is fixed with evl/wait.h, we may include it into mm_info.h, so that the ptsync barrier can be defined statically in the out-of-band mm state.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts would usually compete for such lock, which would require hard irqs to be disabled while holding it. Therefore we would not generally benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Out-of-band IRQs and EVL thread contexts may compete for such lock, which would require hard irqs to be disabled while holding it. Therefore we would not benefit from the preemption disabling feature we are going to add to the EVL-specific spinlock. Make it a hard lock to clarify the intent.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We don't actually need to rely on the oob stall bit, provided hard irqs are off in the deemed interrupt-free sections, because the latter is sufficient as long as the code does not traverse a pipeline synchronization point (sync_current_irq_stage()) while holding a lock, which would be in and of itself a bug in the first place.

Remove the stall/unstall operations from the evl_spinlock implementation, fixing the few locations which were still testing the oob stall bit. The oob stall bit is still set by Dovetail on entry to IRQ handlers, which is ok: we will neither use nor affect it anymore, only relying on hard disabled irqs.

This temporary alignment of the evl_spinlock on the hard spinlock is a first step to revisit the lock types in the core, before the evl_spinlock is changed again to manage the preemption count.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Checking the oob stall bit in __evl_enable_preempt() to block the rescheduling is obsolete. It relates to a nested locking construct which is long gone, when the evl_spinlock managed the preemption count and the big lock was still in, i.e.:

    lock_irqsave(&ugly_big_lock, flags);  /* stall bit raised */
    evl_spin_lock(&inner_lock);           /* +1 preempt */
    wake_up_high_prio_thread();
    evl_spin_unlock(&inner_lock);         /* -1 preempt == 0, NO schedule
                                             because stalled */
    unlock_irqrestore(&ugly_big_lock, flags); /* stall bit restored */

This was a way to prevent a rescheduling from taking place inadvertently while holding the big lock.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This is a simple synchronization mechanism allowing an in-band caller to pass a point in the code making sure that no out-of-band operations which might traverse the same crossing are in flight. Out-of-band callers delimit the danger zone by down-ing and up-ing the barrier at the crossing; the in-band code asks for passing the crossing.

CAUTION: the caller must guarantee that evl_down_crossing() cannot be invoked _after_ evl_pass_crossing() is entered for a given crossing.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
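A usage sketch; the struct and header names are assumptions, the calls are those named above:

    #include <evl/crossing.h>

    static struct evl_crossing crossing;

    /* Out-of-band side: delimit the danger zone. */
    static void oob_access(void)
    {
        evl_down_crossing(&crossing);
        /* ... use the shared resource ... */
        evl_up_crossing(&crossing);
    }

    /*
     * In-band side: returns once no out-of-band caller traverses
     * the crossing anymore. The caller must guarantee that no
     * evl_down_crossing() call can happen past this point.
     */
    static void inband_dismantle(void)
    {
        evl_pass_crossing(&crossing);
        /* safe to release the shared resource now */
    }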
-
Philippe Gerum authored
Returns the current kthread descriptor or NULL if another thread context is running. CAUTION: does not account for IRQ context.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We need more flexibility in the argument passed to the thread function. Switch to an opaque pointer passed to evl_run_kthread() and variants instead of the current kthread descriptor.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
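A sketch of the resulting usage; the trailing arguments to evl_run_kthread() are illustrative guesses, check the actual signature in the tree:

    #include <evl/thread.h>  /* header name assumed */

    struct sampler_ctx {
        int channel;
    };

    /* The thread function now receives the opaque pointer handed
       over to evl_run_kthread(), not the kthread descriptor. */
    static void sampler_fn(void *arg)
    {
        struct sampler_ctx *ctx = arg;
        /* ... sample ctx->channel ... */
    }

    static struct evl_kthread sampler_kt;
    static struct sampler_ctx sampler_ctx = { .channel = 3 };

    static int start_sampler(void)
    {
        return evl_run_kthread(&sampler_kt, sampler_fn, &sampler_ctx,
                               90, 0, "sampler"); /* args illustrative */
    }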
-
Philippe Gerum authored
By convention, all thread-related calls which implicitly affect current and therefore do not take any @thread parameter should use a short-form name, such as evl_delay(), evl_sleep(). For this reason, the following renames took place:

- evl_set_thread_period -> evl_set_period
- evl_wait_thread_period -> evl_wait_period
- evl_delay_thread -> evl_delay

In addition, complete the set of kthread-specific calls which are based on the inner thread interface (this one working for user and kernel threads indifferently):

- evl_kthread_unblock
- evl_kthread_join

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
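A periodic kthread loop using the short-form names might look like this sketch; the clock handle, evl_read_clock() and the evl_wait_period() conventions are assumptions:

    #include <linux/errno.h>
    #include <linux/ktime.h>
    #include <evl/clock.h>
    #include <evl/thread.h>

    static void periodic_fn(void *arg)
    {
        ktime_t start = evl_read_clock(&evl_mono_clock);
        unsigned long overruns;

        /* Short-form call: implicitly acts on current. */
        if (evl_set_period(&evl_mono_clock, start, ms_to_ktime(10)))
            return;

        while (!evl_kthread_should_stop()) {
            overruns = 0;
            if (evl_wait_period(&overruns))
                break;  /* unblocked, or overrun policy hit */
            /* ... one cycle of periodic work ... */
        }
    }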
-
Philippe Gerum authored
This is an internal interface which should deal with ktime directly, not timespec64. In addition, rename to set() in order to match the converse short form read() call.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
This is a tiny fix for the repeated evl/sched.h inclusion.

Signed-off-by: lio <liu.hailong6@zte.com.cn>
-
Trace event *evl_sched_attrs* calls TP_printk("%s") to print the thread name obtained from evl_element_name(). However, evl_element_name() may sometimes return NULL, and passing NULL to TP_printk("%s") may cause problems. This patch avoids that.

Signed-off-by: lio <carver4lio@163.com>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Prioritization of timers in timer queues dates back to the Dark Ages of Xenomai 2.x, when multiple time bases would co-exist in the core, some of which representing date values as a count of periodic ticks. In such a case, multiple timers might elapse on the very same tick, hence the need for prioritizing them.

With a single time base indexing timers on absolute date values, which are expressed as a 64bit monotonic count of nanoseconds, the likelihood of observing identical trigger dates is very low. Furthermore, the formerly defined priorities were assigned as follows:

1) high priority to the per-thread periodic and resource timers
2) medium priority to the user-defined timers
3) low priority to the in-band tick emulation timer

It turns out that forcibly prioritizing 1) over 2) is at least debatable, if not questionable: resource timers have no high priority at all, they merely tick on the (unlikely) timeout condition. On the other hand, user-defined timers may well deal with high priority events only some EVL driver code may know about. Finally, handling 3) is a fast operation on top of Dovetail, which is already deferred internally whenever the timer management core detects that some oob activity is running/pending.

So we may remove the logic handling the timer priority, only relying on the trigger date for dispatching. This should save precious cycles in the hot path without any actual downside.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
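The dispatching logic thus reduces to date-only ordering, as in this illustrative sketch (not the core's actual timer queue code):

    #include <linux/ktime.h>
    #include <linux/list.h>

    struct sample_timer {
        struct list_head link;
        ktime_t date;  /* absolute trigger date, 64bit monotonic ns */
    };

    /* Insert by trigger date only: no priority field is consulted
       anymore; identical dates queue up in insertion order. */
    static void enqueue_timer(struct list_head *tq, struct sample_timer *t)
    {
        struct sample_timer *pos;

        list_for_each_entry(pos, tq, link) {
            if (ktime_before(t->date, pos->date)) {
                list_add_tail(&t->link, &pos->link); /* insert before pos */
                return;
            }
        }
        list_add_tail(&t->link, tq);
    }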
-
Philippe Gerum authored
Align naming of the kthread termination-related calls on the in-band counterparts. While at it, further clarify the interface by having evl_kthread_should_stop() explicitly return a boolean status.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
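Typical shape of the resulting loop, mirroring the in-band kthread_should_stop() idiom (sketch):

    #include <evl/thread.h>  /* header name assumed */

    static void worker_fn(void *arg)
    {
        /* evl_kthread_should_stop() now explicitly returns bool. */
        while (!evl_kthread_should_stop()) {
            /* ... wait for and process one work item ... */
        }
        /* Returning ends the kthread; the stopping side uses
           evl_kthread_join() to synchronize on the exit. */
    }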
-