05 Jun, 2021 (40 commits)
-
Philippe Gerum authored
This enables threads running out-of-band to retrieve most of the raw thread-related state information which "evl ps" accesses through sysfs attributes.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
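
As a sketch of what retrieving that information could look like: the EVL_THRIOC_GET_STATE request name and the evl_thread_state layout below are assumptions, not confirmed by this log; the uapi headers of the tree are authoritative.

    #include <evl/syscall.h>    /* oob_ioctl() */
    #include <evl/thread.h>

    /* Fetch the raw state bits of an EVL thread from oob context.
     * Request name and result layout are assumed here. */
    static int get_state_bits(int thread_efd, unsigned int *state_r)
    {
        struct evl_thread_state statebuf;
        int ret;

        ret = oob_ioctl(thread_efd, EVL_THRIOC_GET_STATE, &statebuf);
        if (ret)
            return ret;

        *state_r = statebuf.state;  /* assumed field */
        return 0;
    }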
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
SCHED_FIFO is the most critical and frequently used scheduling policy with EVL, so there is a net gain in inlining the small helpers manipulating the thread list for this one.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
It is practical to know beforehand when some out-of-band work is about to run on a non-isolated CPU, so that the application may warn the user about the potentially higher latency figures induced by the higher rates of cache and TLB misses which heavy in-band load running on the same core may cause. The control device now accepts the [oob_]ioctl(EVL_CTLIOC_GET_CPUSTATE) request, which queries the current state of a CPU:

- EVL_CPU_ISOL if it does not belong to the housekeeping set of the in-band kernel, i.e. the CPUs not mentioned in isolcpus= for the scheduling domain.
- EVL_CPU_OOB if it is part of the out-of-band set EVL manages.
- EVL_CPU_OFFLINE if it is currently offline.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
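
A minimal sketch of such a query from inband context; only the request name and the EVL_CPU_* flags come from this log, while struct evl_cpu_state_query is an illustrative stand-in for whatever argument layout the uapi control header actually defines.

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <evl/control.h>    /* EVL_CTLIOC_GET_CPUSTATE, EVL_CPU_* */

    /* Assumed argument layout; check the uapi header of your tree. */
    struct evl_cpu_state_query {
        unsigned int cpu;
        unsigned int state;
    };

    static int query_cpu_state(int cpu, unsigned int *state_r)
    {
        struct evl_cpu_state_query q = { .cpu = cpu };
        int fd, ret;

        fd = open("/dev/evl/control", O_RDWR);
        if (fd < 0)
            return -errno;

        ret = ioctl(fd, EVL_CTLIOC_GET_CPUSTATE, &q);
        close(fd);
        if (ret)
            return -errno;

        *state_r = q.state; /* test against EVL_CPU_ISOL/OOB/OFFLINE */
        return 0;
    }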
-
Philippe Gerum authored
The CPU the sampler runs on must be part of the out-of-band set.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Although writing to a proxy is primarily intended for oob callers, allowing inband tasks to use this channel too comes in handy: since all messages percolate through the same buffer, the sequence of messages relayed by such a proxy is preserved regardless of the execution stage they originate from. Inband writers can block until the output has drained, unless O_NONBLOCK is set for the proxy.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
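
The inband side then reduces to a plain write(2) on the proxy file descriptor, for instance:

    #include <string.h>
    #include <unistd.h>

    /* Inband writer: messages enter the same buffer oob writers use,
     * so ordering across stages is preserved. This may block until
     * the output drains, unless O_NONBLOCK is set on the proxy. */
    static ssize_t log_inband(int proxyfd, const char *msg)
    {
        return write(proxyfd, msg, strlen(msg));
    }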
-
Philippe Gerum authored
clock_sleep() only accepts absolute timespecs, so we have no use for a pointer collecting the remaining sleep time upon interrupt.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
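
Callers therefore fold any relative delay into the current time first; a minimal sketch using the standard POSIX clock call (the trailing clock_sleep() invocation is left as a comment since its exact prototype is not given by this log):

    #include <time.h>

    #define NSEC_PER_SEC 1000000000L

    /* Build the absolute wakeup date clock_sleep() expects. */
    static void delay_to_deadline(long delay_ns, struct timespec *deadline)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        deadline->tv_sec = now.tv_sec +
            (now.tv_nsec + delay_ns) / NSEC_PER_SEC;
        deadline->tv_nsec = (now.tv_nsec + delay_ns) % NSEC_PER_SEC;
        /* then e.g.: clock_sleep(clockfd, deadline); */
    }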
-
Philippe Gerum authored
Connecting a proxy with some special file may impose fixed-size writes to the latter; e.g. an eventfd would require that only 64-bit values be written. These changes enable the application to set a fixed granularity the proxy must abide by when writing to the target file, so that such a requirement is honored. In addition:

- oob_write() may block until enough space is available in the transfer buffer to complete the sending, unless O_NONBLOCK is set on the proxy file.
- POLLOUT|POLLWRNORM can be monitored to wait for the output to drain.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
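
A sketch of the eventfd case; evl_create_proxy() taking a granularity argument follows the libevl naming conventions but is an assumption as far as this log goes.

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <evl/proxy.h>

    /* Relay 8-byte counters to an eventfd: the proxy is constrained
     * to write in sizeof(uint64_t) chunks only. The evl_create_proxy()
     * signature used here is an assumption. */
    static int make_eventfd_proxy(void)
    {
        int efd = eventfd(0, 0);

        if (efd < 0)
            return -1;

        /* bufsz=4096, granularity=sizeof(uint64_t) */
        return evl_create_proxy(efd, 4096, sizeof(uint64_t),
                                0, "eventfd-relay");
    }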
-
Philippe Gerum authored
Concurrent reads or writes to an xbuf causing nested buffer space reservations could result in a miscalculation of the number of bytes remaining to be read or written.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
With this rework, we have two separate per-thread mode flags for greater flexibility, aimed at controlling the debug features on a per-thread basis:

- T_WOSS can be set to trigger SIGDEBUG upon an (unexpected) stage switch to in-band mode. This is strictly equivalent to the obsoleted T_WARN bit.
- T_WOLI enables/disables the detection of locking inconsistencies with mutexes via the EVL_THRIOC_{SET, CLEAR}_MODE interface. This combines the former static CONFIG_EVL_DEBUG_MUTEX_INBAND and CONFIG_EVL_DEBUG_MUTEX_SLEEP options.

Enabling CONFIG_EVL_DEBUG_WOLI turns on T_WOLI by default for every new EVL thread running in userland, which can be opted out of on a per-thread basis using EVL_THRIOC_CLEAR_MODE.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
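
For instance, opting a thread out of T_WOLI could look like the sketch below; EVL_THRIOC_CLEAR_MODE and T_WOLI are named by this log, while passing the bitmask by pointer is an assumption.

    #include <evl/syscall.h>    /* oob_ioctl() */
    #include <evl/thread.h>     /* EVL_THRIOC_CLEAR_MODE, T_WOLI */

    /* Clear T_WOLI for the thread referred to by thread_efd,
     * leaving any T_WOSS setting untouched. */
    static int drop_woli(int thread_efd)
    {
        unsigned int mask = T_WOLI;

        return oob_ioctl(thread_efd, EVL_THRIOC_CLEAR_MODE, &mask);
    }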
-
Philippe Gerum authored
We want the policy to appear clearly in the name.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The new hierarchy of priority scales is as follows:

  EVL_CORE_MIN_PRIO == EVL_WEAK_MIN_PRIO
  ...
  EVL_FIFO_MIN_PRIO == EVL_QUOTA_MIN_PRIO == EVL_TP_MIN_PRIO (== 1)
  ...
  EVL_FIFO_MAX_PRIO == EVL_QUOTA_MAX_PRIO == EVL_TP_MAX_PRIO == EVL_WEAK_MAX_PRIO (< MAX_USER_RT_PRIO)
  ...
  EVL_CORE_MAX_PRIO (> MAX_RT_PRIO)

We reserve a couple of priority levels above the highest inband kthread priority (MAX_RT_PRIO..MAX_RT_PRIO+1), which are guaranteed to be higher than the highest inband user task priority (MAX_USER_RT_PRIO-1) we use for SCHED_FIFO. Those extra levels can be used for EVL kthreads which must top the priority of any userland thread.

SCHED_EVL was dropped in the process, since userland is now constrained to EVL_FIFO_MAX_PRIO by construction.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Add missing policy accessors and init bits.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
When arming T_INBAND for the stage switching thread, we have to remove it from the runqueue in the same move if T_READY is present in its state flags. Failing to do so creates a race with another CPU readying that thread by calling evl_release_thread(), which leads to an inconsistent scheduler state with both T_INBAND and T_READY set for the thread.

When this happens, evl_switch_inband() may pick the switching thread from the runqueue for out-of-band scheduling in __evl_schedule(), although it is formally blocked by T_INBAND, instead of leaving it to the inband scheduler to complete the transition to inband context. As a result, dovetail_resume_inband() eventually runs spuriously from the out-of-band stage (caught by CONFIG_DEBUG_DOVETAIL), which leads to a galactic mess.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We create devices on the fly as part of the procedure for instantiating new EVL elements, expecting them to appear dynamically in the /dev filesystem hierarchy.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Per-factory device types are introduced to control the ownership of element devices. Those are created under the /dev/evl hierarchy upon receipt of the EVL_IOC_CLONE request by the clone device of their parent factory. With this change, the ownership of a clone device is inherited by all the element devices it instantiates (threads, xbufs, monitors and so on). This is useful for enabling EVL services for non-privileged users, which should only require setting the ownership and permissions of the control and clone devices appropriately.

Caveat: for inherited non-default ownership to stick in the presence of udev/mdev, make sure to define a rule which prevents the default root.root ownership from being applied to these devices.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Requires a couple of ABI changes.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Callers of the out-of-band polling interface are subject to spurious wakeups by design, which means that they have to handle the case where a signaled condition goes stale before execution resumes. On the other hand, we certainly don't want to miss a wakeup in case the condition is met again in the meantime (i.e. a transient true->false->true event), therefore making provisions for clearing such an event is asking for trouble. Since the caller has to check for spurious wakeups anyway, we should wake it up as soon as a false->true transition is observed. Therefore, evl_clear_poll_events() is deemed useless and even error-prone: drop it.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
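
On the caller side, this boils down to the usual re-check loop; a sketch where evl_poll() and struct evl_poll_event follow the libevl conventions but are assumptions here, and condition_is_met() is a hypothetical predicate.

    #include <stdbool.h>
    #include <evl/poll.h>

    bool condition_is_met(void);    /* hypothetical predicate */

    /* Tolerate spurious wakeups: re-check the condition after each
     * return from the poll call, and wait again if it went stale. */
    static int wait_for_condition(int pollfd)
    {
        struct evl_poll_event ev;
        int ret;

        do {
            ret = evl_poll(pollfd, &ev, 1);
            if (ret < 0)
                return ret;
        } while (!condition_is_met());

        return 0;
    }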
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Make sure the per-event gate_offset and gate pointer are reset properly when the event is removed from the per-gate event tracking queue. This happens on two occasions:

- when waiters are unblocked on gate unlock because the event was signaled,
- upon return of an aborted wait operation (i.e. T_BREAK|T_TIMEO|T_RMID).

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This currently concerns SYS_NICE, IPC_LOCK and SYS_RAWIO. Capabilities which have been granted this way are dropped when the thread detaches from the core.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We do not plan to support switching CPUs from out-of-band context anymore, so T_MOVED is basically pointless now.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
The inband kernel does not attempt to restart an interrupted syscall upon signal delivery, so we should not either. Since signal delivery requires switching inband, clear T_SYSRST in evl_switch_inband() to prevent the restart. This is a port of a Cobalt fix for the same issue by Jan Kiszka <jan.kiszka@siemens.com>.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
This protocol operates the monitor value as a set of boolean event flags forming a 32-bit-wide event group; all bits of a group are initially cleared. An event flag group is a lightweight notification mechanism: the application can send bitmasks to raise individual bits in the group (i.e. group_value |= bits), or wait for the group to have at least one bit set in order to satisfy the request. In the latter case, the group value is read then cleared atomically, and the collected bits are returned to the thread heading the wait queue.
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
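
A usage sketch; the evl_create_flags()/evl_post_flags()/evl_wait_flags() names and argument lists follow the libevl conventions but are assumptions as far as this log is concerned.

    #include <evl/clock.h>
    #include <evl/flags.h>

    static struct evl_flags group;

    /* All bits start cleared; the argument list is an assumption. */
    static int setup(void)
    {
        return evl_create_flags(&group, EVL_CLOCK_MONOTONIC, 0,
                                0, "demo-group");
    }

    /* Sender: raise bits, i.e. group_value |= 0x3. */
    static int notify(void)
    {
        return evl_post_flags(&group, 0x3);
    }

    /* Receiver: block until at least one bit is set, then collect
     * and clear the whole value atomically. */
    static int collect(int *bits_r)
    {
        return evl_wait_flags(&group, bits_r);
    }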
-