- Sep 29, 2024
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
- Sep 28, 2024
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Some drivers (e.g. mellanox/mlx5) do refer to the PP_ALLOC_CACHE_* definitions, so we cannot change them the way we would like to in order to support dynamic allocation in oob mode. Revert those changes, introducing dedicated helpers as a replacement. Signed-off-by: Philippe Gerum <rpm@xenomai.org>
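A rough illustration of what such helpers could look like; this is a sketch only, pp_alloc_cache_size()/pp_alloc_cache_refill() and the oob_cache_size field are made-up names, not the actual interface:

    /* Sketch: hide the PP_ALLOC_CACHE_* constants behind per-pool
     * accessors, so an oob-enabled pool may switch to a dynamically
     * sized cache without touching driver-visible definitions. */
    #include <net/page_pool/types.h>	/* <net/page_pool.h> on older kernels */

    static inline unsigned int pp_alloc_cache_size(const struct page_pool *pool)
    {
            return pool->oob_cache_size ?: PP_ALLOC_CACHE_SIZE;	/* assumed field */
    }

    static inline unsigned int pp_alloc_cache_refill(const struct page_pool *pool)
    {
            return min(pp_alloc_cache_size(pool),
                       (unsigned int)PP_ALLOC_CACHE_REFILL);
    }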
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We need to serialize against softirqs when running the NAPI buffer management code. Signed-off-by: Philippe Gerum <rpm@xenomai.org>
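For context, serializing against softirqs in kernel code usually boils down to bracketing the section with local_bh_disable()/local_bh_enable(); a generic sketch of that pattern, not the actual patch (the helper name is made up):

    #include <linux/bottom_half.h>

    static void oob_manage_napi_buffers(void)
    {
            /*
             * NAPI polling runs from softirq context; keeping bottom
             * halves disabled on this CPU prevents it from preempting
             * the buffer management code while the pools are updated.
             */
            local_bh_disable();
            /* ... update the NAPI buffer pools here ... */
            local_bh_enable();
    }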
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
We need irq_pipeline_can_idle() to leave the hard irqs off and the inband stage stalled on return, so that no IRQ can sneak in once the log has been flushed, and we don't break the logic of default_idle_call() which may follow. Signed-off-by: Philippe Gerum <rpm@xenomai.org>
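The expected ordering in the idle path would then look roughly like this (an illustrative sketch, not the actual kernel code):

    static void idle_enter(void)	/* illustrative only */
    {
            if (irq_pipeline_can_idle()) {
                    /*
                     * Hard irqs are still off and the inband stage is
                     * still stalled here, so nothing can slip in between
                     * the interrupt log flush and the idle routine below.
                     */
                    default_idle_call();
            }
    }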
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Philippe Gerum authored
Signed-off-by: Philippe Gerum <rpm@xenomai.org>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
oob_netdev_state is directly accessible from the device descriptor, without going through the obsolete oob_context_state struct anymore. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
We can use eBPF filtering in order to leave the decision about oob vs inband packet delivery to userland. Provide an ioctl() interface to do so on a per-device basis. An eBPF filter may be attached to any netdevice which is enabled for oob diversion. If present, this program is in charge of determining which stack is eventually going to handle every ingress packet passed to netif_oob_deliver() by the inband stack. The eBPF program belongs to the "socket" class, since we already have a skb at this point. In the absence of a filter, the default VLAN-based rules apply, i.e. an ingress packet should be handled by the oob stack if it is VLAN-tagged and its VLAN id matches one of the ids defined as oob traffic discriminators.
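As an illustration, such a filter could mimic the default VLAN rule; the verdict convention (non-zero meaning "divert to the oob stack") and the VLAN id below are assumptions, not part of a documented interface:

    // SPDX-License-Identifier: GPL-2.0
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define OOB_VLAN_ID	42	/* hypothetical oob traffic discriminator */

    SEC("socket")
    int oob_divert(struct __sk_buff *skb)
    {
            /* Divert VLAN-tagged traffic matching the oob VLAN id,
             * keep everything else on the inband stack. */
            if (skb->vlan_present && (skb->vlan_tci & 0x0fff) == OOB_VLAN_ID)
                    return 1;

            return 0;
    }

    char _license[] SEC("license") = "GPL";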
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
So far, turning on oob diversion was available to VLAN devices only. The rationale behind this restriction was that the VLAN id could be used as a discriminator for oob traffic, feeding the EVL netstack instead of the inband one. In preparation for supporting a per-device eBPF program in the netstack which would serve as a user-defined traffic filter, we can lift most of this general restriction. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
sock_oob_release() is called when the file struct associated with the socket is closed, which, unlike sock_oob_destroy(), may block. Move the calls to evl_release_file() and evl_pass_crossing() to the release hook, which fixes an issue with sock_oob_destroy() attempting to block while passing the crossing from an RCU callback context. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Send/receive ops may want to update the I/O vector they receive in place, typically to perform incremental loads/stores from or to it, updating iovlen on the fly. Since we allocate these vectors, pass the corresponding handlers a mutable reference to them. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
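For instance, the handler prototypes would take a plain pointer to the vector instead of a const one; purely illustrative, these are not the actual EVL proto ops:

    #include <linux/types.h>
    #include <linux/uio.h>		/* struct iovec */

    struct evl_socket;		/* opaque for the purpose of this sketch */

    /* Since the core allocates the I/O vector, handlers receive a
     * mutable pointer and may consume entries / update the length
     * in place. */
    struct oob_msg_ops {
            ssize_t (*oob_send)(struct evl_socket *esk,
                                struct iovec *iov, size_t *iovlen);
            ssize_t (*oob_receive)(struct evl_socket *esk,
                                   struct iovec *iov, size_t *iovlen);
    };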
-
user_iov can be abbreviated to uio. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
We are going to have several more net-related attributes, and polluting the 'control' namespace with them would be wrong. Create a 'net' factory defining a single device, with which the net-related attributes are associated. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
We may need to charge memory which is not carried by a skb. Provide an inner interface for this purpose, basing the original skb-based call on it. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
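The layering described here is the usual wrapper pattern; a sketch with made-up names (neither the functions nor the fields below are the actual interface):

    #include <linux/skbuff.h>
    #include <linux/atomic.h>

    struct oob_sock_mem {		/* made-up accounting state */
            atomic_t rmem_count;
            int rmem_max;
    };

    /* Inner interface: charge an arbitrary amount of receive memory. */
    static int oob_charge_rmem(struct oob_sock_mem *m, unsigned int size)
    {
            if (atomic_add_return(size, &m->rmem_count) > m->rmem_max) {
                    atomic_sub(size, &m->rmem_count);
                    return -ENOBUFS;
            }

            return 0;
    }

    /* Original skb-based call, now a thin wrapper on the inner one. */
    static int oob_charge_skb_rmem(struct oob_sock_mem *m, struct sk_buff *skb)
    {
            return oob_charge_rmem(m, skb->truesize);
    }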
-
Some use cases may require the stax to be acquired from a non-preemptible in-band context, for which calling schedule() is a no-go. Add support for spinning (busy-waiting) on the stax when attempting to lock from in-band, to enable stage exclusion for such cases. Passing EVL_STAX_SPIN_INBAND to the new 'flags' parameter of evl_init_stax() turns this feature on. The feature is not provided to oob waiters. A spinning wait from in-band is safe provided oob does not sleep indefinitely while holding a stax contended by inband. However, preemption of a spinning in-band waiter by any oob thread waking up from a sleep state on the same CPU is guaranteed, which keeps the scheme sane. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
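Usage-wise, this boils down to passing the new flag at init time; a minimal sketch, where only evl_init_stax() and EVL_STAX_SPIN_INBAND come from the text above and the remaining names (including the lock/unlock calls and header location) are assumptions:

    #include <evl/stax.h>	/* assumed header location */

    static struct evl_stax buf_stax;

    static void init_buffer_guard(void)
    {
            /* In-band lockers of this stax will spin instead of sleeping. */
            evl_init_stax(&buf_stax, EVL_STAX_SPIN_INBAND);
    }

    static void touch_buffer_from_inband(void)
    {
            evl_lock_stax(&buf_stax);	/* busy-waits, never calls schedule() */
            /* ... non-preemptible in-band work on the shared data ... */
            evl_unlock_stax(&buf_stax);
    }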
-
We have multiple upcoming uses of a fast cache in the oob netstack (e.g. IPv4 routes, ARP entries) which must be accessed locklessly for lookups from any execution stage, and may be updated under lock from the in-band stage exclusively. This patch introduces a generic cache supporting these capabilities, with RCU-based read protection from updates, and lock-based serialization for dealing with concurrent updates. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
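The access pattern this enables is the classic kernel RCU one: lockless rcu_dereference() lookups on the read side, rcu_assign_pointer()/rcu_replace_pointer() updates serialized by a lock on the in-band side; a generic sketch of the idea, not the EVL cache interface itself:

    #include <linux/rcupdate.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>

    struct route_entry {		/* illustrative cached datum */
            u32 daddr;
            u32 gateway;
            struct rcu_head rcu;
    };

    static struct route_entry __rcu *cached_route;
    static DEFINE_SPINLOCK(route_update_lock);	/* in-band updaters only */

    /* Lockless lookup. */
    static u32 lookup_gateway(u32 daddr)
    {
            struct route_entry *re;
            u32 gw = 0;

            rcu_read_lock();
            re = rcu_dereference(cached_route);
            if (re && re->daddr == daddr)
                    gw = re->gateway;
            rcu_read_unlock();

            return gw;
    }

    /* Update from the in-band stage, serialized against other updaters. */
    static int update_route(u32 daddr, u32 gateway)
    {
            struct route_entry *new, *old;

            new = kmalloc(sizeof(*new), GFP_KERNEL);
            if (!new)
                    return -ENOMEM;

            new->daddr = daddr;
            new->gateway = gateway;

            spin_lock(&route_update_lock);
            old = rcu_replace_pointer(cached_route, new,
                                      lockdep_is_held(&route_update_lock));
            spin_unlock(&route_update_lock);

            if (old)
                    kfree_rcu(old, rcu);

            return 0;
    }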
-
Returns true if the in-band worker was triggered as a result of calling evl_call_inband*(), false otherwise. This comes in handy when we need to manage references to the posted data manually. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
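A typical use, assuming the usual evl_work descriptor is what gets posted: only keep an extra reference on the data when the worker was actually kicked by this call (the surrounding names are illustrative):

    /* Illustrative only: hold a reference on the skb across the inband
     * handoff, but only if this call actually triggered the worker. */
    static void post_to_inband(struct sk_buff *skb, struct evl_work *work)
    {
            if (evl_call_inband(work))
                    skb_get(skb);	/* dropped by the inband worker later on */
    }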
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Start by moving the vectored I/O helpers to this library. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Plan for adding more protocol-specific data, such as ipv4 bits. Let's have them grouped into a union in the EVL socket descriptor. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
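Structurally, this is just grouping the per-protocol state under a union; a sketch with made-up member names:

    #include <linux/types.h>

    /* Sketch only: all member names are made up for illustration. */
    struct evl_socket_proto_state {
            union {
                    struct {
                            __be32 saddr;
                            __be32 daddr;
                            __be16 dport;
                    } ipv4;
                    struct {
                            int bound_ifindex;
                    } packet;
            } u;
    };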
-
Philippe Gerum authored
Contrary to what the comment suggests, there is no reason to call evl_schedule() prior to pulling more data from the RX queue. We are running in a plain task context with preemption enabled, so regular priority rules apply. Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-
Signed-off-by: Philippe Gerum <philippe.gerum@exail.com>
-