1. 22 Sep, 2016 1 commit
  2. 20 Aug, 2016 1 commit
    • sctp: linearize early if it's not GSO · 4c2f2454
      Marcelo Ricardo Leitner authored
      Otherwise, when CRC computation is still needed, it is far more
      expensive on a non-linear buffer than on a linear one, to the point
      that it affects performance.

      It's so expensive that a netperf test gives the perf output below:
      
      Overhead  Command         Shared Object       Symbol
        18,62%  netserver       [kernel.vmlinux]    [k] crc32_generic_shift
         2,57%  netserver       [kernel.vmlinux]    [k] __pskb_pull_tail
         1,94%  netserver       [kernel.vmlinux]    [k] fib_table_lookup
         1,90%  netserver       [kernel.vmlinux]    [k] copy_user_enhanced_fast_string
         1,66%  swapper         [kernel.vmlinux]    [k] intel_idle
         1,63%  netserver       [kernel.vmlinux]    [k] _raw_spin_lock
         1,59%  netserver       [sctp]              [k] sctp_packet_transmit
         1,55%  netserver       [kernel.vmlinux]    [k] memcpy_erms
         1,42%  netserver       [sctp]              [k] sctp_rcv
      
      # netperf -H 192.168.10.1 -l 10 -t SCTP_STREAM -cC -- -m 12000
      SCTP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.1 () port 0 AF_INET
      Recv   Send    Send                          Utilization       Service Demand
      Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
      Size   Size    Size     Time     Throughput  local    remote   local   remote
      bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
      
      212992 212992  12000    10.00      3016.42   2.88     3.78     1.874   2.462
      
      After patch:
      Overhead  Command         Shared Object      Symbol
         2,75%  netserver       [kernel.vmlinux]   [k] memcpy_erms
         2,63%  netserver       [kernel.vmlinux]   [k] copy_user_enhanced_fast_string
         2,39%  netserver       [kernel.vmlinux]   [k] fib_table_lookup
         2,04%  netserver       [kernel.vmlinux]   [k] __pskb_pull_tail
         1,91%  netserver       [kernel.vmlinux]   [k] _raw_spin_lock
         1,91%  netserver       [sctp]             [k] sctp_packet_transmit
         1,72%  netserver       [mlx4_en]          [k] mlx4_en_process_rx_cq
         1,68%  netserver       [sctp]             [k] sctp_rcv
      
      # netperf -H 192.168.10.1 -l 10 -t SCTP_STREAM -cC -- -m 12000
      SCTP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.10.1 () port 0 AF_INET
      Recv   Send    Send                          Utilization       Service Demand
      Socket Socket  Message  Elapsed              Send     Recv     Send    Recv
      Size   Size    Size     Time     Throughput  local    remote   local   remote
      bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
      
      212992 212992  12000    10.00      3681.77   3.83     3.46     2.045   1.849
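
      In code terms, the change boils down to a sketch like the one below
      on the rx path (illustrative, not the exact hunk; the gso_type check
      and the discard label are assumptions about the surrounding code):

        /* If the skb is not SCTP GSO, linearize it up front so that any
         * CRC computation still needed runs over a linear buffer instead
         * of walking fragments.
         */
        if (!(skb_shinfo(skb)->gso_type & SKB_GSO_SCTP) &&
            skb_linearize(skb))
                goto discard_it;  /* allocation failure: drop the packet */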
      
      Fixes: 3acb50c1 ("sctp: delay as much as possible skb_linearize")
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 25 Jul, 2016 1 commit
    • sctp: fix BH handling on socket backlog · eefc1b1d
      Marcelo Ricardo Leitner authored
      Now that the backlog processing is called with BH enabled, we have to
      disable BH before taking the socket lock via bh_lock_sock();
      otherwise it may deadlock:
      
      sctp_backlog_rcv()
                      bh_lock_sock(sk);
      
                      if (sock_owned_by_user(sk)) {
                              if (sk_add_backlog(sk, skb, sk->sk_rcvbuf))
                                      sctp_chunk_free(chunk);
                              else
                                      backloged = 1;
                      } else
                              sctp_inq_push(inqueue, chunk);
      
                      bh_unlock_sock(sk);
      
      Meanwhile, sctp_inq_push() was disabling/enabling BH itself, and
      enabling BH triggers any pending softirq, which may then try to
      re-lock the socket in sctp_rcv(), as the trace below shows; a sketch
      of the fix follows it.
      
      [  219.187215]  <IRQ>
      [  219.187217]  [<ffffffff817ca3e0>] _raw_spin_lock+0x20/0x30
      [  219.187223]  [<ffffffffa041888c>] sctp_rcv+0x48c/0xba0 [sctp]
      [  219.187225]  [<ffffffff816e7db2>] ? nf_iterate+0x62/0x80
      [  219.187226]  [<ffffffff816f1b14>] ip_local_deliver_finish+0x94/0x1e0
      [  219.187228]  [<ffffffff816f1e1f>] ip_local_deliver+0x6f/0xf0
      [  219.187229]  [<ffffffff816f1a80>] ? ip_rcv_finish+0x3b0/0x3b0
      [  219.187230]  [<ffffffff816f17a8>] ip_rcv_finish+0xd8/0x3b0
      [  219.187232]  [<ffffffff816f2122>] ip_rcv+0x282/0x3a0
      [  219.187233]  [<ffffffff810d8bb6>] ? update_curr+0x66/0x180
      [  219.187235]  [<ffffffff816abac4>] __netif_receive_skb_core+0x524/0xa90
      [  219.187236]  [<ffffffff810d8e00>] ? update_cfs_shares+0x30/0xf0
      [  219.187237]  [<ffffffff810d557c>] ? __enqueue_entity+0x6c/0x70
      [  219.187239]  [<ffffffff810dc454>] ? enqueue_entity+0x204/0xdf0
      [  219.187240]  [<ffffffff816ac048>] __netif_receive_skb+0x18/0x60
      [  219.187242]  [<ffffffff816ad1ce>] process_backlog+0x9e/0x140
      [  219.187243]  [<ffffffff816ac8ec>] net_rx_action+0x22c/0x370
      [  219.187245]  [<ffffffff817cd352>] __do_softirq+0x112/0x2e7
      [  219.187247]  [<ffffffff817cc3bc>] do_softirq_own_stack+0x1c/0x30
      [  219.187247]  <EOI>
      [  219.187248]  [<ffffffff810aa1c8>] do_softirq.part.14+0x38/0x40
      [  219.187249]  [<ffffffff810aa24d>] __local_bh_enable_ip+0x7d/0x80
      [  219.187254]  [<ffffffffa0408428>] sctp_inq_push+0x68/0x80 [sctp]
      [  219.187258]  [<ffffffffa04190f1>] sctp_backlog_rcv+0x151/0x1c0 [sctp]
      [  219.187260]  [<ffffffff81692b07>] __release_sock+0x87/0xf0
      [  219.187261]  [<ffffffff81692ba0>] release_sock+0x30/0xa0
      [  219.187265]  [<ffffffffa040e46d>] sctp_accept+0x17d/0x210 [sctp]
      [  219.187266]  [<ffffffff810e7510>] ? prepare_to_wait_event+0xf0/0xf0
      [  219.187268]  [<ffffffff8172d52c>] inet_accept+0x3c/0x130
      [  219.187269]  [<ffffffff8168d7a3>] SYSC_accept4+0x103/0x210
      [  219.187271]  [<ffffffff817ca2ba>] ? _raw_spin_unlock_bh+0x1a/0x20
      [  219.187272]  [<ffffffff81692bfc>] ? release_sock+0x8c/0xa0
      [  219.187276]  [<ffffffffa0413e22>] ? sctp_inet_listen+0x62/0x1b0 [sctp]
      [  219.187277]  [<ffffffff8168f2d0>] SyS_accept+0x10/0x20
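
      The fix, roughly, is to bracket the locked section in
      sctp_backlog_rcv() with local_bh_disable()/local_bh_enable(), so a
      softirq raised while the socket lock is held cannot run until after
      bh_unlock_sock() (a sketch of the idea, not the verbatim patch):

                      local_bh_disable();
                      bh_lock_sock(sk);

                      /* ... queue to backlog or sctp_inq_push() as above;
                       * the BH re-enable inside sctp_inq_push() no longer
                       * runs softirqs here, as the disable count is still
                       * elevated ...
                       */

                      bh_unlock_sock(sk);
                      local_bh_enable();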
      
      Fixes: 860fbbc3 ("sctp: prepare for socket backlog behavior change")
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 14 Jul, 2016 3 commits
    • sctp: do not clear chunk->ecn_ce_done flag · d9cef425
      Marcelo Ricardo Leitner authored
      We should not clear that flag when switching to a new skb from a GSO
      skb, because that would cause ECN processing to happen multiple times
      per GSO skb, which is not wanted. Instead, let it be processed once
      per chunk, that is, once per available IP header.
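
      A hedged sketch of where the flag gates processing (field and helper
      names follow the surrounding SCTP code, but treat their exact
      placement as an assumption):

        /* Run ECN CE detection at most once per chunk; do NOT reset
         * ecn_ce_done when stepping to the next skb of a GSO frame.
         * (af is the address-family ops for this packet.)
         */
        if (!chunk->ecn_ce_done) {
                chunk->ecn_ce_done = 1;
                if (af->is_ce(sctp_gso_headskb(chunk->skb))) {
                        /* note congestion experienced on this chunk */
                }
        }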
      
      Fixes: 90017acc ("sctp: Add GSO support")
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: avoid identifying address family many times for a chunk · e7487c86
      Marcelo Ricardo Leitner authored
      Identifying the address family operations during the rx path is not
      expensive, but it's ugly to have it done multiple times, especially
      when we already validated it during initial rx processing.

      This patch takes advantage of the now shared sctp_input_cb and makes
      the pointer to the operations readily available.
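
      Conceptually (a sketch; SCTP_INPUT_CB and sctp_get_af_specific are
      existing SCTP names, but the exact placement here is assumed):

        /* At initial rx validation: identify the af ops once and cache
         * the pointer in the skb's control block.
         */
        SCTP_INPUT_CB(skb)->af = sctp_get_af_specific(family);

        /* Later on the rx path: reuse the cached pointer instead of
         * re-identifying the address family.
         */
        af = SCTP_INPUT_CB(skb)->af;
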
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: allow GSO frags to access the chunk too · 1f45f78f
      Marcelo Ricardo Leitner authored
      SCTP will try to access the original IP headers in sctp_recvmsg in
      order to copy the addresses used. There are also other places that
      access IP or even SCTP headers similarly. But after 90017acc ("sctp:
      Add GSO support") they aren't always there, because they are only
      present in the header skb.

      SCTP handles the queueing of incoming data by cloning the incoming
      skb and limiting it to only the relevant payload. This clone has its
      cb updated to something different and is then queued on the socket rx
      queue. Thus we need to fix this in two places.
      
      For the rx path, not yet related to the socket queue, this patch
      copies a partial sctp_input_cb to such GSO frags. This restores the
      ability to access the headers for this part of the code.

      Regarding the socket rx queue, it removes the iif member from
      sctp_event and adds a chunk pointer to it.
      
      With these changes we're always able to reach the headers again.
      
      The biggest change here is that now the sctp_chunk struct and the
      original skb are only freed after the application has consumed the
      buffer. Note, however, that the original payload was already handled
      this way due to the skb cloning.
      
      For iif, SCTP's IPv4 code doesn't use it, so no change is necessary.
      IPv6 can now fetch it directly from the original's IPv6 CB, as the
      original skb is still accessible.
      
      In the future we can probably simplify the sctp_v*_skb_iif() code, as
      sctp_v4_skb_iif() was called but its return value was not used, and
      now it's not even called; such cleanup is out of scope for this
      change, though.
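
      The helper idea, as a hedged sketch (head_skb and the cb chunk
      pointer are the members this patch introduces; exact code may
      differ):

        /* Resolve the skb that actually carries the protocol headers:
         * for a GSO frag that is the saved head skb, otherwise the skb
         * itself.
         */
        static inline struct sk_buff *sctp_gso_headskb(struct sk_buff *skb)
        {
                struct sctp_chunk *chunk = SCTP_INPUT_CB(skb)->chunk;

                return chunk->head_skb ? chunk->head_skb : skb;
        }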
      
      Fixes: 90017acc ("sctp: Add GSO support")
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 03 Jun, 2016 2 commits
    • sctp: Add GSO support · 90017acc
      Marcelo Ricardo Leitner authored
      SCTP has the peculiarity that its packets cannot simply be segmented
      to (P)MTU: its chunks must be contained in IP segments, padding
      respected. So we can't just generate a big skb, set gso_size to the
      fragmentation point and deliver it to the IP layer.
      
      This patch takes a different approach. SCTP will now build an skb as
      it would look if it had been received using GRO. That is, there will
      be a cover skb with the protocol headers and child skbs containing
      the actual segments, already segmented in a way that respects the
      SCTP RFCs.

      With that, we can tell skb_segment() to split based only on
      frag_list, trusting that the sizes already comply.
      
      This way SCTP can benefit from GSO and instead of passing several
      packets through the stack, it can pass a single large packet.
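
      The transmit-side setup then looks roughly like this sketch (names
      such as GSO_BY_FRAGS and SKB_GSO_SCTP are from this series; the
      variable names and exact placement are illustrative assumptions):

        /* The cover skb carries the protocol headers; each pre-segmented
         * SCTP packet is chained on frag_list.  Setting gso_size to the
         * special GSO_BY_FRAGS value tells skb_segment() to split on
         * frag_list boundaries instead of a fixed segment size.
         */
        skb_shinfo(head)->frag_list = segments;
        skb_shinfo(head)->gso_size = GSO_BY_FRAGS;
        skb_shinfo(head)->gso_segs = nsegs;
        skb_shinfo(head)->gso_type = SKB_GSO_SCTP;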
      
      v2:
      - Added support for receiving GSO frames, as requested by Dave Miller.
      - Clear skb->cb if packet is GSO (otherwise it's not used by SCTP)
      - Added heuristics similar to what we have in TCP for not generating
        single GSO packets that fill the cwnd.
      v3:
      - consider sctphdr size in skb_gso_transport_seglen()
      - rebased due to 5c7cdf33 ("gso: Remove arbitrary checks for
        unsupported GSO")
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Tested-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • sctp: delay as much as possible skb_linearize · 3acb50c1
      Marcelo Ricardo Leitner authored
      This patch is a preparation for the GSO one. In order to successfully
      handle GSO packets on the rx path we must not call skb_linearize,
      otherwise it defeats any gain GSO may have had.

      This patch thus delays the call to skb_linearize as much as possible,
      leaving it to the moment of sctp_inq_pop(). For that, the sanity
      checks performed earlier now know how to deal with fragments.
      
      One positive side-effect of this is that if the socket is backlogged,
      it will have the chance of doing the linearization during backlog
      processing instead of during softirq.
      
      With this move, it's evident that a check for non-linearity in
      sctp_inq_pop was ineffective and is now removed. Note that a similar
      check is performed a bit below this one.
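
      The delayed call then sits in sctp_inq_pop(), roughly like this
      sketch (the error handling shown is an assumption):

        /* Linearize as late as possible: here the chunk is being popped
         * for actual processing, possibly from backlog context rather
         * than softirq.
         */
        if (skb_is_nonlinear(chunk->skb) && skb_linearize(chunk->skb)) {
                sctp_chunk_free(chunk);  /* allocation failed: drop it */
                return NULL;
        }
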
      Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
      Tested-by: Xin Long <lucien.xin@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 02 May, 2016 1 commit
  7. 15 Apr, 2016 1 commit
  8. 14 Oct, 2014 1 commit
    • net: sctp: fix remote memory pressure from excessive queueing · 26b87c78
      Daniel Borkmann authored
      This scenario is not limited to ASCONF, just taken as one
      example triggering the issue. When receiving ASCONF probes
      in the form of ...
      
        -------------- INIT[ASCONF; ASCONF_ACK] ------------->
        <----------- INIT-ACK[ASCONF; ASCONF_ACK] ------------
        -------------------- COOKIE-ECHO -------------------->
        <-------------------- COOKIE-ACK ---------------------
        ---- ASCONF_a; [ASCONF_b; ...; ASCONF_n;] JUNK ------>
        [...]
        ---- ASCONF_m; [ASCONF_o; ...; ASCONF_z;] JUNK ------>
      
      ... where ASCONF_a, ASCONF_b, ..., ASCONF_z are well-formed ASCONFs
      with increasing serial numbers, we process such ASCONF chunk(s)
      marked with !end_of_packet and !singleton, since we have not yet
      reached the end of the SCTP packet. SCTP only does verification on a
      chunk-by-chunk basis, as an SCTP packet is nothing more than a
      container for a stream of chunks which it eats up one by one.
      
      We could run into the case that we receive a packet with a malformed
      tail, marked above as trailing JUNK. All previous chunks are
      well-formed, so the stack will eat up all of them up to this point.
      If JUNK does not fit into a chunk header and there are no more chunks
      in the input queue, or if JUNK contains a garbage chunk header whose
      encoded chunk length would exceed the skb tail, or if we came here
      from an entirely different scenario and the chunk has the pdiscard=1
      mark (without having had a flush point), then we will excessively
      queue up the association's output queue (a correct final chunk may
      then turn it into a response flood when flushing the queue ;)): I ran
      a simple script with incremental ASCONF serial numbers and could see
      the server side consuming an excessive amount of RAM [before/after:
      up to 2GB and more].
      
      The issue at heart is that the chunk train basically ends with
      !end_of_packet and !singleton markers, and since commit 2e3216cd
      ("sctp: Follow security requirement of responding with 1 packet")
      this prevents an output queue flush point in sctp_do_sm() ->
      sctp_cmd_interpreter() on the input chunk (chunk = event_arg), even
      though local_cork is set; its precedence has changed since then. In
      the normal case, the last chunk with end_of_packet=1 would trigger
      the queue flush to accommodate possible outgoing bundling.
      
      In the input queue, sctp_inq_pop() seems to do the right thing in
      terms of discarding invalid chunks. So, the above JUNK will not enter
      the state machine and will instead be released, exiting the
      sctp_assoc_bh_rcv() chunk processing loop. It's simply the flush
      point that is missing at loop exit. Adding a try-flush approach on
      the output queue might not work, as the underlying infrastructure
      might be long gone at this point due to the side-effect interpreter
      run.
      
      One possibility, albeit a bit of a kludge, is to defer freeing of
      invalid chunks into the state machine, in order to possibly trigger
      packet discards and thus, indirectly, a queue flush on error. It
      would surely be better to discard chunks in the current, perhaps
      better controlled environment, but going back and forth, that is
      simply not possible architecturally. I tried various trailing JUNK
      attack cases and it looks good now.
      
      Joint work with Vlad Yasevich.
      
      Fixes: 2e3216cd ("sctp: Follow security requirement of responding with 1 packet")
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: Vlad Yasevich <vyasevich@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 06 Dec, 2013 1 commit
  10. 09 Aug, 2013 1 commit
  11. 25 Jul, 2013 1 commit
  12. 02 Jul, 2013 1 commit
    • net: sctp: rework debugging framework to use pr_debug and friends · bb33381d
      Daniel Borkmann authored
      We should get rid of all our own SCTP debug printk macros and instead
      use the ones that the kernel offers anyway. This makes the code more
      readable and conformant with the rest of the kernel code, and offers
      all the features of dynamic debugging that pr_debug() et al. have,
      such as turning on/off only portions of the debug messages at runtime
      through debugfs. The runtime cost of having CONFIG_DYNAMIC_DEBUG
      enabled, but none of the debug statements printing, is negligible
      [1]. If kernel debugging is completely turned off, then these
      statements will also compile into "empty" functions.
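
      The conversion itself is mechanical; for illustration (the
      SCTP_DEBUG_PRINTK name is the old macro being removed; the exact
      format string is an assumption):

        /* before */
        SCTP_DEBUG_PRINTK("%s: asoc:%p\n", __func__, asoc);

        /* after: dynamic debug, controllable at runtime via debugfs */
        pr_debug("%s: asoc:%p\n", __func__, asoc);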
      
      While we're at it, we also need to change the Kconfig option as it
      /now/ only refers to the ifdef'ed code portions in outqueue.c that
      enable further debugging/tracing of SCTP transaction fields. Also,
      since the SCTP_ASSERT code was enabled with this Kconfig option and
      has now been removed, we transform those code parts into WARNs or,
      where appropriate, BUG_ONs, so that such bugs can be detected more
      easily, as probably not many people have SCTP debugging permanently
      turned on.
      
      To turn on all SCTP debugging, the following steps are needed:
      
       # mount -t debugfs none /sys/kernel/debug
       # echo -n 'module sctp +p' > /sys/kernel/debug/dynamic_debug/control
      
      This can be done more fine-grained on a per file, per line basis and others
      as described in [2].
      
       [1] https://www.kernel.org/doc/ols/2009/ols2009-pages-39-46.pdf
       [2] Documentation/dynamic-debug-howto.txt
      Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 17 Apr, 2013 1 commit
  14. 03 Dec, 2012 1 commit
    • sctp: Add support to per-association statistics via a new SCTP_GET_ASSOC_STATS call · 196d6759
      Michele Baldessari authored
      The current SCTP stack lacks a mechanism for per-association
      statistics. This is an implementation modeled after OpenSolaris'
      SCTP_GET_ASSOC_STATS; a usage sketch follows the changelog below.

      The userspace part will follow in lksctp if/when there is a general
      ACK on this.
      V4:
      - Move ipackets++ before q->immediate.func() for consistency reasons
      - Move sctp_max_rto() to the end of sctp_transport_update_rto() to
        avoid returning bogus RTO values
      - return asoc->rto_min when the max_obs_rto value has not changed
      
      V3:
      - Increase ictrlchunks in sctp_assoc_bh_rcv() as well
      - Move ipackets++ to sctp_inq_push()
      - return 0 when no rto updates took place since the last call
      
      V2:
      - Implement partial retrieval of stat struct to cope for future expansion
      - Kill the rtxpackets counter as it cannot be precise anyway
      - Rename outseqtsns to outofseqtsns to make it clearer that these are out
        of sequence unexpected TSNs
      - Move asoc->ipackets++ under a lock to avoid potential miscounts
      - Fold asoc->opackets++ into the already existing asoc check
      - Kill unneeded (q->asoc) test when increasing rtxchunks
      - Do not count octrlchunks if sending failed (SCTP_XMIT_OK != 0)
      - Don't count SHUTDOWNs as SACKs
      - Move SCTP_GET_ASSOC_STATS to the private space API
      - Adjust the len check in sctp_getsockopt_assoc_stats() to allow for
        future struct growth
      - Move association statistics in their own struct
      - Update idupchunks when we send a SACK with dup TSNs
      - return min_rto in max_rto when RTO has not changed. Also return the
        transport when max_rto last changed.
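
      For illustration, a hedged userspace sketch of retrieving the stats
      (struct and field names follow the changelog above, but treat the
      exact ABI as an assumption):

        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <netinet/sctp.h>

        static void dump_assoc_stats(int fd, sctp_assoc_t id)
        {
                struct sctp_assoc_stats stats;
                socklen_t len = sizeof(stats);

                memset(&stats, 0, sizeof(stats));
                stats.sas_assoc_id = id;  /* which association to query */

                if (getsockopt(fd, IPPROTO_SCTP, SCTP_GET_ASSOC_STATS,
                               &stats, &len) == 0)
                        printf("ipackets=%llu opackets=%llu maxrto=%llu\n",
                               (unsigned long long)stats.sas_ipackets,
                               (unsigned long long)stats.sas_opackets,
                               (unsigned long long)stats.sas_maxrto);
        }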
      
      Signed-off-by: Michele Baldessari <michele@acksyn.org>
      Acked-by: Vlad Yasevich <vyasevich@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 26 Aug, 2010 1 commit
  16. 30 Mar, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare
      for this change by updating users of gfp and slab facilities to
      include those headers directly instead of assuming availability (see
      the example after the list below).  As this conversion needs to touch
      a large number of source files, the following script is used as the
      basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py

      The script does the following:

      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h, and if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the
        end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
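
      For instance, a file that only uses kmalloc()/kfree() would gain a
      direct include (an illustrative sketch, not from the patch):

        #include <linux/slab.h>  /* was implicitly pulled in via percpu.h */

        static int example_alloc(void)
        {
                void *buf = kmalloc(64, GFP_KERNEL);

                if (!buf)
                        return -ENOMEM;
                kfree(buf);
                return 0;
        }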
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as
         my distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to a
         missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied
         as a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from the tests on
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  17. 05 Feb, 2008 1 commit
  18. 07 Nov, 2007 1 commit
    • SCTP: Fix a potential race between timers and receive path. · 027f6e1a
      Vlad Yasevich authored
      There is a possible race condition where the timer code will free the
      association and the next packet in the queue will also attempt to
      free the same association.

      For example, we receive an ABORT at about the same time as the
      retransmission timer fires.  If the timer wins the race, it will free
      the association.  Once it releases the lock, the queue processing
      will receive the ABORT and will try to free the association again.
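
      Illustrative only (not necessarily the actual patch): the generic
      cure for this class of race is to hold a reference across the locked
      section so a concurrent timer cannot free the association underneath
      us:

        sctp_association_hold(asoc);    /* pin the association */
        bh_lock_sock(sk);
        /* ... process the queued ABORT ... */
        bh_unlock_sock(sk);
        sctp_association_put(asoc);     /* may now free it, safely */
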
      Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
  19. 10 Oct, 2007 1 commit
  20. 26 Sep, 2007 1 commit
  21. 26 Apr, 2007 1 commit
    • [SK_BUFF]: Convert skb->tail to sk_buff_data_t · 27a884dc
      Arnaldo Carvalho de Melo authored
      So that it is also an offset from skb->head, this reduces its size
      from 8 to 4 bytes on 64bit architectures, allowing us to combine it
      with the 4-byte hole left by the layer headers conversion, reducing
      struct sk_buff size to 256 bytes, i.e. 4 64-byte cachelines, and
      since the sk_buff slab cache is SLAB_HWCACHE_ALIGN... :-)
      
      Many calculations that previously required that skb->{transport,network,
      mac}_header be first converted to a pointer now can be done directly, being
      meaningful as offsets or pointers.
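
      The shape of the conversion, as a sketch (these names follow the
      kernel's sk_buff offset scheme, but verify against the tree):

        #ifdef NET_SKBUFF_DATA_USES_OFFSET
        typedef unsigned int sk_buff_data_t;

        static inline unsigned char *skb_tail_pointer(const struct sk_buff *skb)
        {
                return skb->head + skb->tail;  /* tail is an offset */
        }
        #else
        typedef unsigned char *sk_buff_data_t;

        static inline unsigned char *skb_tail_pointer(const struct sk_buff *skb)
        {
                return skb->tail;              /* tail is a pointer */
        }
        #endif
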
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 11 Feb, 2007 1 commit
  23. 22 Nov, 2006 1 commit
  24. 22 Sep, 2006 1 commit
  25. 06 May, 2006 1 commit
  26. 17 Jan, 2006 1 commit
  27. 09 Jul, 2005 1 commit
  28. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!