1. 20 May, 2016 1 commit
    • debugobjects: make fixup functions return bool instead of int · b1e4d9d8
      Du, Changbin authored

      I am going to introduce the debugobjects infrastructure to the USB
      subsystem, but before doing so, I found that the debugobjects code could
      be improved.  This patchset makes the fixup functions return bool instead
      of int: a fixup only needs to report success or failure, so boolean is
      the 'real' type.
      
      This patch (of 7):
      
      The object debugging infrastructure core provides fixup callbacks for
      the subsystems that use it.  These callbacks are invoked from the debug
      code whenever a problem in debug_object_init is detected, and the
      debugobjects core expects them to return 1 when the fixup was successful
      and 0 otherwise.  So the return type is boolean in nature.
      
      A bad thing is that debug_object_fixup() uses the return value in an
      arithmetic operation, which left me confused about what the real return
      type is.

      Reading over the whole code, I found some places that do use the return
      value incorrectly (see the next patch).  So why not use bool instead?
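
      As an illustration, a minimal sketch of a callback under the bool
      convention, loosely modelled on the workqueue fixup (a hedged
      simplification, not the verbatim patch):

        static bool work_fixup_init(void *addr, enum debug_obj_state state)
        {
            struct work_struct *work = addr;

            switch (state) {
            case ODEBUG_STATE_ACTIVE:
                cancel_work_sync(work);
                debug_object_init(work, &work_debug_descr);
                return true;    /* a fixup was performed */
            default:
                return false;   /* nothing to fix up */
            }
        }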
      Signed-off-by: Du, Changbin <changbin.du@intel.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Josh Triplett <josh@kernel.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 16 May, 2016 1 commit
  3. 12 May, 2016 1 commit
    • KEYS: Fix ASN.1 indefinite length object parsing · 23c8a812
      David Howells authored

      This fixes CVE-2016-0758.
      
      In the ASN.1 decoder, when the length field of an ASN.1 value is extracted,
      it isn't validated against the remaining amount of data before being added
      to the cursor.  With a sufficiently large size indicated, the check:
      
      	datalen - dp < 2
      
      may then fail due to integer overflow.
      
      Fix this by checking the length indicated against the amount of remaining
      data in both places where a definite length is determined.
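
      For illustration, the shape of the overflow-safe check (a hedged sketch
      with illustrative placement, not the verbatim asn1_ber_decoder() diff):

        if (unlikely(len > datalen - dp))   /* compare against what's left */
            goto data_overrun_error;
        dp += len;                          /* cursor can no longer overrun */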
      
      Whilst we're at it, make the following changes:
      
       (1) Check the maximum size of extended length does not exceed the capacity
           of the variable it's being stored in (len) rather than the type that
           variable is assumed to be (size_t).
      
       (2) Compare the EOC tag to the symbolic constant ASN1_EOC rather than the
           integer 0.
      
       (3) To reduce confusion, move the initialisation of len outside of:
      
      	for (len = 0; n > 0; n--) {
      
           since it doesn't have anything to do with the loop counter n.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
      Acked-by: David Woodhouse <David.Woodhouse@intel.com>
      Acked-by: Peter Jones <pjones@redhat.com>
  4. 09 May, 2016 1 commit
  5. 06 May, 2016 1 commit
  6. 03 May, 2016 1 commit
  7. 29 Apr, 2016 1 commit
  8. 24 Apr, 2016 1 commit
  9. 21 Apr, 2016 1 commit
  10. 15 Apr, 2016 1 commit
  11. 13 Apr, 2016 2 commits
  12. 08 Apr, 2016 1 commit
  13. 06 Apr, 2016 5 commits
    • lib/test_bpf: Add additional BPF_ADD tests · 9c94f6c8
      Naveen N. Rao authored

      Some of these tests proved useful with the powerpc eBPF JIT port due to
      sign-extended 16-bit immediate loads.  Though some of these aspects are
      covered by other tests, explicit tests make it easier to pinpoint the
      precise problem quickly.
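
      A hedged sketch of a test_bpf entry exercising a negative (hence
      sign-extended) immediate; the values are illustrative, not lifted from
      the patch:

        {
            "ALU_ADD_K: 2 + (-1) = 1 (illustrative)",
            .u.insns_int = {
                BPF_LD_IMM64(R0, 2),
                BPF_ALU32_IMM(BPF_ADD, R0, -1),
                BPF_EXIT_INSN(),
            },
            INTERNAL,
            { },
            { { 0, 1 } },   /* no input packet; expect R0 == 1 */
        },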
      
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • lib/test_bpf: Add test to check for result of 32-bit add that overflows · b64b50ea
      Naveen N. Rao authored

      Add BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values whose sum
      overflows 32 bits.
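
      A hedged sketch of the kind of case covered (illustrative values): in
      the 32-bit variant the result must wrap, not widen.

        {
            "ALU_ADD_K: 0x7fffffff + 1 wraps to 0x80000000 (illustrative)",
            .u.insns_int = {
                BPF_LD_IMM64(R0, 0x7fffffffLL),
                BPF_ALU32_IMM(BPF_ADD, R0, 1),
                BPF_EXIT_INSN(),
            },
            INTERNAL,
            { },
            { { 0, 0x80000000 } },
        },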
      
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • lib/test_bpf: Add tests for unsigned BPF_JGT · c7395d6b
      Naveen N. Rao authored

      Add tests for BPF_JGT, the unsigned jump-if-greater-than operation.
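
      The point of the unsigned comparison, as a hedged illustrative fragment:
      0xffffffff compares greater than 1, whereas a signed view would read it
      as -1.

        BPF_LD_IMM64(R1, 0xffffffffLL),
        BPF_JMP_IMM(BPF_JGT, R1, 1, 1),   /* taken: unsigned 0xffffffff > 1 */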
      
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • lib/test_bpf: Fix JMP_JSET tests · 9f134c34
      Naveen N. Rao authored

      The JMP_JSET tests incorrectly used BPF_JNE.  Fix them to use BPF_JSET.
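
      For context, JSET is a bit-test rather than a value comparison (hedged
      illustrative fragment):

        BPF_JMP_IMM(BPF_JSET, R0, 0x4, 1),   /* jump if (R0 & 0x4) != 0 */
        BPF_JMP_IMM(BPF_JNE,  R0, 0x4, 1),   /* jump if  R0 != 0x4      */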
      
      Cc: Alexei Starovoitov <ast@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • assoc_array: don't call compare_object() on a node · 8d4a2ec1
      Jerome Marchand authored

      Changes since V1: fixed the description and added KASan warning.
      
      In assoc_array_insert_into_terminal_node(), we call the compare_object()
      method on all non-empty slots, even when they're not leaves, passing a
      pointer to an unexpected structure to compare_object().  Currently this
      causes an out-of-bounds read access in keyring_compare_object(),
      detected by KASan (see below).  The issue is easily reproduced with the
      keyutils testsuite.
      Only call compare_object() when the slot is a leaf.
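
      A hedged sketch of the kind of guard described (illustrative; the actual
      patch works with the existing assoc_array pointer-tagging helpers):

        ptr = node->slots[i];
        if (!assoc_array_ptr_is_leaf(ptr))
            continue;   /* node/shortcut metadata, not an object */
        if (ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key))
            goto found; /* 'found' is an illustrative label */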
      
      KASan warning:
      ==================================================================
      BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838
      Read of size 8 by task keyctl/1655
      =============================================================================
      BUG kmalloc-192 (Not tainted): kasan: bad access detected
      -----------------------------------------------------------------------------
      
      Disabling lock debugging due to kernel taint
      INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647
      	___slab_alloc+0x563/0x5c0
      	__slab_alloc+0x51/0x90
      	kmem_cache_alloc_trace+0x263/0x300
      	assoc_array_insert+0xfd0/0x3a60
      	__key_link_begin+0xfc/0x270
      	key_create_or_update+0x459/0xaf0
      	SyS_add_key+0x1ba/0x350
      	entry_SYSCALL_64_fastpath+0x12/0x76
      INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080
      INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1
      
      Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00  ...`............
      Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff  ...`.......`....
      Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
      CPU: 0 PID: 1655 Comm: keyctl Tainted: G    B           4.5.0-rc4-kasan+ #291
      Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
       0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491
       ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969
       ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650
      Call Trace:
       [<ffffffff81b60491>] dump_stack+0x85/0xc4
       [<ffffffff815e2969>] print_trailer+0xf9/0x150
       [<ffffffff815e9454>] object_err+0x34/0x40
       [<ffffffff815ebe50>] kasan_report_error+0x230/0x550
       [<ffffffff819949be>] ? keyring_get_key_chunk+0x13e/0x210
       [<ffffffff815ec62d>] __asan_report_load_n_noabort+0x5d/0x70
       [<ffffffff81994cc3>] ? keyring_compare_object+0x213/0x240
       [<ffffffff81994cc3>] keyring_compare_object+0x213/0x240
       [<ffffffff81bc238c>] assoc_array_insert+0x86c/0x3a60
       [<ffffffff81bc1b20>] ? assoc_array_cancel_edit+0x70/0x70
       [<ffffffff8199797d>] ? __key_link_begin+0x20d/0x270
       [<ffffffff8199786c>] __key_link_begin+0xfc/0x270
       [<ffffffff81993389>] key_create_or_update+0x459/0xaf0
       [<ffffffff8128ce0d>] ? trace_hardirqs_on+0xd/0x10
       [<ffffffff81992f30>] ? key_type_lookup+0xc0/0xc0
       [<ffffffff8199e19d>] ? lookup_user_key+0x13d/0xcd0
       [<ffffffff81534763>] ? memdup_user+0x53/0x80
       [<ffffffff819983ea>] SyS_add_key+0x1ba/0x350
       [<ffffffff81998230>] ? key_get_type_from_user.constprop.6+0xa0/0xa0
       [<ffffffff828bcf4e>] ? retint_user+0x18/0x23
       [<ffffffff8128cc7e>] ? trace_hardirqs_on_caller+0x3fe/0x580
       [<ffffffff81004017>] ? trace_hardirqs_on_thunk+0x17/0x19
       [<ffffffff828bc432>] entry_SYSCALL_64_fastpath+0x12/0x76
      Memory state around the buggy address:
       ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
       ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc
      >ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
                                              ^
       ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
       ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
      ==================================================================
      Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: stable@vger.kernel.org
  14. 05 Apr, 2016 15 commits
    • lib/mpi: mpi_read_raw_from_sgl(): fix out-of-bounds buffer access · 0bb5c9ea
      Nicolai Stange authored

      Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's
      byte count gets artificially extended as follows:
      
        if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB))
          len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB);
      
      Within the following byte copying loop, this causes reads beyond that
      SGE's allocated buffer:
      
        BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650
                                           at addr ffff8801e168d4d8
        Read of size 1 by task systemd-udevd/721
        [...]
        Call Trace:
         [<ffffffff818c4d35>] dump_stack+0xbc/0x117
         [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
         [<ffffffff814af5d1>] ? print_section+0x61/0xb0
         [<ffffffff814b1109>] print_trailer+0x179/0x2c0
         [<ffffffff814bc524>] object_err+0x34/0x40
         [<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0
         [<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50
         [<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70
         [<ffffffff814c0ad1>] kasan_report+0x71/0xa0
         [<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650
         [<ffffffff814bf1a6>] __asan_load1+0x46/0x50
         [<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650
         [<ffffffff817f41b6>] rsa_verify+0x106/0x260
         [<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0
         [<ffffffff818edc79>] ? sg_init_table+0x29/0x50
         [<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0
         [<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0
         [<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0
         [<ffffffff81830cb0>] ? public_key_describe+0x80/0x80
         [<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150
         [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
         [<ffffffff814b83f0>] ? kfree+0x220/0x370
         [<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50
         [<ffffffff81830b5c>] verify_signature+0x7c/0xb0
         [<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0
         [<ffffffff813c391a>] system_verify_data+0xca/0x170
         [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
         [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
         [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
        [...]
      
      The exact purpose of the len extension isn't clear to me, but due to
      its form, I suspect that it's a leftover somehow accounting for leading
      zero bytes within the most significant output limb.
      
      Note however that without that len adjustment, the total number of bytes
      ever processed by the inner loop equals nbytes, and thus the last output
      limb gets written at that point.  The net effect of the len adjustment
      cited above is therefore just to keep the inner loop running for some
      more iterations, namely < BYTES_PER_MPI_LIMB of them, reading some extra
      bytes from beyond the last SGE's buffer and discarding them afterwards.
      
      Fix this issue by purging the extension of len beyond the last input SGE's
      buffer length.
      
      Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_raw_from_sgl(): sanitize meaning of indices · 85d541a3
      Nicolai Stange authored

      Within the byte reading loop in mpi_read_raw_from_sgl(), there are two
      housekeeping indices used, z and x.
      
      At all times, the index z represents the number of output bytes covered
      by the input SGEs for which processing has completed so far. This includes
      any leading zero bytes within the most significant limb.
      
      The index x changes its meaning after the first outer loop's iteration
      though: while processing the first input SGE, it represents
      
        "number of leading zero bytes in most significant output limb" +
         "current position within current SGE"
      
      For the remaining SGEs OTOH, x corresponds just to
      
        "current position within current SGE"
      
      After all, it is only the sum of z and x that has any meaning for the
      output buffer and thus, the
      
        "number of leading zero bytes in most significant output limb"
      
      part can be moved away from x into z from the beginning, opening up the
      opportunity for cleaner code.
      
      Before the outer loop iterating over the SGEs, don't initialize z with
      zero, but with the number of leading zero bytes in the most significant
      output limb.  For the inner loop iterating over a single SGE's bytes,
      get rid of the buf_shift offset to x's bounds and let x run from zero to
      sg->length - 1.
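
      A hedged sketch of the resulting convention (illustrative, with the limb
      assembly elided):

        z = nbytes % BYTES_PER_MPI_LIMB ?
            BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB : 0;

        for_each_sg(sgl, sg, ents, i) {
            const u8 *buff = sg_virt(sg);

            for (x = 0; x < sg->length; x++) {
                /* the output byte position is always z + x */
            }
            z += x;
        }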
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_raw_from_sgl(): fix nbits calculation · 64c09b0b
      Nicolai Stange authored

      The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as
      follows:
      
        nbits = nbytes * 8;
      
      Afterwards, the number of leading zero bits of the first byte get
      subtracted:
      
        nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros));
      
      However, count_leading_zeros() takes an unsigned long and thus,
      the u8 gets promoted to an unsigned long.
      
      Thus, the above doesn't subtract the number of leading zeros in the most
      significant nonzero input byte from nbits, but the number of leading
      zeros of the most significant nonzero input byte promoted to unsigned long,
      i.e. BITS_PER_LONG - 8 too many.
      
      Fix this by subtracting
      
        count_leading_zeros(...) - (BITS_PER_LONG - 8)
      
      from nbits only.
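
      Concretely, on a 64-bit machine with a most significant nonzero byte of
      0x01 (msb is an illustrative name for the byte read above):

        /* count_leading_zeros(0x01UL) == 63 when BITS_PER_LONG == 64,
         * but only 63 - (64 - 8) == 7 of those zeros belong to the byte. */
        nbits -= count_leading_zeros(msb) - (BITS_PER_LONG - 8);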
      
      Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_raw_from_sgl(): purge redundant clearing of nbits · 60e1b74c
      Nicolai Stange authored

      In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows:
      
        nbits = nbytes * 8;
      
      and redundantly cleared later on if nbytes == 0:
      
        if (nbytes > 0)
          ...
        else
          nbits = 0;
      
      Purge this redundant clearing for the sake of clarity.
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_raw_from_sgl(): don't include leading zero SGEs in nbytes · ab1e912e
      Nicolai Stange authored

      At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of
      the input scatterlist are counted:
      
        lzeros = 0;
        for_each_sg(sgl, sg, ents, i) {
          ...
          if (/* sg contains nonzero bytes */)
            break;
      
          /* sg contains nothing but zeros here */
          ents--;
          lzeros = 0;
        }
      
      Later on, the total number of trailing nonzero bytes is calculated by
      subtracting the number of leading zero bytes from the total number of input
      bytes:
      
        nbytes -= lzeros;
      
      However, since lzeros gets reset to zero for each completely zero leading
      sg in the loop above, it doesn't include those.
      
      Besides wasting resources by allocating a too large output buffer,
      this mistake propagates into the calculation of x, the number of
      leading zeros within the most significant output limb:
      
        x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB;
      
      What's more, the low order bytes of the output, equal in number to the
      extra bytes in nbytes, are left uninitialized.
      
      Fix this by adjusting nbytes for each completely zero leading scatterlist
      entry.
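
      A hedged sketch of the adjustment (not the verbatim patch): when a
      leading SGE turns out to be entirely zero, discount its bytes from
      nbytes before resetting the per-SGE zero counter.

        /* sg contains nothing but zeros here */
        ents--;
        nbytes -= lzeros;   /* the zeros of this fully-zero SGE */
        lzeros = 0;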
      
      Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_raw_from_sgl(): replace len argument by nbytes · b6985389
      Nicolai Stange authored

      Currently, the nbytes local variable is calculated from the len argument
      as follows:
      
        ... mpi_read_raw_from_sgl(..., unsigned int len)
        {
          unsigned nbytes;
          ...
          if (!ents)
            nbytes = 0;
          else
            nbytes = len - lzeros;
          ...
        }
      
      Given that nbytes is derived from len in a trivial way and that the len
      argument is shadowed by a local len variable in several loops, this is just
      confusing.
      
      Rename the len argument to nbytes and get rid of the nbytes local variable.
      Do the nbytes calculation in place.
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_buffer(): fix buffer overflow · 462696fd
      Nicolai Stange authored

      Currently, mpi_read_buffer() writes full limbs to the output buffer
      and moves memory around to purge leading zero limbs afterwards.
      
      However, with
      
        commit 9cbe21d8 ("lib/mpi: only require buffers as big as needed for
                              the integer")
      
      the caller is only required to provide a buffer large enough to hold the
      result without the leading zeros.
      
      This might result in a buffer overflow for small MP numbers with leading
      zeros.
      
      Fix this by copying the result to its final destination within the
      output buffer and not copying the leading zeros at all.
      
      Fixes: 9cbe21d8 ("lib/mpi: only require buffers as big as needed for the integer")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_buffer(): replace open coded endian conversion · 90f864e2
      Nicolai Stange authored

      Currently, the endian conversion from CPU order to BE is open coded in
      mpi_read_buffer().
      
      Replace this by the centrally provided cpu_to_be*() macros.
      Copy from the temporary storage on stack to the destination buffer
      by means of memcpy().
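
      A hedged sketch of the pattern (illustrative; the limb width is selected
      at build time):

        #if BYTES_PER_MPI_LIMB == 4
            alimb = cpu_to_be32(a->d[i]);
        #elif BYTES_PER_MPI_LIMB == 8
            alimb = cpu_to_be64(a->d[i]);
        #endif
            memcpy(p, &alimb, BYTES_PER_MPI_LIMB);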
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_read_buffer(): optimize skipping of leading zero limbs · f00fa241
      Nicolai Stange authored

      Currently, if the number of leading zeros is greater than fits into a
      complete limb, mpi_read_buffer() skips them by iterating over them
      limb-wise.
      
      Instead of skipping the high order zero limbs within the loop, adjust
      the copying loop's bounds.
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_write_sgl(): replace open coded endian conversion · d7552906
      Nicolai Stange authored

      Currently, the endian conversion from CPU order to BE is open coded in
      mpi_write_sgl().
      
      Replace this by the centrally provided cpu_to_be*() macros.
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_write_sgl(): fix out-of-bounds stack access · cece762f
      Nicolai Stange authored

      Within the copying loop in mpi_write_sgl(), we have
      
        if (lzeros) {
          mpi_limb_t *limb1 = (void *)p - sizeof(alimb);
          mpi_limb_t *limb2 = (void *)p - sizeof(alimb)
                                     + lzeros;
          *limb1 = *limb2;
          ...
        }
      
      where p points past the end of alimb2 which lives on the stack and contains
      the current limb in BE order.
      
      The purpose of the above is to shift the non-zero bytes of alimb2 to its
      beginning in memory, i.e. to skip its leading zero bytes.
      
      However, limb2 points into the middle of alimb2 and thus, reading *limb2
      pulls in lzeros bytes from beyond the end of alimb2.
      
      Indeed, KASAN splats:
      
        BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0
                                            at addr ffff8800cb04f601
        Read of size 8 by task systemd-udevd/391
        page:ffffea00032c13c0 count:0 mapcount:0 mapping:   (null) index:0x0
        flags: 0x3fff8000000000()
        page dumped because: kasan: bad access detected
        CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G  B  L
                                                    4.5.0-next-20160316+ #12
        [...]
        Call Trace:
         [<ffffffff8194889e>] dump_stack+0xdc/0x15e
         [<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2
         [<ffffffff814892b5>] ? __dump_page+0x185/0x330
         [<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0
         [<ffffffff814724cd>] ? kzfree+0x2d/0x40
         [<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20
         [<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0
         [<ffffffff815109f1>] kasan_report+0x71/0xa0
         [<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0
         [<ffffffff8150ed34>] __asan_load8+0x64/0x70
         [<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0
         [<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620
         [<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180
         [<ffffffff8186e282>] rsa_verify+0x202/0x260
      
      What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1,
      the above will cause unaligned accesses which is bad on non-x86 archs.
      
      Fix the issue by preparing the starting point p for the upcoming copy
      operation, instead of shifting the source memory, i.e. alimb2.
      
      Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_write_sgl(): purge redundant pointer arithmetic · ea122be0
      Nicolai Stange authored

      Within the copying loop in mpi_write_sgl(), we have
      
        if (lzeros) {
          ...
          p -= lzeros;
          y = lzeros;
        }
        p = p - (sizeof(alimb) - y);
      
      If lzeros == 0, then y == 0 too, and nothing happens; otherwise y ==
      lzeros, so lzeros first gets subtracted from p and then added back again
      in the final line.
      
      Purge this redundancy.
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_write_sgl(): fix style issue with lzero decrement · 654842ef
      Nicolai Stange authored

      Within the copying loop in mpi_write_sgl(), we have
      
        if (lzeros > 0) {
          ...
          lzeros -= sizeof(alimb);
        }
      
      However, at this point, lzeros < sizeof(alimb) holds. Make this fact
      explicit by rewriting the above to
      
        if (lzeros) {
          ...
          lzeros = 0;
        }
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • lib/mpi: mpi_write_sgl(): fix skipping of leading zero limbs · f2d1362f
      Nicolai Stange authored

      Currently, if the number of leading zeros is greater than fits into a
      complete limb, mpi_write_sgl() skips them by iterating over them limb-wise.
      
      However, it fails to adjust its internal leading zeros tracking variable,
      lzeros, accordingly: it does a
      
        p -= sizeof(alimb);
        continue;
      
      which should really have been a
      
        lzeros -= sizeof(alimb);
        continue;
      
      Since lzeros never decreases if its initial value >= sizeof(alimb), nothing
      gets copied by mpi_write_sgl() in that case.
      
      Instead of skipping the high order zero limbs within the loop as shown
      above, fix the issue by adjusting the copying loop's bounds.
      
      Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers")
      Signed-off-by: Nicolai Stange <nicstange@gmail.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • rhashtable: accept GFP flags in rhashtable_walk_init · 8f6fd83c
      Bob Copeland authored

      In certain cases, the 802.11 mesh pathtable code wants to
      iterate over all of the entries in the forwarding table from
      the receive path, which is inside an RCU read-side critical
      section.  Enable walks inside atomic sections by allowing
      GFP_ATOMIC allocations for the walker state.
      
      Change all existing callsites to pass in GFP_KERNEL.
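
      A hedged usage sketch of the extra gfp_t argument (table and iterator
      names illustrative):

        struct rhashtable_iter iter;
        int err;

        /* receive path, inside an RCU read-side section: must not sleep */
        err = rhashtable_walk_init(&ht, &iter, GFP_ATOMIC);

        /* process context, as at all pre-existing callsites */
        err = rhashtable_walk_init(&ht, &iter, GFP_KERNEL);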
      Acked-by: Thomas Graf <tgraf@suug.ch>
      Signed-off-by: Bob Copeland <me@bobcopeland.com>
      [also adjust gfs2/glock.c and rhashtable tests]
      Signed-off-by: Johannes Berg <johannes.berg@intel.com>
  15. 01 Apr, 2016 1 commit
  16. 31 Mar, 2016 1 commit
  17. 25 Mar, 2016 4 commits
    • kasan: test fix: warn if the UAF could not be detected in kmalloc_uaf2 · 9dcadd38
      Alexander Potapenko authored
      
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, kasan: stackdepot implementation. Enable stackdepot for SLAB · cd11016e
      Alexander Potapenko authored

      Implement the stack depot and provide CONFIG_STACKDEPOT.  The stack
      depot allows KASAN to store allocation/deallocation stack traces for
      memory chunks.  The stack traces are stored in a hash table and
      referenced by handles which reside in the kasan_alloc_meta and
      kasan_free_meta structures in the allocated memory chunks.
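
      A hedged sketch of the handle round-trip as described (treat the details
      as illustrative):

        unsigned long entries[16];
        struct stack_trace trace = {
            .entries     = entries,
            .max_entries = ARRAY_SIZE(entries),
        };
        depot_stack_handle_t handle;

        save_stack_trace(&trace);                       /* capture */
        handle = depot_save_stack(&trace, GFP_NOWAIT);  /* dedup and store */
        /* later, when a report needs the stack: */
        depot_fetch_stack(handle, &trace);              /* resolve the handle */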
      
      IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
      duplication.
      
      Right now, stackdepot support is only enabled for the SLAB allocator.
      Once KASAN features in SLAB are on par with those in SLUB, we can switch
      SLUB to stackdepot as well, thus removing the dependency on SLUB's stack
      bookkeeping, which wastes a lot of memory.
      
      This patch is based on the "mm: kasan: stack depots" patch originally
      prepared by Dmitry Chernenkov.
      
      Joonsoo has said that he plans to reuse the stackdepot code for the
      mm/page_owner.c debugging facility.
      
      [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
      [aryabinin@virtuozzo.com: comment style fixes]
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, kasan: SLAB support · 7ed2f9e6
      Alexander Potapenko authored

      Add KASAN hooks to SLAB allocator.
      
      This patch is based on the "mm: kasan: unified support for SLUB and SLAB
      allocators" patch originally prepared by Dmitry Chernenkov.
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kasan: modify kmalloc_large_oob_right(), add kmalloc_pagealloc_oob_right() · e6e8379c
      Alexander Potapenko authored

      This patchset implements SLAB support for KASAN.

      Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap
      objects; therefore, we reimplement this feature in mm/kasan/stackdepot.c.
      The intention is to ultimately switch SLUB to use this implementation as
      well, which will save a lot of memory (right now SLUB bloats each object
      by 256 bytes to store the allocation/deallocation stacks).
      
      Also, neither SLUB nor SLAB delays the reuse of freed memory chunks,
      although such a delay is necessary for better detection of
      use-after-free errors.  We introduce a memory quarantine
      (mm/kasan/quarantine.c), which allows delayed reuse of deallocated
      memory.
      
      This patch (of 7):
      
      Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as
      the test only checks the page allocator functionality.  Also reimplement
      kmalloc_large_oob_right() so that the test allocates a large enough
      chunk of memory that still does not trigger the page allocator fallback.
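
      A hedged sketch of the two variants after the split (sizes
      illustrative):

        /* kmalloc_pagealloc_oob_right(): size forces the page_alloc fallback */
        char *ptr = kmalloc(KMALLOC_MAX_CACHE_SIZE + 10, GFP_KERNEL);

        /* kmalloc_large_oob_right(): large, but still slab-backed */
        size_t size = KMALLOC_MAX_CACHE_SIZE - 256;
        char *ptr2 = kmalloc(size, GFP_KERNEL);

        ptr2[size] = 'x';   /* out-of-bounds write KASAN should catch */
        kfree(ptr);
        kfree(ptr2);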
      Signed-off-by: Alexander Potapenko <glider@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Konstantin Serebryany <kcc@google.com>
      Cc: Dmitry Chernenkov <dmitryc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 23 Mar, 2016 1 commit