- 20 May, 2016 1 commit
-
-
Du, Changbin authored
I am going to introduce the debugobjects infrastructure to the USB subsystem, but before doing so I found that the debugobjects code itself could be improved. This patchset makes the fixup functions return bool instead of int, because a fixup only needs to report success or failure, and boolean is therefore the 'real' type. This patch (of 7): The object debugging infrastructure core provides fixup callbacks for the subsystems that use it. These callbacks are called from the debug code whenever a problem in debug_object_init is detected, and the debugobjects core expects them to return 1 when the fixup was successful and 0 otherwise. The return type is effectively boolean. Unfortunately, debug_object_fixup() uses the return value in an arithmetic operation, which left it unclear what the real return type is. Reading over the whole code, I found some places that use the return value incorrectly (see the next patch). So why not use bool instead? Signed-off-by:
Du, Changbin <changbin.du@intel.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Josh Triplett <josh@kernel.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tejun Heo <tj@kernel.org> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
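A minimal sketch of the resulting callback convention (the object type and descriptor names below are hypothetical, not part of this patch): a fixup callback reports success as bool, so callers no longer do arithmetic on the return value.

	static struct debug_obj_descr my_obj_debug_descr;

	static bool my_obj_fixup_init(void *addr, enum debug_obj_state state)
	{
		switch (state) {
		case ODEBUG_STATE_ACTIVE:
			/* The object was still active; re-run the init path. */
			debug_object_init(addr, &my_obj_debug_descr);
			return true;	/* fixup succeeded */
		default:
			return false;	/* nothing the fixup could do */
		}
	}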
-
- 16 May, 2016 1 commit
-
-
Daniel Borkmann authored
Since the blinding is strictly only called from inside eBPF JITs, we need to change the signatures of bpf_int_jit_compile() and bpf_prog_select_runtime() first in order to prepare for the fact that the eBPF program we're dealing with can change underneath us. Hence, call sites need to be handed back the latest prog. No functional change in this patch. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
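A sketch of the call-site pattern the new signature implies (the wrapper function and error handling here are illustrative, not taken from the patch): the JIT may hand back a different program, so callers must continue with the returned pointer.

	static struct bpf_prog *finalize_prog(struct bpf_prog *fp)
	{
		int err;

		/* The JIT may replace fp; always use the returned program. */
		fp = bpf_prog_select_runtime(fp, &err);
		if (err) {
			bpf_prog_free(fp);
			return ERR_PTR(err);
		}
		return fp;
	}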
-
- 12 May, 2016 1 commit
-
-
David Howells authored
This fixes CVE-2016-0758. In the ASN.1 decoder, when the length field of an ASN.1 value is extracted, it isn't validated against the remaining amount of data before being added to the cursor. With a sufficiently large size indicated, the check: datalen - dp < 2 may then fail due to integer overflow. Fix this by checking the indicated length against the amount of remaining data in both places where a definite length is determined. Whilst we're at it, make the following changes: (1) Check that the maximum size of an extended length does not exceed the capacity of the variable it's being stored in (len) rather than the type that variable is assumed to be (size_t). (2) Compare the EOC tag to the symbolic constant ASN1_EOC rather than the integer 0. (3) To reduce confusion, move the initialisation of len outside of: for (len = 0; n > 0; n--) { since it doesn't have anything to do with the loop counter n. Signed-off-by:
David Howells <dhowells@redhat.com> Reviewed-by:
Mimi Zohar <zohar@linux.vnet.ibm.com> Acked-by:
David Woodhouse <David.Woodhouse@intel.com> Acked-by:
Peter Jones <pjones@redhat.com>
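The shape of the added validation, sketched below (len, datalen and dp are the decoder's extracted length, total input size and cursor, as named above; the error label is illustrative): a definite length is rejected before use if it exceeds what actually remains in the buffer.

	if (unlikely(len > datalen - dp))
		goto data_overrun_error;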
-
- 09 May, 2016 1 commit
-
-
Al Viro authored
They are open-coded in all users except iov_iter_advance(), and they wouldn't be a bad idea there either - as it is, iov_iter_advance(i, 0) can end up dereferencing past the end of the iovec array. It doesn't do anything with the value it reads, and it is very unlikely to trigger an oops on that dereference, but it is not impossible. Reported-by:
Jiri Slaby <jslaby@suse.cz> Reported-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
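An illustrative version of the folded check (a simplified sketch, not the actual iterate_and_advance() macro body): clamp the size and bail out before the iovec array is ever touched, so advancing by zero dereferences nothing.

	static void sketch_iter_advance(struct iov_iter *i, size_t size)
	{
		if (unlikely(i->count < size))
			size = i->count;
		if (!size)
			return;
		/* ... walk i->iov / i->bvec here and adjust i->iov_offset ... */
	}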
-
- 06 May, 2016 1 commit
-
-
Joonsoo Kim authored
Recently we allowed saving stack traces whose hash value is 0. This introduces a problem: stackdepot can now return 0 even on success, so users of stackdepot cannot distinguish success from failure. In this patch, one bit is added to the handle and set for every valid handle, so that a valid handle is never 0 and a 0 handle unambiguously indicates failure. Fixes: 33334e25 ("lib/stackdepot.c: allow the stack trace hash to be zero") Link: http://lkml.kernel.org/r/1462252403-1106-1-git-send-email-iamjoonsoo.kim@lge.com Signed-off-by:
Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
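A sketch of the resulting handle layout (the field names and bit widths below are illustrative; the real code derives them from the slab order and alignment): one bit is always set for valid handles, so 0 can only mean failure.

	union handle_parts {
		depot_stack_handle_t handle;
		struct {
			u32 slabindex : 21;
			u32 offset    : 10;
			u32 valid     : 1;	/* always 1 for a stored trace */
		};
	};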
-
- 03 May, 2016 1 commit
-
-
Tudor Ambarus authored
A kernel taint results when loading the rsa_generic module: root@(none):~# modprobe rsa_generic asn1_decoder: module license 'unspecified' taints kernel. Disabling lock debugging due to kernel taint "Tainting" of the kernel is (usually) a way of indicating that a proprietary module has been inserted, which is not the case here. Signed-off-by:
Tudor Ambarus <tudor-dan.ambarus@nxp.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
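The usual remedy for this class of taint, sketched below (the exact files touched are not listed in the message): declare the module's license so the loader no longer treats it as proprietary.

	#include <linux/module.h>

	MODULE_LICENSE("GPL");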
-
- 29 Apr, 2016 1 commit
-
-
Alexander Potapenko authored
Do not bail out from depot_save_stack() if the stack trace has a zero hash. Initially depot_save_stack() silently dropped stack traces with zero hashes; however, there's actually no point in reserving this zero value. Reported-by:
Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by:
Alexander Potapenko <glider@google.com> Acked-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 24 Apr, 2016 1 commit
-
-
Nicolas Dichtel authored
Fix typo and describe 'padattr'. Fixes: 089bf1a6 ("libnl: add more helpers to align attributes on 64-bit") Signed-off-by:
Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- 21 Apr, 2016 1 commit
-
-
Nicolas Dichtel authored
Signed-off-by:
Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- 15 Apr, 2016 1 commit
-
-
Ming Lin authored
Now we are ready to move the mempool-based SG chained allocator code from the SCSI driver to lib/sg_pool.c, which is compiled only when the Kconfig symbol CONFIG_SG_POOL is set. SCSI selects CONFIG_SG_POOL. Reviewed-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Ming Lin <ming.l@ssi.samsung.com> Reviewed-by:
Sagi Grimberg <sagi@grimberg.me> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com>
-
- 13 Apr, 2016 2 commits
-
-
Rui Salvaterra authored
These identifiers are bogus. The interested architectures should define HAVE_EFFICIENT_UNALIGNED_ACCESS whenever relevant to do so. If this isn't true for some arch, it should be fixed in the arch definition. Signed-off-by:
Rui Salvaterra <rsalvaterra@gmail.com> Reviewed-by:
Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rui Salvaterra authored
Based on Sergey's test patch [1], this fixes zram with lz4 compression on big endian cpus. Note that the 64-bit preprocessor test is not a cleanup, it's part of the fix, since those identifiers are bogus (for example, __ppc64__ isn't defined anywhere else in the kernel, which means we'd fall into the 32-bit definitions on ppc64). Tested on ppc64 with no regression on x86_64. [1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4 Cc: stable@vger.kernel.org Suggested-by:
Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by:
Rui Salvaterra <rsalvaterra@gmail.com> Reviewed-by:
Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
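The principle behind both changes, sketched below (the macro name is hypothetical and this is not the literal lz4defs.h hunk): key the fast path off generic kernel config symbols rather than per-architecture compiler defines such as __ppc64__.

	#if defined(CONFIG_64BIT) && defined(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)
	#define LZ4_FAST_UNALIGNED_64BIT 1	/* hypothetical name */
	#else
	#define LZ4_FAST_UNALIGNED_64BIT 0
	#endif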
-
- 08 Apr, 2016 1 commit
-
-
Al Viro authored
Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- 06 Apr, 2016 5 commits
-
-
Naveen N. Rao authored
Some of these tests proved useful with the powerpc eBPF JIT port due to sign-extended 16-bit immediate loads. Though some of these aspects get covered in other tests, it is better to have explicit tests so as to quickly tag the precise problem. Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Naveen N. Rao authored
Add BPF_ALU32 and BPF_ALU64 tests for adding two 32-bit values that result in 32-bit overflow. Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Naveen N. Rao authored
Unsigned Jump-if-Greater-Than. Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
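An illustrative test_bpf entry in the spirit of the above (the description string and values are made up; it follows the lib/test_bpf.c table format): -1 compared as unsigned is greater than 1, so the jump must be taken and the program returns 1.

	{
		"JMP_JGT_K: unsigned comparison of -1 and 1",
		.u.insns_int = {
			BPF_ALU32_IMM(BPF_MOV, R0, 0),
			BPF_LD_IMM64(R1, -1),
			BPF_JMP_IMM(BPF_JGT, R1, 1, 1),	/* unsigned: taken */
			BPF_EXIT_INSN(),
			BPF_ALU32_IMM(BPF_MOV, R0, 1),
			BPF_EXIT_INSN(),
		},
		INTERNAL,
		{ },
		{ { 0, 1 } },
	},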
-
Naveen N. Rao authored
The JMP_JSET tests incorrectly used BPF_JNE. Fix them to use BPF_JSET. Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by:
Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Jerome Marchand authored
Changes since V1: fixed the description and added KASan warning. In assoc_array_insert_into_terminal_node(), we call the compare_object() method on all non-empty slots, even when they're not leaves, passing a pointer to an unexpected structure to compare_object(). Currently it causes an out-of-bound read access in keyring_compare_object detected by KASan (see below). The issue is easily reproduced with keyutils testsuite. Only call compare_object() when the slot is a leave. KASan warning: ================================================================== BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838 Read of size 8 by task keyctl/1655 ============================================================================= BUG kmalloc-192 (Not tainted): kasan: bad access detected ----------------------------------------------------------------------------- Disabling lock debugging due to kernel taint INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647 ___slab_alloc+0x563/0x5c0 __slab_alloc+0x51/0x90 kmem_cache_alloc_trace+0x263/0x300 assoc_array_insert+0xfd0/0x3a60 __key_link_begin+0xfc/0x270 key_create_or_update+0x459/0xaf0 SyS_add_key+0x1ba/0x350 entry_SYSCALL_64_fastpath+0x12/0x76 INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080 INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1 Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00 ...`............ Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff ...`.......`.... Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ CPU: 0 PID: 1655 Comm: keyctl Tainted: G B 4.5.0-rc4-kasan+ #291 Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011 0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491 ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969 ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650 Call Trace: [<ffffffff81b60491>] dump_stack+0x85/0xc4 [<ffffffff815e2969>] print_trailer+0xf9/0x150 [<ffffffff815e9454>] object_err+0x34/0x40 [<ffffffff815ebe50>] kasan_report_error+0x230/0x550 [<ffffffff819949be>] ? keyring_get_key_chunk+0x13e/0x210 [<ffffffff815ec62d>] __asan_report_load_n_noabort+0x5d/0x70 [<ffffffff81994cc3>] ? keyring_compare_object+0x213/0x240 [<ffffffff81994cc3>] keyring_compare_object+0x213/0x240 [<ffffffff81bc238c>] assoc_array_insert+0x86c/0x3a60 [<ffffffff81bc1b20>] ? assoc_array_cancel_edit+0x70/0x70 [<ffffffff8199797d>] ? 
__key_link_begin+0x20d/0x270 [<ffffffff8199786c>] __key_link_begin+0xfc/0x270 [<ffffffff81993389>] key_create_or_update+0x459/0xaf0 [<ffffffff8128ce0d>] ? trace_hardirqs_on+0xd/0x10 [<ffffffff81992f30>] ? key_type_lookup+0xc0/0xc0 [<ffffffff8199e19d>] ? lookup_user_key+0x13d/0xcd0 [<ffffffff81534763>] ? memdup_user+0x53/0x80 [<ffffffff819983ea>] SyS_add_key+0x1ba/0x350 [<ffffffff81998230>] ? key_get_type_from_user.constprop.6+0xa0/0xa0 [<ffffffff828bcf4e>] ? retint_user+0x18/0x23 [<ffffffff8128cc7e>] ? trace_hardirqs_on_caller+0x3fe/0x580 [<ffffffff81004017>] ? trace_hardirqs_on_thunk+0x17/0x19 [<ffffffff828bc432>] entry_SYSCALL_64_fastpath+0x12/0x76 Memory state around the buggy address: ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00 ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc >ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ^ ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00 ================================================================== Signed-off-by:
Jerome Marchand <jmarchan@redhat.com> Signed-off-by:
David Howells <dhowells@redhat.com> cc: stable@vger.kernel.org
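A sketch of the guard described above (surrounding context heavily simplified): a slot's pointer is only handed to ->compare_object() when it actually refers to a leaf.

	ptr = node->slots[i];
	if (assoc_array_ptr_is_leaf(ptr) &&
	    ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key)) {
		/* an object with the same key already exists in this node */
	}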
-
- 05 Apr, 2016 15 commits
-
-
Nicolai Stange authored
Within the copying loop in mpi_read_raw_from_sgl(), the last input SGE's byte count gets artificially extended as follows: if (sg_is_last(sg) && (len % BYTES_PER_MPI_LIMB)) len += BYTES_PER_MPI_LIMB - (len % BYTES_PER_MPI_LIMB); Within the following byte copying loop, this causes reads beyond that SGE's allocated buffer: BUG: KASAN: slab-out-of-bounds in mpi_read_raw_from_sgl+0x331/0x650 at addr ffff8801e168d4d8 Read of size 1 by task systemd-udevd/721 [...] Call Trace: [<ffffffff818c4d35>] dump_stack+0xbc/0x117 [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169 [<ffffffff814af5d1>] ? print_section+0x61/0xb0 [<ffffffff814b1109>] print_trailer+0x179/0x2c0 [<ffffffff814bc524>] object_err+0x34/0x40 [<ffffffff814bfdc7>] kasan_report_error+0x307/0x8c0 [<ffffffff814bf315>] ? kasan_unpoison_shadow+0x35/0x50 [<ffffffff814bf38e>] ? kasan_kmalloc+0x5e/0x70 [<ffffffff814c0ad1>] kasan_report+0x71/0xa0 [<ffffffff81938171>] ? mpi_read_raw_from_sgl+0x331/0x650 [<ffffffff814bf1a6>] __asan_load1+0x46/0x50 [<ffffffff81938171>] mpi_read_raw_from_sgl+0x331/0x650 [<ffffffff817f41b6>] rsa_verify+0x106/0x260 [<ffffffff817f40b0>] ? rsa_set_pub_key+0xf0/0xf0 [<ffffffff818edc79>] ? sg_init_table+0x29/0x50 [<ffffffff817f4d22>] ? pkcs1pad_sg_set_buf+0xb2/0x2e0 [<ffffffff817f5b74>] pkcs1pad_verify+0x1f4/0x2b0 [<ffffffff81831057>] public_key_verify_signature+0x3a7/0x5e0 [<ffffffff81830cb0>] ? public_key_describe+0x80/0x80 [<ffffffff817830f0>] ? keyring_search_aux+0x150/0x150 [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370 [<ffffffff814b83f0>] ? kfree+0x220/0x370 [<ffffffff818312c2>] public_key_verify_signature_2+0x32/0x50 [<ffffffff81830b5c>] verify_signature+0x7c/0xb0 [<ffffffff81835d0c>] pkcs7_validate_trust+0x42c/0x5f0 [<ffffffff813c391a>] system_verify_data+0xca/0x170 [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0 [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290 [...] The exact purpose of the len extension isn't clear to me, but due to its form, I suspect that it's a leftover somehow accounting for leading zero bytes within the most significant output limb. Note however that without that len adjustement, the total number of bytes ever processed by the inner loop equals nbytes and thus, the last output limb gets written at this point. Thus the net effect of the len adjustement cited above is just to keep the inner loop running for some more iterations, namely < BYTES_PER_MPI_LIMB ones, reading some extra bytes from beyond the last SGE's buffer and discarding them afterwards. Fix this issue by purging the extension of len beyond the last input SGE's buffer length. Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Within the byte reading loop in mpi_read_raw_sgl(), there are two housekeeping indices used, z and x. At all times, the index z represents the number of output bytes covered by the input SGEs for which processing has completed so far. This includes any leading zero bytes within the most significant limb. The index x changes its meaning after the first outer loop's iteration though: while processing the first input SGE, it represents "number of leading zero bytes in most significant output limb" + "current position within current SGE" For the remaining SGEs OTOH, x corresponds just to "current position within current SGE" After all, it is only the sum of z and x that has any meaning for the output buffer and thus, the "number of leading zero bytes in most significant output limb" part can be moved away from x into z from the beginning, opening up the opportunity for cleaner code. Before the outer loop iterating over the SGEs, don't initialize z with zero, but with the number of leading zero bytes in the most significant output limb. For the inner loop iterating over a single SGE's bytes, get rid of the buf_shift offset to x' bounds and let x run from zero to sg->length - 1. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
The number of bits, nbits, is calculated in mpi_read_raw_from_sgl() as follows: nbits = nbytes * 8; Afterwards, the number of leading zero bits of the first byte get subtracted: nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros)); However, count_leading_zeros() takes an unsigned long and thus, the u8 gets promoted to an unsigned long. Thus, the above doesn't subtract the number of leading zeros in the most significant nonzero input byte from nbits, but the number of leading zeros of the most significant nonzero input byte promoted to unsigned long, i.e. BITS_PER_LONG - 8 too many. Fix this by subtracting count_leading_zeros(...) - (BITS_PER_LONG - 8) from nbits only. Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
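A sketch of the corrected computation (per the description above): count_leading_zeros() operates on an unsigned long, so the u8 promotion contributes BITS_PER_LONG - 8 extra leading zeros that have to be discounted.

	nbits -= count_leading_zeros(*(u8 *)(sg_virt(sgl) + lzeros)) -
		 (BITS_PER_LONG - 8);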
-
Nicolai Stange authored
In mpi_read_raw_from_sgl(), unsigned nbits is calculated as follows: nbits = nbytes * 8; and redundantly cleared later on if nbytes == 0: if (nbytes > 0) ... else nbits = 0; Purge this redundant clearing for the sake of clarity. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
At the very beginning of mpi_read_raw_from_sgl(), the leading zeros of the input scatterlist are counted: lzeros = 0; for_each_sg(sgl, sg, ents, i) { ... if (/* sg contains nonzero bytes */) break; /* sg contains nothing but zeros here */ ents--; lzeros = 0; } Later on, the total number of trailing nonzero bytes is calculated by subtracting the number of leading zero bytes from the total number of input bytes: nbytes -= lzeros; However, since lzeros gets reset to zero for each completely zero leading sg in the loop above, it doesn't include those. Besides wasting resources by allocating a too large output buffer, this mistake propagates into the calculation of x, the number of leading zeros within the most significant output limb: x = BYTES_PER_MPI_LIMB - nbytes % BYTES_PER_MPI_LIMB; What's more, the low order bytes of the output, equal in number to the extra bytes in nbytes, are left uninitialized. Fix this by adjusting nbytes for each completely zero leading scatterlist entry. Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, the nbytes local variable is calculated from the len argument as follows: ... mpi_read_raw_from_sgl(..., unsigned int len) { unsigned nbytes; ... if (!ents) nbytes = 0; else nbytes = len - lzeros; ... } Given that nbytes is derived from len in a trivial way and that the len argument is shadowed by a local len variable in several loops, this is just confusing. Rename the len argument to nbytes and get rid of the nbytes local variable. Do the nbytes calculation in place. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, mpi_read_buffer() writes full limbs to the output buffer and moves memory around to purge leading zero limbs afterwards. However, with commit 9cbe21d8 ("lib/mpi: only require buffers as big as needed for the integer") the caller is only required to provide a buffer large enough to hold the result without the leading zeros. This might result in a buffer overflow for small MP numbers with leading zeros. Fix this by copying the result to its final destination within the output buffer and not copying the leading zeros at all. Fixes: 9cbe21d8 ("lib/mpi: only require buffers as big as needed for the integer") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, the endian conversion from CPU order to BE is open coded in mpi_read_buffer(). Replace this by the centrally provided cpu_to_be*() macros. Copy from the temporary storage on stack to the destination buffer by means of memcpy(). Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, if the number of leading zeros is larger than what fits into a complete limb, mpi_read_buffer() skips them by iterating over them limb-wise. Instead of skipping the high-order zero limbs within the loop, adjust the copying loop's bounds. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, the endian conversion from CPU order to BE is open coded in mpi_write_sgl(). Replace this by the centrally provided cpu_to_be*() macros. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Within the copying loop in mpi_write_sgl(), we have if (lzeros) { mpi_limb_t *limb1 = (void *)p - sizeof(alimb); mpi_limb_t *limb2 = (void *)p - sizeof(alimb) + lzeros; *limb1 = *limb2; ... } where p points past the end of alimb2 which lives on the stack and contains the current limb in BE order. The purpose of the above is to shift the non-zero bytes of alimb2 to its beginning in memory, i.e. to skip its leading zero bytes. However, limb2 points somewhere into the middle of alimb2 and thus, reading *limb2 pulls in lzero bytes from somewhere. Indeed, KASAN splats: BUG: KASAN: stack-out-of-bounds in mpi_write_to_sgl+0x4e3/0x6f0 at addr ffff8800cb04f601 Read of size 8 by task systemd-udevd/391 page:ffffea00032c13c0 count:0 mapcount:0 mapping: (null) index:0x0 flags: 0x3fff8000000000() page dumped because: kasan: bad access detected CPU: 3 PID: 391 Comm: systemd-udevd Tainted: G B L 4.5.0-next-20160316+ #12 [...] Call Trace: [<ffffffff8194889e>] dump_stack+0xdc/0x15e [<ffffffff819487c2>] ? _atomic_dec_and_lock+0xa2/0xa2 [<ffffffff814892b5>] ? __dump_page+0x185/0x330 [<ffffffff8150ffd6>] kasan_report_error+0x5e6/0x8b0 [<ffffffff814724cd>] ? kzfree+0x2d/0x40 [<ffffffff819c5bce>] ? mpi_free_limb_space+0xe/0x20 [<ffffffff819c469e>] ? mpi_powm+0x37e/0x16f0 [<ffffffff815109f1>] kasan_report+0x71/0xa0 [<ffffffff819c0353>] ? mpi_write_to_sgl+0x4e3/0x6f0 [<ffffffff8150ed34>] __asan_load8+0x64/0x70 [<ffffffff819c0353>] mpi_write_to_sgl+0x4e3/0x6f0 [<ffffffff819bfe70>] ? mpi_set_buffer+0x620/0x620 [<ffffffff819c0e6f>] ? mpi_cmp+0xbf/0x180 [<ffffffff8186e282>] rsa_verify+0x202/0x260 What's more, since lzeros can be anything from 1 to sizeof(mpi_limb_t)-1, the above will cause unaligned accesses which is bad on non-x86 archs. Fix the issue, by preparing the starting point p for the upcoming copy operation instead of shifting the source memory, i.e. alimb2. Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Within the copying loop in mpi_write_sgl(), we have if (lzeros) { ... p -= lzeros; y = lzeros; } p = p - (sizeof(alimb) - y); If lzeros == 0, then y == 0, too. Thus, lzeros gets subtracted and added back again to p. Purge this redundancy. Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Within the copying loop in mpi_write_sgl(), we have if (lzeros > 0) { ... lzeros -= sizeof(alimb); } However, at this point, lzeros < sizeof(alimb) holds. Make this fact explicit by rewriting the above to if (lzeros) { ... lzeros = 0; } Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Nicolai Stange authored
Currently, if the number of leading zeros is greater than fits into a complete limb, mpi_write_sgl() skips them by iterating over them limb-wise. However, it fails to adjust its internal leading zeros tracking variable, lzeros, accordingly: it does a p -= sizeof(alimb); continue; which should really have been a lzeros -= sizeof(alimb); continue; Since lzeros never decreases if its initial value >= sizeof(alimb), nothing gets copied by mpi_write_sgl() in that case. Instead of skipping the high order zero limbs within the loop as shown above, fix the issue by adjusting the copying loop's bounds. Fixes: 2d4d1eea ("lib/mpi: Add mpi sgl helpers") Signed-off-by:
Nicolai Stange <nicstange@gmail.com> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Bob Copeland authored
In certain cases, the 802.11 mesh pathtable code wants to iterate over all of the entries in the forwarding table from the receive path, which is inside an RCU read-side critical section. Enable walks inside atomic sections by allowing GFP_ATOMIC allocations for the walker state. Change all existing callsites to pass in GFP_KERNEL. Acked-by:
Thomas Graf <tgraf@suug.ch> Signed-off-by:
Bob Copeland <me@bobcopeland.com> [also adjust gfs2/glock.c and rhashtable tests] Signed-off-by:
Johannes Berg <johannes.berg@intel.com>
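A sketch of the updated call (signature per the description; variable names illustrative): existing callers keep GFP_KERNEL, while a walker set up from an atomic section may now pass GFP_ATOMIC.

	err = rhashtable_walk_init(&ht, &iter, GFP_ATOMIC);
	if (err)
		return err;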
-
- 01 Apr, 2016 1 commit
-
-
Richard Cochran authored
By accident I stumbled across code that is no longer used. According to git grep, the global functions in lib/proportions.c are not used anywhere. This patch removes the old, unused code. Peter Zijlstra further commented: "Ah indeed, that got replaced with the flex proportion code a while back." Signed-off-by:
Richard Cochran <rcochran@linutronix.de> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/4265b49bed713fbe3faaf8c05da0e1792f09c0b3.1459432020.git.rcochran@linutronix.de Signed-off-by:
Ingo Molnar <mingo@kernel.org>
-
- 31 Mar, 2016 1 commit
-
-
Paul E. McKenney authored
This commit adds a new rcuperf module that carries out simple performance tests of RCU grace periods. Signed-off-by:
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
-
- 25 Mar, 2016 4 commits
-
-
Alexander Potapenko authored
Signed-off-by:
Alexander Potapenko <glider@google.com> Acked-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Alexander Potapenko authored
Implement the stack depot and provide CONFIG_STACKDEPOT. The stack depot will allow KASAN to store allocation/deallocation stack traces for memory chunks. The stack traces are stored in a hash table and referenced by handles which reside in the kasan_alloc_meta and kasan_free_meta structures in the allocated memory chunks. IRQ stack traces are cut below the IRQ entry point to avoid unnecessary duplication. Right now stackdepot support is only enabled in the SLAB allocator. Once KASAN features in SLAB are on par with those in SLUB we can switch SLUB to stackdepot as well, thus removing the dependency on SLUB stack bookkeeping, which wastes a lot of memory. This patch is based on the "mm: kasan: stack depots" patch originally prepared by Dmitry Chernenkov. Joonsoo has said that he plans to reuse the stackdepot code for the mm/page_owner.c debugging facility. [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t] [aryabinin@virtuozzo.com: comment style fixes] Signed-off-by:
Alexander Potapenko <glider@google.com> Signed-off-by:
Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
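A sketch of the client-side usage the description implies (KASAN's real call sites differ in detail; the helper name is made up): a trace is saved once and only the compact handle is kept in the object metadata.

	static depot_stack_handle_t record_stack(void)
	{
		unsigned long entries[16];
		struct stack_trace trace = {
			.entries	= entries,
			.max_entries	= ARRAY_SIZE(entries),
			.skip		= 0,
		};

		save_stack_trace(&trace);
		return depot_save_stack(&trace, GFP_NOWAIT);
	}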
-
Alexander Potapenko authored
Add KASAN hooks to SLAB allocator. This patch is based on the "mm: kasan: unified support for SLUB and SLAB allocators" patch originally prepared by Dmitry Chernenkov. Signed-off-by:
Alexander Potapenko <glider@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Alexander Potapenko authored
This patchset implements SLAB support for KASAN. Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap objects, therefore we reimplement this feature in mm/kasan/stackdepot.c. The intention is to ultimately switch SLUB to use this implementation as well, which will save a lot of memory (right now SLUB bloats each object by 256 bytes to store the allocation/deallocation stacks). Also, neither SLUB nor SLAB delays the reuse of freed memory chunks, which is necessary for better detection of use-after-free errors. We introduce memory quarantine (mm/kasan/quarantine.c), which allows delayed reuse of deallocated memory. This patch (of 7): Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as the test only checks the page allocator functionality. Also reimplement kmalloc_large_oob_right() so that the test allocates a large enough chunk of memory that still does not trigger the page allocator fallback. Signed-off-by:
Alexander Potapenko <glider@google.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Konstantin Serebryany <kcc@google.com> Cc: Dmitry Chernenkov <dmitryc@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 23 Mar, 2016 1 commit
-
-
Helge Deller authored
On parisc and metag the stack grows upwards, so for those we need to scan the stack downwards in order to calculate how much stack a process has used. Tested on a 64bit parisc kernel. Signed-off-by:
Helge Deller <deller@gmx.de>
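An illustrative helper (not the actual patch) showing why the scan direction matters: assuming untouched stack words are still zero, the used portion is bounded by the highest touched word when the stack grows upwards, and by the lowest touched word otherwise.

	static unsigned long stack_bytes_used(unsigned long *lo, unsigned long *hi)
	{
		unsigned long *p;

	#ifdef CONFIG_STACK_GROWSUP
		/* Stack grows up: scan down for the highest touched word. */
		for (p = hi - 1; p >= lo; p--)
			if (*p)
				return (p + 1 - lo) * sizeof(*p);
	#else
		/* Stack grows down: scan up for the lowest touched word. */
		for (p = lo; p < hi; p++)
			if (*p)
				return (hi - p) * sizeof(*p);
	#endif
		return 0;
	}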
-