1. 28 Sep, 2016 1 commit
  2. 15 Sep, 2016 1 commit
    • aio: mark AIO pseudo-fs noexec · 22f6b4d3
      Jann Horn authored
      This ensures that do_mmap() won't implicitly make AIO memory mappings
      executable if the READ_IMPLIES_EXEC personality flag is set.  Such
      behavior is problematic because the security_mmap_file LSM hook doesn't
      catch this case, potentially permitting an attacker to bypass a W^X
      policy enforced by SELinux.
      I have tested the patch on my machine.
      To test the behavior, compile and run this:
          #define _GNU_SOURCE
          #include <unistd.h>
          #include <sys/personality.h>
          #include <linux/aio_abi.h>
          #include <err.h>
          #include <stdlib.h>
          #include <stdio.h>
          #include <sys/syscall.h>

          int main(void) {
              personality(READ_IMPLIES_EXEC);
              aio_context_t ctx = 0;
              if (syscall(__NR_io_setup, 1, &ctx))
                  err(1, "io_setup");
              char cmd[1000];
              sprintf(cmd, "cat /proc/%d/maps | grep -F '/[aio]'",
                      (int)getpid());
              system(cmd);
              return 0;
          }
      In the output, "rw-s" is good, "rwxs" is bad.
      Signed-off-by: Jann Horn <jann@thejh.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 24 May, 2016 1 commit
  4. 03 Apr, 2016 1 commit
  5. 04 Sep, 2015 1 commit
  6. 15 Apr, 2015 1 commit
    • aio: fix serial draining in exit_aio() · dc48e56d
      Jens Axboe authored
      exit_aio() currently serializes killing io contexts. Each context
      killing ends up having to do percpu_ref_kill(), which in turn has
      to wait for an RCU grace period. This can take a long time, depending
      on the number of contexts. And there's no point in doing them serially,
      when we could be waiting for all of them in one fell swoop.
      This patch makes my fio thread offload test case exit in 0.2s instead
      of almost 6s.
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
  7. 12 Apr, 2015 8 commits
  8. 06 Apr, 2015 2 commits
    • ioctx_alloc(): fix vma (and file) leak on failure · deeb8525
      Al Viro authored
      If we fail past the aio_setup_ring(), we need to destroy the
      mapping.  We don't need to care about anybody having found ctx,
      or added requests to it, since the last failure exit is exactly
      the failure to make ctx visible to lookups.
      Reproducer (based on one by Joe Mario <jmario@redhat.com>):
      #include <stdio.h>
      #include <stdlib.h>
      #include <unistd.h>
      #include <libaio.h>

      void count(char *p)
      {
      	char s[80];
      	printf("%s: ", p);
      	fflush(stdout);
      	sprintf(s, "/bin/cat /proc/%d/maps|/bin/fgrep -c '/[aio] (deleted)'", getpid());
      	system(s);
      }

      int main()
      {
      	io_context_t *ctx;
      	int created, limit, i, destroyed;
      	FILE *f;

      	count("before");
      	if ((f = fopen("/proc/sys/fs/aio-max-nr", "r")) == NULL)
      		perror("opening aio-max-nr");
      	else if (fscanf(f, "%d", &limit) != 1)
      		fprintf(stderr, "can't parse aio-max-nr\n");
      	else if ((ctx = calloc(limit, sizeof(io_context_t))) == NULL)
      		perror("allocating aio_context_t array");
      	else {
      		for (i = 0, created = 0; i < limit; i++)
      			if (io_setup(1000, ctx + created) == 0)
      				created++;
      		for (i = 0, destroyed = 0; i < created; i++)
      			if (io_destroy(ctx[i]) == 0)
      				destroyed++;
      		printf("created %d, failed %d, destroyed %d\n",
      			created, limit - created, destroyed);
      		count("after");
      	}
      	return 0;
      }
      Found-by: Joe Mario <jmario@redhat.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fix mremap() vs. ioctx_kill() race · b2edffdd
      Al Viro authored
      Teach the ->mremap() method to return an error, and have it fail for
      aio mappings that are in the process of being killed.
      Note that in case of ->mremap() failure we need to undo move_page_tables()
      we'd already done; we could call ->mremap() first, but then the failure of
      move_page_tables() would require undoing whatever _successful_ ->mremap()
      has done, which would be a lot more headache in general.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  9. 13 Mar, 2015 3 commits
    • fs: split generic and aio kiocb · 04b2fa9f
      Christoph Hellwig authored
      Most callers in the kernel want to perform synchronous file I/O, but
      still have to bloat the stack with a full struct kiocb.  Split out
      the parts needed in filesystem code from those in the aio code, and
      only allocate those needed to pass down arguments on the stack.  The
      aio code embeds the generic iocb in the one it allocates and can
      easily get back to it by using container_of.
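The embed-and-recover layout described above can be sketched in plain C. This is an illustrative model, not the kernel's actual definitions: the struct names and fields (`kiocb_sketch`, `aio_kiocb_sketch`, `ki_user_data`) are invented for the example; only the `container_of`/`offsetof` technique is the point.

```c
#include <stddef.h>

/* Generic part: what filesystem code would need (illustrative fields). */
struct kiocb_sketch {
    long ki_pos;
    void (*ki_complete)(struct kiocb_sketch *iocb, long res);
};

/* AIO-private container embedding the generic iocb as a member. */
struct aio_kiocb_sketch {
    struct kiocb_sketch common;   /* recoverable via container_of */
    int ki_user_data;             /* aio-only bookkeeping */
};

/* container_of: recover the enclosing struct from a member pointer
 * by subtracting the member's offset within the enclosing type. */
#define container_of_sketch(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Given only the generic iocb, get back the aio container's data. */
int user_data_from_common(struct kiocb_sketch *iocb)
{
    struct aio_kiocb_sketch *req =
        container_of_sketch(iocb, struct aio_kiocb_sketch, common);
    return req->ki_user_data;
}
```

Because the generic part is embedded (not pointed to), synchronous callers can keep it on the stack while the aio code allocates the larger container and recovers it in the completion path.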
      Also add a ->ki_complete method to struct kiocb; this is used to call
      into the aio code and thus removes the dependency on aio for filesystems
      implementing asynchronous operations.  It will also allow other callers
      to substitute their own completion callback.
      We also add a new ->ki_flags field to work around the nasty layering
      violation recently introduced in commit 5e33f6 ("usb: gadget: ffs: add
      eventfd notification about ffs events").
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fs: don't allow to complete sync iocbs through aio_complete · 599bd19b
      Christoph Hellwig authored
      The AIO interface is fairly complex because it tries to allow
      filesystems to always work async and then wakeup a synchronous
      caller through aio_complete.  It turns out that basically no one
      was doing this to avoid the complexity and context switches,
      and we've already fixed up the remaining users and can now
      get rid of this case.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • fs: remove ki_nbytes · 66ee59af
      Christoph Hellwig authored
      There is no need to pass the total request length in the kiocb, as
      it is already passed in through the iov_iter argument.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  10. 20 Feb, 2015 1 commit
  11. 04 Feb, 2015 1 commit
    • aio: annotate aio_read_event_ring for sleep patterns · 9c9ce763
      Dave Chinner authored
      Under CONFIG_DEBUG_ATOMIC_SLEEP=y, aio_read_event_ring() will throw
      warnings like the following due to being called from wait_event
      context:
       WARNING: CPU: 0 PID: 16006 at kernel/sched/core.c:7300 __might_sleep+0x7f/0x90()
       do not call blocking ops when !TASK_RUNNING; state=1 set at [<ffffffff810d85a3>] prepare_to_wait_event+0x63/0x110
       Modules linked in:
       CPU: 0 PID: 16006 Comm: aio-dio-fcntl-r Not tainted 3.19.0-rc6-dgc+ #705
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
        ffffffff821c0372 ffff88003c117cd8 ffffffff81daf2bd 000000000000d8d8
        ffff88003c117d28 ffff88003c117d18 ffffffff8109beda ffff88003c117cf8
        ffffffff821c115e 0000000000000061 0000000000000000 00007ffffe4aa300
       Call Trace:
        [<ffffffff81daf2bd>] dump_stack+0x4c/0x65
        [<ffffffff8109beda>] warn_slowpath_common+0x8a/0xc0
        [<ffffffff8109bf56>] warn_slowpath_fmt+0x46/0x50
        [<ffffffff810d85a3>] ? prepare_to_wait_event+0x63/0x110
        [<ffffffff810d85a3>] ? prepare_to_wait_event+0x63/0x110
        [<ffffffff810bdfcf>] __might_sleep+0x7f/0x90
        [<ffffffff81db8344>] mutex_lock+0x24/0x45
        [<ffffffff81216b7c>] aio_read_events+0x4c/0x290
        [<ffffffff81216fac>] read_events+0x1ec/0x220
        [<ffffffff810d8650>] ? prepare_to_wait_event+0x110/0x110
        [<ffffffff810fdb10>] ? hrtimer_get_res+0x50/0x50
        [<ffffffff8121899d>] SyS_io_getevents+0x4d/0xb0
        [<ffffffff81dba5a9>] system_call_fastpath+0x12/0x17
       ---[ end trace bde69eaf655a4fea ]---
      There is not actually a bug here, so annotate the code to tell the
      debug logic that everything is just fine and not to fire a false
      positive.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
  12. 20 Jan, 2015 2 commits
  13. 13 Dec, 2014 2 commits
    • aio: Skip timer for io_getevents if timeout=0 · 5f785de5
      Fam Zheng authored
      In this case, it is basically polling. Let's not involve the timer at
      all, because that would hurt performance for application event loops.
      In an arbitrary test I've done, the io_getevents syscall elapsed time
      drops from 50000+ nanoseconds to a few hundred.
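The fast path amounts to: with a zero timeout the caller is polling, so check the ring once and return, without ever arming a timer. A minimal sketch, with illustrative types and names (not the kernel's):

```c
#include <stdbool.h>

/* Illustrative event-ring state: events live between head and tail. */
struct ring_sketch {
    unsigned head, tail;
};

static bool ring_has_events(const struct ring_sketch *r)
{
    return r->head != r->tail;
}

/* Sketch of the timeout=0 fast path: check once, return immediately,
 * never arm a timer.  A nonzero timeout would sleep here instead. */
int getevents_sketch(struct ring_sketch *r, long timeout_ns)
{
    if (timeout_ns == 0)
        return ring_has_events(r) ? 1 : 0;  /* no timer involved */
    /* ... arm timer / wait for events (omitted in this sketch) ... */
    return ring_has_events(r) ? 1 : 0;
}
```

Skipping timer setup on this path saves the hrtimer start/cancel cost on every poll iteration of an event loop.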
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
    • aio: Make it possible to remap aio ring · e4a0d3e7
      Pavel Emelyanov authored
      There are actually two issues this patch addresses. Let me start with
      the one I tried to solve in the beginning.
      So, in the checkpoint-restore project (criu) we try to dump tasks'
      state and restore one back exactly as it was. One of the tasks' state
      bits is the rings set up with the io_setup() call. There are (almost) no
      problems in dumping them, but there is a problem restoring them -- if I dump a task
      with aio ring originally mapped at address A, I want to restore one
      back at exactly the same address A. Unfortunately, the io_setup() does
      not allow for that -- it mmaps the ring at whatever place mm finds
      appropriate (it calls do_mmap_pgoff() with zero address and without
      the MAP_FIXED flag).
      To make restore possible I'm going to mremap() the freshly created ring
      into the address A (under which it was seen before dump). The problem is
      that the ring's virtual address is passed back to the user-space as the
      context ID and this ID is then used as search key by all the other io_foo()
      calls. Reworking this ID to be just some integer doesn't seem to work, as
      this value is already used by libaio as a pointer through which the
      library accesses memory for aio meta-data.
      So, to make restore work we need to make sure that
      a) ring is mapped at desired virtual address
      b) kioctx->user_id matches this value
      Having said that, the patch makes mremap() on aio region update the
      kioctx's user_id and mmap_base values.
      Here is the second issue I mentioned at the beginning of this mail.
      If (regardless of the C/R dances I do) someone creates an io context
      with io_setup(), then mremap()-s the ring and then destroys the context,
      the kill_ioctx() routine will call munmap() on the wrong (old) address.
      This will result in a) the aio ring remaining in memory and b) some other
      vma getting unexpectedly unmapped.
      What do you think?
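The userspace side of such a restore can be sketched with real mmap()/mremap() calls (Linux-specific, `_GNU_SOURCE`; the helper name is ours): map a page, then move it wholesale to another address with MREMAP_MAYMOVE | MREMAP_FIXED, the operation this patch makes safe for aio rings.

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <string.h>

/* Map one anonymous page, then move it to a different address with
 * mremap(MREMAP_MAYMOVE | MREMAP_FIXED) -- the same style of call a
 * checkpoint-restore tool would use to put a ring back at the address
 * recorded in the dump.  Returns the new address, or NULL on failure. */
void *move_mapping_sketch(void)
{
    size_t len = 4096;
    void *old = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (old == MAP_FAILED)
        return NULL;
    memset(old, 0xab, len);          /* recognizable contents */

    /* Reserve a destination range; MREMAP_FIXED unmaps it first. */
    void *target = mmap(NULL, len, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (target == MAP_FAILED)
        return NULL;
    void *moved = mremap(old, len, len,
                         MREMAP_MAYMOVE | MREMAP_FIXED, target);
    if (moved == MAP_FAILED)
        return NULL;
    /* Contents travel with the mapping. */
    return ((unsigned char *)moved)[0] == 0xab ? moved : NULL;
}
```

Before this patch, performing that move on an aio ring left the kernel's kioctx pointing at the old address, which is exactly the munmap()-on-the-wrong-vma hazard described above.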
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Acked-by: Dmitry Monakhov <dmonakhov@openvz.org>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
  14. 06 Nov, 2014 1 commit
    • aio: fix incorrect dirty pages accounting when truncating AIO ring buffer · 835f252c
      Gu Zheng authored
      Markus reported that shutting down mysqld (with AIO support, on an
      ext3-formatted hard drive) leads to a negative number of dirty pages
      (an underrun of the counter). The negative number results in a drastic
      reduction of write performance, because the page cache is not used: the
      kernel thinks there are still 2^32 dirty pages outstanding.
      Adding a warning in __dec_zone_state catches this easily:

      static inline void __dec_zone_state(struct zone *zone, enum
      	zone_stat_item item)
      +    WARN_ON_ONCE(item == NR_FILE_DIRTY &&
      	atomic_long_read(&zone->vm_stat[item]) < 0);
      [   21.341632] ------------[ cut here ]------------
      [   21.346294] WARNING: CPU: 0 PID: 309 at include/linux/vmstat.h:242
      [   21.355296] Modules linked in: wutbox_cp sata_mv
      [   21.359968] CPU: 0 PID: 309 Comm: kworker/0:1 Not tainted 3.14.21-WuT #80
      [   21.366793] Workqueue: events free_ioctx
      [   21.370760] [<c0016a64>] (unwind_backtrace) from [<c0012f88>]
      [   21.378562] [<c0012f88>] (show_stack) from [<c03f8ccc>]
      [   21.385840] [<c03f8ccc>] (dump_stack) from [<c0023ae4>]
      [   21.393976] [<c0023ae4>] (warn_slowpath_common) from [<c0023bb8>]
      [   21.402800] [<c0023bb8>] (warn_slowpath_null) from [<c00c0688>]
      [   21.411524] [<c00c0688>] (cancel_dirty_page) from [<c00c080c>]
      [   21.420272] [<c00c080c>] (truncate_inode_page) from [<c00c0a94>]
      [   21.429890] [<c00c0a94>] (truncate_inode_pages_range) from
      [<c00c0f6c>] (truncate_pagecache+0x88/0xac)
      [   21.439252] [<c00c0f6c>] (truncate_pagecache) from [<c00c0fec>]
      [   21.447731] [<c00c0fec>] (truncate_setsize) from [<c013b3a8>]
      [   21.456826] [<c013b3a8>] (put_aio_ring_file.isra.14) from
      [<c013b424>] (aio_free_ring+0x20/0xcc)
      [   21.465660] [<c013b424>] (aio_free_ring) from [<c013b4f4>]
      [   21.473190] [<c013b4f4>] (free_ioctx) from [<c003d8d8>]
      [   21.481132] [<c003d8d8>] (process_one_work) from [<c003e988>]
      [   21.489350] [<c003e988>] (worker_thread) from [<c00448ac>]
      [   21.496621] [<c00448ac>] (kthread) from [<c000ec18>]
      [   21.503884] ---[ end trace 79c4bf42c038c9a1 ]---
      The cause is that we mark the aio ring file pages *DIRTY* via SetPageDirty
      (which bypasses the VFS dirty pages increment) at initialization, and the
      aio fs uses *default_backing_dev_info* as the backing dev, which does not
      disable the dirty pages accounting capability.
      So truncating the aio ring file contributes to the dirty pages accounting
      (VFS dirty pages decrement), and the underrun occurs.
      The original goal was to keep these pages in memory (so they cannot be
      reclaimed or swapped) for their lifetime, by marking them dirty. But on
      further thought, we have already pinned the pages by elevating the page
      refcount, which achieves that goal on its own, so the SetPageDirty seems
      unnecessary.
      To fix the issue, use __set_page_dirty_no_writeback instead of the nop
      .set_page_dirty, and drop the SetPageDirty (don't manually set the dirty
      flag, don't disable set_page_dirty(), rely on the default behaviour).
      With the above change, dirty pages accounting works correctly. But as we
      know, the aio fs is an anonymous one that should never cause any real
      writeback, so we can skip the dirty pages (writeback) accounting entirely
      by disabling that capability. We therefore introduce an aio-private
      backing dev info (with the ACCT_DIRTY/WRITEBACK/ACCT_WB capabilities
      disabled) to replace the default one.
      Reported-by: Markus Königshaus <m.koenigshaus@wut.de>
      Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
      Cc: stable <stable@vger.kernel.org>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
  15. 24 Sep, 2014 1 commit
    • percpu_ref: add PERCPU_REF_INIT_* flags · 2aad2a86
      Tejun Heo authored
      With the recent addition of percpu_ref_reinit(), percpu_ref now can be
      used as a persistent switch which can be turned on and off repeatedly
      where turning off maps to killing the ref and waiting for it to drain;
      however, there currently isn't a way to initialize a percpu_ref in its
      off (killed and drained) state, which can be inconvenient for certain
      persistent switch use cases.
      Similarly, percpu_ref_switch_to_atomic/percpu() allow dynamic
      selection of operation mode; however, currently a newly initialized
      percpu_ref is always in percpu mode making it impossible to avoid the
      latency overhead of switching to atomic mode.
      This patch adds @flags to percpu_ref_init() and implements the
      following flags.
      * PERCPU_REF_INIT_ATOMIC	: start ref in atomic mode
      * PERCPU_REF_INIT_DEAD		: start ref killed and drained
      These flags should be able to serve the above two use cases.
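A toy model of the two flags, with a single plain counter standing in for the whole percpu machinery (all names here are invented for the sketch, not the kernel's API):

```c
#include <stdbool.h>

#define REF_INIT_ATOMIC_SKETCH (1u << 0)  /* start in atomic mode */
#define REF_INIT_DEAD_SKETCH   (1u << 1)  /* start killed and drained */

/* Toy single-counter stand-in for percpu_ref. */
struct ref_sketch {
    long count;
    bool atomic_mode;
    bool dead;
};

void ref_init_sketch(struct ref_sketch *ref, unsigned flags)
{
    /* A dead ref starts drained (count 0); a live one holds the
     * initial reference. */
    ref->count = (flags & REF_INIT_DEAD_SKETCH) ? 0 : 1;
    ref->dead = (flags & REF_INIT_DEAD_SKETCH) != 0;
    /* DEAD implies atomic: a dead ref cannot be in percpu mode. */
    ref->atomic_mode =
        (flags & (REF_INIT_ATOMIC_SKETCH | REF_INIT_DEAD_SKETCH)) != 0;
}

/* "Reinit" flips the persistent switch back on without reallocating. */
void ref_reinit_sketch(struct ref_sketch *ref)
{
    ref->count = 1;
    ref->dead = false;
}
```

The persistent-switch use case is then: init with the DEAD flag, and reinit only when the switch is first turned on, avoiding a kill/drain cycle at startup.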
      v2: target_core_tpg.c conversion was missing.  Fixed.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Kent Overstreet <kmo@daterainc.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
  16. 08 Sep, 2014 1 commit
    • percpu-refcount: add @gfp to percpu_ref_init() · a34375ef
      Tejun Heo authored
      Percpu allocator now supports allocation mask.  Add @gfp to
      percpu_ref_init() so that !GFP_KERNEL allocation masks can be used
      with percpu_refs too.
      This patch doesn't make any functional difference.
      v2: blk-mq conversion was missing.  Updated.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Kent Overstreet <koverstreet@google.com>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Jens Axboe <axboe@kernel.dk>
  17. 04 Sep, 2014 1 commit
  18. 02 Sep, 2014 1 commit
    • aio: add missing smp_rmb() in read_events_ring · 2ff396be
      Jeff Moyer authored
      We ran into a case on ppc64 running mariadb where io_getevents would
      return zeroed out I/O events.  After adding instrumentation, it became
      clear that there was some missing synchronization between reading the
      tail pointer and the events themselves.  This small patch fixes the
      problem in testing.
      Thanks to Zach for helping to look into this, and suggesting the fix.
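The pairing the patch restores can be modelled in userspace with C11 atomics, where memory_order_release/memory_order_acquire play the roles of smp_wmb()/smp_rmb(): the writer fills the event slot before bumping the tail, and the reader's acquire load of the tail orders the event read after it. Names and the one-slot "ring" are illustrative.

```c
#include <pthread.h>
#include <stdatomic.h>

/* One-slot "ring": the writer fills the event, then publishes it by
 * bumping the tail.  Without the release/acquire pairing (the C11
 * analogue of smp_wmb()/smp_rmb()), the reader could observe the new
 * tail but a stale, zeroed event -- the symptom described above. */
static long event_slot;
static atomic_uint tail_idx;

static void *producer(void *arg)
{
    (void)arg;
    event_slot = 1234;                                         /* write event */
    atomic_store_explicit(&tail_idx, 1, memory_order_release); /* publish */
    return NULL;
}

long consume_one_event(void)
{
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    /* Spin until the tail moves; the acquire load orders the
     * subsequent event read after the tail read. */
    while (atomic_load_explicit(&tail_idx, memory_order_acquire) == 0)
        ;
    pthread_join(t, NULL);
    return event_slot;   /* ordered after the tail load: never stale */
}
```

On strongly-ordered x86 the missing barrier rarely bites, which is consistent with the bug surfacing on ppc64, a weakly-ordered architecture.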
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: stable@vger.kernel.org
  19. 24 Aug, 2014 1 commit
    • aio: fix reqs_available handling · d856f32a
      Benjamin LaHaise authored
      As reported by Dan Aloni, commit f8567a38 ("aio: fix aio request
      leak when events are reaped by userspace") introduces a regression when
      user code attempts to perform io_submit() with more events than are
      available in the ring buffer.  Reverting that commit would reintroduce a
      regression when user space event reaping is used.
      Fixing this bug is a bit more involved than the previous attempts to fix
      this regression.  Since we do not have a single point at which we can
      count events as being reaped by user space and io_getevents(), we have
      to track event completion by looking at the number of events left in the
      event ring.  So long as there are as many events in the ring buffer as
      there have been completion events generated, we cannot call
      put_reqs_available().  The code to check for this is now placed in
      refill_reqs_available().
      A test program from Dan, modified by me to verify this bug, is
      available at http://www.kvack.org/~bcrl/20140824-aio_bug.c
      Reported-by: Dan Aloni <dan@kernelim.com>
      Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
      Acked-by: Dan Aloni <dan@kernelim.com>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Mateusz Guzik <mguzik@redhat.com>
      Cc: Petr Matousek <pmatouse@redhat.com>
      Cc: stable@vger.kernel.org      # v3.16 and anything that f8567a38 was backported to
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  20. 24 Jul, 2014 4 commits
  21. 22 Jul, 2014 1 commit
  22. 14 Jul, 2014 1 commit
  23. 28 Jun, 2014 2 commits
    • percpu-refcount: require percpu_ref to be exited explicitly · 9a1049da
      Tejun Heo authored
      Currently, a percpu_ref undoes percpu_ref_init() automatically by
      freeing the allocated percpu area when the percpu_ref is killed.
      While seemingly convenient, this has the following niggles.
      * It's impossible to re-init a released reference counter without
        going through re-allocation.
      * In the similar vein, it's impossible to initialize a percpu_ref
        count with static percpu variables.
      * We need and have an explicit destructor anyway for failure paths -
        percpu_ref_cancel_init().
      This patch removes the automatic percpu counter freeing in
      percpu_ref_kill_rcu() and repurposes percpu_ref_cancel_init() into a
      generic destructor now named percpu_ref_exit().  percpu_ref_destroy()
      was considered, but it gets confusing with percpu_ref_kill(), while
      "exit" clearly indicates that it's the counterpart of
      percpu_ref_init().
      percpu_ref_exit() instead and explicit percpu_ref_exit() calls are
      added to the destruction path of all percpu_ref users.
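The init/exit pairing can be sketched with an ordinary heap counter (all names invented for the sketch): killing no longer frees the storage, so the same object can be killed and re-inited repeatedly, and the explicit exit routine does the one actual free.

```c
#include <stdlib.h>
#include <stdbool.h>

/* Toy counter mirroring the percpu_ref_init()/percpu_ref_exit()
 * pairing described above: "killing" the counter no longer frees its
 * storage, so it can be killed and re-inited without reallocation. */
struct counter_sketch {
    long *storage;      /* stands in for the allocated percpu area */
    bool killed;
};

int counter_init_sketch(struct counter_sketch *c)
{
    if (!c->storage) {             /* allocate only on first init */
        c->storage = malloc(sizeof(*c->storage));
        if (!c->storage)
            return -1;
    }
    *c->storage = 1;
    c->killed = false;
    return 0;
}

void counter_kill_sketch(struct counter_sketch *c)
{
    c->killed = true;              /* note: no free() here */
}

/* The explicit destructor -- the counterpart of init. */
void counter_exit_sketch(struct counter_sketch *c)
{
    free(c->storage);
    c->storage = NULL;
}
```

Separating kill from exit is what makes the "persistent switch" pattern possible, at the cost of every user having to call the destructor on its teardown path.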
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Kent Overstreet <kmo@daterainc.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Nicholas A. Bellinger <nab@linux-iscsi.org>
      Cc: Li Zefan <lizefan@huawei.com>
    • percpu-refcount, aio: use percpu_ref_cancel_init() in ioctx_alloc() · 55c6c814
      Tejun Heo authored
      ioctx_alloc() reaches inside percpu_ref and directly frees
      ->pcpu_count in its failure path, which is quite gross.  percpu_ref
      has been providing a proper interface to do this,
      percpu_ref_cancel_init(), for quite some time now.  Let's use it
      instead.
      This patch doesn't introduce any behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Kent Overstreet <kmo@daterainc.com>
  24. 24 Jun, 2014 1 commit