1. 24 Nov, 2017 4 commits
  2. 22 Nov, 2017 1 commit
  3. 21 Nov, 2017 2 commits
    • treewide: setup_timer() -> timer_setup() · e99e88a9
      Kees Cook authored
      This converts all remaining cases of the old setup_timer() API into using
      timer_setup(), where the callback argument is the structure already
      holding the struct timer_list. These should have no behavioral changes,
      since they just change which pointer is passed into the callback with
      the same available pointers after conversion. It handles the following
      examples, in addition to some other variations.
      
      Casting from unsigned long:
      
          void my_callback(unsigned long data)
          {
              struct something *ptr = (struct something *)data;
          ...
          }
          ...
          setup_timer(&ptr->my_timer, my_callback, ptr);
      
      and forced object casts:
      
          void my_callback(struct something *ptr)
          {
          ...
          }
          ...
          setup_timer(&ptr->my_timer, my_callback, (unsigned long)ptr);
      
      become:
      
          void my_callback(struct timer_list *t)
          {
              struct something *ptr = from_timer(ptr, t, my_timer);
          ...
          }
          ...
          timer_setup(&ptr->my_timer, my_callback, 0);
      
      Direct fun...
    • block/laptop_mode: Convert timers to use timer_setup() · bca237a5
      Kees Cook authored
      
      
      In preparation for unconditionally passing the struct timer_list pointer to
      all timer callbacks, switch to using the new timer_setup() and from_timer()
      to pass the timer pointer explicitly.
      
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Jeff Layton <jlayton@redhat.com>
      Cc: linux-block@vger.kernel.org
      Cc: linux-mm@kvack.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
  4. 19 Nov, 2017 2 commits
  5. 17 Nov, 2017 1 commit
  6. 16 Nov, 2017 2 commits
  7. 15 Nov, 2017 3 commits
    • block, bfq: move debug blkio stats behind CONFIG_DEBUG_BLK_CGROUP · a33801e8
      Luca Miccio authored
      BFQ currently creates, and updates, its own instance of the whole
      set of blkio statistics that cfq creates. Yet, from the comments
      of Tejun Heo in [1], it turned out that most of these statistics
      are meant/useful only for debugging. This commit makes BFQ create
      the latter, debugging statistics, only if the option
      CONFIG_DEBUG_BLK_CGROUP is set.
      
      By doing so, this commit also gives BFQ a substantial performance
      boost: if CONFIG_DEBUG_BLK_CGROUP is not set, then BFQ has to update
      far fewer statistics, and, in particular, not the heaviest ones to
      update. To give an idea of the benefits: with CONFIG_DEBUG_BLK_CGROUP
      unset, on an Intel i7-4850HQ, with 8 threads doing random I/O in
      parallel on null_blk (configured with 0 latency), the throughput of
      BFQ grows from 310 to 400 KIOPS (+30%). We have measured similar or
      even much higher boosts with other CPUs: e.g., +45% with an ARM
      Cortex-A53 octa-core. Our results were obtained, and can be
      reproduced very easily, with the script in [1].
      
      [1] https://www.spinics.net/lists/linux-block/msg18943.html
      
      Suggested-by: Tejun Heo <tj@kernel.org>
      Suggested-by: Ulf Hansson <ulf.hansson@linaro.org>
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: update blkio stats outside the scheduler lock · 24bfd19b
      Paolo Valente authored
      bfq invokes various blkg_*stats_* functions to update the statistics
      contained in the special files blkio.bfq.* in the blkio controller
      groups, i.e., the I/O accounting related to the proportional-share
      policy provided by bfq. The execution of these functions takes a
      considerable percentage, about 40%, of the total per-request execution
      time of bfq (i.e., of the sum of the execution time of all the bfq
      functions that have to be executed to process an I/O request from its
      creation to its destruction).  This reduces the request-processing
      rate sustainable by bfq noticeably, even on a multicore CPU. In fact,
      the bfq functions that invoke blkg_*stats_* functions cannot be
      executed in parallel with the rest of the code of bfq, because
      both are executed under the same per-device scheduler lock.
      
      To reduce this slowdown, this commit moves, wherever possible, the
      invocation of these functions (more precisely, of the bfq functions
      that invoke blkg_*stats_* functions) outside the critical sections
      protected by the scheduler lock.
      
      With this change, and with all blkio.bfq.* statistics enabled, the
      throughput grows, e.g., from 250 to 310 KIOPS (+25%) on an Intel
      i7-4850HQ, in case of 8 threads doing random I/O in parallel on
      null_blk, with the latter configured with 0 latency. We obtained the
      same or higher throughput boosts, up to +30%, with other processors
      (some figures are reported in the documentation). For our tests, we
      used the script [1], with which our results can be easily reproduced.
      
      NOTE. This commit still protects the invocation of blkg_*stats_*
      functions with the request_queue lock, because the group these
      functions are invoked on may otherwise disappear before or while these
      functions are executed. Fortunately, tests without even this lock
      show, by difference, that the serialization caused by this lock has
      little impact (at most a ~5% throughput reduction).
      
      [1] https://github.com/Algodev-github/IOSpeed
      
      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block, bfq: add missing invocations of bfqg_stats_update_io_add/remove · 614822f8
      Luca Miccio authored
      
      
      bfqg_stats_update_io_add and bfqg_stats_update_io_remove are to be
      invoked, respectively, when an I/O request enters and when an I/O
      request exits the scheduler. Unfortunately, bfq does not fully comply
      with this scheme, because it does not invoke these functions for
      requests that are inserted into or extracted from its priority
      dispatch list. This commit fixes this mistake.

      Tested-by: Lee Tibbert <lee.tibbert@gmail.com>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
      Signed-off-by: Luca Miccio <lucmiccio@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  8. 11 Nov, 2017 17 commits
  9. 04 Nov, 2017 8 commits