    blkcg: implement per-blkg request allocation · a051661c
    Tejun Heo authored
    Currently, request_queue has one request_list to allocate requests
    from regardless of blkcg of the IO being issued.  When the unified
    request pool is used up, cfq proportional IO limits become meaningless
    - whoever grabs the next request being freed wins the race regardless
    of the configured weights.
    This can be easily demonstrated by creating a blkio cgroup w/ very low
    weight, putting a program which can issue a lot of random direct IOs
    there, and running a sequential IO from a different cgroup.  As soon as
    the request pool is used up, the sequential IO bandwidth crashes.
    This patch implements per-blkg request_list.  Each blkg has its own
    request_list and any IO allocates its request from the matching blkg
    making blkcgs completely isolated in terms of request allocation.
    * Root blkcg uses the request_list embedded in each request_queue,
      which was renamed to @q->root_rl from @q->rq.  While making blkcg rl
      handling a bit hairier, this enables avoiding most overhead for root
      blkcg.
    * Queue fullness is properly per request_list but bdi isn't blkcg
      aware yet, so congestion state currently just follows the root
      blkcg.  As writeback isn't aware of blkcg yet, this works okay for
      async congestion but readahead may get the wrong signals.  It's
      better than blkcg completely collapsing with shared request_list but
      needs to be improved with future changes.
    * After this change, each block cgroup gets a full request pool making
      resource consumption of each cgroup higher.  This makes allowing
      non-root users to create cgroups less desirable; however, note that
      allowing non-root users to directly manage cgroups is already
      severely broken regardless of this patch - each block cgroup
      consumes kernel memory and skews IO weight (IO weights are not
      hierarchical).
    v2: queue-sysfs.txt updated and patch description updated as suggested
        by Vivek.
    v3: blk_get_rl() wasn't checking error return from
        blkg_lookup_create() and may cause oops on lookup failure.  Fix it
        by falling back to root_rl on blkg lookup failures.  This problem
        was spotted by Rakesh Iyer <rni@google.com>.
    v4: Updated to accommodate 458f27a9 "block: Avoid missed wakeup in
        request waitqueue".  blk_drain_queue() now wakes up waiters on all
        blkg->rl on the target queue.
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Vivek Goyal <vgoyal@redhat.com>
    Cc: Wu Fengguang <fengguang.wu@intel.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>