1. 19 Feb, 2013 1 commit
  2. 14 Feb, 2013 3 commits
    • workqueue: rename cpu_workqueue to pool_workqueue · 112202d9
      Tejun Heo authored
      
      
      workqueue has moved away from global_cwqs to worker_pools and, with
      the scheduled custom worker pools, workqueues will be associated
      with pools which don't have anything to do with CPUs.  The workqueue
      code went through a significant amount of change recently and a mass
      renaming isn't likely to hurt much additionally.  Let's replace
      'cpu' with 'pool' so that it reflects the current design.
      
      * s/struct cpu_workqueue_struct/struct pool_workqueue/
      * s/cpu_wq/pool_wq/
      * s/cwq/pwq/
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: reimplement is_chained_work() using current_wq_worker() · 8d03ecfe
      Tejun Heo authored
      
      
      is_chained_work() was added before current_wq_worker() and implemented
      its own ham-fisted way of finding out whether %current is a workqueue
      worker - it iterates through all possible workers.
      
      Drop the custom implementation and reimplement using
      current_wq_worker().
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: fix is_chained_work() regression · 1dd63814
      Tejun Heo authored
      c9e7cf27 ("workqueue: move busy_hash from global_cwq to
      worker_pool") incorrectly converted is_chained_work() to use
      get_gcwq() inside for_each_gcwq_cpu() while removing get_gcwq().
      
      As cwq might not exist for all possible workqueue CPUs, @cwq can be
      NULL and the following cwq dereferences can lead to an oops.
      
      Fix it by using for_each_cwq_cpu() instead, which is the better one to
      use anyway as we only need to check pools that the wq is associated
      with.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  3. 07 Feb, 2013 9 commits
    • workqueue: pick cwq instead of pool in __queue_work() · 8594fade
      Lai Jiangshan authored
      
      
      Currently, __queue_work() chooses the pool to queue a work item to and
      then determines cwq from the target wq and the chosen pool.  This is a
      bit backwards in that we can determine cwq first and simply use
      cwq->pool.  This way, we can skip get_std_worker_pool() in queueing
      path which will be a hurdle when implementing custom worker pools.
      
      Update __queue_work() such that it chooses the target cwq and then
      uses cwq->pool instead of the other way around.  While at it, add
      missing {} in an if statement.
      
      This patch doesn't introduce any functional changes.
      
      tj: The original patch had two get_cwq() calls - the first to
          determine the pool by doing get_cwq(cpu, wq)->pool and the second
          to determine the matching cwq from get_cwq(pool->cpu, wq).
          Updated the function such that it chooses cwq instead of pool and
          removed the second call.  Rewrote the description.
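The before/after ordering above can be sketched in user-space C.  The types and function names here are illustrative stand-ins, not the kernel's definitions; the point is simply that once the cwq is in hand, the pool falls out of it for free:

```c
#include <stddef.h>

/* Mock types sketching the relationship; not the kernel's definitions. */
struct worker_pool { int cpu; };
struct cpu_workqueue_struct { struct worker_pool *pool; };

/*
 * Before: __queue_work() chose the pool first, then derived the cwq
 * from the target wq and the chosen pool.  After: choose the cwq first
 * and simply read cwq->pool, skipping get_std_worker_pool() in the
 * queueing path.
 */
static struct worker_pool *pool_for(struct cpu_workqueue_struct *cwq)
{
        return cwq->pool;
}
```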
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: make get_work_pool_id() cheaper · 54d5b7d0
      Lai Jiangshan authored
      
      
      get_work_pool_id() currently first obtains the pool using
      get_work_pool() and then returns pool->id.  For an off-queue work
      item, this involves obtaining the pool ID from work->data,
      performing idr_find() to find the matching pool and then returning
      its pool->id, which of course is the same as the one which went
      into idr_find().
      
      Just open code WORK_STRUCT_CWQ case and directly return pool ID from
      work->data.
      
      tj: The original patch dropped on-queue work item handling and renamed
          the function to offq_work_pool_id().  There isn't much benefit in
          doing so.  Handling it only requires a single if() and we need at
          least BUG_ON(), which is also a branch, even if we drop on-queue
          handling.  Open code WORK_STRUCT_CWQ case and keep the function in
          line with get_work_pool().  Rewrote the description.
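A rough user-space sketch of the cheaper lookup (the bit layout and `MOCK_` names are assumptions for illustration; the kernel's actual constants differ):

```c
#include <stdint.h>

/* Illustrative bit layout, not the kernel's exact values. */
#define MOCK_STRUCT_CWQ       (1UL << 2)  /* set while work->data points to a cwq */
#define MOCK_OFFQ_POOL_SHIFT  5           /* pool ID stored above the flag bits */

struct mock_work { uintptr_t data; };

/*
 * Cheaper get_work_pool_id(): for an off-queue item the pool ID is read
 * straight out of work->data - no idr_find() round-trip just to read
 * back the same ID from pool->id.  The on-queue (CWQ) case stays a
 * single if().
 */
static long mock_get_work_pool_id(const struct mock_work *work)
{
        if (work->data & MOCK_STRUCT_CWQ)
                return -1;  /* would follow cwq->pool->id in the kernel */
        return (long)(work->data >> MOCK_OFFQ_POOL_SHIFT);
}
```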
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: move nr_running into worker_pool · e19e397a
      Tejun Heo authored
      
      
      As nr_running is likely to be accessed from other CPUs during
      try_to_wake_up(), it was kept outside worker_pool; however, while less
      frequent, other fields in worker_pool are accessed from other CPUs
      for, e.g., non-reentrancy check.  Also, with recent pool related
      changes, accessing nr_running matching the worker_pool isn't as simple
      as it used to be.
      
      Move nr_running inside worker_pool.  Keep it aligned to cacheline and
      define CPU pools using DEFINE_PER_CPU_SHARED_ALIGNED().  This should
      give at least the same cacheline behavior.
      
      get_pool_nr_running() is replaced with direct pool->nr_running
      accesses.
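The layout idea can be mimicked in user-space C11 (a 64-byte line size and the `mock_` naming are assumptions; the kernel uses `____cacheline_aligned` and `DEFINE_PER_CPU_SHARED_ALIGNED()` instead):

```c
#include <stdalign.h>
#include <stddef.h>

/*
 * Sketch of the layout change: nr_running lives inside the pool but
 * keeps its own cache line, mirroring the cacheline behavior the
 * per-CPU pools get from DEFINE_PER_CPU_SHARED_ALIGNED().
 */
struct mock_worker_pool {
        int cpu;
        unsigned int flags;
        alignas(64) int nr_running;  /* hot: touched from try_to_wake_up() */
};
```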
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Joonsoo Kim <js1304@gmail.com>
    • workqueue: cosmetic update in try_to_grab_pending() · 16062836
      Tejun Heo authored
      
      
      With the recent is-work-queued-here test simplification, the nested
      if() in try_to_grab_pending() can be collapsed.  Collapse it.
      
      This patch is purely cosmetic.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: simplify is-work-item-queued-here test · 0b3dae68
      Lai Jiangshan authored
      
      
      Currently, determining whether a work item is queued on a locked pool
      involves somewhat convoluted memory barrier dancing.  It goes like the
      following.
      
      * When a work item is queued on a pool, work->data is updated before
        work->entry is linked to the pending list, with a wmb() in between.
      
      * When trying to determine whether a work item is currently queued on
        a pool pointed to by work->data, it locks the pool and looks at
        work->entry.  If work->entry is linked, we then do rmb() and then
        check whether work->data points to the current pool.
      
      This works because work->data can only point to a pool if it
      currently is or was on the pool, and:
      
      * If it currently is on the pool, the tests would obviously succeed.
      
      * If it left the pool, its work->entry was cleared under pool->lock,
        so if we're seeing non-empty work->entry, it has to be from the
        work item being linked on another pool.  Because work->data is
        updated before work->entry is linked, with a wmb() in between, the
        work->data update from another pool is guaranteed to be visible if
        we do rmb() after seeing non-empty work->entry.  So, we either see
        empty work->entry or we see updated work->data pointing to another
        pool.
      
      While this works, it's convoluted, to put it mildly.  With recent
      updates, it's now guaranteed that work->data points to cwq only while
      the work item is queued and that updating work->data to point to cwq
      or back to pool is done under pool->lock, so we can simply test
      whether work->data points to cwq which is associated with the
      currently locked pool instead of the convoluted memory barrier
      dancing.
      
      This patch replaces the memory barrier based "are you still here,
      really?" test with the much simpler "does work->data point to me?"
      test - if work->data points to a cwq which is associated with the
      currently locked pool, the work item is guaranteed to be queued on
      the pool, as work->data can start and stop pointing to such a cwq
      only under pool->lock and the start and stop coincide with queue
      and dequeue.
      
      tj: Rewrote the comments and description.
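The simplified test can be sketched in user-space C (mock types; the names are illustrative and the kernel's real check additionally verifies the PENDING/CWQ flag bits):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mock types; names are illustrative, not the kernel's. */
struct mock_pool { int id; };
struct mock_cwq  { struct mock_pool *pool; };
struct mock_work { uintptr_t data; };  /* points to a cwq only while queued */

/*
 * With @pool->lock held, the work item is queued on @pool iff
 * work->data points to a cwq belonging to @pool.  No wmb()/rmb()
 * pairing is needed: work->data starts and stops pointing to a cwq
 * only under pool->lock, coinciding with queue and dequeue.
 */
static bool mock_queued_on_locked_pool(const struct mock_work *work,
                                       const struct mock_pool *pool)
{
        const struct mock_cwq *cwq = (const struct mock_cwq *)work->data;

        return cwq && cwq->pool == pool;
}
```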
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: make work->data point to pool after try_to_grab_pending() · 4468a00f
      Lai Jiangshan authored
      
      
      We plan to use work->data pointing to cwq as the synchronization
      invariant when determining whether a given work item is on a locked
      pool or not, which requires work->data pointing to cwq only while the
      work item is queued on the associated pool.
      
      With delayed_work updated not to overload work->data for target
      workqueue recording, the only case where we still have off-queue
      work->data pointing to cwq is try_to_grab_pending() which doesn't
      update work->data after stealing a queued work item.  There's no
      reason for try_to_grab_pending() to not update work->data to point to
      the pool instead of cwq, like the normal execution does.
      
      This patch adds set_work_pool_and_keep_pending() which makes
      work->data point to pool instead of cwq but keeps the pending bit
      unlike set_work_pool_and_clear_pending() (surprise!).
      
      After this patch, it's guaranteed that only queued work items point to
      cwqs.
      
      This patch doesn't introduce any visible behavior change.
      
      tj: Renamed the new helper function to match
          set_work_pool_and_clear_pending() and rewrote the description.
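A rough sketch of what the new helper does to work->data (the `MOCK_` bit layout is an assumption for illustration, not the kernel's exact flag bits):

```c
#include <stdint.h>

/* Illustrative flag layout, not the kernel's exact bits. */
#define MOCK_PENDING          (1UL << 0)
#define MOCK_OFFQ_POOL_SHIFT  5

struct mock_work { uintptr_t data; };

/*
 * Sketch of set_work_pool_and_keep_pending(): after stealing a queued
 * item, work->data is rewritten to record the pool ID, while the
 * PENDING bit is preserved (unlike the _clear_pending variant).
 */
static void mock_set_work_pool_and_keep_pending(struct mock_work *work,
                                                long pool_id)
{
        work->data = ((uintptr_t)pool_id << MOCK_OFFQ_POOL_SHIFT) |
                     (work->data & MOCK_PENDING);
}
```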
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: add delayed_work->wq to simplify reentrancy handling · 60c057bc
      Lai Jiangshan authored
      
      
      To avoid executing the same work item from multiple CPUs concurrently,
      a work_struct records the last pool it was on in its ->data so that,
      on the next queueing, the pool can be queried to determine whether the
      work item is still executing or not.
      
      A delayed_work goes through timer before actually being queued on the
      target workqueue and the timer needs to know the target workqueue and
      CPU.  This is currently achieved by modifying delayed_work->work.data
      such that it points to the cwq which points to the target workqueue
      and the last CPU the work item was on.  __queue_delayed_work()
      extracts the last CPU from delayed_work->work.data and then combines
      it with the target workqueue to create new work.data.
      
      The only thing this rather ugly hack achieves is encoding the target
      workqueue into delayed_work->work.data without using a separate
      field, which could be a trade-off one can make; unfortunately, this
      entangles work->data management between the regular workqueue and
      delayed_work code by setting the cwq pointer before the work item is
      actually queued and becomes a hindrance for further improvements of
      work->data handling.
      
      This can be easily made sane by adding a target workqueue field to
      delayed_work.  While delayed_work is used widely in the kernel and
      this does make it a bit larger (<5%), I think this is the right
      trade-off especially given the prospect of much saner handling of
      work->data, which currently involves quite tricky memory barrier
      dancing, and I don't expect to see any measurable effect.
      
      Add delayed_work->wq and drop the delayed_work->work.data overloading.
      
      tj: Rewrote the description.
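The struct change can be sketched as follows (user-space stand-ins for the kernel types; the timer member is omitted for brevity):

```c
/*
 * Sketch of the change: the target workqueue becomes an explicit field
 * instead of being encoded into work.data through the cwq pointer.
 */
struct mock_workqueue { const char *name; };

struct mock_delayed_work {
        struct { unsigned long data; } work;  /* stand-in for work_struct */
        /* struct timer_list timer; omitted in this sketch */
        struct mock_workqueue *wq;  /* new: target wq recorded for the timer */
};
```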
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: make work_busy() test WORK_STRUCT_PENDING first · 038366c5
      Lai Jiangshan authored
      
      
      Currently, work_busy() first tests whether the work has a pool
      associated with it and if not, considers it idle.  This works fine
      even for delayed_work.work queued on timer, as __queue_delayed_work()
      sets cwq on delayed_work.work - a queued delayed_work always has its
      cwq and thus pool associated with it.
      
      However, we're about to update delayed_work queueing and this won't
      hold.  Update work_busy() such that it tests WORK_STRUCT_PENDING
      before the associated pool.  This doesn't make any noticeable behavior
      difference now.
      
      With the work_pending() test moved, the function reads a lot better
      with the "if (!pool)" test flipped to positive.  Flip it.
      
      While at it, lose the comment about now non-existent reentrant
      workqueues.
      
      tj: Reorganized the function and rewrote the description.
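The reorganized flow can be sketched in plain C (mock flags and types; the real work_busy() takes a work_struct and locks the pool around the running check):

```c
#include <stdbool.h>
#include <stddef.h>

#define MOCK_WORK_BUSY_PENDING 1u
#define MOCK_WORK_BUSY_RUNNING 2u

struct mock_pool { bool executing; };

/*
 * Reorganized work_busy(): PENDING is tested before the associated
 * pool, and the pool is consulted only when one exists ("if (pool)"
 * flipped positive).  An item with no pool and no PENDING reports 0.
 */
static unsigned int mock_work_busy(bool pending, const struct mock_pool *pool)
{
        unsigned int ret = 0;

        if (pending)
                ret |= MOCK_WORK_BUSY_PENDING;
        if (pool && pool->executing)
                ret |= MOCK_WORK_BUSY_RUNNING;

        return ret;
}
```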
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: replace WORK_CPU_NONE/LAST with WORK_CPU_END · 6be19588
      Lai Jiangshan authored
      
      
      Now that workqueue has moved away from gcwqs, workqueue no longer has
      the need to have a CPU identifier indicating "no cpu associated" - we
      now use WORK_OFFQ_POOL_NONE instead - and most uses of WORK_CPU_NONE
      are gone.
      
      The only remaining usage is as the end marker for the
      for_each_*wq*() iterators, where the name WORK_CPU_NONE is confusing
      without actual WORK_CPU_NONE usages.  Similarly, WORK_CPU_LAST,
      which equals WORK_CPU_NONE, no longer makes sense.
      
      Replace both WORK_CPU_NONE and LAST with WORK_CPU_END.  This patch
      doesn't introduce any functional difference.
      
      tj: s/WORK_CPU_LAST/WORK_CPU_END/ and rewrote the description.
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  4. 24 Jan, 2013 17 commits
    • workqueue: post global_cwq removal cleanups · 706026c2
      Tejun Heo authored
      
      
      Remove remaining references to gcwq.
      
      * __next_gcwq_cpu() steals __next_wq_cpu() name.  The original
        __next_wq_cpu() became __next_cwq_cpu().
      
      * s/for_each_gcwq_cpu/for_each_wq_cpu/
        s/for_each_online_gcwq_cpu/for_each_online_wq_cpu/
      
      * s/gcwq_mayday_timeout/pool_mayday_timeout/
      
      * s/gcwq_unbind_fn/wq_unbind_fn/
      
      * Drop references to gcwq in comments.
      
      This patch doesn't introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: rename nr_running variables · e6e380ed
      Tejun Heo authored
      
      
      Rename per-cpu and unbound nr_running variables such that they match
      the pool variables.
      
      This patch doesn't introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: remove global_cwq · a60dc39c
      Tejun Heo authored
      
      
      global_cwq is now nothing but a container for per-cpu standard
      worker_pools.  Declare the worker pools directly as
      cpu/unbound_std_worker_pools[] and remove global_cwq.
      
      * ____cacheline_aligned_in_smp moved from global_cwq to worker_pool.
        This probably would have made sense even before this change as we
        want each pool to be aligned.
      
      * get_gcwq() is replaced with std_worker_pools() which returns the
        pointer to the standard pool array for a given CPU.
      
      * __alloc_workqueue_key() updated to use get_std_worker_pool() instead
        of open-coding pool determination.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      
      v2: Joonsoo pointed out that it'd be better to align struct
          worker_pool rather than the array so that every pool is aligned.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
    • workqueue: remove worker_pool->gcwq · 4e8f0a60
      Tejun Heo authored
      
      
      The only remaining user of pool->gcwq is std_worker_pool_pri().
      Reimplement it using get_gcwq() and remove worker_pool->gcwq.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: replace for_each_worker_pool() with for_each_std_worker_pool() · 38db41d9
      Tejun Heo authored
      
      
      for_each_std_worker_pool() takes @cpu instead of @gcwq.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: make freezing/thawing per-pool · a1056305
      Tejun Heo authored
      
      
      Instead of holding locks from both pools and then processing the
      pools together, make freezing/thawing per-pool - grab the locks of
      one pool, process it, release it and then proceed to the next pool.
      
      While this patch changes processing order across pools, order within
      each pool remains the same.  As each pool is independent, this
      shouldn't break anything.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: make hotplug processing per-pool · 94cf58bb
      Tejun Heo authored
      
      
      Instead of holding locks from both pools and then processing the pools
      together, make hotplug processing per-pool - grab locks of one pool,
      process it, release it and then proceed to the next pool.
      
      rebind_workers() is updated to take and process @pool instead of @gcwq
      which results in a lot of de-indentation.  gcwq_claim_assoc_and_lock()
      and its counterpart are replaced with in-line per-pool locking.
      
      While this patch changes processing order across pools, order within
      each pool remains the same.  As each pool is independent, this
      shouldn't break anything.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: move global_cwq->lock to worker_pool · d565ed63
      Tejun Heo authored
      
      
      Move gcwq->lock to pool->lock.  The conversion is mostly
      straight-forward.  Things worth noting are
      
      * In many places, this removes the need to use gcwq completely.  pool
        is used directly instead.  get_std_worker_pool() is added to help
        some of these conversions.  This also leaves get_work_gcwq() without
        any user.  Removed.
      
      * In hotplug and freezer paths, the pools belonging to a CPU are often
        processed together.  This patch makes those paths hold locks of all
        pools, with highpri lock nested inside, to keep the conversion
        straight-forward.  These nested lockings will be removed by
        following patches.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: move global_cwq->cpu to worker_pool · ec22ca5e
      Tejun Heo authored
      
      
      Move gcwq->cpu to pool->cpu.  This introduces a couple places where
      gcwq->pools[0].cpu is used.  These will soon go away as gcwq is
      further reduced.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: move busy_hash from global_cwq to worker_pool · c9e7cf27
      Tejun Heo authored
      
      
      There's no functional necessity for the two pools on the same CPU to
      share the busy hash table.  It's also likely to be a bottleneck when
      implementing pools with user-specified attributes.
      
      This patch makes busy_hash per-pool.  The conversion is mostly
      straight-forward.  Changes worth noting are,
      
      * The large block of changes in rebind_workers() is moving the block
        inside for_each_worker_pool() as there now are separate hash
        tables for each pool.  This changes the order of operations but
        doesn't break anything.

      * The three for_each_worker_pool() loops in gcwq_unbind_fn() are
        combined into one.  This again changes the order of operations but
        doesn't break anything.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: record pool ID instead of CPU in work->data when off-queue · 7c3eed5c
      Tejun Heo authored
      
      
      Currently, when a work item is off-queue, work->data records the CPU
      it was last on, which is used to locate the last executing instance
      for non-reentrance, flushing, etc.
      
      We're in the process of removing global_cwq and making worker_pool the
      top level abstraction.  This patch makes work->data point to the pool
      it was last associated with instead of CPU.
      
      After the previous WORK_OFFQ_POOL_CPU and worker_pool->id additions,
      the conversion is fairly straight-forward.  WORK_OFFQ constants and
      functions are modified to record and read back the pool ID instead.
      worker_pool_by_id() is added to allow looking up a pool from its ID.
      get_work_pool() replaces get_work_gcwq(), which is reimplemented using
      get_work_pool().  get_work_pool_id() replaces work_cpu().
      
      This patch shouldn't introduce any observable behavior changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: add worker_pool->id · 9daf9e67
      Tejun Heo authored
      
      
      Add worker_pool->id which is allocated from worker_pool_idr.  This
      will be used to record the last associated worker_pool in work->data.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: introduce WORK_OFFQ_CPU_NONE · 715b06b8
      Tejun Heo authored
      
      
      Currently, when a work item is off-queue, the high bits of its data
      encode the last CPU it was on.  This is scheduled to be changed to a
      pool ID, which will make it impossible to use WORK_CPU_NONE to
      indicate no association.
      
      This patch limits the number of bits which are used for off-queue cpu
      number to 31 (so that the max fits in an int) and uses the highest
      possible value - WORK_OFFQ_CPU_NONE - to indicate no association.
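The encoding can be sketched in user-space C (the `MOCK_` shift and names are assumptions for illustration; on a 32-bit build the real kernel layout is tighter than this sketch assumes):

```c
#include <stdint.h>

/*
 * Assumed layout: the off-queue CPU number occupies 31 bits above the
 * flag bits, with the all-ones value meaning "no CPU associated" so
 * the decoded value always fits in an int.
 */
#define MOCK_OFFQ_FLAG_BITS  5
#define MOCK_OFFQ_CPU_NONE   ((long)((1UL << 31) - 1))

static uintptr_t mock_offq_encode_cpu(long cpu)
{
        return (uintptr_t)cpu << MOCK_OFFQ_FLAG_BITS;
}

static long mock_offq_decode_cpu(uintptr_t data)
{
        return (long)((data >> MOCK_OFFQ_FLAG_BITS) &
                      (uintptr_t)MOCK_OFFQ_CPU_NONE);
}
```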
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: make GCWQ_FREEZING a pool flag · 35b6bb63
      Tejun Heo authored
      
      
      Make GCWQ_FREEZING a pool flag, POOL_FREEZING.  This patch doesn't
      change locking - FREEZING on both pools of a CPU is set or cleared
      together while holding gcwq->lock.  It shouldn't cause any
      functional difference.
      
      This leaves gcwq->flags w/o any flags.  Removed.
      
      While at it, convert BUG_ON()s in freeze_workqueue_begin() and
      thaw_workqueues() to WARN_ON_ONCE().
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: make GCWQ_DISASSOCIATED a pool flag · 24647570
      Tejun Heo authored
      
      
      Make GCWQ_DISASSOCIATED a pool flag, POOL_DISASSOCIATED.  This patch
      doesn't change locking - DISASSOCIATED on both pools of a CPU is set
      or cleared together while holding gcwq->lock.  It shouldn't cause
      any functional difference.
      
      This is part of an effort to remove global_cwq and make worker_pool
      the top level abstraction, which in turn will help implementing worker
      pools with user-specified attributes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: use std_ prefix for the standard per-cpu pools · e34cdddb
      Tejun Heo authored
      
      
      There are currently two worker pools per cpu (including the unbound
      cpu) and they are the only pools in use.  A new class of pools is
      scheduled to be added and some pool related APIs will be added in
      between.  Call the existing pools the standard pools and prefix them
      with std_.  Do this early so that new APIs can use the std_ prefix
      from the beginning.
      
      This patch doesn't introduce any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
    • workqueue: unexport work_cpu() · e2905b29
      Tejun Heo authored
      
      
      This function no longer has any external users.  Unexport it.  It will
      be removed later on.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  5. 18 Jan, 2013 4 commits
    • workqueue: implement current_is_async() · 84b233ad
      Tejun Heo authored
      
      
      This function queries whether %current is an async worker executing an
      async item.  This will be used to implement warning on synchronous
      request_module() from async workers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: move struct worker definition to workqueue_internal.h · 2eaebdb3
      Tejun Heo authored
      
      
      This will be used to implement an inline function to query whether
      %current is a workqueue worker and, if so, allow determining which
      work item it's executing.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
    • workqueue: rename kernel/workqueue_sched.h to kernel/workqueue_internal.h · ea138446
      Tejun Heo authored
      
      
      Workqueue wants to expose more interface internal to kernel/.  Instead
      of adding a new header file, repurpose kernel/workqueue_sched.h.
      Rename it to workqueue_internal.h and add include protector.
      
      This patch doesn't introduce any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
    • workqueue: set PF_WQ_WORKER on rescuers · 111c225a
      Tejun Heo authored
      
      
      PF_WQ_WORKER is used to tell the scheduler that the task is a
      workqueue worker and needs wq_worker_sleeping/waking_up() invoked on
      it for concurrency management.  As rescuers never participate in
      concurrency management, PF_WQ_WORKER wasn't set on them.
      
      There's a need for an interface which can query whether %current is
      executing a work item and if so which.  Such interface requires a way
      to identify all tasks which may execute work items and PF_WQ_WORKER
      will be used for that.  As all normal workers always have PF_WQ_WORKER
      set, we only need to add it to rescuers.
      
      As rescuers start with WORKER_PREP but never clear it, they are
      always NOT_RUNNING and there's no need to worry about them
      interfering with concurrency management even if PF_WQ_WORKER is set;
      however, unlike normal workers, rescuers currently don't have their
      worker struct as kthread_data().  They use the associated
      workqueue_struct instead.  This is problematic as
      wq_worker_sleeping/waking_up() expect struct worker at
      kthread_data().
      
      This patch adds worker->rescue_wq, starts rescuer kthreads with the
      worker struct as kthread_data and sets PF_WQ_WORKER on rescuers.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
  6. 19 Dec, 2012 1 commit
    • workqueue: fix find_worker_executing_work() breakage from hashtable conversion · 023f27d3
      Tejun Heo authored
      42f8570f ("workqueue: use new hashtable implementation") incorrectly
      made busy workers hashed by the pointer value of worker instead of
      work.  This broke find_worker_executing_work() which in turn broke a
      lot of fundamental operations of workqueue - non-reentrancy and
      flushing among others.  The flush malfunction triggered warning in
      disk event code in Fengguang's automated test.
      
       write_dev_root_ (3265) used greatest stack depth: 2704 bytes left
       ------------[ cut here ]------------
       WARNING: at /c/kernel-tests/src/stable/block/genhd.c:1574 disk_clear_events+0x\
      cf/0x108()
       Hardware name: Bochs
       Modules linked in:
       Pid: 3328, comm: ata_id Not tainted 3.7.0-01930-gbff6343 #1167
       Call Trace:
        [<ffffffff810997c4>] warn_slowpath_common+0x83/0x9c
        [<ffffffff810997f7>] warn_slowpath_null+0x1a/0x1c
        [<ffffffff816aea77>] disk_clear_events+0xcf/0x108
        [<ffffffff811bd8be>] check_disk_change+0x27/0x59
        [<ffffffff822e48e2>] cdrom_open+0x49/0x68b
        [<ffffffff81ab0291>] idecd_open+0x88/0xb7
        [<ffffffff811be58f>] __blkdev_get+0x102/0x3ec
        [<ffffffff811bea08>] blkdev_get+0x18f/0x30f
        [<ffffffff811bebfd>] blkdev_open+0x75/0x80
        [<ffffffff8118f510>] do_dentry_open+0x1ea/0x295
        [<ffffffff8118f5f0>] finish_open+0x35/0x41
        [<ffffffff8119c720>] do_last+0x878/0xa25
        [<ffffffff8119c993>] path_openat+0xc6/0x333
        [<ffffffff8119cf37>] do_filp_open+0x38/0x86
        [<ffffffff81190170>] do_sys_open+0x6c/0xf9
        [<ffffffff8119021e>] sys_open+0x21/0x23
        [<ffffffff82c1c3d9>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      023f27d3
  7. 18 Dec, 2012 5 commits
    • workqueue: consider work function when searching for busy work items · a2c1c57b
      Tejun Heo authored
      
      
      To avoid executing the same work item concurrently, workqueue hashes
      currently busy workers according to their current work items and looks
      up the table when it wants to execute a new work item.  If there
      already is a worker which is executing the new work item, the new item
      is queued to the found worker so that it gets executed only after the
      current execution finishes.
      
      Unfortunately, a work item may be freed while being executed and thus
      recycled for different purposes.  If it gets recycled for a different
      work item and queued while the previous execution is still in
      progress, workqueue may make the new work item wait for the old one
      although the two aren't really related in any way.
      
      In extreme cases, this false dependency may lead to deadlock although
      it's extremely unlikely given that there aren't too many self-freeing
      work item users and they usually don't wait for other work items.
      
      To alleviate the problem, record the current work function in each
      busy worker and match it together with the work item address in
      find_worker_executing_work().  While this isn't complete, it ensures
      that unrelated work items don't interact with each other, and in the
      very unlikely case where a twisted wq user triggers it, the false
      dependency is always onto itself, making the culprit easy to spot.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Andrey Isakov <andy51@gmx.ru>
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=51701
      Cc: stable@vger.kernel.org
      a2c1c57b
    • workqueue: use new hashtable implementation · 42f8570f
      Sasha Levin authored
      Switch workqueues to use the new hashtable implementation.  This
      reduces the amount of generic, unrelated code in the workqueue
      implementation.
      
      This patch depends on d9b482c8 ("hashtable: introduce a small and
      naive hashtable"), which was merged in v3.6.
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      42f8570f
    • Merge branch 'akpm' (Andrew's patch-bomb) · 848b8141
      Linus Torvalds authored
      Merge misc patches from Andrew Morton:
       "Incoming:
      
         - lots of misc stuff
      
         - backlight tree updates
      
         - lib/ updates
      
         - Oleg's percpu-rwsem changes
      
         - checkpatch
      
         - rtc
      
         - aoe
      
         - more checkpoint/restart support
      
        I still have a pile of MM stuff pending - Pekka should be merging
        later today after which that is good to go.  A number of other things
        are twiddling thumbs awaiting maintainer merges."
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (180 commits)
        scatterlist: don't BUG when we can trivially return a proper error.
        docs: update documentation about /proc/<pid>/fdinfo/<fd> fanotify output
        fs, fanotify: add @mflags field to fanotify output
        docs: add documentation about /proc/<pid>/fdinfo/<fd> output
        fs, notify: add procfs fdinfo helper
        fs, exportfs: add exportfs_encode_inode_fh() helper
        fs, exportfs: escape nil dereference if no s_export_op present
        fs, epoll: add procfs fdinfo helper
        fs, eventfd: add procfs fdinfo helper
        procfs: add ability to plug in auxiliary fdinfo providers
        tools/testing/selftests/kcmp/kcmp_test.c: print reason for failure in kcmp_test
        breakpoint selftests: print failure status instead of cause make error
        kcmp selftests: print fail status instead of cause make error
        kcmp selftests: make run_tests fix
        mem-hotplug selftests: print failure status instead of cause make error
        cpu-hotplug selftests: print failure status instead of cause make error
        mqueue selftests: print failure status instead of cause make error
        vm selftests: print failure status instead of cause make error
        ubifs: use prandom_bytes
        mtd: nandsim: use prandom_bytes
        ...
      848b8141
    • efi: Fix the build with user namespaces enabled. · 99295618
      Eric W. Biederman authored
      
      
      When compiling efivars.c the build fails with:
      
         CC      drivers/firmware/efivars.o
        drivers/firmware/efivars.c: In function ‘efivarfs_get_inode’:
        drivers/firmware/efivars.c:886:31: error: incompatible types when assigning to type ‘kgid_t’ from type ‘int’
        make[2]: *** [drivers/firmware/efivars.o] Error 1
        make[1]: *** [drivers/firmware/efivars.o] Error 2
      
      Fix the build error by removing the duplicate initialization of i_uid
      and i_gid; inode_init_always() has already initialized them to 0.
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      99295618
    • mm,numa: fix update_mmu_cache_pmd call · ce4a9cc5
      Stephen Rothwell authored
      This build error is currently hidden by the fact that the x86
      implementation of 'update_mmu_cache_pmd()' is a macro that doesn't use
      its last argument, but commit b32967ff ("mm: numa: Add THP migration
      for the NUMA working set scanning fault case") introduced a call with
      the wrong third argument.
      
      In the akpm tree, it causes this build error:
      
        mm/migrate.c: In function 'migrate_misplaced_transhuge_page_put':
        mm/migrate.c:1666:2: error: incompatible type for argument 3 of 'update_mmu_cache_pmd'
        arch/x86/include/asm/pgtable.h:792:20: note: expected 'struct pmd_t *' but argument is of type 'pmd_t'
      
      Fix it.
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ce4a9cc5