1. 03 Aug, 2012 5 commits
    • workqueue: set delayed_work->timer function on initialization · d8e794df
      Tejun Heo authored
      delayed_work->timer.function is currently initialized during
      queue_delayed_work_on().  Export delayed_work_timer_fn() and set
      delayed_work timer function during delayed_work initialization
      together with other fields.
      This ensures the timer function is always valid on an initialized
      delayed_work.  This is to help mod_delayed_work() implementation.
      To detect delayed_work users which diddle with the internal timer,
      trigger WARN if timer function doesn't match on queue.
      Signed-off-by: Tejun Heo <tj@kernel.org>
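The init-time setup and queue-time WARN can be sketched in plain C (a hedged userspace model, not kernel API; `delayed_work_model`, `dwm_init()` and `dwm_queue()` are hypothetical names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* stand-in for delayed_work_timer_fn(): the one timer callback the
 * workqueue core expects on every delayed_work */
static void dw_timer_fn(void *timer_arg) { (void)timer_arg; }

struct delayed_work_model {
    void (*timer_fn)(void *);   /* set once, at init time */
};

/* init-time setup: the timer function is fixed here, together with the
 * other fields, so it is valid on any initialized delayed_work */
static void dwm_init(struct delayed_work_model *dw)
{
    dw->timer_fn = dw_timer_fn;
}

/* queue-time check: WARN (here: report failure) if someone diddled
 * with the internal timer and the function no longer matches */
static bool dwm_queue(struct delayed_work_model *dw)
{
    if (dw->timer_fn != dw_timer_fn) {
        fprintf(stderr, "WARN: unexpected delayed_work timer function\n");
        return false;
    }
    return true;
}
```

Because the function pointer is set at initialization, any code path that sees an initialized delayed_work can rely on it.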
    • workqueue: disable irq while manipulating PENDING · 8930caba
      Tejun Heo authored
      Queueing operations use WORK_STRUCT_PENDING_BIT to synchronize access
      to the target work item.  They first try to claim the bit and proceed
      with queueing only after that succeeds and there's a window between
      PENDING being set and the actual queueing where the task can be
      interrupted or preempted.
      There's also a similar window in process_one_work() when clearing
      PENDING.  A work item is dequeued, gcwq->lock is released and then
      PENDING is cleared and the worker might get interrupted or preempted
      between releasing gcwq->lock and clearing PENDING.
      cancel[_delayed]_work_sync() tries to claim or steal PENDING.  The
      function assumes that a work item with PENDING is either queued or in
      the process of being [de]queued.  In the latter case, it busy-loops
      until either the work item loses PENDING or is queued.  If canceling
      coincides with the above described interrupts or preemptions, the
      canceling task will busy-loop while the queueing or executing task is
      interrupted or preempted.
      This patch keeps irq disabled across claiming PENDING and actual
      queueing and moves PENDING clearing in process_one_work() inside
      gcwq->lock so that busy looping from PENDING && !queued doesn't wait
      for interrupted/preempted tasks.  Note that, in process_one_work(),
      setting the last CPU and clearing PENDING got merged into a single
      operation.
      This removes possible long busy-loops and will allow using
      try_to_grab_pending() from bh and irq contexts.
      v2: __queue_work() was testing preempt_count() to ensure that the
          caller has disabled preemption.  This triggers spuriously if
          !CONFIG_PREEMPT_COUNT.  Use preemptible() instead.  Reported by
          Fengguang Wu.
      v3: Disable irq instead of preemption.  IRQ will be disabled while
          grabbing gcwq->lock later anyway and this allows using
          try_to_grab_pending() from bh and irq contexts.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
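The claim/clear protocol around PENDING can be modeled with C11 atomics (a userspace sketch only — the real code uses test_and_set_bit() on WORK_STRUCT_PENDING_BIT and keeps irqs disabled across the claim-to-queue window, which userspace cannot express):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* one flag standing in for WORK_STRUCT_PENDING_BIT */
static atomic_flag pending = ATOMIC_FLAG_INIT;

/* claim PENDING with test-and-set semantics: exactly one caller wins
 * and proceeds with the actual queueing; the patch keeps irqs disabled
 * from this point until the work item is actually queued */
static bool try_claim_pending(void)
{
    return !atomic_flag_test_and_set(&pending);
}

/* process_one_work() analogue: PENDING is now cleared while still
 * holding gcwq->lock, so a canceler seeing PENDING && !queued never
 * busy-waits on an interrupted or preempted task */
static void clear_pending(void)
{
    atomic_flag_clear(&pending);
}
```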
    • workqueue: add missing smp_wmb() in process_one_work() · 959d1af8
      Tejun Heo authored
      WORK_STRUCT_PENDING is used to claim ownership of a work item and
      process_one_work() releases it before starting execution.  When
      someone else grabs PENDING, all pre-release updates to the work item
      should be visible and all updates made by the new owner should happen
      afterwards.
      Grabbing PENDING uses test_and_set_bit() and thus has a full barrier;
      however, clearing doesn't have a matching wmb.  Given the preceding
      spin_unlock and the use of clear_bit, I don't believe this can be a
      problem on an actual machine and there hasn't been any related report,
      but it still is theoretically possible for clear_pending to permeate
      upwards and happen before the work->entry update.
      Add an explicit smp_wmb() before work_clear_pending().
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: stable@vger.kernel.org
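The barrier pairing can be illustrated with C11 atomics; a release store here plays the role the explicit smp_wmb() plus work_clear_pending() play in the patch (userspace sketch, not the kernel code):

```c
#include <assert.h>
#include <stdatomic.h>

static int work_entry;            /* stand-in for work->entry updates */
static atomic_int pending = 1;    /* stand-in for WORK_STRUCT_PENDING */

/* process_one_work() analogue: all pre-release updates must be visible
 * before PENDING is dropped, so publish with release ordering (the
 * kernel patch uses an explicit smp_wmb() before the plain clear) */
static void finish_and_release(void)
{
    work_entry = 42;                               /* pre-release update */
    atomic_store_explicit(&pending, 0, memory_order_release);
}

/* a new owner that observes PENDING cleared with an acquire load is
 * guaranteed to see the pre-release update */
static int grab(void)
{
    while (atomic_load_explicit(&pending, memory_order_acquire))
        ;
    return work_entry;
}
```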
    • workqueue: make queueing functions return bool · d4283e93
      Tejun Heo authored
      All queueing functions return 1 on success, 0 if the work item was
      already pending.  Update them to return bool instead.  This better
      signifies that they don't return 0 / -errno.
      This is cleanup and doesn't cause any functional difference.
      While at it, fix comment opening for schedule_work_on().
      Signed-off-by: Tejun Heo <tj@kernel.org>
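Under the new convention a queueing function reports only whether the item was newly queued (sketch; `queue_work_model` is a hypothetical stand-in for queue_work() and friends):

```c
#include <assert.h>
#include <stdbool.h>

struct work_model { bool pending; };

/* queueing returns bool: true if the work item was newly queued,
 * false if it was already pending -- never 0 / -errno */
static bool queue_work_model(struct work_model *w)
{
    if (w->pending)
        return false;
    w->pending = true;
    return true;
}
```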
    • workqueue: reorder queueing functions so that _on() variants are on top · 0a13c00e
      Tejun Heo authored
      Currently, queue/schedule[_delayed]_work_on() are located below the
      counterpart without the _on postfix even though the latter is usually
      implemented using the former.  Swap them.
      This is cleanup and doesn't cause any functional difference.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  2. 22 Jul, 2012 1 commit
    • workqueue: fix spurious CPU locality WARN from process_one_work() · 6fec10a1
      Tejun Heo authored
      "workqueue: reimplement CPU online rebinding to handle idle
      workers" added a CPU locality sanity check in process_one_work().  It
      triggers if a worker is executing on a different CPU without UNBOUND
      or REBIND set.
      This works for all normal workers but rescuers can trigger this
      spuriously when they're serving the unbound or a disassociated
      global_cwq - rescuers don't have either flag set and thus their
      gcwq->cpu can be a different value, including %WORK_CPU_UNBOUND.
      Fix it by additionally testing %GCWQ_DISASSOCIATED.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      LKML-Reference: <20120721213656.GA7783@linux.vnet.ibm.com>
  3. 17 Jul, 2012 9 commits
    • workqueue: simplify CPU hotplug code · 8db25e78
      Tejun Heo authored
      With trustee gone, CPU hotplug code can be simplified.
      * gcwq_claim/release_management() now grab and release the gcwq lock
        too and gained _and_lock and _and_unlock postfixes respectively.
      * All CPU hotplug logic was implemented in workqueue_cpu_callback()
        which was called by workqueue_cpu_up/down_callback() for the correct
        priority.  This was because up and down paths shared a lot of logic,
        which is no longer true.  Remove workqueue_cpu_callback() and move
        all hotplug logic into the two actual callbacks.
      This patch doesn't make any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: remove CPU offline trustee · 628c78e7
      Tejun Heo authored
      With the previous changes, a disassociated global_cwq now can run as
      an unbound one on its own - it can create workers as necessary to
      drain remaining works after the CPU has been brought down and manage
      the number of workers using the usual idle timer mechanism, making
      trustee completely redundant except for the actual unbinding
      operation.
      This patch removes the trustee and lets a disassociated global_cwq
      manage itself.  Unbinding is moved to a work item (for CPU affinity)
      which is scheduled and flushed from CPU_DOWN_PREPARE.
      This patch moves nr_running clearing outside gcwq and manager locks to
      simplify the code.  As nr_running is unused at the point, this is
      safe.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: don't butcher idle workers on an offline CPU · 3ce63377
      Tejun Heo authored
      Currently, during CPU offlining, after all pending work items are
      drained, the trustee butchers all workers.  Also, on CPU onlining
      failure, workqueue_cpu_callback() ensures that the first idle worker
      is destroyed.  Combined, these guarantee that an offline CPU doesn't
      have any worker for it once all the lingering work items are finished.
      This guarantee isn't really necessary and makes CPU on/offlining more
      expensive than it needs to be, especially for platforms which use CPU
      hotplug for powersaving.
      This patch removes the idle worker butchering from the trustee and
      lets a CPU which failed onlining keep the created first worker.  The
      first worker is created during CPU_DOWN_PREPARE if the CPU doesn't
      have any and started right away.  If onlining succeeds, the
      rebind_workers() call in CPU_ONLINE will rebind it like any other
      worker.  If onlining fails, the worker is left alone till the next
      try.
      This makes CPU hotplugs cheaper by allowing global_cwqs to keep
      workers across them and simplifies code.
      Note that trustee doesn't re-arm idle timer when it's done and thus
      the disassociated global_cwq will keep all workers until it comes back
      online.  This will be improved by further patches.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: reimplement CPU online rebinding to handle idle workers · 25511a47
      Tejun Heo authored
      Currently, if there are leftover workers when a CPU is being brought
      back online, the trustee kills all idle workers and schedules
      rebind_work so that they re-bind to the CPU after the currently
      executing work is finished.  This works for busy workers because
      concurrency management doesn't try to wake them up from scheduler
      callbacks, which require the target task to be on the local run
      queue.  The busy worker bumps the
      concurrency counter appropriately as it clears WORKER_UNBOUND from the
      rebind work item and it's bound to the CPU before returning to the
      idle state.
      To reduce CPU on/offlining overhead (as many embedded systems use it
      for powersaving) and simplify the code path, workqueue is planned to
      be modified to retain idle workers across CPU on/offlining.  This
      patch reimplements CPU online rebinding such that it can also handle
      idle workers.
      As noted earlier, due to the local wakeup requirement, rebinding idle
      workers is tricky.  All idle workers must be re-bound before scheduler
      callbacks are enabled.  This is achieved by interlocking idle
      re-binding.  Idle workers are requested to re-bind and then hold until
      all idle re-binding is complete so that no bound worker starts
      executing a work item.  Only after all idle workers are re-bound and
      parked, CPU_ONLINE proceeds to release them and queue rebind work item
      to busy workers thus guaranteeing scheduler callbacks aren't invoked
      until all idle workers are ready.
      worker_rebind_fn() is renamed to busy_worker_rebind_fn() and
      idle_worker_rebind() for idle workers is added.  Rebinding logic is
      moved to rebind_workers() and now called from CPU_ONLINE after
      flushing trustee.  While at it, add a CPU sanity check in
      process_one_work().
      Note that now a worker may become idle or the manager between trustee
      release and rebinding during CPU_ONLINE.  As the previous patch
      updated create_worker() so that it can be used by regular manager
      while unbound and this patch implements idle re-binding, this is safe.
      This prepares for removal of trustee and keeping idle workers across
      CPU hotplugs.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: drop @bind from create_worker() · bc2ae0f5
      Tejun Heo authored
      Currently, create_worker()'s callers are responsible for deciding
      whether the newly created worker should be bound to the associated CPU
      and create_worker() sets WORKER_UNBOUND only for the workers for the
      unbound global_cwq.  Creation during normal operation is always via
      maybe_create_worker() and @bind is true.  For workers created during
      hotplug, @bind is false.
      Normal operation path is planned to be used even while the CPU is
      going through hotplug operations or offline and this static decision
      won't work.
      Drop @bind from create_worker() and decide whether to bind by looking
      at GCWQ_DISASSOCIATED.  create_worker() will also set WORKER_UNBOUND
      automatically if disassociated.  To avoid flipping GCWQ_DISASSOCIATED
      while create_worker() is in progress, the flag is now allowed to be
      changed only while holding all manager_mutexes on the global_cwq.
      This requires that GCWQ_DISASSOCIATED is not cleared behind trustee's
      back.  CPU_ONLINE no longer clears DISASSOCIATED before flushing
      trustee, which clears DISASSOCIATED before rebinding remaining workers
      if asked to release.  For cases where trustee isn't around, CPU_ONLINE
      clears DISASSOCIATED after flushing trustee.  Also, now, first_idle
      has UNBOUND set on creation which is explicitly cleared by CPU_ONLINE
      while binding it.  These convolutions will soon be removed by further
      simplification of CPU hotplug path.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: use mutex for global_cwq manager exclusion · 60373152
      Tejun Heo authored
      POOL_MANAGING_WORKERS is used to ensure that at most one worker takes
      the manager role at any given time on a given global_cwq.  Trustee
      later hitched on it to assume the manager role, adding a blocking
      wait for the bit.  As trustee already needed a custom wait mechanism,
      waiting for MANAGING_WORKERS was rolled into the same mechanism.
      Trustee is scheduled to be removed.  This patch separates out
      MANAGING_WORKERS wait into per-pool mutex.  Workers use
      mutex_trylock() to test for manager role and trustee uses mutex_lock()
      to claim manager roles.
      gcwq_claim/release_management() helpers are added to grab and release
      manager roles of all pools on a global_cwq.  gcwq_claim_management()
      always grabs pool manager mutexes in ascending pool index order and
      uses pool index as lockdep subclass.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: ROGUE workers are UNBOUND workers · 403c821d
      Tejun Heo authored
      Currently, WORKER_UNBOUND is used to mark workers for the unbound
      global_cwq and WORKER_ROGUE is used to mark workers for disassociated
      per-cpu global_cwqs.  Both are used to make the marked worker skip
      concurrency management and the only place they make any difference is
      in worker_enter_idle() where WORKER_ROGUE is used to skip scheduling
      idle timer, which can easily be replaced with trustee state testing.
      This patch replaces WORKER_ROGUE uses with WORKER_UNBOUND and drops
      WORKER_ROGUE.  This is to prepare for removing trustee and handling
      disassociated global_cwqs as unbound.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: drop CPU_DYING notifier operation · f2d5a0ee
      Tejun Heo authored
      Workqueue used CPU_DYING notification to mark GCWQ_DISASSOCIATED.
      This was necessary because workqueue's CPU_DOWN_PREPARE happened
      before other DOWN_PREPARE notifiers and workqueue needed to stay
      associated across the rest of DOWN_PREPARE.
      After the previous patch, workqueue's DOWN_PREPARE happens after
      others and can set GCWQ_DISASSOCIATED directly.  Drop CPU_DYING and
      let the trustee set GCWQ_DISASSOCIATED after disabling concurrency
      management.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
    • workqueue: perform cpu down operations from low priority cpu_notifier() · 65758202
      Tejun Heo authored
      Currently, all workqueue cpu hotplug operations run off
      CPU_PRI_WORKQUEUE which is higher than normal notifiers.  This is to
      ensure that workqueue is up and running while bringing up a CPU before
      other notifiers try to use workqueue on the CPU.
      Per-cpu workqueues are supposed to remain working and bound to the CPU
      for normal CPU_DOWN_PREPARE notifiers.  This holds mostly true even
      with workqueue offlining running at a higher priority, because
      workqueue CPU_DOWN_PREPARE only creates a bound trustee thread which
      runs the per-cpu workqueue without concurrency management and without
      explicitly detaching the existing workers.
      However, if the trustee needs to create new workers, it creates
      unbound workers which may wander off to other CPUs while
      CPU_DOWN_PREPARE notifiers are in progress.  Furthermore, if the CPU
      down is cancelled, the per-CPU workqueue may end up with workers which
      aren't bound to the CPU.
      While reliably reproducible with a convoluted artificial test-case
      involving scheduling and flushing CPU burning work items from CPU down
      notifiers, this isn't very likely to happen in the wild, and, even
      when it happens, the effects are likely to be hidden by the following
      successful CPU down.
      Fix it by using different priorities for up and down notifiers - high
      priority for up operations and low priority for down operations.
      Workqueue cpu hotplug operations will soon go through further cleanup.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: stable@vger.kernel.org
      Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
  4. 14 Jul, 2012 2 commits
    • workqueue: reimplement WQ_HIGHPRI using a separate worker_pool · 3270476a
      Tejun Heo authored
      WQ_HIGHPRI was implemented by queueing highpri work items at the head
      of the global worklist.  Other than queueing at the head, they weren't
      handled differently; unfortunately, this could lead to execution
      latency of a few seconds on heavily loaded systems.
      Now that workqueue code has been updated to deal with multiple
      worker_pools per global_cwq, this patch reimplements WQ_HIGHPRI using
      a separate worker_pool.  NR_WORKER_POOLS is bumped to two and
      gcwq->pools[0] is used for normal pri work items and ->pools[1] for
      highpri.  Highpri workers get a -20 nice level and have an 'H' suffix
      in their names.  Note that this change increases the number of
      kworkers per cpu.
      POOL_HIGHPRI_PENDING, pool_determine_ins_pos() and highpri chain
      wakeup code in process_one_work() are no longer used and removed.
      This allows proper prioritization of highpri work items and removes
      high execution latency of highpri work items.
      v2: nr_running indexing bug in get_pool_nr_running() fixed.
      v3: Refreshed for the get_pool_nr_running() update in the previous
          patch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: Josh Hunt <joshhunt00@gmail.com>
      LKML-Reference: <CAKA=qzaHqwZ8eqpLNFjxnO2fX-tgAOjmpvxgBFjv6dJeQaOW1w@mail.gmail.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
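The two-pool layout can be sketched in plain C (a hedged userspace model; `gcwq_model` and `pool_for()` are hypothetical names, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

#define NR_WORKER_POOLS 2   /* pools[0]: normal pri, pools[1]: highpri */

struct worker_pool { int nice; };

struct gcwq_model {
    struct worker_pool pools[NR_WORKER_POOLS];
};

/* highpri work items get their own pool (nice -20 workers) instead of
 * being queued at the head of a shared worklist */
static struct worker_pool *pool_for(struct gcwq_model *g, bool highpri)
{
    return &g->pools[highpri ? 1 : 0];
}
```

Giving highpri items their own worker pool is what removes the multi-second execution latency: they no longer wait behind normal-pri work for a shared worker.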
    • workqueue: introduce NR_WORKER_POOLS and for_each_worker_pool() · 4ce62e9e
      Tejun Heo authored
      Introduce NR_WORKER_POOLS and for_each_worker_pool() and convert code
      paths which need to manipulate all pools in a gcwq to use them.
      NR_WORKER_POOLS is currently one and for_each_worker_pool() iterates
      over only @gcwq->pool.
      Note that nr_running is a per-pool property, converted to an array
      with NR_WORKER_POOLS elements and renamed to pool_nr_running.  Note
      that get_pool_nr_running() currently assumes 0 index.  The next patch
      will make use of non-zero indexes.
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      v2: nr_running indexing bug in get_pool_nr_running() fixed.
      v3: Pointer to array is stupid.  Don't use it in get_pool_nr_running()
          as suggested by Linus.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Fengguang Wu <fengguang.wu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
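The iterator can be sketched as a macro over a fixed-size array (userspace model; the exact shape is hypothetical, not the kernel's definition):

```c
#include <assert.h>

#define NR_WORKER_POOLS 1

struct worker_pool { int nr_running; };
struct gcwq_model { struct worker_pool pools[NR_WORKER_POOLS]; };

/* iterate every pool in a gcwq; with NR_WORKER_POOLS == 1 this walks
 * only pools[0], matching the single-pool behaviour before highpri */
#define for_each_worker_pool(pool, gcwq) \
    for ((pool) = &(gcwq)->pools[0];     \
         (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)

static int count_pools(struct gcwq_model *g)
{
    struct worker_pool *pool;
    int n = 0;

    for_each_worker_pool(pool, g)
        n++;
    return n;
}
```

Bumping NR_WORKER_POOLS later (as the WQ_HIGHPRI patch does) extends every such loop with no other changes, which is the point of the conversion.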
  5. 12 Jul, 2012 4 commits
    • workqueue: separate out worker_pool flags · 11ebea50
      Tejun Heo authored
      GCWQ_MANAGE_WORKERS, GCWQ_MANAGING_WORKERS and GCWQ_HIGHPRI_PENDING
      are per-pool properties.  Add worker_pool->flags and make the above
      three flags per-pool flags.
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: use @pool instead of @gcwq or @cpu where applicable · 63d95a91
      Tejun Heo authored
      Modify all functions which deal with per-pool properties to pass
      around @pool instead of @gcwq or @cpu.
      The changes in this patch are mechanical and don't cause any
      functional difference.  This is to prepare for multiple pools per
      gcwq.
      Signed-off-by: Tejun Heo <tj@kernel.org>
    • workqueue: factor out worker_pool from global_cwq · bd7bdd43
      Tejun Heo authored
      Move worklist and all worker management fields from global_cwq into
      the new struct worker_pool.  worker_pool points back to the containing
      gcwq.  worker and cpu_workqueue_struct are updated to point to
      worker_pool instead of gcwq too.
      This change is mechanical and doesn't introduce any functional
      difference other than rearranging of fields and an added level of
      indirection in some places.  This is to prepare for multiple pools
      per gcwq.
      v2: Comment typo fixes as suggested by Namhyung.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
    • workqueue: don't use WQ_HIGHPRI for unbound workqueues · 974271c4
      Tejun Heo authored
      Unbound wqs aren't concurrency-managed and try to execute work items
      as soon as possible.  This is currently achieved by implicitly setting
      %WQ_HIGHPRI on all unbound workqueues; however, WQ_HIGHPRI
      implementation is about to be restructured and this usage won't be
      valid anymore.
      Add an explicit chain-wakeup path for unbound workqueues in
      process_one_work() instead of piggybacking on %WQ_HIGHPRI.
      Signed-off-by: Tejun Heo <tj@kernel.org>
  6. 15 May, 2012 1 commit
    • lockdep: fix oops in processing workqueue · 4d82a1de
      Peter Zijlstra authored
      Under memory load, on x86_64, with lockdep enabled, the workqueue's
      process_one_work() has been seen to oops in __lock_acquire(), barfing
      on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].
      Because it's permissible to free a work_struct from its callout function,
      the map used is an onstack copy of the map given in the work_struct: and
      that copy is made without any locking.
      Surprisingly, gcc (4.5.1 in Hugh's case) uses "rep movsl" rather than
      "rep movsq" for that structure copy: which might race with a workqueue
      user's wait_on_work() doing lock_map_acquire() on the source of the
      copy, putting a pointer into the class_cache[], but only in time for
      the top half of that pointer to be copied to the destination map.
      Boom when process_one_work() subsequently does lock_map_acquire()
      on its onstack copy of the lockdep_map.
      Fix this, and a similar instance in call_timer_fn(), with a
      lockdep_copy_map() function which additionally NULLs the class_cache[].
      Note: this oops was actually seen on 3.4-next, where flush_work() newly
      does the racing lock_map_acquire(); but Tejun points out that 3.4 and
      earlier are already vulnerable to the same through wait_on_work().
      * Patch originally from Peter.  Hugh modified it a bit and wrote the
        description.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Reported-by: Hugh Dickins <hughd@google.com>
      LKML-Reference: <alpine.LSU.2.00.1205070951170.1544@eggly.anvils>
      Signed-off-by: Tejun Heo <tj@kernel.org>
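The shape of the fix can be sketched in userspace C (`lockdep_map_model` and `lockdep_copy_map_model()` are hypothetical stand-ins for the real lockdep structures):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NR_CACHING_CLASSES 2   /* size of the class_cache[] stand-in */

struct lockdep_map_model {
    const char *name;
    void *class_cache[NR_CACHING_CLASSES];
};

/* make the onstack copy and NULL class_cache[]: even if the source's
 * cache pointers are being written concurrently and the copy tears,
 * the half-copied pointers can never be dereferenced afterwards */
static void lockdep_copy_map_model(struct lockdep_map_model *to,
                                   const struct lockdep_map_model *from)
{
    memcpy(to, from, sizeof(*to));
    memset(to->class_cache, 0, sizeof(to->class_cache));
}
```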
  7. 14 May, 2012 1 commit
    • workqueue: skip nr_running sanity check in worker_enter_idle() if trustee is active · 544ecf31
      Tejun Heo authored
      worker_enter_idle() has WARN_ON_ONCE() which triggers if nr_running
      isn't zero when every worker is idle.  This can trigger spuriously
      while a cpu is going down due to the way trustee sets %WORKER_ROGUE
      and zaps nr_running.
      It first sets %WORKER_ROGUE on all workers without updating
      nr_running, releases gcwq->lock, schedules, regrabs gcwq->lock and
      then zaps nr_running.  If the last running worker enters idle in
      between, it would see the stale nr_running which hasn't been zapped
      yet and trigger the WARN_ON_ONCE().
      Fix it by performing the sanity check iff the trustee is idle.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reported-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: stable@vger.kernel.org
  8. 23 Apr, 2012 1 commit
    • workqueue: Catch more locking problems with flush_work() · 0976dfc1
      Stephen Boyd authored
      If a workqueue is flushed with flush_work() lockdep checking can
      be circumvented. For example:
       static DEFINE_MUTEX(mutex);

       static void my_work(struct work_struct *w)
       {
               mutex_lock(&mutex);
               mutex_unlock(&mutex);
       }
       static DECLARE_WORK(work, my_work);

       static int __init start_test_module(void)
       {
               schedule_work(&work);
               return 0;
       }

       static void __exit stop_test_module(void)
       {
               /* flush the work item while holding the mutex */
               mutex_lock(&mutex);
               flush_work(&work);
               mutex_unlock(&mutex);
       }
      would not always print a lockdep warning when flush_work() was called.
      In this trivial example nothing could go wrong since we are
      guaranteed module_init() and module_exit() don't run concurrently,
      but if the work item is scheduled asynchronously we could have a
      scenario where the work item is running just at the time flush_work()
      is called resulting in a classic ABBA locking problem.
      Add a lockdep hint by acquiring and releasing the work item
      lockdep_map in flush_work() so that we always catch this
      potential deadlock scenario.
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Reviewed-by: Yong Zhang <yong.zhang0@gmail.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
  9. 16 Apr, 2012 1 commit
  10. 12 Mar, 2012 1 commit
  11. 02 Mar, 2012 1 commit
    • Block: use a freezable workqueue for disk-event polling · 62d3c543
      Alan Stern authored
      This patch (as1519) fixes a bug in the block layer's disk-events
      polling.  The polling is done by a work routine queued on the
      system_nrt_wq workqueue.  Since that workqueue isn't freezable, the
      polling continues even in the middle of a system sleep transition.
      Obviously, polling a suspended drive for media changes and such isn't
      a good thing to do; in the case of USB mass-storage devices it can
      lead to real problems requiring device resets and even re-enumeration.
      The patch fixes things by creating a new system-wide, non-reentrant,
      freezable workqueue and using it for disk-events polling.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      CC: <stable@kernel.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  12. 11 Jan, 2012 1 commit
  13. 31 Oct, 2011 1 commit
    • kernel: Map most files to use export.h instead of module.h · 9984de1a
      Paul Gortmaker authored
      The changed files were only including linux/module.h for the
      EXPORT_SYMBOL infrastructure, and nothing else.  Revector them
      onto the isolated export header for faster compile times.
      Nothing to see here but a whole lot of instances of:
        -#include <linux/module.h>
        +#include <linux/export.h>
      This commit is only changing the kernel dir; next targets
      will probably be mm, fs, the arch dirs, etc.
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  14. 15 Sep, 2011 1 commit
  15. 20 May, 2011 1 commit
    • workqueue: separate out drain_workqueue() from destroy_workqueue() · 9c5a2ba7
      Tejun Heo authored
      There are users which want to drain workqueues without destroying
      them.
      Separate out drain functionality from destroy_workqueue() into
      drain_workqueue() and make it accessible to workqueue users.
      To guarantee forward-progress, only chain queueing is allowed while
      drain is in progress.  If a new work item which isn't chained from the
      running or pending work items is queued while draining is in progress,
      WARN_ON_ONCE() is triggered.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
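The chain-only rule can be modeled as follows (userspace sketch; `wq_model` and `queue_during_drain()` are hypothetical names):

```c
#include <assert.h>
#include <stdbool.h>

struct wq_model {
    bool draining;
    int  warned;     /* WARN_ON_ONCE() counter analogue */
};

/* while draining, only chain queueing (a running or pending work item
 * re-queueing work on the same workqueue) is allowed; any other queue
 * attempt trips the warning and is rejected, guaranteeing the drain
 * loop makes forward progress */
static bool queue_during_drain(struct wq_model *wq, bool chained)
{
    if (wq->draining && !chained) {
        wq->warned++;
        return false;
    }
    return true;
}
```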
  16. 29 Apr, 2011 1 commit
  17. 31 Mar, 2011 1 commit
  18. 24 Mar, 2011 1 commit
    • percpu: Always align percpu output section to PAGE_SIZE · 0415b00d
      Tejun Heo authored
      Percpu allocator honors alignment requests up to PAGE_SIZE and both
      the percpu addresses in the percpu address space and the translated
      kernel addresses should be aligned accordingly.  The calculation of
      the former depends on the alignment of the percpu output section in
      the kernel image.
      The linker script macros PERCPU_VADDR() and PERCPU() are used to
      define this output section and the latter takes an @align parameter.
      Several architectures are using @align smaller than PAGE_SIZE,
      breaking percpu memory alignment.
      This patch removes @align parameter from PERCPU(), renames it to
      PERCPU_SECTION() and makes it always align to PAGE_SIZE.  While at it,
      add PCPU_SETUP_BUG_ON() checks such that alignment problems are
      reliably detected and remove the percpu alignment comment recently
      added in workqueue.c, as the condition would trigger BUG way before
      reaching there.
      For um, this patch raises the alignment of percpu area.  As the area
      is in .init, there shouldn't be any noticeable difference.
      This problem was discovered by David Howells while debugging boot
      failure on mn10300.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Cc: uclinux-dist-devel@blackfin.uclinux.org
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: user-mode-linux-devel@lists.sourceforge.net
  19. 23 Mar, 2011 1 commit
  20. 08 Mar, 2011 1 commit
    • debugobjects: Add hint for better object identification · 99777288
      Stanislaw Gruszka authored
      In complex subsystems like mac80211 structures can contain several
      timers and work structs, so identifying a specific instance from the
      call trace and object type output of debugobjects can be hard.
      Allow the subsystems which support debugobjects to provide a hint
      function.  This function returns a pointer to a kernel address
      (preferably the object's callback function) which is printed along
      with the debugobjects type.
      Add hint methods for timer_list, work_struct and hrtimer.
      [ tglx: Massaged changelog, made it compile ]
      Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
      LKML-Reference: <20110307085809.GA9334@redhat.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
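A hint method of this shape might look like the following (userspace sketch; `work_model` and `work_debug_hint()` are hypothetical stand-ins mirroring the work_struct hint):

```c
#include <assert.h>
#include <stddef.h>

/* stand-in for a work item; its callback identifies the instance */
struct work_model {
    void (*func)(struct work_model *);
};

static void my_work_fn(struct work_model *w) { (void)w; }

/* hint method: return an address (preferably the object's callback)
 * that debugobjects would print along with the object type, so the
 * specific timer/work instance can be identified from the dump */
static const void *work_debug_hint(const void *addr)
{
    const struct work_model *w = addr;
    return (const void *)w->func;
}
```

Printing the callback address lets tools resolve it to a symbol name, which pinpoints the offending instance far better than the bare object type.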
  21. 21 Feb, 2011 1 commit
  22. 16 Feb, 2011 2 commits
  23. 14 Feb, 2011 1 commit