1. 04 Dec, 2017 1 commit
  2. 24 Oct, 2017 1 commit
    • locking/barriers: Kill lockless_dereference() · 59ecbbe7
      Will Deacon authored
      lockless_dereference() is a nice idea, but it has gained little traction in
      kernel code since its introduction three years ago. This is partly
      because it's a pain to type, but also because using READ_ONCE() instead
      has worked correctly on all architectures apart from Alpha, which is a
      fully supported but somewhat niche architecture these days.
      
      Now that READ_ONCE() has been upgraded to contain an implicit
      smp_read_barrier_depends() and the few callers of lockless_dereference()
      have been converted, we can remove lockless_dereference() altogether.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1508840570-22169-5-git-send-email-will.deacon@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 20 Oct, 2017 2 commits
  4. 09 Oct, 2017 2 commits
  5. 17 Aug, 2017 1 commit
    • doc: Update memory-barriers.txt for read-to-write dependencies · 66ce3a4d
      Paul E. McKenney authored
      The memory-barriers.txt document contains an obsolete passage stating that
      smp_read_barrier_depends() is required to force ordering for read-to-write
      dependencies.  We now know that this is not required, even for DEC Alpha.
      This commit therefore updates this passage to state that read-to-write
      dependencies are respected even without smp_read_barrier_depends().
      Reported-by: Lance Roy <ldr709@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Andrea Parri <parri.andrea@gmail.com>
      Cc: Jade Alglave <j.alglave@ucl.ac.uk>
      Cc: Luc Maranget <luc.maranget@inria.fr>
      [ paulmck: Reference control-dependencies sections and use WRITE_ONCE()
        per Will Deacon.  Correctly place split-cache paragraph while there. ]
      Acked-by: Will Deacon <will.deacon@arm.com>
  6. 10 Aug, 2017 2 commits
    • locking: Remove smp_mb__before_spinlock() · a9668cd6
      Peter Zijlstra authored
      Now that there are no users of smp_mb__before_spinlock() left, remove
      it entirely.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • Documentation/locking/atomic: Add documents for new atomic_t APIs · 706eeb3e
      Peter Zijlstra authored
      Since we've vastly expanded the atomic_t interface in recent years,
      the existing documentation is woefully out of date and people seem
      to get confused a bit.
      
      Start a new document to hopefully better explain the current state of
      affairs.
      
      The old atomic_ops.txt also covers bitmaps and a few more details,
      so this is not a full replacement; we'll keep that document around
      until we've managed to write more text to cover its contents
      entirely.
      
      Also please, ReST people, go away.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  7. 12 Jul, 2017 1 commit
  8. 24 Jun, 2017 1 commit
  9. 08 Jun, 2017 1 commit
  10. 09 May, 2017 1 commit
  11. 12 Apr, 2017 1 commit
  12. 15 Jan, 2017 1 commit
  13. 12 Aug, 2016 3 commits
  14. 17 Jun, 2016 1 commit
  15. 28 Apr, 2016 3 commits
  16. 13 Apr, 2016 6 commits
  17. 14 Mar, 2016 8 commits
  18. 12 Jan, 2016 1 commit
    • asm-generic: implement virt_xxx memory barriers · 6a65d263
      Michael S. Tsirkin authored
      Guests running within virtual machines might be affected by SMP effects even if
      the guest itself is compiled without SMP support.  This is an artifact of
      interfacing with an SMP host while running a UP kernel.  Using mandatory
      barriers for this use case would be possible but is often suboptimal.
      
      In particular, virtio uses a bunch of confusing ifdefs to work around
      this, while xen just uses the mandatory barriers.
      
      To better handle this case, low-level virt_mb() etc. macros are made
      available.  These are implemented trivially using the low-level __smp_xxx
      macros; the purpose of these wrappers is to annotate those specific cases.
      
      These have the same effect as smp_mb() etc when SMP is enabled, but generate
      identical code for SMP and non-SMP systems. For example, virtual machine guests
      should use virt_mb() rather than smp_mb() when synchronizing against a
      (possibly SMP) host.
      Suggested-by: David Miller <davem@davemloft.net>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  19. 05 Dec, 2015 1 commit
  20. 04 Dec, 2015 1 commit
  21. 04 Nov, 2015 1 commit
    • atomic: remove all traces of READ_ONCE_CTRL() and atomic*_read_ctrl() · 105ff3cb
      Linus Torvalds authored
      This seems to be a mis-reading of how alpha memory ordering works, and
      is not backed up by the alpha architecture manual.  The helper functions
      don't do anything special on any other architectures, and the arguments
      that support them being safe on other architectures also argue that they
      are safe on alpha.
      
      Basically, the "control dependency" is between a previous read and a
      subsequent write that is dependent on the value read.  Even if the
      subsequent write is actually done speculatively, there is no way that
      such a speculative write could be made visible to other CPUs until it
      has been committed, which requires validating the speculation.
      
      Note that most weakly ordered architectures (very much including alpha)
      do not guarantee any ordering relationship between two loads that depend
      on each other on a control dependency:
      
          read A;
          if (A == 1)
              read B;
      
      because the conditional may be predicted, and the "read B" may be
      speculatively moved up to before the read of A.  So we require the
      user to insert an smp_rmb() between the two accesses to be correct:
      
          read A;
          if (A == 1) {
              smp_rmb();
              read B;
          }
      
      Alpha is further special in that it can break that ordering even if the
      *address* of B depends on the read of A, because the cacheline that is
      read later may be stale unless you have a memory barrier in between the
      pointer read and the read of the value behind a pointer:
      
          read ptr
          read offset(ptr)
      
      whereas all other weakly ordered architectures guarantee that the data
      dependency (as opposed to just a control dependency) will order the two
      accesses.  As a result, alpha needs a "smp_read_barrier_depends()" in
      between those two reads for them to be ordered.
      
      The control dependency that "READ_ONCE_CTRL()" and "atomic_read_ctrl()"
      had was a control dependency to a subsequent *write*, however, and
      nobody can finalize such a subsequent write without having actually done
      the read.  And were you to write such a value to a "stale" cacheline
      (the way the unordered reads came to be), that would seem to lose the
      write entirely.
      
      So the things that make alpha able to re-order reads even more
      aggressively than other weak architectures do not seem to be relevant
      for a subsequent write.  Alpha memory ordering may be strange, but
      there's no real indication that it is *that* strange.
      
      Also, the alpha architecture reference manual very explicitly talks
      about the definition of "Dependence Constraints" in section 5.6.1.7,
      where a preceding read dominates a subsequent write.
      
      Such a dependence constraint admittedly does not impose a BEFORE (alpha
      architecture term for globally visible ordering), but it does guarantee
      that there can be no "causal loop".  I don't see how you could avoid
      such a loop if another CPU could see the stored value and then impact
      the value of the first read.  Put another way: the read and the write
      could not be seen as being out of order with respect to other CPUs.
      
      So I do not see how these "x_ctrl()" functions can currently be necessary.
      
      I may have to eat my words at some point, but in the absence of clear
      proof that alpha actually needs this, or indeed even an explanation of
      how alpha could _possibly_ need it, I do not believe these functions are
      called for.
      
      And if it turns out that alpha really _does_ need a barrier for this
      case, that barrier still should not be "smp_read_barrier_depends()".
      We'd have to make up some new speciality barrier just for alpha, along
      with the documentation for why it really is necessary.
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul E. McKenney <paulmck@us.ibm.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>