1. 05 Sep, 2018 1 commit
    • nvmet-rdma: fix possible bogus dereference under heavy load · 8407879c
      Sagi Grimberg authored
      Currently we always repost the recv buffer before we send a response
      capsule back to the host. Since ordering is not guaranteed for send
      and recv completions, it is possible that we will receive a new request
      from the host before we get a send completion for the response capsule.
      Today, we pre-allocate twice the queue length of rsps, but in reality,
      under heavy load nothing really prevents the gap from expanding until
      we exhaust all our rsps.
      To fix this, if we don't have any pre-allocated rsps left, we dynamically
      allocate an rsp and make sure to free it when we are done. If, under
      memory pressure, we fail to allocate an rsp, we silently drop the
      command and wait for the host to retry.
      Reported-by: Steve Wise <swise@opengridcomputing.com>
      Tested-by: Steve Wise <swise@opengridcomputing.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      [hch: dropped a superfluous assignment]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
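      The fix described above can be modeled as a free list of pre-allocated
      responses with a dynamic fallback. This is a minimal userspace sketch of
      that pattern, not the kernel's actual code; names like `get_rsp`/`put_rsp`
      and the `allocated` flag are illustrative assumptions:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stddef.h>
      #include <stdlib.h>

      #define POOL_SIZE 4  /* stands in for "2x the queue length" */

      struct rsp {
          bool allocated;      /* true if allocated on demand, not from the pool */
          struct rsp *next;    /* free-list link */
      };

      struct queue {
          struct rsp pool[POOL_SIZE];
          struct rsp *free_list;
      };

      static void queue_init(struct queue *q)
      {
          q->free_list = NULL;
          for (int i = 0; i < POOL_SIZE; i++) {
              q->pool[i].allocated = false;
              q->pool[i].next = q->free_list;
              q->free_list = &q->pool[i];
          }
      }

      /* Take a pre-allocated rsp if one is left; otherwise allocate one
       * dynamically.  NULL under memory pressure means the caller drops
       * the command and relies on the host to retry. */
      static struct rsp *get_rsp(struct queue *q)
      {
          struct rsp *rsp = q->free_list;

          if (rsp) {
              q->free_list = rsp->next;
              return rsp;
          }
          rsp = malloc(sizeof(*rsp));
          if (!rsp)
              return NULL;
          rsp->allocated = true;
          return rsp;
      }

      /* Free a dynamically allocated rsp; return a pool rsp to the free list. */
      static void put_rsp(struct queue *q, struct rsp *rsp)
      {
          if (rsp->allocated) {
              free(rsp);
              return;
          }
          rsp->next = q->free_list;
          q->free_list = rsp;
      }

      int main(void)
      {
          struct queue q;
          struct rsp *held[POOL_SIZE + 1];

          queue_init(&q);

          /* Exhaust the pool, then take one more: only the extra one
           * comes from the dynamic fallback path. */
          for (int i = 0; i < POOL_SIZE + 1; i++)
              held[i] = get_rsp(&q);
          for (int i = 0; i < POOL_SIZE; i++)
              assert(!held[i]->allocated);
          assert(held[POOL_SIZE]->allocated);

          for (int i = 0; i < POOL_SIZE + 1; i++)
              put_rsp(&q, held[i]);
          return 0;
      }
      ```

      The `allocated` flag is what lets the release path distinguish pool
      responses (returned to the free list) from fallback ones (freed).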
  2. 24 Jul, 2018 1 commit
  3. 23 Jul, 2018 3 commits
  4. 18 Jun, 2018 1 commit
  5. 26 Mar, 2018 5 commits
  6. 08 Jan, 2018 2 commits
  7. 06 Jan, 2018 1 commit
  8. 11 Nov, 2017 2 commits
  9. 18 Aug, 2017 1 commit
  10. 28 Jun, 2017 2 commits
  11. 20 May, 2017 1 commit
  12. 04 Apr, 2017 3 commits
  13. 16 Mar, 2017 1 commit
  14. 22 Feb, 2017 2 commits
  15. 26 Jan, 2017 1 commit
  16. 14 Dec, 2016 1 commit
  17. 06 Dec, 2016 2 commits
  18. 14 Nov, 2016 3 commits
  19. 23 Sep, 2016 1 commit
  20. 18 Aug, 2016 1 commit
  21. 16 Aug, 2016 1 commit
  22. 04 Aug, 2016 2 commits
    • nvmet-rdma: Don't use the inline buffer in order to avoid allocation for small reads · 40e64e07
      Sagi Grimberg authored
      Under extreme conditions this might cause data corruptions. By using
      the inline buffer we repost the buffer and then post this same buffer
      for the device to send. If we happen to use shared receive queues, the
      device might write to the buffer before it sends it (there is no
      ordering between send and recv queues). Without SRQs we probably won't
      hit this as long as the host doesn't misbehave and send more than we
      allowed it, but relying on that is not really a good idea.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
    • nvmet-rdma: Correctly handle RDMA device hot removal · d8f7750a
      Sagi Grimberg authored
      When configuring a device attached listener, we may
      see device removal events. In this case we return a
      non-zero return code from the cm event handler, which
      implicitly destroys the cm_id. It is possible that in
      the future the user will remove this listener and
      thereby trigger a second call to rdma_destroy_id on an
      already destroyed cm_id -> BUG.
      In addition, when a queue-bound (active session) cm_id
      generates a DEVICE_REMOVAL event, we must guarantee that
      all resources are cleaned up by the time we return from
      the event handler.
      Introduce nvmet_rdma_device_removal, which addresses
      (or at least attempts to address) both scenarios.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
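      The double-destroy hazard above can be sketched as an ownership
      hand-off: a non-zero return from the event handler means the CM layer
      destroys the cm_id, so the listener's own teardown path must skip its
      explicit destroy. A minimal userspace model of that bookkeeping
      (all names are illustrative, not the rdma_cm API):

      ```c
      #include <assert.h>
      #include <stdbool.h>

      struct cm_id {
          bool destroyed;
      };

      /* Destroying an id twice is exactly the BUG the commit describes,
       * so this model traps it with an assert. */
      static void cm_destroy(struct cm_id *id)
      {
          assert(!id->destroyed);
          id->destroyed = true;
      }

      /* DEVICE_REMOVAL handler: returning non-zero tells the CM layer to
       * destroy the cm_id, so it records that the listener teardown path
       * no longer owns the destroy. */
      static int handle_device_removal(struct cm_id *id, bool *owner_must_destroy)
      {
          (void)id;
          *owner_must_destroy = false;
          return 1;  /* non-zero: CM layer destroys the cm_id for us */
      }

      int main(void)
      {
          struct cm_id id = { .destroyed = false };
          bool owner_must_destroy = true;

          /* Device removal arrives; the implicit destroy happens. */
          if (handle_device_removal(&id, &owner_must_destroy))
              cm_destroy(&id);

          /* Later, the user removes the listener: the second destroy
           * must be skipped, or the assert in cm_destroy would fire. */
          if (owner_must_destroy)
              cm_destroy(&id);

          assert(id.destroyed);
          return 0;
      }
      ```

      The point of the flag is that exactly one of the two paths (implicit
      CM-layer destroy vs. explicit listener teardown) runs the destroy.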
  23. 08 Jul, 2016 1 commit