1. 07 Dec, 2018 1 commit
  2. 09 Nov, 2018 2 commits
  3. 23 Oct, 2018 1 commit
• iov_iter: Separate type from direction and use accessor functions · aa563d7b
      David Howells authored
      
      
      In the iov_iter struct, separate the iterator type from the iterator
      direction and use accessor functions to access them in most places.
      
Convert a bunch of places to use switch-statements to access them rather
than chains of bitwise-AND statements.  This makes it easier to add further
iterator types.  It can also be more efficient: to implement a switch over
a set of small contiguous integers, the compiler can emit roughly 50% fewer
compare instructions than it needs for the equivalent chain of bitwise-AND
tests.
      
Further, cease passing the iterator type into the iterator setup function.
The setup function can set that itself; only the direction is required.
Signed-off-by: David Howells <dhowells@redhat.com>
      aa563d7b
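
A minimal sketch of the resulting pattern. iov_iter_init(), iov_iter_type(),
iov_iter_rw() and the ITER_* constants match the post-change API; the two
wrapper functions are illustrative only:

#include <linux/uio.h>

/* Dispatch on the iterator type via the new accessor instead of
 * bitwise-AND tests against a combined type/direction field. */
static const char *iter_kind(const struct iov_iter *i)
{
	switch (iov_iter_type(i)) {
	case ITER_IOVEC:	return "user iovec";
	case ITER_KVEC:		return "kernel kvec";
	case ITER_BVEC:		return "bio_vec array";
	case ITER_PIPE:		return "pipe";
	default:		return "unknown";
	}
}

static void iter_setup_example(const struct iovec *iov, size_t len)
{
	struct iov_iter iter;

	/* Only the direction is passed now; iov_iter_init() sets the
	 * ITER_IOVEC type itself, and iov_iter_rw() recovers READ/WRITE. */
	iov_iter_init(&iter, READ, iov, 1, len);
	WARN_ON(iov_iter_rw(&iter) != READ);
}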
  4. 17 Oct, 2018 9 commits
  5. 05 Oct, 2018 1 commit
• nvmet-rdma: use a private workqueue for delete · 2acf70ad
      Sagi Grimberg authored
Queue deletion is done asynchronously when the last reference on the queue
is dropped.  Thus, in order to make sure we don't over-allocate under a
connect/disconnect storm, we let queue deletion complete before making
forward progress.
      
      However, given that we flush the system_wq from rdma_cm context which
      runs from a workqueue context, we can have a circular locking complaint
      [1]. Fix that by using a private workqueue for queue deletion.
      
      [1]:
      ======================================================
      WARNING: possible circular locking dependency detected
      4.19.0-rc4-dbg+ #3 Not tainted
      ------------------------------------------------------
      kworker/5:0/39 is trying to acquire lock:
      00000000a10b6db9 (&id_priv->handler_mutex){+.+.}, at: rdma_destroy_id+0x6f/0x440 [rdma_cm]
      
      but task is already holding lock:
      00000000331b4e2c ((work_completion)(&queue->release_work)){+.+.}, at: process_one_work+0x3ed/0xa20
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #3 ((work_completion)(&queue->release_work)){+.+.}:
             process_one_work+0x474/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #2 ((wq_completion)"events"){+.+.}:
             flush_workqueue+0xf3/0x970
             nvmet_rdma_cm_handler+0x133d/0x1734 [nvmet_rdma]
             cma_ib_req_handler+0x72f/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
      nvmet_rdma:nvmet_rdma_cm_handler: nvmet_rdma: disconnected (10): status 0 id 0000000040357082
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      nvme nvme0: Reconnecting in 10 seconds...
      
      -> #1 (&id_priv->handler_mutex/1){+.+.}:
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             cma_ib_req_handler+0x6aa/0xf90 [rdma_cm]
             cm_process_work+0x2e/0x110 [ib_cm]
             cm_req_handler+0x135b/0x1c30 [ib_cm]
             cm_work_handler+0x2b7/0x38cd [ib_cm]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
      -> #0 (&id_priv->handler_mutex){+.+.}:
             lock_acquire+0xc5/0x200
             __mutex_lock+0xfe/0xbe0
             mutex_lock_nested+0x1b/0x20
             rdma_destroy_id+0x6f/0x440 [rdma_cm]
             nvmet_rdma_release_queue_work+0x8e/0x1b0 [nvmet_rdma]
             process_one_work+0x4ae/0xa20
             worker_thread+0x63/0x5a0
             kthread+0x1cf/0x1f0
             ret_from_fork+0x24/0x30
      
Fixes: 777dc823 ("nvmet-rdma: occasionally flush ongoing controller teardown")
Reported-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
      2acf70ad
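
The shape of the fix, as a hedged sketch: the workqueue name and flags follow
the patch description, while the struct layout and the two wrapper functions
are illustrative. alloc_workqueue(), queue_work() and flush_workqueue() are
the standard workqueue API:

#include <linux/workqueue.h>

struct nvmet_rdma_queue {
	struct work_struct release_work;
	/* ... */
};

static struct workqueue_struct *nvmet_rdma_delete_wq;

static int __init nvmet_rdma_init(void)
{
	/* WQ_MEM_RECLAIM: deletions must make progress under memory
	 * pressure; WQ_UNBOUND: no per-CPU affinity is needed. */
	nvmet_rdma_delete_wq = alloc_workqueue("nvmet-rdma-delete-wq",
					       WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	if (!nvmet_rdma_delete_wq)
		return -ENOMEM;
	return 0;
}

static void nvmet_rdma_schedule_queue_delete(struct nvmet_rdma_queue *queue)
{
	/* was schedule_work(), i.e. system_wq */
	queue_work(nvmet_rdma_delete_wq, &queue->release_work);
}

static void nvmet_rdma_wait_for_deletes(void)
{
	/* was a flush of system_wq from workqueue context, which closed
	 * the dependency cycle in the splat above; flushing only the
	 * private workqueue breaks it */
	flush_workqueue(nvmet_rdma_delete_wq);
}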
  6. 01 Oct, 2018 4 commits
• nvmet: don't split large I/Os unconditionally · 73383adf
      Sagi Grimberg authored
      
      
If we know that the I/O size exceeds our inline bio_vec, there is no
point in starting with it only to split the rest; allocate a suitably
sized bio from the beginning.  We could in theory reuse the inline bio
and only allocate the bio_vec, but it's really not worth optimizing for.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
      73383adf
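
A sketch of the allocation choice under the assumptions above. bio_alloc()
and bio_init() carry their era-correct signatures; the helper name, the
nvmet_req field layout, and the inline-vec constant are approximations of
the patch:

#include <linux/bio.h>

#define NVMET_MAX_INLINE_BIOVEC	8	/* illustrative pool size */

static struct bio *nvmet_bdev_pick_bio(struct nvmet_req *req, int sg_cnt)
{
	struct bio *bio;

	if (sg_cnt <= NVMET_MAX_INLINE_BIOVEC) {
		/* small I/O: the bio embedded in the request suffices */
		bio = &req->b.inline_bio;
		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
	} else {
		/* large I/O: size the bio for the whole transfer up
		 * front instead of letting the block layer split it */
		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
	}
	return bio;
}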
• nvmet_fc: support target port removal with nvmet layer · ea96d649
      James Smart authored
      
      
Currently, if a targetport has been connected to via the nvmet config
(in other words, the add_port() transport routine has been called and the
nvmet port pointer stored for use in upcalls on new I/O), and the
targetport is then removed (say the lldd driver decides to unload or
fully reset its hardware) and then re-added (the lldd driver reloads or
reinitializes its hardware), the port pointer has been lost, so there is
no way to continue posting commands up to nvmet via the transport port.
      
Correct this by allocating a small "port context" structure that the
targetport links to.  The context saves the targetport's WWNs and the
nvmet port pointer to use for it.  The initial allocation occurs when
the targetport is bound to via add_port(), and the context is deallocated
when remove_port() is called.  If a targetport is removed while nvmet
holds an active port context, the targetport is unlinked from the port
context before removal.  If a new targetport is registered, the unbound
port contexts are searched, and if the WWNs match (so it is the same port
from nvmet's point of view) the port context is linked to the new
targetport.  Thus new I/O can be received on the new targetport and
operation with nvmet resumes.
      
Additionally, this also resolves the nvmet configuration changing out from
underneath the nvme-fc target port (for example: an nvmetcli clear).
Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
      ea96d649
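
A hedged sketch of the port-context idea. The structure the patch adds is
along these lines; the field names, list head, and rebind helper here are
approximations, and the transport types are kept opaque:

#include <linux/list.h>
#include <linux/types.h>

struct nvmet_fc_tgtport;	/* transport-private; opaque here */
struct nvmet_port;		/* nvmet core port from add_port() */

struct nvmet_fc_port_entry {
	struct nvmet_fc_tgtport	*tgtport;	/* NULL while unbound */
	struct nvmet_port	*port;		/* saved nvmet port pointer */
	u64			node_name;	/* targetport WWNN */
	u64			port_name;	/* targetport WWPN */
	struct list_head	pe_list;
};

static LIST_HEAD(nvmet_fc_portentry_list);

/* On registration of a new targetport with the given WWNs, re-link any
 * matching unbound context so I/O delivery to nvmet resumes. */
static struct nvmet_fc_port_entry *
nvmet_fc_portentry_rebind(struct nvmet_fc_tgtport *tgtport,
			  u64 node_name, u64 port_name)
{
	struct nvmet_fc_port_entry *pe;

	list_for_each_entry(pe, &nvmet_fc_portentry_list, pe_list) {
		if (!pe->tgtport && pe->node_name == node_name &&
		    pe->port_name == port_name) {
			pe->tgtport = tgtport;
			return pe;
		}
	}
	return NULL;
}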
• Milan P. Gandhi · d4e4230c
• nvmet: remove redundant module prefix · d93cb392
      Chaitanya Kulkarni authored
      
      
This patch removes the redundant module prefix used in the pr_err() when
nvmet_get_smart_log_nsid() fails to find the namespace provided as part
of the smart-log command.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
      d93cb392
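
The redundancy, roughly: nvmet sources define pr_fmt() to prepend the module
name to every pr_*() call, so a literal prefix in the format string printed
it twice. A sketch (the exact message text is illustrative):

#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
#include <linux/printk.h>

static void report_missing_ns(u32 nsid)
{
	/* Before the fix the format string carried its own "nvmet: "
	 * prefix, which pr_fmt() duplicated:
	 *   "nvmet: nvmet: Could not find namespace id : 1"
	 * Dropping it yields the intended single prefix. */
	pr_err("Could not find namespace id : %d\n", nsid);
}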
  7. 17 Sep, 2018 1 commit
  8. 05 Sep, 2018 1 commit
• nvmet-rdma: fix possible bogus dereference under heavy load · 8407879c
      Sagi Grimberg authored
      
      
Currently we always repost the recv buffer before we send a response
capsule back to the host. Since ordering is not guaranteed for send
and recv completions, it is possible that we will receive a new request
from the host before we get a send completion for the response capsule.

Today, we pre-allocate twice as many rsps as the queue length, but in
reality, under heavy load, nothing really prevents the gap from
expanding until we exhaust all our rsps.

To fix this, if we don't have any pre-allocated rsps left, we dynamically
allocate an rsp and make sure to free it when we are done. If, under
memory pressure, we fail to allocate an rsp, we silently drop the command
and wait for the host to retry.
Reported-by: Steve Wise <swise@opengridcomputing.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
[hch: dropped a superfluous assignment]
Signed-off-by: Christoph Hellwig <hch@lst.de>
      8407879c
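
A sketch of the fallback path described above. The struct layouts, the
allocated flag, and the locking details are approximations of the patch:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct nvmet_rdma_queue {
	spinlock_t		rsps_lock;
	struct list_head	free_rsps;	/* pre-allocated pool */
	/* ... */
};

struct nvmet_rdma_rsp {
	struct nvmet_rdma_queue	*queue;
	struct list_head	free_list;
	bool			allocated;	/* true if heap-allocated */
	/* ... */
};

static struct nvmet_rdma_rsp *
nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue)
{
	struct nvmet_rdma_rsp *rsp;
	unsigned long flags;

	spin_lock_irqsave(&queue->rsps_lock, flags);
	rsp = list_first_entry_or_null(&queue->free_rsps,
				       struct nvmet_rdma_rsp, free_list);
	if (likely(rsp))
		list_del(&rsp->free_list);
	spin_unlock_irqrestore(&queue->rsps_lock, flags);

	if (unlikely(!rsp)) {
		/* pool exhausted: fall back to the heap */
		rsp = kzalloc(sizeof(*rsp), GFP_KERNEL);
		if (unlikely(!rsp))
			return NULL;	/* drop; the host will retry */
		rsp->queue = queue;
		rsp->allocated = true;
	}
	return rsp;
}

static void nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp)
{
	unsigned long flags;

	if (unlikely(rsp->allocated)) {
		/* dynamically allocated: free rather than re-list */
		kfree(rsp);
		return;
	}
	spin_lock_irqsave(&rsp->queue->rsps_lock, flags);
	list_add_tail(&rsp->free_list, &rsp->queue->free_rsps);
	spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags);
}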
  9. 28 Aug, 2018 2 commits
  10. 08 Aug, 2018 1 commit
• nvmet: add ns write protect support · dedf0be5
      Chaitanya Kulkarni authored
      
      
      This patch implements the Namespace Write Protect feature described in
      "NVMe TP 4005a Namespace Write Protect". In this version, we implement
      No Write Protect and Write Protect states for target ns which can be
      toggled by set-features commands from the host side.
      
For the write-protect state transition, we need to flush the ns specified
as part of the command, so we also add helpers for carrying out synchronous
flush operations.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
[hch: fixed an incorrect endianness conversion, minor cleanups]
Signed-off-by: Christoph Hellwig <hch@lst.de>
      dedf0be5
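
A hedged sketch of the set-features toggle. The NVME_NS_WRITE_PROTECT /
NVME_NS_NO_WRITE_PROTECT state values come from TP 4005a; the flush helper,
the namespace lookup, and the cdw11 extraction are approximations of the
patch, and locking is omitted:

static u16 nvmet_set_feat_write_protect(struct nvmet_req *req)
{
	u32 state = le32_to_cpu(req->cmd->common.cdw10[1]);	/* cdw11 */
	struct nvmet_ns *ns;
	u16 status = NVME_SC_FEATURE_NOT_CHANGEABLE;

	ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->rw.nsid);
	if (unlikely(!ns))
		return NVME_SC_INVALID_NS | NVME_SC_DNR;

	switch (state) {
	case NVME_NS_WRITE_PROTECT:
		ns->readonly = true;
		/* the transition requires a synchronous ns flush first */
		status = nvmet_write_protect_flush_sync(req);
		if (status)
			ns->readonly = false;	/* flush failed: revert */
		break;
	case NVME_NS_NO_WRITE_PROTECT:
		ns->readonly = false;
		status = 0;
		break;
	}
	return status;
}

With ns->readonly set, the I/O path would then fail writes to the namespace
with a write-protected status rather than submitting them.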
  11. 27 Jul, 2018 5 commits
  12. 25 Jul, 2018 2 commits
  13. 24 Jul, 2018 6 commits
  14. 23 Jul, 2018 4 commits