    aio: convert the ioctx list to table lookup v3 · db446a08
    Benjamin LaHaise authored
    
    
    On Wed, Jun 12, 2013 at 11:14:40AM -0700, Kent Overstreet wrote:
    > On Mon, Apr 15, 2013 at 02:40:55PM +0300, Octavian Purdila wrote:
    > > When using a large number of threads performing AIO operations, the
    > > ioctx list may accumulate a significant number of entries, which
    > > causes significant overhead. For example, when running this fio script:
    > >
    > > rw=randrw; size=256k; directory=/mnt/fio; ioengine=libaio; iodepth=1
    > > blocksize=1024; numjobs=512; thread; loops=100
    > >
    > > on an EXT2 filesystem mounted on top of a ramdisk, we can observe up to
    > > 30% of CPU time spent in lookup_ioctx:
    > >
    > >  32.51%  [guest.kernel]  [g] lookup_ioctx
    > >   9.19%  [guest.kernel]  [g] __lock_acquire.isra.28
    > >   4.40%  [guest.kernel]  [g] lock_release
    > >   4.19%  [guest.kernel]  [g] sched_clock_local
    > >   3.86%  [guest.kernel]  [g] local_clock
    > >   3.68%  [guest.kernel]  [g] native_sched_clock
    > >   3.08%  [guest.kernel]  [g] sched_clock_cpu
    > >   2.64%  [guest.kernel]  [g] lock_release_holdtime.part.11
    > >   2.60%  [guest.kernel]  [g] memcpy
    > >   2.33%  [guest.kernel]  [g] lock_acquired
    > >   2.25%  [guest.kernel]  [g] lock_acquire
    > >   1.84%  [guest.kernel]  [g] do_io_submit
    > >
    > > This patch converts the ioctx list to a radix tree. For a performance
    > > comparison the above fio script was run on a 2-socket, 8-core
    > > machine. These are the results (average and %rsd of 10 runs) for the
    > > original list-based implementation and for the radix-tree-based
    > > implementation:
    > >
    > > cores        1          2          4          8          16         32
    > > list         109376 ms  69119 ms   35682 ms   22671 ms   19724 ms   16408 ms
    > > %rsd         0.69%      1.15%      1.17%      1.21%      1.71%      1.43%
    > > radix        73651 ms   41748 ms   23028 ms   16766 ms   15232 ms   13787 ms
    > > %rsd         1.19%      0.98%      0.69%      1.13%      0.72%      0.75%
    > > radix/list   66.12%     65.59%     66.63%     72.31%     77.26%     83.66%
    > >
    > > To consider the impact of the patch on the typical case of having
    > > only one ctx per process, the following fio script was run:
    > >
    > > rw=randrw; size=100m; directory=/mnt/fio; ioengine=libaio; iodepth=1
    > > blocksize=1024; numjobs=1; thread; loops=100
    > >
    > > on the same system and the results are the following:
    > >
    > > list        58892 ms
    > > %rsd        0.91%
    > > radix       59404 ms
    > > %rsd        0.81%
    > > radix/list  100.87%
    >
    > So, I was just doing some benchmarking/profiling to get ready to send
    > out the aio patches I've got for 3.11 - and it looks like your patch is
    > causing a ~1.5% throughput regression in my testing :/
    ... <snip>
    
    I've got an alternate approach for fixing this wart in lookup_ioctx()...
    Instead of using an rbtree, just use the reserved id in the ring buffer
    header to index an array pointing to the ioctx.  It's not finished yet and
    needs to be tidied up, but it's most of the way there.
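
    Roughly, the idea looks like this stripped-down userspace sketch (the
    names ring_hdr and ctx_table are hypothetical stand-ins, not the actual
    kernel code): the id stored in the ring header indexes a small per-mm
    array of ioctx pointers, so the lookup becomes a bounds check plus one
    dereference instead of a list walk.

        /* Illustrative only: table lookup keyed by the id in the ring header. */
        #include <stdio.h>
        #include <stdlib.h>

        struct ring_hdr {               /* stand-in for the ring buffer header */
                unsigned id;            /* reserved id: slot in the ctx table */
        };

        struct ioctx {
                unsigned long user_id;  /* the ctx_id handed back to userspace */
        };

        struct ctx_table {              /* per-mm table (RCU-protected in the kernel) */
                unsigned nr;
                struct ioctx *table[];
        };

        /* O(1): read the id from the ring header, index the table, sanity check. */
        static struct ioctx *lookup_ioctx(struct ctx_table *t, unsigned long ctx_id)
        {
                struct ring_hdr *ring = (struct ring_hdr *)ctx_id;
                unsigned id = ring->id; /* the kernel would use get_user() here */

                if (!t || id >= t->nr)
                        return NULL;

                struct ioctx *ctx = t->table[id];
                /* guard against stale ids: the slot must still match this ctx_id */
                return (ctx && ctx->user_id == ctx_id) ? ctx : NULL;
        }

        int main(void)
        {
                struct ctx_table *t = calloc(1, sizeof(*t) + 4 * sizeof(struct ioctx *));
                struct ring_hdr ring = { .id = 2 };
                struct ioctx ctx = { .user_id = (unsigned long)&ring };

                if (!t)
                        return 1;
                t->nr = 4;
                t->table[2] = &ctx;
                printf("found: %d\n", lookup_ioctx(t, ctx.user_id) == &ctx);
                free(t);
                return 0;
        }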
    
    		-ben
    --
    "Thought is the essence of where you are now."
    --
    kmo> And, a rework of Ben's code, but this was entirely his idea
    kmo>		-Kent
    
    bcrl> And fix the code to use the right mm_struct in kill_ioctx(), and
    actually free the memory.
    
    Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>