1. 19 Oct, 2013 2 commits
    • ftrace: Get rid of ftrace_graph_filter_enabled · 9aa72b4b
      Namhyung Kim authored
      The ftrace_graph_filter_enabled flag means that the user has set a function
      filter, and it always carries the same meaning as ftrace_graph_count > 0,
      so the flag can be dropped and the count checked directly.
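      A minimal sketch of the equivalence (the helper name here is hypothetical;
      only ftrace_graph_count and the removed flag come from the patch):

        /* what ftrace_graph_filter_enabled used to track */
        static inline bool graph_filter_is_set(void)
        {
                return ftrace_graph_count > 0;
        }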
      
      Link: http://lkml.kernel.org/r/1381739066-7531-2-git-send-email-namhyung@kernel.org
      
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix potential out-of-bounds in trace_get_user() · 057db848
      Steven Rostedt authored
      Andrey reported the following:
      
      ERROR: AddressSanitizer: heap-buffer-overflow on address ffff8800359c99f3
      ffff8800359c99f3 is located 0 bytes to the right of 243-byte region [ffff8800359c9900, ffff8800359c99f3)
      Accessed by thread T13003:
        #0 ffffffff810dd2da (asan_report_error+0x32a/0x440)
        #1 ffffffff810dc6b0 (asan_check_region+0x30/0x40)
        #2 ffffffff810dd4d3 (__tsan_write1+0x13/0x20)
        #3 ffffffff811cd19e (ftrace_regex_release+0x1be/0x260)
        #4 ffffffff812a1065 (__fput+0x155/0x360)
        #5 ffffffff812a12de (____fput+0x1e/0x30)
        #6 ffffffff8111708d (task_work_run+0x10d/0x140)
        #7 ffffffff810ea043 (do_exit+0x433/0x11f0)
        #8 ffffffff810eaee4 (do_group_exit+0x84/0x130)
        #9 ffffffff810eafb1 (SyS_exit_group+0x21/0x30)
        #10 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Allocated by thread T5167:
        #0 ffffffff810dc778 (asan_slab_alloc+0x48/0xc0)
        #1 ffffffff8128337c (__kmalloc+0xbc/0x500)
        #2 ffffffff811d9d54 (trace_parser_get_init+0x34/0x90)
        #3 ffffffff811cd7b3 (ftrace_regex_open+0x83/0x2e0)
        #4 ffffffff811cda7d (ftrace_filter_open+0x2d/0x40)
        #5 ffffffff8129b4ff (do_dentry_open+0x32f/0x430)
        #6 ffffffff8129b668 (finish_open+0x68/0xa0)
        #7 ffffffff812b66ac (do_last+0xb8c/0x1710)
        #8 ffffffff812b7350 (path_openat+0x120/0xb50)
        #9 ffffffff812b8884 (do_filp_open+0x54/0xb0)
        #10 ffffffff8129d36c (do_sys_open+0x1ac/0x2c0)
        #11 ffffffff8129d4b7 (SyS_open+0x37/0x50)
        #12 ffffffff81928782 (system_call_fastpath+0x16/0x1b)
      
      Shadow bytes around the buggy address:
        ffff8800359c9700: fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd fd
        ffff8800359c9780: fd fd fd fd fd fd fd fd fa fa fa fa fa fa fa fa
        ffff8800359c9800: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9880: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      =>ffff8800359c9980: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[03]fb
        ffff8800359c9a00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9a80: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
        ffff8800359c9b00: fa fa fa fa fa fa fa fa 00 00 00 00 00 00 00 00
        ffff8800359c9b80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        ffff8800359c9c00: 00 00 00 00 00 00 00 00 fa fa fa fa fa fa fa fa
      Shadow byte legend (one shadow byte represents 8 application bytes):
        Addressable:           00
        Partially addressable: 01 02 03 04 05 06 07
        Heap redzone:          fa
        Heap kmalloc redzone:  fb
        Freed heap region:     fd
        Shadow gap:            fe
      
      The out-of-bounds access happens on 'parser->buffer[parser->idx] = 0;'
      
      Although the crash happened in ftrace_regex_release(), the real bug
      occurred in trace_get_user(), where parser->idx is incremented without a
      check against the size. It is triggered when userspace sends in 128
      characters (EVENT_BUF_SIZE + 1): the loop reads the last character,
      stores it, and then breaks out because there are no more characters.
      The last character is then examined to determine what to do next, and
      the index is incremented without checking the size.
      
      The caller of trace_get_user() then usually nul-terminates the string,
      but since the index is equal to the size, it writes the nul byte just
      past the allocated space, which can corrupt memory.
      
      Luckily, only the root user has write access to this file.
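      A minimal sketch of the kind of bounds check needed when storing a
      character into the parser buffer (placement and return value are
      illustrative, not the exact patch):

        /* leave room for the terminating nul the caller will write */
        if (parser->idx < parser->size - 1)
                parser->buffer[parser->idx++] = ch;
        else
                return -EINVAL;         /* input longer than the buffer */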
      
      Link: http://lkml.kernel.org/r/20131009222323.04fd1a0d@gandalf.local.home
      
      Reported-by: Andrey Konovalov <andreyknvl@google.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  2. 10 Oct, 2013 1 commit
  3. 03 Sep, 2013 1 commit
    • ftrace: Fix a slight race in modifying what function callback gets traced · 59338f75
      Steven Rostedt (Red Hat) authored
      
      
      There's a slight race when going from a list function to a non-list
      function. That is, when only one callback is registered to the function
      tracer, it gets called directly by the mcount trampoline. But if this
      callback has filters, it may be called from the wrong functions during
      the switch.

      The list ops callback, which handles the case of multiple callbacks
      registered with ftrace, also handles which functions each of them is
      called from. While the transition is taking place, always use the list
      function, and only after all the updates are finished (only the
      functions that should be traced are being traced) update the trampoline
      to call the callback directly.
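      Roughly, the choice of what the trampoline calls looks like this (the
      names are illustrative; ftrace_ops_list_func is the multiplexing list
      handler):

        /* while an update is in flight, stay on the list handler, which
         * checks each callback's filter hash before invoking it */
        if (update_in_progress || registered_ops > 1)
                trampoline_target = ftrace_ops_list_func;
        else
                trampoline_target = single_ops->func;   /* safe direct call */
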
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 22 Aug, 2013 5 commits
  5. 03 Aug, 2013 3 commits
  6. 01 Aug, 2013 2 commits
    • tracing/uprobes: Fail to unregister if probe event files are in use · c6c2401d
      Steven Rostedt (Red Hat) authored
      Uprobes suffer from the same problem that kprobes have. There's a race
      between writing to the "enable" file and removing the probe. The removal
      path checks whether the probe is in use and, if it is not, goes about
      deleting the probe and the event that represents it. The problem is that
      after the check the probe can still be enabled, so the deletion of the
      event (the access point to the probe) fails because it is now in use,
      yet the uprobe itself is still deleted. The event is then left
      referencing a uprobe that has been freed.
      
      The fix is to remove the event first, and check to make sure the event
      removal succeeds. Then it is safe to remove the probe.
      
      When the event exists, either ftrace or perf can enable the probe and
      prevent the event from being removed.
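      A sketch of the resulting ordering (the helper names are hypothetical;
      the error propagation is the point):

        /* tear down the event first; if it is busy, keep the probe alive */
        ret = unregister_uprobe_event(tu);
        if (ret)
                return ret;             /* event still in use; nothing freed */
        free_trace_uprobe(tu);          /* only now is the probe removed */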
      
      Link: http://lkml.kernel.org/r/20130704034038.991525256@goodmis.org
      
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing/kprobes: Fail to unregister if probe event files are in use · 40c32592
      Steven Rostedt (Red Hat) authored
      When a probe is being removed, it cleans up the event files that correspond
      to the probe. But there is a race between writing to one of these files
      and deleting the probe. This is especially true for the "enable" file.
      
      	CPU 0				CPU 1
      	-----				-----
      
      				  fd = open("enable",O_WRONLY);
      
        probes_open()
        release_all_trace_probes()
        unregister_trace_probe()
        if (trace_probe_is_enabled(tp))
      	return -EBUSY
      
      				   write(fd, "1", 1)
      				   __ftrace_set_clr_event()
      				   call->class->reg()
      				    (kprobe_register)
      				     enable_trace_probe(tp)
      
        __unregister_trace_probe(tp);
        list_del(&tp->list)
        unregister_probe_event(tp) <-- fails!
        free_trace_probe(tp)
      
      				   write(fd, "0", 1)
      				   __ftrace_set_clr_event()
      				   call->class->unreg
      				    (kprobe_register)
      				    disable_trace_probe(tp) <-- BOOM!
      
      A test program was written that used two threads to simulate the above
      scenario, adding a nanosleep() interval to vary the timings. After
      several thousand runs it was able to trigger this bug and crash:
      
      BUG: unable to handle kernel paging request at 00000005000000f9
      IP: [<ffffffff810dee70>] probes_open+0x3b/0xa7
      PGD 7808a067 PUD 0
      Oops: 0000 [#1] PREEMPT SMP
      Dumping ftrace buffer:
      ---------------------------------
      Modules linked in: ipt_MASQUERADE sunrpc ip6t_REJECT nf_conntrack_ipv6
      CPU: 1 PID: 2070 Comm: test-kprobe-rem Not tainted 3.11.0-rc3-test+ #47
      Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
      task: ffff880077756440 ti: ffff880076e52000 task.ti: ffff880076e52000
      RIP: 0010:[<ffffffff810dee70>]  [<ffffffff810dee70>] probes_open+0x3b/0xa7
      RSP: 0018:ffff880076e53c38  EFLAGS: 00010203
      RAX: 0000000500000001 RBX: ffff88007844f440 RCX: 0000000000000003
      RDX: 0000000000000003 RSI: 0000000000000003 RDI: ffff880076e52000
      RBP: ffff880076e53c58 R08: ffff880076e53bd8 R09: 0000000000000000
      R10: ffff880077756440 R11: 0000000000000006 R12: ffffffff810dee35
      R13: ffff880079250418 R14: 0000000000000000 R15: ffff88007844f450
      FS:  00007f87a276f700(0000) GS:ffff88007d480000(0000) knlGS:0000000000000000
      CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
      CR2: 00000005000000f9 CR3: 0000000077262000 CR4: 00000000000007e0
      Stack:
       ffff880076e53c58 ffffffff81219ea0 ffff88007844f440 ffffffff810dee35
       ffff880076e53ca8 ffffffff81130f78 ffff8800772986c0 ffff8800796f93a0
       ffffffff81d1b5d8 ffff880076e53e04 0000000000000000 ffff88007844f440
      Call Trace:
       [<ffffffff81219ea0>] ? security_file_open+0x2c/0x30
       [<ffffffff810dee35>] ? unregister_trace_probe+0x4b/0x4b
       [<ffffffff81130f78>] do_dentry_open+0x162/0x226
       [<ffffffff81131186>] finish_open+0x46/0x54
       [<ffffffff8113f30b>] do_last+0x7f6/0x996
       [<ffffffff8113cc6f>] ? inode_permission+0x42/0x44
       [<ffffffff8113f6dd>] path_openat+0x232/0x496
       [<ffffffff8113fc30>] do_filp_open+0x3a/0x8a
       [<ffffffff8114ab32>] ? __alloc_fd+0x168/0x17a
       [<ffffffff81131f4e>] do_sys_open+0x70/0x102
       [<ffffffff8108f06e>] ? trace_hardirqs_on_caller+0x160/0x197
       [<ffffffff81131ffe>] SyS_open+0x1e/0x20
       [<ffffffff81522742>] system_call_fastpath+0x16/0x1b
      Code: e5 41 54 53 48 89 f3 48 83 ec 10 48 23 56 78 48 39 c2 75 6c 31 f6 48 c7
      RIP  [<ffffffff810dee70>] probes_open+0x3b/0xa7
       RSP <ffff880076e53c38>
      CR2: 00000005000000f9
      ---[ end trace 35f17d68fc569897 ]---
      
      The unregister_probe_event() must be done first, and if it fails it must
      fail the removal of the kprobe.
      
      Several changes have already been made by Oleg Nesterov and Masami Hiramatsu
      to allow moving the unregister_probe_event() before the removal of
      the probe and exit the function if it fails. This prevents the tp
      structure from being used after it is freed.
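      A sketch of the ordering, using the function names from the race diagram
      above (the exact error handling is simplified):

        /* remove the event first and bail out if its files are in use */
        if (unregister_probe_event(tp) < 0)
                return -EBUSY;          /* tp stays registered, not freed */
        __unregister_trace_probe(tp);
        list_del(&tp->list);
        free_trace_probe(tp);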
      
      Link: http://lkml.kernel.org/r/20130704034038.819592356@goodmis.org
      
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  7. 31 Jul, 2013 3 commits
    • tracing: Add comment to describe special break case in probe_remove_event_call() · 2ba64035
      Steven Rostedt (Red Hat) authored
      
      
      The "break" used in the do_for_each_event_file() is used as an optimization
      as the loop is really a double loop. The loop searches all event files
      for each trace_array. There's only one matching event file per trace_array
      and after we find the event file for the trace_array, the break is used
      to jump to the next trace_array and start the search there.
      
      As this is not a standard way of using "break" in C code, it requires
      a comment right before the break to let people know what is going on.
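      Expanded roughly, the macro pair amounts to a nested loop of this shape
      (field names are approximate):

        list_for_each_entry(tr, &ftrace_trace_arrays, list) {
                list_for_each_entry(file, &tr->events, list) {
                        if (file->event_call != call)
                                continue;
                        /* found the one file for this trace_array; break out
                         * of the inner loop only and move on to the next tr */
                        break;
                }
        }
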
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: trace_remove_event_call() should fail if call/file is in use · 2816c551
      Oleg Nesterov authored
      Change trace_remove_event_call(call) to return an error if this call is
      active. This is what the callers assume but cannot verify outside of the
      tracing locks. Both trace_kprobe.c and trace_uprobe.c need additional
      changes: unregister_trace_probe() should abort if
      trace_remove_event_call() fails.
      
      The caller is going to free this call/file so we must ensure that
      nobody can use them after trace_remove_event_call() succeeds.
      debugfs should be fine after the previous changes, and event_remove()
      does TRACE_REG_UNREGISTER, but there are still two reasons why we need
      the additional checks (sketched after this list):
      
      - There could be a perf_event(s) attached to this tp_event, so the
        patch checks ->perf_refcount.
      
      - TRACE_REG_UNREGISTER can be suppressed by FTRACE_EVENT_FL_SOFT_MODE,
        so we simply check FTRACE_EVENT_FL_ENABLED protected by event_mutex.
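      Roughly what those two checks amount to (simplified; details such as the
      file lookup and locking are abbreviated):

        /* called under event_mutex */
        if (call->perf_refcount)
                return -EBUSY;          /* perf event(s) still attached */
        do_for_each_event_file(tr, file) {
                if (file->event_call != call)
                        continue;
                if (file->flags & FTRACE_EVENT_FL_ENABLED)
                        return -EBUSY;  /* enabled, possibly via SOFT_MODE */
                break;
        } while_for_each_event_file();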
      
      Link: http://lkml.kernel.org/r/20130729175033.GB26284@redhat.com
      
      Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Check module functions being traced on reload · 8c4f3c3f
      Steven Rostedt (Red Hat) authored
      There's been a nasty bug that would show up and not give much info.
      The bug displayed the following warning:
      
       WARNING: at kernel/trace/ftrace.c:1529 __ftrace_hash_rec_update+0x1e3/0x230()
       Pid: 20903, comm: bash Tainted: G           O 3.6.11+ #38405.trunk
       Call Trace:
        [<ffffffff8103e5ff>] warn_slowpath_common+0x7f/0xc0
        [<ffffffff8103e65a>] warn_slowpath_null+0x1a/0x20
        [<ffffffff810c2ee3>] __ftrace_hash_rec_update+0x1e3/0x230
        [<ffffffff810c4f28>] ftrace_hash_move+0x28/0x1d0
        [<ffffffff811401cc>] ? kfree+0x2c/0x110
        [<ffffffff810c68ee>] ftrace_regex_release+0x8e/0x150
        [<ffffffff81149f1e>] __fput+0xae/0x220
        [<ffffffff8114a09e>] ____fput+0xe/0x10
        [<ffffffff8105fa22>] task_work_run+0x72/0x90
        [<ffffffff810028ec>] do_notify_resume+0x6c/0xc0
        [<ffffffff8126596e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
        [<ffffffff815c0f88>] int_signal+0x12/0x17
       ---[ end trace 793179526ee09b2c ]---
      
      It was finally narrowed down to unloading a module that was being traced.
      
      It was actually more than that. When functions are being traced, there's
      a table of all functions, each with a ref count of the number of active
      tracers attached to that function. When a function trace callback is
      registered to a function, the function's record ref count is incremented.
      When it is unregistered, the ref count is decremented. If an
      inconsistency is detected (the ref count goes below zero), the above
      warning is shown and function tracing is permanently disabled until
      reboot.
      
      The ftrace callback ops holds a hash of functions that it filters on
      (and/or filters off). If the hash is empty, the default is to trace all
      functions (for the filter_hash) or to exclude no functions (for the
      notrace_hash).
      
      When a module is unloaded, it frees the function records that represent
      the module's functions. These records exist on their own pages; that is,
      function records for one module will not exist on the same page as
      function records for other modules or even the core kernel.
      
      Now when a module unloads, the records that represent its functions are
      freed. When the module is loaded again, the records are recreated with a
      default ref count of zero (unless there's a callback that traces all
      functions, in which case they will also be traced and the ref count
      incremented).
      
      The problem is that if an ftrace callback hash includes functions of the
      module being unloaded, those hash entries will not be removed. If the
      module is reloaded in the same location, the hash entries still point
      to the functions of the module but the module's ref counts do not reflect
      that.
      
      With the help of Steve and Joern, we found a reproducer:
      
       Using uinput module and uinput_release function.
      
       cd /sys/kernel/debug/tracing
       modprobe uinput
       echo uinput_release > set_ftrace_filter
       echo function > current_tracer
       rmmod uinput
       modprobe uinput
       # check /proc/modules to see if loaded in same addr, otherwise try again
       echo nop > current_tracer
      
       [BOOM]
      
      The above loads the uinput module, which creates a table of functions that
      can be traced within the module.
      
      We add uinput_release to the filter_hash to trace just that function.
      
      Enable function tracing, which increments the ref count of the record
      associated with uinput_release.
      
      Remove uinput, which frees the records including the one that represents
      uinput_release.
      
      Load the uinput module again (and make sure it's at the same address).
      This recreates the function records all with a ref count of zero,
      including uinput_release.
      
      Disable function tracing, which decrements the ref count for
      uinput_release; the count is now zero because of the module removal and
      reload, and we have a mismatch (a ref count below zero).
      
      The solution is to check all currently tracing ftrace callbacks to see
      if any are tracing any of the module's functions when a module is loaded
      (as is already done for callbacks that trace all functions). If a
      callback happens to have a module function being traced, it increments
      that record's ref count and starts tracing that function.
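      Roughly, the check at module load looks like this for each new record
      rec of the module (the helper names here are illustrative, not the exact
      kernel symbols):

        do_for_each_ftrace_op(ops, ftrace_ops_list) {
                if (ops_traces_all_functions(ops) ||
                    hash_contains(ops->filter_hash, rec->ip)) {
                        rec->flags++;           /* account for this callback */
                        enable_record(rec);     /* start tracing it again */
                }
        } while_for_each_ftrace_op(ops);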
      
      There may be a strange side effect with this: if a module's functions
      were being traced when it was unloaded, then loading a new module at the
      same address may leave that new module's functions being traced. This
      may confuse the user, but it's not a big deal. Another approach would be
      to clear all callback hashes on module unload, but ftrace callbacks that
      are not currently registered can still have hashes that trace the
      module's functions without ftrace knowing about it, and that situation
      can cause the same bug. This solution handles that case too. Another
      benefit of this solution is that it makes it possible to trace a
      module's functions across unload and reload.
      
      Link: http://lkml.kernel.org/r/20130705142629.GA325@redhat.com
      
      Reported-by: Jörn Engel <joern@logfs.org>
      Reported-by: Dave Jones <davej@redhat.com>
      Reported-by: Steve Hodgson <steve@purestorage.com>
      Tested-by: Steve Hodgson <steve@purestorage.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 30 Jul, 2013 7 commits
  9. 26 Jul, 2013 2 commits
    • tracing: Add __tracepoint_string() to export string pointers · 102c9323
      Steven Rostedt (Red Hat) authored
      
      
      There are several tracepoints (mostly in RCU) that reference a string
      pointer and use the print format "%s" to display a string that exists in
      the kernel, instead of copying the actual string into the ring buffer
      (which saves time and ring buffer space).

      But this is a problem for userspace tools that read the binary buffers:
      they see the address of the string but have no access to the string
      itself. The end result is output that looks like:
      
       rcu_dyntick:          ffffffff818adeaa 1 0
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_dyntick:          ffffffff818adeb5 0 140000000000000
       rcu_utilization:      ffffffff8184333b
       rcu_utilization:      ffffffff8184333b
      
      The above is pretty useless when read by the userspace tools. Ideally
      we would want something that looks like this:
      
       rcu_dyntick:          Start 1 0
       rcu_dyntick:          End 0 140000000000000
       rcu_dyntick:          Start 140000000000000 0
       rcu_callback:         rcu_preempt rhp=0xffff880037aff710 func=put_cred_rcu 0/4
       rcu_callback:         rcu_preempt rhp=0xffff880078961980 func=file_free_rcu 0/5
       rcu_dyntick:          End 0 1
      
      trace_printk(), which also stores only the address of the format string
      instead of recording the string into the buffer itself, exports the
      mapping of kernel addresses to format strings via the printk_formats
      file in the debugfs tracing directory.
      
      The tracepoint strings can use this same method and emit their strings
      to the same file, and the userspace tools will then be able to decipher
      the addresses without any modification.
      
      The tracepoint strings need their own section to hold the strings,
      because placing anything in the trace_printk section causes the
      trace_printk() buffers to be allocated. As trace_printk() is only used
      for debugging and should never be left in a production kernel, we can
      not use the trace_printk sections.
      
      Add a new tracepoint_str section that is also examined when producing
      the output of the printk_formats file.
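      A rough sketch of the intended usage pattern (the annotation places a
      pointer to the literal in the new section; the exact macro shape may
      differ):

        /* the string itself stays in the kernel; only its address is traced */
        static const char *start_str __tracepoint_string = "Start";

        trace_rcu_utilization(start_str);   /* decoded via printk_formats */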
      
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove locking trace_types_lock from tracing_reset_all_online_cpus() · 09d8091c
      Steven Rostedt (Red Hat) authored
      Commit a8227415 "tracing: Protect ftrace_trace_arrays list in trace_events.c"
      added taking the trace_types_lock mutex in trace_events.c as there were
      several locations that needed it for protection. Unfortunately, it also
      encapsulated a call to tracing_reset_all_online_cpus() which also takes
      the trace_types_lock, causing a deadlock.
      
      This happens when a module has tracepoints and has been traced. When the
      module is removed, the trace events module notifier grabs the
      trace_types_lock, does a bunch of clean ups, and also clears the buffer
      by calling tracing_reset_all_online_cpus(). This doesn't happen often,
      which explains why it wasn't caught right away.
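      The shape of the deadlock is the classic one for a non-recursive mutex;
      a simplified sketch of the module-removal path before this fix:

        mutex_lock(&trace_types_lock);          /* taken by the notifier */
        tracing_reset_all_online_cpus();        /* tried to take it again */
        mutex_unlock(&trace_types_lock);        /* never reached */

      The fix, per the title, drops the locking inside
      tracing_reset_all_online_cpus(), which implies its callers must already
      hold trace_types_lock.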
      
      Commit a8227415 was marked for stable, which means this must be
      sent to stable too.
      
      Link: http://lkml.kernel.org/r/51EEC646.7070306@broadcom.com
      
      Reported-by: Arend van Spriel <arend@broadcom.com>
      Tested-by: Arend van Spriel <arend@broadcom.com>
      Cc: Alexander Z Lam <azl@google.com>
      Cc: Vaibhav Nagarnaik <vnagarnaik@google.com>
      Cc: David Sharp <dhsharp@google.com>
      Cc: stable@vger.kernel.org # 3.10
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  10. 24 Jul, 2013 8 commits
  11. 19 Jul, 2013 6 commits