1. 06 Mar, 2010 1 commit
  2. 28 Feb, 2010 1 commit
    • tracing: Include irqflags headers from trace clock · ae1f3038
      Frederic Weisbecker authored
      
      
      trace_clock.c includes spinlock.h, which ends up including
      asm/system.h, which in turn includes linux/irqflags.h in x86.
      
       So on x86 the definition of raw_local_irq_save happens to be
       pulled in, but this is not the case on parisc:
      
         tip/kernel/trace/trace_clock.c:86: error: implicit declaration of function 'raw_local_irq_save'
         tip/kernel/trace/trace_clock.c:112: error: implicit declaration of function 'raw_local_irq_restore'
      
       We need to include linux/irqflags.h directly from trace_clock.c
       to avoid such build errors.
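       The fix itself is a one-line direct include. A sketch of the hunk
       (the spinlock.h line is stated above; any other neighboring
       include lines are an assumption, not the file's actual context):

```diff
--- a/kernel/trace/trace_clock.c
+++ b/kernel/trace/trace_clock.c
@@
+#include <linux/irqflags.h>
 #include <linux/spinlock.h>
```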
       Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 27 Feb, 2010 3 commits
    • Merge branch 'tip/tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace into tracing/core · 48091742
      Ingo Molnar authored
    • Merge branch 'tracing/core' of git://git.kernel.org/pub/scm/linux/kernel/git/frederic/random-tracing into tracing/core · 6fb83029
      Ingo Molnar authored
    • ftrace: Add function names to dangling } in function graph tracer · f1c7f517
      Steven Rostedt authored
      
      
       The function graph tracer is currently the most invasive tracer
       in the ftrace family. It can easily overflow the buffer even with
       10 megabytes per CPU, which means that events are often lost.
      
       On startup, or after events are lost, if the function return is
       recorded but the function entry was lost, all we get to see is
       the closing '}'.
      
      Here is how a typical trace output starts:
      
       [tracing] cat trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
        0) + 91.897 us   |                  }
        0) ! 567.961 us  |                }
        0)   <========== |
        0) ! 579.083 us  |                _raw_spin_lock_irqsave();
        0)   4.694 us    |                _raw_spin_unlock_irqrestore();
        0) ! 594.862 us  |              }
        0) ! 603.361 us  |            }
        0) ! 613.574 us  |          }
        0) ! 623.554 us  |        }
        0)   3.653 us    |        fget_light();
        0)               |        sock_poll() {
      
       There is a series of '}' with no matching "func() {", and no
       information about which functions these closing brackets belong to.
      
       This patch adds a stack to the per-CPU structure used for
       outputting the function graph trace, to keep track of which
       function was last output. Then, on a function exit event, it
       checks the depth to see whether the exit has a matching entry
       event. If it does, it prints only the '}'; otherwise it adds
       the function name after the '}'.
      
      This allows function exit events to show what function they belong to
      at trace output startup, when the entry was lost due to ring buffer
      overflow, or even after a new task is scheduled in.
      
      Here is what the above trace will look like after this patch:
      
       [tracing] cat trace
       # tracer: function_graph
       #
       # CPU  DURATION                  FUNCTION CALLS
       # |     |   |                     |   |   |   |
        0) + 91.897 us   |                  } (irq_exit)
        0) ! 567.961 us  |                } (smp_apic_timer_interrupt)
        0)   <========== |
        0) ! 579.083 us  |                _raw_spin_lock_irqsave();
        0)   4.694 us    |                _raw_spin_unlock_irqrestore();
        0) ! 594.862 us  |              } (add_wait_queue)
        0) ! 603.361 us  |            } (__pollwait)
        0) ! 613.574 us  |          } (tcp_poll)
        0) ! 623.554 us  |        } (sock_poll)
        0)   3.653 us    |        fget_light();
        0)               |        sock_poll() {
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  4. 26 Feb, 2010 2 commits
  5. 25 Feb, 2010 7 commits
    • tracing: Simplify memory recycle of trace_define_field · 7b60997f
      Wenji Huang authored
      
      
       Stop freeing field->type, since freeing it is not necessary.
       Reviewed-by: Li Zefan <lizf@cn.fujitsu.com>
       Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
       LKML-Reference: <1266997226-6833-5-git-send-email-wenji.huang@oracle.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove unnecessary variable in print_graph_return · c85f3a91
      Wenji Huang authored
      
      
      The "cpu" variable is declared at the start of the function and
      also within a branch, with the exact same initialization.
      
      Remove the local variable of the same name in the branch.
       Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
       LKML-Reference: <1266997226-6833-3-git-send-email-wenji.huang@oracle.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix typo of info text in trace_kprobe.c · a5efd925
      Wenji Huang authored
      
       Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
       LKML-Reference: <1266997226-6833-2-git-send-email-wenji.huang@oracle.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix typo in prof_sysexit_enable() · 6574658b
      Wenji Huang authored
      
       Signed-off-by: Wenji Huang <wenji.huang@oracle.com>
       LKML-Reference: <1266997226-6833-1-git-send-email-wenji.huang@oracle.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Remove CONFIG_TRACE_POWER from kernel config · 1ab83a89
      Li Zefan authored
      
      
      The power tracer has been converted to power trace events.
       Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
       Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
       LKML-Reference: <4B84D50E.4070806@cn.fujitsu.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Fix ftrace_event_call alignment for use with gcc 4.5 · 86c38a31
      Jeff Mahoney authored
      
      
       GCC 4.5 introduces behavior that forces the alignment of structures
       to use the largest possible value. The default value is 32 bytes,
       so if some structures are defined with a 4-byte alignment and
       others aren't declared with an alignment constraint at all, it will
       align at 32 bytes.
       
       For things like the ftrace events, this results in a non-standard
       array. When initializing the ftrace subsystem, we traverse the
       _ftrace_events section and call the initialization callback for
       each event. When the structures are misaligned, we could be
       treating another part of the structure (or the zeroed-out space
       between them) as a function pointer.
       
       This patch forces the alignment of all the ftrace_event_call
       structures to 4 bytes.
       
       Without this patch, the kernel fails to boot very early when built
       with gcc 4.5.
       
       It's trivial to check the alignment of the members of the array, so
       it might be worthwhile to add something to the build system to do
       that automatically. Unfortunately, that only covers this case. I've
       asked one of the gcc developers about adding a warning when this
       condition is seen.
      
      Cc: stable@kernel.org
       Signed-off-by: Jeff Mahoney <jeffm@suse.com>
       LKML-Reference: <4B85770B.6010901@suse.com>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ftrace: Remove memory barriers from NMI code when not needed · 0c54dd34
      Steven Rostedt authored
      
      
      The code in stop_machine that modifies the kernel text has a bit
      of logic to handle the case of NMIs. stop_machine does not prevent
      NMIs from executing, and if an NMI were to trigger on another CPU
      as the modifying CPU is changing the NMI text, a GPF could result.
      
      To prevent the GPF, the NMI calls ftrace_nmi_enter() which may
      modify the code first, then any other NMIs will just change the
      text to the same content which will do no harm. The code that
      stop_machine called must wait for NMIs to finish while it changes
      each location in the kernel. That code may also change the text
      to what the NMI changed it to. The key is that the text will never
      change content while another CPU is executing it.
      
       To make the above work, the call to ftrace_nmi_enter() must also
       do a smp_mb() as well as an atomic_inc(). But for applications
       like perf that require a high number of NMIs for profiling, this
       can have a dramatic effect on the system. Not only does it perform
       a full memory barrier on both nmi_enter() and nmi_exit(), it also
       modifies a global variable with an atomic operation. This kills
       performance on large SMP machines.
      
       Since the memory barriers are only needed when ftrace is in the
       process of modifying the text (which is seldom), this patch adds
       a "modifying_code" variable that gets set before stop_machine is
       executed and cleared afterwards.
       
       The NMIs will check this variable and store it in a per-CPU
       "save_modifying_code" variable, which they then use to decide
       whether to do the memory barriers and atomic dec on NMI exit.
       Acked-by: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  6. 24 Feb, 2010 13 commits
  7. 23 Feb, 2010 13 commits