1. 01 Apr, 2010 2 commits
  2. 30 Mar, 2010 1 commit
  3. 29 Mar, 2010 3 commits
  4. 24 Mar, 2010 5 commits
  5. 23 Mar, 2010 2 commits
    • resources: add interfaces that return conflict information · 66f1207b
      Bjorn Helgaas authored
      
      
      request_resource() and insert_resource() only return success or failure,
      giving the caller no information about which existing resource conflicted
      with the proposed new reservation.  This patch adds request_resource_conflict()
      and insert_resource_conflict(), which return the conflicting resource.
      
      Callers may use this for better error messages or to adjust the new
      resource and retry the request.
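
      A hypothetical caller could use the returned pointer to produce a more
      descriptive error message; a minimal sketch (new_res and the message are
      illustrative, only the *_conflict() interfaces come from this patch):

      	struct resource *conflict;

      	conflict = request_resource_conflict(&iomem_resource, &new_res);
      	if (conflict)
      		printk(KERN_WARNING "request of %pR failed, conflicts with %pR\n",
      		       &new_res, conflict);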
      Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
      Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
    • time: Fix accumulation bug triggered by long delay. · 830ec045
      John Stultz authored
      
      
      The logarithmic accumulation done in the timekeeping code has some overflow
      protection that limits the max shift value. That means it can take
      more than shift loop iterations to accumulate all of the cycles. This causes
      the shift decrement to underflow, which causes the loop to never exit.
      
      The simplest fix would be to simply do:
      	if (shift)
      		shift--;
      
      However, that is not optimal: since we know the cycle offset is larger
      than interval << shift, the above would make shift drop to zero, and
      then we would spin for quite a while accumulating one interval
      chunk at a time.
      
      Instead, this patch only decreases shift if the offset is smaller
      than cycle_interval << shift.  This makes sure we accumulate using
      the largest chunks possible without overflowing tick_length, and limits
      the number of iterations through the loop.
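
      A minimal sketch of the resulting accumulation loop (the names follow the
      changelog; this is illustrative, not the exact kernel hunk):

      	while (offset >= timekeeper.cycle_interval) {
      		offset = logarithmic_accumulation(offset, shift);
      		/* Only back off once the remaining offset no longer covers
      		 * a chunk of this size, so we keep accumulating in the
      		 * largest chunks that do not overflow tick_length. */
      		if (offset < timekeeper.cycle_interval << shift)
      			shift--;
      	}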
      
      This issue was found and reported by Sonic Zhang, who also tested the fix.
      Many thanks for the explanation and testing!
      Reported-by: Sonic Zhang <sonic.adi@gmail.com>
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Tested-by: Sonic Zhang <sonic.adi@gmail.com>
      LKML-Reference: <1268948850-5225-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  6. 21 Mar, 2010 1 commit
  7. 19 Mar, 2010 1 commit
    • ring-buffer: Do 8 byte alignment for 64 bit that can not handle 4 byte align · 2271048d
      Steven Rostedt authored
      
      
      The ring buffer uses 4 byte alignment while recording events into the
      buffer, even on 64bit machines. This saves space when there are lots
      of events being recorded at 4 byte boundaries.
      
      The ring buffer has a zero copy method to write into the buffer, by
      reserving space and then committing it. This may cause problems
      when writing an 8 byte word to an address with only 4 byte alignment
      (not 8). For x86 and PPC this is not an issue, but on some architectures
      this would cause an out-of-alignment exception.
      
      This patch uses CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS to determine
      if it is OK to use 4 byte alignment on 64 bit machines. If it is not,
      it forces the ring buffer event header to be 8 bytes instead of 4,
      and pads the length of the data so that it is 8 byte aligned.
      This keeps the data payload at 8 byte alignment and allows these
      machines to run without issue.
      
      The trick to this is that the header can be either 4 bytes or 8 bytes
      depending on the length of the data payload. The 4 byte header
      has a length field that supports up to 112 bytes. If the length of
      the data is more than 112, the length field is set to zero, and the actual
      length is stored in the next 4 bytes after the header.
      
      When CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set, the code forces
      the length field in the 4 byte header to zero, so the length is always
      stored in the following 4 byte word, even for a small data payload. It
      also rounds the length of the data payload up to an 8 byte boundary.
      The combination of these two guarantees that the data is always at 8
      byte alignment.
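
      A simplified sketch of the event header this describes (field widths as
      described above; not the verbatim kernel definition):

      	struct ring_buffer_event {
      		u32	type_len:5;	/* 1..28: payload length / 4 (up to 112 bytes)
      					 * 0:     actual length stored in array[0] */
      		u32	time_delta:27;
      		u32	array[];	/* optional length word, then the payload */
      	};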
      Tested-by: Frederic Weisbecker <fweisbec@gmail.com> (on sparc64)
      Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  8. 17 Mar, 2010 2 commits
    • perf: Fix unexported generic perf_arch_fetch_caller_regs · dcd5c166
      Frederic Weisbecker authored
      
      
      perf_arch_fetch_caller_regs() is exported for the overridden x86
      version, but not for the generic weak version.
      
      As a general rule, weak functions should not have their symbol
      exported in the same file in which they are defined.
      
      So let's export it from trace_event_perf.c, as it is used by trace
      events only.
      
      This fixes:
      
      	ERROR: ".perf_arch_fetch_caller_regs" [fs/xfs/xfs.ko] undefined!
      	ERROR: ".perf_arch_fetch_caller_regs" [arch/powerpc/platforms/cell/spufs/spufs.ko] undefined!
      
      -v2: And also only build it if trace events are enabled.
      -v3: Fix changelog mistake
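
      A sketch of the resulting export in trace_event_perf.c (the file is named
      in the changelog; the exact placement and any config guards are assumed):

      	EXPORT_SYMBOL_GPL(perf_arch_fetch_caller_regs);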
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <1268697902-9518-1-git-send-regression-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: Use proper type in sched_getaffinity() · 8bc037fb
      KOSAKI Motohiro authored
      
      
      Using the proper type fixes the following compiler warning:
      
        kernel/sched.c:4850: warning: comparison of distinct pointer types lacks a cast
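
      The warning comes from the type check built into the kernel's min() macro;
      a simplified sketch of the mechanism and of the usual fix (not the actual
      hunk from this patch):

      	#define min(x, y) ({					\
      		typeof(x) _min1 = (x);				\
      		typeof(y) _min2 = (y);				\
      		/* this pointer compare warns when the types differ: */ \
      		(void) (&_min1 == &_min2);			\
      		_min1 < _min2 ? _min1 : _min2; })

      	/* min_t(type, x, y) casts both operands to the named type first,
      	 * which is how such mismatched comparisons are silenced. */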
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: torvalds@linux-foundation.org
      Cc: travis@sgi.com
      Cc: peterz@infradead.org
      Cc: drepper@redhat.com
      Cc: rja@sgi.com
      Cc: sharyath@in.ibm.com
      Cc: steiner@sgi.com
      LKML-Reference: <20100317090046.4C79.A69D9226@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. 16 Mar, 2010 2 commits
  10. 15 Mar, 2010 1 commit
    • sched: sched_getaffinity(): Allow less than NR_CPUS length · cd3d8031
      KOSAKI Motohiro authored
      
      
      [ Note, this commit changes the syscall ABI for > 1024 CPUs systems. ]
      
      Recently, a distro decided to use NR_CPUS=4096 for mysterious reasons.
      Unfortunately, the glibc sched interface has the following definition:
      
      	# define __CPU_SETSIZE  1024
      	# define __NCPUBITS     (8 * sizeof (__cpu_mask))
      	typedef unsigned long int __cpu_mask;
      	typedef struct
      	{
      	  __cpu_mask __bits[__CPU_SETSIZE / __NCPUBITS];
      	} cpu_set_t;
      
      This means that if NR_CPUS is bigger than 1024, cpu_set_t creates an
      ABI issue ...
      
      More recently, Sharyathi Nagesh reported that the following test program
      triggers a mysterious syscall failure:
      
       -----------------------------------------------------------------------
       #define _GNU_SOURCE
       #include<stdio.h>
       #include<errno.h>
       #include<sched.h>
      
       int main()
       {
           cpu_set_t set;
           if (sched_getaffinity(0, sizeof(cpu_set_t), &set) < 0)
               printf("\n Call is failing with:%d", errno);
       }
       -----------------------------------------------------------------------
      
      This happens because the kernel assumes the len argument of
      sched_getaffinity() is bigger than NR_CPUS. That assumption is no
      longer correct.
      
      Now we are faced with the following annoying dilemma, due to
      the limitations of the glibc interface built years ago:
      
       (1) if we change glibc's __CPU_SETSIZE definition, we lose
           binary compatibility of _all_ applications.
      
       (2) if we don't change it, we lose binary compatibility for
           Sharyathi's use case.
      
      So I propose to change the rule for the len argument of
      sched_getaffinity() (a sketch of the new check appears further below):
      
      Old:
      	len should be bigger than NR_CPUS
      New:
      	len should be bigger than the maximum possible cpu id
      
      This creates the following behavior:
      
       (A) On a real 4096 cpu machine, the above test program still
           returns -EINVAL.
      
       (B) With NR_CPUS=4096 but a machine that has fewer than 1024 cpus
           (almost all machines in the world), the above runs successfully.
      
      Fortunately, big SGI machines are mainly used for HPC, so their
      users can rebuild their programs.
      
      IOW, we hope they are not annoyed by this issue ...
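
      A minimal sketch of the relaxed length check (nr_cpu_ids and BITS_PER_BYTE
      follow kernel naming; this is illustrative, not the exact hunk):

      	if ((len * BITS_PER_BYTE) < nr_cpu_ids)
      		return -EINVAL;		/* too small for every possible cpu id */
      	if (len & (sizeof(unsigned long) - 1))
      		return -EINVAL;		/* must be a whole number of words */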
      Reported-by: Sharyathi Nagesh <sharyath@in.ibm.com>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Ulrich Drepper <drepper@redhat.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jack Steiner <steiner@sgi.com>
      Cc: Russ Anderson <rja@sgi.com>
      Cc: Mike Travis <travis@sgi.com>
      LKML-Reference: <20100312161316.9520.A69D9226@jp.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 13 Mar, 2010 5 commits
    • tracing: Do not record user stack trace from NMI context · b6345879
      Steven Rostedt authored
      
      
      A bug was found with Li Zefan's ftrace_stress_test that caused applications
      to segfault during the test.
      
      Placing a tracing_off() in the segfault code, and examining several
      traces, I found that the following was always the case. The lock tracer
      was enabled (lockdep being required) and userstack was enabled. Testing
      this out, I just enabled the two, but that was not good enough. I needed
      to run something else that could trigger it. Running a load like hackbench
      did not work, but executing a new program would. The following would
      trigger the segfault within seconds:
      
        # echo 1 > /debug/tracing/options/userstacktrace
        # echo 1 > /debug/tracing/events/lock/enable
        # while :; do ls > /dev/null ; done
      
      Enabling the function graph tracer and looking at what was happening,
      I finally noticed that all crashes happened just after an NMI.
      
       1)               |    copy_user_handle_tail() {
       1)               |      bad_area_nosemaphore() {
       1)               |        __bad_area_nosemaphore() {
       1)               |          no_context() {
       1)               |            fixup_exception() {
       1)   0.319 us    |              search_exception_tables();
       1)   0.873 us    |            }
      [...]
       1)   0.314 us    |  __rcu_read_unlock();
       1)   0.325 us    |    native_apic_mem_write();
       1)   0.943 us    |  }
       1)   0.304 us    |  rcu_nmi_exit();
      [...]
       1)   0.479 us    |  find_vma();
       1)               |  bad_area() {
       1)               |    __bad_area() {
      
      After capturing several traces of failures, all of them happened
      just after an NMI. Curious about this, I added a trace_printk() to the NMI
      handler to read regs->ip and see where the NMI happened, and
      found that it was here:
      
      ffffffff8135b660 <page_fault>:
      ffffffff8135b660:       48 83 ec 78             sub    $0x78,%rsp
      ffffffff8135b664:       e8 97 01 00 00          callq  ffffffff8135b800 <error_entry>
      
      What was happening is that the NMI would hit at the point where a page
      fault had occurred. It would call rcu_read_lock(), which was traced by
      the lock events, and the user_stack_trace would run. This would trigger
      a page fault inside the NMI. I do not see where the CR2 register is
      saved or restored in NMI handling, so the NMI would corrupt
      the page fault handling that it interrupted.
      
      The reason the while loop of ls helped trigger the bug was that
      each execution of ls would fault in lots of pages, increasing
      the chances of the race happening.
      
      The simple solution is to not allow user stack traces in NMI context.
      After this patch, I ran the above "ls" test for a couple of hours
      without any issues. Without this patch, the bug would trigger in less
      than a minute.
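
      A minimal sketch of the guard this implies at the top of the user stack
      trace path (assumed form, not a verbatim quote of the patch):

      	if (unlikely(in_nmi()))
      		return;	/* NMIs cannot safely take the page faults this may cause */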
      
      Cc: stable@kernel.org
      Reported-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Disable buffer switching when starting or stopping trace · a2f80714
      Steven Rostedt authored
      
      
      When the trace iterator is read, tracing_start() and tracing_stop()
      are called to stop tracing while the iterator is processing the trace
      output.
      
      These functions disable both the standard buffer and the max latency
      buffer. But if the wakeup tracer is running, it can switch these
      buffers between the two disables:
      
        buffer = global_trace.buffer;
        if (buffer)
            ring_buffer_record_disable(buffer);
      
            <<<--------- swap happens here
      
        buffer = max_tr.buffer;
        if (buffer)
            ring_buffer_record_disable(buffer);
      
      What happens is that we disable the same buffer twice. On tracing_start()
      we can likewise enable the same buffer twice. Every ring_buffer_record_disable()
      must be matched with a ring_buffer_record_enable(), or a buffer
      can be disabled permanently or enabled prematurely, causing a bug
      where a reset happens while a trace is committing.
      
      This patch protects these two by taking the ftrace_max_lock to prevent
      a switch from occurring.
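
      A sketch of the shape of the fix (ftrace_max_lock is named above; the exact
      locking primitives are assumed):

        arch_spin_lock(&ftrace_max_lock);

        buffer = global_trace.buffer;
        if (buffer)
            ring_buffer_record_disable(buffer);

        buffer = max_tr.buffer;
        if (buffer)
            ring_buffer_record_disable(buffer);

        arch_spin_unlock(&ftrace_max_lock);    /* no swap can occur in between */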
      
      Found with Li Zefan's ftrace_stress_test.
      
      Cc: stable@kernel.org
      Reported-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • tracing: Use same local variable when resetting the ring buffer · 283740c6
      Steven Rostedt authored
      
      
      In the ftrace code that resets the ring buffer, the buffer is referenced
      through a local variable, but tr->buffer is then passed as the
      parameter to the reset. If the wakeup tracer is running, which can
      swap tr->buffer with the max saved buffer, this can break
      the requirement of disabling the buffer before the reset.
      
         buffer = tr->buffer;
         ring_buffer_record_disable(buffer);
         synchronize_sched();
         __tracing_reset(tr->buffer, cpu);
      
      If the tr->buffer is swapped, then the reset is not happening to the
      buffer that was disabled. This will cause the ring buffer to fail.
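
      With the fix, the same local variable is passed to the reset; a sketch of
      the corrected sequence:

         buffer = tr->buffer;
         ring_buffer_record_disable(buffer);
         synchronize_sched();
         __tracing_reset(buffer, cpu);    /* reset the buffer we actually disabled */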
      
      Found with Li Zefan's ftrace_stress_test.
      
      Cc: stable@kernel.org
      Reported-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • function-graph: Init curr_ret_stack with ret_stack · ea14eb71
      Steven Rostedt authored
      
      
      If the graph tracer is active and a task is forked, but the allocation of
      the process's graph stack fails, it can cause a crash later on.
      
      This is due to the temporary stack being NULL, while the curr_ret_stack
      variable is copied from the parent. If it is not -1, then the
      following loop in ftrace_graph_probe_sched_switch():
      
      	for (index = next->curr_ret_stack; index >= 0; index--)
      		next->ret_stack[index].calltime += timestamp;
      
      will cause a kernel OOPS.
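
      A sketch of the fix implied by the title (the function and exact placement
      are assumed; the key point is initializing curr_ret_stack together with
      ret_stack):

      	void ftrace_graph_init_task(struct task_struct *t)
      	{
      		/* Make sure we never inherit the parent's ret_stack state */
      		t->ret_stack = NULL;
      		t->curr_ret_stack = -1;

      		/* ... the new ret_stack is allocated after this point ... */
      	}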
      
      Found with Li Zefan's ftrace_stress_test.
      
      Cc: stable@kernel.org
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
    • ring-buffer: Move disabled check into preempt disable section · 52fbe9cd
      Lai Jiangshan authored
      
      
      The ring buffer resizing and resetting rely on a sched RCU
      action: the buffers are disabled, a synchronize_sched() is called,
      and then the resize or reset takes place.
      
      But this only works if the check of the disabled state is made within the
      preempt disabled section; otherwise a window exists in which the buffers
      can be written to while a reset or resize takes place.
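
      A sketch of the writer-side ordering this requires (illustrative, not the
      exact hunk):

      	preempt_disable_notrace();

      	/* Checked only after preemption is disabled, so the
      	 * synchronize_sched() in reset/resize waits for this writer. */
      	if (atomic_read(&buffer->record_disabled))
      		goto out;

      	/* ... reserve, write and commit the event ... */
       out:
      	preempt_enable_notrace();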
      
      Cc: stable@kernel.org
      Reported-by: Li Zefan <lizf@cn.fujitsu.com>
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      LKML-Reference: <4B949E43.2010906@cn.fujitsu.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  12. 12 Mar, 2010 15 commits