1. 14 Dec, 2009 2 commits
  2. 12 Dec, 2009 1 commit
    • [BKL] add 'might_sleep()' to the outermost lock taker · f01eb364
      Linus Torvalds authored
      As shown by the previous patch (6698e347: "tty: Fix BKL taken under a
      spinlock bug introduced in the BKL split"), the BKL removal is prone to
      some subtle issues, where removing the BKL in one place may in fact make
      a previously nested BKL call the new outer call, which is then prone to
      nasty deadlocks with other spinlocks.
      
      In general, we should never take the BKL while we're holding a spinlock,
      so let's just add a "might_sleep()" to it (even though the BKL doesn't
      technically sleep - at least not yet), and we'll get nice warnings the
      next time this kind of problem happens during BKL removal.
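      The idea can be illustrated with a small user-space sketch. This is not
      kernel code: all names here (mock_spin_lock(), bkl_depth, the warning
      counter) are invented stand-ins for the kernel primitives, and the
      "warning" is just a counter instead of a WARN splat. The point it shows
      is that the might_sleep() check fires only for the outermost
      lock_kernel() call, so taking the BKL under a held spinlock is caught:

      ```c
      /* User-space sketch (mock names, not kernel code): a reentrant BKL
       * where might_sleep() is checked only by the outermost taker, so
       * acquiring the BKL while a "spinlock" is held trips a warning. */

      static int bkl_depth;            /* stand-in for current->lock_depth */
      static int preempt_disabled;     /* nonzero while a mock spinlock is held */
      static int might_sleep_warnings; /* stand-in for the WARN output */

      static void might_sleep(void)
      {
          /* The real kernel warns when called from atomic context. */
          if (preempt_disabled)
              might_sleep_warnings++;
      }

      static void mock_spin_lock(void)   { preempt_disabled++; }
      static void mock_spin_unlock(void) { preempt_disabled--; }

      static void lock_kernel(void)
      {
          if (bkl_depth == 0)
              might_sleep();  /* only the outermost lock taker is checked */
          bkl_depth++;
      }

      static void unlock_kernel(void)
      {
          bkl_depth--;
      }
      ```

      A nested lock_kernel() call skips the check by design: it was already
      legal to re-take the BKL while holding it, and only the transition from
      depth 0 can newly sleep.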
      Acked-and-Tested-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 28 Sep, 2009 1 commit
    • tracing: Pushdown the bkl tracepoints calls · 925936eb
      Frederic Weisbecker authored
      
      Currently we call the bkl tracepoint callbacks just before the
      bkl lock/unlock operations, i.e. the tracepoint call is not inside a
      lock_kernel() function but inside a lock_kernel() macro. Hence the
      bkl trace event header must be included from smp_lock.h. This raises
      some nasty circular header dependencies:
      
      linux/smp_lock.h -> trace/events/bkl.h -> trace/define_trace.h
      -> trace/ftrace.h -> linux/ftrace_event.h -> linux/hardirq.h
      -> linux/smp_lock.h
      
      This results in incomplete event declarations, spurious event
      definitions, and other kinds of odd behaviour.
      
      This is hardly fixable without ugly workarounds. So instead, we push
      the file name, line number and function name as lock_kernel()
      parameters, so that we only deal with the trace event header from
      lib/kernel_lock.c
      
      This adds two parameters to lock_kernel() and unlock_kernel(), but
      it should be fine with respect to performance because this pair does
      not seem to be called in fast paths.
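      The mechanism can be sketched in plain C. This is a mock, not the
      kernel patch itself: _lock_kernel() and the recorded fields are
      illustrative stand-ins for the real out-of-line function in
      lib/kernel_lock.c where the tracepoint would actually fire. What it
      demonstrates is how the macro captures the call site and forwards it,
      so only the .c file needs the trace-event machinery:

      ```c
      /* Sketch with invented names: the lock_kernel() macro forwards its
       * call site to an out-of-line function, so the trace event header is
       * needed only in the one .c file, breaking the header cycle. */

      static const char *last_file;  /* stand-ins for what the tracepoint */
      static int         last_line;  /* would record */
      static const char *last_func;

      /* Stand-in for the real _lock_kernel(); the actual tracepoint call
       * (e.g. trace_lock_kernel(func, file, line)) would live here. */
      static void _lock_kernel(const char *file, int line, const char *func)
      {
          last_file = file;
          last_line = line;
          last_func = func;
      }

      /* Callers see an unchanged API; the call site is captured at
       * expansion time by the preprocessor. */
      #define lock_kernel() _lock_kernel(__FILE__, __LINE__, __func__)

      static void caller(void)
      {
          lock_kernel();  /* records this file, line and "caller" */
      }
      ```

      Because the macro expands at each call site, __FILE__, __LINE__ and
      __func__ refer to the caller, not to lib/kernel_lock.c.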
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
  4. 24 Sep, 2009 1 commit
    • tracing/bkl: Add bkl ftrace events · 96a2c464
      Frederic Weisbecker authored
      
      Add two events, lock_kernel() and unlock_kernel(), to trace BKL uses.
      This opens the door for userspace tools to gather statistics about
      the callsites that use it, dependencies with other locks (by pairing
      the trace with lock events), recursive use, and so on.
      
      The {__reacquire,release}_kernel_lock() events are not traced because
      these are called from schedule(), so the sched events are sufficient
      to trace them.
      
      Example of a trace:
      
      hald-addon-stor-4152  [000]   165.875501: unlock_kernel: depth: 0, fs/block_dev.c:1358 __blkdev_put()
      hald-addon-stor-4152  [000]   167.832974: lock_kernel: depth: 0, fs/block_dev.c:1167 __blkdev_get()
      
      How to get the callsites that acquire it recursively:
      
      cd /debug/tracing/events/bkl
      echo "lock_depth > 0" > filter
      
      firefox-4951  [001]   206.276967: unlock_kernel: depth: 1, fs/reiserfs/super.c:575 reiserfs_dirty_inode()
      
      You can also filter by file and/or line.
      
      v2: Use of FILTER_PTR_STRING attribute for files and lines fields to
          make them traceable.
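      The effect of the lock_depth filter shown above can be modeled with a
      tiny predicate. The struct and function names below are invented for
      illustration; this is not the ftrace filter implementation, just the
      logic that the "lock_depth > 0" expression selects:

      ```c
      /* Mock model of the per-event filter: keep only events whose depth
       * field is positive, i.e. the recursive BKL acquisitions. */

      struct bkl_event {
          int depth;          /* lock_depth at the time of the event */
          const char *file;   /* callsite file */
          int line;           /* callsite line */
          const char *func;   /* callsite function */
      };

      /* Equivalent of: echo "lock_depth > 0" > filter */
      static int filter_match(const struct bkl_event *ev)
      {
          return ev->depth > 0;
      }
      ```

      An outermost acquisition (depth 0) is filtered out, while a nested one
      like the reiserfs_dirty_inode() event above passes.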
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
  5. 06 Mar, 2009 1 commit
  6. 10 May, 2008 1 commit
    • BKL: revert back to the old spinlock implementation · 8e3e076c
      Linus Torvalds authored
      The generic semaphore rewrite had a huge performance regression on AIM7
      (and potentially other BKL-heavy benchmarks) because the generic
      semaphores had been rewritten to be simple to understand and fair.  The
      latter, in particular, turns a semaphore-based BKL implementation into a
      mess of scheduling.
      
      The attempt to fix the performance regression failed miserably (see
      the previous commit 00b41ec2, 'Revert "semaphore: fix"'), and so for
      now the simple and sane approach is to instead just go back to the
      old spinlock-based BKL implementation that never had any issues like
      this.
      
      This patch also has the advantage of fixing the regression completely,
      according to Yanmin Zhang, unlike the semaphore hack, which still left
      a couple of percentage points of regression.
      
      As a spinlock, the BKL obviously has the potential to be a latency
      issue, but it's not really any different from any other spinlock in that
      respect.  We do want to get rid of the BKL asap, but that has been the
      plan for several years.
      
      These days, the biggest users are in the tty layer (open/release in
      particular) and Alan holds out some hope:
      
        "tty release is probably a few months away from getting cured - I'm
         afraid it will almost certainly be the very last user of the BKL in
         tty to get fixed as it depends on everything else being sanely locked."
      
      so while we're not there yet, we do have a plan of action.
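      The spinlock-style BKL being restored is reentrant per task. A rough
      user-space model of that depth-counting behaviour, with mock names and
      no real spinning or contention (the actual implementation spins on a
      global lock and saves/restores the depth across schedule()):

      ```c
      /* Mock model (not kernel code): a single global lock plus a per-task
       * depth counter, so the same task can take the BKL recursively while
       * other tasks would spin. */

      static int kernel_flag_held;  /* stand-in for the global kernel_flag spinlock */
      static int task_lock_depth;   /* stand-in for current->lock_depth */

      static void lock_kernel(void)
      {
          if (task_lock_depth == 0)
              kernel_flag_held = 1;  /* real code spins here under contention */
          task_lock_depth++;
      }

      static void unlock_kernel(void)
      {
          if (--task_lock_depth == 0)
              kernel_flag_held = 0;  /* released only by the outermost unlock */
      }
      ```

      Only the outermost lock/unlock pair touches the global lock; nested
      pairs just adjust the depth, which is what makes recursive use cheap.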
      Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Matthew Wilcox <matthew@wil.cx>
      Cc: Alexander Viro <viro@ftp.linux.org.uk>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 18 Apr, 2008 1 commit
  8. 17 Apr, 2008 1 commit
  9. 25 Jan, 2008 1 commit
  10. 19 Oct, 2007 1 commit
  11. 03 Jul, 2006 1 commit
  12. 26 Jun, 2006 1 commit
  13. 10 Sep, 2005 1 commit
    • [PATCH] spinlock consolidation · fb1c8f93
      Ingo Molnar authored
      This patch (written by me, and also containing many suggestions from
      Arjan van de Ven) does a major cleanup of the spinlock code.  It does
      the following things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds).
      
      Also, I've enhanced the rwlock debugging facility; it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset ...
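      The owner/CPU tracking described above can be sketched as follows.
      This is a simplified mock, not lib/spinlock_debug.c: the struct and
      function names are invented, and the lockup-detection loop is only
      described in comments. The point is that the debug wrapper records
      who holds the lock, so a lockup report can name the culprit:

      ```c
      /* Mock sketch of debug spinlock owner tracking (invented names). */

      struct debug_spinlock {
          int locked;
          int owner_cpu;          /* -1 while unlocked */
          const char *owner_func; /* who took the lock, for lockup reports */
      };

      static void debug_spin_lock(struct debug_spinlock *l, int cpu,
                                  const char *func)
      {
          /* The real debug code spins with a loop counter and, after too
           * many iterations, reports a lockup, printing the recorded
           * owner_cpu/owner_func so soft and hard lockups name the holder. */
          l->locked = 1;
          l->owner_cpu = cpu;
          l->owner_func = func;
      }

      static void debug_spin_unlock(struct debug_spinlock *l)
      {
          l->locked = 0;
          l->owner_cpu = -1;
          l->owner_func = 0;
      }
      ```

      Keeping this bookkeeping in one generic file is what lets every
      architecture share a single debugging variant.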
  14. 21 Jun, 2005 1 commit
    • [PATCH] smp_processor_id() cleanup · 39c715b7
      Ingo Molnar authored
      
      This patch implements a number of smp_processor_id() cleanup ideas that
      Arjan van de Ven and I came up with.
      
      The previous __smp_processor_id/_smp_processor_id/smp_processor_id API
      spaghetti was hard to follow both on the implementational and on the
      usage side.
      
      Some of the complexity arose from picking wrong names, some of the
      complexity comes from the fact that not all architectures defined
      __smp_processor_id.
      
      In the new code, there are two externally visible symbols:
      
       - smp_processor_id(): debug variant.
      
       - raw_smp_processor_id(): nondebug variant. Replaces all existing
         uses of _smp_processor_id() and __smp_processor_id(). Defined
         by every SMP architecture in include/asm-*/smp.h.
      
      There is one new internal symbol, dependent on DEBUG_PREEMPT:
      
       - debug_smp_processor_id(): internal debug variant, mapped to
                                   smp_processor_id().
      
      Also, I moved debug_smp_processor_id() from lib/kernel_lock.c into a new
      lib/smp_processor_id.c file.  All related comments got updated and/or
      clarified.
      
      I have build/boot tested the following 8 .config combinations on x86:
      
       {SMP,UP} x {PREEMPT,!PREEMPT} x {DEBUG_PREEMPT,!DEBUG_PREEMPT}
      
      I have also build/boot tested x64 on UP/PREEMPT/DEBUG_PREEMPT.  (Other
      architectures are untested, but should work just fine.)
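      The split between the two variants can be illustrated with a small
      user-space mock. The preemption model here is faked with a counter,
      and the "warning" is a counter instead of a printk; only the shape of
      the check matches the description above:

      ```c
      /* Mock sketch: raw_smp_processor_id() is the cheap unchecked read;
       * the debug variant additionally verifies the caller cannot migrate
       * between CPUs (i.e. preemption is disabled) before using the value. */

      static int mock_cpu = 3;     /* stand-in for the per-CPU number */
      static int preempt_count;    /* nonzero means preemption disabled */
      static int debug_complaints; /* stand-in for the DEBUG_PREEMPT warning */

      static int raw_smp_processor_id(void)
      {
          return mock_cpu;  /* nondebug variant: just read it */
      }

      static int debug_smp_processor_id(void)
      {
          /* Reading the CPU number while preemptible is unsafe: the task
           * could migrate right after the read, making the value stale. */
          if (preempt_count == 0)
              debug_complaints++;
          return raw_smp_processor_id();
      }
      ```

      With DEBUG_PREEMPT enabled, smp_processor_id() maps to the debug
      variant; otherwise both names resolve to the raw read.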
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 16 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!