  23 Feb, 2008 (2 commits)
    •
      Add memory barrier semantics to wake_up() & co · 04e2f174
      Linus Torvalds authored
      
      Oleg Nesterov and others have pointed out that on some architectures,
      the traditional sequence of
      
      	set_current_state(TASK_INTERRUPTIBLE);
      	if (CONDITION)
      		return;
      	schedule();
      
      is racy wrt another CPU doing
      
      	CONDITION = 1;
      	wake_up_process(p);
      
      because while set_current_state() has a memory barrier separating
      setting of the TASK_INTERRUPTIBLE state from reading of the CONDITION
      variable, there is no such memory barrier on the wakeup side.
      
      Now, wake_up_process() does actually take a spinlock before it reads and
      sets the task state on the waking side, and on x86 (and many other
      architectures) that spinlock is in fact equivalent to a memory barrier,
      but that is not generally guaranteed.  The write that sets CONDITION
      could move into the critical region protected by the runqueue spinlock,
      and in particular be reordered after the read of the old task state
      there.
      
      However, adding an smp_wmb() before the spinlock now orders the write
      of CONDITION wrt the lock itself, which in turn is ordered wrt the
      accesses inside the critical section (which include the read of the old
      task state).
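      
      Concretely, the fix presumably amounts to something like this in the
      wakeup path (a simplified sketch of try_to_wake_up() in kernel/sched.c;
      surrounding code elided, exact placement may differ):
      
      	static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
      	{
      		unsigned long flags;
      		long old_state;
      		struct rq *rq;
      
      		/* Order the waker's earlier stores (e.g. CONDITION = 1)
      		 * before the lock acquisition and the state read below. */
      		smp_wmb();
      		rq = task_rq_lock(p, &flags);
      		old_state = p->state;
      		...
      	}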
      
      This should thus close the race (which has probably never been seen in
      practice; and since smp_wmb() is a no-op on x86, it won't make anything
      worse on the most common architecture, where the spinlock already gave
      the required protection).
      Acked-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    •
      kprobes: refuse kprobe insertion on add/sub_preempt_count() · 43627582
      Srinivasa Ds authored
      
      Kprobes makes use of preempt_disable() and preempt_enable_no_resched(),
      and these functions in turn call add/sub_preempt_count().  So we need
      to refuse any attempt to insert a probe into these functions.
      
      This patch disallows probing of add/sub_preempt_count().
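      
      The usual mechanism for this is to tag the functions __kprobes, which
      places them in the .kprobes.text section that register_kprobe() refuses
      to probe.  A sketch of what the change presumably looks like in
      kernel/sched.c (function bodies elided; only the annotation is new):
      
      	void __kprobes add_preempt_count(int val)
      	{
      		...
      	}
      	EXPORT_SYMBOL(add_preempt_count);
      
      	void __kprobes sub_preempt_count(int val)
      	{
      		...
      	}
      	EXPORT_SYMBOL(sub_preempt_count);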
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  30 Jan, 2008 (1 commit)
    •
      spinlock: lockbreak cleanup · 95c354fe
      Nick Piggin authored
      
      The break_lock data structure and code for spinlocks are quite nasty.
      Not only do they double the size of a spinlock, they also change
      locking to a potentially less optimal trylock.
      
      Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
      __raw_spin_is_contended that uses the lock data itself to determine whether
      there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
      not set.
      
      Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
      decouple it from the spinlock implementation, and make it typesafe (rwlocks
      do not have any need_lockbreak sites -- why do they even get bloated up
      with that break_lock then?).
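      
      Under that scheme, the helpers presumably come out roughly like this (a
      sketch of the likely include/linux/spinlock.h shape, not the verbatim
      patch):
      
      	#ifdef CONFIG_GENERIC_LOCKBREAK
      	#define spin_is_contended(lock)	((lock)->break_lock)
      	#else
      	#define spin_is_contended(lock)	__raw_spin_is_contended(&(lock)->raw_lock)
      	#endif
      
      	/*
      	 * spin_needbreak() replaces need_lockbreak(): report whether the
      	 * lock is contended, so a holder can voluntarily drop it.
      	 */
      	static inline int spin_needbreak(spinlock_t *lock)
      	{
      	#ifdef CONFIG_PREEMPT
      		return spin_is_contended(lock);
      	#else
      		return 0;
      	#endif
      	}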
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>