1. 26 Jun, 2008 1 commit
  2. 19 May, 2008 2 commits
    • rcu: add rcu_barrier_sched() and rcu_barrier_bh() · 70f12f84
      Paul E. McKenney authored
      
      Add rcu_barrier_sched() and rcu_barrier_bh().  With these in place,
      rcutorture no longer gives the occasional oops when torturing of
      rcu_bh is repeatedly started and stopped.  This also adds the API
      needed to flush out pre-existing call_rcu_sched() callbacks.
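      
      As a rough usage sketch (an editor's illustration, not code from this
      patch; my_module_exit() is a hypothetical caller), a module that
      queues call_rcu_sched() callbacks can flush them before unloading:
      
          /* Hypothetical module-exit path. */
          static void my_module_exit(void)
          {
                  /* ...stop queueing new callbacks first... */
                  rcu_barrier_sched();  /* wait for pending call_rcu_sched() callbacks */
                  rcu_barrier_bh();     /* wait for pending call_rcu_bh() callbacks */
          }
      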
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • rcu: add call_rcu_sched() · 4446a36f
      Paul E. McKenney authored
      
      Fourth cut of the patch providing call_rcu_sched().  This is to
      synchronize_sched() as call_rcu() is to synchronize_rcu().
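      
      As a minimal sketch of that analogy (the struct, field, and callback
      names here are illustrative, not taken from the patch):
      
          struct foo {
                  int payload;
                  struct rcu_head rcu;
          };
      
          static void foo_reclaim(struct rcu_head *head)
          {
                  kfree(container_of(head, struct foo, rcu));
          }
      
          /* Given a struct foo *p already unlinked from all RCU-protected
           * structures, defer its freeing until every CPU has passed
           * through a schedule()-based quiescent state: */
          call_rcu_sched(&p->rcu, foo_reclaim);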
      
      Should be fine for experimental and -rt use, but not ready for inclusion.
      With some luck, I will be able to tell Andrew to come out of hiding on
      the next round.
      
      Passes multi-day rcutorture sessions with concurrent CPU hotplugging.
      
      Fixes since the first version include a bug that could result in
      indefinite blocking (spotted by Gautham Shenoy), better resiliency
      against CPU-hotplug operations, and other minor fixes.
      
      Fixes since the second version include reworking grace-period detection
      to avoid deadlocks that could happen when running concurrently with
      CPU hotplug, adding Mathieu's fix to avoid the softlockup messages,
      as well as Mathieu's fix to allow use earlier in boot.
      
      Fixes since the third version include a wrong-CPU bug spotted by
      Andrew, getting rid of the obsolete synchronize_kernel API that somehow
      snuck back in, merging spin_unlock() and local_irq_restore() in a
      few places, commenting the code that checks for quiescent states based
      on interrupting from user-mode execution or the idle loop, removing
      some inline attributes, and some code-style changes.
      
      Known/suspected shortcomings:
      
      o	I still do not entirely trust the sleep/wakeup logic.  Next step
      	will be to use a private snapshot of the CPU online mask in
      	rcu_sched_grace_period() -- if the CPU wasn't there at the start
      	of the grace period, we don't need to hear from it.  And the
      	bit about accounting for changes in online CPUs inside of
      	rcu_sched_grace_period() is ugly anyway.
      
      o	It might be good for rcu_sched_grace_period() to invoke
      	resched_cpu() when a given CPU wasn't responding quickly,
      	but resched_cpu() is declared static...
      
      This patch also fixes a long-standing bug in the earlier preemptable-RCU
      implementation of synchronize_rcu() that could result in loss of
      concurrent external changes to a task's CPU affinity mask.  I still cannot
      remember who reported this...
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  3. 13 Feb, 2008 1 commit
  4. 25 Jan, 2008 3 commits
  5. 22 Jan, 2008 1 commit
  6. 17 Oct, 2007 1 commit
  7. 11 Oct, 2007 1 commit
  8. 09 May, 2007 1 commit
    • Add suspend-related notifications for CPU hotplug · 8bb78442
      Rafael J. Wysocki authored
      
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen, and the CPU hotplug infrastructure is used for this purpose, we
      need special CPU hotplug notifications that will help the
      CPU-hotplug-aware subsystems distinguish normal CPU hotplug events from
      those related to a system-wide suspend or resume operation in progress.
      This patch introduces such notifications and causes them to be used
      during suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into
      consideration (for now they are handled in the same way as the
      corresponding "normal" ones).
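      
      For illustration only (a sketch of a typical notifier of this era,
      not a hunk from this patch), "taking the notifications into
      consideration" usually amounts to handling the new _FROZEN variants
      in the same case as the normal events:
      
          static int my_cpu_callback(struct notifier_block *nfb,
                                     unsigned long action, void *hcpu)
          {
                  switch (action) {
                  case CPU_UP_PREPARE:
                  case CPU_UP_PREPARE_FROZEN:   /* suspend/resume variant */
                          /* same preparation as for a normal hotplug */
                          break;
                  case CPU_DEAD:
                  case CPU_DEAD_FROZEN:
                          /* same cleanup as for a normal hotplug */
                          break;
                  }
                  return NOTIFY_OK;
          }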
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 07 Dec, 2006 1 commit
    • [PATCH] rcu: add a prefetch() in rcu_do_batch() · 1c69d921
      Eric Dumazet authored
      
      On some workloads (for example, when many close() syscalls are done),
      the RCU qlen can be quite large, and the RCU heads are no longer in the
      CPU cache when rcu_do_batch() is called.
      
      This patch adds a prefetch() in rcu_do_batch() to give the CPU a hint
      to bring back the cache lines containing 'struct rcu_head's.
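      
      The pattern, roughly (a sketch of the idea rather than the exact
      hunk): prefetch the next head while the current callback runs:
      
          while (list) {
                  next = list->next;
                  prefetch(next);    /* warm the next rcu_head's cache line */
                  list->func(list);  /* invoke the callback (may free 'list') */
                  list = next;
          }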
      
      Most list-manipulation macros include a prefetch(), but open-coded
      loops do not (at least with current C compilers :) )
      
      I got a nice speedup on a trivial benchmark (3.48 us per iteration
      instead of 3.95 us on a 1.6 GHz Pentium-M):
      
      while (1) { pipe(fd); close(fd[0]); close(fd[1]); }
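      
      A self-contained user-space version of that micro-benchmark might
      look like this (the loop body is from the message above; the
      scaffolding around it is an editor's assumption):
      
          #include <unistd.h>
      
          int main(void)
          {
                  int fd[2];
      
                  /* Runs until interrupted; each iteration makes the
                   * kernel queue RCU work for the freed file structures. */
                  while (1) {
                          if (pipe(fd) < 0)
                                  return 1;
                          close(fd[0]);
                          close(fd[1]);
                  }
          }
      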
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  10. 04 Oct, 2006 1 commit
    • [PATCH] rcu: simplify/improve batch tuning · 20e9751b
      Oleg Nesterov authored
      
      Kill the hard-to-calculate 'rsinterval' boot parameter and the per-cpu
      rcu_data.last_rs_qlen.  Instead, add a flag, rcu_ctrlblk.signaled,
      which records the fact that one of the CPUs has sent a resched IPI
      since the last rcu_start_batch().
      
      Roughly speaking, we need two rcu_start_batch()s in order to move
      callbacks from ->nxtlist to ->donelist.  This means that when ->qlen
      exceeds qhimark and continues to grow, we should send a resched IPI,
      and then do it again after we have gone through a quiescent state.
      
      On the other hand, if it was already sent, we don't need to do it again
      when another CPU detects overflow of the queue.
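      
      Schematically (an editor's paraphrase; only the ->signaled flag and
      rcu_start_batch() come from the text above):
      
          /* When a CPU notices its queue is too long: */
          if (!rcp->signaled) {
                  rcp->signaled = 1;  /* at most one IPI per batch */
                  /* ...send resched IPIs to force a quiescent state... */
          }
      
          /* rcu_start_batch() re-arms the mechanism for the next batch: */
          rcp->signaled = 0;
      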
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Paul E. McKenney <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  11. 13 Sep, 2006 1 commit
  12. 31 Jul, 2006 1 commit
  13. 03 Jul, 2006 1 commit
  14. 27 Jun, 2006 3 commits
  15. 23 Jun, 2006 1 commit
  16. 15 May, 2006 1 commit
  17. 26 Apr, 2006 2 commits
  18. 24 Mar, 2006 1 commit
  19. 23 Mar, 2006 2 commits
  20. 20 Mar, 2006 1 commit
  21. 08 Mar, 2006 1 commit
    • [PATCH] rcu batch tuning · 21a1ea9e
      Dipankar Sarma authored
      
      This patch adds new tunables for the RCU queue and finished batches.
      There are two types of controls - the number of completed RCU updates
      invoked in a batch (blimit) and monitoring for a high rate of incoming
      RCUs on a cpu (qhimark, qlowmark).
      
      By default, the per-cpu batch limit is set to a small value.  If the
      incoming RCU rate exceeds the high watermark, we do two things - force
      a quiescent state on all cpus and set the batch limit of the CPU to
      INT_MAX.  Setting the batch limit to INT_MAX forces all finished RCUs
      to be processed in one shot.  If we have more than INT_MAX RCUs queued
      up, then we have bigger problems anyway.  Once the incoming queued RCUs
      fall below the low watermark, the batch limit is set back to the
      default.
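      
      Schematically (an editor's paraphrase of the watermark logic; the
      identifiers simply mirror the tunables named above):
      
          /* On the call_rcu() enqueue path: */
          if (++rdp->qlen > qhimark) {
                  rdp->blimit = INT_MAX;  /* drain everything in one shot */
                  /* ...and force a quiescent state on all cpus... */
          }
      
          /* Once the queue has drained below the low watermark: */
          if (rdp->blimit == INT_MAX && rdp->qlen <= qlowmark)
                  rdp->blimit = blimit;   /* back to the small default */
      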
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  22. 10 Jan, 2006 2 commits
  23. 09 Jan, 2006 2 commits
  24. 08 Jan, 2006 3 commits
  25. 12 Dec, 2005 2 commits
    • [PATCH] Fix RCU race in access of nohz_cpu_mask · c3f59023
      Srivatsa Vaddagiri authored
      
      Accessing nohz_cpu_mask before incrementing rcp->cur is racy.  It can
      cause tickless idle CPUs to be included in rsp->cpumask, which will
      extend grace periods unnecessarily.
      
      Fix this race.  It has been tested using extensions to the RCU torture
      module that force various CPUs to become idle.
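      
      In outline (an editor's paraphrase of the corrected ordering, not the
      literal hunk): the grace period must begin before the set of CPUs
      that need to report is sampled, so a CPU going tickless-idle cannot
      slip into rsp->cpumask:
      
          rcp->cur++;        /* start the new grace period first */
          smp_mb();
          /* only now decide which CPUs must pass a quiescent state: */
          cpus_andnot(rsp->cpumask, cpu_online_map, nohz_cpu_mask);
      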
      Signed-off-by: Srivatsa Vaddagiri <vatsa@in.ibm.com>
      Cc: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] add rcu_barrier() synchronization point · ab4720ec
      Dipankar Sarma authored
      
      This introduces a new interface - rcu_barrier() - which waits until
      all RCU callbacks queued before this call have completed.
      
      Reiser4 needs this because we do more than just free a memory object
      in our RCU callback: we also remove it from the list hanging off the
      super-block.  This means that before freeing the reiser4-specific
      portion of the super-block (during umount) we have to wait until all
      pending RCU callbacks have executed.
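      
      The umount-time pattern being enabled, sketched (reiser4 internals
      elided; stop_queueing_callbacks() and sb_private are hypothetical
      names):
      
          stop_queueing_callbacks();  /* no new call_rcu() after this */
          rcu_barrier();              /* wait for every queued callback to run */
          kfree(sb_private);          /* callbacks can no longer touch it */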
      
      The only change reiser4 made to the original patch is the export of
      rcu_barrier().
      
      Cc: Hans Reiser <reiser@namesys.com>
      Cc: Vladimir V. Saveliev <vs@namesys.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  26. 30 Oct, 2005 1 commit
  27. 17 Oct, 2005 2 commits
    • [PATCH] rcu: keep rcu callback event counter · 5ee832db
      Eric Dumazet authored
      
      This makes call_rcu() keep track of how many callbacks are queued on
      the RCU list, and cause a reschedule event when the list gets too
      long.
      
      This helps keep RCU callback lists short.
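      
      In outline (an editor's sketch; the threshold value is illustrative):
      
          /* On the call_rcu() enqueue path: */
          rdp->qlen++;
          if (rdp->qlen > 10000)
                  set_need_resched();  /* nudge this CPU toward a quiescent
                                        * state so a batch can complete */
      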
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • Increase default RCU batching sharply · 2cc78eb5
      Linus Torvalds authored
      
      Dipankar made RCU limit the batch size to improve latency, but that
      approach is unworkable: it can cause the RCU queues to grow without
      bounds, since the batch limiter ended up limiting the callbacks.
      
      So make the limit much higher, and start planning to instead limit the
      batch size by doing RCU callbacks more often if the queue looks like it
      might be growing too long.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>