1. 06 Jan, 2009 1 commit
  2. 29 Dec, 2008 1 commit
  3. 10 Dec, 2008 3 commits
    • revert "percpu_counter: new function percpu_counter_sum_and_set" · 02d21168
      Andrew Morton authored
      Revert
      
          commit e8ced39d
          Author: Mingming Cao <cmm@us.ibm.com>
          Date:   Fri Jul 11 19:27:31 2008 -0400
      
              percpu_counter: new function percpu_counter_sum_and_set
      
      As described in
      
      	revert "percpu counter: clean up percpu_counter_sum_and_set()"
      
      the new percpu_counter_sum_and_set() is racy against updates to the
      cpu-local accumulators on other CPUs.  Revert that change.
      
      This means that ext4 will be slow again.  But correct.
      Reported-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Cc: <stable@kernel.org>		[2.6.27.x]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • revert "percpu counter: clean up percpu_counter_sum_and_set()" · 71c5576f
      Andrew Morton authored
      Revert
      
          commit 1f7c14c6
          Author: Mingming Cao <cmm@us.ibm.com>
          Date:   Thu Oct 9 12:50:59 2008 -0400
      
              percpu counter: clean up percpu_counter_sum_and_set()
      
      Before this patch we had the following:
      
      percpu_counter_sum(): return the percpu_counter's value
      
      percpu_counter_sum_and_set(): return the percpu_counter's value, copying
      that value into the central value and zeroing the per-cpu counters before
      returning.
      
      After this patch, percpu_counter_sum_and_set() has gone, and
      percpu_counter_sum() gets the old percpu_counter_sum_and_set()
      functionality.
      
      Problem is, as Eric points out, the old percpu_counter_sum_and_set()
      functionality was racy and wrong.  It zeroes out counters on "other" CPUs
      without holding any locks that would prevent races against updates from
      those other CPUs.
      
      This patch reverts 1f7c14c6.  This means
      that percpu_counter_sum_and_set() still has the race, but
      percpu_counter_sum() does not.
      
      Note that this is not a simple revert - ext4 has since started using
      percpu_counter_sum() for its dirty_blocks counter as well.
      
      Note that this revert patch changes percpu_counter_sum() semantics.
      
      Before the patch, a call to percpu_counter_sum() will bring the counter's
      central counter mostly up-to-date, so a following percpu_counter_read()
      will return a close value.
      
      After this patch, a call to percpu_counter_sum() will leave the counter's
      central accumulator unaltered, so a subsequent call to
      percpu_counter_read() can now return a significantly inaccurate result.
      
      If there is any code in the tree which was introduced after
      e8ced39d was merged, and which depends
      upon the new percpu_counter_sum() semantics, that code will break.
      Reported-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • percpu_counter: fix CPU unplug race in percpu_counter_destroy() · fd3d664f
      Eric Dumazet authored
      
      We should first delete the counter from the percpu_counters list
      before freeing its memory, or percpu_counter_hotcpu_callback()
      could dereference a stale pointer while walking that list.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mingming Cao <cmm@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 09 Oct, 2008 1 commit
  5. 11 Jul, 2008 1 commit
    • percpu_counter: new function percpu_counter_sum_and_set · e8ced39d
      Mingming Cao authored
      
      Delayed allocation needs to check the free-block count at every write.
      percpu_counter_read_positive() is not quite accurate enough for this,
      but calling the accurate percpu_counter_sum_positive() on every write
      is quite expensive.
      
      This patch adds a new function that updates the central counter while
      summing the per-cpu counters, improving the accuracy of subsequent
      percpu_counter_read() calls and requiring fewer calls to the expensive
      percpu_counter_sum().
      Signed-off-by: Mingming Cao <cmm@us.ibm.com>
      Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
  6. 30 Apr, 2008 1 commit
  7. 19 Oct, 2007 1 commit
  8. 17 Oct, 2007 8 commits
  9. 16 Jul, 2007 2 commits
  10. 23 Jun, 2006 2 commits