1. 09 Oct, 2008 4 commits
    • block: move stats from disk to part0 · 074a7aca
      Tejun Heo authored
      
      Move stats related fields - stamp, in_flight, dkstats - from disk to
      part0 and unify stat handling such that...
      
      * part_stat_*() now also update part0 if the specified partition
        is not part0, i.e. part_stat_*() are now essentially all_stat_*().
      
      * {disk|all}_stat_*() are gone.
      
      * part_round_stats() is updated similarly.  It handles part0 stats
        automatically and disk_round_stats() is killed.
      
      * part_{inc|dec}_in_flight() is implemented, which automatically updates
        part0 stats for parts other than part0.
      
      * disk_map_sector_rcu() is updated to return part0 if no part matches.
        Combined with the above changes, this makes NULL special case
        handling in callers unnecessary.
      
      * Separate stats show code paths for disk are collapsed into part
        stats show code paths.
      
      * Rename disk_stat_lock/unlock() to part_stat_lock/unlock().
      
      While at it, reposition stat handling macros a bit and add missing
      parentheses around macro parameters.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      074a7aca
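
      A minimal sketch of the caller pattern these changes produce, assuming an
      illustrative wrapper function and variables; part_stat_lock/unlock(),
      disk_map_sector_rcu(), part_round_stats() and part_dec_in_flight() are the
      interfaces described above:

          /* Sketch only: illustrative caller, not code from the commit. */
          static void example_account_io_done(struct gendisk *disk, sector_t sector,
                                              int rw, unsigned int nr_sectors)
          {
                  struct hd_struct *part;
                  int cpu;

                  cpu = part_stat_lock();                   /* RCU + per-cpu access */
                  part = disk_map_sector_rcu(disk, sector); /* part0 if nothing matches */

                  part_stat_inc(cpu, part, ios[rw]);        /* also updates part0 */
                  part_stat_add(cpu, part, sectors[rw], nr_sectors);
                  part_round_stats(cpu, part);              /* handles part0 automatically */
                  part_dec_in_flight(part);                 /* same for in_flight */

                  part_stat_unlock();
          }
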
    • block: fix diskstats access · c9959059
      Tejun Heo authored
      
      There are two variants of stat functions - ones prefixed with double
      underbars which don't care about preemption and ones without which
      disable preemption before manipulating per-cpu counters.  It's unclear
      whether the underbarred ones assume that preemption is disabled on
      entry as some callers don't do that.
      
      This patch unifies diskstats access by implementing disk_stat_lock()
      and disk_stat_unlock() which take care of both RCU (for partition
      access) and preemption (for per-cpu counter access).  diskstats access
      should always be enclosed between the two functions.  As such, there's
      no need for the versions which disable preemption.  They're removed
      and the double-underbar ones are renamed to drop the underbars.  As an
      extra argument is added, there's no danger of using the old versions
      unconverted.
      
      disk_stat_lock() uses get_cpu() and returns the cpu index, and all
      diskstat functions which access per-cpu counters now take a @cpu
      argument to help RT.
      
      This change adds RCU or preemption operations at some places but also
      collapses several preemption ops into one at others.  Overall, the
      performance difference should be negligible as all involved ops are
      very lightweight per-cpu ones.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      c9959059
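
      A minimal sketch of the unified access pattern, assuming an illustrative
      wrapper function; disk_stat_lock()/disk_stat_unlock() and the @cpu-taking
      accessors are the interfaces described above:

          /* Sketch only: illustrative caller, not code from the commit. */
          static void example_account_request(struct gendisk *disk, int rw,
                                              unsigned int nr_sectors)
          {
                  int cpu;

                  cpu = disk_stat_lock();   /* rcu_read_lock() + get_cpu() */
                  disk_stat_inc(cpu, disk, ios[rw]);
                  disk_stat_add(cpu, disk, sectors[rw], nr_sectors);
                  disk_stat_unlock();       /* put_cpu() + rcu_read_unlock() */
          }
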
    • Jens Axboe · 960e739d
    • drop vmerge accounting · 5df97b91
      Mikulas Patocka authored
      
      Remove hw_segments field from struct bio and struct request. Without virtual
      merge accounting they have no purpose.
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      5df97b91
  2. 21 Jul, 2008 1 commit
  3. 30 Jun, 2008 1 commit
    • md: resolve external metadata handling deadlock in md_allow_write · b5470dc5
      Dan Williams authored
      
      md_allow_write() marks the metadata dirty while holding mddev->lock and then
      waits for the write to complete.  For externally managed metadata this causes a
      deadlock as userspace needs to take the lock to communicate that the metadata
      update has completed.
      
      Change md_allow_write() in the 'external' case to start the 'mark active'
      operation and then return -EAGAIN.  The expected side effects while waiting
      for userspace to write 'active' to 'array_state' are: holding off reshape
      (the code currently handles -ENOMEM), causing some 'stripe_cache_size' change
      requests to fail, causing some GET_BITMAP_FILE ioctl requests to fall back to
      GFP_NOIO, and causing updates to 'raid_disks' to fail.  Except for the
      'stripe_cache_size' changes, these failures can be mitigated by coordinating
      with mdmon.
      
      md_write_start() still prevents writes from occurring until the metadata
      handler has had a chance to take action as it unconditionally waits for
      MD_CHANGE_CLEAN to be cleared.
      
      [neilb@suse.de: return -EAGAIN, try GFP_NOIO]
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      b5470dc5
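
      A minimal sketch of how a caller reacts to the new return value, assuming an
      illustrative function name; only md_allow_write() and the -EAGAIN/-ENOMEM
      semantics come from the commit:

          /* Sketch only: illustrative caller, not code from the commit. */
          static int example_prepare_metadata_update(mddev_t *mddev)
          {
                  int err = md_allow_write(mddev);

                  if (err == -EAGAIN)
                          /* External metadata: 'mark active' has been started but
                           * userspace (mdmon) has not yet written 'active' to
                           * array_state, so fail this request for now. */
                          return err;
                  if (err)
                          return err;   /* e.g. -ENOMEM, handled as before */

                  /* ... safe to proceed with the operation ... */
                  return 0;
          }
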
  4. 27 Jun, 2008 2 commits
  5. 24 May, 2008 3 commits
    • md: restart recovery cleanly after device failure. · dfc70645
      NeilBrown authored
      
      When we get any IO error during a recovery (rebuilding a spare), we abort
      the recovery and restart it.
      
      For RAID6 (and multi-drive RAID1) it may not be best to restart at the
      beginning: when multiple failures can be tolerated, the recovery may be
      able to continue and re-doing all that has already been done doesn't make
      sense.
      
      We already have the infrastructure to record where a recovery is up to
      and restart from there, but it is not being used properly.
      This is because:
        - We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
          which causes the recovery not to be checkpointed.
        - We remove spares and then re-add them, which loses important state
          information.
      
      The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
      needed.  If there is an error, the relevant drive will be marked as
      Faulty, and that is enough to ensure correct handling of the error.  So we
      first remove MD_RECOVERY_ERR, changing some of the uses of it to
      MD_RECOVERY_INTR.
      
      Then we cause the attempt to remove a non-faulty device from an array to
      fail (unless recovery is impossible as the array is too degraded).  Then
      when remove_and_add_spares attempts to remove the devices on which
      recovery can continue, it will fail, they will remain in place, and
      recovery will continue on them as desired.
      
      Issue:  If we are halfway through rebuilding a spare and another drive
      fails, and a new spare is immediately available,  do we want to:
       1/ complete the current rebuild, then go back and rebuild the new spare or
       2/ restart the rebuild from the start and rebuild both devices in
          parallel.
      
      Both options can be argued for.  The code currently takes option 2 as
        a/ this requires least code change
        b/ this results in a minimally-degraded array in minimal time.
      
      Cc: "Eivind Sarto" <ivan@kasenna.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      dfc70645
    • md: raid1: Fix restoration of bio between failed read and write. · 698b18c1
      NeilBrown authored
      
      When performing a "recovery" or "check" pass on a RAID1 array, we read
      from each device and possibly, if there is a difference or a read error,
      write back to some devices.
      
      We use the same 'bio' for both read and write, resetting various fields
      between the two operations.
      
      We forgot to reset bv_offset and bv_len however.  These are often left
      unchanged, but in the case where there is an IO error one or two sectors
      into a page, they are changed.
      
      This results in correctable errors not being corrected properly.  It does
      not result in any data corruption.
      
      Cc: "Fairbanks, David" <David.Fairbanks@stratus.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      698b18c1
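
      A minimal sketch of the kind of reset the fix describes, assuming the bio is
      walked vector by vector before being reused for the write pass:

          /* Sketch only: restore each bio_vec to cover its whole page before
           * the same bio is resubmitted for the write-back. */
          int i;

          for (i = 0; i < bio->bi_vcnt; i++) {
                  bio->bi_io_vec[i].bv_offset = 0;
                  bio->bi_io_vec[i].bv_len = PAGE_SIZE;
          }
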
    • md: fix possible oops when removing a bitmap from an active array · 84255d10
      NeilBrown authored
      
      It is possible to add a write-intent bitmap to an active array, or remove
      the bitmap that is there.
      
      When we do either of these, we 'quiesce' the array, which causes make_request to
      block in "wait_barrier()".
      
      However we are sampling the value of "mddev->bitmap" before the
      wait_barrier call, and using it afterwards.  This can result in using a
      bitmap structure that has been freed.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      84255d10
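
      A minimal sketch of the shape of the fix, with illustrative surrounding
      code; the point is simply to read mddev->bitmap after wait_barrier()
      rather than before:

          /* Sketch only: sample the bitmap pointer after the barrier so that a
           * bitmap freed while the array was quiesced cannot be used. */
          wait_barrier(conf);
          bitmap = mddev->bitmap;
          if (bitmap)
                  bitmap_startwrite(bitmap, bio->bi_sector, sectors, 0);
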
  6. 14 May, 2008 1 commit
    • Remove blkdev warning triggered by using md · e7e72bf6
      Neil Brown authored
      
      As setting and clearing queue flags now requires that we hold a spinlock
      on the queue, and as blk_queue_stack_limits is called without that lock,
      get the lock inside blk_queue_stack_limits.
      
      For blk_queue_stack_limits to be able to find the right lock, each md
      personality needs to set q->queue_lock to point to the appropriate lock.
      Those personalities which didn't previously use a spin_lock now use
      q->__queue_lock.  So always initialise that lock when the queue is allocated.
      
      With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
      longer cause warnings as it will be clear that the proper lock is held.
      
      Thanks to Dan Williams for review and fixing the silly bugs.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Alistair John Strachan <alistair@devzero.co.uk>
      Cc: Nick Piggin <npiggin@suse.de>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Cc: Jacek Luczak <difrost.kernel@gmail.com>
      Cc: Prakash Punnoor <prakash@punnoor.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7e72bf6
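
      A minimal sketch of the two halves of the change, assuming a personality's
      conf->device_lock as the example lock and QUEUE_FLAG_CLUSTER as the flag
      touched by the stacking code:

          /* Sketch only - md personality setup: tell the queue which lock
           * protects its queue flags. */
          mddev->queue->queue_lock = &conf->device_lock;

          /* Sketch only - inside blk_queue_stack_limits(): take that lock
           * before changing a queue flag. */
          unsigned long flags;

          spin_lock_irqsave(t->queue_lock, flags);
          queue_flag_clear(QUEUE_FLAG_CLUSTER, t);
          spin_unlock_irqrestore(t->queue_lock, flags);
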
  7. 30 Apr, 2008 1 commit
  8. 28 Apr, 2008 1 commit
  9. 04 Mar, 2008 2 commits
    • md: fix possible raid1/raid10 deadlock on read error during resync · 1c830532
      NeilBrown authored
      
      Thanks to K.Tanaka and the scsi fault injection framework, here is a fix for
      another possible deadlock in raid1/raid10 error handling.
      
      If a read request returns an error while a resync is happening and a resync
      request is pending, the attempt to fix the error will block until the resync
      progresses, and the resync will block until the read request completes.  Thus
      a deadlock.
      
      This patch fixes the problem.
      
      Cc: "K.Tanaka" <k-tanaka@ce.jp.nec.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1c830532
    • md: fix deadlock in md/raid1 and md/raid10 when handling a read error · a35e63ef
      NeilBrown authored
      
      When handling a read error, we freeze the array to stop any other IO while
      attempting to over-write with correct data.
      
      This is done in the raid1d(raid10d) thread and must wait for all submitted IO
      to complete (except for requests that failed and are sitting in the retry
      queue - these are counted in ->nr_queued and will stay there during a freeze).
      
      However write requests need attention from raid1d as bitmap updates might be
      required.  This can cause a deadlock as raid1 is waiting for requests to
      finish that themselves need attention from raid1d.
      
      So we create a new function 'flush_pending_writes' to give that attention, and
      call it in freeze_array to be sure that we aren't waiting on raid1d.
      
      Thanks to "K.Tanaka" <k-tanaka@ce.jp.nec.com> for finding and reporting this
      problem.
      
      Cc: "K.Tanaka" <k-tanaka@ce.jp.nec.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a35e63ef
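
      A minimal sketch of what flush_pending_writes() conceptually does, assuming
      raid1-style names (conf->pending_bio_list, conf->device_lock); only the
      function's purpose is taken from the commit:

          /* Sketch only: do the work that would otherwise wait for raid1d -
           * flush bitmap updates, then submit the queued write bios. */
          static void example_flush_pending_writes(conf_t *conf)
          {
                  struct bio *bio;

                  spin_lock_irq(&conf->device_lock);
                  bio = bio_list_get(&conf->pending_bio_list);
                  spin_unlock_irq(&conf->device_lock);

                  bitmap_unplug(conf->mddev->bitmap);   /* bitmap updates first */

                  while (bio) {
                          struct bio *next = bio->bi_next;

                          bio->bi_next = NULL;
                          generic_make_request(bio);
                          bio = next;
                  }
          }
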
  10. 06 Feb, 2008 3 commits
  11. 09 Nov, 2007 1 commit
  12. 19 Oct, 2007 1 commit
  13. 17 Oct, 2007 1 commit
  14. 16 Oct, 2007 1 commit
  15. 10 Oct, 2007 1 commit
  16. 22 Aug, 2007 2 commits
  17. 24 Jul, 2007 1 commit
  18. 17 Jul, 2007 1 commit
    • md: change bitmap_unplug and others to void functions · 4ad13663
      NeilBrown authored
      
      bitmap_unplug only ever returns 0, so it may as well be void.  Two callers try
      to print a message if it returns non-zero, but that message is already printed
      by bitmap_file_kick.
      
      write_page returns an error which is not consistently checked.  It always
      causes BITMAP_WRITE_ERROR to be set on an error, and that can more
      conveniently be checked.
      
      When the return of write_page is checked, an error causes bitmap_file_kick to
      be called - so move that call into write_page - and protect against recursive
      calls into bitmap_file_kick.
      
      bitmap_update_sb returns an error that is never checked.
      
      So make these 'void' and be consistent about checking the bit.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4ad13663
  19. 16 Jun, 2007 1 commit
  20. 10 May, 2007 1 commit
  21. 09 May, 2007 2 commits
    • Revert "md: improve partition detection in md array" · 44ce6294
      Linus Torvalds authored
      This reverts commit 5b479c91.

      Quoth Neil Brown:
      
        "It causes an oops when auto-detecting raid arrays, and it doesn't
         seem easy to fix.
      
         The array may not be 'open' when do_md_run is called, so
         bdev->bd_disk might be NULL, so bd_set_size can oops.
      
         This whole approach of opening an md device before it has been
         assembled just seems to get more and more painful.  I think I'm going
         to have to come up with something clever to provide both backward
         compatibility with usage expectations, and sane integration into the
         rest of the kernel."
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      44ce6294
    • md: improve partition detection in md array · 5b479c91
      NeilBrown authored
      
      md currently uses ->media_changed to make sure rescan_partitions
      is called on md arrays after they are assembled.
      
      However that doesn't happen until the array is opened, which is later
      than some people would like.
      
      So use blkdev_ioctl to do the rescan as soon as the
      array has been assembled.
      
      This means we can remove all the ->change infrastructure as it was only used
      to trigger a partition rescan.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b479c91
  22. 26 Jan, 2007 2 commits
  23. 11 Jan, 2007 1 commit
  24. 13 Dec, 2006 1 commit
  25. 10 Dec, 2006 1 commit
    • [PATCH] md: assorted md and raid1 one-liners · 17571284
      NeilBrown authored
      
      Fix a few bugs that meant that:
        - superblocks weren't always written at exactly the right time (this
          could show up if the array was not written to - writing to the array
          causes lots of superblock updates and so hides these errors).
      
        - restarting device recovery after a clean shutdown (version-1 metadata
          only) didn't work as intended (or at all).
      
      1/ Ensure superblock is updated when a new device is added.
      2/ Remove an inappropriate test on MD_RECOVERY_SYNC in md_do_sync.
         The body of this if takes one of two branches depending on whether
         MD_RECOVERY_SYNC is set, so testing it in the clause of the if
         is wrong.
      3/ Flag superblock for updating after a resync/recovery finishes.
      4/ If we find the need to restart a recovery in the middle (version-1
         metadata only) make sure a full recovery (not just as guided by
         bitmaps) does get done.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      17571284
  26. 28 Oct, 2006 1 commit
  27. 03 Oct, 2006 2 commits