1. 17 Dec, 2010 1 commit
    • block: Deprecate QUEUE_FLAG_CLUSTER and use queue_limits instead · e692cb66
      Martin K. Petersen authored
      
      When stacking devices, a request_queue is not always available. This
      forced us to have a no_cluster flag in the queue_limits that could be
      used as a carrier until the request_queue had been set up for a
      metadevice.
      
      There were several problems with that approach. First of all, it was up
      to the stacking device to remember to set the queue flag after stacking had
      completed. Also, the queue flag and the queue limits had to be kept in
      sync at all times. We got that wrong, which could lead to us issuing
      commands that went beyond the max scatterlist limit set by the driver.
      
      The proper fix is to avoid having two flags for tracking the same thing.
      We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the
      block layer merging functions. The queue_limit 'no_cluster' is turned
      into 'cluster' to avoid double negatives and to ease stacking.
      Clustering defaults to being enabled as before. The queue flag logic is
      removed from the stacking function, and explicitly setting the cluster
      flag is no longer necessary in DM and MD.
      Reported-by: Ed Lin <ed.lin@promise.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
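      A minimal sketch of the idea described above (not the verbatim patch): the
      merging code reads the clustering setting straight from queue_limits, so
      stacked devices only need to stack limits.

      static inline unsigned int blk_queue_cluster(struct request_queue *q)
      {
              return q->limits.cluster;   /* replaces test_bit(QUEUE_FLAG_CLUSTER, ...) */
      }

      /* in the segment-merging path: */
      if (!blk_queue_cluster(q))
              goto new_segment;           /* never coalesce when clustering is off */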
  2. 09 Dec, 2010 4 commits
    • md: protect against NULL reference when waiting to start a raid10. · 589a594b
      NeilBrown authored
      
      When we fail to start a raid10 for some reason, we call
      md_unregister_thread to kill the thread that was created.
      
      Unfortunately md_thread() will then make one call into the handler
      (raid10d) even though md_wakeup_thread has not been called.  This is
      not safe and as md_unregister_thread is called after mddev->private
      has been set to NULL, it will definitely cause a NULL dereference.
      
      So fix this at both ends:
       - md_thread should only call the handler if THREAD_WAKEUP has been
         set.
       - raid10 should call md_unregister_thread before setting things
         to NULL just like all the other raid modules do.
      
      This is applicable to 2.6.35 and later.
      
      Cc: stable@kernel.org
      Reported-by: default avatar"Citizen" <citizen_lee@thecus.com>
      Signed-off-by: default avatarNeilBrown <neilb@suse.de>
      589a594b
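      Roughly, the guarded thread loop described above looks like this
      (simplified from md.c; exact details of the patch may differ):

      while (!kthread_should_stop()) {
              wait_event_interruptible_timeout(thread->wqueue,
                      test_bit(THREAD_WAKEUP, &thread->flags)
                      || kthread_should_stop(),
                      thread->timeout);

              /* only enter the handler for a genuine wakeup */
              if (test_and_clear_bit(THREAD_WAKEUP, &thread->flags))
                      thread->run(thread->mddev);
      }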
    • md: fix bug with re-adding of partially recovered device. · 1a855a06
      NeilBrown authored
      
      With v0.90 metadata, a hot-spare does not become a full member of the
      array until recovery is complete.  So if we re-add such a device to
      the array, we know that all of it is as up-to-date as the event count
      would suggest, and so a bitmap-based recovery is possible.
      
      However with v1.x metadata, the hot-spare immediately becomes a full
      member of the array, but it records how much of the device has been
      recovered.  If the array is stopped and re-assembled, recovery starts
      from this point.
      
      When such a device is hot-added to an array we currently lose the 'how
      much is recovered' information and incorrectly include it as a full
      in-sync member (after bitmap-based fixup).
      This is wrong and unsafe and could corrupt data.
      
      So be more careful about setting saved_raid_disk - which is what
      guides the re-adding of devices back into an array.
      The new code matches the code in slot_store which does a similar
      thing, which is encouraging.
      
      This is suitable for any -stable kernel.
      Reported-by: default avatar"Dailey, Nate" <Nate.Dailey@stratus.com>
      Cc: stable@kernel.org
      Signed-off-by: default avatarNeilBrown <neilb@suse.de>
      1a855a06
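      The guarded assignment is along these lines (an illustrative condition,
      not the exact hunk): only keep the device as a candidate for bitmap-based
      re-add if its recorded recovery had reached the end of the device.

      if (rdev->raid_disk >= 0 &&
          rdev->recovery_offset >= rdev->sectors)    /* fully recovered */
              rdev->saved_raid_disk = rdev->raid_disk;
      else
              rdev->saved_raid_disk = -1;            /* force a real recovery */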
    • md: fix possible deadlock in handling flush requests. · a035fc3e
      NeilBrown authored
      As recorded in
          https://bugzilla.kernel.org/show_bug.cgi?id=24012

      it is possible for a flush request through md to hang.  This is due to
      an interaction between the recursion avoidance in
      generic_make_request, the insistence in md of only having one flush
      active at a time, and the possibility of dm (or md) submitting two
      flush requests to a device from the one generic_make_request.
      
      If a generic_make_request call into dm causes two flush requests to be
      queued (as happens if the dm table has two targets - they get one
      each), these two will be queued inside generic_make_request.
      
      Assume they are for the same md device.
      The first is processed and causes 1 or more flush requests to be sent
      to lower devices.  These get queued within generic_make_request too.
      Then the second flush to the md device gets handled and it blocks
      waiting for the first flush to complete.  But it won't complete until
      the two lower-device requests complete, and they haven't even been
      submitted yet as they are on the generic_make_request queue.
      
      The deadlock can be broken by using a separate thread to submit the
      requests to lower devices.  md has such a thread readily available:
      md_wq.
      
      So use it to submit these requests.
      Reported-by: Giacomo Catenazzi <cate@cateee.net>
      Tested-by: Giacomo Catenazzi <cate@cateee.net>
      Signed-off-by: NeilBrown <neilb@suse.de>
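      Schematically, the handover to md_wq looks like this (simplified;
      locking and completion accounting are omitted):

      static void md_flush_request(mddev_t *mddev, struct bio *bio)
      {
              /* remember the flush being processed ... */
              mddev->flush_bio = bio;

              /* ... and let a workqueue thread, not the caller buried inside
               * generic_make_request(), submit the per-device flushes */
              INIT_WORK(&mddev->flush_work, submit_flushes);
              queue_work(md_wq, &mddev->flush_work);
      }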
    • md: move code in to submit_flushes. · a7a07e69
      NeilBrown authored
      
      submit_flushes is called from exactly one place.
      Move the code that is before and after that call into
      submit_flushes.
      
      This has no functional change, but will make the next patch
      smaller and easier to follow.
      Signed-off-by: NeilBrown <neilb@suse.de>
  3. 08 Dec, 2010 1 commit
  4. 24 Nov, 2010 2 commits
    • md: Call blk_queue_flush() to establish flush/fua support · be20e6c6
      Darrick J. Wong authored
      
      Before 2.6.37, the md layer had a mechanism for catching I/Os with the
      barrier flag set, and translating the barrier into barriers for all
      the underlying devices.  With 2.6.37, I/O barriers have become plain
      old flushes, and the md code was updated to reflect this.  However,
      one piece was left out -- the md layer does not tell the block layer
      that it supports flushes or FUA access at all, which results in md
      silently dropping flush requests.
      
      Since the support already seems there, just add this one piece of
      bookkeeping.
      Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
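      The missing bookkeeping is essentially a one-liner when the md queue is
      set up, something like:

      /* advertise that md passes flushes and FUA writes down to its members */
      blk_queue_flush(mddev->queue, REQ_FLUSH | REQ_FUA);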
    • md: fix return value of rdev_size_change() · c26a44ed
      Justin Maggard authored
      When trying to grow an array by enlarging component devices,
      rdev_size_store() expects the return value of rdev_size_change() to be
      in sectors, but the actual value is returned in KBs.
      
      This functionality was broken by commit dd8ac336, so this patch is
      suitable for any kernel since 2.6.30.
      
      Cc: stable@kernel.org
      Signed-off-by: Justin Maggard <jmaggard10@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
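      Since a sector is 512 bytes, the value only needed its units fixed;
      conceptually the change is:

      /* rdev_size_store() works in 512-byte sectors: 1 KiB == 2 sectors */
      return num_sectors;              /* correct: sectors */
      /* was: return num_sectors / 2;     kilobytes, half the expected value */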
  5. 13 Nov, 2010 3 commits
    • block: clean up blkdev_get() wrappers and their users · d4d77629
      Tejun Heo authored
      
      After recent blkdev_get() modifications, open_by_devnum() and
      open_bdev_exclusive() are simple wrappers around blkdev_get().
      Replace them with blkdev_get_by_dev() and blkdev_get_by_path().
      
      blkdev_get_by_dev() is identical to open_by_devnum().
      blkdev_get_by_path() is slightly different in that it doesn't
      automatically add %FMODE_EXCL to @mode.
      
      All users are converted.  Most conversions are mechanical and don't
      introduce any behavior difference.  There are several exceptions.
      
      * btrfs now sets FMODE_EXCL in btrfs_device->mode, so there's no
        reason to OR it explicitly on blkdev_put().
      
      * gfs2, nilfs2 and the generic mount_bdev() now set FMODE_EXCL in
        sb->s_mode.
      
      * With the above changes, sb->s_mode now always should contain
        FMODE_EXCL.  WARN_ON_ONCE() added to kill_block_super() to detect
        errors.
      
      The new blkdev_get_*() functions come with proper docbook comments.
      While at it, add a function description to blkdev_get() too.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Cc: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Mike Snitzer <snitzer@redhat.com>
      Cc: Joern Engel <joern@lazybastard.org>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: KONISHI Ryusuke <konishi.ryusuke@lab.ntt.co.jp>
      Cc: reiserfs-devel@vger.kernel.org
      Cc: xfs-masters@oss.sgi.com
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
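      Typical use of the path-based wrapper (the device path and holder below
      are placeholders): %FMODE_EXCL must be passed explicitly, and the same
      mode is used for blkdev_put().

      struct block_device *bdev;

      bdev = blkdev_get_by_path("/dev/sdX",                        /* placeholder */
                                FMODE_READ | FMODE_WRITE | FMODE_EXCL,
                                holder);                           /* claim cookie */
      if (IS_ERR(bdev))
              return PTR_ERR(bdev);
      /* ... use the device ... */
      blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);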
    • block: make blkdev_get/put() handle exclusive access · e525fd89
      Tejun Heo authored
      
      Over time, block layer has accumulated a set of APIs dealing with bdev
      open, close, claim and release.
      
      * blkdev_get/put() are the primary open and close functions.
      
      * bd_claim/release() deal with exclusive open.
      
      * open/close_bdev_exclusive() are combination of open and claim and
        the other way around, respectively.
      
      * bd_link/unlink_disk_holder() to create and remove holder/slave
        symlinks.
      
      * open_by_devnum() wraps bdget() + blkdev_get().
      
      The interface is a bit confusing and the decoupling of open and claim
      makes it impossible to properly guarantee exclusive access as
      in-kernel open + claim sequence can disturb the existing exclusive
      open even before the block layer knows the current open is for another
      exclusive access.  Reorganize the interface such that,
      
      * blkdev_get() is extended to include exclusive access management.
        @holder argument is added and, if @FMODE_EXCL is specified, it will
        gain exclusive access atomically w.r.t. other exclusive accesses.
      
      * blkdev_put() is similarly extended.  It now takes @mode argument and
        if @FMODE_EXCL is set, it releases an exclusive access.  Also, when
        the last exclusive claim is released, the holder/slave symlinks are
        removed automatically.
      
      * bd_claim/release() and close_bdev_exclusive() are no longer
        necessary and either made static or removed.
      
      * bd_link_disk_holder() remains the same but bd_unlink_disk_holder()
        is no longer necessary and removed.
      
      * open_bdev_exclusive() becomes a simple wrapper around lookup_bdev()
        and blkdev_get().  It also has an unexpected extra bdev_read_only()
        test which probably should be moved into blkdev_get().
      
      * open_by_devnum() is modified to take @holder argument and pass it to
        blkdev_get().
      
      Most bdev open/close operations are unified into blkdev_get/put()
      and most exclusive accesses are tested atomically at open time (as
      they should be).  This cleans up the code and removes some corner
      cases - both valid and invalid, but unnecessary all the same.
      
      open_bdev_exclusive() and open_by_devnum() can use further cleanup -
      rename to blkdev_get_by_path() and blkdev_get_by_devt() and drop
      special features.  Well, let's leave them for another day.
      
      Most conversions are straight-forward.  drbd conversion is a bit more
      involved as there was some reordering, but the logic should stay the
      same.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Neil Brown <neilb@suse.de>
      Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Acked-by: Philipp Reisner <philipp.reisner@linbit.com>
      Cc: Peter Osterlund <petero2@telia.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Mark Fasheh <mfasheh@suse.com>
      Cc: Joel Becker <joel.becker@oracle.com>
      Cc: Alex Elder <aelder@sgi.com>
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: dm-devel@redhat.com
      Cc: drbd-dev@lists.linbit.com
      Cc: Leo Chen <leochen@broadcom.com>
      Cc: Scott Branden <sbranden@broadcom.com>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
      Cc: Joern Engel <joern@logfs.org>
      Cc: reiserfs-devel@vger.kernel.org
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
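      In short, open and claim become one atomic step; a sketch of the
      resulting calling convention (error handling trimmed):

      /* open + exclusive claim in a single call; 'holder' identifies the claimer */
      err = blkdev_get(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL, holder);

      /* releasing with the same mode also drops the exclusive claim and,
       * on the last release, removes the holder/slave symlinks */
      blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);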
    • block: simplify holder symlink handling · e09b457b
      Tejun Heo authored
      
      Code to manage symlinks in /sys/block/*/{holders|slaves} is overly
      complex, with multiple holder considerations, redundant extra
      references to all involved kobjects, unused generic kobject holder
      support and an unnecessary mixup with bd_claim/release functionality.
      
      Strip it down to what's necessary (single gendisk holder) and make it
      use a separate interface.  This is a step for cleaning up
      bd_claim/release.  This patch makes dm-table slightly more complex but
      it will be simplified again with further changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: Neil Brown <neilb@suse.de>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: dm-devel@redhat.com
  6. 10 Nov, 2010 1 commit
  7. 28 Oct, 2010 5 commits
    • md: use separate bio pool for each md device. · a167f663
      NeilBrown authored
      
      bio_clone and bio_alloc allocate from a common bio pool.
      If an md device is stacked with other devices that use this pool, or under
      something like swap which uses the pool, then the multiple calls on
      the pool can cause deadlocks.
      
      So allocate a local bio pool for each md array and use that rather
      than the common pool.
      
      This pool is used both for regular IO and metadata updates.
      Signed-off-by: NeilBrown <neilb@suse.de>
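      The shape of the change (pool size shown is illustrative, not the value
      md actually uses):

      /* one bio_set per array, created when the array is set up */
      mddev->bio_set = bioset_create(BIO_POOL_SIZE, 0);

      /* regular I/O and metadata updates then allocate from it */
      bio = bio_alloc_bioset(GFP_NOIO, nr_iovecs, mddev->bio_set);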
    • md: change type of first arg to sync_page_io. · 2b193363
      NeilBrown authored
      
      Currently sync_page_io takes a 'bdev'.
      Every caller passes 'rdev->bdev'.
      We will soon want another field out of the rdev in sync_page_io,
      so just pass the rdev itself instead of the bdev taken from it.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: fix and update workqueue usage · e804ac78
      Tejun Heo authored
      
      Workqueue usage in md has two problems.
      
      * Flush can be used during or depended upon by memory reclaim, but md
        uses the system workqueue for flush_work which may lead to deadlock.
      
      * md depends on flush_scheduled_work() to achieve exclusion against
        completion of removal of previous instances.  flush_scheduled_work()
        may incur unexpected amount of delay and is scheduled to be removed.
      
      This patch adds two workqueues to md - md_wq and md_misc_wq.  The
      former is guaranteed to make forward progress under memory pressure
      and serves flush_work.  The latter serves as the flush domain for
      other works.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: NeilBrown <neilb@suse.de>
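      The setup amounts to two dedicated queues, roughly:

      /* rescuer-backed queue: guaranteed forward progress under memory
       * pressure, used for flush_work */
      md_wq = alloc_workqueue("md", WQ_MEM_RECLAIM, 0);

      /* ordinary queue acting as the flush domain for everything else */
      md_misc_wq = alloc_workqueue("md_misc", 0, 0);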
    • md: remove md_mutex locking. · 4b532c9b
      NeilBrown authored
      
      lock_kernel calls were recently pushed down into open/release
      functions.
      md doesn't need that protection.
      Then the BKL calls were changed to md_mutex.  We don't need those
      either.
      So remove it all.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Fix regression with raid1 arrays without persistent metadata. · d97a41dc
      NeilBrown authored
      A RAID1 which has no persistent metadata, whether internal or
      external, will hang on the first write.
      This is caused by commit 070dc6dd.
      In that case, MD_CHANGE_PENDING never gets cleared.
      
      So during md_update_sb, if the array is neither persistent nor external,
      clear MD_CHANGE_PENDING.
      
      This is suitable for 2.6.36-stable.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: stable@kernel.org
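      The fix is a small conditional in md_update_sb(), roughly (the exact
      condition in the patch may differ):

      /* arrays with no superblock to write must clear the pending flag
       * themselves, or the first write blocks forever */
      if (!mddev->persistent && !mddev->external)
              clear_bit(MD_CHANGE_PENDING, &mddev->flags);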
  8. 05 Oct, 2010 1 commit
    • block: autoconvert trivial BKL users to private mutex · 2a48fc0a
      Arnd Bergmann authored
      
      The block device drivers have all gained new lock_kernel
      calls from a recent pushdown, and some of the drivers
      were already using the BKL before.
      
      This turns the BKL into a set of per-driver mutexes.
      Still need to check whether this is safe to do.
      
      file=$1
      name=$2
      if grep -q lock_kernel ${file} ; then
          if grep -q 'include.*linux.mutex.h' ${file} ; then
                  sed -i '/include.*<linux\/smp_lock.h>/d' ${file}
          else
                  sed -i 's/include.*<linux\/smp_lock.h>.*$/include <linux\/mutex.h>/g' ${file}
          fi
          sed -i ${file} \
              -e "/^#include.*linux.mutex.h/,$ {
                      1,/^\(static\|int\|long\)/ {
                           /^\(static\|int\|long\)/istatic DEFINE_MUTEX(${name}_mutex);
      
      } }"  \
          -e "s/\(un\)*lock_kernel\>[ ]*()/mutex_\1lock(\&${name}_mutex)/g" \
          -e '/[      ]*cycle_kernel_lock();/d'
      else
          sed -i -e '/include.*\<smp_lock.h\>/d' ${file}  \
                      -e '/cycle_kernel_lock()/d'
      fi
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  9. 16 Sep, 2010 2 commits
    • md: fix v1.x metadata update when a disk is missing. · ddcf3522
      NeilBrown authored
      
      If an array with 1.x metadata is assembled with the last disk missing,
      md doesn't properly record the fact that the disk was missing.
      
      This is unlikely to cause a real problem as the event count will be
      different to the count on the missing disk so it won't be included in
      the array.  However it could still cause confusion.
      
      So make sure we clear all the relevant slots, not just the early ones.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: call md_update_sb even for 'external' metadata arrays. · 126925c0
      NeilBrown authored
      
      Now that we depend on md_update_sb to clear variable bits in
      mddev->flags (rather than trying not to set them) it is important to
      always call md_update_sb when appropriate.
      
      md_check_recovery has this job but explicitly avoids it for ->external
      metadata arrays.  This is no longer appropriate, or needed.
      
      However we do want to avoid taking the mddev lock if only
      MD_CHANGE_PENDING is set as that is not cleared by md_update_sb for
      external-metadata arrays.
      Reported-by: default avatar"Kwolek, Adam" <adam.kwolek@intel.com>
      Signed-off-by: default avatarNeilBrown <neilb@suse.de>
      126925c0
  10. 10 Sep, 2010 1 commit
    • md: implement REQ_FLUSH/FUA support · e9c7469b
      Tejun Heo authored
      
      This patch converts md to support REQ_FLUSH/FUA instead of the now
      deprecated REQ_HARDBARRIER.  In the core part (md.c), the following
      changes are notable.
      
      * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
        processing of other requests and thus there is no reason to mark the
        queue congested while FLUSH/FUA is in progress.
      
      * REQ_FLUSH/FUA failures are final and their users don't need retry
        logic.  Retry logic is removed.
      
      * Preflush needs to be issued to all member devices but FUA writes can
        be handled the same way as other writes - their processing can be
        deferred to request_queue of member devices.  md_barrier_request()
        is renamed to md_flush_request() and simplified accordingly.
      
      For linear, raid0 and multipath, the core changes are enough.  raid1,
      5 and 10 need the following conversions.
      
      * raid1: Handling of FLUSH/FUA bio's can simply be deferred to
        request_queues of member devices.  Barrier related logic removed.
      
      * raid5: Queue draining logic dropped.  FUA bit is propagated through
        biodrain and stripe reconstruction such that all the updated parts
        of the stripe are written out with FUA writes if any of the dirtying
        writes was FUA.  preread_active_stripes handling in make_request()
        is updated as suggested by Neil Brown.
      
      * raid10: FUA bit needs to be propagated to write clones.
      
      linear, raid0, 1, 5 and 10 tested.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
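      Schematically, the preflush fan-out in md_flush_request() looks like this
      (simplified; completion accounting is omitted and names follow md.c):

      list_for_each_entry(rdev, &mddev->disks, same_set) {
              struct bio *bi = bio_alloc(GFP_NOIO, 0);   /* empty flush bio */

              bi->bi_end_io  = md_end_flush;
              bi->bi_private = rdev;
              bi->bi_bdev    = rdev->bdev;
              submit_bio(WRITE_FLUSH, bi);
      }
      /* FUA data writes simply keep REQ_FUA set and are passed to members */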
  11. 30 Aug, 2010 2 commits
    • md: resolve confusion of MD_CHANGE_CLEAN · 070dc6dd
      NeilBrown authored
      
      MD_CHANGE_CLEAN is used for two different purposes and this leads to
      confusion.
      One of the purposes is largely mirrored by MD_CHANGE_PENDING which is
      not used for anything else, so have MD_CHANGE_PENDING take over that
      purpose fully.
      
      The two purposes are:
       1/ tell md_update_sb that an update is needed and that it is just a
         clean/dirty transition.
       2/ tell user-space that a transition from clean to dirty is pending
          (something wants to write), and tell the kernel (by clearing the
          flag) that the transition is OK.

      The first purpose remains with MD_CHANGE_CLEAN, the second is moved
      fully to MD_CHANGE_PENDING.
      
      This means that various places which conditionally set or cleared
      MD_CHANGE_CLEAN no longer need to be conditional.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: don't clear MD_CHANGE_CLEAN in md_update_sb() for external arrays · bd52b746
      Dan Williams authored
      
      If this bit is cleared in md_update_sb() the kernel will allow writes to the
      array if userspace triggers md_allow_write(), e.g. through stripe_cache_size,
      when mdmon is not active.  When mdmon is active the array transitions to
      active-idle bypassing write-pending, setting up a race for mdmon to set the
      array clean before a write arrives.
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  12. 17 Aug, 2010 1 commit
    • Update recovery_offset even when external metadata is used. · 3a3a5ddb
      NeilBrown authored
      
      The update of ->recovery_offset in sync_sbs is appropriate even when
      external metadata is in use.  However sync_sbs is only called when native
      metadata is used.
      
      So move that update in to the top of md_update_sb (which is the only
      caller of sync_sbs) before the test on ->external.
      
      This moves the update out of ->write_lock protection, but those fields
      only need ->reconfig_mutex protection which they still have.
      
      Also move the test on ->persistent up to where ->external is set as
      for metadata update purposes they are the same.
      
      Clear MD_CHANGE_DEVS and MD_CHANGE_CLEAN as they can only be confusing
      if ->external is set or ->persistent isn't.
      
      Finally move the update of ->utime down as it is only relevant (like
      the ->events update) for native metadata.
      Signed-off-by: NeilBrown <neilb@suse.de>
      Reported-by: "Kwolek, Adam" <adam.kwolek@intel.com>
  13. 08 Aug, 2010 3 commits
    • md: clean up do_md_stop · 6e17b027
      NeilBrown authored
      
      There is only one error exit from do_md_stop, so make that more
      explicit and discard the 'err' variable.
      Also drop the 'revalidate' variable by moving the unlock calls around.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: fix another deadlock with removing sysfs attributes. · bb4f1e9d
      NeilBrown authored
      
      Moving the deletion of sysfs attributes from reconfig_mutex to
      open_mutex didn't really help, as a process can try to take
      open_mutex while holding reconfig_mutex, so the same deadlock can
      happen, just requiring one more process to be involved in the chain.
      
      It looks like I cannot easily use locking to wait for the sysfs
      deletion to complete, so don't.
      
      The only things that we cannot do while the deletions are still
      pending are those which can change the sysfs namespace: run,
      takeover, stop.  Each of these can fail with -EBUSY.
      So set a flag while doing a sysfs deletion, and fail run, takeover,
      stop if that flag is set.
      
      This is suitable for 2.6.35.x
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
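      The flag-based exclusion looks roughly like this (simplified; the field
      name follows md):

      mddev->sysfs_active = 1;          /* sysfs deletion in progress */
      /* ... delete the attribute groups without holding the mutexes ... */
      mddev->sysfs_active = 0;

      /* and in run/takeover/stop: */
      if (mddev->sysfs_active)
              return -EBUSY;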
    • md: move revalidate_disk() back outside open_mutex · 147e0b6a
      Dan Williams authored
      Commit b821eaa5 "md: remove ->changed and related code" moved
      revalidate_disk() under open_mutex, and lockdep noticed.
      
      [ INFO: possible circular locking dependency detected ]
      2.6.32-mdadm-locking #1
      -------------------------------------------------------
      mdadm/3640 is trying to acquire lock:
       (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff811acecb>] revalidate_disk+0x5b/0x90
      
      but task is already holding lock:
       (&mddev->open_mutex){+.+...}, at: [<ffffffffa055e07a>] do_md_stop+0x4a/0x4d0 [md_mod]
      
      which lock already depends on the new lock.
      
      It is suitable for 2.6.35.x
      
      Cc: <stable@kernel.org>
      Reported-by: Przemyslaw Czarnowski <przemyslaw.hawrylewicz.czarnowski@intel.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
  14. 07 Aug, 2010 2 commits
    • block: push down BKL into .open and .release · 6e9624b8
      Arnd Bergmann authored
      
      The open and release block_device_operations are currently
      called with the BKL held. In order to change that, we must
      first make sure that all drivers that currently rely
      on this have no regressions.
      
      This blindly pushes the BKL into all .open and .release
      operations for all block drivers to prepare for the
      next step. The drivers can subsequently replace the BKL
      with their own locks or remove it completely when it can
      be shown that it is not needed.
      
      The functions blkdev_get and blkdev_put are the only
      remaining users of the big kernel lock in the block
      layer, besides a few uses in the ioctl code, none
      of which need to serialize with blkdev_{get,put}.
      
      Most of what these two functions do is also under the protection
      of bdev->bd_mutex, including the actual calls to
      ->open and ->release, and the common code does not
      access any global data structures that need the BKL.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
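      The mechanical pattern applied to each driver (names here are
      placeholders) is simply to take the lock inside the method instead of
      relying on the caller:

      static int foo_open(struct block_device *bdev, fmode_t mode)
      {
              int ret;

              lock_kernel();                  /* previously taken by the caller */
              ret = foo_do_open(bdev, mode);  /* original open logic, unchanged */
              unlock_kernel();
              return ret;
      }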
    • block: unify flags for struct bio and struct request · 7b6d91da
      Christoph Hellwig authored
      
      Remove the current bio flags and reuse the request flags for the bio, too.
      This makes it easier to trace the type of I/O from the filesystem
      down to the block driver.  There were two flags in the bio that were
      missing in the requests:  BIO_RW_UNPLUG and BIO_RW_AHEAD.  Also I've
      renamed two request flags that had a superfluous RW in them.
      
      Note that the flags are in bio.h despite having the REQ_ name - as
      blkdev.h includes bio.h that is the only way to go for now.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  15. 25 Jul, 2010 8 commits
    • md/bitmap: separate out loading a bitmap from initialising the structures. · 69e51b44
      NeilBrown authored
      
      dm makes this distinction between ->ctr and ->resume, so we need to
      too.
      
      Also get the new bitmap_load to clear out the bitmap first, as this is
      most consistent with the dm suspend/resume approach
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/bitmap: clean up plugging calls. · b63d7c2e
      NeilBrown authored
      
      1/ use md_unplug in bitmap.c as we will soon be using bitmaps under
        arrays with no queue attached.
      
      2/ Don't bother plugging the queue when we set a bit in the bitmap.
         The reason for this was to encourage as many bits as possible to
         get set before we unplug and write stuff out.
         However every personality already plugs the queue after
         bitmap_startwrite, either directly (raid1/raid10) or by setting
         STRIPE_BIT_DELAY which causes the queue to be plugged later
         (raid5).
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/plug: optionally use plugger to unplug an array during resync/recovery. · 252ac522
      NeilBrown authored
      
      If an array doesn't have a 'queue' then md_do_sync cannot
      unplug it.
      In that case it will have a 'plugger', so make that available
      to the mddev, and use it to unplug the array if needed.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md/raid5: add simple plugging infrastructure. · 2ac87401
      NeilBrown authored
      
      md/raid5 uses the plugging infrastructure provided by the block layer
      and 'struct request_queue'.  However when we plug raid5 under dm there
      is no request queue so we cannot use that.
      
      So create a similar infrastructure that is much lighter weight and use
      it for raid5.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: add support for raising dm events. · 768a418d
      NeilBrown authored
      
      dm uses scheduled work to raise events to user-space.
      So allow md device to have work_structs and schedule them on an error.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: export various start/stop interfaces · 390ee602
      NeilBrown authored
      
      Export entry points for starting and stopping md arrays.
      This will be used by a module to make md/raid5 work under
      dm.
      Also stop calling md_stop_writes from md_stop, as that won't
      work well with dm - it will want to call the two separately.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: split out md_rdev_init · e8bb9a83
      NeilBrown authored
      
      This functionality will be needed separately in a subsequent patch, so
      split it into its own exported function.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: be more careful setting MD_CHANGE_CLEAN · 676e42d8
      NeilBrown authored
      
      When MD_CHANGE_CLEAN is set we might block in md_write_start.
      So we should only set it when fairly sure that something will clear
      it.
      
      There are two places where it is set so as to encourage a metadata
      update to record the progress of resync/recovery.  This should only
      be done if the internal metadata update mechanisms are in use, which
      can be tested by inspecting '->persistent'.
      Signed-off-by: NeilBrown <neilb@suse.de>
  16. 20 Jul, 2010 1 commit
  17. 23 Jun, 2010 2 commits
    • md: Don't update ->recovery_offset when reshaping an array to fewer devices. · 70fffd0b
      NeilBrown authored
      
      When an array is reshaped to have fewer devices, the reshape proceeds
      from the end of the devices to the beginning.
      
      If a device happens to be non-In_sync (which is possible but rare)
      we would normally update the ->recovery_offset as the reshape
      progresses. However that would be wrong as the recovery_offset records
      that the early part of the device is in_sync, while in fact it would
      only be the later part that is in_sync, and in any case the offset
      number would be measured from the wrong end of the device.
      
      Relatedly, if after a reshape a spare is discovered not to be
      recovered all the way to the end, do not allow spare_active
      to incorporate it in the array.
      
      This becomes relevant in the following sample scenario:
      
      A 4 drive RAID5 is converted to a 6 drive RAID6 in a combined
      operation.
      The RAID5->RAID6 conversion will cause a 5th drive to be included as a
      spare, then the 5-drive -> 6-drive reshape will effectively rebuild that
      spare as it progresses.  The 6th drive is treated as in_sync the whole
      time as there is never any case that we might consider reading from
      it, but must not because there is no valid data.
      
      If we interrupt this reshape part-way through and reverse it to return
      to a 5-drive RAID6 (or even a 4-drive RAID5), we don't want to update
      the recovery_offset - as that would be wrong - and we don't want to
      include that spare as active in the 5-drive RAID6 when the reversed
      reshape has completed, as it will still be mostly out-of-sync.
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: fix handling of array level takeover that re-arranges devices. · e93f68a1
      NeilBrown authored
      
      Most array level changes leave the list of devices largely unchanged,
      possibly causing one at the end to become redundant.
      However conversions between RAID0 and RAID10 need to renumber
      all devices (except 0).
      
      This renumbering is currently being done in the ->run method when the
      new personality takes over.  However this is too late as the common
      code in md.c might already have invalidated some of the devices if
      they had a ->raid_disk number that appeared too high.
      
      Moving it into the ->takeover method is too early as the array is
      still active at that time and wrong ->raid_disk numbers could cause
      confusion.
      
      So add a ->new_raid_disk field to mdk_rdev_s and use it to communicate
      the new raid_disk number.
      Now the common code knows exactly which devices need to be renumbered,
      and which can be invalidated, and can do it all at a convenient time
      when the array is suspended.
      It can also update some symlinks in sysfs which previously were not
      updated correctly.
      Reported-by: Maciej Trela <maciej.trela@intel.com>
      Signed-off-by: NeilBrown <neilb@suse.de>