1. 09 Aug, 2010 2 commits
    • simplify checks for I_CLEAR/I_FREEING · a4ffdde6
      Al Viro authored
      
      Set I_CLEAR in addition to I_FREEING, instead of replacing I_FREEING
      with it.  I_CLEAR is equivalent to I_FREEING for almost all code
      looking at either; it's there to keep track of having called
      clear_inode() exactly once per inode lifetime, at some point after
      having set I_FREEING.  I_CLEAR and I_FREEING never get set at the
      same time with the current code, so we can switch to setting i_state
      to I_FREEING | I_CLEAR instead of I_CLEAR without loss of information.
      As a result of this change, the checks become simpler and the amount
      of code that needs to know about I_CLEAR shrinks a lot.
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
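
      To make "checks become simpler" concrete, here is an editor's sketch
      (not a hunk from the commit): since clear_inode() now leaves i_state
      at I_FREEING | I_CLEAR, a test for I_FREEING alone covers both states.

      	/* before: callers had to test both flags */
      	if (inode->i_state & (I_FREEING | I_CLEAR))
      		continue;

      	/* after: I_CLEAR is only ever set together with I_FREEING,
      	 * so this single test is equivalent */
      	if (inode->i_state & I_FREEING)
      		continue;
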
    • remove inode_setattr · 1025774c
      Christoph Hellwig authored
      
      Replace inode_setattr with open-coded variants of it in all callers.  This
      moves the remaining call to vmtruncate into the filesystem methods, where it
      can be replaced with the proper truncate sequence.
      
      In a few cases it was obvious that we would never end up calling vmtruncate
      so it was left out in the opencoded variant:
      
       spufs: explicitly checks for ATTR_SIZE earlier
       btrfs,hugetlbfs,logfs,dlmfs: explicitly clear ATTR_SIZE earlier
       ufs: contains an open-coded simple_setattr + truncate that sets the file size just above
      
      In addition to that, ncpfs called inode_setattr with handcrafted iattrs,
      which allowed us to trim down the open-coded variant.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
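
      For the shape of the change, here is a minimal editor's sketch of an
      open-coded ->setattr of this era (foo_setattr is a hypothetical
      filesystem method, not code from the commit; the helper names reflect
      the VFS API around 2.6.36):

      	static int foo_setattr(struct dentry *dentry, struct iattr *attr)
      	{
      		struct inode *inode = dentry->d_inode;
      		int error;

      		/* validate the requested attribute change */
      		error = inode_change_ok(inode, attr);
      		if (error)
      			return error;

      		/* size changes: the call that inode_setattr used to hide */
      		if ((attr->ia_valid & ATTR_SIZE) &&
      		    attr->ia_size != i_size_read(inode)) {
      			error = vmtruncate(inode, attr->ia_size);
      			if (error)
      				return error;
      		}

      		/* copy the remaining attributes into the inode */
      		setattr_copy(inode, attr);
      		mark_inode_dirty(inode);
      		return 0;
      	}
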
  2. 11 Jun, 2010 2 commits
  3. 27 May, 2010 1 commit
  4. 26 May, 2010 4 commits
  5. 25 May, 2010 10 commits
  6. 21 May, 2010 1 commit
  7. 30 Mar, 2010 2 commits
    • Btrfs: kill max_extent mount option · 287a0ab9
      Josef Bacik authored
      
      As Yan pointed out, there's not much reason for all this complicated math to
      account for file extents being split up into max_extent chunks, since they are
      likely to all end up in the same leaf anyway.  Since there isn't much reason to
      use max_extent, just remove the option altogether so we have one less thing to
      test.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py

      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It is put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, reverse Christmas tree - or at the end
        if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
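
      The typical edit looks like this (an editor's illustration of the
      pattern, not a hunk from the actual patch):

      	/* before: kmalloc() compiled only because <linux/sched.h>
      	 * pulled in percpu.h, which pulled in slab.h implicitly */
      	#include <linux/sched.h>

      	/* after: the slab dependency is stated explicitly, so the file
      	 * keeps building once percpu.h stops including slab.h */
      	#include <linux/sched.h>
      	#include <linux/slab.h>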
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that they could be applied
         as a separate patch and serve as a bisection point.
      
      Given that I had only a couple of failures from the build tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  8. 15 Mar, 2010 5 commits
    • Btrfs: cache the extent state everywhere we possibly can V2 · 2ac55d41
      Josef Bacik authored
      
      This patch just goes through and fixes everybody that does
      
      lock_extent()
      blah
      unlock_extent()
      
      to use
      
      lock_extent_bits()
      blah
      unlock_extent_cached()
      
      and pass around an extent_state so we only have to do the searches once per
      function.  This gives me about a 3 MB/s boost on my random write test.  I have
      not converted some things, like the relocation and ioctl code, since they aren't
      heavily used and the relocation stuff is in the middle of being re-written.  I
      also changed clear_extent_bit() to only unset the cached state if we are
      clearing EXTENT_LOCKED and related bits, so we can do things like this
      
      lock_extent_bits()
      clear delalloc bits
      unlock_extent_cached()
      
      without losing our cached state.  I tested this thoroughly with
      LEAK_DEBUG turned on to make sure we weren't leaking extent states, and
      everything worked out fine.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
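
      Spelled out, the cached-state pattern looks roughly like this
      (editor's sketch; the argument lists are abbreviated from the btrfs
      code of this era, so treat the exact signatures as approximate):

      	struct extent_state *cached = NULL;

      	/* the lock walks the tree to find the range anyway, so have it
      	 * hand back the extent_state it found */
      	lock_extent_bits(tree, start, end, 0, &cached);

      	/* ... operate on the locked range, e.g. clear delalloc bits ... */

      	/* unlock via the cached state, skipping a second tree search */
      	unlock_extent_cached(tree, start, end, &cached, GFP_NOFS);
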
    • Btrfs: cache ordered extent when completing io · 5a1a3df1
      Josef Bacik authored
      
      When finishing io we run btrfs_dec_test_ordered_pending and then immediately
      run btrfs_lookup_ordered_extent, but btrfs_dec_test_ordered_pending does that
      lookup already, so we're searching twice when we don't have to.  This patch
      lets us pass a btrfs_ordered_extent in to btrfs_dec_test_ordered_pending, so
      if we do complete io on that ordered extent we can just use the one we found
      instead of having to do another btrfs_lookup_ordered_extent.  This took my fio
      job (with the other patch applied) from 24 MB/s to 29 MB/s.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
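
      Roughly, the caller-side pattern this enables (editor's sketch; the
      out-parameter position is approximate and finish_ordered_io is a
      hypothetical consumer, not the btrfs function):

      	struct btrfs_ordered_extent *ordered = NULL;

      	/* the dec/test helper already looked the ordered extent up
      	 * internally; let it return the result through 'ordered' instead
      	 * of forcing a second btrfs_lookup_ordered_extent() */
      	if (btrfs_dec_test_ordered_pending(inode, &ordered, start, len)) {
      		finish_ordered_io(inode, ordered);
      		btrfs_put_ordered_extent(ordered);
      	}
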
    • btrfs: fix btrfs_mkdir goto for no free objectids · 0be2e981
      Miao Xie authored
      
      btrfs_mkdir() must jump to the code that ends the transaction when
      btrfs_find_free_objectid() fails; otherwise the transaction is never ended.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
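
      The shape of the fix, as an editor's sketch (surrounding code
      abbreviated; the signature of btrfs_find_free_objectid is
      approximate):

      	err = btrfs_find_free_objectid(trans, root, dir->i_ino, &objectid);
      	if (err)
      		goto out_fail;	/* must still reach btrfs_end_transaction() */

      	/* ... allocate the inode and add the directory entry ... */

      out_fail:
      	btrfs_end_transaction(trans, root);
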
    • Btrfs: add new defrag-range ioctl. · 1e701a32
      Chris Mason authored
      
      The btrfs defrag ioctl was limited to doing the entire file.  This
      commit adds a new interface that can defrag a specific range inside
      the file.
      
      It can also force compression on the file, allowing you to selectively
      compress individual files after they were created, even when mount -o
      compress isn't turned on.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
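
      From userspace the new interface is driven through an ioctl on the
      file, with a struct describing the range.  A hedged sketch (the
      header and names below are those exported by modern kernels in
      linux/btrfs.h; at the time of this commit they lived in btrfs's
      private headers):

      	#include <string.h>
      	#include <sys/ioctl.h>
      	#include <linux/btrfs.h>

      	/* defrag and force-compress the first 1 GiB of an open file */
      	int defrag_first_gib(int fd)
      	{
      		struct btrfs_ioctl_defrag_range_args args;

      		memset(&args, 0, sizeof(args));
      		args.start = 0;
      		args.len = 1ULL << 30;
      		args.flags = BTRFS_DEFRAG_RANGE_COMPRESS;

      		return ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &args);
      	}
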
    • Btrfs: change how we mount subvolumes · 73f73415
      Josef Bacik authored
      
      This work is in preparation for being able to set a different root as the
      default mounting root.
      
      There is currently a problem with how we mount subvolumes.  We cannot currently
      mount a subvolume of a subvolume; you can only mount subvolumes/snapshots of the
      default subvolume.  So say you take a snapshot of the default subvolume and call
      it snap1, and then take a snapshot of snap1 and call it snap2, so now you have
      
      /
      /snap1
      /snap1/snap2
      
      as your available volumes.  Currently you can only mount / and /snap1;
      you cannot mount /snap1/snap2.  To fix this problem, instead of passing
      subvol=<name> you must pass in subvolid=<treeid>, where <treeid> is
      the tree id that gets spit out via the subvolume listing you get from
      the subvolume listing patches (btrfs filesystem list).  This allows us
      to mount /, /snap1 and /snap1/snap2 as the root volume.
      
      In addition to the above, we also now read the default dir item in the
      tree root to get the root key that it points to.  For now this just
      points at what has always been the default subvolume, but later on I plan
      to change it to point at whatever root you want to be the new default
      root, so you can just set the default mount and not have to mount with
      -o subvolid=<treeid>.  I tested this out with the above scenario and it
      worked perfectly.  Thanks,
      
      mount -o subvol operates inside the selected subvolid.  For example:
      
      mount -o subvol=snap1,subvolid=256 /dev/xxx /mnt
      
      /mnt will have the snap1 directory for the subvolume with id
      256.
      
      mount -o subvol=snap /dev/xxx /mnt
      
      /mnt will be the snap directory of whatever the default subvolume
      is.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  9. 05 Mar, 2010 1 commit
  10. 04 Feb, 2010 2 commits
  11. 28 Jan, 2010 3 commits
    • Btrfs: run orphan cleanup on default fs root · e3acc2a6
      Josef Bacik authored

      This patch reverts commit 6c090a11, since it introduces a problem
      where we can run orphan cleanup on a volume that can have orphan
      entries re-added.  Instead of my original fix, Yan Zheng pointed out
      that we can just revert it and run the orphan cleanup in open_ctree
      after we look up the fs_root.  I have tested this with all the tests
      that gave me problems and this patch fixes both problems.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Use correct values when updating inode i_size on fallocate · d1ea6a61
      Aneesh Kumar K.V authored
      
      commit f2bc9dd07e3424c4ec5f3949961fe053d47bc825
      Author: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Date:   Wed Jan 20 12:57:53 2010 +0530
      
          Btrfs: Use correct values when updating inode i_size on fallocate
      
          Even though we allocate more, we should update the inode i_size
          as per the arguments passed.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
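
      Conceptually the fix is about which end offset feeds the i_size
      update (editor's sketch, not the commit's diff): the allocation may
      be rounded up to block granularity, but i_size must reflect the
      caller's offset + len.

      	/* alloc_end is offset+len rounded up; using it for i_size would
      	 * expose space the caller never asked for */
      	if (offset + len > inode->i_size)
      		i_size_write(inode, offset + len);	/* not alloc_end */
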
    • Btrfs: Add mount -o compress-force · a555f810
      Chris Mason authored
      
      The default btrfs mount -o compress mode will quickly back off
      compressing a file if it notices that compression does not reduce the
      size of the data being written.  This can save considerable CPU because
      all future writes to the file go through uncompressed.
      
      But some files are both very large and have mixed data stored in
      them.  In that case, we want to add the ability to always try
      compressing data before writing it.
      
      This commit adds mount -o compress-force.  A later commit will add
      a new inode flag that does the same thing.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
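
      Usage mirrors the existing compress option, for example
      (illustrative device and mount point):

      mount -o compress-force /dev/xxx /mnt
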
  12. 17 Jan, 2010 2 commits
    • Btrfs: fix regression in orphan cleanup · 6c090a11
      Josef Bacik authored

      Currently orphan cleanup only ever gets triggered if we cross subvolumes during
      a lookup, which means that if we just mount a plain jane fs that has orphans in
      it, they will never get cleaned up.  This results in panics like

      http://www.kerneloops.org/oops.php?number=1109085

      where adding an orphan entry results in -EEXIST being returned and we panic.  In
      order to fix this, we check on lookup whether our root has had the orphan
      cleanup done, and if not go ahead and do it.  This is easily reproducible by
      running this testcase
      
      #include <sys/types.h>
      #include <sys/stat.h>
      #include <fcntl.h>
      #include <string.h>
      #include <unistd.h>
      #include <stdio.h>
      
      int main(int argc, char **argv)
      {
      	char data[4096];
      	char newdata[4096];
      	int fd1, fd2;
      
      	memset(data, 'a', 4096);
      	memset(newdata, 'b', 4096);
      
      	/* write+fsync file1, then replace it via rename with a file2
      	 * that was never fsynced; pulling the power mid-loop leaves
      	 * orphan items for the next mount to clean up */
      	while (1) {
      		int i;
      
      		fd1 = creat("file1", 0666);
      		if (fd1 < 0)
      			break;
      
      		for (i = 0; i < 512; i++)
      			write(fd1, data, 4096);
      
      		fsync(fd1);
      		close(fd1);
      
      		fd2 = creat("file2", 0666);
      		if (fd2 < 0)
      			break;
      
      		ftruncate(fd2, 4096 * 512);
      
      		for (i = 0; i < 512; i++)
      			write(fd2, newdata, 4096);
      		close(fd2);
      
      		i = rename("file2", "file1");
      		unlink("file1");
      	}
      
      	return 0;
      }
      
      then pulling the power on the box, and running the test again when the
      box comes back up.  I've tested this locally and it fixes the problem.
      Thanks to Tomas Carnecky for helping me track this down initially.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • btrfs: fix missing last-entry in readdir(3) · 406266ab
      Jan Engelhardt authored
      
      parent 49313cdac7b34c9f7ecbb1780cfc648b1c082cd7 (v2.6.32-1-g49313cd)
      commit ff48c08e1c05c67e8348ab6f8a24de8034e0e34d
      Author: Jan Engelhardt <jengelh@medozas.de>
      Date:   Wed Dec 9 22:57:36 2009 +0100
      
      Btrfs: fix missing last-entry in readdir(3)
      
      When one does a 32-bit readdir(3), the last entry of a directory is
      missing.  This is, however, not due to passing a large value to filldir,
      but seems to have to do with glibc doing telldir or something quirky.
      In any case, this patch fixes it in practice.
      Signed-off-by: Jan Engelhardt <jengelh@medozas.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  13. 17 Dec, 2009 5 commits