1. 09 Mar, 2016 3 commits
  2. 22 Feb, 2016 32 commits
  3. 21 Feb, 2016 5 commits
    • a13d6a7a
    • Revert "staging: binder: add vm_fault handler" · dd99f166
      Nikola Majkić authored
      This reverts commit 564ebe5c.
    • Revert "ion: Skip zeroing on secure buffers" · dadfc25c
      Nikola Majkić authored
      This reverts commit e03742da.
    • sync: don't block the flusher thread waiting on IO · 4f7d4769
      Dave Chinner authored
      
      When sync does its WB_SYNC_ALL writeback, it issues data IO and
      then immediately waits for IO completion. This is done in the
      context of the flusher thread, and hence completely ties up the
      flusher thread for the backing device until all the dirty inodes
      have been synced. On filesystems that are dirtying inodes constantly
      and quickly, this means the flusher thread can be tied up for
      minutes per sync call and hence badly affect system level write IO
      performance as the page cache cannot be cleaned quickly.
      
      We already have a wait loop for IO completion for sync(2), so cut
      this out of the flusher thread and delegate it to wait_sb_inodes().
      Hence we can do rapid IO submission, and then wait for it all to
      complete.
      
      Effect of sync on fsmark before the patch:
      
      FSUse%        Count         Size    Files/sec     App Overhead
      .....
           0       640000         4096      35154.6          1026984
           0       720000         4096      36740.3          1023844
           0       800000         4096      36184.6           916599
           0       880000         4096       1282.7          1054367
           0       960000         4096       3951.3           918773
           0      1040000         4096      40646.2           996448
           0      1120000         4096      43610.1           895647
           0      1200000         4096      40333.1           921048
      
      And a single sync pass took:
      
        real    0m52.407s
        user    0m0.000s
        sys     0m0.090s
      
      After the patch, there is no impact on fsmark results, and each
      individual sync(2) operation run concurrently with the same fsmark
      workload takes roughly 7s:
      
        real    0m6.930s
        user    0m0.000s
        sys     0m0.039s
      
      IOWs, sync is 7-8x faster on a busy filesystem and does not have an
      adverse impact on ongoing async data write operations.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      
      Change-Id: I9e55d65f5ecb2305497711d4688f0647d9346035
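      The "submit all IO first, then wait once for completion" split described in
      the commit above can be illustrated outside the kernel. The sketch below is a
      hypothetical userspace analogy, not the patched flusher-thread code: it writes
      a set of files and compares fsync()ing after every write (waiting on each IO
      inline, as the old WB_SYNC_ALL path effectively tied up the flusher thread)
      with writing everything first and only waiting at the end, in the spirit of
      delegating the wait to wait_sb_inodes(). File names, sizes and counts are
      arbitrary.

      /*
       * Hypothetical userspace analogy of the submit-then-wait pattern from the
       * commit above; this is not the kernel flusher code.
       * Build: cc -O2 sync_pattern.c -o sync_pattern
       */
      #include <fcntl.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      #define NFILES 64
      #define BUFSZ  (64 * 1024)

      static void write_file(const char *name, const char *buf, int wait_inline)
      {
          int fd = open(name, O_CREAT | O_TRUNC | O_WRONLY, 0644);

          if (fd < 0) {
              perror("open");
              exit(1);
          }
          if (write(fd, buf, BUFSZ) != BUFSZ) {
              perror("write");
              exit(1);
          }
          /* Old style: block waiting for this IO before issuing the next one. */
          if (wait_inline && fsync(fd) < 0)
              perror("fsync");
          close(fd);
      }

      int main(void)
      {
          static char buf[BUFSZ];
          char name[64];
          int i;

          memset(buf, 'x', sizeof(buf));

          /* Variant 1: issue IO and wait for it immediately, file by file. */
          for (i = 0; i < NFILES; i++) {
              snprintf(name, sizeof(name), "inline-%d.dat", i);
              write_file(name, buf, 1);
          }

          /* Variant 2: issue all the IO first ... */
          for (i = 0; i < NFILES; i++) {
              snprintf(name, sizeof(name), "batched-%d.dat", i);
              write_file(name, buf, 0);
          }
          /* ... then wait for completion once, after submission is finished. */
          sync();

          return 0;
      }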
    • writeback: Fix occasional slow sync(1) · b445dc9e
      Jan Kara authored
      
      In the case when the system contains no dirty pages, wakeup_flusher_threads()
      will submit WB_SYNC_NONE writeback for 0 pages, so wb_writeback() exits
      immediately without doing anything. Thus sync(1) will write all the
      dirty inodes in a WB_SYNC_ALL writeback pass, which is slow.
      
      Fix the problem by using get_nr_dirty_pages() in
      wakeup_flusher_threads() instead of calculating the number of dirty pages
      manually. That function also takes the number of dirty inodes into account.
      
      Change-Id: I458027ae08d9a5a93202a7b97ace1f8da7a18a07
      CC: stable@vger.kernel.org
      Reported-by: Paul Taysom <taysom@chromium.org>
      Signed-off-by: default avatarJan Kara <jack@suse.cz>
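      A minimal sketch of the reasoning in the commit above, using made-up counters
      rather than the kernel's real accounting: when the work estimate for the
      WB_SYNC_NONE pass is computed from dirty pages alone, a system with dirty
      inodes but zero dirty pages requests 0 pages of writeback, so the async pass
      does nothing and the later WB_SYNC_ALL pass is left with all the work. A
      get_nr_dirty_pages()-style estimate that also counts dirty inodes avoids
      that. The counter values and helper names below (other than the function the
      commit itself names) are illustrative mock-ups, not kernel code.

      /*
       * Illustrative model of the writeback work estimate discussed above.
       * The counters and helpers are mock-ups, not the kernel's accounting.
       * Build: cc -O2 work_estimate.c -o work_estimate
       */
      #include <stdio.h>

      /* Pretend system state: nothing page-dirty, but some inodes are dirty. */
      static unsigned long nr_dirty_pages;        /* e.g. file-backed dirty pages */
      static unsigned long nr_unstable_pages;     /* e.g. unstable NFS pages */
      static unsigned long nr_dirty_inodes = 128; /* metadata-only dirtying */

      /* Old estimate: dirty pages only. */
      static unsigned long pages_only(void)
      {
          return nr_dirty_pages + nr_unstable_pages;
      }

      /* New estimate, in the spirit of get_nr_dirty_pages(): include inodes. */
      static unsigned long pages_and_inodes(void)
      {
          return nr_dirty_pages + nr_unstable_pages + nr_dirty_inodes;
      }

      int main(void)
      {
          /* With the old estimate the WB_SYNC_NONE pass gets 0 pages of work
           * and bails out immediately, leaving everything to WB_SYNC_ALL. */
          printf("old estimate: %lu pages -> async pass %s\n",
                 pages_only(), pages_only() ? "does work" : "skipped");

          /* With the inode-aware estimate the async pass has work to do. */
          printf("new estimate: %lu pages -> async pass %s\n",
                 pages_and_inodes(),
                 pages_and_inodes() ? "does work" : "skipped");

          return 0;
      }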