- 10 Jan, 2008 4 commits
-
-
Herbert Xu authored
It's better to return silently than crash and burn when someone feeds us a zero length. In particular the null digest algorithm when used as part of authenc will do that to us. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
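A minimal sketch of the guard being described, in plain C with made-up names (sketch_update and the fake walk structure are not from the kernel): when asked to process zero bytes, return before starting the scatterlist walk at all, since there may be nothing valid to map.

    #include <stddef.h>

    /* Illustrative only: the shape of a "return silently on zero length"
     * guard. A zero-byte request (e.g. from the null digest inside authenc)
     * should not start a walk that assumes at least one byte to map. */
    struct fake_walk { const unsigned char *data; size_t left; };

    static void sketch_update(struct fake_walk *walk, size_t nbytes)
    {
        if (!nbytes)
            return;                 /* nothing to hash: bail out quietly */

        /* ... map the current chunk and feed 'nbytes' bytes into the
         * compression function here (omitted in this sketch) ... */
        walk->left -= nbytes;
    }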
-
Herbert Xu authored
Unfortunately the generic chaining hasn't been ported to all architectures yet, and notably not s390. So this patch restores the chaining that we've been using previously, which does work everywhere. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The scatterwalk infrastructure is used by algorithms so it needs to move out of crypto for future users that may live in drivers/crypto or asm/*/crypto. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch converts the crypto scatterwalk code to use the generic scatterlist chaining rather than the version specific to crypto. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
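For context, a rough sketch of what the generic chaining looks like to a caller (kernel-style C using the <linux/scatterlist.h> helpers; the buffers and lengths are made up for illustration): the last slot of one scatterlist array becomes a link to a second array, and sg_next() then walks across the boundary transparently.

    #include <linux/scatterlist.h>

    /* Illustrative only: chain two scatterlist arrays and walk them as one. */
    static void chain_example(void *buf_a, void *buf_b, unsigned int len)
    {
        struct scatterlist sg1[2], sg2[1];
        struct scatterlist *sg;

        sg_init_table(sg1, 2);          /* second slot will hold the link */
        sg_set_buf(&sg1[0], buf_a, len);

        sg_init_table(sg2, 1);
        sg_set_buf(&sg2[0], buf_b, len);

        sg_chain(sg1, 2, sg2);          /* last entry of sg1 now points at sg2 */

        /* sg_next() follows the chain link, so this visits buf_a then buf_b. */
        for (sg = sg1; sg; sg = sg_next(sg))
            ;                           /* process sg_virt(sg)/sg->length here */
    }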
-
- 22 Oct, 2007 1 commit
-
-
Jens Axboe authored
Signed-off-by:
Jens Axboe <jens.axboe@oracle.com>
-
- 16 Oct, 2007 1 commit
-
-
Jens Axboe authored
It's a subsystem function, prefix it as such. Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Jens Axboe <jens.axboe@oracle.com>
-
- 10 Oct, 2007 2 commits
-
-
Herbert Xu authored
When scatterwalk is built as a module, digest.c breaks because it requires the crypto_km_types structure, which lives in scatterwalk. This patch removes the crypto_km_types structure by encoding its logic into crypto_kmap_type directly. In fact, this even saves a few bytes of code (not to mention the data structure itself) on i386, which is about the only place where it's needed. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
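A sketch of the change as the message describes it, reconstructed rather than copied from the patch (the km_type slot names are the old highmem kmap_atomic constants): pick the slot arithmetically instead of indexing a crypto_km_types[] array that had to live in the scatterwalk module.

    /* Illustrative only; relies on KM_USER1 == KM_USER0 + 1 and
     * KM_SOFTIRQ1 == KM_SOFTIRQ0 + 1 in the old km_type enum. */
    static inline enum km_type crypto_kmap_type(int out)
    {
        if (in_softirq())
            return KM_SOFTIRQ0 + out;
        return KM_USER0 + out;
    }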
-
Herbert Xu authored
This patch adds the function scatterwalk_map_and_copy, which reads or writes a chunk of data from a scatterlist at a given offset. It will be used by authenc, which needs to read/write the authentication data at the end of the cipher/plain text. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
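A hedged usage sketch of the helper as described above (tag_roundtrip and its parameters are invented for illustration; the out flag selects the copy direction): authenc can pull just the tail of the ciphertext into a flat buffer, or push a computed tag back out to the same spot.

    #include <linux/types.h>
    #include <crypto/scatterwalk.h>

    /* Illustrative only: 'sg' covers cryptlen bytes of ciphertext followed
     * by authsize bytes of authentication data. */
    static void tag_roundtrip(struct scatterlist *sg, unsigned int cryptlen,
                              u8 *tag, unsigned int authsize)
    {
        /* out = 0: copy from the scatterlist into the flat buffer */
        scatterwalk_map_and_copy(tag, sg, cryptlen, authsize, 0);

        /* ... verify or recompute the tag here ... */

        /* out = 1: copy from the flat buffer back into the scatterlist */
        scatterwalk_map_and_copy(tag, sg, cryptlen, authsize, 1);
    }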
-
- 30 Mar, 2007 3 commits
-
-
Herbert Xu authored
On platforms where flush_dcache_page is needed we're currently flushing the next page rather than the one we've just processed. This patch fixes the off-by-one error. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
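The arithmetic behind the off-by-one is easy to check in isolation. A standalone sketch (plain userspace C, 4 KiB pages assumed, not the kernel code itself): once the walk offset has been advanced to the end of a page, shifting the offset itself indexes the next page, while shifting offset - 1 indexes the page that was actually just processed and needs flushing.

    #include <stdio.h>

    #define PAGE_SHIFT 12               /* assume 4 KiB pages for the demo */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    int main(void)
    {
        /* The walk has just finished processing the first page, so its
         * offset now sits at the start of the next one. */
        unsigned int offset = PAGE_SIZE;

        printf("offset >> PAGE_SHIFT       = %u (next page - wrong)\n",
               offset >> PAGE_SHIFT);
        printf("(offset - 1) >> PAGE_SHIFT = %u (page just processed)\n",
               (offset - 1) >> PAGE_SHIFT);
        return 0;
    }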
-
Herbert Xu authored
In the scatterwalk_copychunks loop, we should be advancing by len_this_page and not nbytes. The latter is the total length. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Herbert Xu authored
In the scatterwalk_copychunks loop, we should be advancing by len_this_page and not nbytes. The latter is the total length. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
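A standalone illustration of the invariant being fixed (plain C with a made-up two-segment buffer standing in for a scatterlist): the walk advances by what was handled in the current segment, while nbytes only tracks how much of the total request is still outstanding.

    #include <stdio.h>
    #include <string.h>

    /* Two fake "pages" standing in for scatterlist segments. */
    struct segment { const char *data; size_t len; };

    static void copy_chunks(char *buf, const struct segment *seg, size_t nbytes)
    {
        for (;;) {
            /* amount available in the current segment */
            size_t len_this_page = seg->len;

            if (len_this_page > nbytes)
                len_this_page = nbytes;

            memcpy(buf, seg->data, len_this_page);

            if (nbytes == len_this_page)
                break;

            /* Advance by what we just copied, NOT by the total left. */
            buf += len_this_page;
            nbytes -= len_this_page;
            seg++;                      /* move to the next segment */
        }
    }

    int main(void)
    {
        struct segment segs[] = { { "hello ", 6 }, { "world", 5 } };
        char out[12] = { 0 };

        copy_chunks(out, segs, 11);
        printf("%s\n", out);            /* prints "hello world" */
        return 0;
    }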
-
- 20 Mar, 2007 1 commit
-
-
J. Bruce Fields authored
In the loop in scatterwalk_copychunks(), if walk->offset is zero, then scatterwalk_pagedone rounds that up to the nearest page boundary:

    walk->offset += PAGE_SIZE - 1;
    walk->offset &= PAGE_MASK;

which is a no-op in this case, so we don't advance to the next element of the scatterlist array:

    if (walk->offset >= walk->sg->offset + walk->sg->length)
        scatterwalk_start(walk, sg_next(walk->sg));

and we end up copying the same data twice. It appears that other callers of scatterwalk_{page}done first advance walk->offset, so I believe that's the correct thing to do here. This caused a bug in NFS when run with krb5p security, which would cause some writes to fail with permissions errors--for example, writes of less than 8 bytes (the des blocksize) at the start of a file. A git-bisect shows the bug was originally introduced by 5c64097a, first in 2.6.19-rc1. Signed-off-by:
"J. Bruce Fields" <bfields@citi.umich.edu> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
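The no-op described above is easy to verify standalone (plain C, 4 KiB pages assumed): rounding an offset of zero up to a page boundary with this idiom leaves it at zero, so the "advance to the next scatterlist element" test never fires.

    #include <stdio.h>

    #define PAGE_SHIFT 12               /* assume 4 KiB pages for the demo */
    #define PAGE_SIZE  (1ul << PAGE_SHIFT)
    #define PAGE_MASK  (~(PAGE_SIZE - 1))

    int main(void)
    {
        unsigned long offset = 0;       /* walk->offset before the round-up */

        offset += PAGE_SIZE - 1;
        offset &= PAGE_MASK;

        /* Still 0: the walk stays on the same scatterlist element. */
        printf("rounded offset = %lu\n", offset);
        return 0;
    }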
-
- 20 Sep, 2006 1 commit
-
-
Herbert Xu authored
This patch prepares the scatterwalk code for use by the new block cipher type. Firstly it halves the size of scatter_walk on 32-bit platforms. This is important as we allocate at least two of these objects on the stack for each block cipher operation. It also exports the symbols since the block cipher code can be built as a module. Finally there is a hack in scatterwalk_unmap that relies on progress being made. Unfortunately, for hardware crypto we can't guarantee progress to be made since the hardware can fail. So this also gets rid of the hack by not advancing the address returned by scatterwalk_map. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au>
-
- 07 Feb, 2006 1 commit
-
-
Al Viro authored
A bunch of asm/bug.h includes are both not needed (since it will get pulled in anyway) and bogus (since they are done too early). Removed. Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- 06 Jul, 2005 1 commit
-
-
Herbert Xu authored
The VIA Padlock device is able to perform much better when multiple blocks are fed to it at once. As this device offers an exceptional throughput rate it is worthwhile to optimise the infrastructure specifically for it. We shift the existing page-sized fast path down to the CBC/ECB functions. We can then replace the CBC/ECB functions with functions provided by the underlying algorithm that perform the multi-block operations. As a side effect this improves the performance of large cipher operations for all existing algorithm implementations. I've measured the gain to be around 5% for 3DES and 15% for AES. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
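A rough sketch of the interface shift being described, with hypothetical names rather than the real crypto_alg fields: instead of the framework looping over single blocks, the algorithm may supply an ECB-style routine that takes a whole run of blocks, which hardware like the Padlock can then process in one operation.

    #include <stddef.h>

    /* Hypothetical ops structure for this sketch only. */
    struct cipher_ops {
        size_t block_size;
        /* classic interface: one block at a time */
        void (*encrypt_one)(void *ctx, unsigned char *dst,
                            const unsigned char *src);
        /* multi-block interface: a whole run in one call (optional) */
        void (*ecb_encrypt)(void *ctx, unsigned char *dst,
                            const unsigned char *src, size_t nbytes);
    };

    /* The framework prefers the multi-block path when the algorithm (or the
     * hardware behind it) provides one, and falls back to looping otherwise. */
    static void ecb_walk(const struct cipher_ops *ops, void *ctx,
                         unsigned char *dst, const unsigned char *src,
                         size_t nbytes)
    {
        if (ops->ecb_encrypt) {
            ops->ecb_encrypt(ctx, dst, src, nbytes);
            return;
        }

        for (; nbytes >= ops->block_size; nbytes -= ops->block_size) {
            ops->encrypt_one(ctx, dst, src);
            dst += ops->block_size;
            src += ops->block_size;
        }
    }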
-
- 16 Apr, 2005 1 commit
-
-
Linus Torvalds authored
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-