path: root/crypto
2019-05-22  crypto: ccm - fix incompatibility between "ccm" and "ccm_base"  (Eric Biggers, 1 file, -26/+18)

commit 6a1faa4a43f5fabf9cbeaa742d916e7b5e73120f upstream.

CCM instances can be created by either the "ccm" template, which only allows choosing
the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and
cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

However, a "ccm_base" instance prevents a "ccm" instance from being registered using
the same implementations.  Nor will the instance be found by lookups of "ccm".  This
can be used as a denial of service.  Moreover, "ccm_base" instances are never tested
by the crypto self-tests, even if there are compatible "ccm" tests.

The root cause of these problems is that instances of the two templates use different
cra_names.  Therefore, fix these problems by making "ccm_base" instances set the same
cra_name as "ccm" instances, e.g. "ccm(aes)" instead of
"ccm_base(ctr(aes-generic),cbcmac(aes-generic))".

This requires extracting the block cipher name from the name of the ctr and cbcmac
algorithms.  It also requires starting to verify that the algorithms are really ctr
and cbcmac using the same block cipher, not something else entirely.  But it would be
bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so
this shouldn't break anyone in practice.

Fixes: 4a49b499dfa0 ("[CRYPTO] ccm: Added CCM mode")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: gcm - fix incompatibility between "gcm" and "gcm_base"  (Eric Biggers, 1 file, -23/+11)

commit f699594d436960160f6d5ba84ed4a222f20d11cd upstream.

GCM instances can be created by either the "gcm" template, which only allows choosing
the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and
ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)".

However, a "gcm_base" instance prevents a "gcm" instance from being registered using
the same implementations.  Nor will the instance be found by lookups of "gcm".  This
can be used as a denial of service.  Moreover, "gcm_base" instances are never tested
by the crypto self-tests, even if there are compatible "gcm" tests.

The root cause of these problems is that instances of the two templates use different
cra_names.  Therefore, fix these problems by making "gcm_base" instances set the same
cra_name as "gcm" instances, e.g. "gcm(aes)" instead of
"gcm_base(ctr(aes-generic),ghash-generic)".

This requires extracting the block cipher name from the name of the ctr algorithm.
It also requires starting to verify that the algorithms are really ctr and ghash, not
something else entirely.  But it would be bizarre if anyone were actually using
non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in
practice.

Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
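For illustration, a minimal sketch of the renaming approach described above (hedged:
identifiers and labels are illustrative, not the verbatim upstream diff). The template
checks that the inner algorithm's cra_name really has the form "ctr(<cipher>)" and
derives the instance's cra_name from the inner block cipher:

	/* Sketch: derive "gcm(aes)" from "ctr(aes)"; reject anything
	 * that is not actually a ctr instance. */
	err = -EINVAL;
	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0)
		goto out_put_ctr;

	err = -ENAMETOOLONG;
	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
		     "gcm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
		goto out_put_ctr;

Reusing the trailing ")" of the inner name (hence "gcm(%s" with no closing paren) keeps
the construction simple; the same pattern applies to ccm/ccm_base above.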
2019-05-22  crypto: crct10dif-generic - fix use via crypto_shash_digest()  (Eric Biggers, 1 file, -7/+4)

commit 307508d1072979f4435416f87936f87eaeb82054 upstream.

The ->digest() method of crct10dif-generic reads the current CRC value from the
shash_desc context.  But this value is uninitialized, causing crypto_shash_digest()
to compute the wrong result.  Fix it.

Probably this wasn't noticed before because lib/crc-t10dif.c only uses
crypto_shash_update(), not crypto_shash_digest().  Likewise, crypto_shash_digest() is
not yet tested by the crypto self-tests because those only test the ahash API which
only uses shash init/update/final.

This bug was detected by my patches that improve testmgr to fuzz algorithms against
their generic implementation.

Fixes: 2d31e518a428 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework")
Cc: <stable@vger.kernel.org> # v3.11+
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
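A hedged sketch of the shape of the fix (not necessarily the exact upstream code): a
->digest() must not depend on the descriptor context, which init() never ran on, so it
starts from the crct10dif initial value (0) directly:

	/* Sketch: compute the digest from the fixed initial CRC value
	 * instead of reading the uninitialized desc context. */
	static int chksum_digest(struct shash_desc *desc, const u8 *data,
				 unsigned int length, u8 *out)
	{
		*(__u16 *)out = crc_t10dif_generic(0, data, length);
		return 0;
	}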
2019-05-22  crypto: skcipher - don't WARN on unprocessed data after slow walk step  (Eric Biggers, 1 file, -2/+7)

commit dcaca01a42cc2c425154a13412b4124293a6e11e upstream.

skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where
the next chunk of data is processed via a bounce buffer, the algorithm says it didn't
process all bytes.  Thus it WARNs on this.

However, this can happen legitimately when the message needs to be evenly divisible
into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block
size.  For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports
messages evenly divisible into 16-byte blocks.  If, say, 17 message bytes remain but
they straddle scatterlist elements, the skcipher_walk code will take the "slow" path
and pass the algorithm all 17 bytes in the bounce buffer.  But the algorithm will
only be able to process 16 bytes, triggering the WARN.

Fix this by just removing the WARN_ON().  Returning -EINVAL, as the code already
does, is the right behavior.

This bug was detected by my patches that improve testmgr to fuzz algorithms against
their generic implementation.

Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: chacha20poly1305 - set cra_name correctly  (Eric Biggers, 1 file, -2/+2)

commit 5e27f38f1f3f45a0c938299c3a34a2d2db77165a upstream.

If the rfc7539 template is instantiated with specific implementations, e.g.
"rfc7539(chacha20-generic,poly1305-generic)" rather than
"rfc7539(chacha20,poly1305)", then the implementation names end up included in the
instance's cra_name.  This is incorrect because it then prevents all users from
allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of
chacha20 and poly1305 were selected.  Also, the self-tests aren't run on an instance
allocated in this way.

Fix it by setting the instance's cra_name from the underlying algorithms' actual
cra_names, rather than from the requested names.  This matches what other templates
do.

Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
Cc: <stable@vger.kernel.org> # v4.2+
Cc: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-22  crypto: salsa20 - don't access already-freed walk.iv  (Eric Biggers, 1 file, -1/+1)

commit edaf28e996af69222b2cb40455dbb5459c2b875a upstream.

If the user-provided IV needs to be aligned to the algorithm's alignmask, then
skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv.  But
skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally
accesses walk.iv, it's a use-after-free.

salsa20-generic doesn't set an alignmask, so currently it isn't affected by this
despite unconditionally accessing walk.iv.  However this is more subtle than desired,
and it was actually broken prior to the alignmask being removed by commit
b62b3db76f73 ("crypto: salsa20-generic - cleanup and convert to skcipher API").

Since salsa20-generic does not update the IV and does not need any IV alignment,
update it to use req->iv instead of walk.iv.

Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
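A hedged sketch of the idea (condensed, not the verbatim diff): the cipher is keyed
from req->iv, which is owned by the request and stays valid even when walk setup
fails, instead of from walk.iv:

	err = skcipher_walk_virt(&walk, req, false);

	/* req->iv is always valid here; walk.iv may point into a
	 * buffer that was freed if skcipher_walk_virt() failed. */
	salsa20_init(state, ctx, req->iv);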
2019-04-27  crypto: x86/poly1305 - fix overflow during partial reduction  (Eric Biggers, 1 file, -1/+43)

commit 678cce4019d746da6c680c48ba9e6d417803e127 upstream.

The x86_64 implementation of Poly1305 produces the wrong result on some inputs
because poly1305_4block_avx2() incorrectly assumes that when partially reducing the
accumulator, the bits carried from limb 'd4' to limb 'h0' fit in a 32-bit integer.
This is true for poly1305-generic which processes only one block at a time.  However,
it's not true for the AVX2 implementation, which processes 4 blocks at a time and
therefore can produce intermediate limbs about 4x larger.

Fix it by making the relevant calculations use 64-bit arithmetic rather than 32-bit.
Note that most of the carries already used 64-bit arithmetic, but the d4 -> h0 carry
was different for some reason.

To be safe I also made the same change to the corresponding SSE2 code, though that
only operates on 1 or 2 blocks at a time.  I don't think it's really needed for
poly1305_block_sse2(), but it doesn't hurt because it's already x86_64 code.  It
*might* be needed for poly1305_2block_sse2(), but overflows aren't easy to reproduce
there.

This bug was originally detected by my patches that improve testmgr to fuzz
algorithms against their generic implementation.  But also add a test vector which
reproduces it directly (in the AVX2 case).

Fixes: b1ccc8f4b631 ("crypto: poly1305 - Add a four block AVX2 variant for x86_64")
Fixes: c70f4abef07a ("crypto: poly1305 - Add a SSE2 SIMD variant for x86_64")
Cc: <stable@vger.kernel.org> # v4.3+
Cc: Martin Willi <martin@strongswan.org>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: testmgr - skip crc32c context test for ahash algorithms  (Eric Biggers, 1 file, -4/+10)

commit eb5e6730db98fcc4b51148b4a819fa4bf864ae54 upstream.

Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the
crypto_alloc_shash() in alg_test_crc32c() fails.  This is because cryptd(crc32c) is
an ahash algorithm, not a shash algorithm; so it can only be accessed through the
ahash API, unlike shash algorithms which can be accessed through both the ahash and
shash APIs.

As the test is testing the shash descriptor format which is only applicable to shash
algorithms, skip it for ahash algorithms.

(Note that it's still important to fix crypto self-test failures even for weird
algorithm instantiations like cryptd(crc32c) that no one would really use; in
fips_enabled mode unprivileged users can use them to panic the kernel, and also they
prevent treating a crypto self-test failure as a bug when fuzzing the kernel.)

Fixes: 8e3ee85e68c5 ("crypto: crc32c - Test descriptor context format")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: skcipher - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers, 1 file, -9/+18)

commit b1f6b4bf416b49f00f3abc49c639371cdecaaad1 upstream.

Some algorithms have a ->setkey() method that is not atomic, in the sense that
setting a key can fail after changes were already made to the tfm context.  In this
case, if a key was already set the tfm can end up in a state that corresponds to
neither the old key nor the new key.

For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of memory, then
priv::table will be left NULL.  After that, encryption with that tfm will cause a
NULL pointer dereference.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to
key multiple sub-tfms.  Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if
->setkey() fails and the algorithm requires a key, to prevent the tfm from being used
until a new key is set.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely
on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this
problem.  So these "incompletely keyed" states became theoretically accessible via
AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: f8d33fac8480 ("crypto: skcipher - prevent using skciphers without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
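A minimal sketch of the error-path change described above (hedged; condensed from the
approach, not the exact upstream diff -- the real code also exempts algorithms that
don't require a key):

	err = cipher->setkey(tfm, key, keylen);
	if (unlikely(err)) {
		/* The tfm context may now be half-updated: refuse use
		 * until a key is set successfully. */
		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
		return err;
	}
	crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
	return 0;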
2019-03-23  crypto: pcbc - remove bogus memcpy()s with src == dest  (Eric Biggers, 1 file, -10/+4)

commit 251b7aea34ba3c4d4fdfa9447695642eb8b8b098 upstream.

The memcpy()s in the PCBC implementation use walk->iv as both the source and
destination, which has undefined behavior.  These memcpy()s are actually unneeded,
because walk->iv is already used to hold the previous plaintext block XOR'd with the
previous ciphertext block.  Thus, walk->iv is already updated to its final value.

So remove the broken and unnecessary memcpy()s.

Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template")
Cc: <stable@vger.kernel.org> # v2.6.21+
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: morus - fix handling chunked inputs  (Eric Biggers, 2 files, -12/+14)

commit d644f1c8746ed24f81075480f9e9cb3777ae8d65 upstream.

The generic MORUS implementations all fail the improved AEAD tests because they
produce the wrong result with some data layouts.  The issue is that they assume that
if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a.
walk.stride), then it is the end of the data.  In fact, this can happen before the
end.  Fix them.

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: hash - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers, 2 files, -14/+32)

commit ba7d7433a0e998c902132bd47330e355a1eaa894 upstream.

Some algorithms have a ->setkey() method that is not atomic, in the sense that
setting a key can fail after changes were already made to the tfm context.  In this
case, if a key was already set the tfm can end up in a state that corresponds to
neither the old key nor the new key.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to
key multiple sub-tfms.  Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if
->setkey() fails and the algorithm requires a key, to prevent the tfm from being used
until a new key is set.

Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so ->setkey() for
those must nevertheless be atomic.  That's fine for now since only the crc32 and
crc32c algorithms set OPTIONAL_KEY, and it's not intended that OPTIONAL_KEY be used
much.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely
on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this
problem.  So these "incompletely keyed" states became theoretically accessible via
AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: aegis - fix handling chunked inputs  (Eric Biggers, 3 files, -21/+21)

commit 0f533e67d26f228ea5dfdacc8a4bdeb487af5208 upstream.

The generic AEGIS implementations all fail the improved AEAD tests because they
produce the wrong result with some data layouts.  The issue is that they assume that
if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a.
walk.stride), then it is the end of the data.  In fact, this can happen before the
end.  Fix them.

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: aead - set CRYPTO_TFM_NEED_KEY if ->setkey() fails  (Eric Biggers, 1 file, -1/+3)

commit 6ebc97006b196aafa9df0497fdfa866cf26f259b upstream.

Some algorithms have a ->setkey() method that is not atomic, in the sense that
setting a key can fail after changes were already made to the tfm context.  In this
case, if a key was already set the tfm can end up in a state that corresponds to
neither the old key nor the new key.

For example, in gcm.c, if the kzalloc() fails due to lack of memory, then the CTR
part of GCM will have the new key but GHASH will not.

It's not feasible to make all ->setkey() methods atomic, especially ones that have to
key multiple sub-tfms.  Therefore, make the crypto API set CRYPTO_TFM_NEED_KEY if
->setkey() fails, to prevent the tfm from being used until a new key is set.

[Cc stable mainly because when introducing the NEED_KEY flag I changed AF_ALG to rely
on it; and unlike in-kernel crypto API users, AF_ALG previously didn't have this
problem.  So these "incompletely keyed" states became theoretically accessible via
AF_ALG -- though, the opportunities for causing real mischief seem pretty limited.]

Fixes: dc26c17f743a ("crypto: aead - prevent using AEADs without setting key")
Cc: <stable@vger.kernel.org> # v4.16+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: ahash - fix another early termination in hash walk  (Eric Biggers, 1 file, -7/+7)

commit 77568e535af7c4f97eaef1e555bf0af83772456c upstream.

Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and "michael_mic", fail
the improved hash tests because they sometimes produce the wrong digest.  The bug is
that in the case where a scatterlist element crosses pages, not all the data is
actually hashed because the scatterlist walk terminates too early.  This happens
because the 'nbytes' variable in crypto_hash_walk_done() is assigned the number of
bytes remaining in the page, then later interpreted as the number of bytes remaining
in the scatterlist element.  Fix it.

Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: cfb - remove bogus memcpy() with src == dest  (Eric Biggers, 1 file, -4/+4)

commit 6c2e322b3621dc8be72e5c86d4fdb587434ba625 upstream.

The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the source and
destination, which has undefined behavior.  It is unneeded because walk->iv is
already used to hold the previous ciphertext block; thus, walk->iv is already updated
to its final value.  So, remove it.

Also, note that in-place decryption is the only case where the previous ciphertext
block is not directly available.  Therefore, as a related cleanup I also updated
crypto_cfb_encrypt_segment() to directly use the previous ciphertext block rather
than save it into walk->iv.  This makes it consistent with in-place encryption and
out-of-place decryption; now only in-place decryption is different, because it has to
be.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-03-23  crypto: cfb - add missing 'chunksize' property  (Eric Biggers, 2 files, -0/+31)

commit 394a9e044702e6a8958a5e89d2a291605a587a2a upstream.

Like some other block cipher mode implementations, the CFB implementation assumes
that while walking through the scatterlist, a partial block does not occur until the
end.  But the walk is incorrectly being done with a blocksize of 1, as
'cra_blocksize' is set to 1 (since CFB is a stream cipher) but no 'chunksize' is set.
This bug causes incorrect encryption/decryption for some scatterlist layouts.

Fix it by setting the 'chunksize'.  Also extend the CFB test vectors to cover this
bug as well as cases where the message length is not a multiple of the block size.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
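A one-line sketch of the kind of fix described (hedged; taken from the description,
not verified against the exact upstream diff): the template instance advertises the
underlying cipher's block size as its walk granularity, so the scatterlist walk only
hands the implementation partial blocks at the true end of the message:

	/* Walk in units of the underlying block size, even though
	 * cra_blocksize is 1 because CFB is a stream cipher. */
	inst->alg.chunksize = alg->cra_blocksize;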
2019-02-23  net: crypto set sk to NULL when af_alg_release.  (Mao Wenan, 1 file, -1/+3)

[ Upstream commit 9060cb719e61b685ec0102574e10337fa5f445ea ]

KASAN has found a use-after-free in sockfs_setattr.  The existing commit 6d8c50dcb029
("socket: close race condition between sock_close() and sockfs_setattr()") fixed a
similar issue, but it seems to ignore that the crypto module forgets to set the sk to
NULL after af_alg_release.

KASAN report details as below:

  BUG: KASAN: use-after-free in sockfs_setattr+0x120/0x150
  Write of size 4 at addr ffff88837b956128 by task syz-executor0/4186

  CPU: 2 PID: 4186 Comm: syz-executor0 Not tainted xxx + #1
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014
  Call Trace:
   dump_stack+0xca/0x13e
   print_address_description+0x79/0x330
   ? vprintk_func+0x5e/0xf0
   kasan_report+0x18a/0x2e0
   ? sockfs_setattr+0x120/0x150
   sockfs_setattr+0x120/0x150
   ? sock_register+0x2d0/0x2d0
   notify_change+0x90c/0xd40
   ? chown_common+0x2ef/0x510
   chown_common+0x2ef/0x510
   ? chmod_common+0x3b0/0x3b0
   ? __lock_is_held+0xbc/0x160
   ? __sb_start_write+0x13d/0x2b0
   ? __mnt_want_write+0x19a/0x250
   do_fchownat+0x15c/0x190
   ? __ia32_sys_chmod+0x80/0x80
   ? trace_hardirqs_on_thunk+0x1a/0x1c
   __x64_sys_fchownat+0xbf/0x160
   ? lockdep_hardirqs_on+0x39a/0x5e0
   do_syscall_64+0xc8/0x580
   entry_SYSCALL_64_after_hwframe+0x49/0xbe
  RIP: 0033:0x462589
  Code: f7 d8 64 89 02 b8 ff ff ff ff c3 66 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
  RSP: 002b:00007fb4b2c83c58 EFLAGS: 00000246 ORIG_RAX: 0000000000000104
  RAX: ffffffffffffffda RBX: 000000000072bfa0 RCX: 0000000000462589
  RDX: 0000000000000000 RSI: 00000000200000c0 RDI: 0000000000000007
  RBP: 0000000000000005 R08: 0000000000001000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb4b2c846bc
  R13: 00000000004bc733 R14: 00000000006f5138 R15: 00000000ffffffff

  Allocated by task 4185:
   kasan_kmalloc+0xa0/0xd0
   __kmalloc+0x14a/0x350
   sk_prot_alloc+0xf6/0x290
   sk_alloc+0x3d/0xc00
   af_alg_accept+0x9e/0x670
   hash_accept+0x4a3/0x650
   __sys_accept4+0x306/0x5c0
   __x64_sys_accept4+0x98/0x100
   do_syscall_64+0xc8/0x580
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

  Freed by task 4184:
   __kasan_slab_free+0x12e/0x180
   kfree+0xeb/0x2f0
   __sk_destruct+0x4e6/0x6a0
   sk_destruct+0x48/0x70
   __sk_free+0xa9/0x270
   sk_free+0x2a/0x30
   af_alg_release+0x5c/0x70
   __sock_release+0xd3/0x280
   sock_close+0x1a/0x20
   __fput+0x27f/0x7f0
   task_work_run+0x136/0x1b0
   exit_to_usermode_loop+0x1a7/0x1d0
   do_syscall_64+0x461/0x580
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Syzkaller reproducer:

  r0 = perf_event_open(&(0x7f0000000000)={0x0, 0x70, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, @perf_config_ext}, 0x0, 0x0, 0xffffffffffffffff, 0x0)
  r1 = socket$alg(0x26, 0x5, 0x0)
  getrusage(0x0, 0x0)
  bind(r1, &(0x7f00000001c0)=@alg={0x26, 'hash\x00', 0x0, 0x0, 'sha256-ssse3\x00'}, 0x80)
  r2 = accept(r1, 0x0, 0x0)
  r3 = accept4$unix(r2, 0x0, 0x0, 0x0)
  r4 = dup3(r3, r0, 0x0)
  fchownat(r4, &(0x7f00000000c0)='\x00', 0x0, 0x0, 0x1000)

Fixes: 6d8c50dcb029 ("socket: close race condition between sock_close() and sockfs_setattr()")
Signed-off-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
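A hedged sketch of the fix implied by the subject and diffstat (not verified against
the exact upstream diff): clear sock->sk after dropping the reference, so a racing
sockfs_setattr() sees a NULL pointer instead of a freed sk:

	int af_alg_release(struct socket *sock)
	{
		if (sock->sk) {
			sock_put(sock->sk);
			sock->sk = NULL;	/* don't leave a dangling sk */
		}
		return 0;
	}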
2019-02-12  crypto: aes_ti - disable interrupts while accessing S-box  (Eric Biggers, 2 files, -1/+20)

[ Upstream commit 0a6a40c2a8c184a2fb467efacfb1cd338d719e0b ]

In the "aes-fixed-time" AES implementation, disable interrupts while accessing the
S-box, in order to make cache-timing attacks more difficult.  Previously it was
possible for the CPU to be interrupted while the S-box was loaded into L1 cache,
potentially evicting the cachelines and causing later table lookups to be
time-variant.

In tests I did on x86 and ARM, this doesn't affect performance significantly.
Responsiveness is potentially a concern, but interrupts are only disabled for a
single AES block.

Note that even after this change, the implementation still isn't necessarily
guaranteed to be constant-time; see
https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion of the many
difficulties involved in writing truly constant-time AES software.  But it's valuable
to make such attacks more difficult.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
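A condensed sketch of the mechanism (hedged; the surrounding round logic is elided):
interrupts stay off for exactly one block, so the S-box cachelines cannot be evicted
mid-computation by an interrupt handler:

	unsigned long flags;

	/* Keep IRQs off so the S-box stays resident in L1 for the
	 * whole block; re-enable immediately afterwards. */
	local_irq_save(flags);
	/* ... table-based AES rounds for one 16-byte block ... */
	local_irq_restore(flags);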
2019-01-26  crypto: ecc - regularize scalar for scalar multiplication  (Vitaly Chikunov, 1 file, -4/+12)

[ Upstream commit 3da2c1dfdb802b184eea0653d1e589515b52d74b ]

ecc_point_mult is supposed to be used with a regularized scalar; otherwise it's
possible to deduce the position of the top bit of the scalar with a timing attack.
This is important when the scalar is a private key.

ecc_point_mult is already using a regular algorithm (i.e. having an operation flow
independent of the input scalar) but the regularization step is not implemented.

Arrange the scalar to always have a fixed top bit by adding a multiple of the curve
order (n).

References: The constant time regularization step is based on micro-ecc by Kenneth
MacKay and also referenced in the literature (Bernstein, D. J., & Lange, T. (2017).
Montgomery curves and the Montgomery ladder.  (Cryptology ePrint Archive; Vol.
2017/293).  s.l.: IACR.  Chapter 4.6.2.)

Signed-off-by: Vitaly Chikunov <vt@altlinux.org>
Cc: kernel-hardening@lists.openwall.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
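A hedged sketch of the regularization (identifier names follow the style of
crypto/ecc.c but are not the verbatim diff): compute both scalar+n and scalar+2n and
pick whichever keeps the bit length fixed, so the ladder always walks the same number
of bits:

	/* Regularize: add n (or 2n) so the effective scalar always has
	 * the same, fixed top bit position. */
	carry = vli_add(sk[0], scalar, curve->n, ndigits);
	vli_add(sk[1], sk[0], curve->n, ndigits);
	scalar = sk[!carry];
	num_bits = sizeof(u64) * ndigits * 8 + 1;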
2019-01-22  crypto: authenc - fix parsing key with misaligned rta_len  (Eric Biggers, 1 file, -3/+11)

commit 8f9c469348487844328e162db57112f7d347c49f upstream.

Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte 'enckeylen',
followed by an authentication key and an encryption key.
crypto_authenc_extractkeys() parses the key to find the inner keys.

However, it fails to consider the case where the rtattr's payload is longer than 4
bytes but not 4-byte aligned, and where the key ends before the next 4-byte aligned
boundary.  In this case, 'keylen -= RTA_ALIGN(rta->rta_len);' underflows to a value
near UINT_MAX.  This causes a buffer overread and crash during
crypto_ahash_setkey().

Fix it by restricting the rtattr payload to the expected size.

Reproducer using AF_ALG:

  #include <linux/if_alg.h>
  #include <linux/rtnetlink.h>
  #include <sys/socket.h>

  int main()
  {
          int fd;
          struct sockaddr_alg addr = {
                  .salg_type = "aead",
                  .salg_name = "authenc(hmac(sha256),cbc(aes))",
          };
          struct {
                  struct rtattr attr;
                  __be32 enckeylen;
                  char keys[1];
          } __attribute__((packed)) key = {
                  .attr.rta_len = sizeof(key),
                  .attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */,
          };

          fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
          bind(fd, (void *)&addr, sizeof(addr));
          setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key));
  }

It caused:

  BUG: unable to handle kernel paging request at ffff88007ffdc000
  PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0
  Oops: 0000 [#1] SMP
  CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
  RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155
  [...]
  Call Trace:
   sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321
   crypto_shash_finup+0x1a/0x30 crypto/shash.c:178
   shash_digest_unaligned+0x45/0x60 crypto/shash.c:186
   crypto_shash_digest+0x24/0x40 crypto/shash.c:202
   hmac_setkey+0x135/0x1e0 crypto/hmac.c:66
   crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66
   shash_async_setkey+0x10/0x20 crypto/shash.c:223
   crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202
   crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96
   crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62
   aead_setkey+0xc/0x10 crypto/algif_aead.c:526
   alg_setkey crypto/af_alg.c:223 [inline]
   alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256
   __sys_setsockopt+0x6d/0xd0 net/socket.c:1902
   __do_sys_setsockopt net/socket.c:1913 [inline]
   __se_sys_setsockopt net/socket.c:1910 [inline]
   __x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910
   do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290
   entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fixes: e236d4a89a2f ("[CRYPTO] authenc: Move enckeylen into key itself")
Cc: <stable@vger.kernel.org> # v2.6.25+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
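A hedged sketch of the parsing change described above (condensed; surrounding
validation elided): advance by the rtattr's actual length rather than its 4-byte
aligned length, so the subtraction cannot underflow when the key ends before the next
alignment boundary:

	keys->enckeylen = be32_to_cpu(param->enckeylen);

	key += rta->rta_len;		/* was RTA_ALIGN(rta->rta_len) */
	keylen -= rta->rta_len;

	if (keylen < keys->enckeylen)
		return -EINVAL;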
2019-01-22  crypto: authencesn - Avoid twice completion call in decrypt path  (Harsh Jain, 1 file, -1/+1)

commit a7773363624b034ab198c738661253d20a8055c2 upstream.

The authencesn template in the decrypt path unconditionally calls
aead_request_complete() after ahash_verify(), which leads to the following kernel
panic after decryption.

  [  338.539800] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
  [  338.548372] PGD 0 P4D 0
  [  338.551157] Oops: 0000 [#1] SMP PTI
  [  338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G W I 4.19.7+ #13
  [  338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10
  [  338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
  [  338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff <8b> 04 25 04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
  [  338.598547] RSP: 0018:ffff911c97803c00 EFLAGS: 00010246
  [  338.604268] RAX: 0000000000000002 RBX: ffff911c4469ee00 RCX: 0000000000000000
  [  338.612090] RDX: 0000000000000000 RSI: 0000000000000130 RDI: ffff911b87c20400
  [  338.619874] RBP: 0000000000000000 R08: ffff911b87c20498 R09: 000000000000000a
  [  338.627610] R10: 0000000000000001 R11: 0000000000000004 R12: 0000000000000000
  [  338.635402] R13: ffff911c89590000 R14: ffff911c91730000 R15: 0000000000000000
  [  338.643234] FS: 0000000000000000(0000) GS:ffff911c97800000(0000) knlGS:0000000000000000
  [  338.652047] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [  338.658299] CR2: 0000000000000004 CR3: 00000001ec20a000 CR4: 00000000000006f0
  [  338.666382] Call Trace:
  [  338.669051]  <IRQ>
  [  338.671254]  esp_input_done+0x12/0x20 [esp4]
  [  338.675922]  chcr_handle_resp+0x3b5/0x790 [chcr]
  [  338.680949]  cpl_fw6_pld_handler+0x37/0x60 [chcr]
  [  338.686080]  chcr_uld_rx_handler+0x22/0x50 [chcr]
  [  338.691233]  uldrx_handler+0x8c/0xc0 [cxgb4]
  [  338.695923]  process_responses+0x2f0/0x5d0 [cxgb4]
  [  338.701177]  ? bitmap_find_next_zero_area_off+0x3a/0x90
  [  338.706882]  ? matrix_alloc_area.constprop.7+0x60/0x90
  [  338.712517]  ? apic_update_irq_cfg+0x82/0xf0
  [  338.717177]  napi_rx_handler+0x14/0xe0 [cxgb4]
  [  338.722015]  net_rx_action+0x2aa/0x3e0
  [  338.726136]  __do_softirq+0xcb/0x280
  [  338.730054]  irq_exit+0xde/0xf0
  [  338.733504]  do_IRQ+0x54/0xd0
  [  338.736745]  common_interrupt+0xf/0xf

Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
Signed-off-by: Harsh Jain <harsh@chelsio.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-01-22  crypto: sm3 - fix undefined shift by >= width of value  (Eric Biggers, 1 file, -1/+1)

commit d45a90cb5d061fa7d411b974b950fe0b8bc5f265 upstream.

sm3_compress() calls rol32() with shift >= 32, which causes undefined behavior.
This is easily detected by enabling CONFIG_UBSAN.

Explicitly AND with 31 to make the behavior well defined.

Fixes: 4f0fc1600edb ("crypto: sm3 - add OSCCA SM3 secure hash")
Cc: <stable@vger.kernel.org> # v4.15+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
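For illustration, a self-contained sketch of a well-defined 32-bit rotate (hedged;
the helper name is illustrative): in C, shifting a u32 by 32 or more is undefined, so
the shift count is masked to 0..31 and the complementary shift is masked too so that
shift == 0 stays defined:

	static inline u32 rol32_masked(u32 word, unsigned int shift)
	{
		shift &= 31;	/* keep both shifts < the type width */
		return (word << shift) | (word >> ((32 - shift) & 31));
	}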
2019-01-09  crypto: cfb - fix decryption  (Dmitry Eremin-Solenikov, 1 file, -1/+1)

commit fa4600734b74f74d9169c3015946d4722f8bcf79 upstream.

crypto_cfb_decrypt_segment() incorrectly XOR'ed the generated keystream with the IV,
rather than with the data stream, resulting in incorrect decryption.

Test vectors will be added in the next patch.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
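A hedged sketch of the one-line change implied by the description (context lines
condensed from cfb.c-style code): CFB decryption is dst = E_k(iv) XOR ciphertext, so
the XOR operand must be the ciphertext (src), not the IV:

	crypto_cfb_encrypt_one(tfm, iv, dst);	/* dst = E_k(iv), the keystream */
	crypto_xor(dst, src, bsize);		/* was: crypto_xor(dst, iv, bsize) */
	iv = src;				/* next IV is this ciphertext block */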
2019-01-09  crypto: testmgr - add AES-CFB tests  (Dmitry Eremin-Solenikov, 3 files, -0/+88)

commit 7da66670775d201f633577f5b15a4bbeebaaa2b0 upstream.

Add AES128/192/256-CFB testvectors from NIST SP800-38A.

Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-12-13  crypto: do not free algorithm before using  (Pan Bian, 3 files, -6/+12)

commit e5bde04ccce64d808f8b00a489a1fe5825d285cb upstream.

In multiple functions, the algorithm fields are read after the algorithm's reference
is dropped through crypto_mod_put().  In that case, the algorithm memory may already
have been freed, resulting in use-after-free bugs.  This patch delays the put
operation until the algorithm is no longer used.

Fixes: 79c65d179a40 ("crypto: cbc - Convert to skcipher")
Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Fixes: 043a44001b9e ("crypto: pcbc - Convert to skcipher")
Cc: <stable@vger.kernel.org>
Signed-off-by: Pan Bian <bianpan2016@163.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-12-01  crypto: simd - correctly take reqsize of wrapped skcipher into account  (Ard Biesheuvel, 1 file, -2/+3)

[ Upstream commit 508a1c4df085a547187eed346f1bfe5e381797f1 ]

The simd wrapper's skcipher request context structure consists of a single
subrequest whose size is taken from the subordinate skcipher.  However, in
simd_skcipher_init(), the reqsize that is retrieved is not from the subordinate
skcipher but from the cryptd request structure, whose size is completely unrelated to
the actual wrapped skcipher.

Reported-by: Qian Cai <cai@gmx.us>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Qian Cai <cai@gmx.us>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2018-11-21  crypto: user - fix leaking uninitialized memory to userspace  (Eric Biggers, 1 file, -9/+9)

commit f43f39958beb206b53292801e216d9b8a660f087 upstream.

All bytes of the NETLINK_CRYPTO report structures must be initialized, since they are
copied to userspace.  The change from strncpy() to strlcpy() broke this.  As a
minimal fix, change it back.

Fixes: 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
Cc: <stable@vger.kernel.org> # v4.12+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
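A short hedged illustration of why the copy function matters here (field name follows
crypto_user.c-style reporting code): strlcpy() leaves the bytes after the NUL
terminator untouched, while strncpy() zero-pads the full destination, so no stack
garbage reaches userspace via the netlink report:

	/* strncpy() zero-pads rcipher.type to its full size; strlcpy()
	 * would leave trailing bytes uninitialized and leak them. */
	strncpy(rcipher.type, "cipher", sizeof(rcipher.type));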
2018-11-13  crypto: speck - remove Speck  (Jason A. Donenfeld, 5 files, -1084/+0)

commit 578bdaabd015b9b164842c3e8ace9802f38e7ecc upstream.

These are unused, undesired, and have never actually been used by anybody.  The
original authors of this code have changed their mind about its inclusion.  While
originally proposed for disk encryption on low-end devices, the idea was discarded
[1] in favor of something else before that could really get going.  Therefore, this
patch removes Speck.

[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Eric Biggers <ebiggers@google.com>
Cc: stable@vger.kernel.org
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-13  crypto: aegis/generic - fix for big endian systems  (Ard Biesheuvel, 1 file, -11/+9)

commit 4a34e3c2f2f48f47213702a84a123af0fe21ad60 upstream.

Use the correct __le32 annotation and accessors to perform the single round of AES
encryption performed inside the AEGIS transform.  Otherwise, tcrypt reports:

  alg: aead: Test 1 failed on encryption for aegis128-generic
  00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e
  alg: aead: Test 1 failed on encryption for aegis128l-generic
  00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28
  alg: aead: Test 1 failed on encryption for aegis256-generic
  00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-13  crypto: morus/generic - fix for big endian systems  (Ard Biesheuvel, 2 files, -17/+6)

commit 5a8dedfa3276e88c5865f265195d63d72aec3e72 upstream.

Omit the endian swabbing when folding the lengths of the assoc and crypt input
buffers into the state to finalize the tag.  This is not necessary given that the
memory representation of the state is in machine native endianness already.

This fixes an error reported by tcrypt running on a big endian system:

  alg: aead: Test 2 failed on encryption for morus640-generic
  00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b
  00000010: 21
  alg: aead: Test 2 failed on encryption for morus1280-generic
  00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee
  00000010: 5f

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-13  crypto: tcrypt - fix ghash-generic speed test  (Horia Geantă, 1 file, -0/+3)

commit 331351f89c36bf7d03561a28b6f64fa10a9f6f3a upstream.

ghash is a keyed hash algorithm, thus setkey needs to be called.  Otherwise the
following error occurs:

  $ modprobe tcrypt mode=318 sec=1
  testing speed of async ghash-generic (ghash-generic)
  tcrypt: test 0 (   16 byte blocks,   16 bytes per update,   1 updates):
  tcrypt: hashing failed ret=-126

Cc: <stable@vger.kernel.org> # 4.6+
Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash")
Tested-by: Franck Lenormand <franck.lenormand@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-11-13  crypto: lrw - Fix out-of bounds access on counter overflow  (Ondrej Mosnacek, 1 file, -1/+6)

commit fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 upstream.

When the LRW block counter overflows, the current implementation returns 128 as the
index to the precomputed multiplication table, which has 128 entries.  This patch
fixes it to return the correct value (127).

Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
Cc: <stable@vger.kernel.org> # 2.6.20+
Reported-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
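A hedged sketch of the counter-increment helper (names and limb layout are
illustrative, patterned after lrw.c; not the verbatim diff): the index of the lowest
zero bit normally selects the table entry, but when all 128 counter bits are set the
increment wraps to zero and the function must return 127, the last valid index, not
128:

	static int next_index(u32 *counter)
	{
		int i, res = 0;

		for (i = 0; i < 4; i++) {
			if (counter[i] + 1 != 0)
				return res + ffz(counter[i]++);
			counter[i] = 0;
			res += 32;
		}
		/* Full 128-bit overflow: clamp to the 128-entry table. */
		return 127;
	}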
2018-11-04  Revert "net: simplify sock_poll_wait"  (Karsten Graul, 1 file, -1/+1)

[ Upstream commit 89ab066d4229acd32e323f1569833302544a4186 ]

This reverts commit dd979b4df817e9976f18fb6f9d134d6bc4a3c317.

This broke tcp_poll for SMC fallback: An AF_SMC socket establishes an internal TCP
socket for the initial handshake with the remote peer.  Whenever the SMC connection
can not be established this TCP socket is used as a fallback.  All socket operations
on the SMC socket are then forwarded to the TCP socket.

In case of poll, the file->private_data pointer references the SMC socket because the
TCP socket has no file assigned.  This causes tcp_poll to wait on the wrong socket.

Signed-off-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-08-18  Merge tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma  (Linus Torvalds, 2 files, -5/+9)

Pull DMAengine updates from Vinod Koul:

 "This round brings couple of framework changes, a new driver and usual driver
  updates:

   - new managed helper for dmaengine framework registration

   - split dmaengine pause capability to pause and resume and allow drivers to
     report that individually

   - update dma_request_chan_by_mask() to handle deferred probing

   - move imx-sdma to use virt-dma

   - new driver for Actions Semi Owl family S900 controller

   - minor updates to intel, renesas, mv_xor, pl330 etc"

* tag 'dmaengine-4.19-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (46 commits)
  dmaengine: Add Actions Semi Owl family S900 DMA driver
  dt-bindings: dmaengine: Add binding for Actions Semi Owl SoCs
  dmaengine: sh: rcar-dmac: Should not stop the DMAC by rcar_dmac_sync_tcr()
  dmaengine: mic_x100_dma: use the new helper to simplify the code
  dmaengine: add a new helper dmaenginem_async_device_register
  dmaengine: imx-sdma: add memcpy interface
  dmaengine: imx-sdma: add SDMA_BD_MAX_CNT to replace '0xffff'
  dmaengine: dma_request_chan_by_mask() to handle deferred probing
  dmaengine: pl330: fix irq race with terminate_all
  dmaengine: Revert "dmaengine: mv_xor_v2: enable COMPILE_TEST"
  dmaengine: mv_xor_v2: use {lower,upper}_32_bits to configure HW descriptor address
  dmaengine: mv_xor_v2: enable COMPILE_TEST
  dmaengine: mv_xor_v2: move unmap to before callback
  dmaengine: mv_xor_v2: convert callback to helper function
  dmaengine: mv_xor_v2: kill the tasklets upon exit
  dmaengine: mv_xor_v2: explicitly freeup irq
  dmaengine: sh: rcar-dmac: Add dma_pause operation
  dmaengine: sh: rcar-dmac: add a new function to clear CHCR.DE with barrier
  dmaengine: idma64: Support dmaengine_terminate_sync()
  dmaengine: hsu: Support dmaengine_terminate_sync()
  ...
2018-08-16  Replace magic for trusting the secondary keyring with #define  (Yannik Sembritzki, 1 file, -1/+1)

Replace the use of a magic number that indicates that verify_*_signature() should use
the secondary keyring with a symbol.

Signed-off-by: Yannik Sembritzki <yannik@sembritzki.me>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: keyrings@vger.kernel.org
Cc: linux-security-module@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-15  Merge branch 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security  (Linus Torvalds, 1 file, -1/+1)

Pull integrity updates from James Morris:

 "This adds support for EVM signatures based on larger digests, contains a new audit
  record AUDIT_INTEGRITY_POLICY_RULE to differentiate the IMA policy rules from the
  IMA-audit messages, addresses two deadlocks due to either loading or searching for
  crypto algorithms, and cleans up the audit messages"

* 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  EVM: fix return value check in evm_write_xattrs()
  integrity: prevent deadlock during digsig verification.
  evm: Allow non-SHA1 digital signatures
  evm: Don't deadlock if a crypto algorithm is unavailable
  integrity: silence warning when CONFIG_SECURITYFS is not enabled
  ima: Differentiate auditing policy rules from "audit" actions
  ima: Do not audit if CONFIG_INTEGRITY_AUDIT is not set
  ima: Use audit_log_format() rather than audit_log_string()
  ima: Call audit_log_string() rather than logging it untrusted
2018-08-15  Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6  (Linus Torvalds, 36 files, -526/+722)

Pull crypto updates from Herbert Xu:

 "API:
   - Fix dcache flushing crash in skcipher.
   - Add hash finup self-tests.
   - Reschedule during speed tests.

  Algorithms:
   - Remove insecure vmac and replace it with vmac64.
   - Add public key verification for DH/ECDH.

  Drivers:
   - Decrease priority of sha-mb on x86.
   - Improve NEON latency/throughput on ARM64.
   - Add md5/sha384/sha512/des/3des to inside-secure.
   - Support eip197d in inside-secure.
   - Only register algorithms supported by the host in virtio.
   - Add cts and remove incompatible cts1 from ccree.
   - Add hisilicon SEC security accelerator driver.
   - Replace msm hwrng driver with qcom pseudo rng driver.

  Misc:
   - Centralize CRC polynomials"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (121 commits)
  crypto: arm64/ghash-ce - implement 4-way aggregation
  crypto: arm64/ghash-ce - replace NEON yield check with block limit
  crypto: hisilicon - sec_send_request() can be static
  lib/mpi: remove redundant variable esign
  crypto: arm64/aes-ce-gcm - don't reload key schedule if avoidable
  crypto: arm64/aes-ce-gcm - implement 2-way aggregation
  crypto: arm64/aes-ce-gcm - operate on two input blocks at a time
  crypto: dh - make crypto_dh_encode_key() make robust
  crypto: dh - fix calculating encoded key size
  crypto: ccp - Check for NULL PSP pointer at module unload
  crypto: arm/chacha20 - always use vrev for 16-bit rotates
  crypto: ccree - allow bigger than sector XTS op
  crypto: ccree - zero all of request ctx before use
  crypto: ccree - remove cipher ivgen left overs
  crypto: ccree - drop useless type flag during reg
  crypto: ablkcipher - fix crash flushing dcache in error path
  crypto: blkcipher - fix crash flushing dcache in error path
  crypto: skcipher - fix crash flushing dcache in error path
  crypto: skcipher - remove unnecessary setting of walk->nbytes
  crypto: scatterwalk - remove scatterwalk_samebuf()
  ...
2018-08-03  crypto: dh - make crypto_dh_encode_key() make robust  (Eric Biggers, 1 file, -14/+16)

Make it return -EINVAL if crypto_dh_key_len() is incorrect rather than overflowing
the buffer.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: dh - fix calculating encoded key size  (Eric Biggers, 2 files, -7/+7)

It was forgotten to increase DH_KPP_SECRET_MIN_SIZE to include 'q_size', causing an
out-of-bounds write of 4 bytes in crypto_dh_encode_key(), and an out-of-bounds read
of 4 bytes in crypto_dh_decode_key().  Fix it, and fix the lengths of the test
vectors to match this.

Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com
Fixes: e3fe0ae12962 ("crypto: dh - add public key verification test")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
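A hedged sketch of the size accounting described above (derived from the commit text,
not verified against the exact upstream diff): once the encoded secret carries four
lengths (key, p, q, g), the header size must budget for four ints, not three:

	/* was: sizeof(struct kpp_secret) + 3 * sizeof(int), which left
	 * no room for 'q_size' in the encoded header */
	#define DH_KPP_SECRET_MIN_SIZE (sizeof(struct kpp_secret) + 4 * sizeof(int))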
2018-08-03  crypto: ablkcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -31/+26)

Like the skcipher_walk and blkcipher_walk cases: scatterwalk_done() is only meant to
be called after a nonzero number of bytes have been processed, since
scatterwalk_pagedone() will flush the dcache of the *previous* page.  But in the
error case of ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
blocks, scatterwalk_done() was actually called after advancing 0 bytes.

This caused a crash ("BUG: unable to handle kernel paging request") during
'!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in
that case walk->offset == 0.

Fix it by reorganizing ablkcipher_walk_done() to skip the scatterwalk_advance() and
scatterwalk_done() if an error has occurred.

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces")
Cc: <stable@vger.kernel.org> # v2.6.35+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: blkcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -28/+26)

Like the skcipher_walk case: scatterwalk_done() is only meant to be called after a
nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush
the dcache of the *previous* page.  But in the error case of blkcipher_walk_done(),
e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually
called after advancing 0 bytes.

This caused a crash ("BUG: unable to handle kernel paging request") during
'!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in
that case walk->offset == 0.

Fix it by reorganizing blkcipher_walk_done() to skip the scatterwalk_advance() and
scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

  #include <linux/if_alg.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main()
  {
          struct sockaddr_alg addr = {
                  .salg_type = "skcipher",
                  .salg_name = "ecb(aes-generic)",
          };
          char buffer[4096] __attribute__((aligned(4096))) = { 0 };
          int fd;

          fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
          bind(fd, (void *)&addr, sizeof(addr));
          setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
          fd = accept(fd, NULL, NULL);
          write(fd, buffer, 15);
          read(fd, buffer, 15);
  }

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
Cc: <stable@vger.kernel.org> # v2.6.19+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - fix crash flushing dcache in error path  (Eric Biggers, 1 file, -26/+27)

scatterwalk_done() is only meant to be called after a nonzero number of bytes have
been processed, since scatterwalk_pagedone() will flush the dcache of the *previous*
page.  But in the error case of skcipher_walk_done(), e.g. if the input wasn't an
integer number of blocks, scatterwalk_done() was actually called after advancing 0
bytes.

This caused a crash ("BUG: unable to handle kernel paging request") during
'!PageSlab(page)' on architectures like arm and arm64 that define
ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in
that case walk->offset == 0.

Fix it by reorganizing skcipher_walk_done() to skip the scatterwalk_advance() and
scatterwalk_done() if an error has occurred.

This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

  #include <linux/if_alg.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main()
  {
          struct sockaddr_alg addr = {
                  .salg_type = "skcipher",
                  .salg_name = "cbc(aes-generic)",
          };
          char buffer[4096] __attribute__((aligned(4096))) = { 0 };
          int fd;

          fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
          bind(fd, (void *)&addr, sizeof(addr));
          setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
          fd = accept(fd, NULL, NULL);
          write(fd, buffer, 15);
          read(fd, buffer, 15);
  }

Reported-by: Liu Chao <liuchao741@huawei.com>
Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - remove unnecessary setting of walk->nbytes  (Eric Biggers, 1 file, -1/+0)

Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't make sense
because actually walk->nbytes needs to be set to the length of the first step in the
walk, which may be less than walk->total.  This is done by skcipher_walk_next()
which is called immediately afterwards.

Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(), which is a better
default value in case it's forgotten to be set later.

Therefore, remove the unnecessary assignment to walk->nbytes.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: scatterwalk - remove 'chain' argument from scatterwalk_crypto_chain()  (Eric Biggers, 3 files, -5/+5)

All callers pass chain=0 to scatterwalk_crypto_chain().  Remove this unneeded
parameter.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-03  crypto: skcipher - fix aligning block size in skcipher_copy_iv()  (Eric Biggers, 1 file, -1/+1)

The ALIGN() macro needs to be passed the alignment, not the alignmask (which is the
alignment minus 1).

Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
Cc: <stable@vger.kernel.org> # v4.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
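A one-line hedged sketch of the change (context condensed from
skcipher_copy_iv()-style code): the mask is the alignment minus 1, so the macro needs
alignmask + 1:

	unsigned aligned_bs = ALIGN(bs, alignmask + 1);	/* was ALIGN(bs, alignmask) */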
2018-08-03  crypto: tcrypt - reschedule during speed tests  (Horia Geantă, 1 file, -12/+24)

Avoid RCU stalls in the case of non-preemptible kernel and lengthy speed tests by
rescheduling when advancing from one block size to another.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
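A hedged sketch of the pattern (the loop structure is illustrative, not tcrypt's
actual code): a voluntary reschedule point between block sizes keeps a long-running
in-kernel benchmark from stalling RCU on a non-preemptible kernel:

	for (i = 0; block_sizes[i] != 0; i++) {
		run_speed_test(block_sizes[i]);	/* illustrative helper */
		/* Yield between block sizes so !CONFIG_PREEMPT kernels
		 * don't trigger RCU stall warnings during long tests. */
		cond_resched();
	}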
2018-08-03  crypto: drbg - in-place cipher operation for CTR  (Stephan Müller, 1 file, -20/+14)

The cipher implementations of the kernel crypto API favor in-place cipher operations.
Thus, switch the CTR cipher operation in the DRBG to perform in-place operations.
This is implemented by using the output buffer as input buffer and zeroizing it
before the cipher operation to implement a CTR encryption of a NULL buffer.

The speed improvement is quite visible with the following comparison using the LRNG
implementation.

Without the patch set:

        16 bytes|     12.267661 MB/s|     61338304 bytes |  5000000213 ns
        32 bytes|     23.603770 MB/s|    118018848 bytes |  5000000073 ns
        64 bytes|     46.732262 MB/s|    233661312 bytes |  5000000241 ns
       128 bytes|     90.038042 MB/s|    450190208 bytes |  5000000244 ns
       256 bytes|    160.399616 MB/s|    801998080 bytes |  5000000393 ns
       512 bytes|    259.878400 MB/s|   1299392000 bytes |  5000001675 ns
      1024 bytes|    386.050662 MB/s|   1930253312 bytes |  5000001661 ns
      2048 bytes|    493.641728 MB/s|   2468208640 bytes |  5000001598 ns
      4096 bytes|    581.835981 MB/s|   2909179904 bytes |  5000003426 ns

With the patch set:

        16 bytes |     17.051142 MB/s |     85255712 bytes |  5000000854 ns
        32 bytes |     32.695898 MB/s |    163479488 bytes |  5000000544 ns
        64 bytes |     64.490739 MB/s |    322453696 bytes |  5000000954 ns
       128 bytes |    123.285043 MB/s |    616425216 bytes |  5000000201 ns
       256 bytes |    233.434573 MB/s |   1167172864 bytes |  5000000573 ns
       512 bytes |    384.405197 MB/s |   1922025984 bytes |  5000000671 ns
      1024 bytes |    566.313370 MB/s |   2831566848 bytes |  5000001080 ns
      2048 bytes |    744.518042 MB/s |   3722590208 bytes |  5000000926 ns
      4096 bytes |    867.501670 MB/s |   4337508352 bytes |  5000002181 ns

Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
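A condensed, hedged sketch of the idea (the real drbg.c plumbing around its internal
CTR handle is more involved; names here are illustrative): CTR-encrypting a zeroized
buffer in place yields the raw keystream, so no separate input buffer is needed:

	/* CTR of an all-zero buffer == keystream; encrypt in place. */
	memset(outbuf, 0, cryptlen);
	sg_init_one(&sg, outbuf, cryptlen);
	skcipher_request_set_crypt(req, &sg, &sg, cryptlen, drbg->V);
	ret = crypto_skcipher_encrypt(req);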
2018-08-03  Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux  (Herbert Xu, 6 files, -10/+29)

Merge mainline to pick up c7513c2a2714 ("crypto/arm64: aes-ce-gcm - add missing
kernel_neon_begin/end pair").
2018-07-30  net: simplify sock_poll_wait  (Christoph Hellwig, 1 file, -1/+1)

The wait_address argument is always directly derived from the filp argument, so
remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>