| author | Yueyi Li <liyueyi@live.com> | 2018-12-24 07:40:07 +0000 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2019-04-17 08:38:46 +0200 |
| commit | 902eaef7781cef521dba7e0299b24e5f4b78b688 (patch) | |
| tree | 8eedaa0b27324aa908dd28bd90972e9cbe26fba2 /arch/arm64 | |
| parent | 40177a7931e0043f9f5016e370c0695e4bae6b19 (diff) | |
arm64: kaslr: Reserve size of ARM64_MEMSTART_ALIGN in linear region
[ Upstream commit c8a43c18a97845e7f94ed7d181c11f41964976a2 ]
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), the top 4K of kernel
virtual address space may be mapped to physical addresses despite being
reserved for ERR_PTR values.
Fix the randomization of the linear region so that we avoid mapping the
last page of the virtual address space.
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: liyueyi <liyueyi@live.com>
[will: rewrote commit message; merged in suggestion from Ard]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin (Microsoft) <sashal@kernel.org>
Diffstat (limited to 'arch/arm64')
-rw-r--r-- | arch/arm64/mm/init.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
```diff
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 787e27964ab9..774c3e17c798 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -450,7 +450,7 @@ void __init arm64_memblock_init(void)
 		 * memory spans, randomize the linear region as well.
 		 */
 		if (memstart_offset_seed > 0 && range >= ARM64_MEMSTART_ALIGN) {
-			range = range / ARM64_MEMSTART_ALIGN + 1;
+			range /= ARM64_MEMSTART_ALIGN;
 			memstart_addr -= ARM64_MEMSTART_ALIGN *
 					 ((range * memstart_offset_seed) >> 16);
 		}
```
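Why the one-word change matters: the old expression rounded the number of ARM64_MEMSTART_ALIGN-sized slots up, so with a worst-case 16-bit memstart_offset_seed the computed offset could consume the entire spare range, shifting the linear map right up against the top of the virtual address space, whose last 4K page is reserved for ERR_PTR values. The standalone sketch below simply replays the arithmetic of both formulas; the granule size, spare range and seed are hypothetical example values, and this is not kernel code.

```c
/*
 * Standalone sketch of the offset arithmetic in arm64_memblock_init().
 * The granule size, spare range and seed below are hypothetical example
 * values, not taken from a real system; this is not kernel code.
 */
#include <stdio.h>
#include <stdint.h>

#define ARM64_MEMSTART_ALIGN (1ULL << 30) /* assume a 1 GiB granule */

int main(void)
{
	uint64_t range = 4 * ARM64_MEMSTART_ALIGN; /* spare linear VA space */
	uint64_t seed  = 0xffff;                   /* worst-case 16-bit seed */

	/* Old formula: rounds the slot count up, so the offset can reach
	 * the full spare range and the map runs into the last page. */
	uint64_t old_slots  = range / ARM64_MEMSTART_ALIGN + 1;
	uint64_t old_offset = ARM64_MEMSTART_ALIGN * ((old_slots * seed) >> 16);

	/* New formula: rounds down, so the offset stays strictly below the
	 * spare range and the top granule is never mapped. */
	uint64_t new_slots  = range / ARM64_MEMSTART_ALIGN;
	uint64_t new_offset = ARM64_MEMSTART_ALIGN * ((new_slots * seed) >> 16);

	printf("spare range = %#llx\n", (unsigned long long)range);
	printf("old offset  = %#llx\n", (unsigned long long)old_offset);
	printf("new offset  = %#llx\n", (unsigned long long)new_offset);
	return 0;
}
```

With these example numbers the old formula yields an offset equal to the whole spare range, while the new one always leaves at least one granule at the top untouched, which is what keeps the last page unmapped regardless of the seed.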