author	David Woodhouse <dwmw@amazon.co.uk>	2023-06-28 10:55:03 +0100
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2023-07-01 13:16:22 +0200
commit	42a018a796d1eedb0d7c38b2778ef3dbf05aca36
tree	214174a473c66b8c5c2668279cc85424018f6d21 /mm
parent	a149174ff8bbdd8d5a0a591feaa3668ea4f43387
mm/mmap: Fix error return in do_vmi_align_munmap()
commit 6c26bd4384da24841bac4f067741bbca18b0fb74 upstream.
If mas_store_gfp() in the gather loop failed, the 'error' variable that
ultimately gets returned was not being set. In many cases, its original
value of -ENOMEM was still in place, and that was fine. But if VMAs had
been split at the start or end of the range, then 'error' could be zero.
Change to the 'error = foo(); if (error) goto …' idiom to fix the bug.
Also clean up a later case which avoided the same bug by *explicitly*
setting error = -ENOMEM right before calling the function that might
return -ENOMEM.
In a final cosmetic change, move the 'Point of no return' comment to
*after* the goto. That comment has been in the wrong place since the
preallocation was removed and this new error path was added.
Fixes: 606c812eb1d5 ("mm/mmap: Fix error path in do_vmi_align_munmap()")
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Cc: stable@vger.kernel.org
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
 mm/mmap.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/mmap.c b/mm/mmap.c
index d82a95956def..f031fa267913 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2404,7 +2404,8 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 			break;
 		}
 		mas_set_range(&mas_detach, next->vm_start, next->vm_end - 1);
-		if (mas_store_gfp(&mas_detach, next, GFP_KERNEL))
+		error = mas_store_gfp(&mas_detach, next, GFP_KERNEL);
+		if (error)
 			goto munmap_gather_failed;
 		if (next->vm_flags & VM_LOCKED)
 			locked_vm += vma_pages(next);
@@ -2456,6 +2457,7 @@ do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 		mas_set_range(mas, start, end - 1);
 	}
 #endif
+	/* Point of no return */
 	mas_store_prealloc(mas, NULL);

 	mm->locked_vm -= locked_vm;