author    Waiman Long <longman@redhat.com>          2023-08-25 12:49:47 -0400
committer Andrew Morton <akpm@linux-foundation.org> 2023-09-02 15:17:34 -0700
commit    e68d343d2720779362cb7160cb7f4bd24979b2b4
tree      f5f56525ff09ad3e80f5e2463ab9195b3a1859f1 /mm
parent    f945116e4e191cd543ecd56d9f13e6331494847c
mm/kmemleak: move up cond_resched() call in page scanning loop
Commit bde5f6bc68db ("kmemleak: add scheduling point to kmemleak_scan()")
added a cond_resched() call to the struct page scanning loop to prevent
soft lockups from happening. However, a soft lockup can still happen in
that loop in some corner cases when the pages that satisfy the
"!(pfn & 63)" check are skipped for some reason.

Fix this corner case by moving the cond_resched() check up so that it is
called unconditionally every 64 pages.
Link: https://lkml.kernel.org/r/20230825164947.1317981-1-longman@redhat.com
Fixes: bde5f6bc68db ("kmemleak: add scheduling point to kmemleak_scan()")
Signed-off-by: Waiman Long <longman@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Yisheng Xie <xieyisheng1@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm')

 mm/kmemleak.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 2918150e31bd..54c2c90d3abc 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1584,6 +1584,9 @@ static void kmemleak_scan(void)
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
 			struct page *page = pfn_to_online_page(pfn);
 
+			if (!(pfn & 63))
+				cond_resched();
+
 			if (!page)
 				continue;
 
@@ -1594,8 +1597,6 @@ static void kmemleak_scan(void)
 			if (page_count(page) == 0)
 				continue;
 			scan_block(page, page + 1, NULL);
-			if (!(pfn & 63))
-				cond_resched();
 		}
 	}
 	put_online_mems();