author	Miaohe Lin <linmiaohe@huawei.com>	2022-06-25 17:28:10 +0800
committer	akpm <akpm@linux-foundation.org>	2022-07-03 18:08:51 -0700
commit	dd5ff79d4ab85ac0cdb5f87e8fee4c4725255c4b (patch)
tree	c63b081d50e8412ef105c312aa366940730e36d9 /mm/khugepaged.c
parent	f673bd7c2654a0e2a1ec59417dcf9b7ceae9c14c (diff)
mm/khugepaged: remove unneeded shmem_huge_enabled() check
Patch series "A few cleanup patches for khugepaged", v2.

This series contains a few cleanup patches to remove an unneeded return value, use a helper macro, fix typos and so on. More details can be found in the respective changelogs.

This patch (of 7): If we reach here, khugepaged_scan_mm_slot() has already made sure that hugepage is enabled for shmem via its call to hugepage_vma_check(). Remove this duplicated check.

Link: https://lkml.kernel.org/r/20220625092816.4856-1-linmiaohe@huawei.com
Link: https://lkml.kernel.org/r/20220625092816.4856-2-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Reviewed-by: Zach O'Keefe <zokeefe@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Howells <dhowells@redhat.com>
Cc: NeilBrown <neilb@suse.de>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/khugepaged.c')
-rw-r--r--	mm/khugepaged.c	2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 16be62d493cd..34e6b4604aa1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2145,8 +2145,6 @@ skip:
if (khugepaged_scan.address < hstart)
khugepaged_scan.address = hstart;
VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
- if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
- goto skip;
while (khugepaged_scan.address < hend) {
int ret;
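
The deleted lines are safe to drop because the same predicate has already been evaluated before the scan loop is entered. As a minimal sketch of that pattern (illustrative C only, not kernel code; `hugepage_allowed`, `scan_vma`, and `struct vma` are hypothetical stand-ins for `hugepage_vma_check()` and friends):

```c
#include <stdbool.h>

/* Hypothetical, simplified VMA: just the two flags the gate cares about. */
struct vma {
	bool is_shmem;	/* backed by shmem? */
	bool shmem_huge;	/* huge pages enabled for this shmem mapping? */
};

/* The single, up-front eligibility gate (stand-in for hugepage_vma_check()). */
static bool hugepage_allowed(const struct vma *v)
{
	/* shmem VMAs qualify only when huge pages are enabled for them */
	return !v->is_shmem || v->shmem_huge;
}

/* Returns the number of addresses scanned in [start, end). */
static int scan_vma(const struct vma *v, int start, int end)
{
	int scanned = 0;

	if (!hugepage_allowed(v))	/* checked once, before the loop */
		return 0;

	for (int addr = start; addr < end; addr++) {
		/*
		 * Before the patch, a second check equivalent to
		 * "if (!hugepage_allowed(v)) goto skip;" sat here.
		 * Since the gate above already filtered the VMA,
		 * that branch could never fire: dead code.
		 */
		scanned++;
	}
	return scanned;
}
```

An ineligible shmem VMA is rejected at the gate and never reaches the loop body, which is exactly why the in-loop re-check removed by this patch was redundant.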