
Commit c19caad

jchu314atgithub authored and SherryYang1 committed
mm: make page_mapped_in_vma() hugetlb walk aware
When a process consumes a UE in a page, the memory failure handler attempts to collect information for a potential SIGBUS. If the page is an anonymous page, page_mapped_in_vma(page, vma) is invoked in order to

  1. retrieve the vaddr from the process' address space,
  2. verify that the vaddr is indeed mapped to the poisoned page,

where 'page' is the precise small page with the UE.

It's been observed that when poison is injected into a non-head subpage of an anonymous hugetlb page, no SIGBUS shows up, while injecting into the head page produces a SIGBUS. The cause is that hugetlb_walk() returns a valid pmd entry (on x86), but check_pte() detects a mismatch between the head page per the pmd and the input subpage. Thus the vaddr is considered not mapped to the subpage and the process is not collected for SIGBUS purposes. This is the calling stack:

  collect_procs_anon
    page_mapped_in_vma
      page_vma_mapped_walk
        hugetlb_walk
          huge_pte_lock
        check_pte

check_pte()'s header comment says that it "check[s] if [pvmw->pfn, @pvmw->pfn + @pvmw->nr_pages) is mapped at the @pvmw->pte", but in practice it works only if pvmw->pfn is the pfn of the head page at pvmw->pte. In hindsight, some pvmw->pte entries can point to a hugepage of some sort, so it makes sense to make check_pte() work for hugepages.

Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Jane Chu <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Kirill A. Shuemov <[email protected]>
Cc: linmiaohe <[email protected]>
Cc: Matthew Wilcow (Oracle) <[email protected]>
Cc: Peter Xu <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
(cherry picked from commit 442b1ec)

Conflicts:
	mm/page_vma_mapped.c
Conflict due to lack of upstream commits
  9651eea ("mm: correct stale comment of function check_pte")
  2aff7a4 ("mm: Convert page_vma_mapped_walk to work on PFNs")
  8f0b747 ("mm/page_vma_mapped.c: use helper function huge_pte_lock")
These are not backported because #1 and #3 are trivial and #2 involves more code than the issue this patch addresses. The change here in the backport works in the same spirit with minimal impact.

Orabug: 37956589
Signed-off-by: Jane Chu <[email protected]>
Reviewed-by: William Roche <[email protected]>
Signed-off-by: Vijayendra Suman <[email protected]>
(cherry picked from commit 9364f96)

Conflicts:
	mm/page_vma_mapped.c
Minor conflict due to lack of upstream commit 5b8d6e3 ("mm/page_vma_mapped.c: explicitly compare pfn for normal, hugetlbfs and THP page")

Orabug: 38146326
Signed-off-by: Larry Bassel <[email protected]>
Reviewed-by: Jane Chu <[email protected]>
Signed-off-by: Sherry Yang <[email protected]>
1 parent 9c46862 commit c19caad

File tree

1 file changed (+3, -1)

mm/page_vma_mapped.c

Lines changed: 3 additions & 1 deletion
@@ -108,7 +108,9 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_in_hpage(pvmw->page, pfn);
+	if (unlikely(PageHuge(pvmw->page)))
+		return pfn_in_hpage(compound_head(pvmw->page), pfn);
+	return pfn_in_hpage((pvmw->page), pfn);
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
