author	David S. Miller <davem@sunset.davemloft.net>	2006-08-28 00:33:03 -0700
committer	David S. Miller <davem@sunset.davemloft.net>	2006-08-29 21:23:31 -0700
commit	47f2c3604f47579ac5c173f8b402dc6cd8e2e8fa (patch)
tree	e6801f2664730e13019dd0e23e71ac50c898ca88 /arch/sparc64
parent	dc709bd190c130b299ac19d596594256265c042a (diff)
[SPARC64]: Fix X server hangs due to large pages.
This problem was introduced by changeset
14778d9072e53d2171f66ffd9657daff41acfaed
Unlike the hugetlb code paths, the normal fault code is not set up to
correctly propagate PTE changes for large page sizes, such as the ones
we make for I/O mappings in io_remap_pfn_range().
It is absolutely necessary to update all sub-PTEs of a large-page
mapping on a fault. Adding special handling for this would add
considerable complexity to tlb_batch_add(). So let's just side-step
the issue and forcefully dirty any writable PTEs created by
io_remap_pfn_range().
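For illustration, here is a standalone toy sketch of the idea, not kernel
code: a PTE is modeled as a plain bitmask, and any writable entry has its
dirty bit forced up front, so no later write fault is needed to touch the
sub-PTEs of a large-page mapping. The flag values and helpers below only
mimic the real pte_write()/pte_mkdirty() helpers from the sparc64
page-table headers.

/* Toy user-space model; the flag values are illustrative, not the
 * real sparc64 PTE layout. */
#include <stdio.h>

typedef unsigned long pte_t;

#define _PAGE_VALID	0x1UL
#define _PAGE_WRITE	0x2UL
#define _PAGE_DIRTY	0x4UL

static int pte_write(pte_t pte)     { return (pte & _PAGE_WRITE) != 0; }
static pte_t pte_mkdirty(pte_t pte) { return pte | _PAGE_DIRTY; }

int main(void)
{
	pte_t entry = _PAGE_VALID | _PAGE_WRITE;	/* writable I/O mapping */

	/* Mirror of the change in io_remap_pte_range(): pre-dirty
	 * writable PTEs so the fault path never has to set the dirty
	 * bit on each sub-PTE of a large mapping later. */
	if (pte_write(entry))
		entry = pte_mkdirty(entry);

	printf("entry = %#lx, dirty = %d\n", entry,
	       (entry & _PAGE_DIRTY) != 0);
	return 0;
}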
The only other real option would be to disable the large-PTE code of
io_remap_pfn_range(), and we really don't want to do that.
Much thanks to Mikael Pettersson for tracking down this problem and
testing debug patches.
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'arch/sparc64')
-rw-r--r--	arch/sparc64/mm/generic.c	2
1 files changed, 2 insertions, 0 deletions
diff --git a/arch/sparc64/mm/generic.c b/arch/sparc64/mm/generic.c
index 8cb06205d26..af9d81db0b3 100644
--- a/arch/sparc64/mm/generic.c
+++ b/arch/sparc64/mm/generic.c
@@ -69,6 +69,8 @@ static inline void io_remap_pte_range(struct mm_struct *mm, pte_t * pte,
 		} else
 			offset += PAGE_SIZE;
 
+		if (pte_write(entry))
+			entry = pte_mkdirty(entry);
 		do {
 			BUG_ON(!pte_none(*pte));
 			set_pte_at(mm, address, pte, entry);