author     James Bottomley <James.Bottomley@suse.de>  2010-01-25 11:42:20 -0600
committer  James Bottomley <James.Bottomley@suse.de>  2010-01-25 11:42:20 -0600
commit     9df5f74194871ebd0e51ef5ad2eca5084acaaaba (patch)
tree       e167b9ec3a7948e0706754de4a303dc018ec9817 /Documentation
parent     6b7b284958d47b77d06745b36bc7f36dab769d9b (diff)
mm: add coherence API for DMA to vmalloc/vmap areas
On Virtually Indexed architectures (which don't do automatic alias resolution
in their caches), we have to flush via the correct virtual address to prepare
pages for DMA. On some architectures (like arm) we cannot prevent the CPU from
doing data move-in along the alias (and thus giving stale read data), so we
not only have to introduce a flush API to push dirty cache lines out, but also
an invalidate API to kill inconsistent cache lines that may have moved in
before DMA changed the data.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/cachetlb.txt | 24
1 file changed, 24 insertions, 0 deletions
diff --git a/Documentation/cachetlb.txt b/Documentation/cachetlb.txt
index da42ab414c4..b231414bb8b 100644
--- a/Documentation/cachetlb.txt
+++ b/Documentation/cachetlb.txt
@@ -377,3 +377,27 @@ maps this page at its virtual address.
All the functionality of flush_icache_page can be implemented in
flush_dcache_page and update_mmu_cache. In 2.7 the hope is to
remove this interface completely.
+
+The final category of APIs is for I/O to deliberately aliased address
+ranges inside the kernel. Such aliases are set up by use of the
+vmap/vmalloc API. Since kernel I/O goes via physical pages, the I/O
+subsystem assumes that the user mapping and kernel offset mapping are
+the only aliases. This isn't true for vmap aliases, so anything in
+the kernel trying to do I/O to vmap areas must manually manage
+coherency. It must do this by flushing the vmap range before doing
+I/O and invalidating it after the I/O returns.
+
+ void flush_kernel_vmap_range(void *vaddr, int size)
+ flushes the kernel cache for a given virtual address range in
+ the vmap area. This is to make sure that any data the kernel
+ modified in the vmap range is made visible to the physical
+ page. The design is to make this area safe to perform I/O on.
+ Note that this API does *not* also flush the offset map alias
+ of the area.
+
+  void invalidate_kernel_vmap_range(void *vaddr, int size)
+	invalidates the cache for a given virtual address range in the
+	vmap area, which prevents the processor from making the cache
+	stale by speculatively reading data while the I/O is occurring
+	to the physical pages.  This is only necessary for data reads
+	into the vmap area.
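
As a rough usage sketch (not part of this patch), the fragment below shows how
a driver reading device data into a vmalloc() buffer might bracket the transfer
with the two new calls.  The device handle struct my_dev and the helper
dma_read_into_pages() are hypothetical placeholders for the driver's own DMA
submission path; the sketch assumes flush_kernel_vmap_range() and
invalidate_kernel_vmap_range() are visible via <linux/highmem.h>.

	/* Hedged sketch, not from the patch: DMA read into a vmalloc() buffer. */
	#include <linux/vmalloc.h>
	#include <linux/highmem.h>	/* flush/invalidate_kernel_vmap_range() */
	#include <linux/errno.h>

	struct my_dev;			/* hypothetical device handle */
	int dma_read_into_pages(struct my_dev *dev, void *buf, int len);	/* hypothetical */

	static int example_vmap_dma_read(struct my_dev *dev, int len)
	{
		void *buf = vmalloc(len);
		int ret;

		if (!buf)
			return -ENOMEM;

		/* Push any dirty lines for the vmap alias out to the physical pages. */
		flush_kernel_vmap_range(buf, len);

		ret = dma_read_into_pages(dev, buf, len);	/* device writes the pages */

		/*
		 * Kill any lines the CPU speculatively pulled in through the
		 * vmap alias while the transfer was in flight, so subsequent
		 * reads through buf see the data the device wrote.
		 */
		invalidate_kernel_vmap_range(buf, len);

		/* ... consume the data through buf ... */
		vfree(buf);
		return ret;
	}

Note the ordering: the flush happens before the I/O is started (so dirty CPU
data reaches the physical pages), and the invalidate happens after the device
has written the pages (so the CPU cannot return stale speculatively-fetched
data); the invalidate step is only needed for reads into the vmap area.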