path: root/drivers/gpu/drm
Age  Commit message  (Author, Files, Lines)
2015-06-08  drm/exynos: add LPD: Low Power Display mechanism support  [tizen_LPD]  (Inki Dae, 11 files, -2/+297)
The purpose of this mechanism is to transfer only the framebuffer region updated by the X server to the display panel, to save power consumed by display-related DMA devices. Change-Id: I2589c617d817833011e761c9c8835e287ff2fb7c Signed-off-by: Inki Dae <>
2015-03-04  drm/exynos: gsc: always use hw buffer 0 until queue management get fixed  (Marek Szyprowski, 1 file, -12/+12)
Buffer sequence selection is broken and must be fixed. For the time being always queue buffers for hw id 0, because hardware always operates on the first src and dst buffer. This fixes IOMMU faults and makes the driver usable from userspace. Suggested-by: Andrzej Hajda <> Signed-off-by: Marek Szyprowski <> Change-Id: I46f43a5ad8b714a78bad7383bc5e532bf5015ecd
2015-02-26  drm/exynos/ipp: Validate buffer enqueue requests  (Beata Michalska, 1 file, -1/+38)
As for now there is no validation of incoming buffer enqueue requests as far as the gem buffers are concerned. This might lead to some undesired cases when the driver tries to operate on invalid buffers (i.e. with no valid gem object handle). Add some basic checks to rule out those potential issues. Change-Id: I117b5c566169d33fd46646068f835f48b333da73 Signed-off-by: Beata Michalska <>
2015-02-11  drm/exynos: debugfs: add debugfs interface and gem_info node  [submit/tizen/20150212.024415 accepted/tizen/wearable/20150213.030440 accepted/tizen/tv/20150213.025935 accepted/tizen/mobile/20150213.030609 accepted/tizen/common/20150212.145007]  (YoungJun Cho, 5 files, -0/+133)
The memps tool requires gem_info with gem names to analyze graphics (video) shared memory, so this adds a gem_info node with a debugfs interface. Change-Id: Ia923aa53c1508174e874d36001f53b0c42daac21 Signed-off-by: YoungJun Cho <>
2015-02-12  drm: use common drm_gem_dmabuf_release in i915/exynos drivers  (Daniel Vetter, 3 files, -35/+4)
Note that this is slightly tricky since both drivers store their native objects in dma_buf->priv. But both also embed the base drm_gem_object at the first position, so the implicit cast is ok. To use the release helper we need to export it, too. Change-Id: I37e9ffec79c90304d444ae9b6c47346f125feb49 Cc: Inki Dae <> Cc: Intel Graphics Development <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> [This patch is necessary for commit 7f663e197afa drm/prime: proper locking+refcounting for obj->dma_buf link] Signed-off-by: Seung-Woo Kim <>
2015-02-02  exynos/drm: g2d: return 0 instead of ret in g2d_map_cmdlist_gem  [submit/tizen/20150202.023401 accepted/tizen/wearable/20150202.095817 accepted/tizen/tv/20150202.095715 accepted/tizen/mobile/20150202.100101 accepted/tizen/common/20150202.085553]  (Chanho Park, 1 file, -1/+1)
This patch fixes the return value of g2d_map_cmdlist_gem. Since dmabuf_sync was applied, ret has been returned even on the success path, which is the wrong value when everything is successful. Change-Id: I0ddb735b8f894ec065ee90865d0a8b45bf892b8e Reported-by: Voloshynov Sergii <> Signed-off-by: Voloshynov Sergii <> Signed-off-by: Chanho Park <>
2015-01-04  drm: delete unconsumed pending event list in drm_events_release  (YoungJun Cho, 1 file, -1/+3)
When there are unconsumed pending events, the events are destroyed by calling the destroy callback, but the event list entries remain because there is no list_del(). It is possible that a page flip request is handled after drm_events_release() is called and before drm_fb_release(). In this case a drm_pending_event remains unfreed. So the exynos driver checks again to remove it in its post close routine. But since file_priv->event_list contains undeleted entries, this can cause an oops from accessing invalid memory. Signed-off-by: YoungJun Cho <> Signed-off-by: Kyungmin Park <> Signed-off-by: Dave Airlie <> Change-Id: I25a471f4f4929150542eb6273c7673b9f44936b6 [back-ported from mainline to fix use after free issue] Signed-off-by: Seung-Woo Kim <>
2014-12-17  drm/exynos: add support for exynos5420 mixer  (Rahul Sharma, 2 files, -12/+39)
Add support for exynos5420 mixer IP in the drm mixer driver. Signed-off-by: Rahul Sharma <> Acked-by: Seung-Woo Kim <> Reviewed-by: Tomasz Figa <> Signed-off-by: Inki Dae <>
2014-12-17  drm/exynos: enable support for exynos5420 hdmi device  (Rahul Sharma, 1 file, -1/+167)
Enable hdmi support for the exynos5420 hdmiphy. Add a compatible string in the of_match table. Also add hdmiphy configuration values for exynos5420. Signed-off-by: Rahul Sharma <> Signed-off-by: Shirish S <> Signed-off-by: Inki Dae <>
2014-12-17  drm/exynos: add support for apb mapped phys in hdmi driver  (Rahul Sharma, 2 files, -73/+109)
Previous SoCs have HDMI PHYs which are accessible through dedicated i2c lines. Newer SoCs have APB-mapped HDMI PHYs. The HDMI driver is modified to support APB-mapped PHYs. Signed-off-by: Rahul Sharma <> Signed-off-by: Inki Dae <>
2014-11-25  drm/exynos: add dmabuf sync support for page flip  (Inki Dae, 1 file, -0/+74)
This patch adds dmabuf sync support for page flip. With this patch, a dma buffer that is shared with the CPU or other DMA devices and accessed by the display or HDMI controllers can be synchronized implicitly. Change-Id: I7bd88b293d4d99b87488c0c1c8b07cb72acfb5e6 Signed-off-by: Inki Dae <>
2014-11-25  drm/exynos: remove all codes relevant to dmabuf-sync  [submit/tizen/20141124.161713 accepted/tizen/common/20141124.171414]  (Inki Dae, 4 files, -183/+0)
This patch removes the existing code relevant to dmabuf-sync. New code for dmabuf-sync support will be added later. Change-Id: I1f794f95941519d8ef791f6fda01ab5f1e32807a Signed-off-by: Inki Dae <>
2014-11-21  dma-buf/dmabuf-sync: add dmabuf sync framework  (Inki Dae, 3 files, -14/+13)
The DMA Buffer synchronization API provides a buffer synchronization mechanism based on the DMA buffer sharing mechanism[1] and the dma-fence and reservation frameworks[2]; i.e., buffer access control between CPU and DMA, and easy-to-use interfaces for device drivers and user applications. This API can be used for all dma devices using system memory as dma buffers, especially for most ARM based SoCs. For more details, please refer to Documentation/dma-buf-sync.txt [1] [2] Change-Id: I3b2084a3c331fc06992fa8d2a4c71378e88b10b5 Signed-off-by: Inki Dae <>
2014-11-21  local: drm/exynos: fix dmabuf variable name  (Chanho Park, 2 files, -3/+3)
Change-Id: Ie106e49107d687d053259fa35f889374a8cc0924 Signed-off-by: Chanho Park <>
2014-11-21  gpu/drm: fix compile error since backported  (Chanho Park, 2 files, -51/+1)
Change-Id: I5c9a62578057b164898c8f7880d0566e813dba65 Signed-off-by: Chanho Park <>
2014-11-21  drm/vma: add access management helpers  (David Herrmann, 2 files, -0/+156)
The VMA offset manager uses a device-global address-space. Hence, any user can currently map any offset-node they want. They only need to guess the right offset. If we wanted per open-file offset spaces, we'd either need VM_NONLINEAR mappings or multiple "struct address_space" trees. As neither really scales, we implement access management in the VMA manager itself. We use an rb-tree to store open-files for each VMA node. On each mmap call, GEM, TTM or the drivers must check whether the current user is allowed to map this file. We add a separate lock for each node as there is no generic lock available for the caller to protect the node easily. As we currently don't know whether an object may be used for mmap(), we have to do access management for all objects. If it turns out to slow down handle creation/deletion significantly, we can optimize it in several ways:
- Most times only a single filp is added per bo, so we could use a static "struct file *main_filp" which is checked/added/removed first before we fall back to the rbtree+drm_vma_offset_file. This could even be done lockless with rcu.
- Let user-space pass a hint whether mmap() should be supported on the bo and avoid access-management if not.
- .. there are probably more ideas once we have benchmarks ..
v2: add drm_vma_node_verify_access() helper
Signed-off-by: David Herrmann <> Signed-off-by: Dave Airlie <>
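The per-node access list described above can be sketched as a toy user-space model (all names are illustrative, not the kernel's; the kernel uses an rb-tree and real struct file pointers, a linked list keeps the sketch short):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model: each vma node keeps the set of files allowed to mmap it. */
struct vma_file {
    const void *filp;        /* stands in for struct file * */
    struct vma_file *next;
};

struct vma_node {
    struct vma_file *files;  /* kernel: an rb-tree of open-files */
};

/* Grant a file access to the node (kernel: done at handle creation). */
static int vma_node_allow(struct vma_node *node, const void *filp)
{
    struct vma_file *vf = malloc(sizeof(*vf));
    if (!vf)
        return -1;
    vf->filp = filp;
    vf->next = node->files;
    node->files = vf;
    return 0;
}

/* mmap-time check: may this file map the node? */
static int vma_node_is_allowed(const struct vma_node *node, const void *filp)
{
    const struct vma_file *vf;
    for (vf = node->files; vf; vf = vf->next)
        if (vf->filp == filp)
            return 1;
    return 0;
}
```

With this shape, guessing another client's offset no longer grants a mapping: the lookup still succeeds, but the access check fails for files never added to the node.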
2014-11-21  drm/mm: add "best_match" flag to drm_mm_insert_node()  (David Herrmann, 6 files, -36/+30)
Add a "best_match" flag similar to the drm_mm_search_*() helpers so we can convert TTM to use them in follow up patches. We can also inline the non-generic helpers and move them into the header to allow compile-time optimizations. To make calls to drm_mm_{search,insert}_node() more readable, this converts the boolean argument to a flagset. There are pending patches that add additional flags for top-down allocators and more. v2: - use flag parameter instead of boolean "best_match" - convert *_search_free() helpers to also use flags argument Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/i915/i915_gem.c Change-Id: I77640db74616de3c9ae874531f71bbd81b89d5fa
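The boolean-to-flagset conversion the commit describes can be sketched like this (illustrative names, not the kernel's actual drm_mm definitions):

```c
#include <assert.h>

/* Instead of a boolean "best_match" argument, callers pass a flag set.
 * New allocator behaviors (e.g. top-down) become new bits rather than
 * additional boolean parameters. Names here are made up for illustration. */
enum mm_search_flags {
    MM_SEARCH_DEFAULT    = 0,
    MM_SEARCH_BEST_MATCH = 1 << 0,
    /* pending patches could add e.g. MM_SEARCH_TOP_DOWN = 1 << 1 */
};

/* A search helper only needs to test the bit it cares about. */
static int search_uses_best_match(enum mm_search_flags flags)
{
    return (flags & MM_SEARCH_BEST_MATCH) != 0;
}
```

The call sites also become self-documenting: `insert(node, MM_SEARCH_BEST_MATCH)` reads better than `insert(node, true)`.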
2014-11-21  drm/vma: provide drm_vma_node_unmap() helper  (David Herrmann, 2 files, -10/+2)
Instead of unmapping the nodes in TTM and GEM users manually, we provide a generic wrapper which does the correct thing for all vma-nodes. v2: remove bdev->dev_mapping test in ttm_bo_unmap_virtual_unlocked() as ttm_mem_io_free_vm() does nothing in that case (io_reserved_vm is 0). v4: Fix docbook comments v5: use drm_vma_node_size() Cc: Dave Airlie <> Cc: Maarten Lankhorst <> Cc: Thomas Hellstrom <> Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/ttm/ttm_bo.c Change-Id: I4be1eeef8e5b4e81b5966449e2bf3691d8270aae
2014-11-21  drm: add unified vma offset manager  (David Herrmann, 2 files, -1/+283)
If we want to map GPU memory into user-space, we need to linearize the addresses to not confuse mm-core. Currently, GEM and TTM both implement their own offset-managers to assign a pgoff to each object for user-space CPU access. GEM uses a hash-table, TTM uses an rbtree. This patch provides a unified implementation that can be used to replace both. TTM allows partial mmaps with a given offset, so we cannot use hashtables as the start address may not be known at mmap time. Hence, we use the rbtree-implementation of TTM. We could easily update drm_mm to use an rbtree instead of a linked list for its object list and thus drop the rbtree from the vma-manager. However, this would slow down drm_mm object allocation for all other use-cases (rbtree insertion) and add another 4-8 bytes to each mm node. Hence, use the separate tree but allow for later migration. This is a rewrite of the 2012-proposal by David Airlie <>
v2:
- fix Docbook integration
- drop drm_mm_node_linked() and use drm_mm_node_allocated()
- remove unjustified likely/unlikely usage (but keep for rbtree paths)
- remove BUG_ON() as drm_mm already does that
- clarify page-based vs. byte-based addresses
- use drm_vma_node_reset() for initialization, too
v4:
- allow external locking via drm_vma_offset_un/lock_lookup()
- add locked lookup helper drm_vma_offset_lookup_locked()
v5:
- fix drm_vma_offset_lookup() to correctly validate range-mismatches (fix (offset > start + pages))
- fix drm_vma_offset_exact_lookup() to actually do what it says
- remove redundant vm_pages member (add drm_vma_node_size() helper)
- remove unneeded goto
- fix documentation
Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: Documentation/DocBook/drm.tmpl drivers/gpu/drm/Makefile Change-Id: If3427d06b0f9b24c65268912bb75c1b90fe9ad26
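The range validation mentioned in the v5 notes boils down to a containment check on page ranges; a minimal sketch with made-up names (not the actual drm_vma_offset_lookup() implementation):

```c
#include <assert.h>

/* Does a lookup at page offset `start` for `pages` pages fit entirely
 * inside a node covering [node_start, node_start + node_pages)?
 * The v5 fix ensured exactly this kind of range-mismatch is rejected. */
static int model_offset_in_node(unsigned long node_start,
                                unsigned long node_pages,
                                unsigned long start,
                                unsigned long pages)
{
    return start >= node_start &&
           start + pages <= node_start + node_pages;
}
```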
2014-11-21  drm/gem: completely close gem_open vs. gem_close races  (Daniel Vetter, 1 file, -11/+31)
The gem flink name holds a reference onto the object itself, and this self-reference would prevent an flink'ed object from ever being freed. To break that loop we remove the flink name when the last userspace handle disappears, i.e. when obj->handle_count reaches 0. Now in gem_open we drop the dev->object_name_lock between the flink name lookup and actually adding the handle. This means a concurrent gem_close of the last handle could result in the flink name getting reaped right in between, i.e.

    Thread 1 (gem_open)          Thread 2 (gem_close)
    flink -> obj lookup
                                 handle_count drops to 0
                                 remove flink name
    create_handle
    handle_count++

If someone now flinks this object again, we'll get a new flink name. We can close this race by removing the lock dropping and making the entire lookup+handle_create sequence atomic. Unfortunately to still be able to share the handle_create logic this requires a handle_create_tail function which drops the lock - we can't hold the object_name_lock while calling into a driver's ->gem_open callback. Note that for flink fixing this race isn't really important, since racing gem_open against gem_close is clearly a userspace bug. And no matter how the race ends, we won't leak any references. But with dma-buf where the userspace dma-buf fd itself is refcounted this is a valid sequence and hence we should fix it. Therefore this patch here is just a warm-up exercise (and for consistency between flink buffer sharing and dma-buf buffer sharing with self-imports). Also note that this extension of the critical section in gem_open protected by dev->object_name_lock only works because it's now a mutex: a spinlock would conflict with the potential memory allocation in idr_preload(). This is exercised by igt/gem_flink_race/flink_name. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
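The shape of the fix - keeping the name lookup and handle creation inside one object_name_lock critical section - can be modeled in user space roughly as follows (all names hypothetical; this is a toy model, not the kernel code):

```c
#include <assert.h>
#include <pthread.h>

struct model_obj {
    int flink_name;       /* 0 means "no flink name" */
    int handle_count;
};

static pthread_mutex_t object_name_lock = PTHREAD_MUTEX_INITIALIZER;

/* gem_open-style path: name lookup and handle creation now happen inside
 * the same critical section, so a concurrent close can't reap the name
 * in between. */
static int model_open_by_name(struct model_obj *obj, int name)
{
    int ok = 0;
    pthread_mutex_lock(&object_name_lock);
    if (obj->flink_name == name) {  /* flink -> obj lookup */
        obj->handle_count++;        /* create_handle, same section */
        ok = 1;
    }
    pthread_mutex_unlock(&object_name_lock);
    return ok;
}

/* gem_close-style path: last handle gone -> reap the flink name. */
static void model_close_handle(struct model_obj *obj)
{
    pthread_mutex_lock(&object_name_lock);
    if (--obj->handle_count == 0)
        obj->flink_name = 0;
    pthread_mutex_unlock(&object_name_lock);
}
```

In the real patch the lock must still be dropped before the driver's ->gem_open callback, hence the handle_create_tail split mentioned above.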
2014-11-21  drm/gem: switch dev->object_name_lock to a mutex  (Daniel Vetter, 2 files, -9/+13)
I want to wrap the creation of a dma-buf from a gem object in it, so that the obj->export_dma_buf cache can be atomically filled in. Instead of creating a new mutex just for that variable I've figured I can reuse the existing dev->object_name_lock, especially since the new semantics will exactly mirror the flink obj->name already protected by that lock. v2: idr_preload/idr_preload_end is now an atomic section, so we need to move the mutex locking outside. [airlied: fix up conflict with patch to make debugfs use lock] Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/drm_info.c Change-Id: Ic4ca630b9c6092c942208ee9a04409d4f6561fc0
2014-11-21  drm/gem: make drm_gem_object_handle_unreference_unlocked static  (Daniel Vetter, 1 file, -1/+1)
No one outside of drm should use this, the official interfaces are drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting is purely an implementation detail of gem. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: fix up flink name create race  (Daniel Vetter, 3 files, -13/+22)
This is the 2nd attempt; I've always been a bit dissatisfied with the tricky nature of the first one: The issue is that the flink ioctl can race with calling gem_close on the last gem handle. In that case we'll end up with a zero handle count, but an flink name (and its corresponding reference). Which results in a neat space leak. In my first attempt I've solved this by rechecking the handle count. But fundamentally the issue is that ->handle_count isn't your usual refcount - it can be resurrected from 0 among other things. For those special beasts atomic_t often suggests way more ordering than it actually guarantees. To prevent being tricked by those hairy semantics take the easy way out and simply protect the handle with the existing dev->object_name_lock. With that change implemented it's dead easy to fix the flink vs. gem close race: When we try to create the name we simply have to check whether there's still officially a gem handle around and if not refuse to create the flink name. Since the handle count decrement and flink name destruction is now also protected by that lock the race is gone and we can't ever leak the flink reference again. Outside of the drm core only the exynos driver looks at the handle count, and tbh I have no idea why (it's just for debug dmesg output luckily). I've considered inlining drm_gem_object_handle_free, but I plan to add more name-like things (like the exported dma_buf) to this scheme, so it's clearer to leave the handle freeing in its own function. This is exercised by the new gem_flink_race i-g-t testcase, which on my snb leaks gem objects at a rate of roughly 1k objects/s. v2: Fix up the error path handling in handle_create and make it more robust by simply calling object_handle_unreference. v3: Fix up the handle_unreference logic bug - atomic_dec_and_test returns 1 for 0. Oops. v4: Squash in inlining of drm_gem_object_handle_reference as suggested by Dave Airlie and add a note that we now have a testcase.
Cc: Dave Airlie <> Cc: Inki Dae <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: WARN about unbalanced handle refcounts  (Daniel Vetter, 1 file, -1/+1)
Trying to drop a reference we don't have is a pretty serious bug. Trying to paper over it is an even worse offense. So scream into dmesg with a big WARN in case that ever happens. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: remove bogus NULL check from drm_gem_object_handle_unreference_unlocked  (Daniel Vetter, 1 file, -3/+0)
Calling this function with a NULL object is simply a bug, so papering over a NULL object is not a good idea. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: move drm_gem_object_handle_unreference_unlocked into drm_gem.c  (Daniel Vetter, 1 file, -35/+54)
We have three callers of this function now and it's neither performance critical nor really small. So an inline function feels like overkill and unnecessarily separates the different parts of the code. Since all callers of drm_gem_object_handle_free are now in drm_gem.c we can make that static (and remove the unused EXPORT_SYMBOL). To avoid a forward declaration move it (and drm_gem_object_free_bug) up a bit. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add shmem get/put page helpers  (Rob Clark, 1 file, -0/+103)
Basically just extracting some code duplicated in gma500, omapdrm, udl, and upcoming msm driver. Signed-off-by: Rob Clark <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add drm_gem_create_mmap_offset_size()  (Rob Clark, 1 file, -4/+24)
Variant of drm_gem_create_mmap_offset() which doesn't make the assumption that virtual size and physical size (obj->size) are the same. This is needed in omapdrm to deal with tiled buffers. And lets us get rid of a duplicated and slightly modified version of drm_gem_create_mmap_offset() in omapdrm. Signed-off-by: Rob Clark <> Reviewed-by: David Herrmann <> Signed-off-by: Dave Airlie <>
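The virtual-vs-physical size distinction can be illustrated with a small sketch (illustrative names and page size; not the actual DRM helper):

```c
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE 4096UL  /* assumed 4 KiB pages for illustration */

/* The fake mmap offset is allocated from the *virtual* size passed in,
 * which for tiled buffers (as in omapdrm) may exceed the physical
 * obj->size. Round the virtual size up to whole pages. */
static unsigned long model_mmap_offset_pages(size_t virtual_size)
{
    return (virtual_size + MODEL_PAGE_SIZE - 1) / MODEL_PAGE_SIZE;
}
```

The plain variant then just calls the `_size` variant with `virtual_size == obj->size`, which is exactly the relationship the commit sets up.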
2014-11-21  drm/gem: create drm_gem_dumb_destroy  (Daniel Vetter, 37 files, -167/+27)
All the gem based kms drivers really want the same function to destroy a dumb framebuffer backing storage object. So give it to them and roll it out in all drivers. This still leaves the option open for kms drivers which don't use GEM for backing storage, but it does decently simplify matters for gem drivers. Acked-by: Inki Dae <> Acked-by: Laurent Pinchart <> Cc: Intel Graphics Development <> Cc: Ben Skeggs <> Reviewed-by: Rob Clark <> Cc: Alex Deucher <> Acked-by: Patrik Jakobsson <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/rcar-du/rcar_du_drv.c Change-Id: I991aad3f0745732f203a85ff8b5f43e328c045a6
2014-11-21  drm/gem: fix mmap vma size calculations  (David Herrmann, 1 file, -1/+1)
The VMA manager is page-size based so drm_vma_node_size() returns the size in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small buffers. This bug was introduced in commit: 0de23977cfeb5b357ec884ba15417ae118ff9e9b "drm/gem: convert to new unified vma manager" Fixes i915 gtt mmap failure reported by Sedat Dilek in: Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ] Cc: Daniel Vetter <> Cc: Chris Wilson <> Signed-off-by: David Herrmann <> Reported-by: Sedat Dilek <> Tested-by: Sedat Dilek <> Signed-off-by: Dave Airlie <>
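The pages-to-bytes conversion at the heart of this fix, as a minimal sketch with an assumed 4 KiB page size (illustrative names, not the kernel helpers):

```c
#include <assert.h>

#define MODEL_PAGE_SHIFT 12  /* assumed: 4 KiB pages */

/* The bug: drm_vma_node_size() returns a size in pages, while
 * drm_gem_mmap_obj() expects bytes. The fix applies PAGE_SHIFT. */
static unsigned long model_node_size_bytes(unsigned long size_pages)
{
    return size_pages << MODEL_PAGE_SHIFT;
}
```

Without the shift, a 3-page object would be compared against a 3-byte "size", making every reasonable mmap length look too large and fail with EINVAL.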
2014-11-21  drm/gem: convert to new unified vma manager  (David Herrmann, 8 files, -178/+56)
Use the new vma manager instead of the old hashtable. Also convert all drivers to use the new convenience helpers. This drops all the (map_list.hash.key << PAGE_SHIFT) nonsense. Locking and access-management is exactly the same as before with an additional lock inside of the vma-manager, which strictly wouldn't be needed for gem.
v2: - rebase on drm-next - init nodes via drm_vma_node_reset() in drm_gem.c
v3: - fix tegra
v4: - remove duplicate if (drm_vma_node_has_offset()) checks - inline now trivial drm_vma_node_offset_addr() calls
v5: - skip node-reset on gem-init due to kzalloc() - do not allow mapping gem-objects with offsets (backwards compat) - remove unnecessary casts
Cc: Inki Dae <> Cc: Rob Clark <> Cc: Dave Airlie <> Cc: Thierry Reding <> Signed-off-by: David Herrmann <> Acked-by: Patrik Jakobsson <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: simplify object initialization  (David Herrmann, 6 files, -29/+18)
drm_gem_object_init() and drm_gem_private_object_init() do exactly the same (except for shmem alloc) so make the first use the latter to reduce code duplication. Also drop the return code from drm_gem_private_object_init(). It seems unlikely that we will extend it any time soon so no reason to keep it around. This simplifies code paths in drivers, too. Last but not least, fix gma500 to call drm_gem_object_release() before freeing objects that were allocated via drm_gem_private_object_init(). That isn't actually necessary for now, but might be in the future. Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Reviewed-by: Patrik Jakobsson <> Acked-by: Rob Clark <> Signed-off-by: Dave Airlie <>
2014-11-21  drm: make drm_mm_init() return void  (David Herrmann, 4 files, -21/+5)
There is no reason to return "int" as this function never fails. Furthermore, several drivers (ast, sis) already depend on this. Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add mutex lock when using drm_gem_mmap_obj  (YoungJun Cho, 3 files, -0/+291)
drm_gem_mmap_obj() has to be protected with dev->struct_mutex, but some callers do not hold it. So add the mutex lock to the missing callers and add an assertion to check whether drm_gem_mmap_obj() is called with the mutex held. Signed-off-by: YoungJun Cho <> Signed-off-by: Seung-Woo Kim <> Signed-off-by: Kyungmin Park <> Reviewed-by: Maarten Lankhorst <> Reviewed-by: Laurent Pinchart <> Reviewed-by: Rob Clark <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/drm_gem_cma_helper.c drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c Change-Id: Icb683c218b3455f113c073c33166faab5a7fcc4c
2014-11-21  drm/gem: Split drm_gem_mmap() into object search and object mapping  (Laurent Pinchart, 1 file, -31/+52)
The drm_gem_mmap() function first finds the GEM object to be mapped based on the fake mmap offset and then maps the object. Split the object mapping code into a standalone drm_gem_mmap_obj() function that can be used to implement dma-buf mmap() operations. Signed-off-by: Laurent Pinchart <> Reviewed-by: Rob Clark <>
2014-11-21  drm/prime: Simplify drm_gem_remove_prime_handles  (Daniel Vetter, 1 file, -5/+0)
With the reworked semantics and locking of the obj->dma_buf pointer, this pointer is always set as long as there's still a gem handle around and a dma_buf associated with this gem object. Also, the per file-priv lookup-cache for dma-buf importing is unified between foreign and native objects. Hence we don't need to special-case the cleanup any more and can simply drop the clause which only runs for foreign objects, i.e. with obj->import_attach set. Note that with this change (actually with the previous one to always set up obj->dma_buf even for foreign objects) it is no longer required to set obj->import_attach when importing a foreign object. So update comments accordingly, too. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  dma-buf: use reservation objects  (Maarten Lankhorst, 3 files, -3/+230)
This allows reservation objects to be used in dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf. Signed-off-by: Maarten Lankhorst <> Acked-by: Mauro Carvalho Chehab <> #drivers/media/v4l2-core/ Acked-by: Thomas Hellstrom <> #drivers/gpu/drm/ttm Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Signed-off-by: Vincent Stehlé <> #drivers/gpu/drm/armada/ Signed-off-by: Greg Kroah-Hartman <> Signed-off-by: Chanho Park <> Conflicts: drivers/gpu/drm/armada/armada_gem.c drivers/gpu/drm/drm_prime.c drivers/gpu/drm/exynos/exynos_drm_dmabuf.c drivers/gpu/drm/i915/i915_gem_dmabuf.c drivers/gpu/drm/nouveau/nouveau_drm.c drivers/gpu/drm/nouveau/nouveau_gem.h drivers/gpu/drm/nouveau/nouveau_prime.c drivers/gpu/drm/radeon/radeon_drv.c drivers/gpu/drm/tegra/gem.c drivers/staging/android/ion/ion.c Change-Id: I44fbb1f41500deaf9067eb5d7e1c6ed758231d69
2014-11-21  drm/prime: double lock typo  (Dan Carpenter, 1 file, -1/+1)
There is a typo so it deadlocks on error instead of unlocking. Signed-off-by: Dan Carpenter <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: Always add exported buffers to the handle cache  (Daniel Vetter, 2 files, -35/+52)
... not only when the dma-buf is freshly created. In contrived examples someone else could have exported/imported the dma-buf already and handed us the gem object with a flink name. If such an object gets reexported as a dma_buf we won't have it in the handle cache already, which breaks the guarantee that for dma-buf imports we always hand back an existing handle if there is one. This is exercised by igt/prime_self_import/with_one_bo_two_files Now if we extend the locked sections just a notch more we can also plug the racy buf/handle cache setup in handle_to_fd: If evil userspace races a concurrent gem close against a prime export operation we can end up tearing down the gem handle before the dma buf handle cache is set up. When handle_to_fd gets around to adding the handle to the cache there will be no one left to clean it up, effectively leaking the bo (and the dma-buf, since the handle cache holds a ref on the dma-buf):

    Thread A (handle_to_fd)            Thread B
    lookup gem object from handle
    creates new dma_buf
                                       gem_close on the same handle
                                       obj->dma_buf is set, but file priv
                                       buf handle cache has no entry
                                       obj->handle_count drops to 0
    drm_prime_add_buf_handle sets
    up the handle cache

-> We have a dma-buf reference in the handle cache, but since the handle_count of the gem object already dropped to 0 no one will clean it up. When closing the drm device fd we'll hit the WARN_ON in drm_prime_destroy_file_private. The important change is to extend the critical section of the filp->prime.lock to cover the gem handle lookup. This serializes with a concurrent gem handle close. This leak is exercised by igt/prime_self_import/export-vs-gem_close-race Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: make drm_prime_lookup_buf_handle static  (Daniel Vetter, 1 file, -14/+15)
... and move it to the top of the function to avoid a forward declaration. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: proper locking+refcounting for obj->dma_buf link  (Daniel Vetter, 3 files, -17/+126)
The export dma-buf cache is semantically similar to an flink name. So semantically it makes sense to treat it the same and remove the name (i.e. the dma_buf pointer) and its references when the last gem handle disappears. Again we need to be careful, but doubly so: Not only could someone race an export with a gem close ioctl (so we need to recheck obj->handle_count again when assigning the new name), but multiple exports can also race against each other. This is prevented by holding the dev->object_name_lock across the entire section which touches obj->dma_buf. With the new scheme we also need to reinstate the obj->dma_buf link at import time (in case the only reference userspace has held in-between was through the dma-buf fd and not through any native gem handle). For simplicity we don't check whether it's a native object but unconditionally set up that link - with the new scheme of removing the obj->dma_buf reference when the last handle disappears we can do that. To make it clear that this is not just for exported buffers anymore also rename it from export_dma_buf to dma_buf. To make sure that no one can race a fd_to_handle or handle_to_fd with gem_close we use the same tricks as in flink of extending the dev->object_name_lock critical section. With this change we finally have a guaranteed 1:1 relationship (at least for native objects) between gem objects and dma-bufs, even accounting for races (which can happen since the dma-buf itself holds a reference while in-flight). This prevents igt/prime_self_import/export-vs-gem_close-race from Oopsing the kernel. There is still a leak though since the per-file priv dma-buf/handle cache handling is racy. That will be fixed in a later patch. v2: Remove the bogus dma_buf_put from the export_and_register_object failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Signed-off-by: Chanho Park <> Conflicts: drivers/gpu/drm/drm_gem.c Change-Id: I915b0e73cedffa0ba358cf00510e19dccfcb4703
2014-11-21  drm/prime: clarify logic a bit in drm_gem_prime_fd_to_handle  (Daniel Vetter, 1 file, -3/+1)
if (!ret) implies that ret == 0, so no need to clear it again. And explicitly check for ret == 0 to indicate that we're checking an errno integer. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: shrink critical section protected by prime lock  (Daniel Vetter, 1 file, -2/+2)
When exporting a gem object as a dma-buf the critical section for the per-fd prime lock is just the adding (and in case of errors, removing) of the handle to the per-fd lookup cache. So restrict the critical section to just that part of the function. This simplifies later reordering. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: use proper pointer in drm_gem_prime_handle_to_fd  (Daniel Vetter, 1 file, -8/+8)
Part of the function uses the properly-typed dmabuf variable, the other an untyped void *buf. Kill the latter. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: fix error path in drm_gem_prime_fd_to_handle  (Daniel Vetter, 1 file, -1/+1)
handle_unreference only clears up the obj->name and the reference, but would leave a dangling handle in the idr. The right thing to do is to call handle_delete. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: remove cargo-cult locking from map_sg helper  (Daniel Vetter, 1 file, -3/+0)
I've checked both implementations (radeon/nouveau) and they both grab the page array from ttm simply by dereferencing it and then wrapping it up with drm_prime_pages_to_sg in the callback and map it with dma_map_sg (in the helper). Only the grabbing of the underlying page array is anything we need to be concerned about, and either those pages are pinned independently, or we're screwed no matter what. And indeed, nouveau/radeon pin the backing storage in their attach/detach functions. Since I created this patch, cma prime support for dma_buf was added. drm_gem_cma_prime_get_sg_table only calls kzalloc and then creates and maps the sg table with dma_get_sgtable. It doesn't touch any gem object state otherwise. So the cma helpers also look safe. The only thing we might claim it does is prevent concurrent mapping of dma_buf attachments. But a) that's not allowed and b) the current code is racy already since it checks whether the sg mapping exists _before_ grabbing the lock. So the dev->struct_mutex locking here does absolutely nothing useful, but only distracts. Remove it. This should also help Maarten's work to eventually pin the backing storage more dynamically by preventing locking inversions around dev->struct_mutex. v2: Add analysis for recently added cma helper prime code. Cc: Laurent Pinchart <> Cc: Maarten Lankhorst <> Acked-by: Laurent Pinchart <> Acked-by: Maarten Lankhorst <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm: add mmap function to prime helpers  (Joonyoung Shim, 1 file, -1/+7)
This adds support for calling the low-level mmap() from the prime helpers. Signed-off-by: Joonyoung Shim <> Acked-by: Laurent Pinchart <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: fix sgt NULL checking  (Joonyoung Shim, 1 file, -5/+6)
drm_gem_map_detach() can be called with a NULL sgt. Signed-off-by: Joonyoung Shim <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: fix up handle_to_fd ioctl return value  (Daniel Vetter, 1 file, -2/+5)
In commit da34242e5e0638312130f5bd5d2d277afbc6f806 ("drm/prime: add return check for dma_buf_fd", YoungJun Cho, Wed Jun 26 10:21:42 2013 +0900) the failure case handling was fixed up. But in the case when we already had the buffer exported it changed the return value: Previously we've returned 0 on success, now we return the fd. This ABI change has been caught by i-g-t/prime_self_import/with_one_bo. Bugzilla: Cc: YoungJun Cho <> Cc: Seung-Woo Kim <> Cc: Kyungmin Park <> Tested-by: lu hua <> Signed-off-by: Daniel Vetter <> Reviewed-by: YoungJun Cho <> Signed-off-by: Dave Airlie <>
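The restored ABI - success returns 0 while the fd travels out through the ioctl args struct - can be sketched as a toy user-space model (hypothetical names, not the actual drm_gem_prime_handle_to_fd code):

```c
#include <assert.h>
#include <errno.h>

/* Model of the handle_to_fd contract: `cached_fd` stands in for the fd of
 * an already-exported buffer. On success the fd is written through the
 * out-parameter (the args struct in the real ioctl) and the return value
 * stays 0; only failures return a negative errno. */
static int model_handle_to_fd(int cached_fd, int *out_fd)
{
    if (cached_fd < 0)
        return -EINVAL;   /* failure path: propagate an errno */
    *out_fd = cached_fd;  /* fd goes out via the args struct ... */
    return 0;             /* ... and the return value stays 0 */
}
```

Returning the fd directly, as the regression did, breaks callers that treat any nonzero return as an error.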
2014-11-21  drm/prime: add return check for dma_buf_fd  (YoungJun Cho, 1 file, -11/+28)
dma_buf_fd() can return an error when it fails to prepare the fd, in which case the dma_buf needs to be put. Signed-off-by: YoungJun Cho <> Signed-off-by: Seung-Woo Kim <> Signed-off-by: Kyungmin Park <> Signed-off-by: Dave Airlie <>