Age | Commit message | Author | Files | Lines
2014-11-25  dma-buf/dmabuf-sync: add dmabuf_sync_reference_reservation function  (tags: submit/tizen/20141126.081420, accepted/tizen/mobile/20141126.082557, accepted/tizen/common/20141126.162632)  Inki Dae  (1 file, -0/+46)
This patch adds the dmabuf_sync_reference_reservation function, which gets/drops a reference to all fences added to the reservation_object of a given dmabuf object, and calls it before blocking and after waking up to avoid a null pointer dereference. Change-Id: I32f31b5a54e233d4a06b9915d26b04cda4e31d01 Signed-off-by: Inki Dae <>
2014-11-25  drm/exynos: add dmabuf sync support for page flip  Inki Dae  (1 file, -0/+74)
This patch adds dmabuf sync support for page flip. With this patch, a dma buffer that is shared with the cpu or other dma devices and accessed by the display or HDMI controllers can be synchronized implicitly. Change-Id: I7bd88b293d4d99b87488c0c1c8b07cb72acfb5e6 Signed-off-by: Inki Dae <>
2014-11-24  dma-buf/dmabuf-sync: fix request order check  Inki Dae  (1 file, -2/+9)
This patch fixes an infinite-loop issue that could occur when a task requested a sync before its previous sync was signaled. For this, it checks whether the context of a given sync object is the same as the previous one. Change-Id: I214a3011de4a8513d9aea5f5d56d85acf3ffae7e Signed-off-by: Inki Dae <>
2014-11-25  cpufreq: stats: fix index of last entity in load_table  Seung-Woo Kim  (1 file, -1/+1)
The load_table array should be indexed with the index of its last entry instead of the maximum entry count. Change-Id: I097421908cdfc875679d84900f46e907833daca3 Signed-off-by: Seung-Woo Kim <>
2014-11-25  drm/exynos: remove all code relevant to dmabuf-sync  (tags: submit/tizen/20141124.161713, accepted/tizen/common/20141124.171414)  Inki Dae  (4 files, -183/+0)
This patch removes the existing code relevant to dmabuf-sync. New code for dmabuf-sync support will be added later. Change-Id: I1f794f95941519d8ef791f6fda01ab5f1e32807a Signed-off-by: Inki Dae <>
2014-11-24  packaging: squash patches from upstream to HEAD-10  (tags: submit/tizen/20141124.132219, accepted/tizen/common/20141124.153414)  Chanho Park  (1 file, -0/+1)
This patch makes one squashed patch out of the upstream patches up to HEAD (10 patches). This fixes the "argument list too long" error in the qemu build environment. Change-Id: I17e17a43d0516cdbb5383a1e0e8f479d68719d61 Signed-off-by: Chanho Park <>
2014-11-21  devfreq: exynos4: Add sentinel into array to fix out-of-bound access  (tags: submit/tizen_common/20141124.090428, submit/tizen/20141123.134900, submit/tizen/20141121.110247)  Seung-Woo Kim  (1 file, -0/+1)
Without a sentinel, of_match_node() on the array causes out-of-bounds memory access, so this patch adds a sentinel to exynos4_busfreq_match. Change-Id: Iec2390fd367a22e6388eb97fae21eafc3836e206 Signed-off-by: Seung-Woo Kim <>
2014-11-21  selftests: dma-buf: add selftest for dmabuf-sync  Chanho Park  (2 files, -0/+196)
Change-Id: Id5c0cf374ae497dc1c9825109de7dbc5c14ee4dc Signed-off-by: Chanho Park <>
2014-11-21  dma-buf: add dma-buf-test driver  Chanho Park  (3 files, -1/+334)
Change-Id: Ia7d3608d3ec033453147c1bb500aca144ee42902 Signed-off-by: Chanho Park <>
2014-11-21  dma-buf: add fcntl system call support  Inki Dae  (1 file, -0/+29)
This patch adds a lock callback to the dmabuf framework; the callback is invoked by an fcntl request. With this patch, userspace applications can use the fcntl system call to drive the dmabuf sync mechanism. Change-Id: Id3631cbc21e84c986e2efe040881e401ade180e8 Signed-off-by: Inki Dae <>
2014-11-21  dma-buf/dmabuf-sync: add dmabuf sync framework  Inki Dae  (8 files, -15/+1422)
The DMA Buffer synchronization API provides a buffer synchronization mechanism based on the DMA buffer sharing mechanism [1] and the dma-fence and reservation frameworks [2]; i.e., buffer access control between CPU and DMA, and easy-to-use interfaces for device drivers and user applications. This API can be used for all dma devices that use system memory as a dma buffer, especially most ARM-based SoCs. For more details, please refer to Documentation/dma-buf-sync.txt. [1] [2] Change-Id: I3b2084a3c331fc06992fa8d2a4c71378e88b10b5 Signed-off-by: Inki Dae <>
2014-11-21  local: drm/exynos: fix dmabuf variable name  Chanho Park  (2 files, -3/+3)
Change-Id: Ie106e49107d687d053259fa35f889374a8cc0924 Signed-off-by: Chanho Park <>
2014-11-21  gpu/drm: fix compile error since backported  Chanho Park  (2 files, -51/+1)
Change-Id: I5c9a62578057b164898c8f7880d0566e813dba65 Signed-off-by: Chanho Park <>
2014-11-21  drm/vma: add access management helpers  David Herrmann  (3 files, -3/+192)
The VMA offset manager uses a device-global address-space. Hence, any user can currently map any offset-node they want; they only need to guess the right offset. If we wanted per-open-file offset spaces, we'd need either VM_NONLINEAR mappings or multiple "struct address_space" trees. As neither really scales, we implement access management in the VMA manager itself. We use an rb-tree to store open-files for each VMA node. On each mmap call, GEM, TTM or the drivers must check whether the current user is allowed to map this file. We add a separate lock for each node as there is no generic lock available for the caller to protect the node easily. As we currently don't know whether an object may be used for mmap(), we have to do access management for all objects. If it turns out to slow down handle creation/deletion significantly, we can optimize it in several ways:
- Most times only a single filp is added per bo, so we could use a static "struct file *main_filp" which is checked/added/removed first before we fall back to the rbtree+drm_vma_offset_file. This could even be done locklessly with rcu.
- Let user-space pass a hint whether mmap() should be supported on the bo and avoid access-management if not.
- ... there are probably more ideas once we have benchmarks ...
v2: add drm_vma_node_verify_access() helper
Signed-off-by: David Herrmann <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/mm: add "best_match" flag to drm_mm_insert_node()  David Herrmann  (7 files, -54/+66)
Add a "best_match" flag similar to the drm_mm_search_*() helpers so we can convert TTM to use them in follow-up patches. We can also inline the non-generic helpers and move them into the header to allow compile-time optimizations. To make calls to drm_mm_{search,insert}_node() more readable, this converts the boolean argument to a flagset. There are pending patches that add additional flags for top-down allocators and more.
v2:
- use flag parameter instead of boolean "best_match"
- convert *_search_free() helpers to also use flags argument
Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/i915/i915_gem.c Change-Id: I77640db74616de3c9ae874531f71bbd81b89d5fa
2014-11-21  drm/vma: provide drm_vma_node_unmap() helper  David Herrmann  (3 files, -10/+24)
Instead of unmapping the nodes in TTM and GEM users manually, we provide a generic wrapper which does the correct thing for all vma-nodes.
v2: remove bdev->dev_mapping test in ttm_bo_unmap_virtual_unlocked() as ttm_mem_io_free_vm() does nothing in that case (io_reserved_vm is 0).
v4: Fix docbook comments
v5: use drm_vma_node_size()
Cc: Dave Airlie <> Cc: Maarten Lankhorst <> Cc: Thomas Hellstrom <> Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/ttm/ttm_bo.c Change-Id: I4be1eeef8e5b4e81b5966449e2bf3691d8270aae
2014-11-21  drm: add unified vma offset manager  David Herrmann  (4 files, -1/+613)
If we want to map GPU memory into user-space, we need to linearize the addresses to not confuse mm-core. Currently, GEM and TTM both implement their own offset-managers to assign a pgoff to each object for user-space CPU access. GEM uses a hash-table, TTM uses an rbtree. This patch provides a unified implementation that can be used to replace both. TTM allows partial mmaps with a given offset, so we cannot use hashtables as the start address may not be known at mmap time. Hence, we use the rbtree implementation of TTM. We could easily update drm_mm to use an rbtree instead of a linked list for its object list and thus drop the rbtree from the vma-manager. However, this would slow down drm_mm object allocation for all other use-cases (rbtree insertion) and add another 4-8 bytes to each mm node. Hence, use the separate tree but allow for later migration. This is a rewrite of the 2012 proposal by David Airlie <>.
v2:
- fix Docbook integration
- drop drm_mm_node_linked() and use drm_mm_node_allocated()
- remove unjustified likely/unlikely usage (but keep for rbtree paths)
- remove BUG_ON() as drm_mm already does that
- clarify page-based vs. byte-based addresses
- use drm_vma_node_reset() for initialization, too
v4:
- allow external locking via drm_vma_offset_un/lock_lookup()
- add locked lookup helper drm_vma_offset_lookup_locked()
v5:
- fix drm_vma_offset_lookup() to correctly validate range-mismatches (fix (offset > start + pages))
- fix drm_vma_offset_exact_lookup() to actually do what it says
- remove redundant vm_pages member (add drm_vma_node_size() helper)
- remove unneeded goto
- fix documentation
Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: Documentation/DocBook/drm.tmpl drivers/gpu/drm/Makefile Change-Id: If3427d06b0f9b24c65268912bb75c1b90fe9ad26
2014-11-21  local: fence: use smp_mb__before_atomic_inc  Chanho Park  (1 file, -2/+2)
smp_mb__before_atomic is hard to merge because there are too many prerequisite patches. Thus, just use smp_mb__before_atomic_inc, which always converts to smp_mb. Change-Id: Ia31d488eaf218cc4585d9256457855e1a9d6b321 Signed-off-by: Chanho Park <>
2014-11-21  drm/gem: completely close gem_open vs. gem_close races  Daniel Vetter  (2 files, -11/+34)
The gem flink name holds a reference onto the object itself, and this self-reference would prevent an flink'ed object from ever being freed. To break that loop we remove the flink name when the last userspace handle disappears, i.e. when obj->handle_count reaches 0. Now in gem_open we drop the dev->object_name_lock between the flink name lookup and actually adding the handle. This means a concurrent gem_close of the last handle could result in the flink name getting reaped right in between, i.e.
Thread 1                 Thread 2
gem_open                 gem_close
flink -> obj lookup
                         handle_count drops to 0
                         remove flink name
create_handle
handle_count++
If someone now flinks this object again, we'll get a new flink name. We can close this race by removing the lock dropping and making the entire lookup+handle_create sequence atomic. Unfortunately, to still be able to share the handle_create logic this requires a handle_create_tail function which drops the lock - we can't hold the object_name_lock while calling into a driver's ->gem_open callback. Note that for flink, fixing this race isn't really important, since racing gem_open against gem_close is clearly a userspace bug, and no matter how the race ends, we won't leak any references. But with dma-buf, where the userspace dma-buf fd itself is refcounted, this is a valid sequence and hence we should fix it. Therefore this patch here is just a warm-up exercise (and for consistency between flink buffer sharing and dma-buf buffer sharing with self-imports). Also note that this extension of the critical section in gem_open protected by dev->object_name_lock only works because it's now a mutex: a spinlock would conflict with the potential memory allocation in idr_preload(). This is exercised by igt/gem_flink_race/flink_name. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: switch dev->object_name_lock to a mutex  Daniel Vetter  (3 files, -10/+14)
I want to wrap the creation of a dma-buf from a gem object in it, so that the obj->export_dma_buf cache can be atomically filled in. Instead of creating a new mutex just for that variable I've figured I can reuse the existing dev->object_name_lock, especially since the new semantics will exactly mirror the flink obj->name already protected by that lock. v2: idr_preload/idr_preload_end is now an atomic section, so need to move the mutex locking outside. [airlied: fix up conflict with patch to make debugfs use lock] Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/drm_info.c Change-Id: Ic4ca630b9c6092c942208ee9a04409d4f6561fc0
2014-11-21  drm/gem: make drm_gem_object_handle_unreference_unlocked static  Daniel Vetter  (2 files, -2/+1)
No one outside of drm should use this, the official interfaces are drm_gem_handle_create and drm_gem_handle_delete. The handle refcounting is purely an implementation detail of gem. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: fix up flink name create race  Daniel Vetter  (4 files, -22/+32)
This is the 2nd attempt; I've always been a bit dissatisfied with the tricky nature of the first one: the issue is that the flink ioctl can race with calling gem_close on the last gem handle. In that case we'll end up with a zero handle count, but an flink name (and its corresponding reference), which results in a neat space leak. In my first attempt I solved this by rechecking the handle count. But fundamentally the issue is that ->handle_count isn't your usual refcount - it can be resurrected from 0, among other things. For those special beasts atomic_t often suggests way more ordering than it actually guarantees. To prevent being tricked by those hairy semantics, take the easy way out and simply protect the handle with the existing dev->object_name_lock. With that change implemented it's dead easy to fix the flink vs. gem close race: when we try to create the name we simply have to check whether there's still officially a gem handle around, and if not, refuse to create the flink name. Since the handle count decrement and flink name destruction are now also protected by that lock, the race is gone and we can't ever leak the flink reference again. Outside of the drm core only the exynos driver looks at the handle count, and tbh I have no idea why (it's just for debug dmesg output, luckily). I've considered inlining drm_gem_object_handle_free, but I plan to add more name-like things (like the exported dma_buf) to this scheme, so it's clearer to leave the handle freeing in its own function. This is exercised by the new gem_flink_race i-g-t testcase, which on my snb leaks gem objects at a rate of roughly 1k objects/s.
v2: Fix up the error path handling in handle_create and make it more robust by simply calling object_handle_unreference.
v3: Fix up the handle_unreference logic bug - atomic_dec_and_test returns 1 for 0. Oops.
v4: Squash in inlining of drm_gem_object_handle_reference as suggested by Dave Airlie and add a note that we now have a testcase.
Cc: Dave Airlie <> Cc: Inki Dae <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: WARN about unbalanced handle refcounts  Daniel Vetter  (1 file, -1/+1)
Trying to drop a reference we don't have is a pretty serious bug. Trying to paper over it is an even worse offense. So scream into dmesg with a big WARN in case that ever happens. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: remove bogus NULL check from drm_gem_object_handle_unreference_unlocked  Daniel Vetter  (1 file, -3/+0)
Calling this function with a NULL object is simply a bug, so papering over a NULL object is not a good idea. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: move drm_gem_object_handle_unreference_unlocked into drm_gem.c  Daniel Vetter  (2 files, -55/+55)
We have three callers of this function now, and it's neither performance-critical nor really small. So an inline function feels like overkill and unnecessarily separates the different parts of the code. Since all callers of drm_gem_object_handle_free are now in drm_gem.c we can make it static (and remove the unused EXPORT_SYMBOL). To avoid a forward declaration, move it (and drm_gem_object_free_bug) up a bit. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: remove drm_gem_object_handle_unreference  Daniel Vetter  (1 file, -18/+0)
It's unused, everyone is using the _unlocked variant only. Signed-off-by: Daniel Vetter <> Reviewed-by: Rob Clark <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add shmem get/put page helpers  Rob Clark  (2 files, -0/+107)
Basically just extracting some code duplicated in gma500, omapdrm, udl, and upcoming msm driver. Signed-off-by: Rob Clark <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add drm_gem_create_mmap_offset_size()  Rob Clark  (2 files, -4/+25)
Variant of drm_gem_create_mmap_offset() which doesn't make the assumption that virtual size and physical size (obj->size) are the same. This is needed in omapdrm to deal with tiled buffers. And lets us get rid of a duplicated and slightly modified version of drm_gem_create_mmap_offset() in omapdrm. Signed-off-by: Rob Clark <> Reviewed-by: David Herrmann <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: create drm_gem_dumb_destroy  Daniel Vetter  (43 files, -185/+32)
All the gem-based kms drivers really want the same function to destroy a dumb framebuffer backing-storage object. So give it to them and roll it out in all drivers. This still leaves the option open for kms drivers which don't use GEM for backing storage, but it does decently simplify matters for gem drivers. Acked-by: Inki Dae <> Acked-by: Laurent Pinchart <> Cc: Intel Graphics Development <> Cc: Ben Skeggs <> Reviewed-by: Rob Clark <> Cc: Alex Deucher <> Acked-by: Patrik Jakobsson <> Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/rcar-du/rcar_du_drv.c Change-Id: I991aad3f0745732f203a85ff8b5f43e328c045a6
2014-11-21  drm/gem: fix mmap vma size calculations  David Herrmann  (1 file, -1/+1)
The VMA manager is page-size based so drm_vma_node_size() returns the size in pages. However, drm_gem_mmap_obj() requires the size in bytes. Apply PAGE_SHIFT so we no longer get EINVAL during mmaps due to too small buffers. This bug was introduced in commit: 0de23977cfeb5b357ec884ba15417ae118ff9e9b "drm/gem: convert to new unified vma manager" Fixes i915 gtt mmap failure reported by Sedat Dilek in: Re: linux-next: Tree for Jul 25 [ call-trace: drm | drm-intel related? ] Cc: Daniel Vetter <> Cc: Chris Wilson <> Signed-off-by: David Herrmann <> Reported-by: Sedat Dilek <> Tested-by: Sedat Dilek <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: convert to new unified vma manager  David Herrmann  (11 files, -186/+62)
Use the new vma manager instead of the old hashtable. Also convert all drivers to use the new convenience helpers. This drops all the (map_list.hash.key << PAGE_SHIFT) non-sense. Locking and access-management is exactly the same as before, with an additional lock inside of the vma-manager, which strictly wouldn't be needed for gem.
v2:
- rebase on drm-next
- init nodes via drm_vma_node_reset() in drm_gem.c
v3:
- fix tegra
v4:
- remove duplicate if (drm_vma_node_has_offset()) checks
- inline now trivial drm_vma_node_offset_addr() calls
v5:
- skip node-reset on gem-init due to kzalloc()
- do not allow mapping gem-objects with offsets (backwards compat)
- remove unnecessary casts
Cc: Inki Dae <> Cc: Rob Clark <> Cc: Dave Airlie <> Cc: Thierry Reding <> Signed-off-by: David Herrmann <> Acked-by: Patrik Jakobsson <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: simplify object initialization  David Herrmann  (7 files, -31/+20)
drm_gem_object_init() and drm_gem_private_object_init() do exactly the same (except for shmem alloc) so make the first use the latter to reduce code duplication. Also drop the return code from drm_gem_private_object_init(). It seems unlikely that we will extend it any time soon so no reason to keep it around. This simplifies code paths in drivers, too. Last but not least, fix gma500 to call drm_gem_object_release() before freeing objects that were allocated via drm_gem_private_object_init(). That isn't actually necessary for now, but might be in the future. Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Reviewed-by: Patrik Jakobsson <> Acked-by: Rob Clark <> Signed-off-by: Dave Airlie <>
2014-11-21  drm: make drm_mm_init() return void  David Herrmann  (5 files, -24/+8)
There is no reason to return "int" as this function never fails. Furthermore, several drivers (ast, sis) already depend on this. Signed-off-by: David Herrmann <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/gem: add mutex lock when using drm_gem_mmap_obj  YoungJun Cho  (3 files, -0/+291)
The drm_gem_mmap_obj() function has to be protected with dev->struct_mutex, but some caller functions do not do so. So this patch adds the mutex lock to the missing callers and adds an assertion to check whether drm_gem_mmap_obj() is called with the mutex held. Signed-off-by: YoungJun Cho <> Signed-off-by: Seung-Woo Kim <> Signed-off-by: Kyungmin Park <> Reviewed-by: Maarten Lankhorst <> Reviewed-by: Laurent Pinchart <> Reviewed-by: Rob Clark <> Signed-off-by: Dave Airlie <> Conflicts: drivers/gpu/drm/drm_gem_cma_helper.c drivers/gpu/drm/omapdrm/omap_gem_dmabuf.c Change-Id: Icb683c218b3455f113c073c33166faab5a7fcc4c
2014-11-21  drm/gem: Split drm_gem_mmap() into object search and object mapping  Laurent Pinchart  (2 files, -31/+54)
The drm_gem_mmap() function first finds the GEM object to be mapped based on the fake mmap offset and then maps the object. Split the object mapping code into a standalone drm_gem_mmap_obj() function that can be used to implement dma-buf mmap() operations. Signed-off-by: Laurent Pinchart <> Reviewed-by: Rob Clark <>
2014-11-21  drm/prime: Simplify drm_gem_remove_prime_handles  Daniel Vetter  (2 files, -6/+15)
With the reworked semantics and locking of the obj->dma_buf pointer, this pointer is always set as long as there's still a gem handle around and a dma_buf associated with this gem object. Also, the per-file-priv lookup-cache for dma-buf importing is now unified between foreign and native objects. Hence we don't need to special-case the cleanup any more and can simply drop the clause which only runs for foreign objects, i.e. with obj->import_attach set. Note that with this change (actually with the previous one to always set up obj->dma_buf even for foreign objects) it is no longer required to set obj->import_attach when importing a foreign object. So update comments accordingly, too. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  seqcount: Add lockdep functionality to seqcount/seqlock structures  John Stultz  (7 files, -21/+90)
Currently seqlocks and seqcounts don't support lockdep. After running across a seqcount related deadlock in the timekeeping code, I used a less-refined and more focused variant of this patch to narrow down the cause of the issue. This is a first-pass attempt to properly enable lockdep functionality on seqlocks and seqcounts. Since seqcounts are used in the vdso gettimeofday code, I've provided non-lockdep accessors for those needs. I've also handled one case where there were nested seqlock writers and there may be more edge cases. Comments and feedback would be appreciated! Signed-off-by: John Stultz <> Signed-off-by: Peter Zijlstra <> Cc: Eric Dumazet <> Cc: Li Zefan <> Cc: Mathieu Desnoyers <> Cc: Steven Rostedt <> Cc: "David S. Miller" <> Cc: Link: Signed-off-by: Ingo Molnar <>
2014-11-21  seqlock: Add a new locking reader type  Waiman Long  (1 file, -5/+63)
The sequence lock (seqlock) was originally designed for cases where the readers do not need to block the writers: the readers retry the read operation when the data changes. Since then, the use cases have been expanded to include situations where a thread does not need to change the data (effectively a reader) at all, but has to take the writer lock because it can't tolerate changes to the protected structure. Some examples are the d_path() function and the getcwd() syscall in fs/dcache.c, where the functions take the writer lock on rename_lock even though they don't need to change anything in the protected data structure at all. This is inefficient, as a reader is now blocking other sequence-number-reading readers from moving forward by pretending to be a writer. This patch tries to eliminate this inefficiency by introducing a new type of locking reader to the seqlock locking mechanism. This new locking reader will take an exclusive lock, preventing other writers and locking readers from going forward. However, it won't affect the progress of the other sequence-number-reading readers, as the sequence number won't be changed. Signed-off-by: Waiman Long <> Cc: Alexander Viro <> Signed-off-by: Linus Torvalds <>
2014-11-21  lockdep: Introduce lock_acquire_exclusive()/shared() helper macros  Michel Lespinasse  (1 file, -69/+23)
In lockdep.h, the spinlock/mutex/rwsem/rwlock/lock_map acquire macros have different definitions based on the value of CONFIG_PROVE_LOCKING. We have separate ifdefs for each of these definitions, which seems redundant. Introduce lock_acquire_{exclusive,shared,shared_recursive} helpers which will have different definitions based on CONFIG_PROVE_LOCKING. Then all other helper macros can be defined based on the above ones, which reduces the amount of ifdefined code. Signed-off-by: Michel Lespinasse <> Signed-off-by: Andrew Morton <> Signed-off-by: Peter Zijlstra <> Cc: Oleg Nesterov <> Cc: Lai Jiangshan <> Cc: "Srivatsa S. Bhat" <> Cc: Rusty Russell <> Cc: Andi Kleen <> Cc: "Paul E. McKenney" <> Cc: Steven Rostedt <> Link: Signed-off-by: Ingo Molnar <>
2014-11-21  reservation: add support for read-only access using rcu  Maarten Lankhorst  (5 files, -54/+400)
This adds some extra functions to deal with rcu: reservation_object_get_fences_rcu() will obtain the list of shared and exclusive fences without obtaining the ww_mutex. reservation_object_wait_timeout_rcu() will wait on all fences of the reservation_object, without obtaining the ww_mutex. reservation_object_test_signaled_rcu() will test if all fences of the reservation_object are signaled without using the ww_mutex. reservation_object_get_excl and reservation_object_get_list require the reservation object to be held; updating requires write_seqcount_begin/end. If only the exclusive fence is needed, rcu_dereference followed by fence_get_rcu can be used; if the shared fences are needed it's recommended to use the supplied functions. Signed-off-by: Maarten Lankhorst <> Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Reviewed-By: Thomas Hellstrom <> Signed-off-by: Greg Kroah-Hartman <>
2014-11-21  reservation: update api and add some helpers  Maarten Lankhorst  (4 files, -19/+229)
Move the list of shared fences to a struct, and return it in reservation_object_get_list(). Add reservation_object_get_excl to get the exclusive fence. Add reservation_object_reserve_shared(), which reserves space in the reservation_object for 1 more shared fence. reservation_object_add_shared_fence() and reservation_object_add_excl_fence() are used to assign a new fence to a reservation_object pointer, to complete a reservation. Changes since v1: - Add reservation_object_get_excl, reorder code a bit. Signed-off-by: Maarten Lankhorst <> Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Signed-off-by: Greg Kroah-Hartman <>
2014-11-21  dma-buf: add poll support, v3  Maarten Lankhorst  (2 files, -0/+147)
Thanks to Fengguang Wu for spotting a missing static cast. v2: - Kill unused variable need_shared. v3: - Clarify the BUG() in dma_buf_release some more. (Rob Clark) Signed-off-by: Maarten Lankhorst <> Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Signed-off-by: Greg Kroah-Hartman <> Signed-off-by: Chanho Park <> Conflicts: drivers/dma-buf/dma-buf.c Change-Id: I6c0d192dfd53809a16d3564e3863c1d1f0f348c7
2014-11-21  reservation: add support for fences to enable cross-device synchronisation  Maarten Lankhorst  (1 file, -1/+19)
Signed-off-by: Maarten Lankhorst <> Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Reviewed-by: Rob Clark <> Signed-off-by: Greg Kroah-Hartman <>
2014-11-21  mutex: Move ww_mutex definitions to ww_mutex.h  Maarten Lankhorst  (5 files, -359/+381)
Move the definitions for wound/wait mutexes out to a separate header, ww_mutex.h. This reduces clutter in mutex.h, and increases readability. Suggested-by: Linus Torvalds <> Signed-off-by: Maarten Lankhorst <> Acked-by: Peter Zijlstra <> Acked-by: Rik van Riel <> Acked-by: Maarten Lankhorst <> Cc: Dave Airlie <> Link: [ Tidied up the code a bit. ] Signed-off-by: Ingo Molnar <>
2014-11-21  dma-buf: use reservation objects  Maarten Lankhorst  (7 files, -9/+260)
This allows reservation objects to be used in dma-buf. It's required for implementing polling support on the fences that belong to a dma-buf. Signed-off-by: Maarten Lankhorst <> Acked-by: Mauro Carvalho Chehab <> #drivers/media/v4l2-core/ Acked-by: Thomas Hellstrom <> #drivers/gpu/drm/ttm Acked-by: Sumit Semwal <> Acked-by: Daniel Vetter <> Signed-off-by: Vincent Stehlé <> #drivers/gpu/drm/armada/ Signed-off-by: Greg Kroah-Hartman <> Signed-off-by: Chanho Park <> Conflicts: drivers/gpu/drm/armada/armada_gem.c drivers/gpu/drm/drm_prime.c drivers/gpu/drm/exynos/exynos_drm_dmabuf.c drivers/gpu/drm/i915/i915_gem_dmabuf.c drivers/gpu/drm/nouveau/nouveau_drm.c drivers/gpu/drm/nouveau/nouveau_gem.h drivers/gpu/drm/nouveau/nouveau_prime.c drivers/gpu/drm/radeon/radeon_drv.c drivers/gpu/drm/tegra/gem.c drivers/gpu/drm/ttm/ttm_object.c drivers/staging/android/ion/ion.c Change-Id: I44fbb1f41500deaf9067eb5d7e1c6ed758231d69
2014-11-21  drm/prime: double lock typo  Dan Carpenter  (1 file, -1/+1)
There is a typo, so the code deadlocks on error instead of unlocking. Signed-off-by: Dan Carpenter <> Reviewed-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: Always add exported buffers to the handle cache  Daniel Vetter  (3 files, -36/+53)
... not only when the dma-buf is freshly created. In contrived examples someone else could have exported/imported the dma-buf already and handed us the gem object with a flink name. If such an object gets re-exported as a dma_buf, we won't have it in the handle cache already, which breaks the guarantee that for dma-buf imports we always hand back an existing handle if there is one. This is exercised by igt/prime_self_import/with_one_bo_two_files. Now if we extend the locked sections just a notch more we can also plug the racy buf/handle cache setup in handle_to_fd: if evil userspace races a concurrent gem close against a prime export operation, we can end up tearing down the gem handle before the dma-buf handle cache is set up. When handle_to_fd gets around to adding the handle to the cache there will be no one left to clean it up, effectively leaking the bo (and the dma-buf, since the handle cache holds a ref on the dma-buf):
Thread A                          Thread B
handle_to_fd:
lookup gem object from handle
creates new dma_buf
                                  gem_close on the same handle
                                  obj->dma_buf is set, but file priv buf
                                  handle cache has no entry
                                  obj->handle_count drops to 0
drm_prime_add_buf_handle sets up the handle cache
-> We have a dma-buf reference in the handle cache, but since the handle_count of the gem object already dropped to 0 no one will clean it up. When closing the drm device fd we'll hit the WARN_ON in drm_prime_destroy_file_private. The important change is to extend the critical section of the filp->prime.lock to cover the gem handle lookup. This serializes with a concurrent gem handle close. This leak is exercised by igt/prime_self_import/export-vs-gem_close-race. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: make drm_prime_lookup_buf_handle static  Daniel Vetter  (2 files, -15/+15)
... and move it to the top of the function to avoid a forward declaration. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>
2014-11-21  drm/prime: proper locking+refcounting for obj->dma_buf link  Daniel Vetter  (4 files, -19/+136)
The export dma-buf cache is semantically similar to an flink name, so it makes sense to treat it the same and remove the name (i.e. the dma_buf pointer) and its references when the last gem handle disappears. Again we need to be careful, but doubly so: not only could someone race an export against a gem close ioctl (so we need to recheck obj->handle_count again when assigning the new name), but multiple exports can also race against each other. This is prevented by holding the dev->object_name_lock across the entire section which touches obj->dma_buf. With the new scheme we also need to reinstate the obj->dma_buf link at import time (in case the only reference userspace has held in-between was through the dma-buf fd and not through any native gem handle). For simplicity we don't check whether it's a native object but unconditionally set up that link - with the new scheme of removing the obj->dma_buf reference when the last handle disappears we can do that. To make it clear that this is not just for exported buffers anymore, also rename it from export_dma_buf to dma_buf. To make sure that no one can race a fd_to_handle or handle_to_fd with gem_close, we use the same tricks as in flink of extending the dev->object_name_lock critical section. With this change we finally have a guaranteed 1:1 relationship (at least for native objects) between gem objects and dma-bufs, even accounting for races (which can happen since the dma-buf itself holds a reference while in-flight). This prevents igt/prime_self_import/export-vs-gem_close-race from Oopsing the kernel. There is still a leak, though, since the per-file-priv dma-buf/handle cache handling is racy. That will be fixed in a later patch.
v2: Remove the bogus dma_buf_put from the export_and_register_object failure path if we've raced with the handle count dropping to 0.
Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <> Signed-off-by: Chanho Park <> Conflicts: drivers/gpu/drm/drm_gem.c Change-Id: I915b0e73cedffa0ba358cf00510e19dccfcb4703
2014-11-21  drm/prime: clarify logic a bit in drm_gem_prime_fd_to_handle  Daniel Vetter  (1 file, -3/+1)
if (!ret) implies that ret == 0, so no need to clear it again. And explicitly check for ret == 0 to indicate that we're checking an errno integer. Signed-off-by: Daniel Vetter <> Signed-off-by: Dave Airlie <>