author | Thomas Hellstrom <thellstrom@vmware.com> | 2010-11-17 12:28:29 +0000 |
---|---|---|
committer | Dave Airlie <airlied@redhat.com> | 2010-11-22 13:25:18 +1000 |
commit | 702adba22433c175e8429a47760f35ca16caf1cd (patch) | |
tree | a9c6a1ad8ebaf9970a87b7047357b6d7232b70e6 /drivers/gpu/drm/radeon/radeon_object.c | |
parent | 96726fe50feae74812a2ccf5d5da23cb01c0a413 (diff) | |
drm/ttm/radeon/nouveau: Kill the bo lock in favour of a bo device fence_lock
The bo lock was used only to protect the bo sync object members, and since it
is a per-bo lock, fencing a buffer list will see a lot of locks and unlocks.
Replace it with a per-device lock that protects the sync object members on
*all* bos. Reading and setting these members will always be very quick, so
the risk of heavy lock contention is microscopic. Note that waiting for
sync objects will always take place outside of this lock.

The bo device fence lock will eventually be replaced with a seqlock /
rcu mechanism so we can determine that a bo is idle under an
rcu / read seqlock.

However, this change will allow us to batch fencing and unreserving of
buffers with a minimal amount of locking.
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jerome Glisse <j.glisse@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Diffstat (limited to 'drivers/gpu/drm/radeon/radeon_object.c')
-rw-r--r-- | drivers/gpu/drm/radeon/radeon_object.c | 4 |
1 file changed, 2 insertions, 2 deletions
```diff
diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
index 1d067743fee0..e939cb6a91cc 100644
--- a/drivers/gpu/drm/radeon/radeon_object.c
+++ b/drivers/gpu/drm/radeon/radeon_object.c
@@ -369,11 +369,11 @@ void radeon_bo_list_fence(struct list_head *head, void *fence)
 
 	list_for_each_entry(lobj, head, list) {
 		bo = lobj->bo;
-		spin_lock(&bo->tbo.lock);
+		spin_lock(&bo->tbo.bdev->fence_lock);
 		old_fence = (struct radeon_fence *)bo->tbo.sync_obj;
 		bo->tbo.sync_obj = radeon_fence_ref(fence);
 		bo->tbo.sync_obj_arg = NULL;
-		spin_unlock(&bo->tbo.lock);
+		spin_unlock(&bo->tbo.bdev->fence_lock);
 		if (old_fence) {
 			radeon_fence_unref(&old_fence);
 		}
```
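As an illustration of the locking pattern the commit message describes, here is a minimal sketch: one device-wide fence lock whose critical sections are only quick pointer updates, with any waiting on fences done outside the lock. All names here (my_device, my_bo, my_fence, bo_list_fence) are hypothetical stand-ins rather than the real TTM/radeon structures, and a pthread mutex stands in for the kernel spinlock so the sketch builds in userspace.

```c
/*
 * Illustrative sketch only -- not the TTM implementation.  A pthread
 * mutex stands in for the kernel spinlock, and the my_* types are
 * hypothetical stand-ins for the real bo/device/fence structures.
 */
#include <pthread.h>
#include <stddef.h>

struct my_fence {
	int refcount;		/* a real implementation would use an atomic kref */
};

struct my_device {
	/* One lock protecting the sync-object members of *all* bos on the device. */
	pthread_mutex_t fence_lock;
};

struct my_bo {
	struct my_device *bdev;
	struct my_fence *sync_obj;	/* protected by bdev->fence_lock */
	struct my_bo *next;
};

static struct my_fence *fence_ref(struct my_fence *f)
{
	f->refcount++;
	return f;
}

static void fence_unref(struct my_fence **f)
{
	if (*f && --(*f)->refcount == 0) {
		/* A real implementation would free the fence here. */
	}
	*f = NULL;
}

/*
 * Fence every bo in a list.  Each critical section just swaps a pointer
 * and takes a reference, so a single per-device lock sees little
 * contention; waiting on the old fences, if needed, happens outside it.
 */
static void bo_list_fence(struct my_bo *head, struct my_fence *fence)
{
	struct my_bo *bo;
	struct my_fence *old_fence;

	for (bo = head; bo; bo = bo->next) {
		pthread_mutex_lock(&bo->bdev->fence_lock);
		old_fence = bo->sync_obj;
		bo->sync_obj = fence_ref(fence);
		pthread_mutex_unlock(&bo->bdev->fence_lock);

		if (old_fence)
			fence_unref(&old_fence);
	}
}

int main(void)
{
	struct my_device dev = { .fence_lock = PTHREAD_MUTEX_INITIALIZER };
	struct my_fence fence = { .refcount = 1 };
	struct my_bo b1 = { .bdev = &dev };
	struct my_bo b0 = { .bdev = &dev, .next = &b1 };

	bo_list_fence(&b0, &fence);	/* both bos now reference 'fence' */
	return 0;
}
```

Because each bo only needs a brief pointer update under the device-wide lock, a caller fencing a whole buffer list avoids the per-bo lock/unlock churn of the old scheme, which is what makes batched fencing and unreserving cheap.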