author    | Dan Williams <dan.j.williams@intel.com> | 2009-01-06 11:38:14 -0700
committer | Dan Williams <dan.j.williams@intel.com> | 2009-01-06 11:38:14 -0700
commit    | 6f49a57aa5a0c6d4e4e27c85f7af6c83325a12d1 (patch)
tree      | afba24357d1f4ff69ccb2b39a19542546590a50b /crypto/async_tx
parent    | 07f2211e4fbce6990722d78c4f04225da9c0e9cf (diff)
dmaengine: up-level reference counting to the module level
Simply put: if a client wants any dmaengine channel, prevent all dmaengine
modules from being removed; once the clients are done, re-enable module
removal (sketched below).
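
For illustration only (not part of the patch), here is a minimal sketch of
what module-level reference counting can look like, under stated assumptions:
the sketch_* structures and helpers below are hypothetical stand-ins for the
dmaengine internals, while try_module_get()/module_put() and the list/mutex
primitives are real kernel interfaces.

```c
/*
 * Illustrative sketch only -- not the dmaengine implementation.
 * The sketch_* names are hypothetical; try_module_get(), module_put(),
 * and the list/mutex primitives are real kernel APIs.
 */
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/list.h>
#include <linux/printk.h>

struct sketch_dma_device {
	struct list_head global_node;	/* link on sketch_device_list */
	struct module *owner;		/* module providing the channels */
};

static DEFINE_MUTEX(sketch_lock);
static LIST_HEAD(sketch_device_list);
static int sketch_client_count;

/* A client declares that it wants to use dmaengine channels. */
void sketch_dmaengine_get(void)
{
	struct sketch_dma_device *dev;

	mutex_lock(&sketch_lock);
	if (++sketch_client_count == 1) {
		/*
		 * First client: pin every provider module so none can be
		 * unloaded while channels are in use.  (A real version
		 * would remember which modules were successfully pinned.)
		 */
		list_for_each_entry(dev, &sketch_device_list, global_node)
			if (!try_module_get(dev->owner))
				pr_warn("sketch: a provider is unloading\n");
	}
	mutex_unlock(&sketch_lock);
}

/* The last client is done; module removal is allowed again. */
void sketch_dmaengine_put(void)
{
	struct sketch_dma_device *dev;

	mutex_lock(&sketch_lock);
	if (--sketch_client_count == 0)
		list_for_each_entry(dev, &sketch_device_list, global_node)
			module_put(dev->owner);
	mutex_unlock(&sketch_lock);
}
```

The pin/unpin work happens once per 0 <-> 1 client transition rather than once
per transaction, which is the complexity reduction the points below refer to.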
Why? Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
is currently done, requires a complicated scheme to avoid cache-line
bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
dma-driver can be gracefully removed ahead of its user (net, md, or
dma-slave).
3/ None of the in-tree dma-drivers talk to hot-pluggable hardware, but
even if such an engine were built one day we still would not need to notify
clients of remove events. The driver can simply return NULL to a
->prep() request, something that is much easier for a client to handle
(sketched below).
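
To make point 3/ concrete, a hedged client-side sketch: if ->prep() returns
NULL (a channel being torn down, or simply a descriptor shortage), the client
falls back to a CPU copy. sketch_copy() and its parameter mix are illustrative
assumptions; device_prep_dma_memcpy(), dmaengine_submit(), and
dma_async_issue_pending() are the real dmaengine calls being exercised.

```c
#include <linux/dmaengine.h>
#include <linux/string.h>

/*
 * Client-side pattern from the commit message: a NULL return from
 * ->prep() just means "no DMA for this request", so the client falls
 * back to a synchronous CPU copy.  This helper is a sketch, not the
 * async_tx code.
 */
static void sketch_copy(struct dma_chan *chan,
			dma_addr_t dma_dest, dma_addr_t dma_src, size_t len,
			void *cpu_dest, const void *cpu_src)
{
	struct dma_async_tx_descriptor *tx = NULL;

	if (chan)
		tx = chan->device->device_prep_dma_memcpy(chan, dma_dest,
							  dma_src, len, 0);

	if (tx) {
		/* DMA path: queue the descriptor and kick the channel. */
		dmaengine_submit(tx);	/* tx->tx_submit(tx) on 2009 kernels */
		dma_async_issue_pending(chan);
	} else {
		/*
		 * No channel, a channel going away, or a transient
		 * descriptor shortage all look the same to the client:
		 * do the copy on the CPU instead.
		 */
		memcpy(cpu_dest, cpu_src, len);
	}
}
```

Compared with notifying every client of a remove event, this keeps channel
teardown entirely inside the dmaengine core.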
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Diffstat (limited to 'crypto/async_tx')
-rw-r--r-- | crypto/async_tx/async_tx.c | 4
1 file changed, 0 insertions, 4 deletions
diff --git a/crypto/async_tx/async_tx.c b/crypto/async_tx/async_tx.c
index 8cfac182165d..43fe4cbe71e6 100644
--- a/crypto/async_tx/async_tx.c
+++ b/crypto/async_tx/async_tx.c
@@ -198,8 +198,6 @@ dma_channel_add_remove(struct dma_client *client,
 		/* add the channel to the generic management list */
 		master_ref = kmalloc(sizeof(*master_ref), GFP_KERNEL);
 		if (master_ref) {
-			/* keep a reference until async_tx is unloaded */
-			dma_chan_get(chan);
 			init_dma_chan_ref(master_ref, chan);
 			spin_lock_irqsave(&async_tx_lock, flags);
 			list_add_tail_rcu(&master_ref->node,
@@ -221,8 +219,6 @@ dma_channel_add_remove(struct dma_client *client,
 		spin_lock_irqsave(&async_tx_lock, flags);
 		list_for_each_entry(ref, &async_tx_master_list, node)
 			if (ref->chan == chan) {
-				/* permit backing devices to go away */
-				dma_chan_put(ref->chan);
 				list_del_rcu(&ref->node);
 				call_rcu(&ref->rcu, free_dma_chan_ref);
 				found = 1;