| author | Manfred Spraul <manfred@colorfullife.com> | 2013-09-30 13:45:06 -0700 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2013-10-18 07:45:48 -0700 |
| commit | 901f6fedc5340d66e2ca67c70dfee926cb5a1ea0 (patch) | |
| tree | 95fbc8a2a729c57538e3aba54c84fe380a20c50f | |
| parent | 184076a9f9306c9bef6843bf4cc7b7e15b8fc7b4 (diff) | |
ipc/sem.c: optimize sem_lock()
commit 6d07b68ce16ae9535955ba2059dedba5309c3ca1 upstream.
Operations that need access to the whole array must guarantee that there
are no simple operations ongoing. Right now this is achieved by
spin_unlock_wait(sem->lock) on all semaphores.
If complex_count is nonzero, then this spin_unlock_wait() is not
necessary: it was already performed in the past by the thread that
increased complex_count, and even though sem_perm.lock was dropped
in between, no simple operation could have started, because simple
operations cannot start while complex_count is nonzero.
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Cc: Mike Galbraith <bitbucket@online.de>
Cc: Rik van Riel <riel@redhat.com>
Reviewed-by: Davidlohr Bueso <davidlohr@hp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 ipc/sem.c | 8 ++++++++
 1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index 4a92c0447ad..e20658d76bb 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -257,12 +257,20 @@ static void sem_rcu_free(struct rcu_head *head)
  * Caller must own sem_perm.lock.
  * New simple ops cannot start, because simple ops first check
  *	that sem_perm.lock is free.
+ *	that a) sem_perm.lock is free and b) complex_count is 0.
  */
 static void sem_wait_array(struct sem_array *sma)
 {
 	int i;
 	struct sem *sem;
 
+	if (sma->complex_count) {
+		/* The thread that increased sma->complex_count waited on
+		 * all sem->lock locks. Thus we don't need to wait again.
+		 */
+		return;
+	}
+
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);