path: root/include
author    Daniel Vetter <daniel.vetter@ffwll.ch>    2013-06-20 13:31:17 +0200
committer Chanho Park <chanho61.park@samsung.com>   2014-11-18 11:43:27 +0900
commit    0a91dc191b6a008f340dae5d4c07a6c50ce41ece (patch)
tree      8cd0aab2f0db1716810d1020ba122ec0f90e8900 /include
parent    4b0f1266e117c32618dd6e15f732b77e43140bc2 (diff)
mutex: Add w/w mutex slowpath debugging
Inject EDEADLK conditions at pseudo-random intervals, with exponential backoff up to UINT_MAX (to ensure that every lock operation still completes in a reasonable time). This way we can test the wound slowpath even for ww mutex users where contention is never expected, and where the ww deadlock avoidance algorithm is only needed for correctness against malicious userspace. An example would be protecting kernel modesetting properties, which thanks to single-threaded X isn't really expected to contend, ever.

I've looked into using the CONFIG_FAULT_INJECTION infrastructure, but decided against it for two reasons:

- EDEADLK handling is mandatory for ww mutex users and should never affect the outcome of a syscall. This is in contrast to -ENOMEM injection, so fine-grained configurability isn't required.

- The fault injection framework only allows setting a simple probability of failure. Now the probability that a ww mutex acquire stage with N locks will never complete (due to too many injected EDEADLK backoffs) is zero, but the expected number of ww_mutex_lock operations for the completely uncontended case would be O(exp(N)). The per-acquire-ctx exponential backoff chosen here only results in O(log N) overhead due to injection, and so O(N * log N) lock operations. This way we can fail with high probability (and so get good test coverage even for fancy backoff and lock acquisition paths) without running into pathological cases.

Note that EDEADLK will only ever be injected when we managed to acquire the lock. This prevents any behaviour changes for users which rely on the EALREADY semantics.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: dri-devel@lists.freedesktop.org
Cc: linaro-mm-sig@lists.linaro.org
Cc: rostedt@goodmis.org
Cc: daniel@ffwll.ch
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20130620113117.4001.21681.stgit@patser
Signed-off-by: Ingo Molnar <mingo@kernel.org>
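The hunk shown below (the diffstat is limited to 'include') only adds the two per-context counters; the injection logic itself lives elsewhere in the series. For illustration, here is a rough sketch of how a slowpath helper could consume these fields. The helper name ww_mutex_deadlock_injection, the backoff factor, and the exact cap are assumptions for the example, not taken from this diff.

#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
/*
 * Illustrative sketch (assumed helper, not part of this include/ hunk):
 * called right after a ww_mutex has been successfully acquired with an
 * acquire context. Returns -EDEADLK on every deadlock_inject_interval-th
 * acquisition and grows the interval exponentially, capped near UINT_MAX,
 * so a long acquire sequence still completes in reasonable time.
 */
static inline int ww_mutex_deadlock_injection(struct ww_mutex *lock,
					      struct ww_acquire_ctx *ctx)
{
	unsigned tmp;

	if (ctx->deadlock_inject_countdown-- == 0) {
		tmp = ctx->deadlock_inject_interval;
		if (tmp > UINT_MAX / 4)
			tmp = UINT_MAX;		/* cap the backoff */
		else
			tmp = tmp * 2;		/* exponential backoff */

		ctx->deadlock_inject_interval = tmp;
		ctx->deadlock_inject_countdown = tmp;

		/* back out the lock we just took, then report the fake deadlock */
		ww_mutex_unlock(lock);

		return -EDEADLK;
	}

	return 0;
}
#endif

Because the injection only fires after the lock was actually acquired (and is immediately dropped again), callers that rely on the EALREADY semantics see no behaviour change, matching the note in the commit message.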
Diffstat (limited to 'include')
-rw-r--r--  include/linux/mutex.h  8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index a56b0ccc8a6..3793ed7feee 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -98,6 +98,10 @@ struct ww_acquire_ctx {
#ifdef CONFIG_DEBUG_LOCK_ALLOC
struct lockdep_map dep_map;
#endif
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+ unsigned deadlock_inject_interval;
+ unsigned deadlock_inject_countdown;
+#endif
};
struct ww_mutex {
@@ -283,6 +287,10 @@ static inline void ww_acquire_init(struct ww_acquire_ctx *ctx,
&ww_class->acquire_key, 0);
mutex_acquire(&ctx->dep_map, 0, 0, _RET_IP_);
#endif
+#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
+ ctx->deadlock_inject_interval = 1;
+ ctx->deadlock_inject_countdown = ctx->stamp & 0xf;
+#endif
}
/**