path: root/kernel/hung_task.c
2012-04-25  hung task debugging: Inject NMI when hung and going to panic  (Sasha Levin, 1 file changed, -1/+3)

Send an NMI to all CPUs when a hung task is detected and the hung task
code is configured to panic. This gives us a fairly up-to-date snapshot
of all CPUs in the system.

This lets us get a stack trace of all CPUs, which makes life easier
when trying to debug a deadlock, and the NMI doesn't change anything
since the next step is a kernel panic.

Signed-off-by: Sasha Levin <>
Cc: Linus Torvalds <>
Cc: Andrew Morton <>
Link:
[ extended the changelog a bit ]
Signed-off-by: Ingo Molnar <>
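The change described here is small: when the detector is configured to panic, first ask every CPU for a backtrace, then panic. A minimal userspace model of that ordering, with the kernel calls replaced by order-recording stubs (the stub names and globals are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Order-recording stubs; in the kernel these would be
 * trigger_all_cpu_backtrace() and panic(). */
static int step, backtrace_step, panic_step;

static void trigger_all_cpu_backtrace_stub(void) { backtrace_step = ++step; }
static void panic_stub(void)                     { panic_step = ++step; }

/* Model of the panic path: snapshot all CPUs before panicking. */
static void on_hung_task(bool sysctl_hung_task_panic)
{
    if (sysctl_hung_task_panic) {
        trigger_all_cpu_backtrace_stub(); /* NMI backtrace on every CPU */
        panic_stub();                     /* next step is the panic anyway */
    }
}
```

Since the panic ends the system anyway, the extra NMI is free diagnostic information: the backtraces land in the log just before the panic message.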
2012-03-05  hung_task: fix the broken rcu_lock_break() logic  (Oleg Nesterov, 1 file changed, -4/+7)

check_hung_uninterruptible_tasks()->rcu_lock_break() introduced by
"softlockup: check all tasks in hung_task" commit ce9dbe24 looks
absolutely wrong.

- rcu_lock_break() does put_task_struct(). If the task has exited it is
  not safe to even read its ->state, nothing protects this task_struct.

- The TASK_DEAD checks are wrong too. Contrary to the comment, we can't
  use it to check if the task was unhashed. It can be unhashed without
  TASK_DEAD, or it can be valid with TASK_DEAD.

  For example, an autoreaping task can do release_task(current) long
  before it sets TASK_DEAD in do_exit(). Or, a zombie task can have
  ->state == TASK_DEAD but release_task() was not called, and in this
  case we must not break the loop.

Change this code to check pid_alive() instead, and do this before we
drop the reference to the task_struct.

Note: while_each_thread() under rcu_read_lock() is not really safe, it
can livelock. This will be fixed later, but fortunately in this case
the "max_count" logic saves us anyway.

Signed-off-by: Oleg Nesterov <>
Acked-by: Frederic Weisbecker <>
Acked-by: Mandeep Singh Baines <>
Acked-by: Paul E. McKenney <>
Cc: Tetsuo Handa <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
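The fixed ordering — validate with pid_alive() *before* dropping the references — can be modeled in userspace with a toy task struct. Everything below (struct layout, stubs) is illustrative, not the kernel's types:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for task_struct: a refcount plus a "still hashed" flag
 * standing in for what pid_alive() really checks in the kernel. */
struct toy_task { int usage; bool hashed; };

static void get_task(struct toy_task *t)       { t->usage++; }
static void put_task(struct toy_task *t)       { t->usage--; }
static bool pid_alive(const struct toy_task *t) { return t->hashed; }

/* Model of the fixed rcu_lock_break(): take references, drop the RCU
 * lock (elided), then check validity BEFORE putting the references --
 * after put_task() the struct may already be freed. */
static bool rcu_lock_break(struct toy_task *g, struct toy_task *t)
{
    bool can_cont;

    get_task(g);
    get_task(t);
    /* rcu_read_unlock(); cond_resched(); rcu_read_lock(); */
    can_cont = pid_alive(g) && pid_alive(t);
    put_task(t);
    put_task(g);
    return can_cont;
}
```

If either task was unhashed while the lock was dropped, the scan cannot safely continue from those pointers and must bail out instead of following a stale list entry.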
2012-01-04  hung_task: fix false positive during vfork  (Mandeep Singh Baines, 1 file changed, -4/+10)

vfork parent uninterruptibly and unkillably waits for its child to
exec/exit. This wait is of unbounded length. Ignore such waits in the
hung_task detector.

Signed-off-by: Mandeep Singh Baines <>
Reported-by: Sasha Levin <>
LKML-Reference: <1325344394.28904.43.camel@lappy>
Cc: Linus Torvalds <>
Cc: Ingo Molnar <>
Cc: Peter Zijlstra <>
Cc: Andrew Morton <>
Cc: John Kacur <>
Cc:
Signed-off-by: Linus Torvalds <>
2011-10-31  kernel: Map most files to use export.h instead of module.h  (Paul Gortmaker, 1 file changed, -1/+1)

The changed files were only including linux/module.h for the
EXPORT_SYMBOL infrastructure, and nothing else. Revector them onto the
isolated export header for faster compile times.

Nothing to see here but a whole lot of instances of:

  -#include <linux/module.h>
  +#include <linux/export.h>

This commit is only changing the kernel dir; next targets will probably
be mm, fs, the arch dirs, etc.

Signed-off-by: Paul Gortmaker <>
2011-04-28  watchdog, hung_task_timeout: Add Kconfig configurable default  (Jeff Mahoney, 1 file changed, -1/+1)

This patch allows the default value for sysctl_hung_task_timeout_secs
to be set at build time. The feature carries virtually no overhead, so
it makes sense to keep it enabled. On heavily loaded systems, though,
it can end up triggering stack traces when there is no bug other than
the system being underprovisioned.

We use this patch to keep the hung task facility available but disabled
at boot-time. The default of 120 seconds is preserved.

As a note, commit e162b39a may have accidentally reverted commit
fb822db4, which raised the default from 120 seconds to 480 seconds.

Signed-off-by: Jeff Mahoney <>
Acked-by: Mandeep Singh Baines <>
Link:
Signed-off-by: Ingo Molnar <>
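The build-time default described here landed as a Kconfig knob; in later kernels it appears in lib/Kconfig.debug roughly as below. This fragment is reproduced from memory, so treat the exact help text and dependencies as approximate:

```
config DEFAULT_HUNG_TASK_TIMEOUT
	int "Default timeout for hung task detection (in seconds)"
	depends on DETECT_HUNG_TASK
	default 120
	help
	  This option controls the default timeout (in seconds) used to
	  determine when a task has become non-responsive and should be
	  considered hung. A timeout of 0 disables the check.
```

At run time the value remains overridable through /proc/sys/kernel/hung_task_timeout_secs, so distributions can ship the facility built in but effectively disabled.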
2010-08-17  lockup detector: Fix grammar by adding a missing "to" in the comments  (John Kacur, 1 file changed, -1/+1)

This fixes a minor grammar problem in the comments in hung_task.c

Signed-off-by: John Kacur <>
Cc: Peter Zijlstra <>
LKML-Reference: <>
Signed-off-by: Ingo Molnar <>
2010-08-17  lockdep: Remove __debug_show_held_locks  (John Kacur, 1 file changed, -1/+1)

There is no longer any functional difference between
__debug_show_held_locks() and debug_show_held_locks(), so remove the
former.

Signed-off-by: John Kacur <>
Cc: Peter Zijlstra <>
LKML-Reference: <>
Signed-off-by: Ingo Molnar <>
2009-11-27  softlockup: Fix hung_task_check_count sysctl  (Anton Blanchard, 1 file changed, -1/+1)

I'm seeing spikes of up to 0.5ms in khungtaskd on a large machine. To
reduce this source of jitter I tried setting hung_task_check_count to 0:

  # echo 0 > /proc/sys/kernel/hung_task_check_count

which didn't have the intended response. Change to a post-decrement of
max_count, so a value of 0 means "check 0 tasks".

Signed-off-by: Anton Blanchard <>
Acked-by: Frederic Weisbecker <>
Cc:
LKML-Reference: <20091127022820.GU32182@kryten>
Signed-off-by: Ingo Molnar <>
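The off-by-one is easy to reproduce in a userspace model of the scan budget. The function names are mine; the key detail carried over from the kernel is that `max_count` is unsigned, which is what makes the pre-decrement wrap:

```c
#include <assert.h>

/* Pre-decrement check (the bug): with max_count == 0, --max_count wraps
 * to ULONG_MAX, so the budget never runs out and every task is checked. */
static int tasks_checked_predecrement(unsigned long max_count, int ntasks)
{
    int checked = 0;
    for (int i = 0; i < ntasks; i++) {
        if (!--max_count)
            break;          /* budget exhausted before checking */
        checked++;
    }
    return checked;
}

/* Post-decrement check (the fix): the old value is tested, so 0 means
 * "check zero tasks" and N means "check exactly N tasks". */
static int tasks_checked_postdecrement(unsigned long max_count, int ntasks)
{
    int checked = 0;
    for (int i = 0; i < ntasks; i++) {
        if (!max_count--)
            break;
        checked++;
    }
    return checked;
}
```

With the pre-decrement form, 0 behaves like "unlimited" (and N checks only N-1 tasks); the post-decrement form makes the sysctl value mean exactly what the user asked for.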
2009-09-24  sysctl: remove "struct file *" argument of ->proc_handler  (Alexey Dobriyan, 1 file changed, -2/+2)

It's unused. It isn't needed -- read or write flag is already passed
and sysctl shouldn't care about the rest. It _was_ used in two places
at arch/frv for some reason.

Signed-off-by: Alexey Dobriyan <>
Cc: David Howells <>
Cc: "Eric W. Biederman" <>
Cc: Al Viro <>
Cc: Ralf Baechle <>
Cc: Martin Schwidefsky <>
Cc: Ingo Molnar <>
Cc: "David S. Miller" <>
Cc: James Morris <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
2009-02-11  softlockup: ensure the task has been switched out once  (Frederic Weisbecker, 1 file changed, -1/+7)

When we check if a task has been switched out since the last scan, we
might have a race condition in the following scenario:

- the task is freshly created and scheduled

- it puts its state to TASK_UNINTERRUPTIBLE and is not yet switched out

- check_hung_task() scans this task and will report a false positive
  because t->nvcsw + t->nivcsw == t->last_switch_count == 0

Add a check for such cases.

Signed-off-by: Frederic Weisbecker <>
Acked-by: Mandeep Singh Baines <>
Signed-off-by: Ingo Molnar <>
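The added guard amounts to a single early return: if the task has never been switched out, nvcsw + nivcsw is still 0 and equals the initial last_switch_count, so the comparison alone would misfire. A userspace model of the check, with a toy struct and illustrative names:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy task: voluntary/involuntary context-switch counters plus the
 * snapshot taken on the previous scan. */
struct toy_task {
    unsigned long nvcsw, nivcsw, last_switch_count;
};

/* Returns true if the task looks hung (switch count unchanged since
 * the previous scan). The !switch_count guard models the fix: a
 * freshly created task that has never been switched out must not be
 * reported. */
static bool check_hung_task(struct toy_task *t)
{
    unsigned long switch_count = t->nvcsw + t->nivcsw;

    if (!switch_count)                        /* never switched out yet */
        return false;
    if (switch_count != t->last_switch_count) {
        t->last_switch_count = switch_count;  /* still making progress */
        return false;
    }
    return true;                              /* same count twice in a row */
}
```

Seeing the same nonzero count on two consecutive scans means the task made no progress for a full timeout interval, which is exactly the hung-task condition.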
2009-02-09  softlockup: remove timestamp checking from hung_task  (Mandeep Singh Baines, 1 file changed, -39/+9)

Impact: saves sizeof(long) bytes per task_struct

By guaranteeing that sysctl_hung_task_timeout_secs has elapsed between
tasklist scans we can avoid using timestamps.

Signed-off-by: Mandeep Singh Baines <>
Signed-off-by: Ingo Molnar <>
2009-02-05  softlockup: convert read_lock in hung_task to rcu_read_lock  (Mandeep Singh Baines, 1 file changed, -2/+2)

Since the tasklist is protected by rcu list operations, it is safe to
convert the read_lock()s to rcu_read_lock().

Suggested-by: Peter Zijlstra <>
Signed-off-by: Mandeep Singh Baines <>
Signed-off-by: Ingo Molnar <>
2009-02-05  softlockup: check all tasks in hung_task  (Mandeep Singh Baines, 1 file changed, -2/+37)

Impact: extend the scope of hung-task checks

Changed the default value of hung_task_check_count to PID_MAX_LIMIT.
hung_task_batch_count was added to put an upper bound on the critical
section: every hung_task_batch_count checks, the RCU lock is dropped
and reacquired, so it is never held for too long. Keeping the critical
section small minimizes the time preemption is disabled and keeps RCU
grace periods short.

To prevent following a stale pointer, get_task_struct is called on g
and t. To verify that g and t have not been unhashed while outside the
critical section, the task states are checked.

The design was proposed by Frédéric Weisbecker.

Signed-off-by: Mandeep Singh Baines <>
Suggested-by: Frédéric Weisbecker <>
Acked-by: Andrew Morton <>
Signed-off-by: Ingo Molnar <>
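The batching idea — drop the lock every hung_task_batch_count items so that no single critical section spans the whole task list — can be modeled in userspace. Here the "lock" is just a pair of counters, and all names are illustrative:

```c
#include <assert.h>

static int lock_breaks;   /* how many times the scan paused */
static int max_held;      /* longest run of checks between breaks */

/* Scan n items, "breaking the lock" every batch items, mirroring the
 * rcu_lock_break() pattern: the critical section length is bounded by
 * batch no matter how long the list is. */
static void batched_scan(int n, int batch)
{
    int held = 0;

    lock_breaks = 0;
    max_held = 0;
    for (int i = 0; i < n; i++) {
        /* ...check one task here... */
        if (++held == batch) {
            lock_breaks++;   /* rcu_read_unlock(); rcu_read_lock(); */
            if (held > max_held)
                max_held = held;
            held = 0;
        }
    }
    if (held > max_held)
        max_held = held;
}
```

The bound on `max_held` is the whole point: raising the check count to PID_MAX_LIMIT becomes safe because latency is governed by the batch size, not the list length.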
2009-01-18  softlockup: fix potential race in hung_task when resetting timeout  (Mandeep Singh Baines, 1 file changed, -8/+16)

Impact: fix potential false panic

A potential race exists if sysctl_hung_task_timeout_secs is reset to 0
while inside check_hung_uninterruptible_tasks(). If check_task() is
entered, a comparison with 0 will result in a false hung_task being
detected. If sysctl_hung_task_panic is set, the system will panic.

Signed-off-by: Mandeep Singh Baines <>
Signed-off-by: Ingo Molnar <>
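The fix boils down to sampling the sysctl once per scan and passing the snapshot down, so a concurrent write of 0 cannot turn the comparison into a false positive mid-scan. A minimal userspace model of that pattern (function names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Writable at any time via /proc in the kernel; a plain global here. */
static unsigned long sysctl_hung_task_timeout_secs = 120;

/* The per-task check takes the timeout as a parameter instead of
 * re-reading the sysctl: 0 means "detector disabled", never "hung". */
static bool task_is_hung(unsigned long blocked_secs, unsigned long timeout)
{
    if (timeout == 0)
        return false;
    return blocked_secs >= timeout;
}

/* One scan samples the sysctl exactly once and uses the snapshot for
 * every task it examines. */
static bool scan_one_task(unsigned long blocked_secs)
{
    unsigned long timeout = sysctl_hung_task_timeout_secs; /* snapshot */

    return task_is_hung(blocked_secs, timeout);
}
```

Reading the tunable once per scan is the standard defense against this class of race: the decision logic only ever sees one consistent value.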
2009-01-16  softlockup: decouple hung tasks check from softlockup detection  (Mandeep Singh Baines, 1 file changed, -0/+198)

Decoupling allows:

* the hung tasks check to happen at very low priority
* the hung tasks check and softlockup to be enabled/disabled
  independently at compile and/or run-time
* individual panic settings to be enabled/disabled independently at
  compile and/or run-time
* the softlockup threshold to be reduced without increasing the hung
  tasks poll frequency (the hung task check is expensive relative to
  the softlockup watchdog)
* the hung task check to be zero overhead when disabled at run-time

Signed-off-by: Mandeep Singh Baines <>
Signed-off-by: Ingo Molnar <>