author    Ingo Molnar <mingo@elte.hu>    2008-07-12 07:29:02 +0200
committer Ingo Molnar <mingo@elte.hu>    2008-07-12 07:29:02 +0200
commit    ae94b8075a2ed58d2318ef03827b25bc844f844e (patch)
tree      7211f558d841cb40e1015d0b7527643ad55cb8e7
parent    eca91e7838ec92e8c12122849ec7539c4765b689 (diff)
parent    a26929fb489188ff959b1715ee67f0c9f84405b5 (diff)
Merge branch 'linus' into x86/core
Conflicts:
arch/x86/mm/ioremap.c
Signed-off-by: Ingo Molnar <mingo@elte.hu>
62 files changed, 1812 insertions, 190 deletions
diff --git a/Documentation/ftrace.txt b/Documentation/ftrace.txt
new file mode 100644
index 00000000000..13e4bf054c3
--- /dev/null
+++ b/Documentation/ftrace.txt
@@ -0,0 +1,1353 @@

		ftrace - Function Tracer
		========================

Copyright 2008 Red Hat Inc.
Author: Steven Rostedt <srostedt@redhat.com>


Introduction
------------

Ftrace is an internal tracer designed to help developers and
system designers find what is going on inside the kernel.
It can be used for debugging or analyzing latencies and performance
issues that take place outside of user-space.

Although ftrace is the function tracer, it also includes an
infrastructure that allows for other types of tracing. Among the
tracers currently in ftrace are tracers for context switches, for the
time it takes a high-priority task to run after it is woken up, for
the time interrupts are disabled, and more.


The File System
---------------

Ftrace uses the debugfs file system to hold the control files as well
as the files that display output.

To mount the debugfs system:

  # mkdir /debug
  # mount -t debugfs nodev /debug


That's it! (assuming that you have ftrace configured into your kernel)

After mounting the debugfs, you can see a directory called
"tracing". This directory contains the control and output files
of ftrace. Here is a list of some of the key files:


 Note: all time values are in microseconds.

  current_tracer : This is used to set or display the current tracer
		that is configured.

  available_tracers : This holds the different types of tracers that
		have been compiled into the kernel. The tracers
		listed here can be configured by echoing their
		names into current_tracer.

  tracing_enabled : This sets or displays whether the current_tracer
		is activated and tracing or not. Echo 0 into this
		file to disable the tracer, or 1 (or any non-zero
		value) to enable it.

  trace : This file holds the output of the trace in a human readable
		format.

  latency_trace : This file shows the same trace, but the information
		is organized to better display possible latencies
		in the system.

  trace_pipe : The output is the same as the "trace" file, but this
		file is meant to be streamed with live tracing.
		Reads from this file will block until new data
		is retrieved. Unlike the "trace" and "latency_trace"
		files, this file is a consumer. This means reading
		from this file causes sequential reads to display
		more current data. Once data is read from this
		file, it is consumed, and will not be read
		again with a sequential read. The "trace" and
		"latency_trace" files are static, and if the
		tracer isn't adding more data, they will display
		the same information every time they are read.

  iter_ctrl : This file lets the user control the amount of data
		that is displayed in one of the above output
		files.

  tracing_max_latency : Some of the tracers record the max latency.
		For example, the time interrupts are disabled.
		This time is saved in this file. The max trace
		will also be stored, and displayed by either
		"trace" or "latency_trace". A new max trace will
		only be recorded if the latency is greater than
		the value in this file (in microseconds).

  trace_entries : This sets or displays the number of trace
		entries each CPU buffer can hold. The tracer buffers
		are the same size for each CPU, so care must be
		taken when modifying the trace_entries. The total
		number of entries will be the number given
		times the number of possible CPUs. The buffers
		are saved as individual pages, and the actual number
		of entries will always be rounded up to fill whole
		pages.

		This can only be updated when the current_tracer
		is set to "none".

		NOTE: It is planned to change the allocated buffers
		from covering the number of possible CPUs to
		the number of online CPUs.

  tracing_cpumask : This is a mask that lets the user trace
		only on specified CPUs. The format is a hex string
		representing the CPUs.

  set_ftrace_filter : When dynamic ftrace is configured in, the
		code is dynamically modified to disable calling
		of the function profiler (mcount). This lets
		tracing be configured in with practically no overhead
		in performance. This also has a side effect of
		enabling or disabling specific functions to be
		traced. Echoing names of functions into this
		file will limit the trace to only those functions.

  set_ftrace_notrace : This has the opposite effect of
		set_ftrace_filter. Any function that is added
		here will not be traced. If a function exists
		in both set_ftrace_filter and set_ftrace_notrace,
		the function will _not_ be traced.

  available_filter_functions : When a function is encountered the first
		time by the dynamic tracer, it is recorded and
		later the call is converted into a nop. This file
		lists the functions that have been recorded
		by the dynamic tracer, and these functions can
		be used to set the ftrace filter via the above
		"set_ftrace_filter" file.


The Tracers
-----------

Here is the list of current tracers that may be configured.

  ftrace - function tracer that uses mcount to trace all functions.
		It is possible to filter which functions are
		traced when dynamic ftrace is configured in.

  sched_switch - traces the context switches between tasks.

  irqsoff - traces the areas that disable interrupts and saves
		the trace with the longest max latency.
		See tracing_max_latency. When a new max is recorded,
		it replaces the old trace. It is best to view this
		trace via the latency_trace file.

  preemptoff - Similar to irqsoff, but traces and records the time
		preemption is disabled.

  preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
		records the largest time for which irqs and/or
		preemption are disabled.

  wakeup - Traces and records the max latency that it takes for
		the highest priority task to get scheduled after
		it has been woken up.

  none - This is not a tracer. To remove all tracers from tracing,
		simply echo "none" into current_tracer.


Examples of using the tracer
----------------------------

Here are typical examples of using the tracers while controlling
them only with the debugfs interface (without using any user-land
utilities).

Output format:
--------------

Here is an example of the output format of the file "trace"

                             --------
# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
            bash-4251  [01] 10152.583854: path_put <-path_walk
            bash-4251  [01] 10152.583855: dput <-path_put
            bash-4251  [01] 10152.583855: _atomic_dec_and_lock <-dput
                             --------

A header is printed with the tracer that produced the trace. In this
case the tracer is "ftrace". Then follows a header showing the format:
the task name "bash", the task PID "4251", the CPU that it was running
on "01", the timestamp in <secs>.<usecs> format, the function name that
was traced "path_put" and the parent function that called this function
"path_walk".
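
Because the columns are fixed, trace lines are easy to post-process
outside the kernel. The following is a minimal, illustrative C sketch
(not part of the original document) that splits one "trace" line into
its fields; the scanf format is only an assumption based on the layout
shown above, and a task name containing a '-' would confuse it.

#include <stdio.h>

/*
 * Parse one line of "trace" output of the form:
 *   bash-4251  [01] 10152.583854: path_put <-path_walk
 * Illustrative only: assumes the documented layout and does no
 * validation beyond what sscanf provides. Task names containing
 * a '-' (rare, but possible) would be split incorrectly.
 */
static int parse_trace_line(const char *line)
{
	char comm[64], func[128], parent[128];
	int pid, cpu;
	unsigned long secs, usecs;

	/* comm and pid are joined by '-', the timestamp by '.' */
	if (sscanf(line, " %63[^-]-%d [%d] %lu.%lu: %127s <-%127s",
		   comm, &pid, &cpu, &secs, &usecs, func, parent) != 7)
		return -1;

	printf("%s (pid %d) on cpu %d at %lu.%06lu: %s called from %s\n",
	       comm, pid, cpu, secs, usecs, func, parent);
	return 0;
}

int main(void)
{
	const char *example =
		"            bash-4251  [01] 10152.583854: path_put <-path_walk";
	return parse_trace_line(example) ? 1 : 0;
}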

The sched_switch tracer also includes tracing of task wake ups and
context switches.

  ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +  2916:115:S
  ksoftirqd/1-7     [01]  1453.070013:      7:115:R   +    10:115:S
  ksoftirqd/1-7     [01]  1453.070013:      7:115:R ==>    10:115:R
     events/1-10    [01]  1453.070013:     10:115:S ==>  2916:115:R
  kondemand/1-2916  [01]  1453.070013:   2916:115:S ==>     7:115:R
  ksoftirqd/1-7     [01]  1453.070013:      7:115:S ==>     0:140:R

Wake ups are represented by a "+" and the context switches are shown
as "==>". The format is:

 Context switches:

       Previous task              Next Task

  <pid>:<prio>:<state>  ==>  <pid>:<prio>:<state>

 Wake ups:

       Current task               Task waking up

  <pid>:<prio>:<state>    +  <pid>:<prio>:<state>

The prio is the internal kernel priority, which is the inverse of the
priority that is usually displayed by user-space tools. Zero represents
the highest priority (99). Prio 100 starts the "nice" priorities, with
100 being equal to nice -20 and 139 being nice 19. The prio "140" is
reserved for the idle task, which is the lowest priority thread (pid 0).


Latency trace format
--------------------

For traces that display latency times, the latency_trace file gives
a bit more information to see why a latency happened. Here is a typical
trace.

# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: apic_timer_interrupt
 => ended at:   do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
  <idle>-0     0d..1    0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
  <idle>-0     0d.s.   97us : __do_softirq (do_softirq)
  <idle>-0     0d.s1   98us : trace_hardirqs_on (do_softirq)


vim:ft=help


This shows that the current tracer is "irqsoff", tracing the time
interrupts are disabled. It gives the trace version and the kernel
this was executed on (2.6.26-rc8). Then it displays the max latency
in microseconds (97 us), the number of trace entries displayed and
the total number recorded (both are three: #3/3), and the type of
preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero
and reserved for later use. #P is the number of online CPUs (#P:2).

The task is the process that was running when the latency happened
(swapper, pid 0).

The start and stop points that bound the latency:

  apic_timer_interrupt is where the interrupts were disabled.
  do_softirq is where they were enabled again.

The lines after the header are the trace itself. The header
explains which field is which.

  cmd: The name of the process in the trace.

  pid: The PID of that process.

  CPU#: The CPU that the process was running on.

  irqs-off: 'd' interrupts are disabled, '.' otherwise.

  need-resched: 'N' task need_resched is set, '.' otherwise.

  hardirq/softirq:
	'H' - hard irq happened inside a softirq.
	'h' - hard irq is running
	's' - soft irq is running
	'.' - normal context.

  preempt-depth: The level of preempt_disable nesting.

The above is mostly meaningful for kernel developers.

  time: This differs from the "trace" file output. Whereas the "trace"
	output contains an absolute timestamp, this timestamp is
	relative to the start of the first entry in the trace.

  delay: This is just to help catch your eye a bit better (and still
	needs to be fixed to be relative only to the same CPU).
	The mark is determined by the difference between the current
	trace entry and the next one.
	 '!' - greater than preempt_mark_thresh (default 100)
	 '+' - greater than 1 microsecond
	 ' ' - less than or equal to 1 microsecond.

  The rest is the same as the 'trace' file.


iter_ctrl
---------

The iter_ctrl file is used to control what gets printed in the trace
output. To see what is available, simply cat the file:

  cat /debug/tracing/iter_ctrl
  print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
  noblock nostacktrace nosched-tree

To disable one of the options, echo in the option prepended with "no".

  echo noprint-parent > /debug/tracing/iter_ctrl

To enable an option, leave off the "no".

  echo sym-offset > /debug/tracing/iter_ctrl

Here are the available options:

  print-parent - On function traces, display the calling function
		as well as the function being traced.

  print-parent:
   bash-4000  [01]  1477.606694: simple_strtoul <-strict_strtoul

  noprint-parent:
   bash-4000  [01]  1477.606694: simple_strtoul


  sym-offset - Display not only the function name, but also the offset
		within the function. For example, instead of seeing just
		"ktime_get" you will see "ktime_get+0xb/0x20".

  sym-offset:
   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0

  sym-addr - This will also display the function address as well as
		the function name.

  sym-addr:
   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>

  verbose - This deals with the latency_trace file.

    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
    (+0.000ms): simple_strtoul (strict_strtoul)

  raw - This will display raw numbers. This option is best for use with
		user applications that can translate the raw numbers
		better than having it done in the kernel.

  hex - Similar to raw, but the numbers will be in a hexadecimal format.

  bin - This will print out the trace in raw binary format.

  block - TBD (needs update)

  stacktrace - This is one of the options that changes the trace itself.
		When a trace is recorded, so is the stack of functions.
		This allows for back traces of trace sites.

  sched-tree - TBD (any users??)


sched_switch
------------

This tracer simply records schedule switches. Here is an example
of how to use it.

  # echo sched_switch > /debug/tracing/current_tracer
  # echo 1 > /debug/tracing/tracing_enabled
  # sleep 1
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/trace

# tracer: sched_switch
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
            bash-3997  [01]   240.132281:   3997:120:R   +  4055:120:R
            bash-3997  [01]   240.132284:   3997:120:R ==>  4055:120:R
           sleep-4055  [01]   240.132371:   4055:120:S ==>  3997:120:R
            bash-3997  [01]   240.132454:   3997:120:R   +  4055:120:S
            bash-3997  [01]   240.132457:   3997:120:R ==>  4055:120:R
           sleep-4055  [01]   240.132460:   4055:120:D ==>  3997:120:R
            bash-3997  [01]   240.132463:   3997:120:R   +  4055:120:D
            bash-3997  [01]   240.132465:   3997:120:R ==>  4055:120:R
          <idle>-0     [00]   240.132589:      0:140:R   +     4:115:S
          <idle>-0     [00]   240.132591:      0:140:R ==>     4:115:R
     ksoftirqd/0-4     [00]   240.132595:      4:115:S ==>     0:140:R
          <idle>-0     [00]   240.132598:      0:140:R   +     4:115:S
          <idle>-0     [00]   240.132599:      0:140:R ==>     4:115:R
     ksoftirqd/0-4     [00]   240.132603:      4:115:S ==>     0:140:R
           sleep-4055  [01]   240.133058:   4055:120:S ==>  3997:120:R
  [...]


As discussed previously, the header shows the name of the tracer and
labels the columns. The "FUNCTION" label is a misnomer, since here
the column represents the wake ups and context switches.

The sched_switch tracer lists only the wake ups (represented with '+')
and context switches ('==>'), with the previous (or current) task
first, followed by the next task (or the task waking up). The format
for both of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the
KERNEL-PRIO is the inverse of the actual priority, with zero (0) being
the highest priority and the nice values starting at 100 (nice -20).
Below is a quick chart to map kernel priorities to user-land
priorities.

  Kernel priority: 0 to 99    ==> user RT priority 99 to 0
  Kernel priority: 100 to 139 ==> user nice -20 to 19
  Kernel priority: 140        ==> idle task priority

The task states are:

  R - running : wants to run, may not actually be running
  S - sleep : process is waiting to be woken up (handles signals)
  D - deep sleep : process must be woken up (ignores signals)
  T - stopped : process suspended
  t - traced : process is being traced (with something like gdb)
  Z - zombie : process waiting to be cleaned up
  X - unknown


ftrace_enabled
--------------

The following tracers give different output depending on whether
or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
one can either use the sysctl command or set it via the proc
file system interface.

  sysctl kernel.ftrace_enabled=1

 or

  echo 1 > /proc/sys/kernel/ftrace_enabled

To disable ftrace_enabled, simply replace the '1' with '0' in
the above commands.

When ftrace_enabled is set, the tracers will also record the functions
called within the trace. The descriptions of the tracers
will also show an example with ftrace enabled.


irqsoff
-------

When interrupts are disabled, the CPU can not react to any other
external event (besides NMIs and SMIs). This prevents the timer
interrupt from triggering or the mouse interrupt from letting the
kernel know of a new mouse event. The result is added latency in
reaction time.

The irqsoff tracer tracks the time interrupts are disabled and when
they are re-enabled. When a new maximum latency is hit, it saves
the trace so that it may be retrieved at a later time. Every time a
new maximum is reached, the old saved trace is discarded and the new
trace is saved.

To reset the maximum, echo 0 into tracing_max_latency. Here is an
example:

  # echo irqsoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
  # echo 1 > /debug/tracing/tracing_enabled
  # ls -ltr
  [...]
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/latency_trace
# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 6 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: copy_page_range
 => ended at:   copy_page_range

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    bash-4269  1...1    0us+: _spin_lock (copy_page_range)
    bash-4269  1...1    7us : _spin_unlock (copy_page_range)
    bash-4269  1...2    7us : trace_preempt_on (copy_page_range)


vim:ft=help

Here we see that we had a latency of 6 microseconds (which is
very good). The spin_lock in copy_page_range disabled interrupts.
The difference between the 6 and the displayed timestamp 7us is
because the clock must have incremented between the time of recording
the max latency and the time of recording the function that had that
latency.

Note that the above trace was taken with ftrace_enabled not set. If
we set ftrace_enabled, we get a much larger output:

# tracer: irqsoff
#
irqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: __alloc_pages_internal
 => ended at:   __alloc_pages_internal

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
      ls-4339  0...1    0us+: get_page_from_freelist (__alloc_pages_internal)
      ls-4339  0d..1    3us : rmqueue_bulk (get_page_from_freelist)
      ls-4339  0d..1    3us : _spin_lock (rmqueue_bulk)
      ls-4339  0d..1    4us : add_preempt_count (_spin_lock)
      ls-4339  0d..2    4us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    5us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2    5us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2    6us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    6us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2    7us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2    7us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2    8us : __rmqueue_smallest (__rmqueue)
[...]
      ls-4339  0d..2   46us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2   47us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2   47us : __rmqueue (rmqueue_bulk)
      ls-4339  0d..2   48us : __rmqueue_smallest (__rmqueue)
      ls-4339  0d..2   48us : __mod_zone_page_state (__rmqueue_smallest)
      ls-4339  0d..2   49us : _spin_unlock (rmqueue_bulk)
      ls-4339  0d..2   49us : sub_preempt_count (_spin_unlock)
      ls-4339  0d..1   50us : get_page_from_freelist (__alloc_pages_internal)
      ls-4339  0d..2   51us : trace_hardirqs_on (__alloc_pages_internal)


vim:ft=help


Here we traced a 50 microsecond latency, but we also see all the
functions that were called during that time. Note that by enabling
function tracing, we incur added overhead. This overhead may
extend the latency times. Nevertheless, this trace has provided
some very helpful debugging information.
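
The shell sequence above can also be driven from a small program,
which avoids polluting the buffer with the trace entries generated
by the echo commands themselves. The following is a minimal sketch,
assuming debugfs is mounted at /debug as in the examples; the
run_workload() hook is a hypothetical placeholder for whatever you
want to measure.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a small string to a tracing control file under /debug/tracing. */
static int trace_write(const char *file, const char *val)
{
	char path[256];
	int fd;

	snprintf(path, sizeof(path), "/debug/tracing/%s", file);
	fd = open(path, O_WRONLY);
	if (fd < 0)
		return -1;
	write(fd, val, strlen(val));
	close(fd);
	return 0;
}

/* Hypothetical placeholder for the code being measured. */
static void run_workload(void)
{
	usleep(1000);
}

int main(void)
{
	trace_write("current_tracer", "irqsoff");
	trace_write("tracing_max_latency", "0");	/* reset the recorded max */
	trace_write("tracing_enabled", "1");
	run_workload();
	trace_write("tracing_enabled", "0");
	/* The result can now be read from /debug/tracing/latency_trace. */
	return 0;
}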


preemptoff
----------

When preemption is disabled, we may be able to receive interrupts, but
the task can not be preempted; a higher priority task must wait
for preemption to be enabled again before it can preempt a lower
priority task.

The preemptoff tracer traces the places that disable preemption.
Like the irqsoff tracer, it records the maximum latency for which
preemption was disabled. The control of the preemptoff tracer is
much like that of irqsoff.

  # echo preemptoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
  # echo 1 > /debug/tracing/tracing_enabled
  # ls -ltr
  [...]
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/latency_trace
# tracer: preemptoff
#
preemptoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: do_IRQ
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    sshd-4261  0d.h.    0us+: irq_enter (do_IRQ)
    sshd-4261  0d.s.   29us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s1   30us : trace_preempt_on (__do_softirq)


vim:ft=help

This has some more changes. Preemption was disabled when an interrupt
came in (notice the 'h'), and was enabled again while doing a softirq
(notice the 's'). But we also see that interrupts were disabled when
entering the preempt-off section and when leaving it (the 'd').
We do not know if interrupts were enabled in the meantime.

# tracer: preemptoff
#
preemptoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: remove_wait_queue
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
    sshd-4261  0d..1    0us : _spin_lock_irqsave (remove_wait_queue)
    sshd-4261  0d..1    1us : _spin_unlock_irqrestore (remove_wait_queue)
    sshd-4261  0d..1    2us : do_IRQ (common_interrupt)
    sshd-4261  0d..1    2us : irq_enter (do_IRQ)
    sshd-4261  0d..1    2us : idle_cpu (irq_enter)
    sshd-4261  0d..1    3us : add_preempt_count (irq_enter)
    sshd-4261  0d.h1    3us : idle_cpu (irq_enter)
    sshd-4261  0d.h.    4us : handle_fasteoi_irq (do_IRQ)
[...]
    sshd-4261  0d.h.   12us : add_preempt_count (_spin_lock)
    sshd-4261  0d.h1   12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
    sshd-4261  0d.h1   13us : move_native_irq (ack_ioapic_quirk_irq)
    sshd-4261  0d.h1   13us : _spin_unlock (handle_fasteoi_irq)
    sshd-4261  0d.h1   14us : sub_preempt_count (_spin_unlock)
    sshd-4261  0d.h1   14us : irq_exit (do_IRQ)
    sshd-4261  0d.h1   15us : sub_preempt_count (irq_exit)
    sshd-4261  0d..2   15us : do_softirq (irq_exit)
    sshd-4261  0d...   15us : __do_softirq (do_softirq)
    sshd-4261  0d...   16us : __local_bh_disable (__do_softirq)
    sshd-4261  0d...   16us+: add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s4   20us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s4   21us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s5   21us : sub_preempt_count (local_bh_enable)
[...]
    sshd-4261  0d.s6   41us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s6   42us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s7   42us : sub_preempt_count (local_bh_enable)
    sshd-4261  0d.s5   43us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s5   43us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s6   44us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s5   44us : add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s5   45us : sub_preempt_count (local_bh_enable)
[...]
    sshd-4261  0d.s.   63us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s1   64us : trace_preempt_on (__do_softirq)


The above is an example of the preemptoff trace with ftrace_enabled
set. Here we see that interrupts were disabled the entire time.
The irq_enter code lets us know that we entered an interrupt ('h').
Before that, the functions being traced still show that it is not
in an interrupt, but we can see from the functions themselves that
this is not the case.

Notice that __do_softirq, when called, does not have a preempt_count.
It may seem that we missed a preempt enable. What really happened
is that the preempt count is held on the thread's stack and we
switched to the softirq stack (4K stacks in effect). The code
does not copy the preempt count, but because interrupts are disabled,
we don't need to worry about it. Having a tracer like this is good
for letting people know what really happens inside the kernel.


preemptirqsoff
--------------

Knowing the locations that have interrupts disabled or preemption
disabled for the longest times is helpful. But sometimes we would
like to know the total time for which either preemption or interrupts,
or both, are disabled.

Consider the following code:

    local_irq_disable();
    call_function_with_irqs_off();
    preempt_disable();
    call_function_with_irqs_and_preemption_off();
    local_irq_enable();
    call_function_with_preemption_off();
    preempt_enable();

The irqsoff tracer will record the total length of
call_function_with_irqs_off() and
call_function_with_irqs_and_preemption_off().

The preemptoff tracer will record the total length of
call_function_with_irqs_and_preemption_off() and
call_function_with_preemption_off().

But neither will trace the whole time that interrupts and/or
preemption are disabled. This total time is the time during which we
can not schedule. To record this time, use the preemptirqsoff tracer.

Again, using this tracer is much like using the irqsoff and preemptoff
tracers.

  # echo preemptirqsoff > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
  # echo 1 > /debug/tracing/tracing_enabled
  # ls -ltr
  [...]
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/latency_trace
# tracer: preemptirqsoff
#
preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: apic_timer_interrupt
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
      ls-4860  0d...    0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
      ls-4860  0d.s.  294us : _local_bh_enable (__do_softirq)
      ls-4860  0d.s1  294us : trace_preempt_on (__do_softirq)


vim:ft=help


The trace_hardirqs_off_thunk is called from assembly on x86 when
interrupts are disabled in the assembly code. Without the function
tracing, we don't know if interrupts were enabled within the
preemption points. We do see that the trace started with preemption
enabled.

Here is a trace with ftrace_enabled set:


# tracer: preemptirqsoff
#
preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
    -----------------
 => started at: write_chan
 => ended at:   __do_softirq

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
      ls-4473  0.N..    0us : preempt_schedule (write_chan)
      ls-4473  0dN.1    1us : _spin_lock (schedule)
      ls-4473  0dN.1    2us : add_preempt_count (_spin_lock)
      ls-4473  0d..2    2us : put_prev_task_fair (schedule)
[...]
      ls-4473  0d..2   13us : set_normalized_timespec (ktime_get_ts)
      ls-4473  0d..2   13us : __switch_to (schedule)
    sshd-4261  0d..2   14us : finish_task_switch (schedule)
    sshd-4261  0d..2   14us : _spin_unlock_irq (finish_task_switch)
    sshd-4261  0d..1   15us : add_preempt_count (_spin_lock_irqsave)
    sshd-4261  0d..2   16us : _spin_unlock_irqrestore (hrtick_set)
    sshd-4261  0d..2   16us : do_IRQ (common_interrupt)
    sshd-4261  0d..2   17us : irq_enter (do_IRQ)
    sshd-4261  0d..2   17us : idle_cpu (irq_enter)
    sshd-4261  0d..2   18us : add_preempt_count (irq_enter)
    sshd-4261  0d.h2   18us : idle_cpu (irq_enter)
    sshd-4261  0d.h.   18us : handle_fasteoi_irq (do_IRQ)
    sshd-4261  0d.h.   19us : _spin_lock (handle_fasteoi_irq)
    sshd-4261  0d.h.   19us : add_preempt_count (_spin_lock)
    sshd-4261  0d.h1   20us : _spin_unlock (handle_fasteoi_irq)
    sshd-4261  0d.h1   20us : sub_preempt_count (_spin_unlock)
[...]
    sshd-4261  0d.h1   28us : _spin_unlock (handle_fasteoi_irq)
    sshd-4261  0d.h1   29us : sub_preempt_count (_spin_unlock)
    sshd-4261  0d.h2   29us : irq_exit (do_IRQ)
    sshd-4261  0d.h2   29us : sub_preempt_count (irq_exit)
    sshd-4261  0d..3   30us : do_softirq (irq_exit)
    sshd-4261  0d...   30us : __do_softirq (do_softirq)
    sshd-4261  0d...   31us : __local_bh_disable (__do_softirq)
    sshd-4261  0d...   31us+: add_preempt_count (__local_bh_disable)
    sshd-4261  0d.s4   34us : add_preempt_count (__local_bh_disable)
[...]
    sshd-4261  0d.s3   43us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s4   44us : sub_preempt_count (local_bh_enable_ip)
    sshd-4261  0d.s3   44us : smp_apic_timer_interrupt (apic_timer_interrupt)
    sshd-4261  0d.s3   45us : irq_enter (smp_apic_timer_interrupt)
    sshd-4261  0d.s3   45us : idle_cpu (irq_enter)
    sshd-4261  0d.s3   46us : add_preempt_count (irq_enter)
    sshd-4261  0d.H3   46us : idle_cpu (irq_enter)
    sshd-4261  0d.H3   47us : hrtimer_interrupt (smp_apic_timer_interrupt)
    sshd-4261  0d.H3   47us : ktime_get (hrtimer_interrupt)
[...]
    sshd-4261  0d.H3   81us : tick_program_event (hrtimer_interrupt)
    sshd-4261  0d.H3   82us : ktime_get (tick_program_event)
    sshd-4261  0d.H3   82us : ktime_get_ts (ktime_get)
    sshd-4261  0d.H3   83us : getnstimeofday (ktime_get_ts)
    sshd-4261  0d.H3   83us : set_normalized_timespec (ktime_get_ts)
    sshd-4261  0d.H3   84us : clockevents_program_event (tick_program_event)
    sshd-4261  0d.H3   84us : lapic_next_event (clockevents_program_event)
    sshd-4261  0d.H3   85us : irq_exit (smp_apic_timer_interrupt)
    sshd-4261  0d.H3   85us : sub_preempt_count (irq_exit)
    sshd-4261  0d.s4   86us : sub_preempt_count (irq_exit)
    sshd-4261  0d.s3   86us : add_preempt_count (__local_bh_disable)
[...]
    sshd-4261  0d.s1   98us : sub_preempt_count (net_rx_action)
    sshd-4261  0d.s.   99us : add_preempt_count (_spin_lock_irq)
    sshd-4261  0d.s1   99us+: _spin_unlock_irq (run_timer_softirq)
    sshd-4261  0d.s.  104us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s.  104us : sub_preempt_count (_local_bh_enable)
    sshd-4261  0d.s.  105us : _local_bh_enable (__do_softirq)
    sshd-4261  0d.s1  105us : trace_preempt_on (__do_softirq)


This is a very interesting trace. It started with the preemption of
the ls task. We see that the task had the "need_resched" bit set,
via the 'N' in the trace. Interrupts were disabled in the spin_lock,
and the trace started. We see that a schedule took place to run
sshd. When the interrupts were enabled, we took an interrupt.
On return from the interrupt, the softirq ran. We took another
interrupt while running the softirq, as we can see from the capital
'H'.


wakeup
------

In a real-time environment it is very important to know the time it
takes from when the highest priority task is woken up to when it
executes. This is also known as "scheduling latency".
I stress the point that this is about RT tasks. It is also important
to know the scheduling latency of non-RT tasks, but for non-RT tasks
the average scheduling latency is the more meaningful number, and
tools like LatencyTop are more appropriate for such measurements.

Real-time environments are interested in the worst-case latency:
the longest latency it takes for something to happen, not
the average. We can have a very fast scheduler that may only
have a large latency once in a while, but that would not work well
with real-time tasks. The wakeup tracer was designed to record
the worst-case wakeups of RT tasks. Non-RT tasks are not recorded,
because the tracer only records one worst case, and tracing non-RT
tasks that are unpredictable will overwrite the worst-case latency
of RT tasks.

Since this tracer only deals with RT tasks, we will run it slightly
differently from the previous tracers. Instead of performing an 'ls',
we will run 'sleep 1' under 'chrt', which changes the priority of
the task; a sketch of what chrt does follows.
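
For reference, the effect of 'chrt -f 5' can be reproduced with the
sched_setscheduler() system call. The following is only an
illustrative sketch of what chrt does before exec'ing the target
command; it is not part of the original document's examples.

#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Roughly what "chrt -f 5 sleep 1" does: switch the current task
 * to SCHED_FIFO priority 5 (an RT policy), then exec the command.
 */
int main(void)
{
	struct sched_param sp = { .sched_priority = 5 };

	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return 1;
	}
	execlp("sleep", "sleep", "1", (char *)NULL);
	perror("execlp");	/* only reached if exec fails */
	return 1;
}

With the priority raised this way, the wakeup tracer session proceeds
as follows.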

  # echo wakeup > /debug/tracing/current_tracer
  # echo 0 > /debug/tracing/tracing_max_latency
  # echo 1 > /debug/tracing/tracing_enabled
  # chrt -f 5 sleep 1
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/latency_trace
# tracer: wakeup
#
wakeup latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
    -----------------

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
  <idle>-0     1d.h4    0us+: try_to_wake_up (wake_up_process)
  <idle>-0     1d..4    4us : schedule (cpu_idle)


vim:ft=help


Running this on an idle system, we see that it only took 4 microseconds
to perform the task switch. Note: since the trace marker in
schedule() is before the actual "switch", we stop the tracing when
the recorded task is about to schedule in. This may change if
we add a new marker at the end of the scheduler.

Notice that the recorded task is 'sleep' with a PID of 4901 and it
has an rt_prio of 5. This priority is the user-space priority and not
the internal kernel priority. The policy is 1 for SCHED_FIFO and 2
for SCHED_RR.

Doing the same with 'chrt -r 5' and ftrace_enabled set:

# tracer: wakeup
#
wakeup latency trace v1.1.5 on 2.6.26-rc8
--------------------------------------------------------------------
 latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
    -----------------
    | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
    -----------------

#                _------=> CPU#
#               / _-----=> irqs-off
#              | / _----=> need-resched
#              || / _---=> hardirq/softirq
#              ||| / _--=> preempt-depth
#              |||| /
#              |||||     delay
#  cmd     pid ||||| time  |   caller
#     \   /    |||||   \   |   /
ksoftirq-7     1d.H3    0us : try_to_wake_up (wake_up_process)
ksoftirq-7     1d.H4    1us : sub_preempt_count (marker_probe_cb)
ksoftirq-7     1d.H3    2us : check_preempt_wakeup (try_to_wake_up)
ksoftirq-7     1d.H3    3us : update_curr (check_preempt_wakeup)
ksoftirq-7     1d.H3    4us : calc_delta_mine (update_curr)
ksoftirq-7     1d.H3    5us : __resched_task (check_preempt_wakeup)
ksoftirq-7     1d.H3    6us : task_wake_up_rt (try_to_wake_up)
ksoftirq-7     1d.H3    7us : _spin_unlock_irqrestore (try_to_wake_up)
[...]
ksoftirq-7     1d.H2   17us : irq_exit (smp_apic_timer_interrupt)
ksoftirq-7     1d.H2   18us : sub_preempt_count (irq_exit)
ksoftirq-7     1d.s3   19us : sub_preempt_count (irq_exit)
ksoftirq-7     1..s2   20us : rcu_process_callbacks (__do_softirq)
[...]
ksoftirq-7     1..s2   26us : __rcu_process_callbacks (rcu_process_callbacks)
ksoftirq-7     1d.s2   27us : _local_bh_enable (__do_softirq)
ksoftirq-7     1d.s2   28us : sub_preempt_count (_local_bh_enable)
ksoftirq-7     1.N.3   29us : sub_preempt_count (ksoftirqd)
ksoftirq-7     1.N.2   30us : _cond_resched (ksoftirqd)
ksoftirq-7     1.N.2   31us : __cond_resched (_cond_resched)
ksoftirq-7     1.N.2   32us : add_preempt_count (__cond_resched)
ksoftirq-7     1.N.2   33us : schedule (__cond_resched)
ksoftirq-7     1.N.2   33us : add_preempt_count (schedule)
ksoftirq-7     1.N.3   34us : hrtick_clear (schedule)
ksoftirq-7     1dN.3   35us : _spin_lock (schedule)
ksoftirq-7     1dN.3   36us : add_preempt_count (_spin_lock)
ksoftirq-7     1d..4   37us : put_prev_task_fair (schedule)
ksoftirq-7     1d..4   38us : update_curr (put_prev_task_fair)
[...]
ksoftirq-7     1d..5   47us : _spin_trylock (tracing_record_cmdline)
ksoftirq-7     1d..5   48us : add_preempt_count (_spin_trylock)
ksoftirq-7     1d..6   49us : _spin_unlock (tracing_record_cmdline)
ksoftirq-7     1d..6   49us : sub_preempt_count (_spin_unlock)
ksoftirq-7     1d..4   50us : schedule (__cond_resched)

The interrupt went off while ksoftirqd was running. This task runs
at SCHED_OTHER. Why didn't we see the 'N' set early? This may be
a harmless bug with x86_32 and 4K stacks: the need_resched() function
that tests whether we need to reschedule looks at the current stack,
whereas the setting of the NEED_RESCHED bit happens on the
task's stack. But because we are in a hard interrupt, the test
is done against the interrupt stack, where the bit appears to be
false. We don't see the 'N' until we switch back to the task's stack.

ftrace
------

ftrace is not only the name of the tracing infrastructure, but it is
also the name of one of the tracers: the function tracer. Enabling
the function tracer can be done from the debugfs file system. Make
sure ftrace_enabled is set; otherwise this tracer is a nop.

  # sysctl kernel.ftrace_enabled=1
  # echo ftrace > /debug/tracing/current_tracer
  # echo 1 > /debug/tracing/tracing_enabled
  # usleep 1
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/trace
# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
            bash-4003  [00]   123.638713: finish_task_switch <-schedule
            bash-4003  [00]   123.638714: _spin_unlock_irq <-finish_task_switch
            bash-4003  [00]   123.638714: sub_preempt_count <-_spin_unlock_irq
            bash-4003  [00]   123.638715: hrtick_set <-schedule
            bash-4003  [00]   123.638715: _spin_lock_irqsave <-hrtick_set
            bash-4003  [00]   123.638716: add_preempt_count <-_spin_lock_irqsave
            bash-4003  [00]   123.638716: _spin_unlock_irqrestore <-hrtick_set
            bash-4003  [00]   123.638717: sub_preempt_count <-_spin_unlock_irqrestore
            bash-4003  [00]   123.638717: hrtick_clear <-hrtick_set
            bash-4003  [00]   123.638718: sub_preempt_count <-schedule
            bash-4003  [00]   123.638718: sub_preempt_count <-preempt_schedule
            bash-4003  [00]   123.638719: wait_for_completion <-__stop_machine_run
            bash-4003  [00]   123.638719: wait_for_common <-wait_for_completion
            bash-4003  [00]   123.638720: _spin_lock_irq <-wait_for_common
            bash-4003  [00]   123.638720: add_preempt_count <-_spin_lock_irq
[...]


Note: It is sometimes better to enable or disable tracing directly from
a program, because the buffer may be overflowed by the echo commands
before you get to the point you want to trace. It is also easier to
stop the tracing at the point that you hit the part that you are
interested in. Since the ftrace buffer is a ring buffer with the
oldest data being overwritten, it is usually sufficient to start the
tracer with an echo command but have your code stop it. Something
like the following is usually appropriate for this.

#include <fcntl.h>	/* open(), O_WRONLY */
#include <unistd.h>	/* write() */

int trace_fd;
[...]
int main(int argc, char *argv[]) {
	[...]
	trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);
	[...]
	if (condition_hit()) {
		write(trace_fd, "0", 1);
	}
	[...]
}


dynamic ftrace
--------------

If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc) starts
off pointing to a simple return.

When dynamic ftrace is initialized, it calls kstop_machine to make
the machine act like a uniprocessor so that it can freely modify code
without worrying about other processors executing that same code. At
initialization, the mcount calls are changed to call a "record_ip"
function. After this, the first time a kernel function is called,
it has its calling address saved in a hash table.

Later on, the ftraced kernel thread is awoken and will again call
kstop_machine if new functions have been recorded. The ftraced thread
will change all calls to mcount into "nop"s. Just calling mcount
and having mcount return has shown a 10% overhead. By converting
it to a nop, there is no measurable overhead to the system.

One special side effect of recording the functions being traced
is that we can now selectively choose which functions we
want to trace and which ones we want the mcount calls to remain as
nops.

Two files control the enabling and disabling of recorded
functions:

  set_ftrace_filter

and

  set_ftrace_notrace

A list of available functions that you can add to these files is
listed in:

  available_filter_functions

  # cat /debug/tracing/available_filter_functions
put_prev_task_idle
kmem_cache_create
pick_next_task_rt
get_online_cpus
pick_next_task_fair
mutex_lock
[...]

If I'm only interested in sys_nanosleep and hrtimer_interrupt:

  # echo sys_nanosleep hrtimer_interrupt \
		> /debug/tracing/set_ftrace_filter
  # echo ftrace > /debug/tracing/current_tracer
  # echo 1 > /debug/tracing/tracing_enabled
  # usleep 1
  # echo 0 > /debug/tracing/tracing_enabled
  # cat /debug/tracing/trace
# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
          usleep-4134  [00]  1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
          usleep-4134  [00]  1317.070111: sys_nanosleep <-syscall_call
          <idle>-0     [00]  1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt

To see which functions are being traced, you can cat the file:

  # cat /debug/tracing/set_ftrace_filter
hrtimer_interrupt
sys_nanosleep


Perhaps this isn't enough. The filters also allow simple wild cards.
Only the following are currently available:

  <match>*  - will match functions that begin with <match>
  *<match>  - will match functions that end with <match>
  *<match>* - will match functions that have <match> in them

Those are all the wild cards that are allowed.

  <match>*<match> will not work.

  # echo hrtimer_* > /debug/tracing/set_ftrace_filter

Produces:

# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
            bash-4003  [00]  1480.611794: hrtimer_init <-copy_process
            bash-4003  [00]  1480.611941: hrtimer_start <-hrtick_set
            bash-4003  [00]  1480.611956: hrtimer_cancel <-hrtick_clear
            bash-4003  [00]  1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
          <idle>-0     [00]  1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
          <idle>-0     [00]  1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
          <idle>-0     [00]  1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
          <idle>-0     [00]  1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
          <idle>-0     [00]  1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt


Notice that we lost the sys_nanosleep.

  # cat /debug/tracing/set_ftrace_filter
hrtimer_run_queues
hrtimer_run_pending
hrtimer_init
hrtimer_cancel
hrtimer_try_to_cancel
hrtimer_forward
hrtimer_start
hrtimer_reprogram
hrtimer_force_reprogram
hrtimer_get_next_event
hrtimer_interrupt
hrtimer_nanosleep
hrtimer_wakeup
hrtimer_get_remaining
hrtimer_get_res
hrtimer_init_sleeper


This is because the '>' and '>>' act just like they do in bash.
To rewrite the filters, use '>'.
To append to the filters, use '>>'.

To clear out a filter so that all functions will be recorded again:

  # echo > /debug/tracing/set_ftrace_filter
  # cat /debug/tracing/set_ftrace_filter
  #

Again, now we want to append.

  # echo sys_nanosleep > /debug/tracing/set_ftrace_filter
  # cat /debug/tracing/set_ftrace_filter
sys_nanosleep
  # echo hrtimer_* >> /debug/tracing/set_ftrace_filter
  # cat /debug/tracing/set_ftrace_filter
hrtimer_run_queues
hrtimer_run_pending
hrtimer_init
hrtimer_cancel
hrtimer_try_to_cancel
hrtimer_forward
hrtimer_start
hrtimer_reprogram
hrtimer_force_reprogram
hrtimer_get_next_event
hrtimer_interrupt
sys_nanosleep
hrtimer_nanosleep
hrtimer_wakeup
hrtimer_get_remaining
hrtimer_get_res
hrtimer_init_sleeper


The set_ftrace_notrace file prevents the listed functions from being
traced.

  # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace

Produces:

# tracer: ftrace
#
#           TASK-PID   CPU#    TIMESTAMP  FUNCTION
#              | |      |          |         |
            bash-4043  [01]   115.281644: finish_task_switch <-schedule
            bash-4043  [01]   115.281645: hrtick_set <-schedule
            bash-4043  [01]   115.281645: hrtick_clear <-hrtick_set
            bash-4043  [01]   115.281646: wait_for_completion <-__stop_machine_run
            bash-4043  [01]   115.281647: wait_for_common <-wait_for_completion
            bash-4043  [01]   115.281647: kthread_stop <-stop_machine_run
            bash-4043  [01]   115.281648: init_waitqueue_head <-kthread_stop
            bash-4043  [01]   115.281648: wake_up_process <-kthread_stop
            bash-4043  [01]   115.281649: try_to_wake_up <-wake_up_process

We can see that there is no more lock or preempt tracing.

ftraced
-------

As mentioned above, when dynamic ftrace is configured in, a kernel
thread wakes up once a second and checks to see if there are mcount
calls that need to be converted into nops. If there are none, it
simply goes back to sleep. But if there are some, it will call
kstop_machine to convert the calls into nops.

There may be cases in which you do not want this added latency.
Perhaps you are doing some audio recording and this activity might
cause skips in the playback. There is an interface to disable
and enable the ftraced kernel thread.

  # echo 0 > /debug/tracing/ftraced_enabled

This will disable the calling of kstop_machine to update the
mcount calls to nops. Remember that there is a large overhead
to calling mcount. Without this kernel thread, that overhead will
remain.

Any write to the ftraced_enabled file will cause the kstop_machine
to run if there are recorded calls to mcount. This means that a
user can manually perform the updates when they want to, by simply
echoing a '0' into the ftraced_enabled file.

The updates are also done at the beginning of enabling a tracer
that uses ftrace function recording.


trace_pipe
----------

The trace_pipe file outputs the same content as the trace file, but
the effect on the tracing is different. Every read from trace_pipe
is consumed. This means that subsequent reads will be different.
The trace is live.
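
Since reads from trace_pipe block until data arrives, and consume
what they return, a dedicated reader process is the natural way to
drain it. The following is a minimal sketch of such a consumer,
assuming debugfs is mounted at /debug as in the examples above; the
interactive session that follows shows the same idea with cat.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Drain /debug/tracing/trace_pipe to stdout. Reads block until the
 * tracer produces data; everything read here is consumed and will
 * not appear again in subsequent reads of trace_pipe.
 */
int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/debug/tracing/trace_pipe", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);

	close(fd);
	return 0;
}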
+ + # echo ftrace > /debug/tracing/current_tracer + # cat /debug/tracing/trace_pipe > /tmp/trace.out & +[1] 4153 + # echo 1 > /debug/tracing/tracing_enabled + # usleep 1 + # echo 0 > /debug/tracing/tracing_enabled + # cat /debug/tracing/trace +# tracer: ftrace +# +# TASK-PID CPU# TIMESTAMP FUNCTION +# | | | | | + + # + # cat /tmp/trace.out + bash-4043 [00] 41.267106: finish_task_switch <-schedule + bash-4043 [00] 41.267106: hrtick_set <-schedule + bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set + bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run + bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion + bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run + bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop + bash-4043 [00] 41.267110: wake_up_process <-kthread_stop + bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process + bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up + + +Note, reading the trace_pipe will block until more input is added. +By changing the tracer, trace_pipe will issue an EOF. We needed +to set the ftrace tracer _before_ cating the trace_pipe file. + + +trace entries +------------- + +Having too much or not enough data can be troublesome in diagnosing +some issue in the kernel. The file trace_entries is used to modify +the size of the internal trace buffers. The numbers listed +is the number of entries that can be recorded per CPU. To know +the full size, multiply the number of possible CPUS with the +number of entries. + + # cat /debug/tracing/trace_entries +65620 + +Note, to modify this you must have tracing fulling disabled. To do that, +echo "none" into the current_tracer. + + # echo none > /debug/tracing/current_tracer + # echo 100000 > /debug/tracing/trace_entries + # cat /debug/tracing/trace_entries +100045 + + +Notice that we echoed in 100,000 but the size is 100,045. The entries +are held by individual pages. It allocates the number of pages it takes +to fulfill the request. If more entries may fit on the last page +it will add them. + + # echo 1 > /debug/tracing/trace_entries + # cat /debug/tracing/trace_entries +85 + +This shows us that 85 entries can fit on a single page. + +The number of pages that will be allocated is a percentage of available +memory. Allocating too much will produces an error. + + # echo 1000000000000 > /debug/tracing/trace_entries +-bash: echo: write error: Cannot allocate memory + # cat /debug/tracing/trace_entries +85 + diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index 17f1f91af35..946b66e1b65 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -148,9 +148,9 @@ tcp_available_congestion_control - STRING but not loaded. tcp_base_mss - INTEGER - The initial value of search_low to be used by Packetization Layer - Path MTU Discovery (MTU probing). If MTU probing is enabled, - this is the inital MSS used by the connection. + The initial value of search_low to be used by the packetization layer + Path MTU discovery (MTU probing). If MTU probing is enabled, + this is the initial MSS used by the connection. tcp_congestion_control - STRING Set the congestion control algorithm to be used for new @@ -185,10 +185,9 @@ tcp_frto - INTEGER timeouts. It is particularly beneficial in wireless environments where packet loss is typically due to random radio interference rather than intermediate router congestion. F-RTO is sender-side - only modification. 
Therefore it does not require any support from - the peer, but in a typical case, however, where wireless link is - the local access link and most of the data flows downlink, the - faraway servers should have F-RTO enabled to take advantage of it. + only modification. Therefore it does not require any support from + the peer. + If set to 1, basic version is enabled. 2 enables SACK enhanced F-RTO if flow uses SACK. The basic version can be used also when SACK is in use though scenario(s) with it exists where F-RTO @@ -276,7 +275,7 @@ tcp_mem - vector of 3 INTEGERs: min, pressure, max memory. tcp_moderate_rcvbuf - BOOLEAN - If set, TCP performs receive buffer autotuning, attempting to + If set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer (no greater than tcp_rmem[2]) to match the size required by the path for full throughput. Enabled by default. @@ -336,7 +335,7 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max pressure. Default: 8K - default: default size of receive buffer used by TCP sockets. + default: initial size of receive buffer used by TCP sockets. This value overrides net.core.rmem_default used by other protocols. Default: 87380 bytes. This value results in window of 65535 with default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit @@ -344,8 +343,10 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max max: maximal size of receive buffer allowed for automatically selected receiver buffers for TCP socket. This value does not override - net.core.rmem_max, "static" selection via SO_RCVBUF does not use this. - Default: 87380*2 bytes. + net.core.rmem_max. Calling setsockopt() with SO_RCVBUF disables + automatic tuning of that socket's receive buffer size, in which + case this value is ignored. + Default: between 87380B and 4MB, depending on RAM size. tcp_sack - BOOLEAN Enable select acknowledgments (SACKS). @@ -358,7 +359,7 @@ tcp_slow_start_after_idle - BOOLEAN Default: 1 tcp_stdurg - BOOLEAN - Use the Host requirements interpretation of the TCP urg pointer field. + Use the Host requirements interpretation of the TCP urgent pointer field. Most hosts use the older BSD interpretation, so if you turn this on Linux might not communicate correctly with them. Default: FALSE @@ -371,12 +372,12 @@ tcp_synack_retries - INTEGER tcp_syncookies - BOOLEAN Only valid when the kernel was compiled with CONFIG_SYNCOOKIES Send out syncookies when the syn backlog queue of a socket - overflows. This is to prevent against the common 'syn flood attack' + overflows. This is to prevent against the common 'SYN flood attack' Default: FALSE Note, that syncookies is fallback facility. It MUST NOT be used to help highly loaded servers to stand - against legal connection rate. If you see synflood warnings + against legal connection rate. If you see SYN flood warnings in your logs, but investigation shows that they occur because of overload with legal connections, you should tune another parameters until this warning disappear. @@ -386,7 +387,7 @@ tcp_syncookies - BOOLEAN to use TCP extensions, can result in serious degradation of some services (f.e. SMTP relaying), visible not by you, but your clients and relays, contacting you. While you see - synflood warnings in logs not being really flooded, your server + SYN flood warnings in logs not being really flooded, your server is seriously misconfigured. tcp_syn_retries - INTEGER @@ -419,19 +420,21 @@ tcp_window_scaling - BOOLEAN Enable window scaling as defined in RFC1323. 
tcp_wmem - vector of 3 INTEGERs: min, default, max - min: Amount of memory reserved for send buffers for TCP socket. + min: Amount of memory reserved for send buffers for TCP sockets. Each TCP socket has rights to use it due to fact of its birth. Default: 4K - default: Amount of memory allowed for send buffers for TCP socket - by default. This value overrides net.core.wmem_default used - by other protocols, it is usually lower than net.core.wmem_default. + default: initial size of send buffer used by TCP sockets. This + value overrides net.core.wmem_default used by other protocols. + It is usually lower than net.core.wmem_default. Default: 16K - max: Maximal amount of memory allowed for automatically selected - send buffers for TCP socket. This value does not override - net.core.wmem_max, "static" selection via SO_SNDBUF does not use this. - Default: 128K + max: Maximal amount of memory allowed for automatically tuned + send buffers for TCP sockets. This value does not override + net.core.wmem_max. Calling setsockopt() with SO_SNDBUF disables + automatic tuning of that socket's send buffer size, in which case + this value is ignored. + Default: between 64K and 4MB, depending on RAM size. tcp_workaround_signed_windows - BOOLEAN If set, assume no receipt of a window scaling option means the @@ -1060,24 +1063,193 @@ bridge-nf-filter-pppoe-tagged - BOOLEAN Default: 1 -UNDOCUMENTED: +proc/sys/net/sctp/* Variables: + +addip_enable - BOOLEAN + Enable or disable extension of Dynamic Address Reconfiguration + (ADD-IP) functionality specified in RFC5061. This extension provides + the ability to dynamically add and remove new addresses for the SCTP + associations. + + 1: Enable extension. + + 0: Disable extension. + + Default: 0 + +addip_noauth_enable - BOOLEAN + Dynamic Address Reconfiguration (ADD-IP) requires the use of + authentication to protect the operations of adding or removing new + addresses. This requirement is mandated so that unauthorized hosts + would not be able to hijack associations. However, older + implementations may not have implemented this requirement while + allowing the ADD-IP extension. For reasons of interoperability, + we provide this variable to control the enforcement of the + authentication requirement. + + 1: Allow ADD-IP extension to be used without authentication. This + should only be set in a closed environment for interoperability + with older implementations. + + 0: Enforce the authentication requirement + + Default: 0 + +auth_enable - BOOLEAN + Enable or disable Authenticated Chunks extension. This extension + provides the ability to send and receive authenticated chunks and is + required for secure operation of Dynamic Address Reconfiguration + (ADD-IP) extension. + + 1: Enable this extension. + 0: Disable this extension. + + Default: 0 + +prsctp_enable - BOOLEAN + Enable or disable the Partial Reliability extension (RFC3758) which + is used to notify peers that a given DATA should no longer be expected. + + 1: Enable extension + 0: Disable + + Default: 1 + +max_burst - INTEGER + The limit of the number of new packets that can be initially sent. It + controls how bursty the generated traffic can be. + + Default: 4 + +association_max_retrans - INTEGER + Set the maximum number for retransmissions that an association can + attempt deciding that the remote end is unreachable. If this value + is exceeded, the association is terminated. 
+
+	Default: 10
+
+max_init_retransmits - INTEGER
+	The maximum number of retransmissions of INIT and COOKIE-ECHO chunks
+	that an association will attempt before declaring the destination
+	unreachable and terminating.
+
+	Default: 8
+
+path_max_retrans - INTEGER
+	The maximum number of retransmissions that will be attempted on a given
+	path. Once this threshold is exceeded, the path is considered
+	unreachable, and new traffic will use a different path when the
+	association is multihomed.
+
+	Default: 5
+
+rto_initial - INTEGER
+	The initial round trip timeout value in milliseconds that will be used
+	in calculating round trip times. This is the initial time interval
+	for retransmissions.
+
+	Default: 3000

-dev_weight FIXME
-discovery_slots FIXME
-discovery_timeout FIXME
-fast_poll_increase FIXME
-ip6_queue_maxlen FIXME
-lap_keepalive_time FIXME
-lo_cong FIXME
-max_baud_rate FIXME
-max_dgram_qlen FIXME
-max_noreply_time FIXME
-max_tx_data_size FIXME
-max_tx_window FIXME
-min_tx_turn_time FIXME
-mod_cong FIXME
-no_cong FIXME
-no_cong_thresh FIXME
-slot_timeout FIXME
-warn_noreply_time FIXME

+rto_max - INTEGER
+	The maximum value (in milliseconds) of the round trip timeout. This
+	is the largest time interval that can elapse between retransmissions.
+
+	Default: 60000
+
+rto_min - INTEGER
+	The minimum value (in milliseconds) of the round trip timeout. This
+	is the smallest time interval that can elapse between retransmissions.
+
+	Default: 1000
+
+hb_interval - INTEGER
+	The interval (in milliseconds) between HEARTBEAT chunks. These chunks
+	are sent at the specified interval on idle paths to probe the state of
+	a given path between the two association endpoints.
+
+	Default: 30000
+
+sack_timeout - INTEGER
+	The amount of time (in milliseconds) that the implementation will wait
+	to send a SACK.
+
+	Default: 200
+
+valid_cookie_life - INTEGER
+	The default lifetime of the SCTP cookie (in milliseconds). The cookie
+	is used during association establishment.
+
+	Default: 60000
+
+cookie_preserve_enable - BOOLEAN
+	Enable or disable the ability to extend the lifetime of the SCTP cookie
+	that is used during the establishment phase of an SCTP association.
+
+	1: Enable cookie lifetime extension.
+	0: Disable
+
+	Default: 1
+
+rcvbuf_policy - INTEGER
+	Determines if the receive buffer is attributed to the socket or to the
+	association. SCTP supports the capability to create multiple
+	associations on a single socket. When using this capability, it is
+	possible that a single stalled association that's buffering a lot
+	of data may block other associations from delivering their data by
+	consuming all of the receive buffer space. To work around this,
+	the rcvbuf_policy could be set to attribute the receiver buffer space
+	to each association instead of the socket. This prevents the described
+	blocking.
+
+	1: rcvbuf space is per association
+	0: rcvbuf space is per socket
+
+	Default: 0
+
+sndbuf_policy - INTEGER
+	Similar to rcvbuf_policy above, this applies to send buffer space.
+
+	1: Send buffer is tracked per association
+	0: Send buffer is tracked per socket.
+
+	Default: 0
+
+sctp_mem - vector of 3 INTEGERs: min, pressure, max
+	Number of pages allowed for queueing by all SCTP sockets.
+
+	min: Below this number of pages SCTP is not bothered about its
+	memory appetite. When the amount of memory allocated by SCTP exceeds
+	this number, SCTP starts to moderate memory usage.
+
+	pressure: This value was introduced to follow the format of tcp_mem.
+
+	max: Number of pages allowed for queueing by all SCTP sockets.
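rcvbuf_policy and sndbuf_policy above only matter for one-to-many style SCTP sockets, where many associations share one descriptor and, under the default policy, one receive buffer. A minimal sketch of creating such a socket, assuming kernel SCTP support; the port number is an arbitrary example:

	#include <stdio.h>
	#include <string.h>
	#include <sys/socket.h>
	#include <netinet/in.h>

	#ifndef IPPROTO_SCTP
	#define IPPROTO_SCTP 132	/* IANA-assigned protocol number */
	#endif

	int main(void)
	{
		struct sockaddr_in addr;
		/* SOCK_SEQPACKET + IPPROTO_SCTP gives a one-to-many socket. */
		int fd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);

		if (fd < 0) {
			perror("socket");	/* e.g. kernel without SCTP */
			return 1;
		}

		memset(&addr, 0, sizeof(addr));
		addr.sin_family = AF_INET;
		addr.sin_port = htons(9999);	/* arbitrary example port */
		addr.sin_addr.s_addr = htonl(INADDR_ANY);

		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
		    listen(fd, 8) < 0) {
			perror("bind/listen");
			return 1;
		}

		/*
		 * Every peer that associates now shares this fd.  With
		 * rcvbuf_policy=1 each association gets its own receive
		 * buffer accounting instead of competing for the socket's.
		 */
		return 0;
	}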
+
+	Default is calculated at boot time from the amount of available memory.
+
+sctp_rmem - vector of 3 INTEGERs: min, default, max
+	See tcp_rmem for a description.
+
+sctp_wmem - vector of 3 INTEGERs: min, default, max
+	See tcp_wmem for a description.
+
+UNDOCUMENTED:
+/proc/sys/net/core/*
+	dev_weight FIXME
+
+/proc/sys/net/unix/*
+	max_dgram_qlen FIXME
+
+/proc/sys/net/irda/*
+	fast_poll_increase FIXME
+	warn_noreply_time FIXME
+	discovery_slots FIXME
+	slot_timeout FIXME
+	max_baud_rate FIXME
+	discovery_timeout FIXME
+	lap_keepalive_time FIXME
+	max_noreply_time FIXME
+	max_tx_data_size FIXME
+	max_tx_window FIXME
+	min_tx_turn_time FIXME
diff --git a/arch/powerpc/kernel/of_platform.c b/arch/powerpc/kernel/of_platform.c
index e79ad8afda0..3f37a6e6277 100644
--- a/arch/powerpc/kernel/of_platform.c
+++ b/arch/powerpc/kernel/of_platform.c
@@ -76,6 +76,8 @@ struct of_device* of_platform_device_create(struct device_node *np,
		return NULL;

	dev->dma_mask = 0xffffffffUL;
+	dev->dev.coherent_dma_mask = DMA_32BIT_MASK;
+
	dev->dev.bus = &of_platform_bus_type;

	/* We do not fill the DMA ops for platform devices by default.
diff --git a/arch/x86/kernel/.gitignore b/arch/x86/kernel/.gitignore
index 4ea38a39aed..08f4fd73146 100644
--- a/arch/x86/kernel/.gitignore
+++ b/arch/x86/kernel/.gitignore
@@ -1,2 +1,3 @@
 vsyscall.lds
 vsyscall_32.lds
+vmlinux.lds
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 45e546c4ba7..115f13ee40c 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -300,6 +300,29 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 }
 EXPORT_SYMBOL(ioremap_cache);

+static void __iomem *ioremap_default(resource_size_t phys_addr,
+				     unsigned long size)
+{
+	unsigned long flags;
+	void *ret;
+	int err;
+
+	/*
+	 * - WB for WB-able memory and no other conflicting mappings
+	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
+	 * - Inherit from conflicting mappings otherwise
+	 */
+	err = reserve_memtype(phys_addr, phys_addr + size, -1, &flags);
+	if (err < 0)
+		return NULL;
+
+	ret = (void *) __ioremap_caller(phys_addr, size, flags,
+					__builtin_return_address(0));
+
+	free_memtype(phys_addr, phys_addr + size);
+	return (void __iomem *)ret;
+}
+
 /**
  * iounmap - Free an IO remapping
  * @addr: virtual address from ioremap_*
@@ -365,7 +388,7 @@ void *xlate_dev_mem_ptr(unsigned long phys)
	if (page_is_ram(start >> PAGE_SHIFT))
		return __va(phys);

-	addr = (void __force *)ioremap(start, PAGE_SIZE);
+	addr = (void __force *)ioremap_default(start, PAGE_SIZE);
	if (addr)
		addr = (void *)((unsigned long)addr | (phys & ~PAGE_MASK));

diff --git a/crypto/chainiv.c b/crypto/chainiv.c
index 6da3f577e4d..9affadee328 100644
--- a/crypto/chainiv.c
+++ b/crypto/chainiv.c
@@ -117,6 +117,7 @@ static int chainiv_init(struct crypto_tfm *tfm)
 static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
 {
	int queued;
+	int err = ctx->err;

	if (!ctx->queue.qlen) {
		smp_mb__before_clear_bit();
@@ -131,7 +132,7 @@ static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
	BUG_ON(!queued);

 out:
-	return ctx->err;
+	return err;
 }

 static int async_chainiv_postpone_request(struct skcipher_givcrypt_request *req)
@@ -227,6 +228,7 @@ static void async_chainiv_do_postponed(struct work_struct *work)
						postponed);
	struct skcipher_givcrypt_request *req;
	struct ablkcipher_request *subreq;
+	int err;

	/* Only handle one request at a time to avoid hogging keventd.
*/ spin_lock_bh(&ctx->lock); @@ -241,7 +243,11 @@ static void async_chainiv_do_postponed(struct work_struct *work) subreq = skcipher_givcrypt_reqctx(req); subreq->base.flags |= CRYPTO_TFM_REQ_MAY_SLEEP; - async_chainiv_givencrypt_tail(req); + err = async_chainiv_givencrypt_tail(req); + + local_bh_disable(); + skcipher_givcrypt_complete(req, err); + local_bh_enable(); } static int async_chainiv_init(struct crypto_tfm *tfm) diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c index 6beabc5abd0..e47f6e02133 100644 --- a/crypto/tcrypt.c +++ b/crypto/tcrypt.c @@ -586,12 +586,6 @@ static void test_cipher(char *algo, int enc, j = 0; for (i = 0; i < tcount; i++) { - data = kzalloc(template[i].ilen, GFP_KERNEL); - if (!data) - continue; - - memcpy(data, template[i].input, template[i].ilen); - if (template[i].iv) memcpy(iv, template[i].iv, MAX_IVLEN); else @@ -613,10 +607,8 @@ static void test_cipher(char *algo, int enc, printk("setkey() failed flags=%x\n", crypto_ablkcipher_get_flags(tfm)); - if (!template[i].fail) { - kfree(data); + if (!template[i].fail) goto out; - } } temp = 0; diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c index 3ff8b14420d..9330b7922f6 100644 --- a/drivers/ata/libata-acpi.c +++ b/drivers/ata/libata-acpi.c @@ -29,14 +29,16 @@ enum { ATA_ACPI_FILTER_SETXFER = 1 << 0, ATA_ACPI_FILTER_LOCK = 1 << 1, + ATA_ACPI_FILTER_DIPM = 1 << 2, ATA_ACPI_FILTER_DEFAULT = ATA_ACPI_FILTER_SETXFER | - ATA_ACPI_FILTER_LOCK, + ATA_ACPI_FILTER_LOCK | + ATA_ACPI_FILTER_DIPM, }; static unsigned int ata_acpi_gtf_filter = ATA_ACPI_FILTER_DEFAULT; module_param_named(acpi_gtf_filter, ata_acpi_gtf_filter, int, 0644); -MODULE_PARM_DESC(acpi_gtf_filter, "filter mask for ACPI _GTF commands, set to filter out (0x1=set xfermode, 0x2=lock/freeze lock)"); +MODULE_PARM_DESC(acpi_gtf_filter, "filter mask for ACPI _GTF commands, set to filter out (0x1=set xfermode, 0x2=lock/freeze lock, 0x4=DIPM)"); #define NO_PORT_MULT 0xffff #define SATA_ADR(root, pmp) (((root) << 16) | (pmp)) @@ -195,6 +197,10 @@ static void ata_acpi_handle_hotplug(struct ata_port *ap, struct ata_device *dev, /* This device does not support hotplug */ return; + if (event == ACPI_NOTIFY_BUS_CHECK || + event == ACPI_NOTIFY_DEVICE_CHECK) + status = acpi_evaluate_integer(handle, "_STA", NULL, &sta); + spin_lock_irqsave(ap->lock, flags); switch (event) { @@ -202,7 +208,6 @@ static void ata_acpi_handle_hotplug(struct ata_port *ap, struct ata_device *dev, case ACPI_NOTIFY_DEVICE_CHECK: ata_ehi_push_desc(ehi, "ACPI event"); - status = acpi_evaluate_integer(handle, "_STA", NULL, &sta); if (ACPI_FAILURE(status)) { ata_port_printk(ap, KERN_ERR, "acpi: failed to determine bay status (0x%x)\n", @@ -690,6 +695,14 @@ static int ata_acpi_filter_tf(const struct ata_taskfile *tf, return 1; } + if (ata_acpi_gtf_filter & ATA_ACPI_FILTER_DIPM) { + /* inhibit enabling DIPM */ + if (tf->command == ATA_CMD_SET_FEATURES && + tf->feature == SETFEATURES_SATA_ENABLE && + tf->nsect == SATA_DIPM) + return 1; + } + return 0; } diff --git a/drivers/ata/pata_sis.c b/drivers/ata/pata_sis.c index e82c66e8d31..26345d7b531 100644 --- a/drivers/ata/pata_sis.c +++ b/drivers/ata/pata_sis.c @@ -56,6 +56,7 @@ static const struct sis_laptop sis_laptop[] = { { 0x5513, 0x1043, 0x1107 }, /* ASUS A6K */ { 0x5513, 0x1734, 0x105F }, /* FSC Amilo A1630 */ { 0x5513, 0x1071, 0x8640 }, /* EasyNote K5305 */ + { 0x5513, 0x1039, 0x5513 }, /* Targa Visionary 1000 */ /* end marker */ { 0, } }; diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c index 
1b9a8704781..0e6df289cb4 100644 --- a/drivers/char/ipmi/ipmi_watchdog.c +++ b/drivers/char/ipmi/ipmi_watchdog.c @@ -755,9 +755,8 @@ static ssize_t ipmi_write(struct file *file, rv = ipmi_heartbeat(); if (rv) return rv; - return 1; } - return 0; + return len; } static ssize_t ipmi_read(struct file *file, diff --git a/drivers/char/rtc.c b/drivers/char/rtc.c index 5f80a9dff57..909cac93fa2 100644 --- a/drivers/char/rtc.c +++ b/drivers/char/rtc.c @@ -678,12 +678,13 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel) if (arg != (1<<tmp)) return -EINVAL; + rtc_freq = arg; + spin_lock_irqsave(&rtc_lock, flags); if (hpet_set_periodic_freq(arg)) { spin_unlock_irqrestore(&rtc_lock, flags); return 0; } - rtc_freq = arg; val = CMOS_READ(RTC_FREQ_SELECT) & 0xf0; val |= (16 - tmp); diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c index 13a4bdd4e4d..c7a977bc03e 100644 --- a/drivers/char/tpm/tpm_tis.c +++ b/drivers/char/tpm/tpm_tis.c @@ -623,6 +623,7 @@ static struct pnp_device_id tpm_pnp_tbl[] __devinitdata = { {"IFX0102", 0}, /* Infineon */ {"BCM0101", 0}, /* Broadcom */ {"NSC1200", 0}, /* National */ + {"ICO0102", 0}, /* Intel */ /* Add new here */ {"", 0}, /* User Specified */ {"", 0} /* Terminator */ diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c index 8934178a23e..95f82cfb6c5 100644 --- a/drivers/infiniband/hw/cxgb3/iwch_provider.c +++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c @@ -1096,7 +1096,9 @@ static ssize_t show_fw_ver(struct device *dev, struct device_attribute *attr, ch struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev; PDBG("%s dev 0x%p\n", __func__, dev); + rtnl_lock(); lldev->ethtool_ops->get_drvinfo(lldev, &info); + rtnl_unlock(); return sprintf(buf, "%s\n", info.fw_version); } @@ -1109,7 +1111,9 @@ static ssize_t show_hca(struct device *dev, struct device_attribute *attr, struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev; PDBG("%s dev 0x%p\n", __func__, dev); + rtnl_lock(); lldev->ethtool_ops->get_drvinfo(lldev, &info); + rtnl_unlock(); return sprintf(buf, "%s\n", info.driver); } diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c index 54c8ee28fcc..3b27df52456 100644 --- a/drivers/md/raid5.c +++ b/drivers/md/raid5.c @@ -2017,12 +2017,7 @@ static int __handle_issuing_new_read_requests5(struct stripe_head *sh, */ s->uptodate++; return 0; /* uptodate + compute == disks */ - } else if ((s->uptodate < disks - 1) && - test_bit(R5_Insync, &dev->flags)) { - /* Note: we hold off compute operations while checks are - * in flight, but we still prefer 'compute' over 'read' - * hence we only read if (uptodate < * disks-1) - */ + } else if (test_bit(R5_Insync, &dev->flags)) { set_bit(R5_LOCKED, &dev->flags); set_bit(R5_Wantread, &dev->flags); if (!test_and_set_bit(STRIPE_OP_IO, &sh->ops.pending)) diff --git a/drivers/net/irda/nsc-ircc.c b/drivers/net/irda/nsc-ircc.c index a7714da7c28..effc1ce8179 100644 --- a/drivers/net/irda/nsc-ircc.c +++ b/drivers/net/irda/nsc-ircc.c @@ -152,6 +152,7 @@ static chipio_t pnp_info; static const struct pnp_device_id nsc_ircc_pnp_table[] = { { .id = "NSC6001", .driver_data = 0 }, { .id = "IBM0071", .driver_data = 0 }, + { .id = "HWPC224", .driver_data = 0 }, { } }; diff --git a/drivers/net/irda/via-ircc.c b/drivers/net/irda/via-ircc.c index 58e12878458..04ad3573b15 100644 --- a/drivers/net/irda/via-ircc.c +++ b/drivers/net/irda/via-ircc.c @@ -1546,6 +1546,7 @@ static int via_ircc_net_open(struct net_device *dev) IRDA_WARNING("%s, unable to allocate 
dma2=%d\n", driver_name, self->io.dma2); free_irq(self->io.irq, self); + free_dma(self->io.dma); return -EAGAIN; } } @@ -1606,6 +1607,8 @@ static int via_ircc_net_close(struct net_device *dev) EnAllInt(iobase, OFF); free_irq(self->io.irq, dev); free_dma(self->io.dma); + if (self->io.dma2 != self->io.dma) + free_dma(self->io.dma2); return 0; } diff --git a/drivers/net/tun.c b/drivers/net/tun.c index 7ab94c825b5..b9018bfa0a9 100644 --- a/drivers/net/tun.c +++ b/drivers/net/tun.c @@ -602,6 +602,12 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr) tun->attached = 1; get_net(dev_net(tun->dev)); + /* Make sure persistent devices do not get stuck in + * xoff state. + */ + if (netif_running(tun->dev)) + netif_wake_queue(tun->dev); + strcpy(ifr->ifr_name, tun->dev->name); return 0; diff --git a/drivers/net/wireless/hostap/hostap_cs.c b/drivers/net/wireless/hostap/hostap_cs.c index 80039a0ae02..3b4e55cf33c 100644 --- a/drivers/net/wireless/hostap/hostap_cs.c +++ b/drivers/net/wireless/hostap/hostap_cs.c @@ -777,8 +777,10 @@ static int hostap_cs_suspend(struct pcmcia_device *link) int dev_open = 0; struct hostap_interface *iface = NULL; - if (dev) - iface = netdev_priv(dev); + if (!dev) + return -ENODEV; + + iface = netdev_priv(dev); PDEBUG(DEBUG_EXTRA, "%s: CS_EVENT_PM_SUSPEND\n", dev_info); if (iface && iface->local) @@ -798,8 +800,10 @@ static int hostap_cs_resume(struct pcmcia_device *link) int dev_open = 0; struct hostap_interface *iface = NULL; - if (dev) - iface = netdev_priv(dev); + if (!dev) + return -ENODEV; + + iface = netdev_priv(dev); PDEBUG(DEBUG_EXTRA, "%s: CS_EVENT_PM_RESUME\n", dev_info); diff --git a/drivers/net/wireless/iwlwifi/iwl-3945.c b/drivers/net/wireless/iwlwifi/iwl-3945.c index f5387a7a76c..55ac850744b 100644 --- a/drivers/net/wireless/iwlwifi/iwl-3945.c +++ b/drivers/net/wireless/iwlwifi/iwl-3945.c @@ -449,7 +449,7 @@ static void iwl3945_dbg_report_frame(struct iwl3945_priv *priv, if (print_summary) { char *title; - u32 rate; + int rate; if (hundred) title = "100Frames"; @@ -487,7 +487,7 @@ static void iwl3945_dbg_report_frame(struct iwl3945_priv *priv, * but you can hack it to show more, if you'd like to. 
	 */
	if (dataframe)
		IWL_DEBUG_RX("%s: mhd=0x%04x, dst=0x%02x, "
-			"len=%u, rssi=%d, chnl=%d, rate=%u, \n",
+			"len=%u, rssi=%d, chnl=%d, rate=%d, \n",
			title, fc, header->addr1[5], length, rssi,
			channel, rate);
	else {
diff --git a/drivers/net/wireless/libertas/scan.c b/drivers/net/wireless/libertas/scan.c
index d448c9702a0..387d4878af2 100644
--- a/drivers/net/wireless/libertas/scan.c
+++ b/drivers/net/wireless/libertas/scan.c
@@ -567,11 +567,11 @@ static int lbs_process_bss(struct bss_descriptor *bss,
	pos += 8;

	/* beacon interval is 2 bytes long */
-	bss->beaconperiod = le16_to_cpup((void *) pos);
+	bss->beaconperiod = get_unaligned_le16(pos);
	pos += 2;

	/* capability information is 2 bytes long */
-	bss->capability = le16_to_cpup((void *) pos);
+	bss->capability = get_unaligned_le16(pos);
	lbs_deb_scan("process_bss: capabilities 0x%04x\n", bss->capability);
	pos += 2;
diff --git a/drivers/net/wireless/rt2x00/rt2400pci.c b/drivers/net/wireless/rt2x00/rt2400pci.c
index 560b9c73c0b..b36ed1c6c74 100644
--- a/drivers/net/wireless/rt2x00/rt2400pci.c
+++ b/drivers/net/wireless/rt2x00/rt2400pci.c
@@ -731,6 +731,17 @@ static int rt2400pci_init_registers(struct rt2x00_dev *rt2x00dev)
			   (rt2x00dev->rx->data_size / 128));
	rt2x00pci_register_write(rt2x00dev, CSR9, reg);

+	rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+	rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 0);
+	rt2x00_set_field32(&reg, CSR14_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCN, 0);
+	rt2x00_set_field32(&reg, CSR14_TCFP, 0);
+	rt2x00_set_field32(&reg, CSR14_TATIMW, 0);
+	rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, CSR14_CFP_COUNT_PRELOAD, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCM_PRELOAD, 0);
+	rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+
	rt2x00pci_register_write(rt2x00dev, CNT3, 0x3f080000);

	rt2x00pci_register_read(rt2x00dev, ARCSR0, &reg);
diff --git a/drivers/net/wireless/rt2x00/rt2500pci.c b/drivers/net/wireless/rt2x00/rt2500pci.c
index a5ed54b6926..f7731fb8255 100644
--- a/drivers/net/wireless/rt2x00/rt2500pci.c
+++ b/drivers/net/wireless/rt2x00/rt2500pci.c
@@ -824,6 +824,17 @@ static int rt2500pci_init_registers(struct rt2x00_dev *rt2x00dev)
	rt2x00_set_field32(&reg, CSR11_CW_SELECT, 0);
	rt2x00pci_register_write(rt2x00dev, CSR11, reg);

+	rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+	rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 0);
+	rt2x00_set_field32(&reg, CSR14_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCN, 0);
+	rt2x00_set_field32(&reg, CSR14_TCFP, 0);
+	rt2x00_set_field32(&reg, CSR14_TATIMW, 0);
+	rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, CSR14_CFP_COUNT_PRELOAD, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCM_PRELOAD, 0);
+	rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+
	rt2x00pci_register_write(rt2x00dev, CNT3, 0);

	rt2x00pci_register_read(rt2x00dev, TXCSR8, &reg);
diff --git a/drivers/net/wireless/rt2x00/rt2500usb.c b/drivers/net/wireless/rt2x00/rt2500usb.c
index 61e59c17a60..d90512f97b3 100644
--- a/drivers/net/wireless/rt2x00/rt2500usb.c
+++ b/drivers/net/wireless/rt2x00/rt2500usb.c
@@ -801,6 +801,13 @@ static int rt2500usb_init_registers(struct rt2x00_dev *rt2x00dev)
	rt2x00_set_field16(&reg, TXRX_CSR8_BBP_ID1_VALID, 0);
	rt2500usb_register_write(rt2x00dev, TXRX_CSR8, reg);

+	rt2500usb_register_read(rt2x00dev, TXRX_CSR19, &reg);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TSF_COUNT, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TSF_SYNC, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TBCN, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_BEACON_GEN, 0);
+	rt2500usb_register_write(rt2x00dev, TXRX_CSR19, reg);
+
	rt2500usb_register_write(rt2x00dev,
				 TXRX_CSR21, 0xe78f);
	rt2500usb_register_write(rt2x00dev, MAC_CSR9, 0xff1d);
diff --git a/drivers/net/wireless/rt2x00/rt61pci.c b/drivers/net/wireless/rt2x00/rt61pci.c
index 14bc7b28165..c3afb5cbe80 100644
--- a/drivers/net/wireless/rt2x00/rt61pci.c
+++ b/drivers/net/wireless/rt2x00/rt61pci.c
@@ -1201,6 +1201,15 @@ static int rt61pci_init_registers(struct rt2x00_dev *rt2x00dev)
	rt2x00_set_field32(&reg, TXRX_CSR8_ACK_CTS_54MBS, 42);
	rt2x00pci_register_write(rt2x00dev, TXRX_CSR8, reg);

+	rt2x00pci_register_read(rt2x00dev, TXRX_CSR9, &reg);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_INTERVAL, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TIMESTAMP_COMPENSATE, 0);
+	rt2x00pci_register_write(rt2x00dev, TXRX_CSR9, reg);
+
	rt2x00pci_register_write(rt2x00dev, TXRX_CSR15, 0x0000000f);

	rt2x00pci_register_write(rt2x00dev, MAC_CSR6, 0x00000fff);
diff --git a/drivers/net/wireless/rt2x00/rt73usb.c b/drivers/net/wireless/rt2x00/rt73usb.c
index 83cc0147f69..46e9e081fbf 100644
--- a/drivers/net/wireless/rt2x00/rt73usb.c
+++ b/drivers/net/wireless/rt2x00/rt73usb.c
@@ -1006,6 +1006,15 @@ static int rt73usb_init_registers(struct rt2x00_dev *rt2x00dev)
	rt2x00_set_field32(&reg, TXRX_CSR8_ACK_CTS_54MBS, 42);
	rt73usb_register_write(rt2x00dev, TXRX_CSR8, reg);

+	rt73usb_register_read(rt2x00dev, TXRX_CSR9, &reg);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_INTERVAL, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TIMESTAMP_COMPENSATE, 0);
+	rt73usb_register_write(rt2x00dev, TXRX_CSR9, reg);
+
	rt73usb_register_write(rt2x00dev, TXRX_CSR15, 0x0000000f);

	rt73usb_register_read(rt2x00dev, MAC_CSR6, &reg);
diff --git a/drivers/net/wireless/zd1211rw/zd_mac.c b/drivers/net/wireless/zd1211rw/zd_mac.c
index 418606ac1c3..694e95d35fd 100644
--- a/drivers/net/wireless/zd1211rw/zd_mac.c
+++ b/drivers/net/wireless/zd1211rw/zd_mac.c
@@ -765,6 +765,7 @@ static void zd_op_remove_interface(struct ieee80211_hw *hw,
 {
	struct zd_mac *mac = zd_hw_mac(hw);
	mac->type = IEEE80211_IF_TYPE_INVALID;
+	zd_set_beacon_interval(&mac->chip, 0);
	zd_write_mac_addr(&mac->chip, NULL);
 }
diff --git a/drivers/net/wireless/zd1211rw/zd_usb.c b/drivers/net/wireless/zd1211rw/zd_usb.c
index 8941f5eb96c..6cdad976460 100644
--- a/drivers/net/wireless/zd1211rw/zd_usb.c
+++ b/drivers/net/wireless/zd1211rw/zd_usb.c
@@ -64,6 +64,7 @@ static struct usb_device_id usb_ids[] = {
	{ USB_DEVICE(0x079b, 0x0062), .driver_info = DEVICE_ZD1211B },
	{ USB_DEVICE(0x1582, 0x6003), .driver_info = DEVICE_ZD1211B },
	{ USB_DEVICE(0x050d, 0x705c), .driver_info = DEVICE_ZD1211B },
+	{ USB_DEVICE(0x083a, 0xe506), .driver_info = DEVICE_ZD1211B },
	{ USB_DEVICE(0x083a, 0x4505), .driver_info = DEVICE_ZD1211B },
	{ USB_DEVICE(0x0471, 0x1236), .driver_info = DEVICE_ZD1211B },
	{ USB_DEVICE(0x13b1, 0x0024), .driver_info = DEVICE_ZD1211B },
diff --git a/drivers/rapidio/rio-driver.c b/drivers/rapidio/rio-driver.c
index 3ce9f3defc1..956d3e79f6a 100644
--- a/drivers/rapidio/rio-driver.c
+++ b/drivers/rapidio/rio-driver.c
@@ -101,8 +101,8 @@ static int rio_device_probe(struct device *dev)
		if (error >= 0) {
			rdev->driver = rdrv;
			error = 0;
+		} else
			rio_dev_put(rdev);
-		}
	}
	return error;
}
diff --git a/drivers/ssb/driver_pcicore.c
b/drivers/ssb/driver_pcicore.c index d28c5386809..538c570df33 100644 --- a/drivers/ssb/driver_pcicore.c +++ b/drivers/ssb/driver_pcicore.c @@ -537,6 +537,13 @@ int ssb_pcicore_dev_irqvecs_enable(struct ssb_pcicore *pc, int err = 0; u32 tmp; + if (dev->bus->bustype != SSB_BUSTYPE_PCI) { + /* This SSB device is not on a PCI host-bus. So the IRQs are + * not routed through the PCI core. + * So we must not enable routing through the PCI core. */ + goto out; + } + if (!pdev) goto out; bus = pdev->bus; diff --git a/drivers/usb/host/ohci-au1xxx.c b/drivers/usb/host/ohci-au1xxx.c index f90fe0c7373..68c17f5ea8e 100644 --- a/drivers/usb/host/ohci-au1xxx.c +++ b/drivers/usb/host/ohci-au1xxx.c @@ -8,7 +8,7 @@ * Bus Glue for AMD Alchemy Au1xxx * * Written by Christopher Hoover <ch@hpl.hp.com> - * Based on fragments of previous driver by Rusell King et al. + * Based on fragments of previous driver by Russell King et al. * * Modified for LH7A404 from ohci-sa1111.c * by Durgesh Pattamatta <pattamattad@sharpsec.com> diff --git a/drivers/usb/host/ohci-lh7a404.c b/drivers/usb/host/ohci-lh7a404.c index 13c12ed2225..1ef5d482c14 100644 --- a/drivers/usb/host/ohci-lh7a404.c +++ b/drivers/usb/host/ohci-lh7a404.c @@ -8,7 +8,7 @@ * Bus Glue for Sharp LH7A404 * * Written by Christopher Hoover <ch@hpl.hp.com> - * Based on fragments of previous driver by Rusell King et al. + * Based on fragments of previous driver by Russell King et al. * * Modified for LH7A404 from ohci-sa1111.c * by Durgesh Pattamatta <pattamattad@sharpsec.com> diff --git a/drivers/usb/host/ohci-s3c2410.c b/drivers/usb/host/ohci-s3c2410.c index ead4772f0f2..3c7a740cfe0 100644 --- a/drivers/usb/host/ohci-s3c2410.c +++ b/drivers/usb/host/ohci-s3c2410.c @@ -8,7 +8,7 @@ * USB Bus Glue for Samsung S3C2410 * * Written by Christopher Hoover <ch@hpl.hp.com> - * Based on fragments of previous driver by Rusell King et al. + * Based on fragments of previous driver by Russell King et al. * * Modified for S3C2410 from ohci-sa1111.c, ohci-omap.c and ohci-lh7a40.c * by Ben Dooks, <ben@simtec.co.uk> diff --git a/drivers/usb/host/ohci-sa1111.c b/drivers/usb/host/ohci-sa1111.c index 0f48f2d9922..2e9dceb9bb9 100644 --- a/drivers/usb/host/ohci-sa1111.c +++ b/drivers/usb/host/ohci-sa1111.c @@ -8,7 +8,7 @@ * SA1111 Bus Glue * * Written by Christopher Hoover <ch@hpl.hp.com> - * Based on fragments of previous driver by Rusell King et al. + * Based on fragments of previous driver by Russell King et al. * * This file is licenced under the GPL. 
*/ diff --git a/drivers/video/fsl-diu-fb.c b/drivers/video/fsl-diu-fb.c index 712dabc6269..09d7e22c6fe 100644 --- a/drivers/video/fsl-diu-fb.c +++ b/drivers/video/fsl-diu-fb.c @@ -1324,7 +1324,7 @@ static int fsl_diu_suspend(struct of_device *ofdev, pm_message_t state) { struct fsl_diu_data *machine_data; - machine_data = dev_get_drvdata(&dev->dev); + machine_data = dev_get_drvdata(&ofdev->dev); disable_lcdc(machine_data->fsl_diu_info[0]); return 0; @@ -1334,7 +1334,7 @@ static int fsl_diu_resume(struct of_device *ofdev) { struct fsl_diu_data *machine_data; - machine_data = dev_get_drvdata(&dev->dev); + machine_data = dev_get_drvdata(&ofdev->dev); enable_lcdc(machine_data->fsl_diu_info[0]); return 0; diff --git a/fs/exec.c b/fs/exec.c index da94a6f05df..fd9234379e8 100644 --- a/fs/exec.c +++ b/fs/exec.c @@ -610,7 +610,7 @@ int setup_arg_pages(struct linux_binprm *bprm, bprm->exec -= stack_shift; down_write(&mm->mmap_sem); - vm_flags = vma->vm_flags; + vm_flags = VM_STACK_FLAGS; /* * Adjust stack execute permissions; explicitly enable for diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c index efc015c6128..44f87caf368 100644 --- a/fs/ocfs2/dlm/dlmmaster.c +++ b/fs/ocfs2/dlm/dlmmaster.c @@ -606,7 +606,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm, res->last_used = 0; + spin_lock(&dlm->spinlock); list_add_tail(&res->tracking, &dlm->tracking_list); + spin_unlock(&dlm->spinlock); memset(res->lvb, 0, DLM_LVB_LEN); memset(res->refmap, 0, sizeof(res->refmap)); diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c index 394d25a131a..80e20d9f278 100644 --- a/fs/ocfs2/dlmglue.c +++ b/fs/ocfs2/dlmglue.c @@ -1554,8 +1554,8 @@ out: */ int ocfs2_file_lock(struct file *file, int ex, int trylock) { - int ret, level = ex ? LKM_EXMODE : LKM_PRMODE; - unsigned int lkm_flags = trylock ? LKM_NOQUEUE : 0; + int ret, level = ex ? DLM_LOCK_EX : DLM_LOCK_PR; + unsigned int lkm_flags = trylock ? DLM_LKF_NOQUEUE : 0; unsigned long flags; struct ocfs2_file_private *fp = file->private_data; struct ocfs2_lock_res *lockres = &fp->fp_flock; @@ -1582,7 +1582,7 @@ int ocfs2_file_lock(struct file *file, int ex, int trylock) * Get the lock at NLMODE to start - that way we * can cancel the upconvert request if need be. 
*/ - ret = ocfs2_lock_create(osb, lockres, LKM_NLMODE, 0); + ret = ocfs2_lock_create(osb, lockres, DLM_LOCK_NL, 0); if (ret < 0) { mlog_errno(ret); goto out; @@ -1597,7 +1597,7 @@ int ocfs2_file_lock(struct file *file, int ex, int trylock) } lockres->l_action = OCFS2_AST_CONVERT; - lkm_flags |= LKM_CONVERT; + lkm_flags |= DLM_LKF_CONVERT; lockres->l_requested = level; lockres_or_flags(lockres, OCFS2_LOCK_BUSY); @@ -1664,7 +1664,7 @@ void ocfs2_file_unlock(struct file *file) if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED)) return; - if (lockres->l_level == LKM_NLMODE) + if (lockres->l_level == DLM_LOCK_NL) return; mlog(0, "Unlock: \"%s\" flags: 0x%lx, level: %d, act: %d\n", @@ -1678,11 +1678,11 @@ void ocfs2_file_unlock(struct file *file) lockres_or_flags(lockres, OCFS2_LOCK_BLOCKED); lockres->l_blocking = DLM_LOCK_EX; - gen = ocfs2_prepare_downconvert(lockres, LKM_NLMODE); + gen = ocfs2_prepare_downconvert(lockres, DLM_LOCK_NL); lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0); spin_unlock_irqrestore(&lockres->l_lock, flags); - ret = ocfs2_downconvert_lock(osb, lockres, LKM_NLMODE, 0, gen); + ret = ocfs2_downconvert_lock(osb, lockres, DLM_LOCK_NL, 0, gen); if (ret) { mlog_errno(ret); return; diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c index afaee301b0e..ad3d26ddfe3 100644 --- a/fs/xfs/xfs_log.c +++ b/fs/xfs/xfs_log.c @@ -2427,13 +2427,20 @@ restart: if (iclog->ic_size - iclog->ic_offset < 2*sizeof(xlog_op_header_t)) { xlog_state_switch_iclogs(log, iclog, iclog->ic_size); - /* If I'm the only one writing to this iclog, sync it to disk */ - if (atomic_read(&iclog->ic_refcnt) == 1) { + /* + * If I'm the only one writing to this iclog, sync it to disk. + * We need to do an atomic compare and decrement here to avoid + * racing with concurrent atomic_dec_and_lock() calls in + * xlog_state_release_iclog() when there is more than one + * reference to the iclog. 
+ */ + if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) { + /* we are the only one */ spin_unlock(&log->l_icloglock); - if ((error = xlog_state_release_iclog(log, iclog))) + error = xlog_state_release_iclog(log, iclog); + if (error) return error; } else { - atomic_dec(&iclog->ic_refcnt); spin_unlock(&log->l_icloglock); } goto restart; diff --git a/include/asm-avr32/setup.h b/include/asm-avr32/setup.h index ea3070ff13a..ff5b7cf6be4 100644 --- a/include/asm-avr32/setup.h +++ b/include/asm-avr32/setup.h @@ -2,7 +2,7 @@ * Copyright (C) 2004-2006 Atmel Corporation * * Based on linux/include/asm-arm/setup.h - * Copyright (C) 1997-1999 Russel King + * Copyright (C) 1997-1999 Russell King * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License version 2 as diff --git a/include/linux/xfrm.h b/include/linux/xfrm.h index 2ca6bae8872..fb0c215a305 100644 --- a/include/linux/xfrm.h +++ b/include/linux/xfrm.h @@ -339,6 +339,7 @@ struct xfrm_usersa_info { #define XFRM_STATE_NOPMTUDISC 4 #define XFRM_STATE_WILDRECV 8 #define XFRM_STATE_ICMP 16 +#define XFRM_STATE_AF_UNSPEC 32 }; struct xfrm_usersa_id { diff --git a/kernel/kprobes.c b/kernel/kprobes.c index d4998f81e22..1485ca8d0e0 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -79,7 +79,7 @@ static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL; * * For such cases, we now have a blacklist */ -struct kprobe_blackpoint kprobe_blacklist[] = { +static struct kprobe_blackpoint kprobe_blacklist[] = { {"preempt_schedule",}, {NULL} /* Terminator */ }; diff --git a/kernel/printk.c b/kernel/printk.c index 1fb1382009f..625d240d7ad 100644 --- a/kernel/printk.c +++ b/kernel/printk.c @@ -670,7 +670,7 @@ static int acquire_console_semaphore_for_printk(unsigned int cpu) return retval; } -const char printk_recursion_bug_msg [] = +static const char printk_recursion_bug_msg [] = KERN_CRIT "BUG: recent printk recursion!\n"; static int printk_recursion_bug; diff --git a/kernel/rcupreempt.c b/kernel/rcupreempt.c index 5e02b774070..41d275a81df 100644 --- a/kernel/rcupreempt.c +++ b/kernel/rcupreempt.c @@ -925,26 +925,22 @@ void rcu_offline_cpu(int cpu) spin_unlock_irqrestore(&rdp->lock, flags); } -void __devinit rcu_online_cpu(int cpu) -{ - unsigned long flags; - - spin_lock_irqsave(&rcu_ctrlblk.fliplock, flags); - cpu_set(cpu, rcu_cpu_online_map); - spin_unlock_irqrestore(&rcu_ctrlblk.fliplock, flags); -} - #else /* #ifdef CONFIG_HOTPLUG_CPU */ void rcu_offline_cpu(int cpu) { } -void __devinit rcu_online_cpu(int cpu) +#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ + +void __cpuinit rcu_online_cpu(int cpu) { -} + unsigned long flags; -#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */ + spin_lock_irqsave(&rcu_ctrlblk.fliplock, flags); + cpu_set(cpu, rcu_cpu_online_map); + spin_unlock_irqrestore(&rcu_ctrlblk.fliplock, flags); +} static void rcu_process_callbacks(struct softirq_action *unused) { diff --git a/kernel/sched.c b/kernel/sched.c index bcc22b569ee..8402944f715 100644 --- a/kernel/sched.c +++ b/kernel/sched.c @@ -5622,10 +5622,10 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu) double_rq_lock(rq_src, rq_dest); /* Already moved. */ if (task_cpu(p) != src_cpu) - goto out; + goto done; /* Affinity changed (again). 
*/ if (!cpu_isset(dest_cpu, p->cpus_allowed)) - goto out; + goto fail; on_rq = p->se.on_rq; if (on_rq) @@ -5636,8 +5636,9 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu) activate_task(rq_dest, p, 0); check_preempt_curr(rq_dest, p); } +done: ret = 1; -out: +fail: double_rq_unlock(rq_src, rq_dest); return ret; } diff --git a/mm/slub.c b/mm/slub.c index 1a427c0ae83..315c392253c 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1628,9 +1628,11 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, void **object; struct kmem_cache_cpu *c; unsigned long flags; + unsigned int objsize; local_irq_save(flags); c = get_cpu_slab(s, smp_processor_id()); + objsize = c->objsize; if (unlikely(!c->freelist || !node_match(c, node))) object = __slab_alloc(s, gfpflags, node, addr, c); @@ -1643,7 +1645,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s, local_irq_restore(flags); if (unlikely((gfpflags & __GFP_ZERO) && object)) - memset(object, 0, c->objsize); + memset(object, 0, objsize); return object; } diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c index 4b02d14e7ab..e1600ad8fb0 100644 --- a/net/ipv4/fib_trie.c +++ b/net/ipv4/fib_trie.c @@ -1359,17 +1359,17 @@ static int check_leaf(struct trie *t, struct leaf *l, t->stats.semantic_match_miss++; #endif if (err <= 0) - return plen; + return err; } - return -1; + return 1; } static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp, struct fib_result *res) { struct trie *t = (struct trie *) tb->tb_data; - int plen, ret = 0; + int ret; struct node *n; struct tnode *pn; int pos, bits; @@ -1393,10 +1393,7 @@ static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp, /* Just a leaf? */ if (IS_LEAF(n)) { - plen = check_leaf(t, (struct leaf *)n, key, flp, res); - if (plen < 0) - goto failed; - ret = 0; + ret = check_leaf(t, (struct leaf *)n, key, flp, res); goto found; } @@ -1421,11 +1418,9 @@ static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp, } if (IS_LEAF(n)) { - plen = check_leaf(t, (struct leaf *)n, key, flp, res); - if (plen < 0) + ret = check_leaf(t, (struct leaf *)n, key, flp, res); + if (ret > 0) goto backtrace; - - ret = 0; goto found; } diff --git a/net/ipv4/netfilter/nf_nat_snmp_basic.c b/net/ipv4/netfilter/nf_nat_snmp_basic.c index 7750c97fde7..ffeaffc3fff 100644 --- a/net/ipv4/netfilter/nf_nat_snmp_basic.c +++ b/net/ipv4/netfilter/nf_nat_snmp_basic.c @@ -439,8 +439,8 @@ static unsigned char asn1_oid_decode(struct asn1_ctx *ctx, unsigned int *len) { unsigned long subid; - unsigned int size; unsigned long *optr; + size_t size; size = eoc - ctx->pointer + 1; diff --git a/net/ipv4/tcp_probe.c b/net/ipv4/tcp_probe.c index 5ff0ce6e9d3..7ddc30f0744 100644 --- a/net/ipv4/tcp_probe.c +++ b/net/ipv4/tcp_probe.c @@ -224,7 +224,7 @@ static __init int tcpprobe_init(void) if (bufsize < 0) return -EINVAL; - tcp_probe.log = kcalloc(sizeof(struct tcp_log), bufsize, GFP_KERNEL); + tcp_probe.log = kcalloc(bufsize, sizeof(struct tcp_log), GFP_KERNEL); if (!tcp_probe.log) goto err0; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 147588f4c7c..ff61a5cdb0b 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -749,12 +749,12 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp) } write_unlock_bh(&idev->lock); + addrconf_del_timer(ifp); + ipv6_ifa_notify(RTM_DELADDR, ifp); atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifp); - addrconf_del_timer(ifp); - /* * Purge or update corresponding prefix * diff --git a/net/ipv6/exthdrs.c 
b/net/ipv6/exthdrs.c index 3cd1c993d52..dcf94fdfb86 100644 --- a/net/ipv6/exthdrs.c +++ b/net/ipv6/exthdrs.c @@ -445,7 +445,7 @@ looped_back: kfree_skb(skb); return -1; } - if (!ipv6_chk_home_addr(&init_net, addr)) { + if (!ipv6_chk_home_addr(dev_net(skb->dst->dev), addr)) { IP6_INC_STATS_BH(ip6_dst_idev(skb->dst), IPSTATS_MIB_INADDRERRORS); kfree_skb(skb); diff --git a/net/irda/irnetlink.c b/net/irda/irnetlink.c index 9e1fb82e322..2f05ec1037a 100644 --- a/net/irda/irnetlink.c +++ b/net/irda/irnetlink.c @@ -101,8 +101,8 @@ static int irda_nl_get_mode(struct sk_buff *skb, struct genl_info *info) hdr = genlmsg_put(msg, info->snd_pid, info->snd_seq, &irda_nl_family, 0, IRDA_NL_CMD_GET_MODE); - if (IS_ERR(hdr)) { - ret = PTR_ERR(hdr); + if (hdr == NULL) { + ret = -EMSGSIZE; goto err_out; } diff --git a/net/mac80211/main.c b/net/mac80211/main.c index 98c0b5e56ec..df0836ff1a2 100644 --- a/net/mac80211/main.c +++ b/net/mac80211/main.c @@ -530,8 +530,6 @@ static int ieee80211_stop(struct net_device *dev) local->sta_hw_scanning = 0; } - flush_workqueue(local->hw.workqueue); - sdata->u.sta.flags &= ~IEEE80211_STA_PRIVACY_INVOKED; kfree(sdata->u.sta.extra_ie); sdata->u.sta.extra_ie = NULL; @@ -555,6 +553,8 @@ static int ieee80211_stop(struct net_device *dev) ieee80211_led_radio(local, 0); + flush_workqueue(local->hw.workqueue); + tasklet_disable(&local->tx_pending_tasklet); tasklet_disable(&local->tasklet); } diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c index 4d2b582dd05..b404537c0bc 100644 --- a/net/mac80211/mlme.c +++ b/net/mac80211/mlme.c @@ -547,15 +547,14 @@ static void ieee80211_set_associated(struct net_device *dev, sdata->bss_conf.ht_bss_conf = &conf->ht_bss_conf; } - netif_carrier_on(dev); ifsta->flags |= IEEE80211_STA_PREV_BSSID_SET; memcpy(ifsta->prev_bssid, sdata->u.sta.bssid, ETH_ALEN); memcpy(wrqu.ap_addr.sa_data, sdata->u.sta.bssid, ETH_ALEN); ieee80211_sta_send_associnfo(dev, ifsta); } else { + netif_carrier_off(dev); ieee80211_sta_tear_down_BA_sessions(dev, ifsta->bssid); ifsta->flags &= ~IEEE80211_STA_ASSOCIATED; - netif_carrier_off(dev); ieee80211_reset_erp_info(dev); sdata->bss_conf.assoc_ht = 0; @@ -569,6 +568,10 @@ static void ieee80211_set_associated(struct net_device *dev, sdata->bss_conf.assoc = assoc; ieee80211_bss_info_change_notify(sdata, changed); + + if (assoc) + netif_carrier_on(dev); + wrqu.ap_addr.sa_family = ARPHRD_ETHER; wireless_send_event(dev, SIOCGIWAP, &wrqu, NULL); } @@ -3611,8 +3614,10 @@ static int ieee80211_sta_find_ibss(struct net_device *dev, spin_unlock_bh(&local->sta_bss_lock); #ifdef CONFIG_MAC80211_IBSS_DEBUG - printk(KERN_DEBUG " sta_find_ibss: selected %s current " - "%s\n", print_mac(mac, bssid), print_mac(mac2, ifsta->bssid)); + if (found) + printk(KERN_DEBUG " sta_find_ibss: selected %s current " + "%s\n", print_mac(mac, bssid), + print_mac(mac2, ifsta->bssid)); #endif /* CONFIG_MAC80211_IBSS_DEBUG */ if (found && memcmp(ifsta->bssid, bssid, ETH_ALEN) != 0 && (bss = ieee80211_rx_bss_get(dev, bssid, diff --git a/net/mac80211/rc80211_pid.h b/net/mac80211/rc80211_pid.h index 04afc13ed82..4ea7b97d1af 100644 --- a/net/mac80211/rc80211_pid.h +++ b/net/mac80211/rc80211_pid.h @@ -141,7 +141,6 @@ struct rc_pid_events_file_info { * rate behaviour values (lower means we should trust more what we learnt * about behaviour of rates, higher means we should trust more the natural * ordering of rates) - * @fast_start: if Y, push high rates right after initialization */ struct rc_pid_debugfs_entries { struct dentry *dir; @@ -154,7 +153,6 @@ struct 
rc_pid_debugfs_entries { struct dentry *sharpen_factor; struct dentry *sharpen_duration; struct dentry *norm_offset; - struct dentry *fast_start; }; void rate_control_pid_event_tx_status(struct rc_pid_event_buffer *buf, @@ -267,9 +265,6 @@ struct rc_pid_info { /* Normalization offset. */ unsigned int norm_offset; - /* Fast starst parameter. */ - unsigned int fast_start; - /* Rates information. */ struct rc_pid_rateinfo *rinfo; diff --git a/net/mac80211/rc80211_pid_algo.c b/net/mac80211/rc80211_pid_algo.c index a849b745bdb..bcd27c1d759 100644 --- a/net/mac80211/rc80211_pid_algo.c +++ b/net/mac80211/rc80211_pid_algo.c @@ -398,13 +398,25 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local) return NULL; } + pinfo->target = RC_PID_TARGET_PF; + pinfo->sampling_period = RC_PID_INTERVAL; + pinfo->coeff_p = RC_PID_COEFF_P; + pinfo->coeff_i = RC_PID_COEFF_I; + pinfo->coeff_d = RC_PID_COEFF_D; + pinfo->smoothing_shift = RC_PID_SMOOTHING_SHIFT; + pinfo->sharpen_factor = RC_PID_SHARPENING_FACTOR; + pinfo->sharpen_duration = RC_PID_SHARPENING_DURATION; + pinfo->norm_offset = RC_PID_NORM_OFFSET; + pinfo->rinfo = rinfo; + pinfo->oldrate = 0; + /* Sort the rates. This is optimized for the most common case (i.e. * almost-sorted CCK+OFDM rates). Kind of bubble-sort with reversed * mapping too. */ for (i = 0; i < sband->n_bitrates; i++) { rinfo[i].index = i; rinfo[i].rev_index = i; - if (pinfo->fast_start) + if (RC_PID_FAST_START) rinfo[i].diff = 0; else rinfo[i].diff = i * pinfo->norm_offset; @@ -425,19 +437,6 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local) break; } - pinfo->target = RC_PID_TARGET_PF; - pinfo->sampling_period = RC_PID_INTERVAL; - pinfo->coeff_p = RC_PID_COEFF_P; - pinfo->coeff_i = RC_PID_COEFF_I; - pinfo->coeff_d = RC_PID_COEFF_D; - pinfo->smoothing_shift = RC_PID_SMOOTHING_SHIFT; - pinfo->sharpen_factor = RC_PID_SHARPENING_FACTOR; - pinfo->sharpen_duration = RC_PID_SHARPENING_DURATION; - pinfo->norm_offset = RC_PID_NORM_OFFSET; - pinfo->fast_start = RC_PID_FAST_START; - pinfo->rinfo = rinfo; - pinfo->oldrate = 0; - #ifdef CONFIG_MAC80211_DEBUGFS de = &pinfo->dentries; de->dir = debugfs_create_dir("rc80211_pid", @@ -465,9 +464,6 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local) de->norm_offset = debugfs_create_u32("norm_offset", S_IRUSR | S_IWUSR, de->dir, &pinfo->norm_offset); - de->fast_start = debugfs_create_bool("fast_start", - S_IRUSR | S_IWUSR, de->dir, - &pinfo->fast_start); #endif return pinfo; @@ -479,7 +475,6 @@ static void rate_control_pid_free(void *priv) #ifdef CONFIG_MAC80211_DEBUGFS struct rc_pid_debugfs_entries *de = &pinfo->dentries; - debugfs_remove(de->fast_start); debugfs_remove(de->norm_offset); debugfs_remove(de->sharpen_duration); debugfs_remove(de->sharpen_factor); diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c index 271cd01d57a..dd28fb239a6 100644 --- a/net/netfilter/nf_conntrack_proto_tcp.c +++ b/net/netfilter/nf_conntrack_proto_tcp.c @@ -844,9 +844,15 @@ static int tcp_packet(struct nf_conn *ct, /* Attempt to reopen a closed/aborted connection. * Delete this connection and look up again. */ write_unlock_bh(&tcp_lock); - if (del_timer(&ct->timeout)) + /* Only repeat if we can actually remove the timer. + * Destruction may already be in progress in process + * context and we must give it a chance to terminate. 
+ */ + if (del_timer(&ct->timeout)) { ct->timeout.function((unsigned long)ct); - return -NF_REPEAT; + return -NF_REPEAT; + } + return -NF_DROP; } /* Fall through */ case TCP_CONNTRACK_IGNORE: diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c index fdc14a0d21a..9080c61b71a 100644 --- a/net/netlabel/netlabel_cipso_v4.c +++ b/net/netlabel/netlabel_cipso_v4.c @@ -584,12 +584,7 @@ list_start: rcu_read_unlock(); genlmsg_end(ans_skb, data); - - ret_val = genlmsg_reply(ans_skb, info); - if (ret_val != 0) - goto list_failure; - - return 0; + return genlmsg_reply(ans_skb, info); list_retry: /* XXX - this limit is a guesstimate */ diff --git a/net/netlabel/netlabel_mgmt.c b/net/netlabel/netlabel_mgmt.c index 22c19126780..44be5d5261f 100644 --- a/net/netlabel/netlabel_mgmt.c +++ b/net/netlabel/netlabel_mgmt.c @@ -386,11 +386,7 @@ static int netlbl_mgmt_listdef(struct sk_buff *skb, struct genl_info *info) rcu_read_unlock(); genlmsg_end(ans_skb, data); - - ret_val = genlmsg_reply(ans_skb, info); - if (ret_val != 0) - goto listdef_failure; - return 0; + return genlmsg_reply(ans_skb, info); listdef_failure_lock: rcu_read_unlock(); @@ -501,11 +497,7 @@ static int netlbl_mgmt_version(struct sk_buff *skb, struct genl_info *info) goto version_failure; genlmsg_end(ans_skb, data); - - ret_val = genlmsg_reply(ans_skb, info); - if (ret_val != 0) - goto version_failure; - return 0; + return genlmsg_reply(ans_skb, info); version_failure: kfree_skb(ans_skb); diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c index 52b2611a6eb..56f80872924 100644 --- a/net/netlabel/netlabel_unlabeled.c +++ b/net/netlabel/netlabel_unlabeled.c @@ -1107,11 +1107,7 @@ static int netlbl_unlabel_list(struct sk_buff *skb, struct genl_info *info) goto list_failure; genlmsg_end(ans_skb, data); - - ret_val = genlmsg_reply(ans_skb, info); - if (ret_val != 0) - goto list_failure; - return 0; + return genlmsg_reply(ans_skb, info); list_failure: kfree_skb(ans_skb); diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c index 0c9d5a6950f..fcdb45d1071 100644 --- a/net/sctp/sm_statefuns.c +++ b/net/sctp/sm_statefuns.c @@ -5899,12 +5899,6 @@ static int sctp_eat_data(const struct sctp_association *asoc, return SCTP_IERROR_NO_DATA; } - /* If definately accepting the DATA chunk, record its TSN, otherwise - * wait for renege processing. - */ - if (SCTP_CMD_CHUNK_ULP == deliver) - sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_TSN, SCTP_U32(tsn)); - chunk->data_accepted = 1; /* Note: Some chunks may get overcounted (if we drop) or overcounted @@ -5924,6 +5918,9 @@ static int sctp_eat_data(const struct sctp_association *asoc, * and discard the DATA chunk. */ if (ntohs(data_hdr->stream) >= asoc->c.sinit_max_instreams) { + /* Mark tsn as received even though we drop it */ + sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_TSN, SCTP_U32(tsn)); + err = sctp_make_op_error(asoc, chunk, SCTP_ERROR_INV_STRM, &data_hdr->stream, sizeof(data_hdr->stream)); diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c index ce6cda6b699..a1f654aea26 100644 --- a/net/sctp/ulpevent.c +++ b/net/sctp/ulpevent.c @@ -710,6 +710,11 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc, if (!skb) goto fail; + /* Now that all memory allocations for this chunk succeeded, we + * can mark it as received so the tsn_map is updated correctly. 
+ */ + sctp_tsnmap_mark(&asoc->peer.tsn_map, ntohl(chunk->subh.data_hdr->tsn)); + /* First calculate the padding, so we don't inadvertently * pass up the wrong length to the user. * diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c index b976d9ed10e..04c41504f84 100644 --- a/net/xfrm/xfrm_user.c +++ b/net/xfrm/xfrm_user.c @@ -277,9 +277,8 @@ static void copy_from_user_state(struct xfrm_state *x, struct xfrm_usersa_info * memcpy(&x->props.saddr, &p->saddr, sizeof(x->props.saddr)); x->props.flags = p->flags; - if (!x->sel.family) + if (!x->sel.family && !(p->flags & XFRM_STATE_AF_UNSPEC)) x->sel.family = p->family; - } /* |