Age | Commit message | Author | Files | Lines |
|
The MIN_CONFIG is a single config that is considered to have all the
configs that are required to boot the box.
ADD_CONFIG is a list of configs that we add on top of that; it may
contain configs known to be broken (set off) or just configs that we
want every box to have, and this can include shared configs.
If a config has no MIN_CONFIG defined, but has multiple files defined
for the ADD_CONFIG, the test will die, because the MIN_CONFIG will
default to ADD_CONFIG. The problem is the code to open MIN_CONFIG
expects a string of one file, not multiple, and the open will fail.
Since the real minconfig that is used is a concatenation of the
MIN_CONFIG and ADD_CONFIG files, change the code to open that file
instead of whatever MIN_CONFIG defaults to.
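For example, a setup like the following (the paths are hypothetical)
used to die, because MIN_CONFIG defaulted to the multi-file ADD_CONFIG
list:
ADD_CONFIG = /home/test/configs/shared.config /home/test/configs/broken.config
With this change, the concatenated minconfig is opened instead.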
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The IGNORE_CONFIG file holds the configs that we don't want to change
(with their proper settings). But on startup, the make noconfig is
executed, and the configs that remain on are also put into the ignore
config category. These are configs that were forced on by the kconfig
scripts, not something that we found must be enabled to boot our
machine. By keeping the configs that are forced on by default separate
from the configs we found are required to boot the box, we get a much
more interesting IGNORE_CONFIG. In fact, the IGNORE_CONFIG can usually
end up holding just the must-have configs to boot, with only 6 or 7
configs set.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
If the file named by OUTPUT_MIN_CONFIG in the make_min_config test
already exists, prompt the user to ask if they want to use that config
instead, as that is very often what is wanted, especially when the test
has been interrupted. The OUTPUT_MIN_CONFIG is usually the config that
one wants to use to continue the test where they left off.
But if START_MIN_CONFIG is defined (and thus the MIN_CONFIG is not the
default), do not prompt, as it would be annoying if the user has this
as one of many tests and the test pauses waiting for input while the
user is sleeping.
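For example (the path is hypothetical), an interrupted run can be
resumed without a prompt by pointing START_MIN_CONFIG at the previous
output:
OUTPUT_MIN_CONFIG = ${OUTPUT_DIR}/config_min
START_MIN_CONFIG = ${OUTPUT_DIR}/config_min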
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
To save time, the test does not just grab any option and test it. The
Kconfig files are examined to determine the dependencies of the configs.
If a config is chosen that depends on another config, the config it
depends on will be checked first. By checking the parents first, we can
eliminate whole groups of configs that may have been enabled.
For example, if a USB device config is chosen and depends on
CONFIG_USB, the CONFIG_USB will be tested before the device.
If CONFIG_USB is found not to be needed, it, as well as all
configs that depend on it, will be disabled and removed from
the current min_config.
Note, the code from streamline_config (make localmodconfig)
was copied and used to find the dependencies in the Kconfig file.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
After doing a make localyesconfig, your kernel configuration may
not be the most useful minimum configuration. Having a true minimum
config that you can use against other configs is very useful if
someone else has a config that breaks on your code. Forcing only
those configurations that are truly required to boot your machine
gives you less of a chance that one of your set configurations
will make the bug go away, and a better chance of reproducing the
reported bug with the broken config.
Note, this does take some time, and may require you to run the
test overnight, or perhaps over the weekend. But it also allows
you to interrupt it, and it gives you the current minimum config
found up to that point.
Note, this test automatically assumes a BUILD_TYPE of oldconfig
and its test type acts like boot.
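A minimal test entry for this might look like the following (the
output path is hypothetical):
TEST_START
TEST_TYPE = make_min_config
OUTPUT_MIN_CONFIG = ${OUTPUT_DIR}/config_min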
TODO: add a test version that makes the config do more than just
boot, like having network access.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
There have been too many times that I put in one too many SKIP
TEST_STARTs and accidentally start the test with the default randconfig,
so I added this to have ktest ask the user which test they want to run
if no TEST_START is specified.
Now if I accidentally start the test with all TEST_STARTs skipped, ktest
asks which test I want to run, and I have a chance to kill it before it
does a make mrproper on my build directory.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Several places had the following code:
get_grub_index;
get_version;
install;
start_monitor;
return monitor;
Creating a function "start_monitor_and_boot()" replaces these multiple
uses with a single call.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When doing a patchcheck test, there may be warnings that gcc produces
which are OK, and the test should not fail on those commits. Adding an
IGNORE_WARNINGS option that takes a space-delimited list of SHA1s to
ignore lets the user avoid having the test fail on certain commits.
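For example (the SHA1 below is just a placeholder):
IGNORE_WARNINGS = 0f1a2b3c4d5e6f708192a3b4c5d6e7f801234567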
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The tar command to create the module directory is cjf, but the
extraction only had xf. This works on most versions of tar, but some
versions of tar require xjf for extraction as well.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
As multiple tests may be executed by the same server, have the test
machine name add uniqueness to the value of the temp directory.
Otherwise the temp directories may overwrite each other's tests.
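For instance, assuming the TMP_DIR and MACHINE options (the path is
illustrative), the machine name can be made part of the temp directory:
TMP_DIR = /tmp/ktest/${MACHINE}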
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
There are some cases where a patch may need to be applied to the kernel
in patchcheck or bisect tests. Adding a PRE_BUILD option to apply the
patch and a POST_BUILD option to remove it allows this to be done easily.
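For example (the patch path is hypothetical), a patch can be applied
before each build and reverted afterward:
PRE_BUILD = patch -d ${BUILD_DIR} -p1 < /tmp/my-fix.patch
POST_BUILD = patch -d ${BUILD_DIR} -p1 -R < /tmp/my-fix.patch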
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When a config is set with CONFIG_MODULES=n, it does not mean that the
kernel does not need an initrd to boot. For systems that depend on LVM
and such, an initrd must run first.
If POST_INSTALL is defined, then run the post install regardless of
whether modules are needed or not.
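For example (the host and image path are hypothetical), an initrd can
be regenerated on the target after each install:
POST_INSTALL = ssh root@mybox dracut -f /boot/initramfs-test.img $KERNEL_VERSION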
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The LOG_FILE variable needs to evaluate the $ options as well.
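For example (the path is illustrative), option references can now be
used in the value:
LOG_FILE = ${OUTPUT_DIR}/${MACHINE}.log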
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
After a bug is found, the STOP_AFTER_FAILURE timeout is used to
determine how much output should be printed before breaking out
of the monitor loop. This is to get things like call traces and
enough information about the bug to help determine what caused it.
The STOP_AFTER_FAILURE is usually much shorter than the TIMEOUT
that is used to determine when to quit after no more stdio is given.
But since the stdio read uses a wait on I/O, the STOP_AFTER_FAILURE is
only checked after we get something from I/O. But if the I/O does
not return any more data, we wait the TIMEOUT period instead, even
though we already triggered a bug report.
The wait on I/O should honor the STOP_AFTER_FAILURE time if a bug has
been found.
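As an illustration (the values are arbitrary), with settings such as
the following the wait is now bounded by STOP_AFTER_FAILURE once a bug
is seen, rather than by TIMEOUT:
STOP_AFTER_FAILURE = 30
TIMEOUT = 120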
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Using the build KCONFIG_ALLCONFIG environment variable to force
the min config may not always work properly. Since ktest is
written in perl, it is trivial to read and replace the current
config with the configs specified by the min config.
Now the min config (and add configs) are read by perl, and before a
make is done, the corresponding configs in the .config file are replaced
by the versions from the min config.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When searching through several tests, it gets confusing which test
result is for which test. Adding the TEST_NAME option lets the user
tell which test result belongs to which test.
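For example (the name is arbitrary):
TEST_START
TEST_NAME = boot-rc1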
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Currently the config_bisect compares the min config with the
CONFIG_BISECT config. There may be another config that we know is good
whose set options we also want to ignore. By passing in this config,
the bisect will ignore the options that are set in the good config.
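A sketch of the setup (the paths are hypothetical, and the option name
for the good config is assumed to be CONFIG_BISECT_GOOD):
TEST_TYPE = config_bisect
CONFIG_BISECT = /path/to/the/bad/config
CONFIG_BISECT_GOOD = /path/to/the/known/good/config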
Note: This only ignores the config, it does not (yet) handle
options that are different between the two configs. If the good
config has "SLAB" set and the bad config has "SLUB" it will not
find the bug if the bug had to do with changing these two options.
This is something that I intend to implement in the future.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When a triple fault happens in a test, no call trace nor panic
is displayed. Instead, the system reboots to the good kernel.
Since the good kernel may display a boot prompt that matches the
success string, ktest may think that the test succeeded, when it
did not.
Detecting triple faults is tricky because it is hard to generalize
what a reboot looks like. The best that we can come up with for now
is to examine the Linux banner. If we detect that the Linux banner
matches the kernel we want to test, then look to see if we hit another
Linux banner showing that a different kernel has booted. This can be
assumed to be a triple fault.
We can't just check for two Linux banners because things like
early printk may cause the Linux banner to be displayed twice. Checking
for different kernel versions should be the safe bet.
In case this for some reason detects a false triple fault, a new ktest
config option is also created:
DETECT_TRIPLE_FAULT
This can be set to 0 to disable this checking.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Different timeouts can cause the ktest monitor to break out of the
loop. It is annoying not to know the reason the monitor loop was
exited. Display the reason the loop was exited.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Ignoring the unset values of the minconfig when deciding what to test
in the config_bisect can keep the problem config from being tested as
well. Just do not test the configs that are set in the minconfig.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The command that is called to reboot the kernel may fail,
but the return code is not passed back to the ktest.pl script.
This is because a ';' is used between the two commands and
if the second command fails, only the first command's return
code is returned. Using a '&&' between the two commands fixes
this.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Because in perl the value returned by $#arr is the last index and not
the actual size of the array (for a two-element array, $#arr is 1 while
the size is 2), we end the config bisect early, thinking there is only
one config left when there are in fact two. Thus the result has a 50%
chance of picking the correct config that caused the problem.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
There are cases where one ktest option may be used within another
ktest option. Allow them to be reused just like config variables,
but they are evaluated at test time, not at config processing time.
Thus having something like:
MAKE_CMD = make ARCH=${ARCH}
TEST_START
ARCH = powerpc
TEST_START
ARCH = arm
will have ARCH defined separately for each test iteration.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
I found that I constantly reuse information for each test case.
It would be nice to just define a variable to reuse.
For example I may have:
TEST_START
[...]
TEST = ssh root@mybox /path/to/my/script
TEST_START
[...]
TEST = ssh root@mybox /path/to/my/script
[etc]
The issue is, I may want to change that script or one of the other
fields, and then I need to update each line individually.
With the addition of config variables (variables only used during parsing
the config) we can simplify the config files. These variables can
also be defined multiple times and each time the new value will
overwrite the old value.
The convention to use a config variable over a ktest option is to use :=
instead of =.
Now we could do:
USER := root
TARGET := mybox
TEST_SCRIPT := /path/to/my/script
TEST_CASE := ssh ${USER}@${TARGET} ${TEST_SCRIPT}
TEST_START
[...]
TEST = ${TEST_CASE}
TEST_START
[...]
TEST = ${TEST_CASE}
[etc]
Now we just need to update the variables at the top.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The kernel built from a patch being checked may not leave the machine
in a state where the next kernel can be copied to it. Reboot to a known
good kernel before continuing to the next kernel to test.
Added option PATCHCHECK_SLEEP_TIME for the max time to sleep between
patchcheck reboots.
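For example (the value is illustrative):
PATCHCHECK_SLEEP_TIME = 60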
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Reboot after each bisect run regardless of whether the bisect passed
or failed. The test may just be to boot the kernel, and that kernel
may not have a way to copy the next kernel to it, so reboot to a known
good kernel after each bisect run.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
If the test failed due to a timeout waiting for boot, print a message
saying so. Otherwise the user will be confused as to why their test
just failed.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The command to run post install (for those that want initrds) was
broken. Instead of substituting the $KERNEL_VERSION variable, it was
replacing the entire command with nothing.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-ktest
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-ktest:
ktest: Add STOP_TEST_AFTER to stop the test after a period of time
ktest: Monitor kernel while running of user tests
ktest: Fix bug where the test would not end after failure
ktest: Add BISECT_FILES to run git bisect on paths
ktest: Add BISECT_SKIP
ktest: Add manual bisect
ktest: Handle kernels before make oldnoconfig
ktest: Start failure timeout on panic too
ktest: Print logfile name on failure
|
|
Currently, if a test causes constant output but never reaches a
boot prompt or crashes, the test will never stop. Add a STOP_TEST_AFTER
variable that will stop (and fail) the test after it has run for the
given amount of time. The default is 10 minutes. Setting this
variable to -1 will disable it.
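For example, to allow a test to run for up to 20 minutes (the value is
illustrative; -1 disables the limit):
STOP_TEST_AFTER = 1200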
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Record the console output of tests to both the console and the log.
Also, record the bug reports after the test has completed.
Currently, if a kernel bug happens while running the userland
test, the test stops and will not record the kernel bug. This
makes it difficult to solve what happened.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The config STOP_AFTER_FAILURE is the number of seconds to continue
the test when a failure is detected. This lets the monitor record
more data to the logs and console that may be helpful in solving
the bug that was found.
But the test had a bug. If the failure caused multiple
"Call Trace" stack dumps, the start time to compare the
STOP_AFTER_FAILURE would constantly be reset. Only update the start
time at the first "Call Trace" instance.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Add the config option BISECT_FILES that allows the user to
specify what path in the kernel to run the git bisect on.
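For example, to limit the bisect to one area of the tree (the path is
arbitrary):
BISECT_FILES = arch/x86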
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
If, during a git bisect, ktest fails on something other than what it
is testing (for example, BISECT_TYPE is test but the build fails), and
BISECT_SKIP is set, then it will do a "git bisect skip" instead of just
failing the bisect and leaving the user to find a good commit to test.
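A sketch of such a bisect test (the revisions are placeholders):
TEST_TYPE = bisect
BISECT_TYPE = test
BISECT_GOOD = v2.6.38
BISECT_BAD = HEAD
BISECT_SKIP = 1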
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
For both git bisect and config bisect, if BISECT_MANUAL is set to 1,
then bisect will stop between iterations and ask the user for the
result. The actual result is ignored. This makes it possible to
use ktest.pl for bisecting configs and git and let the user examine
the results themselves and enter their own results.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When bisecting, one may come across a kernel that does not have
make oldnoconfig. In this case, we need to pipe the command "yes"
into a make oldconfig. This selects the default for each prompt
instead of 'n', but it works as a workaround.
Note, "yes n" will not work because a config may have a value for
which "n" is not acceptable.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Currently we just look for a Call Trace to start the timeout for
when to reboot the box. But if the kernel panics and does not
show a Call Trace, the test will not reboot the box after
the specified timeout.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
If the test fails and a logfile was specified, print its name to
let the user know where to look for more information on the
failure.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
OK, the copyright allows you to write a copy, still I think the lawyers
prefer the correct spelling.
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
LKML-Reference: <1295899921-11333-1-git-send-email-u.kleine-koenig@pengutronix.de>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
In keeping with the notion that all tools should be simple for
all to use, I've changed ktest.pl to ask for mandatory options
instead of just failing. It will append (or create) the options
the user types in onto the config file.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
During the config_bisect, in case of failure, it is nice to have
the last good and bad .configs that were used. This would let
us restart the config_bisect from those configs.
Copy the last good config into the output dir as config_good,
and the last bad config as config_bad.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
The run_ssh function handles the ssh variable $SSH_COMMAND, which was
not being used by the run_command call in the reboot_to function.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Added the options STOP_AFTER_SUCCESS and STOP_AFTER_FAILURE to
allow the user to give a time (in seconds) to stop the monitor
after a stack trace or login has been detected. Sometimes the
kernel constantly prints out to the console and this may cause
the test to run indefinitely.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
When we store failures, we create a directory that has the build_type
in it. For useconfig, the build_type also contains the path of the
config file it uses, which unfortunately creates extra directories on
failure. Parse off the directory portion of the name when creating the
directory to store the failures.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
By using the "use_config" for minconfig and addconfig we risk
trying to copy itself to itself, which will cause an unexpected failure.
Use a different name instead.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Added documentation for SSH_EXEC, SCP_TO_TARGET, REBOOT,
and CONFIG_BISECT and friends.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Add a compare script that makes sure that all the options in
sample.conf are used in ktest.pl, and all the options in
ktest.pl are described in sample.conf.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Added the ability to do a config_bisect. It starts with a bad
config and does the following loop:
  Enable half the configs.
    If none of the configs to check are enabled
    (caused by missing dependencies), enable the other half.
  Run the test.
    If the test passes, remove the configs from the check,
    but keep them enabled for further tests (to satisfy
    dependencies).
    Else, remove any config that was not enabled, as we have
    found a new config that can cause a failure.
  Loop till we have only one config left.
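A sketch of invoking the new test type (the config path is
hypothetical):
TEST_TYPE = config_bisect
CONFIG_BISECT_TYPE = boot
CONFIG_BISECT = /path/to/the/bad/config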
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Updated to version 0.2.
Now have SSH_EXEC options.
Also added some cleanups for keeping track of success and
reading the config file.
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|
|
Have an easy way to parse the log file for success or failure.
KTEST RESULT: ...
Suggested-by: Tim Bird <tim.bird@am.sony.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
|