NumPy/SciPy Testing Guidelines
==============================

.. contents::


Introduction
''''''''''''

Until the 1.15 release, NumPy used the `nose`_ testing framework; it now uses
the `pytest`_ framework. The older framework is still maintained in order to
support downstream projects that use it, but all tests for NumPy itself
should use pytest.

Our goal is that every module and package in SciPy and NumPy
should have a thorough set of unit
tests. These tests should exercise the full functionality of a given
routine as well as its robustness to erroneous or unexpected input
arguments. Long experience has shown that by far the best time to
write the tests is before you write or change the code - this is
`test-driven development
<https://en.wikipedia.org/wiki/Test-driven_development>`__.  The
arguments for this can sound rather abstract, but we can assure you
that you will find that writing the tests first leads to more robust
and better designed code. Well-designed tests with good coverage make
an enormous difference to the ease of refactoring. Whenever a new bug
is found in a routine, you should write a new test for that specific
case and add it to the test suite to prevent that bug from creeping
back in unnoticed.

To run SciPy's full test suite, use the following::

  >>> import scipy
  >>> scipy.test()

or from the command line::

  $ python runtests.py

SciPy uses the testing framework from :mod:`numpy.testing`, so all
the SciPy examples shown here are also applicable to NumPy.  NumPy's full test
suite can be run as follows::

  >>> import numpy
  >>> numpy.test()

The test method may take two or more arguments; the first, ``label``, is a
string specifying what should be tested, and the second, ``verbose``, is an
integer giving the level of output verbosity. See the docstring for
``numpy.test`` for details.  The default value for ``label`` is 'fast', which
will run the standard tests.  The string 'full' will run the full battery
of tests, including those identified as being slow to run. If ``verbose``
is 1 or less, the tests will just show information messages about the tests
that are run; if it is greater than 1, the tests will also provide
warnings about missing tests. So if you want to run every test and get
messages about which modules don't have tests::

  >>> scipy.test(label='full', verbose=2) # or scipy.test('full', 2)

Finally, if you are only interested in testing a subset of SciPy, for
example, the ``integrate`` module, use the following::

  >>> scipy.integrate.test()

or from the command line::

  $ python runtests.py -t scipy/integrate/tests

The rest of this page will give you a basic idea of how to add unit
tests to modules in SciPy. It is extremely important for us to have
extensive unit testing since this code is going to be used by
scientists and researchers and is being developed by a large number of
people spread across the world. So, if you are writing a package that
you'd like to become part of SciPy, please write the tests as you
develop the package. Also, since much of SciPy is legacy code that was
originally written without unit tests, there are still several modules
that don't have tests yet. Please feel free to choose one of these
modules and develop tests for it as you read through
this introduction.

Writing your own tests
''''''''''''''''''''''

Every Python module, extension module, or subpackage in the SciPy
package directory should have a corresponding ``test_<name>.py`` file.
Pytest examines these files for test functions and methods (named
``test*``) and test classes (named ``Test*``).

Suppose you have a SciPy module ``scipy/xxx/yyy.py`` containing a
function ``zzz()``.  To test this function you would create a test
module called ``test_yyy.py``.  If you only need to test one aspect of
``zzz``, you can simply add a test function::

  from numpy.testing import assert_
  from scipy.xxx.yyy import zzz

  def test_zzz():
      assert_(zzz() == 'Hello from zzz')

More often, we need to group a number of tests together, so we create
a test class::

  from numpy.testing import assert_, assert_raises

  # import xxx symbols
  from scipy.xxx.yyy import zzz

  class TestZzz:
      def test_simple(self):
          assert_(zzz() == 'Hello from zzz')

      def test_invalid_parameter(self):
          assert_raises(...)

Within these test methods, ``assert_()`` and related functions are used to test
whether a certain assumption is valid. If the assertion fails, the test fails.
Note that the Python builtin ``assert`` should not be used, because it is
stripped during compilation with ``-O``.

Note that ``test_`` functions or methods should not have a docstring, because
that makes it hard to identify the test from the output of running the test
suite with ``verbose=2`` (or similar verbosity setting).  Use plain comments
(``#``) if necessary.
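
Besides ``assert_``, :mod:`numpy.testing` provides assertion helpers suited
to numerical code, such as ``assert_equal`` and ``assert_allclose``. A
minimal sketch (the helper names are real ``numpy.testing`` functions; the
tested values are purely illustrative)::

  import numpy as np
  from numpy.testing import assert_equal, assert_allclose

  def test_assertion_helpers():
      # exact, element-by-element comparison
      assert_equal(np.array([1, 2]) + 1, np.array([2, 3]))
      # floating-point comparison with a relative tolerance
      assert_allclose(np.sqrt(2.0) ** 2, 2.0, rtol=1e-12)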

Labeling tests
--------------

As an alternative to ``pytest.mark.<label>``, :mod:`numpy.testing` provides
a number of label decorators you can use.

Unlabeled tests like the ones above are run in the default
``scipy.test()`` run.  If you want to label your test as slow - and
therefore reserved for a full ``scipy.test(label='full')`` run, you
can label it with a decorator::

  # the numpy.testing module includes 'import decorators as dec'
  from numpy.testing import dec, assert_

  @dec.slow
  def test_big():
      print('Big, slow test')

Similarly for methods::

  class TestZzz:
      @dec.slow
      def test_simple(self):
          assert_(zzz() == 'Hello from zzz')

Available labels are:

- ``slow``: marks a test as taking a long time
- ``setastest(tf)``: work-around for test discovery when the test name is
  non-conformant
- ``skipif(condition, msg=None)``: skips the test when ``eval(condition)`` is
  ``True``
- ``knownfailureif(fail_cond, msg=None)``: will avoid running the test if
  ``eval(fail_cond)`` is ``True``, useful for tests that conditionally segfault
- ``deprecated(conditional=True)``: filters deprecation warnings emitted in the
  test
- ``parametrize(var, input)``: an alternative to
  `pytest.mark.parametrize
  <https://docs.pytest.org/en/latest/parametrize.html>`_
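
For example, a conditional skip and a known failure might look like this (a
minimal sketch; the ``on_windows`` condition is purely illustrative)::

  import sys

  from numpy.testing import dec

  on_windows = sys.platform.startswith('win')

  @dec.skipif(on_windows, "does not run on Windows")
  def test_unix_only():
      pass

  @dec.knownfailureif(on_windows, "known to fail on Windows")
  def test_fails_on_windows():
      pass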

Easier setup and teardown functions / methods
---------------------------------------------

Testing looks for module-level or class-level setup and teardown functions by
name; thus::

  def setup():
      """Module-level setup"""
      print('doing setup')

  def teardown():
      """Module-level teardown"""
      print('doing teardown')


  class TestMe:
      def setup(self):
          """Class-level setup"""
          print('doing setup')

      def teardown(self):
          """Class-level teardown"""
          print('doing teardown')


Setup and teardown functions attached to individual functions and methods
are known as "fixtures", and their use is not encouraged.

Parametric tests
----------------

One very nice feature of testing is allowing easy testing across a range
of parameters - a nasty problem for standard unit tests. Use the
``dec.parametrize`` decorator or ``pytest.mark.parametrize``.
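
A minimal sketch with ``pytest.mark.parametrize`` (the tested function and
values are purely illustrative)::

  import pytest

  from numpy.testing import assert_

  @pytest.mark.parametrize("n, expected", [(1, 2), (2, 4), (3, 6)])
  def test_double(n, expected):
      # each (n, expected) pair runs as a separate test case
      assert_(2 * n == expected)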

Doctests
--------

Doctests are a convenient way of documenting the behavior of a function
and allowing that behavior to be tested at the same time.  The output
of an interactive Python session can be included in the docstring of a
function, and the test framework can run the example and compare the
actual output to the expected output.
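
For example, a function carrying its own doctest might look like this (a
minimal, illustrative sketch)::

  def add_one(x):
      """Return ``x`` plus one.

      >>> add_one(2)
      3
      >>> add_one(-0.5)
      0.5
      """
      return x + 1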

The doctests can be run by adding the ``doctests`` argument to the
``test()`` call; for example, to run all tests (including doctests)
for numpy.lib::

  >>> import numpy as np
  >>> np.lib.test(doctests=True)

The doctests are run as if they are in a fresh Python instance which
has executed ``import numpy as np``. Tests that are part of a SciPy
subpackage will have that subpackage already imported. E.g. for a test
in ``scipy/linalg/tests/``, the namespace will be created such that
``from scipy import linalg`` has already executed.


``tests/``
----------

Rather than keeping the code and the tests in the same directory, we
put all the tests for a given subpackage in a ``tests/``
subdirectory. For our example, if it doesn't already exist, you will
need to create a ``tests/`` directory in ``scipy/xxx/``. So the path
for ``test_yyy.py`` is ``scipy/xxx/tests/test_yyy.py``.

Once ``scipy/xxx/tests/test_yyy.py`` is written, it is possible to
run the tests by going to the ``tests/`` directory and typing::

  python test_yyy.py

Or if you add ``scipy/xxx/tests/`` to the Python path, you could run
the tests interactively in the interpreter like this::

  >>> import test_yyy
  >>> test_yyy.test()

``__init__.py`` and ``setup.py``
--------------------------------

Usually, however, adding the ``tests/`` directory to the Python path
isn't desirable. Instead, it is better to invoke the tests straight
from the module ``xxx``. To this end, simply place the following lines
at the end of your package's ``__init__.py`` file::

  ...
  def test(level=1, verbosity=1):
      from numpy.testing import Tester
      return Tester().test(level, verbosity)

You will also need to add the tests directory to the configuration
section of your ``setup.py``::

  ...
  def configuration(parent_package='', top_path=None):
      ...
      config.add_data_dir('tests')
      return config
  ...

Now you can do the following to test your module::

  >>> import scipy
  >>> scipy.xxx.test()

Also, when invoking the entire SciPy test suite, your tests will be
found and run::

  >>> import scipy
  >>> scipy.test()
  # your tests are included and run automatically!

Tips & Tricks
'''''''''''''

Creating many similar tests
---------------------------

If you have a collection of tests that must be run multiple times with
minor variations, it can be helpful to create a base class containing
all the common tests, and then create a subclass for each variation.
Several examples of this technique exist in NumPy; below are excerpts
from one in `numpy/linalg/tests/test_linalg.py
<https://github.com/numpy/numpy/blob/master/numpy/linalg/tests/test_linalg.py>`__::

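  # NOTE: this excerpt relies on imports made elsewhere in that file
  # (array, single, double, dot, identity, asarray, matrix, linalg and
  # the numpy.testing assertion helpers); imply is a small boolean
  # helper defined in the same file.
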
  class LinalgTestCase:
      def test_single(self):
          a = array([[1.,2.], [3.,4.]], dtype=single)
          b = array([2., 1.], dtype=single)
          self.do(a, b)

      def test_double(self):
          a = array([[1.,2.], [3.,4.]], dtype=double)
          b = array([2., 1.], dtype=double)
          self.do(a, b)

      ...

  class TestSolve(LinalgTestCase):
      def do(self, a, b):
          x = linalg.solve(a, b)
          assert_almost_equal(b, dot(a, x))
          assert_(imply(isinstance(b, matrix), isinstance(x, matrix)))

  class TestInv(LinalgTestCase):
      def do(self, a, b):
          a_inv = linalg.inv(a)
          assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0]))
          assert_(imply(isinstance(a, matrix), isinstance(a_inv, matrix)))

In this case, we wanted to test solving a linear algebra problem using
matrices of several data types, using ``linalg.solve`` and
``linalg.inv``.  The common test cases (for single-precision,
double-precision, etc. matrices) are collected in ``LinalgTestCase``.

Known failures & skipping tests
-------------------------------

Sometimes you might want to skip a test or mark it as a known failure,
such as when the test suite is being written before the code it's
meant to test, or if a test only fails on a particular architecture.

To skip a test, simply use ``skipif``::

  import pytest

  @pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
  def test_something(foo):
      ...

The test is marked as skipped if ``SkipMyTest`` evaluates to true,
and the message shown in verbose test output is the ``reason`` given to
``skipif``.  Similarly, a test can be marked as a known failure by
using ``xfail``::

  import pytest

  @pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...")
  def test_something_else(foo):
      ...

Of course, a test can be unconditionally skipped or marked as a known
failure by using ``skip`` or ``xfail`` without argument, respectively.
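
For example (a minimal sketch; a ``reason`` string may also be passed to
either marker)::

  import pytest

  @pytest.mark.skip
  def test_not_ready():
      pass

  @pytest.mark.xfail
  def test_known_broken():
      pass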

The total number of skipped and known failing tests is displayed
at the end of the test run.  Skipped tests are marked as ``'S'`` in
the test results (or ``'SKIPPED'`` for ``verbose > 1``), and known
failing tests are marked as ``'x'`` (or ``'XFAIL'`` if ``verbose >
1``).

Tests on random data
--------------------

Tests on random data are good, but since test failures are meant to expose
new bugs or regressions, a test that passes most of the time but fails
occasionally with no code changes is not helpful. Make the random data
deterministic by setting the random number seed before generating it.  Use
either Python's ``random.seed(some_number)`` or NumPy's
``numpy.random.seed(some_number)``, depending on the source of random numbers.
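
For example (a minimal sketch; the statistic and tolerance are purely
illustrative)::

  import numpy as np

  from numpy.testing import assert_allclose

  def test_mean_of_random_sample():
      # seed the generator so the "random" data is identical on every run
      np.random.seed(12345)
      data = np.random.rand(10000)
      assert_allclose(data.mean(), 0.5, atol=0.01)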


.. _nose: https://nose.readthedocs.io/en/latest/
.. _pytest: https://pytest.readthedocs.io
.. _parameterization: https://docs.pytest.org/en/latest/parametrize.html