path: root/torch/testing
Age         Commit message  [Author; files changed, lines removed/added]
2018-04-17  Codemod to update our codebase to 0.4 standard (#6641)  [Tongzhou Wang; 1 file, -2/+2]
* Codemod to update our codebase to 0.4 standard
* Update some of the test scripts
* Remove Variable in test_clip_grad_value
* Fix _symbolic_override_wrapper_maker
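For context, the 0.4 standard merged Variable into Tensor, which is why the Variable wrapper could be dropped from the tests. A minimal before/after sketch of the kind of change such a codemod makes (illustrative, not taken from the codemod itself):

    import torch

    # Pre-0.4 style: wrap a Tensor in a Variable to track gradients.
    #   from torch.autograd import Variable
    #   x = Variable(torch.ones(3), requires_grad=True)

    # 0.4 style: Tensor and Variable are merged, so gradient tracking
    # is requested directly at construction time.
    x = torch.ones(3, requires_grad=True)
    y = (x * 2).sum()
    y.backward()
    print(x.grad)  # tensor([2., 2., 2.])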
2018-04-12  Separate cuda-ness from dtype. (#6470)  [gchanan; 1 file, -5/+2]
* Separate cuda-ness from dtype.

  There are no longer torch.cuda.int64, etc.; only torch.int64 and friends, which correspond to at::ScalarType. At the Python arg parser level, the corresponding ATen type is selected from the combination of (ScalarType, Layout, Device); see the short illustration after this entry.

  There is also currently unused code in here for supporting ScalarType in native_functions; this will be used for specifying aggregate types on reduction functions.

* Fix test_autograd.
* Add defaults to randint_like.
* Track is_cuda in py tensor types.
* Fix test_sparse.
* Fix multiprocessing.
* Fix rnn.
* Fix test_nn.
* Fix flake8.
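In practice this means a dtype such as torch.int64 no longer encodes the device; the device is passed separately. A small sketch of the resulting API (the second branch assumes a CUDA-enabled build):

    import torch

    # The dtype carries only the scalar type; the device is a separate argument.
    a = torch.zeros(3, dtype=torch.int64)  # CPU int64
    if torch.cuda.is_available():
        b = torch.zeros(3, dtype=torch.int64, device='cuda')
        assert a.dtype == b.dtype           # torch.int64 in both cases
        assert b.is_cuda and not a.is_cuda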
2018-04-03  Reduce flakiness of math tests in test_torch.py (#6200)  [Sam Gross; 1 file, -3/+64]
This compares the torch function against the reference math function on a relatively small set of inputs, including integers, extremes of some common functions, zero, a few numbers from randn, and a few numbers near 1e6. The idea here is not to be completely exhaustive, but rather to quickly expose the most common bugs; exhaustive checks would mean evaluating torch functions against all ~4e9 possible float32 values. The torch functions are compared on contiguous vs. non-contiguous inputs and on large vs. small tensors.

Also:
* Make torch.allclose work with nan and +/-inf
* Add torch.isclose (like numpy.isclose)
* Add torch.testing.assert_allclose (like numpy.testing.assert_allclose)
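A short sketch of the helpers this commit adds, mirroring their numpy counterparts (default tolerances assumed):

    import torch

    a = torch.tensor([1.0, float('nan'), float('inf')])
    b = torch.tensor([1.0 + 1e-9, float('nan'), float('inf')])

    # Elementwise comparison with relative/absolute tolerances, like np.isclose.
    print(torch.isclose(a, b, rtol=1e-5, atol=1e-8, equal_nan=True))

    # Single-boolean reduction; equal_nan=True makes nan compare equal to nan,
    # and matching +/-inf values already compare equal.
    print(torch.allclose(a, b, equal_nan=True))  # True

    # Test helper that raises AssertionError with a diagnostic on mismatch.
    torch.testing.assert_allclose(torch.tensor([1.0, 2.0]),
                                  torch.tensor([1.0, 2.0 + 1e-9]))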
2018-04-02  Introduce torch.layout and split layout from dtypes. (#6145)  [gchanan; 1 file, -7/+1]
* Introduce torch.layout and split layout from dtypes.

  Tensors (and tensor types) now have a 'layout' attribute that returns either 'torch.strided' or 'torch.sparse_coo' (see the example after this entry).

  Previously, dtypes were 1-to-1 with ATen types/PyTensorTypes; the impetus behind this decision was to make things easy in the common case (i.e. specifying a type in a factory function). But this doesn't really follow for sparsity, which isn't a common case. It also doesn't properly represent the concept of a dtype, which in numpy is a proper scalar type (i.e. roughly the type returned from indexing the last dimension of an n-d array); the dtype should be the same whether or not the tensor is represented via strides, sparsity, etc.

  This is accomplished by:
  1) having the dtype of a tensor return the (device-type, scalar-type) combination, i.e. torch.cuda.float32, so both torch.cuda.FloatTensor and torch.cuda.sparse.FloatTensor have the same dtype
  2) adding a layout parameter to Python functions, where the combination of (dtype, layout) maps to an ATen type that is used for dispatch

* Formatting, make init throw python_error.
* Fix cuda not enabled error message.
* Fix test.
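A small illustration of the layout attribute and the layout keyword described above (a sketch, assuming CPU float tensors):

    import torch

    dense = torch.zeros(2, 3)
    print(dense.layout)   # torch.strided

    # Factory functions accept layout alongside dtype; the (dtype, layout)
    # combination selects the ATen type used for dispatch.
    sparse = torch.zeros(2, 3, layout=torch.sparse_coo)
    print(sparse.layout)  # torch.sparse_coo

    # The dtype is the same whether the tensor is strided or sparse.
    assert dense.dtype == sparse.dtype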
2018-03-09  Add torch.empty, torch.full and new_ size Tensor factory methods. (#5668)  [gchanan; 1 file, -0/+13]
* Add torch.empty, torch.full and new_* size-based Tensor factory methods (example below).

  This adds torch.full and torch.empty, equivalents of np.full and np.empty. In addition, it adds the size-based Tensor factory methods new_empty, new_ones, new_full, new_zeros, which are meant to complete the separation of the legacy "new" method into data-based and size-based functions. This also fixes an issue in sparse zeros_like when the dtype didn't match the argument dtype.

* Get rid of unnecessary zero in sparse tensor zeros_like.
* Fix test if only 1 cuda device.
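A quick sketch of the new factory functions and the size-based new_* methods (local variable names are illustrative):

    import torch

    e = torch.empty(2, 3)        # uninitialized values, like np.empty
    f = torch.full((2, 3), 7.0)  # constant fill, like np.full

    base = torch.tensor([1.0, 2.0])

    # Size-based factories on an existing tensor: the result inherits the
    # source tensor's dtype and device unless overridden.
    z = base.new_zeros(4)        # tensor([0., 0., 0., 0.])
    o = base.new_ones(2, 2)
    u = base.new_empty(3)        # uninitialized
    c = base.new_full((2, 2), 3.5)
    assert z.dtype == base.dtype and z.device == base.device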
2018-02-07  Add scalar module tests for common_nn. (#5095)  [gchanan; 1 file, -1/+2]
* Add scalar module tests for common_nn.
* Properly skip cuda Hardshrink tests.
* Fix flake8.
2018-01-31  Properly fill in make_non_contiguous data for sizes that can't be made non-contiguous (#4951)  [gchanan; 1 file, -1/+1]
* Properly fill in make_non_contiguous data for sizes that can't be made non-contiguous (sketched below).
* Use clone instead of copy.
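For illustration, a minimal sketch of the idea behind make_non_contiguous (not the actual implementation): give the values strided storage so the result holds the same data but is not contiguous, falling back to clone() for sizes where that is impossible, as this commit does.

    import torch

    def non_contiguous_copy(t):
        # A tensor with at most one element is always contiguous; the best
        # we can do is return a (contiguous) copy of the data.
        if t.numel() <= 1:
            return t.clone()
        # Append a dummy dimension of size 2 and select index 0 along it:
        # the result has t's shape and values, but the original last
        # dimension now has stride 2, so the tensor is non-contiguous.
        big = torch.empty(list(t.size()) + [2], dtype=t.dtype)
        view = big[..., 0]
        view.copy_(t)
        assert not view.is_contiguous()
        return view

    x = torch.randn(4, 5)
    y = non_contiguous_copy(x)
    assert torch.equal(x, y) and not y.is_contiguous()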
2018-01-22  Add kwarg-only 'requires_grad' parameter to Variable factories. (#4748)  [gchanan; 1 file, -2/+1]
* Add kwarg-only 'requires_grad' parameter to Variable factories.

  Functions that create variables, e.g. torch.ones_like, currently always return Variables with requires_grad=False; this is less convenient than the existing Variable constructor, which has a requires_grad parameter. This commit adds the parameter at the Python binding level.

* Fix flake8.
* Address review comments.
* Match set_requires_grad implementation with tensor_new version.
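For example, with this change a *_like factory can produce a gradient-tracking Variable directly (a small sketch):

    import torch

    x = torch.randn(3)

    # Previously ones_like always returned requires_grad=False; the new
    # kwarg-only parameter enables gradient tracking at creation time.
    w = torch.ones_like(x, requires_grad=True)
    loss = (w * x).sum()
    loss.backward()
    print(w.grad)  # equal to x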
2018-01-19  Various testing and utility improvements including torch.testing module. (#4726)  [gchanan; 1 file, -0/+41]
* Various testing and utility improvements, including a torch.testing module.

  1) Remove the method definition for randn_like, since ones_like and zeros_like do not have methods.
  2) Add an empty_like native function for creating a tensor with uninitialized values.
  3) Add an is_floating_point() native function, similar to is_signed().
  4) Add a torch.testing module loosely modeled after numpy.testing; currently it contains make_non_contiguous (moved from test_autograd) and randn_like (a wrapper around the VariableFunction).
  5) Remove code from test_autograd and test_nn that is responsible for generating grad_outputs to use with gradgradcheck; these now use gradgradcheck's own generating code. This fixes test_nn.py with scalars because gradgradcheck already does the right thing here.

* Rename parameter.
* Fix parameter usages.
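A brief sketch exercising the pieces listed above; note that make_non_contiguous lived in torch.testing at the time of this commit but has since been removed from newer releases:

    import torch
    import torch.testing

    x = torch.randn(2, 3)

    u = torch.empty_like(x)  # same shape and dtype, uninitialized values
    r = torch.randn_like(x)  # function only; there is no Tensor method

    print(x.is_floating_point())                                  # True
    print(torch.zeros(3, dtype=torch.int64).is_floating_point())  # False

    nc = torch.testing.make_non_contiguous(x)
    print(nc.is_contiguous())  # False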