2022-12-09  [Packaging] Update Protobuf with v3.20.3  (Sangjung Woo, 2 files changed, +3/-0)
Tags: tizen_8.0_m2_release, accepted/tizen/unified/dev/20240620.011054, accepted/tizen/unified/20221215.051042, accepted/tizen/8.0/unified/20231005.095509, tizen_8.0, accepted/tizen_8.0_unified

This patch updates the protobuf package with v3.20.3.

Change-Id: Ice58247829f689a6dc740cb39adb601f6bc87433
Signed-off-by: Sangjung Woo <sangjung.woo@samsung.com>

2022-02-04  [packaging/Tizen] Package PyTorch v1.10.2 for Tizen  (Yongjoo Ahn, 5 files changed, +2481/-0)

- Add a spec file to package the project
- Add a python script `typing_extensions.py`, which is used at build time

Change-Id: I9568eb83962da1cb434121fbe4980801868ff0a0
Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>

2022-02-04  [packaging] Import external sources  (Yongjoo Ahn, 39 files changed, +0/-0)

- Import external sources used to build PyTorch

Change-Id: Id42cefb98e2408f2cf3a79bc9939a37e8c97ab4e
Signed-off-by: Yongjoo Ahn <yongjoo1.ahn@samsung.com>

2021-12-14  fix formatting CIRCLE_TAG when building docs (#67026) (#69876)  (Nikita Shulga, 2 files changed, +8/-2)

Summary: Similar to pytorch/text#1416. cc malfet, brianjo. The previous code failed when tags changed from `v0.9.0` to `v0.10.0`. I tested this offline; it would be nice to actually tag the repo and see that this adds the correct documentation directory to the pytorch/pytorch.github.io repo.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67026
Reviewed By: saketh-are
Differential Revision: D31843381
Pulled By: malfet
fbshipit-source-id: 21526ad9ed4c1751c2d7f6d621da305f166a7f55
Co-authored-by: mattip <matti.picus@gmail.com>

2021-12-10  [release/1.10] Remove fgrad_input from slow_conv2d (#64280) (#69622)  (Eli Uriegas, 9 files changed, +105/-230)

Co-authored-by: Peter Bell <peterbell10@live.co.uk>

2021-12-10  [release/1.10] fix pybind issue for get_autocast_cpu_dtype and get_autocast_gpu_dtype (#66396) (#69620)  (Eli Uriegas, 2 files changed, +14/-2)

Co-authored-by: XiaobingSuper <xiaobing.zhang@intel.com>

2021-12-09  [release/1.10] Fix adaptive_max_pool2d for channels-last on CUDA (#67697) (#69618)  (Eli Uriegas, 2 files changed, +27/-12)

Co-authored-by: Xiao Wang <24860335+xwang233@users.noreply.github.com>

2021-12-09  [release/1.10] TST Adds test for non-contiguous tensors (#64954) (#69617)  (Eli Uriegas, 3 files changed, +136/-18)

* TST Adds test for non-contiguous tensors (#64954)

Summary: Follow-up to https://github.com/pytorch/pytorch/issues/61935. This PR:
1. Adds a test for non-contiguous tensors
2. Fixes a bug in `NLLLoss` that was caught by the test.

The reason this was not caught in `common_nn` is that `CriterionTest` overrides `test_cuda` but does not call `test_nonconfig`.

cc albanD mruberry jbschlosser walterddr

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64954
Reviewed By: zou3519
Differential Revision: D31174149
Pulled By: jbschlosser
fbshipit-source-id: a16073e59b40ccc01c82ede016b63a8db2e810f5
(cherry picked from commit 0d3bf97fd05ce6ef5ddfb0a100c78ad82914cee4)
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>

* Cherry-pick changes from #64444; namely, the `make_weight` partial into `module_inputs_torch_nn_NLLLoss`

Co-authored-by: Thomas J. Fan <thomasjpfan@gmail.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>

2021-12-08  [ONNX] Update onnxruntime to 1.9 for CI (#65029) (#67269) (#69641)  (Nikita Shulga, 1 file changed, +1/-3)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67269

Test Plan: Imported from OSS
Reviewed By: ngimel, msaroufim
Differential Revision: D31962516
Pulled By: malfet
fbshipit-source-id: 39b3c6a4a05d7b769f0ef5ce7ea597209516cde2
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>

2021-12-08  Fix strict aliasing rule violation in bitwise_binary_op (#66194) (#69619)  (Eli Uriegas, 1 file changed, +27/-5)

Summary: Fixes https://github.com/pytorch/pytorch/issues/66119

Failure on ARM Neoverse N1 before this PR (both tests pass now):
```
======================================================================
FAIL: test_bitwise_ops_cpu_int16 (__main__.TestBinaryUfuncsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
    result = test(self, **param_kwargs)
  File "test_binary_ufuncs.py", line 315, in test_bitwise_ops
    self.assertEqual(op(a, b), op(a_np, b_np))
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1633, in assertEqual
    self.assertEqual(
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1611, in assertEqual
    super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Tensors failed to compare as equal! Found 176 different element(s) (out of 225), with the greatest difference of 21850 (-21846 vs. 4) occuring at index (0, 2).

======================================================================
FAIL: test_bitwise_ops_cpu_int32 (__main__.TestBinaryUfuncsCPU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_device_type.py", line 373, in instantiated_test
    result = test(self, **param_kwargs)
  File "test_binary_ufuncs.py", line 315, in test_bitwise_ops
    self.assertEqual(op(a, b), op(a_np, b_np))
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1633, in assertEqual
    self.assertEqual(
  File "/opt/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 1611, in assertEqual
    super().assertTrue(result, msg=self._get_assert_msg(msg, debug_msg=debug_msg))
AssertionError: False is not true : Tensors failed to compare as equal! Found 188 different element(s) (out of 225), with the greatest difference of 1335341061 (-1335341056 vs. 5) occuring at index (14, 8).
----------------------------------------------------------------------
```

CC malfet ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66194
Reviewed By: dagitses, bdhirsh, ngimel
Differential Revision: D31430274
Pulled By: malfet
fbshipit-source-id: bcf1c9d584c02eff328dd5b1f7af064fac5942c9
(cherry picked from commit 0b0674121aeb7d8bbcccd0461d939b64879a1273)
Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Co-authored-by: pbialecki <pbialecki@nvidia.com>

2021-12-08  [LiteInterpreter] Specify `Loader` to `yaml.load` (#67694) (#69642)  (Nikita Shulga, 1 file changed, +8/-1)

Summary: `Loader` became a mandatory argument in PyYAML 6, but has been accepted since PyYAML 3. Unblocks migration to a newer runtime.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67694
Reviewed By: seemethere
Differential Revision: D32106043
Pulled By: malfet
fbshipit-source-id: 35246b97a974b168c066396ea31987b267534c7f

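A minimal sketch of the resulting call pattern (the input file name is hypothetical):

```python
import yaml

# PyYAML 6 made the Loader argument to yaml.load() mandatory; passing it
# explicitly is also accepted by PyYAML >= 3, so this form works on both.
with open("ops.yaml") as f:  # hypothetical input file
    data = yaml.load(f, Loader=yaml.SafeLoader)
```
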
2021-12-08  Fix python version in test tools CI job (#66947) (#69643)  (Nikita Shulga, 1 file changed, +1/-1)

Summary: On the HUD, the test tools job is failing as the runners now install Python 3.10, which is not compatible with numpy 1.20. See https://github.com/pytorch/pytorch/runs/3952169950?check_suite_focus=true

Install dependencies step:
```
ERROR: Command errored out with exit status 1:
 command: /opt/hostedtoolcache/Python/3.10.0/x64/bin/python /opt/hostedtoolcache/Python/3.10.0/x64/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmptq8aay7m
 cwd: /tmp/pip-install-dk_6t98q/numpy_e9431bf106b746148c0e7c36e46551b4
 Complete output (1169 lines):
 setup.py:66: RuntimeWarning: NumPy 1.20.0 may not yet support Python 3.10.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66947
Reviewed By: suo, malfet
Differential Revision: D31799205
Pulled By: janeyx99
fbshipit-source-id: 64bf10c37c0aa4f5837c48e92d56e81d920722bd
Co-authored-by: Jane Xu <janeyx@fb.com>

2021-10-14  (torch/elastic) add fqdn hostname to error printout (#66182) (#66662)  (kiukchung, 6 files changed, +84/-66)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66182

Closes https://github.com/pytorch/pytorch/issues/63174

Does a few things:
1. adds hostname to the error report
2. moves the "root cause" section to the end (presumably since the logs are being "tailed" we want the root cause to appear at the end)
3. moves redundant error info logging to debug
4. makes the border max 60 chars in length and justifies left for the header

NOTE: you HAVE TO annotate your main function with torch.distributed.elastic.multiprocessing.errors.record, otherwise no traceback is printed (python exception propagation does NOT work out of the box for IPC, hence the extra record annotation); see the sketch after this entry.

Test Plan: Sample
```
============================================================
run_script_path FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2021-10-05_17:37:22
  host      : devvm4955.prn0.facebook.com
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 3296201)
  error_file: /home/kiuk/tmp/elastic/none_3_lsytqe/attempt_0/0/error.json
  traceback : Traceback (most recent call last):
    File "/tmp/jetter.xr3_x6qq/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 372, in wrapper
      return f(*args, **kwargs)
    File "main.py", line 28, in main
      raise RuntimeError(args.throws)
  RuntimeError: foobar
============================================================
```

Reviewed By: cbalioglu, aivanou
Differential Revision: D31416492
fbshipit-source-id: 0aeaf6e634e23ce0ea7f6a03b12c8a9ac57246e9

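A minimal sketch of the required `record` annotation (the failing body is illustrative and mirrors the sample above):

```python
from torch.distributed.elastic.multiprocessing.errors import record

@record  # captures the child-process traceback so it appears in reports like the sample above
def main() -> None:
    raise RuntimeError("foobar")

if __name__ == "__main__":
    main()
```
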
2021-10-14  Handle shared memory cases in MathBitFallback (#66667)  (Nikita Shulga, 5 files changed, +78/-69)

* Handle shared memory cases in MathBitFallback (#63602)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63602

This PR fixes the case when a read and a write are performed on memory shared between mutable and (or) non-mutable arguments. Example:
```
a = torch.tensor([1+1j])
b = a.conj()
b.add_(a)  # should return tensor([2]) but returns tensor([2-2j])
```
The issue here is that in the conjugate fallback, we resolve the conjugation in-place for mutable arguments, which can be a problem, as shown above, when other input arguments share memory with the mutable argument(s). This PR fixes the issue by:
1. First scanning through the operator input arguments and creating a vector of mutable arguments that have the conj bit set to `True` (and accordingly setting the flag `check_for_alias_with_mut_arg` to `True` or `False`).
2. Iterating through all the arguments. At this time we only look at the non-mutable arguments. If `check_for_alias_with_mut_arg` is set to `True`, then we iterate through `mutable_inputs` to check whether the current arg tensor aliases any of its entries. If it does, we clone the non-mutable tensor arg; otherwise we resolve the conjugation as before.
3. Looking through the `mutable_inputs` vector (which contains only mutable input tensors with the conj bit set to `True`) and in-place conjugating each of its entries.
4. Doing the computation.
5. Re-conjugating the mutable argument tensors.

NOTE: `TensorLists` are not fully handled in ConjugateFallback. Please see the in-line comment for more details.

Fixes https://github.com/pytorch/pytorch/issues/59943

Test Plan: Imported from OSS
Reviewed By: gmagogsfm
Differential Revision: D30466905
Pulled By: anjali411
fbshipit-source-id: 58058e5e6481da04a12d03f743c1491942a6cc9b

* fix lint (#66572)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66572

Test Plan: Imported from OSS
Reviewed By: seemethere
Differential Revision: D31624043
Pulled By: suo
fbshipit-source-id: 9db9cee3140d78c2a2f0c937be84755206fee1dd

Co-authored-by: anjali411 <chourdiaanjali123@gmail.com>
Co-authored-by: Michael Suo <suo@fb.com>

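The example from the summary with the post-fix behavior, as a sketch (the exact printed form may vary by build):

```python
import torch

a = torch.tensor([1 + 1j])
b = a.conj()   # b is a view of a with the conjugate bit set
b.add_(a)      # a is now cloned before its conjugation is resolved in-place
print(b)       # tensor([2.+0.j]) after the fix (previously tensor([2.-2.j]))
```
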
2021-10-14  Disable .numpy() and .tolist() for tensor subclasses and fix .tolist() for conjugated and negated tensors (#66642)  (anjali411, 0 files changed, +0/-0)

* Disable .numpy() and .tolist() for tensor subclasses and fix .tolist() for conjugated and negated tensors (#66082)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66082

Fixes https://github.com/pytorch/pytorch/issues/66024 #65779

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved albanD

Test Plan: Imported from OSS
Reviewed By: Gamrix, albanD
Differential Revision: D31615588
Pulled By: anjali411
fbshipit-source-id: c3e65ef0fe301630eb76732ccd7819683c09aa19

* Apply suggestions from code review

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
Co-authored-by: Nikita Shulga <nshulga@fb.com>

2021-10-14  Delete extraneous whitespaces  (Nikita Shulga, 1 file changed, +3/-2)

2021-10-14  Disable .numpy() and .tolist() for tensor subclasses and fix .tolist() for conjugated and negated tensors (#66082) (#66576)  (anjali411, 5 files changed, +24/-3)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/66082

Fixes https://github.com/pytorch/pytorch/issues/66024 #65779

cc ezyang anjali411 dylanbespalko mruberry Lezcano nikitaved albanD

Test Plan: Imported from OSS
Reviewed By: Gamrix, albanD
Differential Revision: D31615588
Pulled By: anjali411
fbshipit-source-id: c3e65ef0fe301630eb76732ccd7819683c09aa19

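A short sketch of the `.tolist()` behavior this fixes (assumed illustration of the fixed path):

```python
import torch

t = torch.tensor([1 + 1j]).conj()  # lazy conjugation via the conj bit
print(t.tolist())                  # [(1-1j)]: the conj bit is materialized before conversion
```
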
2021-10-14  Call `PyArray_Check` only if NumPy is available (#66433) (#66629)  (Nikita Shulga, 2 files changed, +3/-1)

Summary: Fixes https://github.com/pytorch/pytorch/issues/66353
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66433
Reviewed By: seemethere, janeyx99
Differential Revision: D31548290
Pulled By: malfet
fbshipit-source-id: 3b094bc8195d0392338e0bdc6df2f39587b85bb3

2021-10-14  fix normal with empty std (#66524)  (Natalia Gimelshein, 2 files changed, +5/-1)

2021-10-08  Fix cosine similarity dim checks (#66214)  (Natalia Gimelshein, 4 files changed, +11/-20)

* fix cosine similarity dimensionality check
* fix shapes in the doc

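A sketch of the checked call (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 128)
y = torch.randn(8, 128)
out = F.cosine_similarity(x, y, dim=1)  # dim must be valid for both inputs
print(out.shape)                        # torch.Size([8])
```
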
2021-10-08  [ONNX] Deprecate various args (#65962)  (Gary Miguel, 18 files changed, +362/-304)

* [ONNX] Remove argument _retain_param_name from torch.onnx.export() function. (#61702) (#64370)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64370

As of now, the "_retain_param_name" parameter has no description on the PyTorch docs website. According to the code, this argument determines whether we keep the original parameter names of the PyTorch model in the final ONNX graph. If it is False, those original parameter names are replaced with a series of integers starting from 1. Since setting numbers as parameter names makes no sense to users, we remove this argument from the torch.onnx.export() function to improve the experience of calling this function. This PR still keeps the argument in torch.onnx.export() for backward compatibility, while all backend logic has been changed to behave as if _retain_param_name were set to True.

Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905270
Pulled By: malfet
fbshipit-source-id: ca60757ca17daaff937e9f08da42596086795f4a
Co-authored-by: fatcat-z <zhang-ji@outlook.com>

* [ONNX] Remove strip_doc_string param from torch.onnx.export() function. (#61712) (#64371)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64371

As of now, the "strip_doc_string" parameter is described as: "strip_doc_string (bool, default True): do not include the field ``doc_string`` from the exported model. Otherwise the field will mention the source code locations for ``model``." This is usually useless to users who want to transform a PyTorch model into an ONNX one; only when the user wants to debug the export process could these source code locations provide benefits.

To make the export() function friendlier by providing fewer parameters, we combined "strip_doc_string" into the "verbose" parameter. If a user sets verbose to True, it means they need log information to debug the export process, which is similar to the purpose of strip_doc_string. But the two arguments run in opposite directions: setting verbose to True means we want log information printed to help debug, which means strip_doc_string should be False. And this is how we replace strip_doc_string with the verbose argument in this PR. The PR still keeps strip_doc_string in torch.onnx.export() for backward compatibility, while its behavior has been folded into the verbose argument.

Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905268
Pulled By: malfet
fbshipit-source-id: 2f06eb805c01fe15ff7a1b4f6595c937ba716d60
Co-authored-by: fatcat-z <zhang-ji@outlook.com>

* [ONNX] minor doc improvements and cleanup (#62514) (#64373)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64373
* Fix some bad formatting and clarify things in onnx.rst.
* In `export_to_pretty_string`:
  * Add documentation for previously undocumented args.
  * Document that the `f` arg is ignored and mark it deprecated.
  * Update tests to stop setting `f`.
  * Warn if `_retain_param_name` is set.
* Use double quotes for string literals in test_operators.py.

Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905271
Pulled By: malfet
fbshipit-source-id: 3627eeabf40b9516c4a83cfab424ce537b36e4b3

* [ONNX] Deprecated the example_outputs param from torch.onnx.export() function. (#62815) (#64380)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64380
* `example_outputs` used to determine the type and shape of the outputs without tracing the execution of the model, and it had to be provided when exporting a ScriptModule or ScriptFunction via the export() function.
* Since we can work out `example_outputs` in an internal function instead of requiring it from the user, we deprecated this argument in the export() function to improve the experience of calling it.

Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905266
Pulled By: malfet
fbshipit-source-id: d00b00d7d02b365d165028288ad915678caa51f2
Co-authored-by: hwangdeyu <dejack953@outlook.com>

* [ONNX] Deprecate use_external_data_format param from torch.onnx.export() function. (#62257) (#64382)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64382
* The `use_external_data_format` parameter is used for large models that cannot be exported because of the 2GB protobuf limit.
* When `use_external_data_format` is set to True, the model is exported in the ONNX external data format, in which case some of the model parameters are stored in external binary files and not in the ONNX model file itself.
* This PR marks the parameter DEPRECATED and checks the model proto size in code instead of relying on the user: if the size is larger than 2GB, then `use_external_data_format = True` is applied automatically.

Test Plan: Imported from OSS
Reviewed By: ezyang
Differential Revision: D30905265
Pulled By: malfet
fbshipit-source-id: 82b4e17bfa6a8de2bfd700a5282c12f6835603cb
Co-authored-by: hwangdeyu <dejack953@outlook.com>

* fix clang-tidy error introduced by #64382 (#65977)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65977
Reviewed By: ngimel
Differential Revision: D31423174
Pulled By: malfet
fbshipit-source-id: 0ea560b9a6ddd6431f70bd3ac10ace68e26ab352

Co-authored-by: BowenBao <bowbao@microsoft.com>
Co-authored-by: fatcat-z <zhang-ji@outlook.com>
Co-authored-by: hwangdeyu <dejack953@outlook.com>

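A sketch of an export call under the reduced argument surface (the model and file name are illustrative):

```python
import torch

class TwoX(torch.nn.Module):
    def forward(self, x):
        return 2 * x

# verbose=True now also covers the deprecated strip_doc_string=False case
# (doc_strings with source locations are kept, and a log is printed);
# example_outputs and use_external_data_format need not be passed by callers.
torch.onnx.export(TwoX(), torch.randn(1, 3), "two_x.onnx", verbose=True)
```
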
2021-10-08  Convert Sampler back to lazy construction (#63646) (#65926)  (Erjia Guan, 2 files changed, +33/-7)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63646

Fixes #63609

Test Plan: Imported from OSS
Reviewed By: NivekT
Differential Revision: D30451774
Pulled By: ejguan
fbshipit-source-id: 550d77494326446d1a42b5da0559e0d384c47413

2021-10-08  Revert "Added option to update parameters using state_dict in AveragedModel (#65495) (#65755)" (#66308)  (Prabhat Roy, 2 files changed, +2/-42)

This reverts commit 5f1a434599b46afd99607839d15892e09269a1c4.

2021-10-06  Added option to update parameters using state_dict in AveragedModel (#65495) (#65755)  (Prabhat Roy, 2 files changed, +42/-2)

* Added option to update parameters using state_dict in AveragedModel (#65495)

Summary: While implementing [EMA](https://github.com/pytorch/vision/pull/4381) (which extends AveragedModel) in torchvision, update_parameters() from AveragedModel could not be used, as it did not handle state_dict(), so a custom update_parameters() needed to be defined in the [EMA class](https://github.com/pytorch/vision/pull/4406). This PR handles that scenario, removing the need for the custom update_parameters() implementation.

Discussion: https://github.com/pytorch/vision/pull/4406#pullrequestreview-753734102

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65495
Reviewed By: datumbox
Differential Revision: D31176742
Pulled By: prabhat00155
fbshipit-source-id: 326d14876018f21cf602bab5eaba344678dbabe2
(cherry picked from commit 2ea724b1fd543304e3be7bd223cac451cd093e16)

* Added validation of mode parameter in AveragedModel (#65921)

Summary: Discussion: https://github.com/pytorch/pytorch/pull/65495#issuecomment-930460469

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65921
Reviewed By: albanD
Differential Revision: D31310105
Pulled By: prabhat00155
fbshipit-source-id: 417691832a7c793744830c11e0ce53e3972d21a3
(cherry picked from commit c7748fc172553da66368fd0b7fea3fe5661e2dc1)

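For orientation, a sketch of baseline AveragedModel usage; since the state_dict option itself was reverted from this branch (see the 2021-10-08 entry above), only the stock calls are shown and the training loop is elided:

```python
import torch
from torch.optim.swa_utils import AveragedModel

model = torch.nn.Linear(10, 10)
swa_model = AveragedModel(model)
for _ in range(10):
    # ... forward/backward/optimizer.step() on model ...
    swa_model.update_parameters(model)  # fold current weights into the running average
```
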
2021-10-06  Tweak `file_diff_from_base` for release/1.10 branch (#66202)  (Nikita Shulga, 1 file changed, +2/-2)

2021-10-05  [DataPipe] DataPipe Fix and Deprecation Warnings for Release 1.10 (#65932)  (Kevin Tse, 6 files changed, +20/-9)

* Unify the output pathname of archive reader and extractor (#65424)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65424

This PR is a re-implementation of https://github.com/facebookexternal/torchdata/pull/93. The same PR has landed into torchdata: https://github.com/facebookexternal/torchdata/pull/157

Test Plan: Imported from OSS
Reviewed By: soulitzer
Differential Revision: D31090447
Pulled By: ejguan
fbshipit-source-id: 45af1ad9b24310bebfd6e010f41cff398946ba65

* [DataPipe] add deprecation warnings for DataPipes that will solely exist in TorchData (#65827)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65827

Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D31272794
Pulled By: NivekT
fbshipit-source-id: 8da8266184b4df050422904cbc5fca6d7c3d2e02

* [DataPipe] Fixes an issue where TarArchiveReader closes stream when read into a buffer (#65877)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65877

Fixes #65808

Test Plan: Imported from OSS
Reviewed By: ejguan
Differential Revision: D31296041
Pulled By: NivekT
fbshipit-source-id: cdcad3a333ae9781d6063678a122a128955b0ff4

Co-authored-by: Erjia Guan <erjia@fb.com>

2021-10-05  [iOS][CI] Update dev certs (#66004) (#66188)  (Nikita Shulga, 5 files changed, +26/-16)

Summary: Fixes https://github.com/pytorch/pytorch/issues/65988

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66004
Reviewed By: xta0
Differential Revision: D31340893
Pulled By: malfet
fbshipit-source-id: 3bf0be266e9686a73d62e86c5cf0bebeb0416260
Co-authored-by: Tao Xu <taox@fb.com>

2021-10-05  Fix backward compatibility tests (#66186)  (Nikita Shulga, 1 file changed, +1/-1)

Compare operator list against RC1 build rather than against nightly

2021-10-05  Fix Windows ninja builds when MAX_JOBS is specified (#65444) (#66155)  (Nikita Shulga, 2 files changed, +5/-1)

Summary: Reported by cloudhan in https://github.com/pytorch/pytorch/pull/64733#issuecomment-924545463. Fixes a regression introduced by https://github.com/pytorch/pytorch/commit/047e68235f8ebf8dc9fd816829ba90561d423ff9

cc malfet seemethere

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65444
Reviewed By: dagitses, seemethere
Differential Revision: D31103260
Pulled By: malfet
fbshipit-source-id: 9d5454a64cb8a0b96264119cf16582cc5afed284

2021-10-05  Binary building without python fix (#66031) (#66117)  (n-v-k, 1 file changed, +7/-6)

Summary: Fixes https://github.com/pytorch/pytorch/issues/66030

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66031
Reviewed By: VitalyFedyunin
Differential Revision: D31356243
Pulled By: malfet
fbshipit-source-id: d1537bc65bbba5d6497ecb8db7160a397eca81fd

2021-09-30  [ci] try installing libgnutls to fix cert error (#65934) (#65979)  (Nikita Shulga, 2 files changed, +4/-2)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65934

See https://github.com/pytorch/pytorch/issues/65931; this was a suggested remediation on the linked issue.

Test Plan: Imported from OSS
Reviewed By: malfet, zhouzhuojie
Differential Revision: D31313040
Pulled By: suo
fbshipit-source-id: a9e2b82a1e879962af768ed3049c73ab77394738
Co-authored-by: Michael Suo <suo@fb.com>

2021-09-30  [DataPipe] Fix deepcopy filehandle for Mapper and in-place modification for IterableWrapper (#65220) (#65924)  (Erjia Guan, 4 files changed, +143/-112)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65220

Fixes #65221
- Remove deepcopy from Mapper to support file handles
- Convert `IterableWrapper` to deepcopy the wrapped iterable within each iterator, to prevent in-place modification (different data per epoch)
- Convert `IDP` to `IterableWrapper` in test_datapipe.py
- Refine the variable names (avoid using `dp`, which is a module reference)

Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31021886
Pulled By: ejguan
fbshipit-source-id: 72a9eee66c758e2717d591cd0942892bddedc223

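A sketch of the per-iterator deepcopy behavior (the import path is assumed from the 1.10 source tree):

```python
from torch.utils.data.datapipes.iter import IterableWrapper  # path assumed for 1.10

src = [[1], [2], [3]]
dp = IterableWrapper(src)
for row in dp:
    row.append(0)  # a downstream stage mutating yielded rows in place...
print(src)         # [[1], [2], [3]]: src is untouched, because each iterator
                   # walks a deepcopy, so every epoch sees the same data
```
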
2021-09-29  Fix the slowdown of _object_to_tensor since 1.9 (#65721) (#65835)  (Nikita Shulga, 1 file changed, +5/-2)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65721

Closes https://github.com/pytorch/pytorch/issues/65696. The bug was introduced in https://github.com/pytorch/pytorch/pull/55861 and causes a 100x slowdown since 1.9.

ghstack-source-id: 139128267

Test Plan: Performance test:
```
import time
from torch.distributed.distributed_c10d import _object_to_tensor

start = time.time()
_object_to_tensor("x" * 50_000_000)
print("Time:", time.time() - start)
```

Reviewed By: rohan-varma
Differential Revision: D31219794
fbshipit-source-id: 1abec38f9d51361c1eab6ad5efd87b589322e208
Co-authored-by: Yi Wang <wayi@fb.com>

2021-09-28  Fix test reporting git merge-base (#65787)  (Zhuojie Zhou, 1 file changed, +1/-1)

2021-09-24  [1.10] Remove torch.vmap (#65496)  (Richard Zou, 3 files changed, +2/-4)

torch.vmap is a prototype feature and should not be in the stable binary. This PR:
- Removes the torch.vmap API
- Removes the documentation entry for torch.vmap
- Changes the vmap tests to use an internal API instead of torch.vmap

Test Plan: Tested locally (test_torch, test_autograd, test_type_hints, test_vmap), but also wait for CI.

2021-09-21  [release/1.10] Pin builder and xla repo (#65433)  (Nikita Shulga, 2 files changed, +2/-2)

Pin builder to https://github.com/pytorch/builder/commits/release/1.10
Pin xla to https://github.com/pytorch/xla/tree/r1.10

2021-09-21  THCTensor cleanup (#65369)  (Natalia Gimelshein, 6 files changed, +1/-963)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65369

Reviewed By: bhosmer
Differential Revision: D31071406
Pulled By: ngimel
fbshipit-source-id: bbc3f2781003333641524aeb692b944fd3ad8d7a

2021-09-21  [PT/ShardedTensor] Allow zero size local shard (#65007)  (Xing Liu, 2 files changed, +4/-4)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65007

Relax the shard size check in ShardMetadata to allow zero-size local shards. When sharding a tensor on N ranks, some ranks may have an empty shard allocated. As we are assuming SPMD, the ranks with an empty shard still need to participate in all collectives, and we need to allow this in ShardMetadata.

Test Plan: Unit tests and CLI
Reviewed By: jiaqizhai, wanchaol
Differential Revision: D30926566
fbshipit-source-id: afa562c94ffa8f8d91d65ddb4c348156d871dc36

2021-09-21  OpInfo: nn.functional.conv2d (#65233)  (kshitij12345, 2 files changed, +57/-0)

Summary: Reland: https://github.com/pytorch/pytorch/issues/63517
Reference: https://github.com/pytorch/pytorch/issues/54261
Reference: facebookresearch/functorch#78

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65233
Reviewed By: malfet
Differential Revision: D31025538
Pulled By: zou3519
fbshipit-source-id: b1cd38c22f4cb8eedd3f958e02dd7410dcbb8d8d

2021-09-21  [JIT] Re-land "Add aten::slice optimization" (#65341)  (Mike Iovine, 3 files changed, +148/-31)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65341

The changes in D30231044 (https://github.com/pytorch/pytorch/commit/babd4499783abc699faf36f3a72a9fc491e0e572) were removed due to a downstream issue in glow. Now that the issue has been fixed by D30849396, we can safely re-introduce the changes.

Test Plan: `buck test //caffe2/test:jit -- TestPeephole`
Glow tests:
* `buck test //glow/fb/torch_glow/tests:unfuse_glow_ops_test`
* qxy11 confirmed that the problematic glow model now loads correctly with these changes

Reviewed By: eellison
Differential Revision: D31056878
fbshipit-source-id: 049903ee04ba88885cc9d1a91427af0f1f44f681

2021-09-21  [nn] TripletMarginLoss and PairwiseDistance: no batch dim (#64882)  (kshitij12345, 6 files changed, +78/-10)

Summary: Reference: https://github.com/pytorch/pytorch/issues/60585

Pull Request resolved: https://github.com/pytorch/pytorch/pull/64882
Reviewed By: malfet
Differential Revision: D31055577
Pulled By: jbschlosser
fbshipit-source-id: 2f0a5a08619b672026b48a78bc7d83a6dccba0bf

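A sketch of the newly accepted unbatched calls (sizes are illustrative):

```python
import torch
from torch import nn

anchor, positive, negative = torch.randn(3, 128).unbind(0)  # three 1-D tensors
loss = nn.TripletMarginLoss()(anchor, positive, negative)   # 0-D scalar loss

dist = nn.PairwiseDistance()(torch.randn(128), torch.randn(128))  # 0-D distance
```
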
2021-09-21  correlate forward and backward op (#62553)  (Teng Gao, 2 files changed, +84/-0)

Summary: Use the startThreadId+seqNumber of a forward op and the fwdThreadId+seqNumber of a backward op to correlate the pair. third_party/kineto should be updated accordingly: https://github.com/pytorch/kineto/pull/372

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62553
Reviewed By: malfet
Differential Revision: D30125728
Pulled By: gdankel
fbshipit-source-id: 9877a54392ba043d0eac56ce5b7bbf244277fa7e

2021-09-21  [docs] Remove .data from some docs (#65358)  (Rodrigo Berriel, 2 files changed, +3/-3)

Summary: Related to https://github.com/pytorch/pytorch/issues/30987. Fixes the following task:
- [ ] Remove the use of `.data` in all our internal code:
  - [ ] ...
  - [x] `docs/source/scripts/build_activation_images.py` and `docs/source/notes/extending.rst`

In `docs/source/scripts/build_activation_images.py`, I used `nn.init` because the snippet already assumes `nn` is available (the class inherits from `nn.Module`).

cc albanD

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65358
Reviewed By: malfet
Differential Revision: D31061790
Pulled By: albanD
fbshipit-source-id: be936c2035f0bdd49986351026fe3e932a5b4032

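The flavor of rewrite involved, as a sketch (the exact doc snippets differ):

```python
import torch.nn as nn

layer = nn.Linear(4, 4)
# before: layer.weight.data.fill_(1.0)  # touches .data directly
nn.init.constant_(layer.weight, 1.0)    # same effect via nn.init, no .data
```
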
2021-09-21  Adds keyword only args to gradcheck (#65290)  (Benjamin Rowell, 1 file changed, +3/-1)

Summary: Changes the call signature of gradcheck so that its optional arguments are keyword-only, and modifies the return call from gradgradcheck to reflect these changes.

Fixes https://github.com/pytorch/pytorch/issues/65165

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65290
Reviewed By: soulitzer
Differential Revision: D31061316
Pulled By: albanD
fbshipit-source-id: 3505569a33a497a8be4347bdd425bb2b8e536999

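A sketch of the resulting call pattern; the optional arguments must now be spelled out by keyword:

```python
import torch
from torch.autograd import gradcheck

inp = torch.randn(4, dtype=torch.double, requires_grad=True)
ok = gradcheck(torch.sin, (inp,), eps=1e-6, atol=1e-4)  # positional eps/atol now raise TypeError
print(ok)  # True
```
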
2021-09-20  [PyTorch Edge] Backport function for default args with out args, flag on (#63651)  (Chen Lai, 7 files changed, +373/-168)

Summary:
1. Enable support for operators with default args and out args. For `torch.add(x, h, out=x)`, the number of specified arguments will be 3 instead of 4.
2. Bump the bytecode version from 6 to 7.
3. Implement the backport_v7_to_v6 function; also slightly refactor the local_thread to allow re-emitting operators.
4. Add a unit test to cover the backport function.
5. Update the expected result from 4 to 3 in the DefaultArgsWithOutArg unit test to cover the number of specified arguments.

ghstack-source-id: 138539912

Test Plan:
```
caffe2/test/cpp/jit:jit - LiteInterpreterTest.DefaultArgsWithOutArg
caffe2/test/cpp/jit:jit - LiteInterpreterTest.DefaultArgsPinvWithOutArg
caffe2/test/cpp/jit:jit - LiteInterpreterTest.BackPortByteCodeModelAllVersions
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63651
Reviewed By: raziel, tugsbayasgalan
Differential Revision: D30454080
fbshipit-source-id: 357c50b96682430675142d20d688d1f64e1de307

2021-09-20  [JIT] Delete obsolete message: or if you absolutely have to, use c10::impl::GenericDict(c10::impl::deprecatedUntypedDict()) (#65164)  (Pavel Belevich, 1 file changed, +2/-2)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65164

Looks like it was forgotten in https://github.com/pytorch/pytorch/pull/25439

Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31072625
Pulled By: pbelevich
fbshipit-source-id: a5ffcfb0836f962ab6952a187ba7717c4d4a6e33

2021-09-20  [JIT] Support device as Dict key (#65079)  (Pavel Belevich, 3 files changed, +5/-2)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65079

This is required to use the RPC device map, aka Dict[torch.device, torch.device], in TorchScript.

Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31072626
Pulled By: pbelevich
fbshipit-source-id: 51cfa5653db86de73b624e9157d68d1b319bfc64

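A minimal sketch of the now-scriptable pattern (names are illustrative):

```python
from typing import Dict

import torch

@torch.jit.script
def remap(device_map: Dict[torch.device, torch.device], d: torch.device) -> torch.device:
    # Dict with torch.device keys is accepted by the TorchScript compiler
    return device_map[d]

print(remap({torch.device("cpu"): torch.device("cuda:0")}, torch.device("cpu")))  # cuda:0
```
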
2021-09-20  Reduce PyTorch warnings: Cast fix xplat/caffe2/aten/src/ATen/core/DeprecatedTypeProperties.h (#65031)  (Amr Elshennawy, 1 file changed, +1/-1)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65031

Test Plan:
```
buck build --show-output //caffe2/torch/fb/sparsenn:sparsenn_operators
buck test caffe2/torch/fb/sparsenn:test
```

Reviewed By: r-barnes
Differential Revision: D30948791
fbshipit-source-id: 13046e1d0ce2c24864ad38f318ca5e34b1bb9552

2021-09-20  Basic implementation of ShardedLinear using ShardedTensor. (#64128)  (Pritam Damania, 11 files changed, +725/-88)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/64128

This PR implements a sharded nn.Linear layer using ShardedTensors, with the following limitations:
1) Works only for ChunkShardingSpec.
2) The implementation is only aimed at demonstrating functionality and is most likely not performant at all.

The PR also introduces a `shard_parameter` API to easily shard parameters of `nn.Modules`. This also has the following limitations:
1) Works only for ChunkShardingSpec.
2) Is not performant, since it uses broadcast instead of scatter (ProcessGroupNCCL doesn't yet support scatter).

Overall, the user API for running a sharded linear would be something like this:
```
# SPMD programming paradigm running same code on all nodes.
fc = nn.Linear(10, 10)

# Setup sharding.
sharding_spec = ChunkShardingSpec(...)
shard_parameter(fc, 'weight', sharding_spec, src_rank=0)

# Run as a normal linear layer.
inp = torch.rand(10, 10)
output = fc(inp)
```

ghstack-source-id: 138500985

Test Plan:
1) unit tests
2) waitforbuildbot

Reviewed By: wanchaol, bowangbj
Differential Revision: D30621215
fbshipit-source-id: 1aa7478568c18a4572f6c3462fdf24a4cbde01d6

2021-09-20  Track peak memory usage (#65157)  (driazati, 16 files changed, +236/-61)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65157

Test Plan: Imported from OSS
Reviewed By: malfet
Differential Revision: D31029049
Pulled By: driazati
fbshipit-source-id: 3e87e94e4872d118ad191aef2b77b8cefe90aeb6