author     DongHun Kwak <dh0128.kwak@samsung.com>  2020-12-31 09:37:29 +0900
committer  DongHun Kwak <dh0128.kwak@samsung.com>  2020-12-31 09:37:29 +0900
commit     b9a5c35f80c8e05cd8178bace99809eb5b129c7a (patch)
tree       2abb0ff788d1bd4dd94fa061e0cceabdb6f2a729 /doc
parent     d5925ce9bd335463f9561bdd10271fee77d2b9af (diff)
Imported Upstream version 1.17.0 (tag: upstream/1.17.0)
Diffstat (limited to 'doc')
-rw-r--r--  doc/C_STYLE_GUIDE.rst.txt  19
-rw-r--r--  doc/DISTUTILS.rst.txt  181
-rw-r--r--  doc/HOWTO_RELEASE.rst.txt  38
-rw-r--r--  doc/Makefile  56
-rw-r--r--  doc/RELEASE_WALKTHROUGH.rst.txt  45
-rw-r--r--  doc/TESTS.rst.txt  32
-rw-r--r--  doc/changelog/1.15.0-changelog.rst  2
-rw-r--r--  doc/changelog/1.16.1-changelog.rst  2
-rw-r--r--  doc/changelog/1.16.5-changelog.rst  54
-rw-r--r--  doc/changelog/1.16.6-changelog.rst  36
-rw-r--r--  doc/changelog/1.17.0-changelog.rst  694
-rw-r--r--  doc/neps/nep-0010-new-iterator-ufunc.rst  4
-rw-r--r--  doc/neps/nep-0016-abstract-array.rst  2
-rw-r--r--  doc/neps/nep-0018-array-function-protocol.rst  255
-rw-r--r--  doc/neps/nep-0019-rng-policy.rst  74
-rw-r--r--  doc/neps/nep-0020-gufunc-signature-enhancement.rst  2
-rw-r--r--  doc/neps/nep-0026-missing-data-summary.rst  4
-rw-r--r--  doc/neps/nep-0027-zero-rank-arrarys.rst  28
-rw-r--r--  doc/neps/roadmap.rst  129
-rw-r--r--  doc/release/1.12.0-notes.rst  2
-rw-r--r--  doc/release/1.14.4-notes.rst  2
-rw-r--r--  doc/release/1.15.6-notes.rst  52
-rw-r--r--  doc/release/1.16.0-notes.rst  4
-rw-r--r--  doc/release/1.16.5-notes.rst  68
-rw-r--r--  doc/release/1.16.6-notes.rst  85
-rw-r--r--  doc/release/1.17.0-notes.rst  562
-rw-r--r--  doc/release/template.rst  12
-rw-r--r--  doc/source/_templates/indexcontent.html  2
-rw-r--r--  doc/source/about.rst  2
-rw-r--r--  doc/source/benchmarking.rst  1
-rw-r--r--  doc/source/conf.py  38
-rw-r--r--  doc/source/dev/development_environment.rst  11
-rw-r--r--  doc/source/dev/development_workflow.rst (renamed from doc/source/dev/gitwash/development_workflow.rst)  2
-rw-r--r--  doc/source/dev/gitwash/following_latest.rst  4
-rw-r--r--  doc/source/dev/gitwash/git_development.rst  14
-rw-r--r--  doc/source/dev/gitwash/git_intro.rst  40
-rw-r--r--  doc/source/dev/gitwash/git_links.inc  5
-rw-r--r--  doc/source/dev/gitwash/index.rst  26
-rw-r--r--  doc/source/dev/governance/people.rst  7
-rw-r--r--  doc/source/dev/index.rst  222
-rw-r--r--  doc/source/dev/pull_button.png (renamed from doc/source/dev/gitwash/pull_button.png)  bin 12893 -> 12893 bytes
-rw-r--r--  doc/source/docs/howto_build_docs.rst  2
-rw-r--r--  doc/source/reference/arrays.classes.rst  4
-rw-r--r--  doc/source/reference/arrays.dtypes.rst  9
-rw-r--r--  doc/source/reference/arrays.indexing.rst  18
-rw-r--r--  doc/source/reference/arrays.ndarray.rst  36
-rw-r--r--  doc/source/reference/arrays.scalars.rst  8
-rw-r--r--  doc/source/reference/c-api.array.rst  174
-rw-r--r--  doc/source/reference/c-api.config.rst  19
-rw-r--r--  doc/source/reference/c-api.coremath.rst  15
-rw-r--r--  doc/source/reference/c-api.dtype.rst  57
-rw-r--r--  doc/source/reference/c-api.iterator.rst  49
-rw-r--r--  doc/source/reference/c-api.types-and-structures.rst  171
-rw-r--r--  doc/source/reference/distutils.rst  100
-rw-r--r--  doc/source/reference/maskedarray.baseclass.rst  90
-rw-r--r--  doc/source/reference/random/bit_generators/bitgenerators.rst  11
-rw-r--r--  doc/source/reference/random/bit_generators/index.rst  112
-rw-r--r--  doc/source/reference/random/bit_generators/mt19937.rst  34
-rw-r--r--  doc/source/reference/random/bit_generators/pcg64.rst  33
-rw-r--r--  doc/source/reference/random/bit_generators/philox.rst  35
-rw-r--r--  doc/source/reference/random/bit_generators/sfc64.rst  28
-rw-r--r--  doc/source/reference/random/entropy.rst  6
-rw-r--r--  doc/source/reference/random/extending.rst  165
-rw-r--r--  doc/source/reference/random/generator.rst  84
-rw-r--r--  doc/source/reference/random/index.rst  212
-rw-r--r--  doc/source/reference/random/legacy.rst  125
-rw-r--r--  doc/source/reference/random/multithreading.rst  108
-rw-r--r--  doc/source/reference/random/new-or-different.rst  118
-rw-r--r--  doc/source/reference/random/parallel.rst  193
-rw-r--r--  doc/source/reference/random/performance.py  87
-rw-r--r--  doc/source/reference/random/performance.rst  153
-rw-r--r--  doc/source/reference/routines.char.rst  16
-rw-r--r--  doc/source/reference/routines.dtype.rst  3
-rw-r--r--  doc/source/reference/routines.linalg.rst  13
-rw-r--r--  doc/source/reference/routines.ma.rst  13
-rw-r--r--  doc/source/reference/routines.math.rst  1
-rw-r--r--  doc/source/reference/routines.other.rst  8
-rw-r--r--  doc/source/reference/routines.random.rst  83
-rw-r--r--  doc/source/reference/routines.rst  2
-rw-r--r--  doc/source/reference/routines.testing.rst  2
-rw-r--r--  doc/source/reference/ufuncs.rst  9
-rw-r--r--  doc/source/release.rst  3
-rw-r--r--  doc/source/user/basics.io.genfromtxt.rst  6
-rw-r--r--  doc/source/user/building.rst  61
-rw-r--r--  doc/source/user/c-info.how-to-extend.rst  75
-rw-r--r--  doc/source/user/numpy-for-matlab-users.rst  14
-rw-r--r--  doc/source/user/quickstart.rst  13
-rw-r--r--  doc/source/user/whatisnumpy.rst  17
88 files changed, 4150 insertions, 1263 deletions
diff --git a/doc/C_STYLE_GUIDE.rst.txt b/doc/C_STYLE_GUIDE.rst.txt
index a5726f16f..07f4b99df 100644
--- a/doc/C_STYLE_GUIDE.rst.txt
+++ b/doc/C_STYLE_GUIDE.rst.txt
@@ -10,9 +10,6 @@ to achieve uniformity. Because the NumPy conventions are very close to
those in PEP-0007, that PEP is used as a template below with the NumPy
additions and variations in the appropriate spots.
-NumPy modified PEP-0007
-=======================
-
Introduction
------------
@@ -31,10 +28,7 @@ Two good reasons to break a particular rule:
C dialect
---------
-* Use ANSI/ISO standard C (the 1989 version of the standard).
- This means, amongst many other things, that all declarations
- must be at the top of a block (not necessarily at the top of
- function).
+* Use C99 (that is, the standard defined by ISO/IEC 9899:1999).
* Don't use GCC extensions (e.g. don't write multi-line strings
without trailing backslashes). Preferably break long strings
@@ -49,9 +43,6 @@ C dialect
* All function declarations and definitions must use full
prototypes (i.e. specify the types of all arguments).
-* Do not use C++ style // one line comments, they aren't portable.
- Note: this will change with the proposed transition to C++.
-
* No compiler warnings with major compilers (gcc, VC++, a few others).
Note: NumPy still produces compiler warnings that need to be addressed.
@@ -138,7 +129,7 @@ Code lay-out
the open paren, no spaces inside the parens, no spaces before
commas, one space after each comma.
-* Always put spaces around assignment, Boolean and comparison
+* Always put spaces around the assignment, Boolean and comparison
operators. In expressions using a lot of operators, add spaces
around the outermost (lowest priority) operators.
@@ -179,12 +170,12 @@ Code lay-out
Trailing comments should be used sparingly. Instead of ::
- if (yes) {/* Success! */
+ if (yes) { // Success!
do ::
if (yes) {
- /* Success! */
+ // Success!
* All functions and global variables should be declared static
when they aren't needed outside the current compilation unit.
@@ -201,7 +192,7 @@ Naming conventions
In the future the names should be of the form ``Npy*_PublicFunction``,
where the star is something appropriate.
-* Public Macros should have a NPY_ prefix and then use upper case,
+* Public Macros should have a ``NPY_`` prefix and then use upper case,
for example, ``NPY_DOUBLE``.
* Private functions should be lower case with underscores, for example:
diff --git a/doc/DISTUTILS.rst.txt b/doc/DISTUTILS.rst.txt
index c027afff2..eadde63f8 100644
--- a/doc/DISTUTILS.rst.txt
+++ b/doc/DISTUTILS.rst.txt
@@ -297,11 +297,182 @@ in writing setup scripts:
+ ``config.get_info(*names)`` ---
-Template files
---------------
-XXX: Describe how files with extensions ``.f.src``, ``.pyf.src``,
-``.c.src``, etc. are pre-processed by the ``build_src`` command.
+.. _templating:
+
+Conversion of ``.src`` files using Templates
+--------------------------------------------
+
+NumPy distutils supports automatic conversion of source files named
+<somefile>.src. This facility can be used to maintain very similar
+code blocks requiring only simple changes between blocks. During the
+build phase of setup, if a template file named <somefile>.src is
+encountered, a new file named <somefile> is constructed from the
+template and placed in the build directory to be used instead. Two
+forms of template conversion are supported. The first form occurs for
+files named <file>.ext.src where ext is a recognized Fortran
+extension (f, f90, f95, f77, for, ftn, pyf). The second form is used
+for all other cases.
+
+.. index::
+ single: code generation
+
+Fortran files
+-------------
+
+This template converter will replicate all **function** and
+**subroutine** blocks in the file with names that contain '<...>'
+according to the rules in '<...>'. The number of comma-separated words
+in '<...>' determines the number of times the block is repeated. These
+words indicate what the repeat rule, '<...>', should be replaced with
+in each block. All of the repeat rules in a block must
+contain the same number of comma-separated words indicating the number
+of times that block should be repeated. If the word in the repeat rule
+needs a comma, left arrow ('<'), or right arrow ('>'), then prepend it
+with a backslash '\'. If a word in the repeat rule matches '\\<index>' then
+it will be replaced with the <index>-th word in the same repeat
+specification. There are two forms for the repeat rule: named and
+short.
+
+Named repeat rule
+^^^^^^^^^^^^^^^^^
+
+A named repeat rule is useful when the same set of repeats must be
+used several times in a block. It is specified using <rule1=item1,
+item2, item3,..., itemN>, where N is the number of times the block
+should be repeated. On each repeat of the block, the entire
+expression, '<...>' will be replaced first with item1, and then with
+item2, and so forth until N repeats are accomplished. Once a named
+repeat specification has been introduced, the same repeat rule may be
+used **in the current block** by referring only to the name
+(i.e. <rule1>).
+
+
+Short repeat rule
+^^^^^^^^^^^^^^^^^
+
+A short repeat rule looks like <item1, item2, item3, ..., itemN>. The
+rule specifies that the entire expression, '<...>' should be replaced
+first with item1, and then with item2, and so forth until N repeats
+are accomplished.
+
+
+Pre-defined names
+^^^^^^^^^^^^^^^^^
+
+The following predefined named repeat rules are available:
+
+- <prefix=s,d,c,z>
+
+- <_c=s,d,c,z>
+
+- <_t=real, double precision, complex, double complex>
+
+- <ftype=real, double precision, complex, double complex>
+
+- <ctype=float, double, complex_float, complex_double>
+
+- <ftypereal=float, double precision, \\0, \\1>
+
+- <ctypereal=float, double, \\0, \\1>
+
+
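As a rough illustration of how the short repeat rule behaves, the following Python sketch emulates it. The ``expand_short_rule`` helper is hypothetical and much simpler than the actual ``build_src`` converter: every rule in the block must list the same number of items, and backslash escapes and named rules are not handled.

```python
import re

def expand_short_rule(block):
    """Replicate ``block`` once per item of its <item1, item2, ...> rules,
    substituting each rule occurrence with the corresponding item.

    Simplified sketch, not the real numpy.distutils implementation.
    """
    rules = re.findall(r"<([^<>]+)>", block)
    if not rules:
        return block
    items = [[w.strip() for w in r.split(",")] for r in rules]
    n = len(items[0])  # every rule must agree on the item count
    copies = []
    for i in range(n):
        text = block
        for rule, words in zip(rules, items):
            text = text.replace("<%s>" % rule, words[i], 1)
        copies.append(text)
    return "".join(copies)

# <s,d> expands the subroutine once for single and once for double precision.
print(expand_short_rule("      subroutine <s,d>foo(x)\n"))
```

Running the sketch replicates the subroutine twice, once as ``sfoo`` and once as ``dfoo``, which is the effect the Fortran template converter produces for a two-item rule.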
+Other files
+------------
+
+Non-Fortran files use a separate syntax for defining template blocks
+that should be repeated using a variable expansion similar to the
+named repeat rules of the Fortran-specific repeats.
+
+NumPy Distutils preprocesses C source files (extension: :file:`.c.src`) written
+in a custom templating language to generate C code. The :c:data:`@` symbol is
+used to wrap macro-style variables to empower a string substitution mechanism
+that might describe (for instance) a set of data types.
+
+The template language blocks are delimited by :c:data:`/**begin repeat`
+and :c:data:`/**end repeat**/` lines, which may also be nested using
+consecutively numbered delimiting lines such as :c:data:`/**begin repeat1`
+and :c:data:`/**end repeat1**/`:
+
+1. "/\**begin repeat "on a line by itself marks the beginning of
+a segment that should be repeated.
+
+2. Named variable expansions are defined using ``#name=item1, item2, item3,
+..., itemN#`` and placed on successive lines. These variables are
+replaced in each repeat block with the corresponding word. All named
+variables in the same repeat block must define the same number of
+words.
+
+3. In specifying the repeat rule for a named variable, ``item*N`` is
+shorthand for ``item, item, ..., item`` repeated N times. In addition,
+parentheses in combination with ``*N`` can be used for grouping several
+items that should be repeated. Thus, ``#name=(item1, item2)*4#`` is
+equivalent to ``#name=item1, item2, item1, item2, item1, item2, item1,
+item2#``.
+
+4. "\*/ "on a line by itself marks the end of the variable expansion
+naming. The next line is the first line that will be repeated using
+the named rules.
+
+5. Inside the block to be repeated, the variables that should be expanded
+are specified as ``@name@``.
+
+6. "/\**end repeat**/ "on a line by itself marks the previous line
+as the last line of the block to be repeated.
+
+7. A loop in the NumPy C source code may have a ``@TYPE@`` variable, targeted
+for string substitution, which is preprocessed to a number of otherwise
+identical loops with several strings such as INT, LONG, UINT, ULONG. The
+``@TYPE@`` style syntax thus reduces code duplication and maintenance burden by
+mimicking languages that have generic type support.
+
+The above rules may be clearer in the following template source example:
+
+.. code-block:: NumPyC
+ :linenos:
+ :emphasize-lines: 3, 13, 29, 31
+
+ /* TIMEDELTA to non-float types */
+
+ /**begin repeat
+ *
+ * #TOTYPE = BYTE, UBYTE, SHORT, USHORT, INT, UINT, LONG, ULONG,
+ * LONGLONG, ULONGLONG, DATETIME,
+ * TIMEDELTA#
+ * #totype = npy_byte, npy_ubyte, npy_short, npy_ushort, npy_int, npy_uint,
+ * npy_long, npy_ulong, npy_longlong, npy_ulonglong,
+ * npy_datetime, npy_timedelta#
+ */
+
+ /**begin repeat1
+ *
+ * #FROMTYPE = TIMEDELTA#
+ * #fromtype = npy_timedelta#
+ */
+ static void
+ @FROMTYPE@_to_@TOTYPE@(void *input, void *output, npy_intp n,
+ void *NPY_UNUSED(aip), void *NPY_UNUSED(aop))
+ {
+ const @fromtype@ *ip = input;
+ @totype@ *op = output;
+
+ while (n--) {
+ *op++ = (@totype@)*ip++;
+ }
+ }
+ /**end repeat1**/
+
+ /**end repeat**/
+
+The preprocessing of generically typed C source files (whether in NumPy
+proper or in any third party package using NumPy Distutils) is performed
+by `conv_template.py`_.
+The type-specific C files generated (extension: ``.c``)
+by these modules during the build process are ready to be compiled. This
+form of generic typing is also supported for C header files (preprocessed
+to produce .h files).
+
+.. _conv_template.py: https://github.com/numpy/numpy/blob/master/numpy/distutils/conv_template.py
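The substitution rules above can be illustrated with a small Python sketch. This is a simplified, hypothetical stand-in for the real ``conv_template.py`` (the ``expand_repeat_block`` helper below handles neither nested ``repeat1`` blocks nor the ``(item)*N`` shorthand):

```python
import re

def expand_repeat_block(header, body):
    """Expand one /**begin repeat ... */ block (simplified sketch).

    ``header`` holds the ``#name=item1, item2, ...#`` definitions and
    ``body`` is the text between the header and /**end repeat**/.
    """
    rules = {}
    for name, items in re.findall(r"#\s*(\w+)\s*=\s*([^#]*)#", header):
        rules[name] = [w.strip() for w in items.split(",")]
    n = len(next(iter(rules.values())))  # all rules must agree on length
    chunks = []
    for i in range(n):
        text = body
        for name, items in rules.items():
            text = text.replace("@%s@" % name, items[i])
        chunks.append(text)
    return "".join(chunks)

header = "#TOTYPE = INT, LONG#\n#totype = npy_int, npy_long#\n"
body = "static void to_@TOTYPE@(@totype@ *op);\n"
print(expand_repeat_block(header, body))
```

The sketch emits one copy of the body per item pair, substituting ``@TOTYPE@`` and ``@totype@`` in lockstep, which mirrors how the timedelta example above is expanded into one conversion function per target type.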
Useful functions in ``numpy.distutils.misc_util``
-------------------------------------------------
@@ -427,7 +598,7 @@ Extra features in NumPy Distutils
'''''''''''''''''''''''''''''''''
Specifying config_fc options for libraries in setup.py script
-------------------------------------------------------------
+-------------------------------------------------------------
It is possible to specify config_fc options in setup.py scripts.
For example, using
diff --git a/doc/HOWTO_RELEASE.rst.txt b/doc/HOWTO_RELEASE.rst.txt
index a6a8fe8ab..e2aea12b7 100644
--- a/doc/HOWTO_RELEASE.rst.txt
+++ b/doc/HOWTO_RELEASE.rst.txt
@@ -5,7 +5,7 @@ Current build and release info
==============================
The current info on building and releasing NumPy and SciPy is scattered in
-several places. It should be summarized in one place, updated and where
+several places. It should be summarized in one place, updated, and where
necessary described in more detail. The sections below list all places where
useful info can be found.
@@ -37,8 +37,8 @@ Supported platforms and versions
================================
Python 2.7 and >=3.4 are the currently supported versions when building from
-source. We test numpy against all these versions every time we merge code to
-trunk. Binary installers may be available for a subset of these versions (see
+source. We test NumPy against all these versions every time we merge code to
+master. Binary installers may be available for a subset of these versions (see
below).
OS X
@@ -54,7 +54,7 @@ Windows
-------
We build 32- and 64-bit wheels for Python 2.7, 3.4, 3.5 on Windows. Windows
-XP, Vista, 7, 8 and 10 are supported. We build numpy using the MSVC compilers
+XP, Vista, 7, 8 and 10 are supported. We build NumPy using the MSVC compilers
on Appveyor, but we are hoping to update to a `mingw-w64 toolchain
<https://mingwpy.github.io>`_. The Windows wheels use ATLAS for BLAS / LAPACK.
@@ -62,7 +62,7 @@ Linux
-----
We build and ship `manylinux1 <https://www.python.org/dev/peps/pep-0513>`_
-wheels for numpy. Many Linux distributions include their own binary builds
+wheels for NumPy. Many Linux distributions include their own binary builds
of NumPy.
BSD / Solaris
@@ -93,7 +93,7 @@ each platform. At the moment this means:
* Manylinux1 wheels use the gcc provided on the Manylinux docker images.
You will need Cython for building the binaries. Cython compiles the ``.pyx``
-files in the numpy distribution to ``.c`` files.
+files in the NumPy distribution to ``.c`` files.
Building source archives and wheels
-----------------------------------
@@ -130,9 +130,9 @@ Uploading to PyPI
Generating author/pr lists
--------------------------
-You will need an personal access token
+You will need a personal access token
`<https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/>`_
-so that scripts can access the github numpy repository
+so that scripts can access the GitHub NumPy repository.
* gitpython (pip)
* pygithub (pip)
@@ -182,8 +182,8 @@ After a date is set, create a new maintenance/x.y.z branch, add new empty
release notes for the next version in the master branch and update the Trac
Milestones.
-Make sure current trunk builds a package correctly
---------------------------------------------------
+Make sure current branch builds a package correctly
+---------------------------------------------------
::
git clean -fxd
@@ -191,7 +191,7 @@ Make sure current trunk builds a package correctly
python setup.py sdist
To actually build the binaries after everything is set up correctly, the
-release.sh script can be used. For details of the build process itself it is
+release.sh script can be used. For details of the build process itself, it is
best to read the pavement.py script.
.. note:: The following steps are repeated for the beta(s), release
@@ -233,7 +233,7 @@ There are three steps to the process.
2. If the C_API_VERSION in the first step has changed, or if the hash of
the API has changed, the cversions.txt file needs to be updated. To check
- the hash, run the script numpy/core/cversions.py and note the api hash that
+ the hash, run the script numpy/core/cversions.py and note the API hash that
is printed. If that hash does not match the last hash in
numpy/core/code_generators/cversions.txt the hash has changed. Using both
the appropriate C_API_VERSION and hash, add a new entry to cversions.txt.
@@ -244,7 +244,7 @@ There are three steps to the process.
definitive.
If steps 1 and 2 are done correctly, compiling the release should not give
- a warning "API mismatch detect at the beginning of the build.
+ a warning "API mismatch detected" at the beginning of the build.
3. The numpy/core/include/numpy/numpyconfig.h will need a new
NPY_X_Y_API_VERSION macro, where X and Y are the major and minor version
@@ -271,7 +271,7 @@ Mention at least the following:
- outlook for the near future
Also make sure that as soon as the branch is made, there is a new release
-notes file in trunk for the next release.
+notes file in the master branch for the next release.
Update the release status and create a release "tag"
----------------------------------------------------
@@ -318,10 +318,10 @@ The ``-s`` flag makes a PGP (usually GPG) signed tag. Please do sign the
release tags.
The release tag should have the release number in the annotation (tag
-message). Unfortunately the name of a tag can be changed without breaking the
+message). Unfortunately, the name of a tag can be changed without breaking the
signature; the contents of the message cannot.
-See : https://github.com/scipy/scipy/issues/4919 for a discussion of signing
+See: https://github.com/scipy/scipy/issues/4919 for a discussion of signing
release tags, and https://keyring.debian.org/creating-key.html for instructions
on creating a GPG key if you do not have one.
@@ -479,9 +479,9 @@ Announce to the lists
The release should be announced on the mailing lists of
NumPy and SciPy, to python-announce, and possibly also those of
-Matplotlib,IPython and/or Pygame.
+Matplotlib, IPython and/or Pygame.
-During the beta/RC phase an explicit request for testing the binaries with
+During the beta/RC phase, an explicit request for testing the binaries with
several other libraries (SciPy/Matplotlib/Pygame) should be posted on the
mailing list.
@@ -497,5 +497,5 @@ After the final release is announced, a few administrative tasks are left to be
done:
- Forward port changes in the release branch to release notes and release
- scripts, if any, to trunk.
+ scripts, if any, to the master branch.
- Update the Milestones in Trac.
diff --git a/doc/Makefile b/doc/Makefile
index d61d115f0..842d2ad13 100644
--- a/doc/Makefile
+++ b/doc/Makefile
@@ -1,23 +1,34 @@
# Makefile for Sphinx documentation
#
-PYVER = 3
+# PYVER needs to be major.minor, just "3" doesn't work - it will result in
+# issues with the amendments to PYTHONPATH and install paths (see DIST_VARS).
+
+# Use explicit "version_info" indexing since make cannot handle colon characters, and
+# evaluate it now to allow easier debugging when printing the variable
+
+PYVER:=$(shell python3 -c 'from sys import version_info as v; print("{0}.{1}".format(v[0], v[1]))')
PYTHON = python$(PYVER)
+NUMPYVER:=$(shell python3 -c "import numpy; print(numpy.version.git_revision[:10])")
+GITVER ?= $(shell cd ..; python3 -c "from setup import git_version; \
+ print(git_version()[:10])")
+
# You can set these variables from the command line.
-SPHINXOPTS =
-SPHINXBUILD = LANG=C sphinx-build
-PAPER =
+SPHINXOPTS ?=
+SPHINXBUILD ?= LANG=C sphinx-build
+PAPER ?=
FILES=
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
-ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
+ALLSPHINXOPTS = -WT --keep-going -d build/doctrees $(PAPEROPT_$(PAPER)) \
+ $(SPHINXOPTS) source
.PHONY: help clean html web pickle htmlhelp latex changes linkcheck \
- dist dist-build gitwash-update
+ dist dist-build gitwash-update version-check
#------------------------------------------------------------------------------
@@ -35,7 +46,20 @@ help:
@echo " upload USERNAME=... RELEASE=... to upload built docs to docs.scipy.org"
clean:
- -rm -rf build/* source/reference/generated
+ -rm -rf build/*
+ find . -name generated -type d -prune -exec rm -rf "{}" ";"
+
+version-check:
+ifeq "$(GITVER)" "Unknown"
+ # @echo sdist build with unlabeled sources
+else ifneq ($(NUMPYVER),$(GITVER))
+ @echo installed numpy $(NUMPYVER) != current repo git version \'$(GITVER)\'
+ @echo use '"make dist"' or '"GITVER=$(NUMPYVER) make $(MAKECMDGOALS) ..."'
+ @exit 1
+else
+ # for testing
+ # @echo installed numpy $(NUMPYVER) matches git version $(GITVER); exit 1
+endif
gitwash-update:
rm -rf source/dev/gitwash
@@ -58,7 +82,7 @@ gitwash-update:
#
-INSTALL_DIR = $(CURDIR)/build/inst-dist/
+INSTALL_DIR = $(CURDIR)/build/inst-dist
INSTALL_PPH = $(INSTALL_DIR)/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/site-packages:$(INSTALL_DIR)/lib/python$(PYVER)/dist-packages:$(INSTALL_DIR)/local/lib/python$(PYVER)/dist-packages
UPLOAD_DIR=/srv/docs_scipy_org/doc/numpy-$(RELEASE)
@@ -112,7 +136,7 @@ build/generate-stamp: $(wildcard source/reference/*.rst)
mkdir -p build
touch build/generate-stamp
-html: generate
+html: generate version-check
mkdir -p build/html build/doctrees
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html $(FILES)
$(PYTHON) postprocess.py html build/html/*.html
@@ -125,7 +149,7 @@ html-scipyorg:
@echo
@echo "Build finished. The HTML pages are in build/html."
-pickle: generate
+pickle: generate version-check
mkdir -p build/pickle build/doctrees
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) build/pickle $(FILES)
@echo
@@ -135,7 +159,7 @@ pickle: generate
web: pickle
-htmlhelp: generate
+htmlhelp: generate version-check
mkdir -p build/htmlhelp build/doctrees
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp $(FILES)
@echo
@@ -146,11 +170,11 @@ htmlhelp-build: htmlhelp build/htmlhelp/numpy.chm
%.chm: %.hhp
-hhc.exe $^
-qthelp: generate
+qthelp: generate version-check
mkdir -p build/qthelp build/doctrees
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) build/qthelp $(FILES)
-latex: generate
+latex: generate version-check
mkdir -p build/latex build/doctrees
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex $(FILES)
$(PYTHON) postprocess.py tex build/latex/*.tex
@@ -160,18 +184,18 @@ latex: generate
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
-coverage: build
+coverage: build version-check
mkdir -p build/coverage build/doctrees
$(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) build/coverage $(FILES)
@echo "Coverage finished; see c.txt and python.txt in build/coverage"
-changes: generate
+changes: generate version-check
mkdir -p build/changes build/doctrees
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes $(FILES)
@echo
@echo "The overview file is in build/changes."
-linkcheck: generate
+linkcheck: generate version-check
mkdir -p build/linkcheck build/doctrees
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck $(FILES)
@echo
diff --git a/doc/RELEASE_WALKTHROUGH.rst.txt b/doc/RELEASE_WALKTHROUGH.rst.txt
index 960bb3f3e..6987dd6c1 100644
--- a/doc/RELEASE_WALKTHROUGH.rst.txt
+++ b/doc/RELEASE_WALKTHROUGH.rst.txt
@@ -6,6 +6,11 @@ replace 1.14.5 by the correct version.
Release Walkthrough
====================
+Note that in the code snippets below, ``upstream`` refers to the root repository on
+GitHub and ``origin`` to a fork in your personal account. You may need to make adjustments
+if you have not forked the repository but simply cloned it locally. You can
+also edit ``.git/config`` and add ``upstream`` if it isn't already present.
+
Backport Pull Requests
----------------------
@@ -55,7 +60,7 @@ Edit pavement.py and setup.py as detailed in HOWTO_RELEASE::
Sanity check::
- $ python runtests.py -m "full"
+ $ python runtests.py -m "full" # NumPy < 1.17 only
$ python3 runtests.py -m "full"
Push this release directly onto the end of the maintenance branch. This
@@ -86,7 +91,7 @@ commit. This can take a while. The numpy-wheels repository is cloned from
may have been accessed and changed by someone else and a push will fail::
$ cd ../numpy-wheels
- $ git pull origin master
+ $ git pull upstream master
$ git branch <new version> # only when starting new numpy version
$ git checkout v1.14.x # v1.14.x already existed for the 1.14.4 release
@@ -96,7 +101,7 @@ above for ``BUILD_COMMIT``, see the _example from `v1.14.3`::
$ gvim .travis.yml .appveyor.yml
$ git commit -a
- $ git push origin HEAD
+ $ git push upstream HEAD
Now wait. If you get nervous at the amount of time taken -- the builds can take
several hours-- you can check the build progress by following the links
@@ -121,7 +126,7 @@ download all the wheels to the ``../numpy/release/installers`` directory and
upload later using ``twine``::
$ cd ../terryfy
- $ git pull origin master
+ $ git pull upstream master
$ CDN_URL=https://3f23b170c54c2533c070-1c8a9b3114517dc5fe17b7c3f8c63a43.ssl.cf2.rackcdn.com
$ NPY_WHLS=../numpy/release/installers
$ ./wheel-uploader -u $CDN_URL -n -v -w $NPY_WHLS -t win numpy 1.14.5
@@ -134,8 +139,8 @@ environment.
Generate the README files
-------------------------
-This needs to be done after all installers are present, but before the pavement
-file is updated for continued development.
+This needs to be done after all installers are downloaded, but before the pavement
+file is updated for continued development::
$ cd ../numpy
$ paver write_release
@@ -146,7 +151,7 @@ Tag the release
Once the wheels have been built and downloaded without errors, go back to your
numpy repository in the maintenance branch and tag the ``REL`` commit, signing
-it with your gpg key, and build the source distribution archives::
+it with your gpg key::
$ git tag -s v1.14.5
@@ -158,8 +163,8 @@ push the tag upstream::
$ git push upstream v1.14.5
-We wait until this point to push the tag because it is very difficult to change
-the tag after it has been pushed.
+We wait until this point to push the tag because it is public and should not
+be changed after it has been pushed.
Reset the maintenance branch into a development state
@@ -169,6 +174,19 @@ Add another ``REL`` commit to the numpy maintenance branch, which resets the
``ISRELEASED`` flag to ``False`` and increments the version counter::
$ gvim pavement.py setup.py
+
+Create release notes for next release and edit them to set the version::
+
+ $ cp doc/release/template.rst doc/release/1.14.6-notes.rst
+ $ gvim doc/release/1.14.6-notes.rst
+ $ git add doc/release/1.14.6-notes.rst
+
+Add new release notes to the documentation release list::
+
+ $ gvim doc/source/release.rst
+
+Commit the result::
+
$ git commit -a -m"REL: prepare 1.14.x for further development"
$ git push upstream maintenance/1.14.x
@@ -177,7 +195,9 @@ Upload to PyPI
--------------
Upload to PyPI using ``twine``. A recent version of ``twine`` is needed
-after recent PyPI changes, version ``1.11.0`` was used here. ::
+after recent PyPI changes; version ``1.11.0`` was used here.
+
+.. code-block:: sh
$ cd ../numpy
$ twine upload release/installers/*.whl
@@ -251,8 +271,9 @@ Announce to mailing lists
The release should be announced on the numpy-discussion, scipy-devel,
scipy-user, and python-announce-list mailing lists. Look at previous
-announcements for the basic template. The contributor and PR lists
-are the same as generated for the release notes above.
+announcements for the basic template. The contributor and PR lists are the same
+as generated for the release notes above. If you crosspost, make sure that
+python-announce-list is BCC'd so that replies will not be sent to that list.
Post-Release Tasks
diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt
index 5fe0be1f1..14cb28df8 100644
--- a/doc/TESTS.rst.txt
+++ b/doc/TESTS.rst.txt
@@ -37,10 +37,9 @@ or from the command line::
$ python runtests.py
-SciPy uses the testing framework from NumPy (specifically
-:ref:`numpy-testing`), so all the SciPy examples shown here are also
-applicable to NumPy. NumPy's full test suite can be run as
-follows::
+SciPy uses the testing framework from :mod:`numpy.testing`, so all
+the SciPy examples shown here are also applicable to NumPy. NumPy's full test
+suite can be run as follows::
>>> import numpy
>>> numpy.test()
@@ -120,15 +119,6 @@ that makes it hard to identify the test from the output of running the test
suite with ``verbose=2`` (or similar verbosity setting). Use plain comments
(``#``) if necessary.
-Sometimes it is convenient to run ``test_yyy.py`` by itself, so we add
-
-::
-
- if __name__ == "__main__":
- run_module_suite()
-
-at the bottom.
-
Labeling tests
--------------
@@ -331,35 +321,33 @@ Known failures & skipping tests
Sometimes you might want to skip a test or mark it as a known failure,
such as when the test suite is being written before the code it's
meant to test, or if a test only fails on a particular architecture.
-The decorators from numpy.testing.dec can be used to do this.
To skip a test, simply use ``skipif``::
- from numpy.testing import dec
+ import pytest
- @dec.skipif(SkipMyTest, "Skipping this test because...")
+ @pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
def test_something(foo):
...
The test is marked as skipped if ``SkipMyTest`` evaluates to nonzero,
and the message in verbose test output is the second argument given to
``skipif``. Similarly, a test can be marked as a known failure by
-using ``knownfailureif``::
+using ``xfail``::
- from numpy.testing import dec
+ import pytest
- @dec.knownfailureif(MyTestFails, "This test is known to fail because...")
+ @pytest.mark.xfail(MyTestFails, reason="This test is known to fail because...")
def test_something_else(foo):
...
Of course, a test can be unconditionally skipped or marked as a known
-failure by passing ``True`` as the first argument to ``skipif`` or
-``knownfailureif``, respectively.
+failure by using ``skip`` or ``xfail`` without argument, respectively.
A total of the number of skipped and known failing tests is displayed
at the end of the test run. Skipped tests are marked as ``'S'`` in
the test results (or ``'SKIPPED'`` for ``verbose > 1``), and known
-failing tests are marked as ``'K'`` (or ``'KNOWN'`` if ``verbose >
+failing tests are marked as ``'x'`` (or ``'XFAIL'`` if ``verbose >
1``).
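The skip/xfail markers described in the hunk above can be sketched as a small runnable test module; ``SkipMyTest`` here is a hypothetical condition flag standing in for whatever runtime check a real test would use:

```python
import pytest

# Hypothetical condition: skip when the feature under test is unavailable.
SkipMyTest = True

@pytest.mark.skipif(SkipMyTest, reason="Skipping this test because...")
def test_something():
    # Collected but skipped while SkipMyTest is truthy; runs otherwise.
    assert 1 + 1 == 2

@pytest.mark.xfail(reason="This test is known to fail because...")
def test_something_else():
    # Reported as 'x' / 'XFAIL' rather than as a failure.
    assert False
```

Running ``pytest -v`` on such a module reports the first test as SKIPPED and the second as XFAIL, matching the ``'S'``/``'x'`` result codes mentioned above.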
Tests on random data
diff --git a/doc/changelog/1.15.0-changelog.rst b/doc/changelog/1.15.0-changelog.rst
index b76b9699a..4e3d3680b 100644
--- a/doc/changelog/1.15.0-changelog.rst
+++ b/doc/changelog/1.15.0-changelog.rst
@@ -374,7 +374,7 @@ A total of 438 pull requests were merged for this release.
* `#10778 <https://github.com/numpy/numpy/pull/10778>`__: BUG: test, fix for missing flags['WRITEBACKIFCOPY'] key
* `#10781 <https://github.com/numpy/numpy/pull/10781>`__: ENH: NEP index builder
* `#10785 <https://github.com/numpy/numpy/pull/10785>`__: DOC: Fixed author name in reference to book
-* `#10786 <https://github.com/numpy/numpy/pull/10786>`__: ENH: Add "stablesort" option to inp.sort as an alias for "mergesort".
+* `#10786 <https://github.com/numpy/numpy/pull/10786>`__: ENH: Add "stable" option to np.sort as an alias for "mergesort".
* `#10790 <https://github.com/numpy/numpy/pull/10790>`__: TST: Various fixes prior to switching to pytest
* `#10795 <https://github.com/numpy/numpy/pull/10795>`__: BUG: Allow spaces in output string of einsum
* `#10796 <https://github.com/numpy/numpy/pull/10796>`__: BUG: fix wrong inplace vectorization on overlapping arguments
diff --git a/doc/changelog/1.16.1-changelog.rst b/doc/changelog/1.16.1-changelog.rst
index aaa803c14..30e0e3a24 100644
--- a/doc/changelog/1.16.1-changelog.rst
+++ b/doc/changelog/1.16.1-changelog.rst
@@ -25,7 +25,7 @@ names contributed a patch for the first time.
Pull requests merged
====================
-A total of 32 pull requests were merged for this release.
+A total of 33 pull requests were merged for this release.
* `#12754 <https://github.com/numpy/numpy/pull/12754>`__: BUG: Check paths are unicode, bytes or path-like
* `#12767 <https://github.com/numpy/numpy/pull/12767>`__: ENH: add mm->q floordiv
diff --git a/doc/changelog/1.16.5-changelog.rst b/doc/changelog/1.16.5-changelog.rst
deleted file mode 100644
index 19374058d..000000000
--- a/doc/changelog/1.16.5-changelog.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-
-Contributors
-============
-
-A total of 18 people contributed to this release. People with a "+" by their
-names contributed a patch for the first time.
-
-* Alexander Shadchin
-* Allan Haldane
-* Bruce Merry +
-* Charles Harris
-* Colin Snyder +
-* Dan Allan +
-* Emile +
-* Eric Wieser
-* Grey Baker +
-* Maksim Shabunin +
-* Marten van Kerkwijk
-* Matti Picus
-* Peter Andreas Entschev +
-* Ralf Gommers
-* Richard Harris +
-* Sebastian Berg
-* Sergei Lebedev +
-* Stephan Hoyer
-
-Pull requests merged
-====================
-
-A total of 23 pull requests were merged for this release.
-
-* `#13742 <https://github.com/numpy/numpy/pull/13742>`__: ENH: Add project URLs to setup.py
-* `#13823 <https://github.com/numpy/numpy/pull/13823>`__: TEST, ENH: fix tests and ctypes code for PyPy
-* `#13845 <https://github.com/numpy/numpy/pull/13845>`__: BUG: use npy_intp instead of int for indexing array
-* `#13867 <https://github.com/numpy/numpy/pull/13867>`__: TST: Ignore DeprecationWarning during nose imports
-* `#13905 <https://github.com/numpy/numpy/pull/13905>`__: BUG: Fix use-after-free in boolean indexing
-* `#13933 <https://github.com/numpy/numpy/pull/13933>`__: MAINT/BUG/DOC: Fix errors in _add_newdocs
-* `#13984 <https://github.com/numpy/numpy/pull/13984>`__: BUG: fix byte order reversal for datetime64[ns]
-* `#13994 <https://github.com/numpy/numpy/pull/13994>`__: MAINT,BUG: Use nbytes to also catch empty descr during allocation
-* `#14042 <https://github.com/numpy/numpy/pull/14042>`__: BUG: np.array cleared errors occured in PyMemoryView_FromObject
-* `#14043 <https://github.com/numpy/numpy/pull/14043>`__: BUG: Fixes for Undefined Behavior Sanitizer (UBSan) errors.
-* `#14044 <https://github.com/numpy/numpy/pull/14044>`__: BUG: ensure that casting to/from structured is properly checked.
-* `#14045 <https://github.com/numpy/numpy/pull/14045>`__: MAINT: fix histogram*d dispatchers
-* `#14046 <https://github.com/numpy/numpy/pull/14046>`__: BUG: further fixup to histogram2d dispatcher.
-* `#14052 <https://github.com/numpy/numpy/pull/14052>`__: BUG: Replace contextlib.suppress for Python 2.7
-* `#14056 <https://github.com/numpy/numpy/pull/14056>`__: BUG: fix compilation of 3rd party modules with Py_LIMITED_API...
-* `#14057 <https://github.com/numpy/numpy/pull/14057>`__: BUG: Fix memory leak in dtype from dict contructor
-* `#14058 <https://github.com/numpy/numpy/pull/14058>`__: DOC: Document array_function at a higher level.
-* `#14084 <https://github.com/numpy/numpy/pull/14084>`__: BUG, DOC: add new recfunctions to `__all__`
-* `#14162 <https://github.com/numpy/numpy/pull/14162>`__: BUG: Remove stray print that causes a SystemError on python 3.7
-* `#14297 <https://github.com/numpy/numpy/pull/14297>`__: TST: Pin pytest version to 5.0.1.
-* `#14322 <https://github.com/numpy/numpy/pull/14322>`__: ENH: Enable huge pages in all Linux builds
-* `#14346 <https://github.com/numpy/numpy/pull/14346>`__: BUG: fix behavior of structured_to_unstructured on non-trivial...
-* `#14382 <https://github.com/numpy/numpy/pull/14382>`__: REL: Prepare for the NumPy 1.16.5 release.
diff --git a/doc/changelog/1.16.6-changelog.rst b/doc/changelog/1.16.6-changelog.rst
deleted file mode 100644
index 62ff46c34..000000000
--- a/doc/changelog/1.16.6-changelog.rst
+++ /dev/null
@@ -1,36 +0,0 @@
-
-Contributors
-============
-
-A total of 10 people contributed to this release.
-
-* CakeWithSteak
-* Charles Harris
-* Chris Burr
-* Eric Wieser
-* Fernando Saravia
-* Lars Grueter
-* Matti Picus
-* Maxwell Aladago
-* Qiming Sun
-* Warren Weckesser
-
-Pull requests merged
-====================
-
-A total of 14 pull requests were merged for this release.
-
-* `#14211 <https://github.com/numpy/numpy/pull/14211>`__: BUG: Fix uint-overflow if padding with linear_ramp and negative...
-* `#14275 <https://github.com/numpy/numpy/pull/14275>`__: BUG: fixing to allow unpickling of PY3 pickles from PY2
-* `#14340 <https://github.com/numpy/numpy/pull/14340>`__: BUG: Fix misuse of .names and .fields in various places (backport...
-* `#14423 <https://github.com/numpy/numpy/pull/14423>`__: BUG: test, fix regression in converting to ctypes.
-* `#14434 <https://github.com/numpy/numpy/pull/14434>`__: BUG: Fixed maximum relative error reporting in assert_allclose
-* `#14509 <https://github.com/numpy/numpy/pull/14509>`__: BUG: Fix regression in boolean matmul.
-* `#14686 <https://github.com/numpy/numpy/pull/14686>`__: BUG: properly define PyArray_DescrCheck
-* `#14853 <https://github.com/numpy/numpy/pull/14853>`__: BLD: add 'apt update' to shippable
-* `#14854 <https://github.com/numpy/numpy/pull/14854>`__: BUG: Fix _ctypes class circular reference. (#13808)
-* `#14856 <https://github.com/numpy/numpy/pull/14856>`__: BUG: Fix `np.einsum` errors on Power9 Linux and z/Linux
-* `#14863 <https://github.com/numpy/numpy/pull/14863>`__: BLD: Prevent -flto from optimising long double representation...
-* `#14864 <https://github.com/numpy/numpy/pull/14864>`__: BUG: lib: Fix histogram problem with signed integer arrays.
-* `#15172 <https://github.com/numpy/numpy/pull/15172>`__: ENH: Backport improvements to testing functions.
-* `#15191 <https://github.com/numpy/numpy/pull/15191>`__: REL: Prepare for 1.16.6 release.
diff --git a/doc/changelog/1.17.0-changelog.rst b/doc/changelog/1.17.0-changelog.rst
new file mode 100644
index 000000000..debfb6f5b
--- /dev/null
+++ b/doc/changelog/1.17.0-changelog.rst
@@ -0,0 +1,694 @@
+
+Contributors
+============
+
+A total of 150 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Aaron Voelker +
+* Abdur Rehman +
+* Abdur-Rahmaan Janhangeer +
+* Abhinav Sagar +
+* Adam J. Stewart +
+* Adam Orr +
+* Albert Thomas +
+* Alex Watt +
+* Alexander Blinne +
+* Alexander Shadchin
+* Allan Haldane
+* Ander Ustarroz +
+* Andras Deak
+* Andrea Pattori +
+* Andreas Schwab
+* Andrew Naguib +
+* Andy Scholand +
+* Ankit Shukla +
+* Anthony Sottile
+* Antoine Pitrou
+* Antony Lee
+* Arcesio Castaneda Medina +
+* Assem +
+* Bernardt Duvenhage +
+* Bharat Raghunathan +
+* Bharat123rox +
+* Bran +
+* Bruce Merry +
+* Charles Harris
+* Chirag Nighut +
+* Christoph Gohlke
+* Christopher Whelan +
+* Chuanzhu Xu +
+* Colin Snyder +
+* Dan Allan +
+* Daniel Hrisca
+* Daniel Lawrence +
+* Debsankha Manik +
+* Dennis Zollo +
+* Dieter Werthmüller +
+* Dominic Jack +
+* EelcoPeacs +
+* Eric Larson
+* Eric Wieser
+* Fabrice Fontaine +
+* Gary Gurlaskie +
+* Gregory Lee +
+* Gregory R. Lee
+* Guillaume Horel +
+* Hameer Abbasi
+* Haoyu Sun +
+* Harmon +
+* He Jia +
+* Hunter Damron +
+* Ian Sanders +
+* Ilja +
+* Isaac Virshup +
+* Isaiah Norton +
+* Jackie Leng +
+* Jaime Fernandez
+* Jakub Wilk
+* Jan S. (Milania1) +
+* Jarrod Millman
+* Javier Dehesa +
+* Jeremy Lay +
+* Jim Turner +
+* Jingbei Li +
+* Joachim Hereth +
+* Johannes Hampp +
+* John Belmonte +
+* John Kirkham
+* John Law +
+* Jonas Jensen
+* Joseph Fox-Rabinovitz
+* Joseph Martinot-Lagarde
+* Josh Wilson
+* Juan Luis Cano Rodríguez
+* Julian Taylor
+* Jérémie du Boisberranger +
+* Kai Striega +
+* Katharine Hyatt +
+* Kevin Sheppard
+* Kexuan Sun
+* Kiko Correoso +
+* Kriti Singh +
+* Lars Grueter +
+* Luis Pedro Coelho
+* Maksim Shabunin +
+* Manvi07 +
+* Mark Harfouche
+* Marten van Kerkwijk
+* Martin Reinecke +
+* Matthew Brett
+* Matthias Bussonnier
+* Matti Picus
+* Michel Fruchart +
+* Mike Lui +
+* Mike Taves +
+* Min ho Kim +
+* Mircea Akos Bruma
+* Nick Minkyu Lee
+* Nick Papior
+* Nick R. Papior +
+* Nicola Soranzo +
+* Nimish Telang +
+* OBATA Akio +
+* Oleksandr Pavlyk
+* Ori Broda +
+* Paul Ivanov
+* Pauli Virtanen
+* Peter Andreas Entschev +
+* Peter Bell +
+* Pierre de Buyl
+* Piyush Jaipuriayar +
+* Prithvi MK +
+* Raghuveer Devulapalli +
+* Ralf Gommers
+* Richard Harris +
+* Rishabh Chakrabarti +
+* Riya Sharma +
+* Robert Kern
+* Roman Yurchak
+* Ryan Levy +
+* Sebastian Berg
+* Sergei Lebedev +
+* Shekhar Prasad Rajak +
+* Stefan van der Walt
+* Stephan Hoyer
+* Steve Stagg +
+* SuryaChand P +
+* Søren Rasmussen +
+* Thibault Hallouin +
+* Thomas A Caswell
+* Tobias Uelwer +
+* Tony LaTorre +
+* Toshiki Kataoka
+* Tyler Moncur +
+* Tyler Reddy
+* Valentin Haenel
+* Vrinda Narayan +
+* Warren Weckesser
+* Weitang Li
+* Wojtek Ruszczewski
+* Yu Feng
+* Yu Kobayashi +
+* Yury Kirienko +
+* aashuli +
+* luzpaz
+* parul +
+* spacescientist +
+
+Pull requests merged
+====================
+
+A total of 531 pull requests were merged for this release.
+
+* `#4808 <https://github.com/numpy/numpy/pull/4808>`__: ENH: Make the `mode` parameter of np.pad default to 'constant'
+* `#8131 <https://github.com/numpy/numpy/pull/8131>`__: BUG: Fix help() formatting for deprecated functions.
+* `#8159 <https://github.com/numpy/numpy/pull/8159>`__: ENH: Add import time benchmarks.
+* `#8641 <https://github.com/numpy/numpy/pull/8641>`__: BUG: Preserve types of empty arrays in ix_ when known
+* `#8662 <https://github.com/numpy/numpy/pull/8662>`__: ENH: preserve subclasses in ufunc.outer
+* `#9330 <https://github.com/numpy/numpy/pull/9330>`__: ENH: Make errstate a ContextDecorator in Python3
+* `#10308 <https://github.com/numpy/numpy/pull/10308>`__: API: Make MaskedArray.mask return a view, rather than the underlying...
+* `#10417 <https://github.com/numpy/numpy/pull/10417>`__: ENH: Allow dtype objects to be indexed with multiple fields at...
+* `#10723 <https://github.com/numpy/numpy/pull/10723>`__: BUG: longdouble(int) does not work
+* `#10741 <https://github.com/numpy/numpy/pull/10741>`__: ENH: Implement `np.floating.as_integer_ratio`
+* `#10855 <https://github.com/numpy/numpy/pull/10855>`__: ENH: Adding a count parameter to np.unpackbits
+* `#11230 <https://github.com/numpy/numpy/pull/11230>`__: MAINT: More cleanup of einsum
+* `#11233 <https://github.com/numpy/numpy/pull/11233>`__: BUG: ensure i0 does not change the shape.
+* `#11684 <https://github.com/numpy/numpy/pull/11684>`__: BUG: Raise when unravel_index, ravel_multi_index are given empty...
+* `#11689 <https://github.com/numpy/numpy/pull/11689>`__: DOC: Add ref docs for C generic types.
+* `#11721 <https://github.com/numpy/numpy/pull/11721>`__: BUG: Make `arr.ctypes.data` hold onto a reference to the underlying...
+* `#11829 <https://github.com/numpy/numpy/pull/11829>`__: MAINT: Use textwrap.dedent in f2py tests
+* `#11859 <https://github.com/numpy/numpy/pull/11859>`__: BUG: test and fix np.dtype('i,L') #5645
+* `#11888 <https://github.com/numpy/numpy/pull/11888>`__: ENH: Add pocketfft sources to numpy for testing, benchmarks,...
+* `#11977 <https://github.com/numpy/numpy/pull/11977>`__: BUG: reference cycle in np.vectorize
+* `#12025 <https://github.com/numpy/numpy/pull/12025>`__: DOC: add detail for 'where' argument in ufunc
+* `#12152 <https://github.com/numpy/numpy/pull/12152>`__: TST: Added tests for np.tensordot()
+* `#12201 <https://github.com/numpy/numpy/pull/12201>`__: TST: coverage for _commonType()
+* `#12234 <https://github.com/numpy/numpy/pull/12234>`__: MAINT: refactor PyArray_AdaptFlexibleDType to return a meaningful...
+* `#12239 <https://github.com/numpy/numpy/pull/12239>`__: BUG: polyval returned non-masked arrays for masked input.
+* `#12253 <https://github.com/numpy/numpy/pull/12253>`__: DOC, TST: enable doctests
+* `#12308 <https://github.com/numpy/numpy/pull/12308>`__: ENH: add mm->q floordiv
+* `#12317 <https://github.com/numpy/numpy/pull/12317>`__: ENH: port np.core.overrides to C for speed
+* `#12333 <https://github.com/numpy/numpy/pull/12333>`__: DOC: update description of the Dirichlet distribution
+* `#12418 <https://github.com/numpy/numpy/pull/12418>`__: ENH: Add timsort to npysort
+* `#12428 <https://github.com/numpy/numpy/pull/12428>`__: ENH: always use zip64, upgrade pickle protocol to 3
+* `#12456 <https://github.com/numpy/numpy/pull/12456>`__: ENH: Add np.ctypeslib.as_ctypes_type(dtype), improve `np.ctypeslib.as_ctypes`
+* `#12457 <https://github.com/numpy/numpy/pull/12457>`__: TST: openblas for Azure MacOS
+* `#12463 <https://github.com/numpy/numpy/pull/12463>`__: DOC: fix docstrings for broadcastable inputs in ufunc
+* `#12502 <https://github.com/numpy/numpy/pull/12502>`__: TST: Azure Python version fix
+* `#12506 <https://github.com/numpy/numpy/pull/12506>`__: MAINT: Prepare master for 1.17.0 development.
+* `#12508 <https://github.com/numpy/numpy/pull/12508>`__: DOC, MAINT: Make `PYVER = 3` in doc/Makefile.
+* `#12511 <https://github.com/numpy/numpy/pull/12511>`__: BUG: don't check alignment of size=0 arrays (RELAXED_STRIDES)
+* `#12512 <https://github.com/numpy/numpy/pull/12512>`__: added template-generated files to .gitignore
+* `#12519 <https://github.com/numpy/numpy/pull/12519>`__: ENH/DEP: Use a ufunc under the hood for ndarray.clip
+* `#12522 <https://github.com/numpy/numpy/pull/12522>`__: BUG: Make new-lines in compiler error messages print to the console
+* `#12524 <https://github.com/numpy/numpy/pull/12524>`__: BUG: fix improper use of C-API
+* `#12526 <https://github.com/numpy/numpy/pull/12526>`__: BUG: reorder operations for VS2015
+* `#12527 <https://github.com/numpy/numpy/pull/12527>`__: DEV: Fix lgtm.com C/C++ build
+* `#12528 <https://github.com/numpy/numpy/pull/12528>`__: BUG: fix an unsafe PyTuple_GET_ITEM call
+* `#12532 <https://github.com/numpy/numpy/pull/12532>`__: DEV: add ctags option file
+* `#12534 <https://github.com/numpy/numpy/pull/12534>`__: DOC: Fix desc. of Ellipsis behavior in reference
+* `#12537 <https://github.com/numpy/numpy/pull/12537>`__: DOC: Change 'num' to 'np'
+* `#12538 <https://github.com/numpy/numpy/pull/12538>`__: MAINT: remove VC 9.0 from CI
+* `#12539 <https://github.com/numpy/numpy/pull/12539>`__: DEV: remove travis 32 bit job since it is running on azure
+* `#12543 <https://github.com/numpy/numpy/pull/12543>`__: TST: wheel-match Linux openblas in CI
+* `#12544 <https://github.com/numpy/numpy/pull/12544>`__: BUG: fix refcount issue caused by #12524
+* `#12545 <https://github.com/numpy/numpy/pull/12545>`__: BUG: Ensure probabilities are not NaN in choice
+* `#12546 <https://github.com/numpy/numpy/pull/12546>`__: BUG: check for errors after PyArray_DESCR_REPLACE
+* `#12547 <https://github.com/numpy/numpy/pull/12547>`__: ENH: Cast covariance to double in random mvnormal
+* `#12549 <https://github.com/numpy/numpy/pull/12549>`__: TST: relax codecov project threshold
+* `#12551 <https://github.com/numpy/numpy/pull/12551>`__: MAINT: add warning to numpy.distutils for LDFLAGS append behavior.
+* `#12552 <https://github.com/numpy/numpy/pull/12552>`__: BENCH: Improve benchmarks for numpy.pad
+* `#12554 <https://github.com/numpy/numpy/pull/12554>`__: DOC: more doc updates for structured arrays
+* `#12555 <https://github.com/numpy/numpy/pull/12555>`__: BUG: only override vector size for avx code
+* `#12560 <https://github.com/numpy/numpy/pull/12560>`__: DOC: fix some doctest failures
+* `#12566 <https://github.com/numpy/numpy/pull/12566>`__: BUG: fix segfault in ctypeslib with obj being collected
+* `#12571 <https://github.com/numpy/numpy/pull/12571>`__: Revert "Merge pull request #11721 from eric-wieser/fix-9647"
+* `#12572 <https://github.com/numpy/numpy/pull/12572>`__: BUG: Make `arr.ctypes.data` hold a reference to the underlying...
+* `#12575 <https://github.com/numpy/numpy/pull/12575>`__: ENH: improve performance for numpy.core.records.find_duplicate
+* `#12577 <https://github.com/numpy/numpy/pull/12577>`__: BUG: fix f2py pep338 execution method
+* `#12578 <https://github.com/numpy/numpy/pull/12578>`__: TST: activate shippable maintenance branches
+* `#12583 <https://github.com/numpy/numpy/pull/12583>`__: TST: add test for 'python -mnumpy.f2py'
+* `#12584 <https://github.com/numpy/numpy/pull/12584>`__: Clarify skiprows in loadtxt
+* `#12586 <https://github.com/numpy/numpy/pull/12586>`__: ENH: Implement radix sort
+* `#12589 <https://github.com/numpy/numpy/pull/12589>`__: MAINT: Update changelog.py for Python 3.
+* `#12591 <https://github.com/numpy/numpy/pull/12591>`__: ENH: add "max difference" messages to np.testing.assert_array_equal
+* `#12592 <https://github.com/numpy/numpy/pull/12592>`__: BUG,TST: Remove the misguided `run_command` that wraps subprocess
+* `#12593 <https://github.com/numpy/numpy/pull/12593>`__: ENH,WIP: Use richer exception types for ufunc type resolution...
+* `#12594 <https://github.com/numpy/numpy/pull/12594>`__: DEV, BUILD: add pypy3 to azure CI
+* `#12596 <https://github.com/numpy/numpy/pull/12596>`__: ENH: improve performance of numpy.core.records.fromarrays
+* `#12601 <https://github.com/numpy/numpy/pull/12601>`__: DOC: Correct documentation of `numpy.delete` obj parameter.
+* `#12602 <https://github.com/numpy/numpy/pull/12602>`__: DOC: Update RELEASE_WALKTHROUGH.rst.txt.
+* `#12604 <https://github.com/numpy/numpy/pull/12604>`__: BUG: Check that dtype and formats arguments for None.
+* `#12606 <https://github.com/numpy/numpy/pull/12606>`__: DOC: Document NPY_SORTKIND parameter in PyArray_Sort
+* `#12608 <https://github.com/numpy/numpy/pull/12608>`__: MAINT: Use `*.format` for some strings.
+* `#12609 <https://github.com/numpy/numpy/pull/12609>`__: ENH: Deprecate writeable broadcast_array
+* `#12610 <https://github.com/numpy/numpy/pull/12610>`__: TST: Update runtests.py to specify C99 for gcc.
+* `#12611 <https://github.com/numpy/numpy/pull/12611>`__: BUG: longdouble with elsize 12 is never uint alignable
+* `#12612 <https://github.com/numpy/numpy/pull/12612>`__: TST: Update `travis-test.sh` for C99
+* `#12616 <https://github.com/numpy/numpy/pull/12616>`__: BLD: Fix minimum Python version in setup.py
+* `#12617 <https://github.com/numpy/numpy/pull/12617>`__: BUG: Add missing free in ufunc dealloc
+* `#12618 <https://github.com/numpy/numpy/pull/12618>`__: MAINT: add test for 12-byte alignment
+* `#12620 <https://github.com/numpy/numpy/pull/12620>`__: BLD: move -std=c99 addition to CFLAGS to Azure config
+* `#12624 <https://github.com/numpy/numpy/pull/12624>`__: BUG: Fix incorrect/missing reference cleanups found using valgrind
+* `#12626 <https://github.com/numpy/numpy/pull/12626>`__: BUG: fix uint alignment asserts in lowlevel loops
+* `#12631 <https://github.com/numpy/numpy/pull/12631>`__: BUG: fix f2py problem to build wrappers using PGI's Fortran
+* `#12634 <https://github.com/numpy/numpy/pull/12634>`__: DOC, TST: remove "agg" setting from docs
+* `#12639 <https://github.com/numpy/numpy/pull/12639>`__: BENCH: don't fail at import time with old Numpy
+* `#12641 <https://github.com/numpy/numpy/pull/12641>`__: DOC: update 2018 -> 2019
+* `#12644 <https://github.com/numpy/numpy/pull/12644>`__: ENH: where for ufunc reductions
+* `#12645 <https://github.com/numpy/numpy/pull/12645>`__: DOC: Minor fix to pocketfft release note
+* `#12650 <https://github.com/numpy/numpy/pull/12650>`__: BUG: Fix reference counting for subarrays containing objects
+* `#12651 <https://github.com/numpy/numpy/pull/12651>`__: DOC: SimpleNewFromDescr cannot be given NULL for descr
+* `#12666 <https://github.com/numpy/numpy/pull/12666>`__: BENCH: add asv nanfunction benchmarks
+* `#12668 <https://github.com/numpy/numpy/pull/12668>`__: ENH: Improve error messages for non-matching shapes in concatenate.
+* `#12671 <https://github.com/numpy/numpy/pull/12671>`__: TST: Fix endianness in unstuctured_to_structured test
+* `#12672 <https://github.com/numpy/numpy/pull/12672>`__: BUG: Add 'sparc' to platforms implementing 16 byte reals.
+* `#12677 <https://github.com/numpy/numpy/pull/12677>`__: MAINT: Further fixups to uint alignment checks
+* `#12679 <https://github.com/numpy/numpy/pull/12679>`__: ENH: remove "Invalid value" warnings from median, percentile
+* `#12680 <https://github.com/numpy/numpy/pull/12680>`__: BUG: Ensure failing memory allocations are reported
+* `#12683 <https://github.com/numpy/numpy/pull/12683>`__: ENH: add mm->qm divmod
+* `#12684 <https://github.com/numpy/numpy/pull/12684>`__: DEV: remove _arg from public API, add matmul to benchmark ufuncs
+* `#12685 <https://github.com/numpy/numpy/pull/12685>`__: BUG: Make pocketfft handle long doubles.
+* `#12687 <https://github.com/numpy/numpy/pull/12687>`__: ENH: Better links in documentation
+* `#12690 <https://github.com/numpy/numpy/pull/12690>`__: WIP, ENH: add _nan_mask function
+* `#12693 <https://github.com/numpy/numpy/pull/12693>`__: ENH: Add a hermitian argument to `pinv` and `svd`, matching `matrix_rank`
+* `#12696 <https://github.com/numpy/numpy/pull/12696>`__: BUG: Fix leak of void scalar buffer info
+* `#12698 <https://github.com/numpy/numpy/pull/12698>`__: DOC: improve comments in copycast_isaligned
+* `#12700 <https://github.com/numpy/numpy/pull/12700>`__: ENH: chain additional exception on ufunc method lookup error
+* `#12702 <https://github.com/numpy/numpy/pull/12702>`__: TST: Check FFT results for C/Fortran ordered and non contigous...
+* `#12704 <https://github.com/numpy/numpy/pull/12704>`__: TST: pin Azure brew version for stability
+* `#12709 <https://github.com/numpy/numpy/pull/12709>`__: TST: add ppc64le to Travis CI matrix
+* `#12713 <https://github.com/numpy/numpy/pull/12713>`__: BUG: loosen kwargs requirements in ediff1d
+* `#12722 <https://github.com/numpy/numpy/pull/12722>`__: BUG: Fix rounding of denormals in double and float to half casts...
+* `#12723 <https://github.com/numpy/numpy/pull/12723>`__: BENCH: Include other sort benchmarks
+* `#12724 <https://github.com/numpy/numpy/pull/12724>`__: BENCH: quiet DeprecationWarning
+* `#12727 <https://github.com/numpy/numpy/pull/12727>`__: DOC: fix and doctest tutorial
+* `#12728 <https://github.com/numpy/numpy/pull/12728>`__: DOC: clarify the suffix of single/extended precision math constants
+* `#12729 <https://github.com/numpy/numpy/pull/12729>`__: DOC: Extend documentation of `ndarray.tolist`
+* `#12731 <https://github.com/numpy/numpy/pull/12731>`__: DOC: Update release notes and changelog after 1.16.0 release.
+* `#12733 <https://github.com/numpy/numpy/pull/12733>`__: DOC: clarify the extend of __array_function__ support in NumPy...
+* `#12741 <https://github.com/numpy/numpy/pull/12741>`__: DOC: fix generalized eigenproblem reference in "NumPy for MATLAB...
+* `#12743 <https://github.com/numpy/numpy/pull/12743>`__: BUG: Fix crash in error message formatting introduced by gh-11230
+* `#12748 <https://github.com/numpy/numpy/pull/12748>`__: BUG: Fix SystemError when pickling datetime64 array with pickle5
+* `#12757 <https://github.com/numpy/numpy/pull/12757>`__: BUG: Added parens to macro argument expansions
+* `#12758 <https://github.com/numpy/numpy/pull/12758>`__: DOC: Update docstring of diff() to use 'i' not 'n'
+* `#12762 <https://github.com/numpy/numpy/pull/12762>`__: MAINT: Change the order of checking for locale file and import...
+* `#12783 <https://github.com/numpy/numpy/pull/12783>`__: DOC: document C99 requirement in dev guide
+* `#12787 <https://github.com/numpy/numpy/pull/12787>`__: DOC: remove recommendation to add main for testing
+* `#12805 <https://github.com/numpy/numpy/pull/12805>`__: BUG: double decref of dtype in failure codepath. Test and fix
+* `#12807 <https://github.com/numpy/numpy/pull/12807>`__: BUG, DOC: test, fix that f2py.compile accepts str and bytes,...
+* `#12814 <https://github.com/numpy/numpy/pull/12814>`__: BUG: resolve writeback in arr_insert failure paths
+* `#12815 <https://github.com/numpy/numpy/pull/12815>`__: BUG: Fix testing of f2py.compile from strings.
+* `#12818 <https://github.com/numpy/numpy/pull/12818>`__: DOC: remove python2-only methods, small cleanups
+* `#12824 <https://github.com/numpy/numpy/pull/12824>`__: BUG: fix to check before apply `shlex.split`
+* `#12830 <https://github.com/numpy/numpy/pull/12830>`__: ENH: __array_function__ updates for NumPy 1.17.0
+* `#12831 <https://github.com/numpy/numpy/pull/12831>`__: BUG: Catch stderr when checking compiler version
+* `#12842 <https://github.com/numpy/numpy/pull/12842>`__: BUG: ndarrays pickled by 1.16 cannot be loaded by 1.15.4 and...
+* `#12846 <https://github.com/numpy/numpy/pull/12846>`__: BUG: fix signed zero behavior in npy_divmod
+* `#12850 <https://github.com/numpy/numpy/pull/12850>`__: BUG: fail if old multiarray module detected
+* `#12851 <https://github.com/numpy/numpy/pull/12851>`__: TEST: use xenial by default for travis
+* `#12854 <https://github.com/numpy/numpy/pull/12854>`__: BUG: do not Py_DECREF NULL pointer
+* `#12857 <https://github.com/numpy/numpy/pull/12857>`__: STY: simplify code
+* `#12863 <https://github.com/numpy/numpy/pull/12863>`__: TEST: pin mingw version
+* `#12866 <https://github.com/numpy/numpy/pull/12866>`__: DOC: link to benchmarking info
+* `#12867 <https://github.com/numpy/numpy/pull/12867>`__: TST: Use same OpenBLAS build for testing as for current wheels.
+* `#12871 <https://github.com/numpy/numpy/pull/12871>`__: ENH: add c-imported modules to namespace for freeze analysis
+* `#12877 <https://github.com/numpy/numpy/pull/12877>`__: Remove deprecated ``sudo: false`` from .travis.yml
+* `#12879 <https://github.com/numpy/numpy/pull/12879>`__: DEP: deprecate exec_command
+* `#12885 <https://github.com/numpy/numpy/pull/12885>`__: DOC: fix math formatting of np.linalg.lstsq docs
+* `#12886 <https://github.com/numpy/numpy/pull/12886>`__: DOC: add missing character routines, fix #8578
+* `#12887 <https://github.com/numpy/numpy/pull/12887>`__: BUG: Fix np.rec.fromarrays on arrays which are already structured
+* `#12889 <https://github.com/numpy/numpy/pull/12889>`__: BUG: Make allow_pickle=False the default for loading
+* `#12892 <https://github.com/numpy/numpy/pull/12892>`__: BUG: Do not double-quote arguments passed on to the linker
+* `#12894 <https://github.com/numpy/numpy/pull/12894>`__: MAINT: Removed unused and confusingly indirect imports from mingw32ccompiler
+* `#12895 <https://github.com/numpy/numpy/pull/12895>`__: BUG: Do not insert extra double quote into preprocessor macros
+* `#12903 <https://github.com/numpy/numpy/pull/12903>`__: TST: fix vmImage dispatch in Azure
+* `#12905 <https://github.com/numpy/numpy/pull/12905>`__: BUG: fix byte order reversal for datetime64[ns]
+* `#12908 <https://github.com/numpy/numpy/pull/12908>`__: DOC: Update master following 1.16.1 release.
+* `#12911 <https://github.com/numpy/numpy/pull/12911>`__: BLD: fix doc build for distribution.
+* `#12915 <https://github.com/numpy/numpy/pull/12915>`__: ENH: pathlib support for fromfile(), .tofile() and .dump()
+* `#12920 <https://github.com/numpy/numpy/pull/12920>`__: MAINT: remove complicated test of multiarray import failure mode
+* `#12922 <https://github.com/numpy/numpy/pull/12922>`__: DOC: Add note about arbitrary code execution to numpy.load
+* `#12925 <https://github.com/numpy/numpy/pull/12925>`__: BUG: parse shell escaping in extra_compile_args and extra_link_args
+* `#12928 <https://github.com/numpy/numpy/pull/12928>`__: MAINT: Merge together the unary and binary type resolvers
+* `#12929 <https://github.com/numpy/numpy/pull/12929>`__: DOC: fix documentation bug in np.argsort and extend examples
+* `#12931 <https://github.com/numpy/numpy/pull/12931>`__: MAINT: Remove recurring check
+* `#12932 <https://github.com/numpy/numpy/pull/12932>`__: BUG: do not dereference NULL pointer
+* `#12937 <https://github.com/numpy/numpy/pull/12937>`__: DOC: Correct negative_binomial docstring
+* `#12944 <https://github.com/numpy/numpy/pull/12944>`__: BUG: Make timsort deal with zero length elements.
+* `#12945 <https://github.com/numpy/numpy/pull/12945>`__: BUG: Add timsort without breaking the API.
+* `#12949 <https://github.com/numpy/numpy/pull/12949>`__: DOC: ndarray.max is missing
+* `#12962 <https://github.com/numpy/numpy/pull/12962>`__: ENH: Add 'bitorder' keyword to packbits, unpackbits
+* `#12963 <https://github.com/numpy/numpy/pull/12963>`__: DOC: Grammatical fix in numpy doc
+* `#12964 <https://github.com/numpy/numpy/pull/12964>`__: DOC: Document that ``scale==0`` is now allowed in many distributions.
+* `#12965 <https://github.com/numpy/numpy/pull/12965>`__: DOC: Properly format Return section of ogrid Docstring,
+* `#12968 <https://github.com/numpy/numpy/pull/12968>`__: BENCH: Re-write sorting benchmarks
+* `#12971 <https://github.com/numpy/numpy/pull/12971>`__: ENH: Add 'offset' keyword to 'numpy.fromfile()'
+* `#12973 <https://github.com/numpy/numpy/pull/12973>`__: DOC: Recommend adding dimension to switch between row and column...
+* `#12983 <https://github.com/numpy/numpy/pull/12983>`__: DOC: Randomstate docstring fixes
+* `#12984 <https://github.com/numpy/numpy/pull/12984>`__: DOC: Add examples of negative shifts in np.roll
+* `#12986 <https://github.com/numpy/numpy/pull/12986>`__: BENCH: set ones in any/all benchmarks to 1 instead of 0
+* `#12988 <https://github.com/numpy/numpy/pull/12988>`__: ENH: Create boolean and integer ufuncs for isnan, isinf, and...
+* `#12989 <https://github.com/numpy/numpy/pull/12989>`__: ENH: Correct handling of infinities in np.interp (option B)
+* `#12995 <https://github.com/numpy/numpy/pull/12995>`__: BUG: Add missing PyErr_NoMemory() for reporting a failed malloc
+* `#12996 <https://github.com/numpy/numpy/pull/12996>`__: MAINT: Use the same multiplication order in interp for cached...
+* `#13002 <https://github.com/numpy/numpy/pull/13002>`__: DOC: reduce warnings when building, and rephrase slightly
+* `#13004 <https://github.com/numpy/numpy/pull/13004>`__: MAINT: minor changes for consistency to site.cfg.example
+* `#13008 <https://github.com/numpy/numpy/pull/13008>`__: MAINT: Move pickle import to numpy.compat
+* `#13019 <https://github.com/numpy/numpy/pull/13019>`__: BLD: Windows absolute path DLL loading
+* `#13023 <https://github.com/numpy/numpy/pull/13023>`__: BUG: Changes to string-to-shell parsing behavior broke paths...
+* `#13027 <https://github.com/numpy/numpy/pull/13027>`__: BUG: Fix regression in parsing of F90 and F77 environment variables
+* `#13031 <https://github.com/numpy/numpy/pull/13031>`__: MAINT: Replace if statement with a dictionary lookup for ease...
+* `#13032 <https://github.com/numpy/numpy/pull/13032>`__: MAINT: Extract the loop macros into their own header
+* `#13033 <https://github.com/numpy/numpy/pull/13033>`__: MAINT: Convert property to @property
+* `#13035 <https://github.com/numpy/numpy/pull/13035>`__: DOC: Draw more attention to which functions in random are convenience...
+* `#13036 <https://github.com/numpy/numpy/pull/13036>`__: BUG: __array_interface__ offset was always ignored
+* `#13039 <https://github.com/numpy/numpy/pull/13039>`__: BUG: Remove error-prone borrowed reference handling
+* `#13044 <https://github.com/numpy/numpy/pull/13044>`__: DOC: link to devdocs in README
+* `#13046 <https://github.com/numpy/numpy/pull/13046>`__: ENH: Add shape to *_like() array creation
+* `#13049 <https://github.com/numpy/numpy/pull/13049>`__: MAINT: remove undocumented __buffer__ attribute lookup
+* `#13050 <https://github.com/numpy/numpy/pull/13050>`__: BLD: make doc build work more robustly.
+* `#13054 <https://github.com/numpy/numpy/pull/13054>`__: DOC: Added maximum_sctype to documentation
+* `#13055 <https://github.com/numpy/numpy/pull/13055>`__: DOC: Post NumPy 1.16.2 release update.
+* `#13056 <https://github.com/numpy/numpy/pull/13056>`__: BUG: Fixes to numpy.distutils.Configuration.get_version
+* `#13058 <https://github.com/numpy/numpy/pull/13058>`__: DOC: update numpy.interp docstring
+* `#13060 <https://github.com/numpy/numpy/pull/13060>`__: BUG: Use C call to sysctlbyname for AVX detection on MacOS
+* `#13063 <https://github.com/numpy/numpy/pull/13063>`__: DOC: revert PR #13058 and fixup Makefile
+* `#13067 <https://github.com/numpy/numpy/pull/13067>`__: MAINT: Use with statements for opening files in distutils
+* `#13068 <https://github.com/numpy/numpy/pull/13068>`__: BUG: Add error checks when converting integers to datetime types
+* `#13071 <https://github.com/numpy/numpy/pull/13071>`__: DOC: Removed incorrect claim regarding shape constraints for...
+* `#13073 <https://github.com/numpy/numpy/pull/13073>`__: MAINT: Fix ABCPolyBase in various ways
+* `#13075 <https://github.com/numpy/numpy/pull/13075>`__: BUG: Convert fortran flags in environment variable
+* `#13076 <https://github.com/numpy/numpy/pull/13076>`__: BUG: Remove our patched version of `distutils.split_quoted`
+* `#13077 <https://github.com/numpy/numpy/pull/13077>`__: BUG: Fix errors in string formatting while producing an error
+* `#13078 <https://github.com/numpy/numpy/pull/13078>`__: MAINT: deduplicate fromroots in np.polynomial
+* `#13079 <https://github.com/numpy/numpy/pull/13079>`__: MAINT: Merge duplicate implementations of `*vander2d` and `*vander3d`...
+* `#13086 <https://github.com/numpy/numpy/pull/13086>`__: BLD: fix include list for sdist building
+* `#13090 <https://github.com/numpy/numpy/pull/13090>`__: BUILD: sphinx 1.8.3 can be used with our outdated templates
+* `#13092 <https://github.com/numpy/numpy/pull/13092>`__: BUG: ensure linspace works on object input.
+* `#13093 <https://github.com/numpy/numpy/pull/13093>`__: BUG: Fix parameter validity checks in ``random.choice``.
+* `#13095 <https://github.com/numpy/numpy/pull/13095>`__: BUG: Fix testsuite failures on ppc and riscv
+* `#13096 <https://github.com/numpy/numpy/pull/13096>`__: TEST: allow refcheck result to vary, increase discoverability...
+* `#13097 <https://github.com/numpy/numpy/pull/13097>`__: DOC: update doc of `ndarray.T`
+* `#13099 <https://github.com/numpy/numpy/pull/13099>`__: DOC: Add note about "copy and slicing"
+* `#13104 <https://github.com/numpy/numpy/pull/13104>`__: DOC: fix references in docs
+* `#13107 <https://github.com/numpy/numpy/pull/13107>`__: MAINT: Unify polynomial valnd functions
+* `#13108 <https://github.com/numpy/numpy/pull/13108>`__: MAINT: Merge duplicate implementations of `hermvander2d` and...
+* `#13109 <https://github.com/numpy/numpy/pull/13109>`__: Prevent traceback chaining in _wrapfunc.
+* `#13111 <https://github.com/numpy/numpy/pull/13111>`__: MAINT: Unify polydiv
+* `#13115 <https://github.com/numpy/numpy/pull/13115>`__: DOC: Fix #12050 by updating numpy.random.hypergeometric docs
+* `#13116 <https://github.com/numpy/numpy/pull/13116>`__: DOC: Add backticks in linalg docstrings.
+* `#13117 <https://github.com/numpy/numpy/pull/13117>`__: DOC: Fix arg type for np.pad, fix #9489
+* `#13118 <https://github.com/numpy/numpy/pull/13118>`__: DOC: update scipy-sphinx-theme, fixes search
+* `#13119 <https://github.com/numpy/numpy/pull/13119>`__: DOC: Fix c-api function documentation duplication.
+* `#13125 <https://github.com/numpy/numpy/pull/13125>`__: BUG: Fix unhandled exception in CBLAS detection
+* `#13126 <https://github.com/numpy/numpy/pull/13126>`__: DEP: polynomial: Be stricter about integral arguments
+* `#13127 <https://github.com/numpy/numpy/pull/13127>`__: DOC: Tidy 1.17.0 release note newlines
+* `#13128 <https://github.com/numpy/numpy/pull/13128>`__: MAINT: Unify polynomial addition and subtraction functions
+* `#13130 <https://github.com/numpy/numpy/pull/13130>`__: MAINT: Unify polynomial fitting functions
+* `#13131 <https://github.com/numpy/numpy/pull/13131>`__: BUILD: use 'quiet' when building docs
+* `#13132 <https://github.com/numpy/numpy/pull/13132>`__: BLD: Allow users to specify BLAS and LAPACK library link order
+* `#13134 <https://github.com/numpy/numpy/pull/13134>`__: ENH: Use AVX for float32 implementation of np.exp & np.log
+* `#13137 <https://github.com/numpy/numpy/pull/13137>`__: BUG: Fix build for glibc on ARC and uclibc.
+* `#13140 <https://github.com/numpy/numpy/pull/13140>`__: DEV: cleanup imports and some assignments (from LGTM)
+* `#13146 <https://github.com/numpy/numpy/pull/13146>`__: MAINT: Unify polynomial power functions
+* `#13147 <https://github.com/numpy/numpy/pull/13147>`__: DOC: Add description of overflow errors
+* `#13149 <https://github.com/numpy/numpy/pull/13149>`__: DOC: correction to numpy.pad docstring
+* `#13157 <https://github.com/numpy/numpy/pull/13157>`__: BLD: streamlined library names in site.cfg sections
+* `#13158 <https://github.com/numpy/numpy/pull/13158>`__: BLD: Add libflame as a LAPACK back-end
+* `#13161 <https://github.com/numpy/numpy/pull/13161>`__: BLD: streamlined CBLAS linkage tries, default to try libraries...
+* `#13162 <https://github.com/numpy/numpy/pull/13162>`__: BUILD: update numpydoc to latest version
+* `#13163 <https://github.com/numpy/numpy/pull/13163>`__: ENH: randomgen
+* `#13169 <https://github.com/numpy/numpy/pull/13169>`__: STY: Fix weird indents to be multiples of 4 spaces
+* `#13170 <https://github.com/numpy/numpy/pull/13170>`__: DOC, BUILD: fail the devdoc build if there are warnings
+* `#13174 <https://github.com/numpy/numpy/pull/13174>`__: DOC: Removed some c-api duplication
+* `#13176 <https://github.com/numpy/numpy/pull/13176>`__: BUG: fix reference count error on invalid input to ndarray.flat
+* `#13181 <https://github.com/numpy/numpy/pull/13181>`__: BENCH, BUG: fix Savez suite, previously was actually calling...
+* `#13182 <https://github.com/numpy/numpy/pull/13182>`__: MAINT: add overlap checks to choose, take, put, putmask
+* `#13188 <https://github.com/numpy/numpy/pull/13188>`__: MAINT: Simplify logic in convert_datetime_to_datetimestruct
+* `#13202 <https://github.com/numpy/numpy/pull/13202>`__: ENH: use rotated companion matrix to reduce error
+* `#13203 <https://github.com/numpy/numpy/pull/13203>`__: DOC: Use std docstring for multivariate normal
+* `#13205 <https://github.com/numpy/numpy/pull/13205>`__: DOC : Fix C-API documentation references to items that don't...
+* `#13206 <https://github.com/numpy/numpy/pull/13206>`__: BUILD: pin sphinx to 1.8.5
+* `#13208 <https://github.com/numpy/numpy/pull/13208>`__: MAINT: cleanup of fast_loop_macros.h
+* `#13216 <https://github.com/numpy/numpy/pull/13216>`__: Adding an example of successful execution of numpy.test() to...
+* `#13217 <https://github.com/numpy/numpy/pull/13217>`__: TST: always publish Azure tests
+* `#13218 <https://github.com/numpy/numpy/pull/13218>`__: ENH: `isfinite` support for `datetime64` and `timedelta64`
+* `#13219 <https://github.com/numpy/numpy/pull/13219>`__: ENH: nan_to_num keyword addition (was #9355)
+* `#13222 <https://github.com/numpy/numpy/pull/13222>`__: DOC: Document/ Deprecate functions exposed in "numpy" namespace
+* `#13224 <https://github.com/numpy/numpy/pull/13224>`__: Improve error message for negative valued argument
+* `#13226 <https://github.com/numpy/numpy/pull/13226>`__: DOC: Fix small issues in mtrand doc strings
+* `#13231 <https://github.com/numpy/numpy/pull/13231>`__: DOC: Change the required Sphinx version to build documentation
+* `#13234 <https://github.com/numpy/numpy/pull/13234>`__: DOC : PyArray_Descr.names undocumented
+* `#13239 <https://github.com/numpy/numpy/pull/13239>`__: DOC: Minor grammatical fixes in NumPy docs
+* `#13242 <https://github.com/numpy/numpy/pull/13242>`__: DOC: fix docstring for floor_divide
+* `#13243 <https://github.com/numpy/numpy/pull/13243>`__: MAINT: replace SETREF with assignment to ret array in ndarray.flat
+* `#13244 <https://github.com/numpy/numpy/pull/13244>`__: DOC: Improve mtrand docstrings
+* `#13250 <https://github.com/numpy/numpy/pull/13250>`__: MAINT: Improve efficiency of pad by avoiding use of apply_along_axis
+* `#13253 <https://github.com/numpy/numpy/pull/13253>`__: TST: fail Azure CI if test failures
+* `#13259 <https://github.com/numpy/numpy/pull/13259>`__: DOC: Small readability improvement
+* `#13262 <https://github.com/numpy/numpy/pull/13262>`__: DOC : Correcting bug on Documentation Page (Byteswapping)
+* `#13264 <https://github.com/numpy/numpy/pull/13264>`__: TST: use OpenBLAS v0.3.5 for POWER8 CI runs
+* `#13269 <https://github.com/numpy/numpy/pull/13269>`__: BUG, MAINT: f2py: Add a cast to avoid a compiler warning.
+* `#13270 <https://github.com/numpy/numpy/pull/13270>`__: TST: use OpenBLAS v0.3.5 for ARMv8 CI
+* `#13271 <https://github.com/numpy/numpy/pull/13271>`__: ENH: vectorize np.abs for unsigned ints and half, improving performance...
+* `#13273 <https://github.com/numpy/numpy/pull/13273>`__: BUG: Fix null pointer dereference in PyArray_DTypeFromObject
+* `#13277 <https://github.com/numpy/numpy/pull/13277>`__: DOC: Document caveat in random.uniform
+* `#13287 <https://github.com/numpy/numpy/pull/13287>`__: Add benchmark for sorting random array.
+* `#13289 <https://github.com/numpy/numpy/pull/13289>`__: DOC: add Quansight Labs as an Institutional Partner
+* `#13291 <https://github.com/numpy/numpy/pull/13291>`__: MAINT: fix unused variable warning in npy_math_complex.c.src
+* `#13292 <https://github.com/numpy/numpy/pull/13292>`__: DOC: update numpydoc to latest master
+* `#13293 <https://github.com/numpy/numpy/pull/13293>`__: DOC: add more info to failure message
+* `#13298 <https://github.com/numpy/numpy/pull/13298>`__: ENH: Added clearer exception for np.diff on 0-dimensional ndarray
+* `#13301 <https://github.com/numpy/numpy/pull/13301>`__: BUG: Fix crash when calling savetxt on a padded array
+* `#13305 <https://github.com/numpy/numpy/pull/13305>`__: NEP: Update NEP-18 to include the ``__skip_array_function__``...
+* `#13306 <https://github.com/numpy/numpy/pull/13306>`__: MAINT: better MemoryError message (#13225)
+* `#13309 <https://github.com/numpy/numpy/pull/13309>`__: DOC: list Quansight rather than Quansight Labs as Institutional...
+* `#13310 <https://github.com/numpy/numpy/pull/13310>`__: ENH: Add project_urls to setup
+* `#13311 <https://github.com/numpy/numpy/pull/13311>`__: BUG: Fix bad error message in np.memmap
+* `#13312 <https://github.com/numpy/numpy/pull/13312>`__: BUG: Close files if an error occurs in genfromtxt
+* `#13313 <https://github.com/numpy/numpy/pull/13313>`__: MAINT: fix typo in 'self'
+* `#13314 <https://github.com/numpy/numpy/pull/13314>`__: DOC: remove misplaced section at bottom of governance people...
+* `#13316 <https://github.com/numpy/numpy/pull/13316>`__: DOC: Added anti-diagonal examples to np.diagonal and np.fill_diagonal
+* `#13320 <https://github.com/numpy/numpy/pull/13320>`__: MAINT: remove unused file
+* `#13321 <https://github.com/numpy/numpy/pull/13321>`__: MAINT: Move exceptions from core._internal to core._exceptions
+* `#13322 <https://github.com/numpy/numpy/pull/13322>`__: MAINT: Move umath error helpers into their own module
+* `#13323 <https://github.com/numpy/numpy/pull/13323>`__: BUG: ufunc.at iteration variable size fix
+* `#13324 <https://github.com/numpy/numpy/pull/13324>`__: MAINT: Move asarray helpers into their own module
+* `#13326 <https://github.com/numpy/numpy/pull/13326>`__: DEP: Deprecate collapsing shape-1 dtype fields to scalars.
+* `#13328 <https://github.com/numpy/numpy/pull/13328>`__: MAINT: Tidy up error message for accumulate and reduceat
+* `#13331 <https://github.com/numpy/numpy/pull/13331>`__: DOC, BLD: fix doc build issues in preparation for the next numpydoc...
+* `#13332 <https://github.com/numpy/numpy/pull/13332>`__: BUG: Always return views from structured_to_unstructured when...
+* `#13334 <https://github.com/numpy/numpy/pull/13334>`__: BUG: Fix structured_to_unstructured on single-field types
+* `#13335 <https://github.com/numpy/numpy/pull/13335>`__: DOC: Add as_ctypes_type to the documentation
+* `#13336 <https://github.com/numpy/numpy/pull/13336>`__: BUILD: fail documentation build if numpy version does not match
+* `#13337 <https://github.com/numpy/numpy/pull/13337>`__: DOC: Add docstrings for consistency in aliases
+* `#13346 <https://github.com/numpy/numpy/pull/13346>`__: BUG/MAINT: Tidy typeinfo.h and .c
+* `#13348 <https://github.com/numpy/numpy/pull/13348>`__: BUG: Return the coefficients array directly
+* `#13354 <https://github.com/numpy/numpy/pull/13354>`__: TST: Added test_fftpocket.py::test_axes
+* `#13367 <https://github.com/numpy/numpy/pull/13367>`__: DOC: reorganize developer docs, use scikit-image as a base for...
+* `#13371 <https://github.com/numpy/numpy/pull/13371>`__: BUG/ENH: Make floor, ceil, and trunc call the matching special...
+* `#13374 <https://github.com/numpy/numpy/pull/13374>`__: DOC: Specify range for numpy.angle
+* `#13377 <https://github.com/numpy/numpy/pull/13377>`__: DOC: Add missing macros to C API documentation
+* `#13379 <https://github.com/numpy/numpy/pull/13379>`__: BLD: address mingw-w64 issue. Follow-up to gh-9977
+* `#13383 <https://github.com/numpy/numpy/pull/13383>`__: MAINT, DOC: Post 1.16.3 release updates
+* `#13388 <https://github.com/numpy/numpy/pull/13388>`__: BUG: Some PyPy versions lack PyStructSequence_InitType2.
+* `#13389 <https://github.com/numpy/numpy/pull/13389>`__: ENH: implement ``__skip_array_function__`` attribute for NEP-18
+* `#13390 <https://github.com/numpy/numpy/pull/13390>`__: ENH: Add support for Fraction to percentile and quantile
+* `#13391 <https://github.com/numpy/numpy/pull/13391>`__: MAINT, DEP: Fix deprecated ``assertEquals()``
+* `#13395 <https://github.com/numpy/numpy/pull/13395>`__: DOC: note re defaults allclose to assert_allclose
+* `#13397 <https://github.com/numpy/numpy/pull/13397>`__: DOC: Resolve confusion regarding hashtag in header line of csv
+* `#13399 <https://github.com/numpy/numpy/pull/13399>`__: ENH: Improved performance of PyArray_FromAny for sequences of...
+* `#13402 <https://github.com/numpy/numpy/pull/13402>`__: DOC: Show the default value of deletechars in the signature of...
+* `#13403 <https://github.com/numpy/numpy/pull/13403>`__: DOC: fix typos in dev/index
+* `#13404 <https://github.com/numpy/numpy/pull/13404>`__: DOC: Add Sebastian Berg as sponsored by BIDS
+* `#13406 <https://github.com/numpy/numpy/pull/13406>`__: DOC: clarify array_{2string,str,repr} defaults
+* `#13409 <https://github.com/numpy/numpy/pull/13409>`__: BUG: (py2 only) fix unicode support for savetxt fmt string
+* `#13413 <https://github.com/numpy/numpy/pull/13413>`__: DOC: document existence of linalg backends
+* `#13415 <https://github.com/numpy/numpy/pull/13415>`__: BUG: fixing bugs in AVX exp/log while handling special value...
+* `#13416 <https://github.com/numpy/numpy/pull/13416>`__: BUG: Protect generators from log(0.0)
+* `#13417 <https://github.com/numpy/numpy/pull/13417>`__: DOC: dimension sizes are non-negative, not positive
+* `#13425 <https://github.com/numpy/numpy/pull/13425>`__: MAINT: fixed typo 'Mismacth' from numpy/core/setup_common.py
+* `#13433 <https://github.com/numpy/numpy/pull/13433>`__: BUG: Handle subarrays in descr_to_dtype
+* `#13435 <https://github.com/numpy/numpy/pull/13435>`__: BUG: Add TypeError to accepted exceptions in crackfortran.
+* `#13436 <https://github.com/numpy/numpy/pull/13436>`__: TST: Add file-not-closed check to LGTM analysis.
+* `#13440 <https://github.com/numpy/numpy/pull/13440>`__: MAINT: fixed typo 'wtihout' from numpy/core/shape_base.py
+* `#13443 <https://github.com/numpy/numpy/pull/13443>`__: BLD, TST: implicit func errors
+* `#13445 <https://github.com/numpy/numpy/pull/13445>`__: MAINT: refactor PyArrayMultiIterObject constructors
+* `#13446 <https://github.com/numpy/numpy/pull/13446>`__: MAINT: refactor unravel_index for code repetition
+* `#13449 <https://github.com/numpy/numpy/pull/13449>`__: BUG: missing git raises an OSError
+* `#13456 <https://github.com/numpy/numpy/pull/13456>`__: TST: refine Azure fail reports
+* `#13463 <https://github.com/numpy/numpy/pull/13463>`__: BUG,DEP: Fix writeable flag setting for arrays without base
+* `#13467 <https://github.com/numpy/numpy/pull/13467>`__: ENH: err msg for too large sequences. See #13450
+* `#13469 <https://github.com/numpy/numpy/pull/13469>`__: DOC: correct "version added" in npymath docs
+* `#13471 <https://github.com/numpy/numpy/pull/13471>`__: LICENSE: split license file in standard BSD 3-clause and bundled.
+* `#13477 <https://github.com/numpy/numpy/pull/13477>`__: DOC: have notes in histogram_bin_edges match parameter style
+* `#13479 <https://github.com/numpy/numpy/pull/13479>`__: DOC: Mention the handling of nan in the assert_equal docstring.
+* `#13482 <https://github.com/numpy/numpy/pull/13482>`__: TEST: add duration report to tests, speed up two outliers
+* `#13483 <https://github.com/numpy/numpy/pull/13483>`__: DOC: update mailmap for Bill Spotz
+* `#13485 <https://github.com/numpy/numpy/pull/13485>`__: DOC: add security vulnerability reporting and doc links to README
+* `#13491 <https://github.com/numpy/numpy/pull/13491>`__: BUG/ENH: Create npy format 3.0 to support extended unicode characters...
+* `#13495 <https://github.com/numpy/numpy/pull/13495>`__: BUG: test all ufunc.types for return type, fix for exp, log
+* `#13496 <https://github.com/numpy/numpy/pull/13496>`__: BUG: ma.tostring should respect the order parameter
+* `#13498 <https://github.com/numpy/numpy/pull/13498>`__: DOC: Clarify rcond normalization in linalg.pinv
+* `#13499 <https://github.com/numpy/numpy/pull/13499>`__: MAINT: Use with statement to open/close files to fix LGTM alerts
+* `#13503 <https://github.com/numpy/numpy/pull/13503>`__: ENH: Support object arrays in matmul
+* `#13504 <https://github.com/numpy/numpy/pull/13504>`__: DOC: Update links in PULL_REQUEST_TEMPLATE.md
+* `#13506 <https://github.com/numpy/numpy/pull/13506>`__: ENH: Add sparse option to np.core.numeric.indices
+* `#13507 <https://github.com/numpy/numpy/pull/13507>`__: BUG: np.array cleared errors occurred in PyMemoryView_FromObject
+* `#13508 <https://github.com/numpy/numpy/pull/13508>`__: BUG: Removes ValueError for empty kwargs in arraymultiter_new
+* `#13518 <https://github.com/numpy/numpy/pull/13518>`__: MAINT: implement assert_array_compare without converting array...
+* `#13520 <https://github.com/numpy/numpy/pull/13520>`__: BUG: exp, log AVX loops do not use steps
+* `#13523 <https://github.com/numpy/numpy/pull/13523>`__: BUG: distutils/system_info.py fix missing subprocess import
+* `#13529 <https://github.com/numpy/numpy/pull/13529>`__: MAINT: Use exec() instead array_function_dispatch to improve...
+* `#13530 <https://github.com/numpy/numpy/pull/13530>`__: BENCH: Modify benchmarks for radix sort.
+* `#13534 <https://github.com/numpy/numpy/pull/13534>`__: BLD: Make CI pass again with pytest 4.5
+* `#13541 <https://github.com/numpy/numpy/pull/13541>`__: ENH: restore unpack bit lookup table
+* `#13544 <https://github.com/numpy/numpy/pull/13544>`__: ENH: Allow broadcast to be called with zero arguments
+* `#13550 <https://github.com/numpy/numpy/pull/13550>`__: TST: Register markers in conftest.py.
+* `#13551 <https://github.com/numpy/numpy/pull/13551>`__: DOC: Add note to ``nonzero`` docstring.
+* `#13558 <https://github.com/numpy/numpy/pull/13558>`__: MAINT: Fix errors seen on new python 3.8
+* `#13570 <https://github.com/numpy/numpy/pull/13570>`__: DOC: Remove duplicate documentation of the PyArray_SimpleNew...
+* `#13571 <https://github.com/numpy/numpy/pull/13571>`__: DOC: Mention that expand_dims returns a view
+* `#13574 <https://github.com/numpy/numpy/pull/13574>`__: DOC: remove performance claim from searchsorted()
+* `#13575 <https://github.com/numpy/numpy/pull/13575>`__: TST: Apply ufunc signature and type test fixmes.
+* `#13581 <https://github.com/numpy/numpy/pull/13581>`__: ENH: AVX support for exp/log for strided float32 arrays
+* `#13584 <https://github.com/numpy/numpy/pull/13584>`__: DOC: roadmap update
+* `#13589 <https://github.com/numpy/numpy/pull/13589>`__: MAINT: Increment stacklevel for warnings to account for NEP-18...
+* `#13590 <https://github.com/numpy/numpy/pull/13590>`__: BUG: Fixes for Undefined Behavior Sanitizer (UBSan) errors.
+* `#13595 <https://github.com/numpy/numpy/pull/13595>`__: NEP: update NEP 19 with API terminology
+* `#13599 <https://github.com/numpy/numpy/pull/13599>`__: DOC: Fixed minor doc error in take_along_axis
+* `#13603 <https://github.com/numpy/numpy/pull/13603>`__: TST: bump / verify OpenBLAS in CI
+* `#13619 <https://github.com/numpy/numpy/pull/13619>`__: DOC: Add missing return value documentation in ndarray.require
+* `#13621 <https://github.com/numpy/numpy/pull/13621>`__: DOC: Update boolean indices in index arrays with slices example
+* `#13623 <https://github.com/numpy/numpy/pull/13623>`__: BUG: Workaround for bug in clang7.0
+* `#13624 <https://github.com/numpy/numpy/pull/13624>`__: DOC: revert __skip_array_function__ from NEP-18
+* `#13626 <https://github.com/numpy/numpy/pull/13626>`__: DOC: update isfortran docs with return value
+* `#13627 <https://github.com/numpy/numpy/pull/13627>`__: MAINT: revert __skip_array_function__ from NEP-18
+* `#13629 <https://github.com/numpy/numpy/pull/13629>`__: BUG: setup.py install --skip-build fails
+* `#13632 <https://github.com/numpy/numpy/pull/13632>`__: MAINT: Collect together the special-casing of 0d nonzero into...
+* `#13633 <https://github.com/numpy/numpy/pull/13633>`__: DOC: caution against relying upon NumPy's implementation in subclasses
+* `#13634 <https://github.com/numpy/numpy/pull/13634>`__: MAINT: avoid nested dispatch in numpy.core.shape_base
+* `#13636 <https://github.com/numpy/numpy/pull/13636>`__: DOC: Add return section to linalg.matrix_rank & tensordot
+* `#13639 <https://github.com/numpy/numpy/pull/13639>`__: MAINT: Update mailmap for 1.17.0
+* `#13642 <https://github.com/numpy/numpy/pull/13642>`__: BUG: special case object arrays when printing rel-, abs-error...
+* `#13648 <https://github.com/numpy/numpy/pull/13648>`__: BUG: ensure that casting to/from structured is properly checked.
+* `#13649 <https://github.com/numpy/numpy/pull/13649>`__: DOC: Mention PyArray_GetField steals a reference
+* `#13652 <https://github.com/numpy/numpy/pull/13652>`__: MAINT: remove superfluous setting in can_cast_safely_table.
+* `#13655 <https://github.com/numpy/numpy/pull/13655>`__: BUG/MAINT: Non-native byteorder in random ints
+* `#13656 <https://github.com/numpy/numpy/pull/13656>`__: PERF: Use intrinsic rotr on Windows
+* `#13657 <https://github.com/numpy/numpy/pull/13657>`__: BUG: Avoid leading underscores in C function names.
+* `#13660 <https://github.com/numpy/numpy/pull/13660>`__: DOC: Updates following NumPy 1.16.4 release.
+* `#13663 <https://github.com/numpy/numpy/pull/13663>`__: BUG: regression for array([pandas.DataFrame()])
+* `#13664 <https://github.com/numpy/numpy/pull/13664>`__: MAINT: Misc. typo fixes
+* `#13665 <https://github.com/numpy/numpy/pull/13665>`__: MAINT: Use intrinsics in Win64-PCG64
+* `#13670 <https://github.com/numpy/numpy/pull/13670>`__: BUG: Fix RandomState argument name
+* `#13672 <https://github.com/numpy/numpy/pull/13672>`__: DOC: Fix rst markup in RELEASE_WALKTHROUGH.
+* `#13678 <https://github.com/numpy/numpy/pull/13678>`__: BUG: fix benchmark suite importability on Numpy<1.17
+* `#13682 <https://github.com/numpy/numpy/pull/13682>`__: ENH: Support __length_hint__ in PyArray_FromIter
+* `#13684 <https://github.com/numpy/numpy/pull/13684>`__: BUG: Move ndarray.dump to python and make it close the file it...
+* `#13687 <https://github.com/numpy/numpy/pull/13687>`__: DOC: Remove misleading statement
+* `#13688 <https://github.com/numpy/numpy/pull/13688>`__: MAINT: Correct masked aliases
+* `#13690 <https://github.com/numpy/numpy/pull/13690>`__: MAINT: Remove version added from Generator
+* `#13691 <https://github.com/numpy/numpy/pull/13691>`__: BUG: Prevent passing of size 0 to array alloc C functions
+* `#13692 <https://github.com/numpy/numpy/pull/13692>`__: DOC: Update C-API documentation of scanfunc, fromstr
+* `#13693 <https://github.com/numpy/numpy/pull/13693>`__: ENH: Pass input strides and dimensions by pointer to const
+* `#13695 <https://github.com/numpy/numpy/pull/13695>`__: BUG: Ensure Windows choice returns int32
+* `#13696 <https://github.com/numpy/numpy/pull/13696>`__: DOC: Put the useful constants first
+* `#13697 <https://github.com/numpy/numpy/pull/13697>`__: MAINT: speed up hstack and vstack by eliminating list comprehension.
+* `#13700 <https://github.com/numpy/numpy/pull/13700>`__: Add links for GitHub Sponsors button.
+* `#13703 <https://github.com/numpy/numpy/pull/13703>`__: DOC: Adds documentation for numpy.dtype.base
+* `#13704 <https://github.com/numpy/numpy/pull/13704>`__: DOC: Mention PyArray_DIMS can be NULL
+* `#13708 <https://github.com/numpy/numpy/pull/13708>`__: DEP: Deprecate nonzero(0d) in favor of calling atleast_1d explicitly
+* `#13715 <https://github.com/numpy/numpy/pull/13715>`__: BUG: Fix use-after-free in boolean indexing
+* `#13716 <https://github.com/numpy/numpy/pull/13716>`__: BUG: Fix random.choice when probability is not C contiguous
+* `#13720 <https://github.com/numpy/numpy/pull/13720>`__: MAINT/BUG: Manage more files with with statements
+* `#13721 <https://github.com/numpy/numpy/pull/13721>`__: MAINT,BUG: More ufunc exception cleanup
+* `#13724 <https://github.com/numpy/numpy/pull/13724>`__: MAINT: fix use of cache_dim
+* `#13725 <https://github.com/numpy/numpy/pull/13725>`__: BUG: fix compilation of 3rd party modules with Py_LIMITED_API...
+* `#13726 <https://github.com/numpy/numpy/pull/13726>`__: MAINT: Update PCG jump sizes
+* `#13729 <https://github.com/numpy/numpy/pull/13729>`__: DOC: Merge together DISTUTILS.rst.txt#template-files" and distutils.r…
+* `#13730 <https://github.com/numpy/numpy/pull/13730>`__: MAINT: Change keyword from reserved word
+* `#13737 <https://github.com/numpy/numpy/pull/13737>`__: DOC: Mention and try to explain pairwise summation in sum
+* `#13741 <https://github.com/numpy/numpy/pull/13741>`__: MAINT: random: Remove unused empty file binomial.h.
+* `#13743 <https://github.com/numpy/numpy/pull/13743>`__: MAINT: random: Rename legacy distributions file.
+* `#13744 <https://github.com/numpy/numpy/pull/13744>`__: DOC: Update the C style guide for C99.
+* `#13745 <https://github.com/numpy/numpy/pull/13745>`__: BUG: fix segfault on side-effect in __bool__ function in array.nonzero()
+* `#13746 <https://github.com/numpy/numpy/pull/13746>`__: [WIP] DOC : Refactor C-API -- Python Types and C structures
+* `#13757 <https://github.com/numpy/numpy/pull/13757>`__: MAINT: fix histogram*d dispatchers
+* `#13760 <https://github.com/numpy/numpy/pull/13760>`__: DOC: update test guidelines document to use pytest for skipif
+* `#13761 <https://github.com/numpy/numpy/pull/13761>`__: MAINT: random: Rewrite the hypergeometric distribution.
+* `#13762 <https://github.com/numpy/numpy/pull/13762>`__: MAINT: Use textwrap.dedent for multiline strings
+* `#13763 <https://github.com/numpy/numpy/pull/13763>`__: MAINT: Use with statements and dedent in core/setup.py
+* `#13767 <https://github.com/numpy/numpy/pull/13767>`__: DOC: Adds examples for dtype attributes
+* `#13770 <https://github.com/numpy/numpy/pull/13770>`__: MAINT: random: Combine ziggurat.h and ziggurat_constants.h
+* `#13771 <https://github.com/numpy/numpy/pull/13771>`__: DOC: Change random to uninitialized and unpredictable in empty...
+* `#13772 <https://github.com/numpy/numpy/pull/13772>`__: BUILD: use numpy-wheels/openblas_support.py to create _distributor_init.py
+* `#13773 <https://github.com/numpy/numpy/pull/13773>`__: DOC: Update of reference to paper for Lemire's method
+* `#13774 <https://github.com/numpy/numpy/pull/13774>`__: BUG: Make ``Generator._masked`` flag default to ``False``.
+* `#13777 <https://github.com/numpy/numpy/pull/13777>`__: MAINT: Remove duplication of should_use_min_scalar_type function
+* `#13780 <https://github.com/numpy/numpy/pull/13780>`__: ENH: use SeedSequence instead of seed()
+* `#13781 <https://github.com/numpy/numpy/pull/13781>`__: DOC: Update TESTS.rst.txt for pytest
+* `#13786 <https://github.com/numpy/numpy/pull/13786>`__: MAINT: random: Fix a few compiler warnings.
+* `#13787 <https://github.com/numpy/numpy/pull/13787>`__: DOC: Fixed the problem of "versionadded"
+* `#13788 <https://github.com/numpy/numpy/pull/13788>`__: MAINT: fix 'in' -> 'is' typo
+* `#13789 <https://github.com/numpy/numpy/pull/13789>`__: MAINT: Fix warnings in radixsort.c.src: comparing integers of...
+* `#13791 <https://github.com/numpy/numpy/pull/13791>`__: MAINT: remove dSFMT
+* `#13792 <https://github.com/numpy/numpy/pull/13792>`__: LICENSE: update dragon4 license to MIT
+* `#13793 <https://github.com/numpy/numpy/pull/13793>`__: MAINT: remove xoshiro* BitGenerators
+* `#13795 <https://github.com/numpy/numpy/pull/13795>`__: DOC: Update description of sep in fromstring
+* `#13803 <https://github.com/numpy/numpy/pull/13803>`__: DOC: Improve documentation for ``defchararray``
+* `#13813 <https://github.com/numpy/numpy/pull/13813>`__: BUG: further fixup to histogram2d dispatcher.
+* `#13815 <https://github.com/numpy/numpy/pull/13815>`__: MAINT: Correct intrinsic use on Windows
+* `#13818 <https://github.com/numpy/numpy/pull/13818>`__: TST: Add tests for ComplexWarning in astype
+* `#13819 <https://github.com/numpy/numpy/pull/13819>`__: DOC: Fix documented default value of ``__array_priority__`` for...
+* `#13820 <https://github.com/numpy/numpy/pull/13820>`__: MAINT, DOC: Fix misspelled words in documentation.
+* `#13821 <https://github.com/numpy/numpy/pull/13821>`__: MAINT: core: Fix a compiler warning.
+* `#13830 <https://github.com/numpy/numpy/pull/13830>`__: MAINT: Update tox for supported Python versions
+* `#13832 <https://github.com/numpy/numpy/pull/13832>`__: MAINT: remove pcg32 BitGenerator
+* `#13833 <https://github.com/numpy/numpy/pull/13833>`__: MAINT: remove ThreeFry BitGenerator
+* `#13837 <https://github.com/numpy/numpy/pull/13837>`__: MAINT, BUG: fixes from seedsequence
+* `#13838 <https://github.com/numpy/numpy/pull/13838>`__: ENH: SFC64 BitGenerator
+* `#13839 <https://github.com/numpy/numpy/pull/13839>`__: MAINT: Ignore some generated files.
+* `#13840 <https://github.com/numpy/numpy/pull/13840>`__: ENH: np.random.default_gen()
+* `#13843 <https://github.com/numpy/numpy/pull/13843>`__: DOC: remove note about `__array_ufunc__` being provisional for...
+* `#13849 <https://github.com/numpy/numpy/pull/13849>`__: DOC: np.random documentation cleanup and expansion.
+* `#13850 <https://github.com/numpy/numpy/pull/13850>`__: DOC: Update performance numbers
+* `#13851 <https://github.com/numpy/numpy/pull/13851>`__: MAINT: Update shippable.yml to remove Python 2 dependency
+* `#13855 <https://github.com/numpy/numpy/pull/13855>`__: BUG: Fix memory leak in dtype from dict contructor
+* `#13856 <https://github.com/numpy/numpy/pull/13856>`__: MAINT: move location of bitgen.h
+* `#13858 <https://github.com/numpy/numpy/pull/13858>`__: BUG: do not force emulation of 128-bit arithmetic.
+* `#13859 <https://github.com/numpy/numpy/pull/13859>`__: DOC: Update performance numbers for PCG64
+* `#13861 <https://github.com/numpy/numpy/pull/13861>`__: BUG: Ensure consistent interpretation of uint64 states.
+* `#13863 <https://github.com/numpy/numpy/pull/13863>`__: DOC: Document the precise PCG variant.
+* `#13864 <https://github.com/numpy/numpy/pull/13864>`__: TST: Ignore DeprecationWarning during nose imports
+* `#13869 <https://github.com/numpy/numpy/pull/13869>`__: DOC: Prepare for 1.17.0rc1 release
+* `#13870 <https://github.com/numpy/numpy/pull/13870>`__: MAINT,BUG: Use nbytes to also catch empty descr during allocation
+* `#13873 <https://github.com/numpy/numpy/pull/13873>`__: ENH: Rename default_gen -> default_rng
+* `#13893 <https://github.com/numpy/numpy/pull/13893>`__: DOC: fix links in 1.17 release note
+* `#13897 <https://github.com/numpy/numpy/pull/13897>`__: DOC: Use Cython >= 0.29.11 for Python 3.8 support.
+* `#13932 <https://github.com/numpy/numpy/pull/13932>`__: MAINT,BUG,DOC: Fix errors in _add_newdocs
+* `#13963 <https://github.com/numpy/numpy/pull/13963>`__: ENH, BUILD: refactor all OpenBLAS downloads into a single, testable...
+* `#13971 <https://github.com/numpy/numpy/pull/13971>`__: DOC: emphasize random API changes
+* `#13972 <https://github.com/numpy/numpy/pull/13972>`__: MAINT: Rewrite Floyd algorithm
+* `#13992 <https://github.com/numpy/numpy/pull/13992>`__: BUG: Do not crash on recursive `.dtype` attribute lookup.
+* `#13993 <https://github.com/numpy/numpy/pull/13993>`__: DEP: Speed up WarnOnWrite deprecation in buffer interface
+* `#13995 <https://github.com/numpy/numpy/pull/13995>`__: BLD: Remove Trusty dist in Travis CI build
+* `#13996 <https://github.com/numpy/numpy/pull/13996>`__: BUG: Handle weird bytestrings in dtype()
+* `#13997 <https://github.com/numpy/numpy/pull/13997>`__: BUG: i0 Bessel function regression on array-likes supporting...
+* `#13998 <https://github.com/numpy/numpy/pull/13998>`__: BUG: Missing warnings import in polyutils.
+* `#13999 <https://github.com/numpy/numpy/pull/13999>`__: DOC: Document array_function at a higher level.
+* `#14001 <https://github.com/numpy/numpy/pull/14001>`__: DOC: Show workaround for Generator.integers backward compatibility
+* `#14021 <https://github.com/numpy/numpy/pull/14021>`__: DOC: Prepare 1.17.0rc2 release.
+* `#14040 <https://github.com/numpy/numpy/pull/14040>`__: DOC: Improve quickstart documentation of new random Generator.
+* `#14041 <https://github.com/numpy/numpy/pull/14041>`__: TST, MAINT: expand OpenBLAS version checking
+* `#14080 <https://github.com/numpy/numpy/pull/14080>`__: BUG, DOC: add new recfunctions to `__all__`
+* `#14081 <https://github.com/numpy/numpy/pull/14081>`__: BUG: fix build issue on icc 2016
+* `#14082 <https://github.com/numpy/numpy/pull/14082>`__: BUG: Fix file-like object check when saving arrays
+* `#14109 <https://github.com/numpy/numpy/pull/14109>`__: REV: "ENH: Improved performance of PyArray_FromAny for sequences...
+* `#14126 <https://github.com/numpy/numpy/pull/14126>`__: BUG, TEST: Adding validation test suite to validate float32 exp
+* `#14127 <https://github.com/numpy/numpy/pull/14127>`__: DOC: Add blank line above doctest for intersect1d
+* `#14128 <https://github.com/numpy/numpy/pull/14128>`__: MAINT: adjustments to test_ufunc_noncontigous
+* `#14129 <https://github.com/numpy/numpy/pull/14129>`__: MAINT: Use equality instead of identity check with literal
+* `#14133 <https://github.com/numpy/numpy/pull/14133>`__: MAINT: Update mailmap and changelog for 1.17.0
diff --git a/doc/neps/nep-0010-new-iterator-ufunc.rst b/doc/neps/nep-0010-new-iterator-ufunc.rst
index 8601b4a4c..fd7b3e52c 100644
--- a/doc/neps/nep-0010-new-iterator-ufunc.rst
+++ b/doc/neps/nep-0010-new-iterator-ufunc.rst
@@ -1877,8 +1877,8 @@ the new iterator.
Here is one of the original functions, for reference, and some
random image data.::
- In [5]: rand1 = np.random.random_sample(1080*1920*4).astype(np.float32)
- In [6]: rand2 = np.random.random_sample(1080*1920*4).astype(np.float32)
+ In [5]: rand1 = np.random.random(1080*1920*4).astype(np.float32)
+ In [6]: rand2 = np.random.random(1080*1920*4).astype(np.float32)
In [7]: image1 = rand1.reshape(1080,1920,4).swapaxes(0,1)
In [8]: image2 = rand2.reshape(1080,1920,4).swapaxes(0,1)
diff --git a/doc/neps/nep-0016-abstract-array.rst b/doc/neps/nep-0016-abstract-array.rst
index 86d164d8e..7551b11b9 100644
--- a/doc/neps/nep-0016-abstract-array.rst
+++ b/doc/neps/nep-0016-abstract-array.rst
@@ -266,7 +266,7 @@ array, then they'll get a segfault. Right now, in the same situation,
``asarray`` will instead invoke the object's ``__array__`` method, or
use the buffer interface to make a view, or pass through an array with
object dtype, or raise an error, or similar. Probably none of these
-outcomes are actually desireable in most cases, so maybe making it a
+outcomes are actually desirable in most cases, so maybe making it a
segfault instead would be OK? But it's dangerous given that we don't
know how common such code is. OTOH, if we were starting from scratch
then this would probably be the ideal solution.
diff --git a/doc/neps/nep-0018-array-function-protocol.rst b/doc/neps/nep-0018-array-function-protocol.rst
index ffe780c79..fb9b838b5 100644
--- a/doc/neps/nep-0018-array-function-protocol.rst
+++ b/doc/neps/nep-0018-array-function-protocol.rst
@@ -10,6 +10,7 @@ NEP 18 — A dispatch mechanism for NumPy's high level array functions
:Status: Provisional
:Type: Standards Track
:Created: 2018-05-29
+:Updated: 2019-05-25
:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-August/078493.html
Abstact
@@ -97,12 +98,15 @@ A prototype implementation can be found in
.. note::
- Dispatch with the ``__array_function__`` protocol has been implemented on
- NumPy's master branch but is not yet enabled by default. In NumPy 1.16,
- you will need to set the environment variable
- ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1`` before importing NumPy to test
- NumPy function overrides. We anticipate the protocol will be enabled by
- default in NumPy 1.17.
+ Dispatch with the ``__array_function__`` protocol has been implemented but is
+ not yet enabled by default:
+
+ - In NumPy 1.16, you need to set the environment variable
+ ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=1`` before importing NumPy to test
+ NumPy function overrides.
+ - In NumPy 1.17, the protocol will be enabled by default, but can be disabled
+ with ``NUMPY_EXPERIMENTAL_ARRAY_FUNCTION=0``.
+  - Eventually, expect ``__array_function__`` to always be enabled.
The interface
~~~~~~~~~~~~~
@@ -199,6 +203,14 @@ include *all* of the corresponding NumPy function's optional arguments
Optional arguments are only passed in to ``__array_function__`` if they
were explicitly used in the NumPy function call.
+.. note::
+
+ Just like the case for builtin special methods like ``__add__``, properly
+ written ``__array_function__`` methods should always return
+ ``NotImplemented`` when an unknown type is encountered. Otherwise, it will
+ be impossible to correctly override NumPy functions from another object
+ if the operation also includes one of your objects.
+
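The note above can be made concrete with a small sketch (pure Python; the ``MyArray`` class, the handler registry, and the use of a plain string in place of a real NumPy function object are all hypothetical, for illustration only):

```python
# Hypothetical duck-array sketch: __array_function__ returns NotImplemented
# both for functions it does not handle and when an unrecognized type
# participates, mirroring builtin special methods like __add__.

HANDLED_FUNCTIONS = {}  # maps function names to MyArray implementations

class MyArray:
    def __init__(self, data):
        self.data = list(data)

    def __array_function__(self, func, types, args, kwargs):
        if func not in HANDLED_FUNCTIONS:
            return NotImplemented
        # Defer if any participating type is one we do not recognize, so
        # that other overrides still get a chance to handle the call.
        if not all(issubclass(t, MyArray) for t in types):
            return NotImplemented
        return HANDLED_FUNCTIONS[func](*args, **kwargs)

def concatenate(arrays):
    # Stand-in for np.concatenate in this sketch.
    out = []
    for a in arrays:
        out.extend(a.data)
    return MyArray(out)

HANDLED_FUNCTIONS['concatenate'] = concatenate
```

Because the method defers rather than raising, another object in the same call that also implements ``__array_function__`` can still claim the operation.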
Necessary changes within the NumPy codebase itself
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -300,6 +312,13 @@ In particular:
- If all ``__array_function__`` methods return ``NotImplemented``,
NumPy will raise ``TypeError``.
+If no ``__array_function__`` methods exist, NumPy will default to calling its
+own implementation, intended for use on NumPy arrays. This case arises, for
+example, when all array-like arguments are Python numbers or lists.
+(NumPy arrays do have a ``__array_function__`` method, given below, but it
+always returns ``NotImplemented`` if any argument other than a NumPy array
+subclass implements ``__array_function__``.)
+
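The resolution order described above can be modeled in plain Python (a simplified sketch, not NumPy's actual C implementation; the ``dispatch`` helper name is illustrative):

```python
# Simplified model of NumPy's dispatch: try each overriding argument's
# __array_function__ in order; fall back to the default implementation when
# no argument defines one; raise TypeError when every method defers.

def dispatch(public_api, implementation, relevant_args, args, kwargs):
    overloaded = [arg for arg in relevant_args
                  if hasattr(type(arg), '__array_function__')]
    if not overloaded:
        # e.g., all arguments are plain Python numbers or lists
        return implementation(*args, **kwargs)
    types = tuple(type(arg) for arg in overloaded)
    for arg in overloaded:
        result = type(arg).__array_function__(
            arg, public_api, types, args, kwargs)
        if result is not NotImplemented:
            return result
    raise TypeError('no implementation found for %r' % public_api)
```

A list of Python numbers takes the fallback path, while an object whose ``__array_function__`` always defers produces the ``TypeError`` described above.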
One deviation from the current behavior of ``__array_ufunc__`` is that NumPy
will only call ``__array_function__`` on the *first* argument of each unique
type. This matches Python's
@@ -310,31 +329,47 @@ between these two dispatch protocols, we should
`also update <https://github.com/numpy/numpy/issues/11306>`_
``__array_ufunc__`` to match this behavior.
-Special handling of ``numpy.ndarray``
-'''''''''''''''''''''''''''''''''''''
+The ``__array_function__`` method on ``numpy.ndarray``
+''''''''''''''''''''''''''''''''''''''''''''''''''''''
The use cases for subclasses with ``__array_function__`` are the same as those
-with ``__array_ufunc__``, so ``numpy.ndarray`` should also define a
-``__array_function__`` method mirroring ``ndarray.__array_ufunc__``:
+with ``__array_ufunc__``, so ``numpy.ndarray`` also defines a
+``__array_function__`` method:
.. code:: python
def __array_function__(self, func, types, args, kwargs):
- # Cannot handle items that have __array_function__ other than our own.
- for t in types:
- if (hasattr(t, '__array_function__') and
- t.__array_function__ is not ndarray.__array_function__):
- return NotImplemented
-
- # Arguments contain no overrides, so we can safely call the
- # overloaded function again.
- return func(*args, **kwargs)
-
-To avoid infinite recursion, the dispatch rules for ``__array_function__`` need
-also the same special case they have for ``__array_ufunc__``: any arguments with
-an ``__array_function__`` method that is identical to
-``numpy.ndarray.__array_function__`` are not be called as
-``__array_function__`` implementations.
+ if not all(issubclass(t, ndarray) for t in types):
+ # Defer to any non-subclasses that implement __array_function__
+ return NotImplemented
+
+ # Use NumPy's private implementation without __array_function__
+ # dispatching
+ return func._implementation(*args, **kwargs)
+
+This method matches NumPy's dispatching rules, so for the most part it is
+possible to pretend that ``ndarray.__array_function__`` does not exist.
+The private ``_implementation`` attribute, defined below in the
+``array_function_dispatch`` decorator, allows us to avoid the special cases for
+NumPy arrays that were needed in the ``__array_ufunc__`` protocol.
+
+The ``__array_function__`` protocol always calls subclasses before
+superclasses, so if any ``ndarray`` subclasses are involved in an operation,
+they will get the chance to override it, just as if any other argument
+overrides ``__array_function__``. But the default behavior in an operation
+that combines a base NumPy array and a subclass is different: if the subclass
+returns ``NotImplemented``, NumPy's implementation of the function will be
+called instead of raising an exception. This is appropriate since subclasses
+are `expected to be substitutable <https://en.wikipedia.org/wiki/Liskov_substitution_principle>`_.
+
+We still advise authors of subclasses to exercise caution when relying
+upon details of NumPy's internal implementations. It is not always possible to
+write a perfectly substitutable ndarray subclass, e.g., in cases involving the
+creation of new arrays, not least because NumPy makes use of internal
+optimizations specialized to base NumPy arrays, e.g., code written in C. Even
+if NumPy's implementation happens to work today, it may not work in the future.
+In these cases, your recourse is to re-implement top-level NumPy functions via
+``__array_function__`` on your subclass.
Changes within NumPy functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -346,13 +381,12 @@ but of fairly simple and innocuous code that should complete quickly and
without effect if no arguments implement the ``__array_function__``
protocol.
-In most cases, these functions should written using the
-``array_function_dispatch`` decorator, which also associates dispatcher
-functions:
+To achieve this, we define an ``array_function_dispatch`` decorator to rewrite
+NumPy functions. The basic implementation is as follows:
.. code:: python
- def array_function_dispatch(dispatcher):
+ def array_function_dispatch(dispatcher, module=None):
"""Wrap a function for dispatch with the __array_function__ protocol."""
def decorator(implementation):
@functools.wraps(implementation)
@@ -360,6 +394,10 @@ functions:
relevant_args = dispatcher(*args, **kwargs)
return implement_array_function(
implementation, public_api, relevant_args, args, kwargs)
+ if module is not None:
+ public_api.__module__ = module
+ # for ndarray.__array_function__
+ public_api._implementation = implementation
return public_api
return decorator
@@ -367,7 +405,7 @@ functions:
def _broadcast_to_dispatcher(array, shape, subok=None):
return (array,)
- @array_function_dispatch(_broadcast_to_dispatcher)
+ @array_function_dispatch(_broadcast_to_dispatcher, module='numpy')
def broadcast_to(array, shape, subok=False):
... # existing definition of np.broadcast_to
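For readers who want to trace the whole path, here is a self-contained sketch of the decorator working end to end. The dispatch loop is deliberately simplified and the pure-Python body for ``broadcast_to`` is a hypothetical stand-in (NumPy's real machinery lives in C):

```python
import functools

def implement_array_function(implementation, public_api, relevant_args,
                             args, kwargs):
    # Simplified dispatch: try each argument's __array_function__, then
    # fall back to the default implementation.
    for arg in relevant_args:
        method = getattr(type(arg), '__array_function__', None)
        if method is not None:
            result = method(arg, public_api, (type(arg),), args, kwargs)
            if result is not NotImplemented:
                return result
    return implementation(*args, **kwargs)

def array_function_dispatch(dispatcher, module=None):
    """Wrap a function for dispatch with the __array_function__ protocol."""
    def decorator(implementation):
        @functools.wraps(implementation)
        def public_api(*args, **kwargs):
            relevant_args = dispatcher(*args, **kwargs)
            return implement_array_function(
                implementation, public_api, relevant_args, args, kwargs)
        if module is not None:
            public_api.__module__ = module
        # exposed for use by ndarray.__array_function__
        public_api._implementation = implementation
        return public_api
    return decorator

def _broadcast_to_dispatcher(array, shape, subok=None):
    return (array,)

@array_function_dispatch(_broadcast_to_dispatcher, module='numpy')
def broadcast_to(array, shape, subok=False):
    # hypothetical pure-Python stand-in body, just for the sketch
    return [list(array)] * shape[0]
```

Calling the wrapped function with a plain list runs the stand-in body, while an object defining ``__array_function__`` takes over the call entirely.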
@@ -385,33 +423,41 @@ It's particularly worth calling out the decorator's use of
the wrapped NumPy function.
- On Python 3, it also ensures that the decorator function copies the original
function signature, which is important for introspection based tools such as
- auto-complete. If we care about preserving function signatures on Python 2,
- for the `short while longer <http://www.numpy.org/neps/nep-0014-dropping-python2.7-proposal.html>`_
- that NumPy supports Python 2.7, we do could do so by adding a vendored
- dependency on the (single-file, BSD licensed)
- `decorator library <https://github.com/micheles/decorator>`_.
+ auto-complete.
- Finally, it ensures that the wrapped function
`can be pickled <http://gael-varoquaux.info/programming/decoration-in-python-done-right-decorating-and-pickling.html>`_.
-In a few cases, it would not make sense to use the ``array_function_dispatch``
-decorator directly, but override implementation in terms of
-``implement_array_function`` should still be straightforward.
-
-- Functions written entirely in C (e.g., ``np.concatenate``) can't use
- decorators, but they could still use a C equivalent of
- ``implement_array_function``. If performance is not a
- concern, they could also be easily wrapped with a small Python wrapper.
-- ``np.einsum`` does complicated argument parsing to handle two different
- function signatures. It would probably be best to avoid the overhead of
- parsing it twice in the typical case of no overrides.
-
-Fortunately, in each of these cases so far, the functions already has a generic
-signature of the form ``*args, **kwargs``, which means we don't need to worry
-about potential inconsistency between how functions are called and what we pass
-to ``__array_function__``. (In C, arguments for all Python functions are parsed
-from a tuple ``*args`` and dict ``**kwargs``.) This shouldn't stop us from
-writing overrides for functions with non-generic signatures that can't use the
-decorator, but we should consider these cases carefully.
+The example usage illustrates several best practices for writing dispatchers
+relevant to NumPy contributors:
+
+- We passed the ``module`` argument, which in turn sets the ``__module__``
+ attribute on the generated function. This is for the benefit of better error
+ messages, here for errors raised internally by NumPy when no implementation
+ is found, e.g.,
+ ``TypeError: no implementation found for 'numpy.broadcast_to'``. Setting
+ ``__module__`` to the canonical location in NumPy's public API encourages
+ users to use NumPy's public API for identifying functions in
+ ``__array_function__``.
+
+- The dispatcher is a function that returns a tuple, rather than an equivalent
+ (and equally valid) generator using ``yield``:
+
+ .. code:: python
+
+      # equivalent generator-style dispatcher
+      def _broadcast_to_dispatcher(array, shape, subok=None):
+          yield array
+
+ This is no accident: NumPy's implementation of dispatch for
+ ``__array_function__`` is fastest when dispatcher functions return a builtin
+ sequence type (``tuple`` or ``list``).
+
+ On a related note, it's perfectly fine for dispatchers to return arguments
+ even if in some cases you *know* that they cannot have an
+ ``__array_function__`` method. This can arise for functions with default
+ arguments (e.g., ``None``) or complex signatures. NumPy's dispatching logic
+ sorts out these cases very quickly, so it generally is not worth the trouble
+ of parsing them on your own.
.. note::
@@ -426,10 +472,10 @@ An important virtue of this approach is that it allows for adding new
optional arguments to NumPy functions without breaking code that already
relies on ``__array_function__``.
-This is not a theoretical concern. The implementation of overrides *within*
-functions like ``np.sum()`` rather than defining a new function capturing
-``*args`` and ``**kwargs`` necessitated some awkward gymnastics to ensure that
-the new ``keepdims`` argument is only passed in cases where it is used, e.g.,
+This is not a theoretical concern. NumPy's older, haphazard implementation of
+overrides *within* functions like ``np.sum()`` necessitated some awkward
+gymnastics when we decided to add new optional arguments, e.g., the new
+``keepdims`` argument is only passed in cases where it is used:
.. code:: python
@@ -439,11 +485,12 @@ the new ``keepdims`` argument is only passed in cases where it is used, e.g.,
kwargs['keepdims'] = keepdims
return array.sum(..., **kwargs)
-This also makes it possible to add optional arguments to ``__array_function__``
-implementations incrementally and only in cases where it makes sense. For
-example, a library implementing immutable arrays would not be required to
-explicitly include an unsupported ``out`` argument. Doing this properly for all
-optional arguments is somewhat onerous, e.g.,
+For ``__array_function__`` implementors, this also means that it is possible
+to implement even existing optional arguments incrementally, and only in cases
+where it makes sense. For example, a library implementing immutable arrays
+would not be required to explicitly include an unsupported ``out`` argument in
+the function signature. This can be somewhat onerous to implement properly,
+e.g.,
.. code:: python
@@ -553,7 +600,7 @@ Backward compatibility
----------------------
This proposal does not change existing semantics, except for those arguments
-that currently have ``__array_function__`` methods, which should be rare.
+that currently have ``__array_function__`` attributes, which should be rare.
Alternatives
@@ -595,7 +642,7 @@ layer, separating NumPy's high level API from default implementations on
The downsides are that this would require an explicit opt-in from all
existing code, e.g., ``import numpy.api as np``, and in the long term
-would result in the maintainence of two separate NumPy APIs. Also, many
+would result in the maintenance of two separate NumPy APIs. Also, many
functions from ``numpy`` itself are already overloaded (but
inadequately), so confusion about high vs. low level APIs in NumPy would
still persist.
@@ -631,7 +678,7 @@ would be straightforward to write a shim for a default
Implementations in terms of a limited core API
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The internal implementations of some NumPy functions is extremely simple.
+The internal implementation of some NumPy functions is extremely simple.
For example:
- ``np.stack()`` is implemented in only a few lines of code by combining
@@ -669,8 +716,8 @@ However, to work well this would require the possibility of implementing
*some* but not all functions with ``__array_function__``, e.g., as described
in the next section.
-Coercion to a NumPy array as a catch-all fallback
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Partial implementation of NumPy's API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With the current design, classes that implement ``__array_function__``
to overload at least one function implicitly declare an intent to
@@ -687,44 +734,64 @@ that assuredly many pandas users rely on. If pandas implemented
functions like ``np.nanmean`` would suddenly break on pandas objects by
raising TypeError.
+Even libraries that reimplement most of NumPy's public API sometimes rely upon
+using utility functions from NumPy without a wrapper. For example, both CuPy
+and JAX simply `use an alias <https://github.com/numpy/numpy/issues/12974>`_ to
+``np.result_type``, which already supports duck-types with a ``dtype``
+attribute.
+
With ``__array_ufunc__``, it's possible to alleviate this concern by
casting all arguments to numpy arrays and re-calling the ufunc, but the
heterogeneous function signatures supported by ``__array_function__``
make it impossible to implement this generic fallback behavior for
``__array_function__``.
-We could resolve this issue by change the handling of return values in
-``__array_function__`` in either of two possible ways:
+We considered three possible ways to resolve this issue, but none were
+entirely satisfactory:
-1. Change the meaning of all arguments returning ``NotImplemented`` to indicate
- that all arguments should be coerced to NumPy arrays and the operation
- should be retried. However, many array libraries (e.g., scipy.sparse) really
- don't want implicit conversions to NumPy arrays, and often avoid implementing
- ``__array__`` for exactly this reason. Implicit conversions can result in
- silent bugs and performance degradation.
+1. Change the meaning of all arguments returning ``NotImplemented`` from
+ ``__array_function__`` to indicate that all arguments should be coerced to
+ NumPy arrays and the operation should be retried. However, many array
+ libraries (e.g., scipy.sparse) really don't want implicit conversions to
+ NumPy arrays, and often avoid implementing ``__array__`` for exactly this
+ reason. Implicit conversions can result in silent bugs and performance
+ degradation.
Potentially, we could enable this behavior only for types that implement
``__array__``, which would resolve the most problematic cases like
scipy.sparse. But in practice, a large fraction of classes that present a
high level API like NumPy arrays already implement ``__array__``. This would
preclude reliable use of NumPy's high level API on these objects.
+
2. Use another sentinel value of some sort, e.g.,
- ``np.NotImplementedButCoercible``, to indicate that a class implementing part
- of NumPy's higher level array API is coercible as a fallback. This is a more
- appealing option.
-
-With either approach, we would need to define additional rules for *how*
-coercible array arguments are coerced. The only sane rule would be to treat
-these return values as equivalent to not defining an
-``__array_function__`` method at all, which means that NumPy functions would
-fall-back to their current behavior of coercing all array-like arguments.
-
-It is not yet clear to us yet if we need an optional like
-``NotImplementedButCoercible``, so for now we propose to defer this issue.
-We can always implement ``np.NotImplementedButCoercible`` at some later time if
-it proves critical to the NumPy community in the future. Importantly, we don't
-think this will stop critical libraries that desire to implement most of the
-high level NumPy API from adopting this proposal.
+ ``np.NotImplementedButCoercible``, to indicate that a class implementing
+ part of NumPy's higher level array API is coercible as a fallback. If all
+ arguments return ``NotImplementedButCoercible``, arguments would be coerced
+ and the operation would be retried.
+
+ Unfortunately, correct behavior after encountering
+ ``NotImplementedButCoercible`` is not always obvious. Particularly
+ challenging is the "mixed" case where some arguments return
+ ``NotImplementedButCoercible`` and others return ``NotImplemented``.
+ Would dispatching be retried after only coercing the "coercible" arguments?
+ If so, then conceivably we could end up looping through the dispatching
+ logic an arbitrary number of times. Either way, the dispatching rules would
+ definitely get more complex and harder to reason about.
+
+3. Allow access to NumPy's implementation of functions, e.g., in the form of
+ a publicly exposed ``__skip_array_function__`` attribute on the NumPy
+ functions. This would allow for falling back to NumPy's implementation by
+ using ``func.__skip_array_function__`` inside ``__array_function__``
+   methods, and could also potentially be used to avoid the
+ overhead of dispatching. However, it runs the risk of potentially exposing
+ details of NumPy's implementations for NumPy functions that do not call
+ ``np.asarray()`` internally. See
+ `this note <https://mail.python.org/pipermail/numpy-discussion/2019-May/079541.html>`_
+ for a summary of the full discussion.
+
+These solutions would solve real use cases, but at the cost of additional
+complexity. We would like to gain experience with how ``__array_function__`` is
+actually used before making decisions that would be difficult to roll back.
A magic decorator that inspects type annotations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -860,7 +927,7 @@ a descriptor.
Given the complexity and the limited use cases, we are also deferring on this
issue for now, but we are confident that ``__array_function__`` could be
-expanded to accomodate these use cases in the future if need be.
+expanded to accommodate these use cases in the future if need be.
Discussion
----------
@@ -877,7 +944,7 @@ it was discussed at a `NumPy developer sprint
Berkeley Institute for Data Science (BIDS) <https://bids.berkeley.edu/>`_.
Detailed discussion of this proposal itself can be found on the
-`the mailing list <https://mail.python.org/pipermail/numpy-discussion/2018-June/078127.html>`_ and relvant pull requests
+`the mailing list <https://mail.python.org/pipermail/numpy-discussion/2018-June/078127.html>`_ and relevant pull requests
(`1 <https://github.com/numpy/numpy/pull/11189>`_,
`2 <https://github.com/numpy/numpy/pull/11303#issuecomment-396638175>`_,
`3 <https://github.com/numpy/numpy/pull/11374>`_)
diff --git a/doc/neps/nep-0019-rng-policy.rst b/doc/neps/nep-0019-rng-policy.rst
index f50897b0f..aa5fdc653 100644
--- a/doc/neps/nep-0019-rng-policy.rst
+++ b/doc/neps/nep-0019-rng-policy.rst
@@ -6,6 +6,7 @@ NEP 19 — Random Number Generator Policy
:Status: Accepted
:Type: Standards Track
:Created: 2018-05-24
+:Updated: 2019-05-21
:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-June/078126.html
Abstract
@@ -91,7 +92,8 @@ those contributors simply walked away.
Implementation
--------------
-Work on a proposed new PRNG subsystem is already underway in the randomgen_
+Work on a proposed new Pseudo Random Number Generator (PRNG) subsystem is
+already underway in the randomgen_
project. The specifics of the new design are out of scope for this NEP and up
for much discussion, but we will discuss general policies that will guide the
evolution of whatever code is adopted. We will also outline just a few of the
@@ -119,37 +121,38 @@ Gaussian variate generation to the faster `Ziggurat algorithm
discouraged improvement would be tweaking the Ziggurat tables just a little bit
for a small performance improvement.
-Any new design for the RNG subsystem will provide a choice of different core
+Any new design for the random subsystem will provide a choice of different core
uniform PRNG algorithms. A promising design choice is to make these core
uniform PRNGs their own lightweight objects with a minimal set of methods
-(randomgen_ calls them “basic RNGs”). The broader set of non-uniform
+(randomgen_ calls them “BitGenerators”). The broader set of non-uniform
distributions will be its own class that holds a reference to one of these core
uniform PRNG objects and simply delegates to the core uniform PRNG object when
-it needs uniform random numbers. To borrow an example from randomgen_, the
-class ``MT19937`` is a basic RNG that implements the classic Mersenne Twister
-algorithm. The class ``RandomGenerator`` wraps around the basic RNG to provide
+it needs uniform random numbers (randomgen_ calls this the Generator). To
+borrow an example from randomgen_, the
+class ``MT19937`` is a BitGenerator that implements the classic Mersenne Twister
+algorithm. The class ``Generator`` wraps around the BitGenerator to provide
all of the non-uniform distribution methods::
# This is not the only way to instantiate this object.
# This is just handy for demonstrating the delegation.
- >>> brng = MT19937(seed)
- >>> rg = RandomGenerator(brng)
+ >>> bg = MT19937(seed)
+ >>> rg = Generator(bg)
>>> x = rg.standard_normal(10)
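The delegation pattern above can be sketched without NumPy at all. The toy ``ToyBitGenerator``/``ToyGenerator`` pair below is hypothetical (built on Python's ``random`` module, with a crude modulo draw where NumPy uses an unbiased method), but it shows the same division of labor: the bit generator produces raw uniform bits, the generator turns them into distributions:

```python
import random

class ToyBitGenerator:
    """Toy core uniform source: yields raw 64-bit unsigned integers."""
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def next_uint64(self):
        return self._rng.getrandbits(64)

class ToyGenerator:
    """Toy distributions class: delegates to a BitGenerator for raw bits."""
    def __init__(self, bit_generator):
        self.bit_generator = bit_generator

    def random(self):
        # uniform double in [0, 1) built from the top 53 bits
        return (self.bit_generator.next_uint64() >> 11) * (2.0 ** -53)

    def integers(self, low, high):
        # crude modulo draw; NumPy uses an unbiased (Lemire-style) method
        return low + self.bit_generator.next_uint64() % (high - low)
```

Because all randomness flows through the bit generator, swapping in a different core algorithm leaves every distribution method unchanged, which is exactly the property the proposed design relies on.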
-We will be more strict about a select subset of methods on these basic RNG
+We will be more strict about a select subset of methods on these BitGenerator
objects. They MUST guarantee stream-compatibility for a specified set
of methods which are chosen to make it easier to compose them to build other
distributions and which are needed to abstract over the implementation details
-of the variety of core PRNG algorithms. Namely,
+of the variety of BitGenerator algorithms. Namely,
* ``.bytes()``
- * ``.random_uintegers()``
- * ``.random_sample()``
+ * ``.integers()`` (formerly ``.random_integers()``)
+ * ``.random()`` (formerly ``.random_sample()``)
-The distributions class (``RandomGenerator``) SHOULD have all of the same
+The distributions class (``Generator``) SHOULD have all of the same
distribution methods as ``RandomState`` with close-enough function signatures
such that almost all code that currently works with ``RandomState`` instances
-will work with ``RandomGenerator`` instances (ignoring the precise stream
+will work with ``Generator`` instances (ignoring the precise stream
values). Some variance will be allowed for integer distributions: in order to
avoid some of the cross-platform problems described above, these SHOULD be
rewritten to work with ``uint64`` numbers on all platforms.
@@ -183,9 +186,10 @@ reproducible across numpy versions.
This legacy distributions class MUST be accessible under the name
``numpy.random.RandomState`` for backwards compatibility. All current ways of
instantiating ``numpy.random.RandomState`` with a given state should
-instantiate the Mersenne Twister basic RNG with the same state. The legacy
-distributions class MUST be capable of accepting other basic RNGs. The purpose
-here is to ensure that one can write a program with a consistent basic RNG
+instantiate the Mersenne Twister BitGenerator with the same state. The legacy
+distributions class MUST be capable of accepting other BitGenerators. The
+purpose here is to ensure that one can write a program with a consistent
+BitGenerator
state with a mixture of libraries that may or may not have upgraded from
``RandomState``. Instances of the legacy distributions class MUST respond
``True`` to ``isinstance(rg, numpy.random.RandomState)`` because there is
@@ -209,27 +213,27 @@ consistently and usefully, but a very common usage is in unit tests where many
of the problems of global state are less likely.
This NEP does not propose removing these functions or changing them to use the
-less-stable ``RandomGenerator`` distribution implementations. Future NEPs
+less-stable ``Generator`` distribution implementations. Future NEPs
might.
Specifically, the initial release of the new PRNG subsystem SHALL leave these
convenience functions as aliases to the methods on a global ``RandomState``
-that is initialized with a Mersenne Twister basic RNG object. A call to
-``numpy.random.seed()`` will be forwarded to that basic RNG object. In
+that is initialized with a Mersenne Twister BitGenerator object. A call to
+``numpy.random.seed()`` will be forwarded to that BitGenerator object. In
addition, the global ``RandomState`` instance MUST be accessible in this
initial release by the name ``numpy.random.mtrand._rand``: Robert Kern long ago
promised ``scikit-learn`` that this name would be stable. Whoops.
-In order to allow certain workarounds, it MUST be possible to replace the basic
-RNG underneath the global ``RandomState`` with any other basic RNG object (we
-leave the precise API details up to the new subsystem). Calling
+In order to allow certain workarounds, it MUST be possible to replace the
+BitGenerator underneath the global ``RandomState`` with any other BitGenerator
+object (we leave the precise API details up to the new subsystem). Calling
``numpy.random.seed()`` thereafter SHOULD just pass the given seed to the
-current basic RNG object and not attempt to reset the basic RNG to the Mersenne
-Twister. The set of ``numpy.random.*`` convenience functions SHALL remain the
-same as they currently are. They SHALL be aliases to the ``RandomState``
-methods and not the new less-stable distributions class (``RandomGenerator``,
-in the examples above). Users who want to get the fastest, best distributions
-can follow best practices and instantiate generator objects explicitly.
+current BitGenerator object and not attempt to reset the BitGenerator to the
+Mersenne Twister. The set of ``numpy.random.*`` convenience functions SHALL
+remain the same as they currently are. They SHALL be aliases to the
+``RandomState`` methods and not the new less-stable distributions class
+(``Generator``, in the examples above). Users who want to get the fastest, best
+distributions can follow best practices and instantiate generator objects explicitly.
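As a hedged sketch of the contrast described above (using the API as it shipped in NumPy 1.17, where the recommended entry point is ``np.random.default_rng``):

```python
import numpy as np

# New-style: an explicit Generator instance, per the best practice above.
rng = np.random.default_rng(12345)
x = rng.random(3)                  # formerly random_sample() on RandomState

# Legacy: RandomState keeps its frozen Mersenne Twister stream.
rs = np.random.RandomState(12345)
y = rs.random_sample(3)
```

Instances of the legacy class still satisfy ``isinstance(rs, np.random.RandomState)``, as the NEP requires.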
This NEP does not propose that these requirements remain in perpetuity. After
we have experience with the new PRNG subsystem, we can and should revisit these
@@ -292,14 +296,14 @@ satisfactory subset. At least some projects used a fairly broad selection of
the ``RandomState`` methods in unit tests.
Downstream project owners would have been forced to modify their code to
-accomodate the new PRNG subsystem. Some modifications might be simply
+accommodate the new PRNG subsystem. Some modifications might be simply
mechanical, but the bulk of the work would have been tedious churn for no
positive improvement to the downstream project, just avoiding being broken.
Furthermore, under this old proposal, we would have had a quite lengthy
deprecation period where ``RandomState`` existed alongside the new system of
-basic RNGs and distribution classes. Leaving the implementation of
-``RandomState`` fixed meant that it could not use the new basic RNG state
+BitGenerator and Generator classes. Leaving the implementation of
+``RandomState`` fixed meant that it could not use the new BitGenerator state
objects. Developing programs that use a mixture of libraries that have and
have not upgraded would require managing two sets of PRNG states. This would
notionally have been time-limited, but we intended the deprecation to be very
@@ -308,9 +312,9 @@ long.
The current proposal solves all of these problems. All current usages of
``RandomState`` will continue to work in perpetuity, though some may be
discouraged through documentation. Unit tests can continue to use the full
-complement of ``RandomState`` methods. Mixed ``RandomState/RandomGenerator``
-code can safely share the common basic RNG state. Unmodified ``RandomState``
-code can make use of the new features of alternative basic RNGs like settable
+complement of ``RandomState`` methods. Mixed ``RandomState/Generator``
+code can safely share the common BitGenerator state. Unmodified ``RandomState``
+code can make use of the new features of alternative BitGenerators like settable
streams.
diff --git a/doc/neps/nep-0020-gufunc-signature-enhancement.rst b/doc/neps/nep-0020-gufunc-signature-enhancement.rst
index 38a9fd53b..a7a992cf1 100644
--- a/doc/neps/nep-0020-gufunc-signature-enhancement.rst
+++ b/doc/neps/nep-0020-gufunc-signature-enhancement.rst
@@ -3,7 +3,7 @@ NEP 20 — Expansion of Generalized Universal Function Signatures
===============================================================
:Author: Marten van Kerkwijk <mhvk@astro.utoronto.ca>
-:Status: Accepted
+:Status: Final
:Type: Standards Track
:Created: 2018-06-10
:Resolution: https://mail.python.org/pipermail/numpy-discussion/2018-April/077959.html,
diff --git a/doc/neps/nep-0026-missing-data-summary.rst b/doc/neps/nep-0026-missing-data-summary.rst
index e99138cdd..78fe999df 100644
--- a/doc/neps/nep-0026-missing-data-summary.rst
+++ b/doc/neps/nep-0026-missing-data-summary.rst
@@ -669,7 +669,7 @@ NumPy could more easily be overtaken by another project.
In the case of the existing NA contribution at issue, how we resolve
this disagreement represents a decision about how NumPy's
-developers, contributers, and users should interact. If we create
+developers, contributors, and users should interact. If we create
a document describing a dispute resolution process, how do we
design it so that it doesn't introduce a large burden and excessive
uncertainty on developers that could prevent them from productively
@@ -677,7 +677,7 @@ contributing code?
If we go this route of writing up a decision process which includes
such a dispute resolution mechanism, I think the meat of it should
-be a roadmap that potential contributers and developers can follow
+be a roadmap that potential contributors and developers can follow
to gain influence over NumPy. NumPy development needs broad support
beyond code contributions, and tying influence in the project to
contributions seems to me like it would be a good way to encourage
diff --git a/doc/neps/nep-0027-zero-rank-arrarys.rst b/doc/neps/nep-0027-zero-rank-arrarys.rst
index d932bb609..430397235 100644
--- a/doc/neps/nep-0027-zero-rank-arrarys.rst
+++ b/doc/neps/nep-0027-zero-rank-arrarys.rst
@@ -51,7 +51,7 @@ However there are some important differences:
* Array scalars are immutable
* Array scalars have different python type for different data types
-
+
Motivation for Array Scalars
----------------------------
@@ -62,7 +62,7 @@ we will try to explain why it is necessary to have three different ways to
represent a number.
There were several numpy-discussion threads:
-
+
* `rank-0 arrays`_ in a 2002 mailing list thread.
* Thoughts about zero dimensional arrays vs Python scalars in a `2005 mailing list thread`_]
@@ -71,7 +71,7 @@ It has been suggested several times that NumPy just use rank-0 arrays to
represent scalar quantities in all case. Pros and cons of converting rank-0
arrays to scalars were summarized as follows:
-- Pros:
+- Pros:
- Some cases when Python expects an integer (the most
dramatic is when slicing and indexing a sequence:
@@ -94,15 +94,15 @@ arrays to scalars were summarized as follows:
files (though this could also be done by a special case
in the pickling code for arrays)
-- Cons:
+- Cons:
- It is difficult to write generic code because scalars
do not have the same methods and attributes as arrays.
(such as ``.type`` or ``.shape``). Also Python scalars have
- different numeric behavior as well.
+ different numeric behavior as well.
- - This results in a special-case checking that is not
- pleasant. Fundamentally it lets the user believe that
+ - This results in a special-case checking that is not
+ pleasant. Fundamentally it lets the user believe that
somehow multidimensional homoegeneous arrays
are something like Python lists (which except for
Object arrays they are not).
@@ -117,7 +117,7 @@ The Need for Zero-Rank Arrays
-----------------------------
Once the idea to use zero-rank arrays to represent scalars was rejected, it was
-natural to consider whether zero-rank arrays can be eliminated alltogether.
+natural to consider whether zero-rank arrays can be eliminated altogether.
However there are some important use cases where zero-rank arrays cannot be
replaced by array scalars. See also `A case for rank-0 arrays`_ from February
2006.
@@ -164,12 +164,12 @@ Alexander started a `Jan 2006 discussion`_ on scipy-dev
with the following proposal:
... it may be reasonable to allow ``a[...]``. This way
- ellipsis can be interpereted as any number of ``:`` s including zero.
+ ellipsis can be interpereted as any number of ``:`` s including zero.
Another subscript operation that makes sense for scalars would be
- ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
- ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
+ ``a[...,newaxis]`` or even ``a[{newaxis, }* ..., {newaxis,}*]``, where
+ ``{newaxis,}*`` stands for any number of comma-separated newaxis tokens.
This will allow one to use ellipsis in generic code that would work on
- any numpy type.
+ any numpy type.
Francesc Altet supported the idea of ``[...]`` on zero-rank arrays and
`suggested`_ that ``[()]`` be supported as well.
@@ -204,7 +204,7 @@ remains on what should be the type of the result - zero rank ndarray or ``x.dtyp
1
Since most if not all numpy function automatically convert zero-rank arrays to scalars on return, there is no reason for
-``[...]`` and ``[()]`` operations to be different.
+``[...]`` and ``[()]`` operations to be different.
See SVN changeset 1864 (which became git commit `9024ff0`_) for
implementation of ``x[...]`` and ``x[()]`` returning numpy scalars.
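A small illustration of the behavior that eventually shipped (note that in current NumPy the two spellings diverged again slightly: ``x[()]`` returns an array scalar while ``x[...]`` returns a 0-d view):

```python
import numpy as np

x = np.array(5.0)   # a zero-rank (0-d) array

s = x[()]           # array scalar (np.float64 here)
v = x[...]          # 0-d ndarray view in current NumPy

# s and v compare equal even though their types differ.
```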
@@ -234,7 +234,7 @@ Currently all indexing on zero-rank arrays is implemented in a special ``if (nd
that the changes do not affect any existing usage (except, the usage that
relies on exceptions). On the other hand part of motivation for these changes
was to make behavior of ndarrays more uniform and this should allow to
-eliminate ``if (nd == 0)`` checks alltogether.
+eliminate ``if (nd == 0)`` checks altogether.
Copyright
---------
diff --git a/doc/neps/roadmap.rst b/doc/neps/roadmap.rst
index a45423711..2ec0b7520 100644
--- a/doc/neps/roadmap.rst
+++ b/doc/neps/roadmap.rst
@@ -6,74 +6,78 @@ This is a live snapshot of tasks and features we will be investing resources
in. It may be used to encourage and inspire developers and to search for
funding.
-Interoperability protocols & duck typing
-----------------------------------------
-
-- `__array_function__`
-
- See `NEP 18`_ and a sample implementation_
-
-- Array Duck-Typing
-
- `NEP 22`_ `np.asduckarray()`
-
-- Mixins like `NDArrayOperatorsMixin`:
+Interoperability
+----------------
+
+We aim to make it easier to interoperate with NumPy. There are many NumPy-like
+packages that add interesting new capabilities to the Python ecosystem, as well
+as many libraries that extend NumPy's model in various ways. Work in NumPy to
+facilitate interoperability with all such packages, and the code that uses them,
+may include (among other things) interoperability protocols, better duck typing
+support and ndarray subclass handling.
+
+- The ``__array_function__`` protocol is currently experimental and needs to be
+ matured. See `NEP 18`_ for details.
+- New protocols for overriding other functionality in NumPy may be needed.
+- Array duck typing, or handling "duck arrays", needs improvements. See
+ `NEP 22`_ for details.
+
+Extensibility
+-------------
- - for mutable arrays
- - for reduction methods implemented as ufuncs
+We aim to make it much easier to extend NumPy. The primary topic here is to
+improve the dtype system.
-Better dtypes
--------------
+- Easier custom dtypes:
-- Easier custom dtypes
- Simplify and/or wrap the current C-API
- More consistent support for dtype metadata
- Support for writing a dtype in Python
-- New string dtype(s):
- - Encoded strings with fixed-width storage (utf8, latin1, ...) and/or
- - Variable length strings (could share implementation with dtype=object, but are explicitly type-checked)
- - One of these should probably be the default for text data. The current behavior on Python 3 is neither efficient nor user friendly.
-- `np.int` should not be platform dependent
-- better coercion for string + number
-Random number generation policy & rewrite
------------------------------------------
+- New string dtype(s):
-`NEP 19`_ and a `reference implementation`_
+ - Encoded strings with fixed-width storage (utf8, latin1, ...) and/or
+ - Variable length strings (could share implementation with dtype=object,
+ but are explicitly type-checked)
+ - One of these should probably be the default for text data. The current
+ behavior on Python 3 is neither efficient nor user friendly.
-Indexing
---------
+- `np.int` should not be platform dependent
+- Better coercion for string + number
-vindex/oindex `NEP 21`_
+Performance
+-----------
-Infrastructure
---------------
+We want to further improve NumPy's performance, through:
-NumPy is much more than just the code base itself, we also maintain
-docs, CI, benchmarks, etc.
+- Better use of SIMD instructions, also on platforms other than x86.
+- Reducing ufunc overhead.
+- Optimizations in individual functions.
-- Rewrite numpy.org
-- Benchmarking: improve the extent of the existing suite, and run & render
- the results as part of the docs or website.
+Furthermore, we would like to improve the benchmarking system, in terms of coverage,
+ease of use, and publication of the results (now
+`here <https://pv.github.io/numpy-bench>`__) as part of the docs or website.
- - Hardware: find a machine that can reliably run serial benchmarks
- - ASV produces graphs, could we set up a site? Currently at
- https://pv.github.io/numpy-bench/, should that become a community resource?
+Website and documentation
+-------------------------
-Functionality outside core
---------------------------
+Our website (https://numpy.org) is in very poor shape and needs to be rewritten
+completely.
-Some things inside NumPy do not actually match the `Scope of NumPy`.
+The NumPy `documentation <https://www.numpy.org/devdocs/user/index.html>`__ is
+of varying quality - in particular the User Guide needs major improvements.
-- A backend system for `numpy.fft` (so that e.g. `fft-mkl` doesn't need to monkeypatch numpy)
+Random number generation policy & rewrite
+-----------------------------------------
-- Rewrite masked arrays to not be a ndarray subclass -- maybe in a separate project?
-- MaskedArray as a duck-array type, and/or
-- dtypes that support missing values
+A new random number generation framework with higher performance generators is
+close to completion, see `NEP 19`_ and `PR 13163`_.
-- Write a strategy on how to deal with overlap between numpy and scipy for `linalg` and `fft` (and implement it).
+Indexing
+--------
-- Deprecate `np.matrix`
+We intend to add new indexing modes for "vectorized indexing" and "outer indexing",
+see `NEP 21`_.
Continuous Integration
----------------------
@@ -81,31 +85,25 @@ Continuous Integration
We depend on CI to discover problems as we continue to develop NumPy before the
code reaches downstream users.
-- CI for more exotic platforms (e.g. ARM is now available from
- http://www.shippable.com/, but it is not free).
+- CI for more exotic platforms (if available as a service).
- Multi-package testing
- Add an official channel for numpy dev builds for CI usage by other projects so
they may confirm new builds do not break their package.
-Typing
-------
+Other functionality
+-------------------
-Python type annotation syntax should support ndarrays and dtypes.
+- ``MaskedArray`` needs to be improved, ideas include:
-- Type annotations for NumPy: github.com/numpy/numpy-stubs
-- Support for typing shape and dtype in multi-dimensional arrays in Python more generally
-
-NumPy scalars
--------------
+ - Rewrite masked arrays to not be a ndarray subclass -- maybe in a separate project?
+ - MaskedArray as a duck-array type, and/or
+ - dtypes that support missing values
-Numpy has both scalars and zero-dimensional arrays.
+- A backend system for ``numpy.fft`` (so that e.g. ``fft-mkl`` doesn't need to monkeypatch numpy)
+- Write a strategy on how to deal with overlap between NumPy and SciPy for ``linalg``
+ and ``fft`` (and implement it).
+- Deprecate ``np.matrix`` (very slowly)
-- The current implementation adds a large maintenance burden -- can we remove
- scalars and/or simplify it internally?
-- Zero dimensional arrays get converted into scalars by most NumPy
- functions (i.e., output of `np.sin(x)` depends on whether `x` is
- zero-dimensional or not). This inconsistency should be addressed,
- so that one could, e.g., write sane type annotations.
.. _`NEP 19`: https://www.numpy.org/neps/nep-0019-rng-policy.html
.. _`NEP 22`: http://www.numpy.org/neps/nep-0022-ndarray-duck-typing-overview.html
@@ -113,3 +111,4 @@ Numpy has both scalars and zero-dimensional arrays.
.. _implementation: https://gist.github.com/shoyer/1f0a308a06cd96df20879a1ddb8f0006
.. _`reference implementation`: https://github.com/bashtage/randomgen
.. _`NEP 21`: https://www.numpy.org/neps/nep-0021-advanced-indexing.html
+.. _`PR 13163`: https://github.com/numpy/numpy/pull/13163
diff --git a/doc/release/1.12.0-notes.rst b/doc/release/1.12.0-notes.rst
index 711055d16..e735d2d77 100644
--- a/doc/release/1.12.0-notes.rst
+++ b/doc/release/1.12.0-notes.rst
@@ -1,3 +1,5 @@
+.. 1.12.0:
+
==========================
NumPy 1.12.0 Release Notes
==========================
diff --git a/doc/release/1.14.4-notes.rst b/doc/release/1.14.4-notes.rst
index 174094c1c..3fb94383b 100644
--- a/doc/release/1.14.4-notes.rst
+++ b/doc/release/1.14.4-notes.rst
@@ -19,7 +19,7 @@ values are now correct.
Note that NumPy will error on import if it detects incorrect float32 `dot`
results. This problem has been seen on the Mac when working in the Anaconda
-enviroment and is due to a subtle interaction between MKL and PyQt5. It is not
+environment and is due to a subtle interaction between MKL and PyQt5. It is not
strictly a NumPy problem, but it is best that users be aware of it. See the
gh-8577 NumPy issue for more information.
diff --git a/doc/release/1.15.6-notes.rst b/doc/release/1.15.6-notes.rst
deleted file mode 100644
index 863f4b495..000000000
--- a/doc/release/1.15.6-notes.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-==========================
-NumPy 1.16.6 Release Notes
-==========================
-
-The NumPy 1.16.6 release fixes bugs reported against the 1.16.5 release, and
-also backports several enhancements from master that seem appropriate for a
-release series that is the last to support Python 2.7. The wheels on PyPI are
-linked with OpenBLAS v0.3.7-dev, which should fix errors on Skylake series
-cpus.
-
-Downstream developers building this release should use Cython >= 0.29.2 and,
-if using OpenBLAS, OpenBLAS >= v0.3.7. The supported Python versions are 2.7
-and 3.5-3.7.
-
-Highlights
-==========
-
-
-New functions
-=============
-
-
-New deprecations
-================
-
-
-Expired deprecations
-====================
-
-
-Future changes
-==============
-
-
-Compatibility notes
-===================
-
-
-C API changes
-=============
-
-
-New Features
-============
-
-
-Improvements
-============
-
-
-Changes
-=======
diff --git a/doc/release/1.16.0-notes.rst b/doc/release/1.16.0-notes.rst
index 341d5f715..1034d6e6c 100644
--- a/doc/release/1.16.0-notes.rst
+++ b/doc/release/1.16.0-notes.rst
@@ -176,7 +176,7 @@ of:
* :c:member:`PyUFuncObject.core_dim_flags`
* :c:member:`PyUFuncObject.core_dim_sizes`
* :c:member:`PyUFuncObject.identity_value`
-* :c:function:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`
+* :c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`
New Features
@@ -407,7 +407,7 @@ Additionally, `logaddexp` now has an identity of ``-inf``, allowing it to be
called on empty sequences, where previously it could not be.
This is possible thanks to the new
-:c:function:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`, which allows
+:c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`, which allows
arbitrary values to be used as identities now.
Improved conversion from ctypes objects
diff --git a/doc/release/1.16.5-notes.rst b/doc/release/1.16.5-notes.rst
deleted file mode 100644
index 5b6eb585b..000000000
--- a/doc/release/1.16.5-notes.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-==========================
-NumPy 1.16.5 Release Notes
-==========================
-
-The NumPy 1.16.5 release fixes bugs reported against the 1.16.4 release, and
-also backports several enhancements from master that seem appropriate for a
-release series that is the last to support Python 2.7. The wheels on PyPI are
-linked with OpenBLAS v0.3.7-dev, which should fix errors on Skylake series
-cpus.
-
-Downstream developers building this release should use Cython >= 0.29.2 and, if
-using OpenBLAS, OpenBLAS >= v0.3.7. The supported Python versions are 2.7 and
-3.5-3.7.
-
-
-Contributors
-============
-
-A total of 18 people contributed to this release. People with a "+" by their
-names contributed a patch for the first time.
-
-* Alexander Shadchin
-* Allan Haldane
-* Bruce Merry +
-* Charles Harris
-* Colin Snyder +
-* Dan Allan +
-* Emile +
-* Eric Wieser
-* Grey Baker +
-* Maksim Shabunin +
-* Marten van Kerkwijk
-* Matti Picus
-* Peter Andreas Entschev +
-* Ralf Gommers
-* Richard Harris +
-* Sebastian Berg
-* Sergei Lebedev +
-* Stephan Hoyer
-
-Pull requests merged
-====================
-
-A total of 23 pull requests were merged for this release.
-
-* `#13742 <https://github.com/numpy/numpy/pull/13742>`__: ENH: Add project URLs to setup.py
-* `#13823 <https://github.com/numpy/numpy/pull/13823>`__: TEST, ENH: fix tests and ctypes code for PyPy
-* `#13845 <https://github.com/numpy/numpy/pull/13845>`__: BUG: use npy_intp instead of int for indexing array
-* `#13867 <https://github.com/numpy/numpy/pull/13867>`__: TST: Ignore DeprecationWarning during nose imports
-* `#13905 <https://github.com/numpy/numpy/pull/13905>`__: BUG: Fix use-after-free in boolean indexing
-* `#13933 <https://github.com/numpy/numpy/pull/13933>`__: MAINT/BUG/DOC: Fix errors in _add_newdocs
-* `#13984 <https://github.com/numpy/numpy/pull/13984>`__: BUG: fix byte order reversal for datetime64[ns]
-* `#13994 <https://github.com/numpy/numpy/pull/13994>`__: MAINT,BUG: Use nbytes to also catch empty descr during allocation
-* `#14042 <https://github.com/numpy/numpy/pull/14042>`__: BUG: np.array cleared errors occured in PyMemoryView_FromObject
-* `#14043 <https://github.com/numpy/numpy/pull/14043>`__: BUG: Fixes for Undefined Behavior Sanitizer (UBSan) errors.
-* `#14044 <https://github.com/numpy/numpy/pull/14044>`__: BUG: ensure that casting to/from structured is properly checked.
-* `#14045 <https://github.com/numpy/numpy/pull/14045>`__: MAINT: fix histogram*d dispatchers
-* `#14046 <https://github.com/numpy/numpy/pull/14046>`__: BUG: further fixup to histogram2d dispatcher.
-* `#14052 <https://github.com/numpy/numpy/pull/14052>`__: BUG: Replace contextlib.suppress for Python 2.7
-* `#14056 <https://github.com/numpy/numpy/pull/14056>`__: BUG: fix compilation of 3rd party modules with Py_LIMITED_API...
-* `#14057 <https://github.com/numpy/numpy/pull/14057>`__: BUG: Fix memory leak in dtype from dict contructor
-* `#14058 <https://github.com/numpy/numpy/pull/14058>`__: DOC: Document array_function at a higher level.
-* `#14084 <https://github.com/numpy/numpy/pull/14084>`__: BUG, DOC: add new recfunctions to `__all__`
-* `#14162 <https://github.com/numpy/numpy/pull/14162>`__: BUG: Remove stray print that causes a SystemError on python 3.7
-* `#14297 <https://github.com/numpy/numpy/pull/14297>`__: TST: Pin pytest version to 5.0.1.
-* `#14322 <https://github.com/numpy/numpy/pull/14322>`__: ENH: Enable huge pages in all Linux builds
-* `#14346 <https://github.com/numpy/numpy/pull/14346>`__: BUG: fix behavior of structured_to_unstructured on non-trivial...
-* `#14382 <https://github.com/numpy/numpy/pull/14382>`__: REL: Prepare for the NumPy 1.16.5 release.
diff --git a/doc/release/1.16.6-notes.rst b/doc/release/1.16.6-notes.rst
deleted file mode 100644
index cda34497c..000000000
--- a/doc/release/1.16.6-notes.rst
+++ /dev/null
@@ -1,85 +0,0 @@
-==========================
-NumPy 1.16.6 Release Notes
-==========================
-
-The NumPy 1.16.6 release fixes bugs reported against the 1.16.5 release, and
-also backports several enhancements from master that seem appropriate for a
-release series that is the last to support Python 2.7. The wheels on PyPI are
-linked with OpenBLAS v0.3.7, which should fix errors on Skylake series
-cpus.
-
-Downstream developers building this release should use Cython >= 0.29.2 and, if
-using OpenBLAS, OpenBLAS >= v0.3.7. The supported Python versions are 2.7 and
-3.5-3.7.
-
-Highlights
-==========
-
-- The ``np.testing.utils`` functions have been updated from 1.19.0-dev0.
- This improves the function documentation and error messages as well
- extending the ``assert_array_compare`` function to additional types.
-
-
-New functions
-=============
-
-Allow matmul (`@` operator) to work with object arrays.
--------------------------------------------------------
-This is an enhancement that was added in NumPy 1.17 and seems reasonable to
-include in the LTS 1.16 release series.
-
-
-Compatibility notes
-===================
-
-Fix regression in matmul (`@` operator) for boolean types
----------------------------------------------------------
-Booleans were being treated as integers rather than booleans,
-which was a regression from previous behavior.
-
-
-Improvements
-============
-
-Array comparison assertions include maximum differences
--------------------------------------------------------
-Error messages from array comparison tests such as ``testing.assert_allclose``
-now include "max absolute difference" and "max relative difference," in
-addition to the previous "mismatch" percentage. This information makes it
-easier to update absolute and relative error tolerances.
-
-Contributors
-============
-
-A total of 10 people contributed to this release.
-
-* CakeWithSteak
-* Charles Harris
-* Chris Burr
-* Eric Wieser
-* Fernando Saravia
-* Lars Grueter
-* Matti Picus
-* Maxwell Aladago
-* Qiming Sun
-* Warren Weckesser
-
-Pull requests merged
-====================
-
-A total of 14 pull requests were merged for this release.
-
-* `#14211 <https://github.com/numpy/numpy/pull/14211>`__: BUG: Fix uint-overflow if padding with linear_ramp and negative...
-* `#14275 <https://github.com/numpy/numpy/pull/14275>`__: BUG: fixing to allow unpickling of PY3 pickles from PY2
-* `#14340 <https://github.com/numpy/numpy/pull/14340>`__: BUG: Fix misuse of .names and .fields in various places (backport...
-* `#14423 <https://github.com/numpy/numpy/pull/14423>`__: BUG: test, fix regression in converting to ctypes.
-* `#14434 <https://github.com/numpy/numpy/pull/14434>`__: BUG: Fixed maximum relative error reporting in assert_allclose
-* `#14509 <https://github.com/numpy/numpy/pull/14509>`__: BUG: Fix regression in boolean matmul.
-* `#14686 <https://github.com/numpy/numpy/pull/14686>`__: BUG: properly define PyArray_DescrCheck
-* `#14853 <https://github.com/numpy/numpy/pull/14853>`__: BLD: add 'apt update' to shippable
-* `#14854 <https://github.com/numpy/numpy/pull/14854>`__: BUG: Fix _ctypes class circular reference. (#13808)
-* `#14856 <https://github.com/numpy/numpy/pull/14856>`__: BUG: Fix `np.einsum` errors on Power9 Linux and z/Linux
-* `#14863 <https://github.com/numpy/numpy/pull/14863>`__: BLD: Prevent -flto from optimising long double representation...
-* `#14864 <https://github.com/numpy/numpy/pull/14864>`__: BUG: lib: Fix histogram problem with signed integer arrays.
-* `#15172 <https://github.com/numpy/numpy/pull/15172>`__: ENH: Backport improvements to testing functions.
-* `#15191 <https://github.com/numpy/numpy/pull/15191>`__: REL: Prepare for 1.16.6 release.
diff --git a/doc/release/1.17.0-notes.rst b/doc/release/1.17.0-notes.rst
new file mode 100644
index 000000000..8d69e36d9
--- /dev/null
+++ b/doc/release/1.17.0-notes.rst
@@ -0,0 +1,562 @@
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.17.0 Release Notes
+==========================
+
+This NumPy release contains a number of new features that should substantially
+improve its performance and usefulness; see Highlights below for a summary. The
+Python versions supported are 3.5-3.7; note that Python 2.7 has been dropped.
+Python 3.8b2 should work with the released source packages, but there are no
+future guarantees.
+
+Downstream developers should use Cython >= 0.29.11 for Python 3.8 support and
+OpenBLAS >= 3.7 (not currently out) to avoid problems on the Skylake
+architecture. The NumPy wheels on PyPI are built from the OpenBLAS development
+branch in order to avoid those problems.
+
+
+Highlights
+==========
+
+* A new extensible `random` module along with four selectable `random number
+ generators <random.BitGenerators>` and improved seeding designed for use in parallel
+ processes has been added. The currently available bit generators are `MT19937
+ <random.mt19937.MT19937>`, `PCG64 <random.pcg64.PCG64>`, `Philox
+ <random.philox.Philox>`, and `SFC64 <random.sfc64.SFC64>`. See below under
+ New Features.
+
+* NumPy's `FFT <fft>` implementation was changed from fftpack to pocketfft,
+ resulting in faster, more accurate transforms and better handling of datasets
+ of prime length. See below under Improvements.
+
+* New radix sort and timsort sorting methods. It is currently not possible to
+ choose which will be used. They are hardwired to the datatype and used
+ when either ``stable`` or ``mergesort`` is passed as the method. See below
+ under Improvements.
+
+* Overriding numpy functions is now possible by default,
+ see ``__array_function__`` below.
+
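A minimal sketch of the first highlight, assuming the 1.17 API: any of the four bit generators can back a ``Generator``, and ``SeedSequence.spawn`` provides the improved seeding for parallel streams (``Philox`` is chosen arbitrarily here):

```python
import numpy as np
from numpy.random import Generator, Philox, SeedSequence

rng = Generator(Philox(seed=2019))
sample = rng.standard_normal(4)

# Independent, non-overlapping streams for parallel workers:
children = SeedSequence(2019).spawn(3)
rngs = [Generator(Philox(s)) for s in children]
```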
+
+New functions
+=============
+
+* `numpy.errstate` is now also a function decorator
+
+
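For example, the decorator form suppresses floating-point warnings for the duration of the call (a sketch; ``safe_ratio`` is a made-up name):

```python
import numpy as np

@np.errstate(divide="ignore", invalid="ignore")
def safe_ratio(a, b):
    # inf/nan are produced silently instead of emitting RuntimeWarnings
    return np.asarray(a, dtype=float) / np.asarray(b, dtype=float)

r = safe_ratio([1.0, 0.0], [0.0, 0.0])   # -> [inf, nan]
```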
+Deprecations
+============
+
+`numpy.polynomial` functions warn when passed ``float`` in place of ``int``
+---------------------------------------------------------------------------
+Previously functions in this module would accept ``float`` values provided they
+were integral (``1.0``, ``2.0``, etc). For consistency with the rest of numpy,
+doing so is now deprecated, and in future will raise a ``TypeError``.
+
+Similarly, passing a float like ``0.5`` in place of an integer will now raise a
+``TypeError`` instead of the previous ``ValueError``.
+
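Illustratively (hedged: on 1.17 the float form only warns, while later releases raise ``TypeError`` outright, so the example tolerates both):

```python
import numpy as np
import warnings

x = [0.0, 1.0, 2.0]
y = [0.0, 1.0, 4.0]                                   # exactly y = x**2

coef = np.polynomial.polynomial.polyfit(x, y, deg=2)  # a true int: fine

with warnings.catch_warnings():
    warnings.simplefilter("ignore")                   # the DeprecationWarning
    try:
        np.polynomial.polynomial.polyfit(x, y, deg=2.0)
    except TypeError:
        pass                                          # newer NumPy raises
```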
+Deprecate `numpy.distutils.exec_command` and ``temp_file_name``
+---------------------------------------------------------------
+The internal use of these functions has been refactored and there are better
+alternatives. Replace ``exec_command`` with `subprocess.Popen` and
+`temp_file_name <numpy.distutils.exec_command>` with `tempfile.mkstemp`.
+
+Writeable flag of C-API wrapped arrays
+--------------------------------------
+When an array is created from the C-API to wrap a pointer to data, the only
+indication we have of the read-write nature of the data is the ``writeable``
+flag set during creation. It is dangerous to force the flag to writeable.
+In the future it will not be possible to switch the writeable flag to ``True``
+from Python.
+This deprecation should not affect many users since arrays created in such
+a manner are very rare in practice and only available through the NumPy C-API.
+
+`numpy.nonzero` should no longer be called on 0d arrays
+-------------------------------------------------------
+The behavior of `numpy.nonzero` on 0d arrays was surprising, making uses of it
+almost always incorrect. If the old behavior was intended, it can be preserved
+without a warning by using ``nonzero(atleast_1d(arr))`` instead of
+``nonzero(arr)``. In a future release, it is most likely this will raise a
+``ValueError``.
+
+Writing to the result of `numpy.broadcast_arrays` will warn
+-----------------------------------------------------------
+
+Commonly `numpy.broadcast_arrays` returns a writeable array with internal
+overlap, making it unsafe to write to. A future version will set the
+``writeable`` flag to ``False``, and require users to manually set it to
+``True`` if they are sure that is what they want to do. Now writing to it will
+emit a deprecation warning with instructions to set the ``writeable`` flag
+``True``. Note that if one were to inspect the flag before setting it, one
+would find it would already be ``True``. Explicitly setting it, though, as one
+will need to do in future versions, clears an internal flag that is used to
+produce the deprecation warning. To help alleviate confusion, an additional
+`FutureWarning` will be emitted when accessing the ``writeable`` flag state to
+clarify the contradiction.
+
+Note that for the C-side buffer protocol such an array will return a
+readonly buffer immediately unless a writable buffer is requested. If
+a writeable buffer is requested a warning will be given. When using
+cython, the ``const`` qualifier should be used with such arrays to avoid
+the warning (e.g. ``cdef const double[::1] view``).
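A minimal sketch of a safe pattern from Python: since the broadcast views repeat memory, copy the result before writing instead of toggling the flag.

```python
import numpy as np

x = np.arange(3)
y = np.ones((2, 1))
xb, yb = np.broadcast_arrays(x, y)
# xb's rows share memory, so writing to it directly is unsafe.
# Copying produces an ordinary, independently writable array:
xw = xb.copy()
xw[0, 0] = 42
print(xw[1, 0])  # 0: the copy's rows no longer alias each other
```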
+
+
+Future Changes
+==============
+
+Shape-1 fields in dtypes won't be collapsed to scalars in a future version
+--------------------------------------------------------------------------
+
+Currently, a field specified as ``[(name, dtype, 1)]`` or ``"1type"`` is
+interpreted as a scalar field (i.e., the same as ``[(name, dtype)]`` or
+``[(name, dtype, ())]``). This now raises a ``FutureWarning``; in a future
+version, it will be interpreted as a shape-(1,) field, i.e. the same as
+``[(name, dtype, (1,))]`` or ``"(1,)type"`` (consistent with ``[(name,
+dtype, n)]`` / ``"ntype"`` with ``n > 1``, which is already equivalent to
+``[(name, dtype, (n,))]`` / ``"(n,)type"``).
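A short illustration of the distinction between a scalar field and the explicit shape-(1,) form that the deprecated spelling will come to mean:

```python
import numpy as np

# A plain scalar field versus a field with an explicit (1,) shape:
scalar_dt = np.dtype([('a', np.float64)])
shaped_dt = np.dtype([('a', np.float64, (1,))])
print(scalar_dt['a'].shape)  # ()
print(shaped_dt['a'].shape)  # (1,)
```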
+
+
+Compatibility notes
+===================
+
+``float16`` subnormal rounding
+------------------------------
+Casting from a different floating point precision to ``float16`` used incorrect
+rounding in some edge cases. This means in rare cases, subnormal results will
+now be rounded up instead of down, changing the last bit (ULP) of the result.
+
+Signed zero when using divmod
+-----------------------------
+Starting in version `1.12.0`, numpy incorrectly returned a negatively signed zero
+when using the ``divmod`` and ``floor_divide`` functions when the result was
+zero. For example::
+
+ >>> np.zeros(10)//1
+ array([-0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])
+
+With this release, the result is correctly returned as a positively signed
+zero::
+
+ >>> np.zeros(10)//1
+ array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
+
+``MaskedArray.mask`` now returns a view of the mask, not the mask itself
+------------------------------------------------------------------------
+Returning the mask itself was unsafe, as it could be reshaped in place which
+would violate expectations of the masked array code. The behavior of `mask
+<ma.MaskedArray.mask>` is now consistent with `data <ma.MaskedArray.data>`,
+which also returns a view.
+
+The underlying mask can still be accessed with ``._mask`` if it is needed.
+Tests that contain ``assert x.mask is not y.mask`` or similar will need to be
+updated.
+
+Do not lookup ``__buffer__`` attribute in `numpy.frombuffer`
+------------------------------------------------------------
+Looking up the ``__buffer__`` attribute in `numpy.frombuffer` was undocumented
+and non-functional. This code was removed. If needed, use
+``frombuffer(memoryview(obj), ...)`` instead.
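A minimal sketch of the suggested replacement (an explicit little-endian dtype is used here so the result is platform independent):

```python
import numpy as np

buf = bytearray(b'\x01\x00\x02\x00')
# Wrap any buffer-supporting object explicitly in a memoryview:
arr = np.frombuffer(memoryview(buf), dtype='<u2')
print(arr.tolist())  # [1, 2]
```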
+
+``out`` is buffered for memory overlaps in `take`, `choose`, `put`
+------------------------------------------------------------------
+If the ``out`` argument to these functions is provided and has memory overlap with
+the other arguments, it is now buffered to avoid order-dependent behavior.
+
+Unpickling while loading requires explicit opt-in
+-------------------------------------------------
+The functions `load` and ``lib.format.read_array`` take an
+``allow_pickle`` keyword which now defaults to ``False`` in response to
+`CVE-2019-6446 <https://nvd.nist.gov/vuln/detail/CVE-2019-6446>`_.
+
+
+.. currentmodule:: numpy.random.mtrand
+
+Potential changes to the random stream in old random module
+-----------------------------------------------------------
+Due to bugs in the application of ``log`` to random floating point numbers,
+the stream may change when sampling from `~RandomState.beta`, `~RandomState.binomial`,
+`~RandomState.laplace`, `~RandomState.logistic`, `~RandomState.logseries` or
+`~RandomState.multinomial` if a ``0`` is generated in the underlying `MT19937
+<numpy.random.mt19937.MT19937>` random stream. There is a ``1`` in
+:math:`10^{53}` chance of this occurring, so the probability that the stream
+changes for any given seed is extremely small. If a ``0`` is encountered in the
+underlying generator, then the incorrect value produced (either `numpy.inf` or
+`numpy.nan`) is now dropped.
+
+.. currentmodule:: numpy
+
+`i0` now always returns a result with the same shape as the input
+-----------------------------------------------------------------
+Previously, the output was squeezed, such that, e.g., input with just a single
+element would lead to an array scalar being returned, and inputs with shapes
+such as ``(10, 1)`` would yield results that would not broadcast against the
+input.
+
+Note that we generally recommend the SciPy implementation over the numpy one:
+it is a proper ufunc written in C, and more than an order of magnitude faster.
+
+`can_cast` no longer assumes all unsafe casting is allowed
+----------------------------------------------------------
+Previously, `can_cast` returned `True` for almost all inputs for
+``casting='unsafe'``, even for cases where casting was not possible, such as
+from a structured dtype to a regular one. This has been fixed, making it
+more consistent with actual casting using, e.g., the `.astype <ndarray.astype>`
+method.
+
+``ndarray.flags.writeable`` can be switched to true slightly more often
+-----------------------------------------------------------------------
+
+In rare cases, it was not possible to switch an array from not writeable
+to writeable, although a base array is writeable. This can happen if an
+intermediate `ndarray.base` object is writeable. Previously, only the deepest
+base object was considered for this decision. However, in rare cases this
+object does not have the necessary information. In that case switching to
+writeable was never allowed. This has now been fixed.
+
+
+C API changes
+=============
+
+dimension or stride input arguments are now passed by ``npy_intp const*``
+-------------------------------------------------------------------------
+Previously these function arguments were declared as the more strict
+``npy_intp*``, which prevented the caller passing constant data.
+This change is backwards compatible, but now allows code like::
+
+ npy_intp const fixed_dims[] = {1, 2, 3};
+ // no longer complains that the const-qualifier is discarded
+ npy_intp size = PyArray_MultiplyList(fixed_dims, 3);
+
+
+New Features
+============
+
+.. currentmodule:: numpy.random
+
+New extensible `numpy.random` module with selectable random number generators
+-----------------------------------------------------------------------------
+A new extensible `numpy.random` module along with four selectable random number
+generators and improved seeding designed for use in parallel processes has been
+added. The currently available :ref:`Bit Generators <bit_generator>` are
+`~mt19937.MT19937`, `~pcg64.PCG64`, `~philox.Philox`, and `~sfc64.SFC64`.
+``PCG64`` is the new default while ``MT19937`` is retained for backwards
+compatibility. Note that the legacy random module is unchanged and is now
+frozen; your current results will not change. More information is available in
+the :ref:`API change description <new-or-different>` and in the `top-level view
+<numpy.random>` documentation.
+
+.. currentmodule:: numpy
+
+libFLAME
+--------
+Support for building NumPy with the libFLAME linear algebra package as the
+LAPACK implementation has been added; see
+`libFLAME <https://www.cs.utexas.edu/~flame/web/libFLAME.html>`_ for details.
+
+User-defined BLAS detection order
+---------------------------------
+``numpy.distutils`` now reads the ``NPY_BLAS_ORDER`` environment variable, a
+comma-separated, case-insensitive list, to determine the detection order for
+BLAS libraries. By default
+``NPY_BLAS_ORDER=mkl,blis,openblas,atlas,accelerate,blas``. To force the use
+of OpenBLAS, do::
+
+    NPY_BLAS_ORDER=openblas python setup.py build
+
+This may be helpful for users who have an MKL installation but wish to try
+out different implementations.
+
+User-defined LAPACK detection order
+-----------------------------------
+``numpy.distutils`` now reads the ``NPY_LAPACK_ORDER`` environment variable, a
+comma-separated, case-insensitive list, to determine the detection order for
+LAPACK libraries. By default
+``NPY_LAPACK_ORDER=mkl,openblas,flame,atlas,accelerate,lapack``. To force the
+use of OpenBLAS, do::
+
+    NPY_LAPACK_ORDER=openblas python setup.py build
+
+This may be helpful for users who have an MKL installation but wish to try
+out different implementations.
+
+`ufunc.reduce` and related functions now accept a ``where`` mask
+----------------------------------------------------------------
+`ufunc.reduce`, `sum`, `prod`, `min`, `max` all
+now accept a ``where`` keyword argument, which can be used to tell which
+elements to include in the reduction. For reductions that do not have an
+identity, it is necessary to also pass in an initial value (e.g.,
+``initial=np.inf`` for `min`). For instance, the equivalent of
+`nansum` would be ``np.sum(a, where=~np.isnan(a))``.
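For instance, a sketch of the `nansum` equivalent mentioned above, plus a reduction without an identity:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0])
# Equivalent of nansum, using the new where= keyword:
total = np.sum(a, where=~np.isnan(a))
print(total)  # 4.0
# min has no identity, so an initial value is required:
smallest = np.min(a, where=~np.isnan(a), initial=np.inf)
print(smallest)  # 1.0
```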
+
+Timsort and radix sort have replaced mergesort for stable sorting
+-----------------------------------------------------------------
+Both radix sort and timsort have been implemented and are now used in place of
+mergesort. Due to the need to maintain backward compatibility, the sorting
+``kind`` options ``"stable"`` and ``"mergesort"`` have been made aliases of
+each other with the actual sort implementation depending on the array type.
+Radix sort is used for small integer types of 16 bits or less and timsort for
+the remaining types. Timsort features improved performance on already or
+nearly sorted data, performs like mergesort on random data, and requires
+:math:`O(n/2)` working space. Details of the timsort algorithm can be
+found at `CPython listsort.txt
+<https://github.com/python/cpython/blob/3.7/Objects/listsort.txt>`_.
+
+`packbits` and `unpackbits` accept an ``order`` keyword
+-------------------------------------------------------
+The ``order`` keyword defaults to ``'big'`` and orders the **bits**
+accordingly. For ``order='big'``, 3 becomes ``[0, 0, 0, 0, 0, 0, 1, 1]``,
+and ``[1, 1, 0, 0, 0, 0, 0, 0]`` for ``order='little'``.
+
+`unpackbits` now accepts a ``count`` parameter
+----------------------------------------------
+``count`` allows subsetting the number of bits that will be unpacked up-front,
+rather than reshaping and subsetting later, making the `packbits` operation
+invertible, and the unpacking less wasteful. Counts larger than the number of
+available bits add zero padding. Negative counts trim bits off the end instead
+of counting from the beginning. A count of ``None`` implements the existing
+behavior of unpacking everything.
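A small sketch showing that ``count`` makes the `packbits` operation invertible:

```python
import numpy as np

packed = np.packbits([1, 0, 1])            # 3 bits, zero-padded to one byte
unpacked = np.unpackbits(packed, count=3)  # recover exactly the 3 bits
print(unpacked)  # [1 0 1]
```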
+
+`linalg.svd` and `linalg.pinv` can be faster on hermitian inputs
+----------------------------------------------------------------
+These functions now accept a ``hermitian`` argument, matching the one added
+to `linalg.matrix_rank` in 1.14.0.
+
+divmod operation is now supported for two ``timedelta64`` operands
+------------------------------------------------------------------
+The divmod operator now handles two ``timedelta64`` operands, with
+type signature ``mm->qm``.
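For example, a sketch of the ``mm->qm`` signature in action; the quotient is an integer and the remainder a ``timedelta64``:

```python
import numpy as np

a = np.timedelta64(90, 'm')  # 90 minutes
b = np.timedelta64(1, 'h')   # 1 hour
q, r = divmod(a, b)
print(q)  # 1
print(r)  # 30 minutes
```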
+
+`fromfile` now takes an ``offset`` argument
+-------------------------------------------
+This function now takes an ``offset`` keyword argument for binary files,
+which specifies the offset (in bytes) from the file's current position.
+Defaults to ``0``.
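A minimal sketch using a temporary file (the file contents here are illustrative only):

```python
import os
import tempfile

import numpy as np

data = np.arange(4, dtype='<i4')
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data.tobytes())
    path = f.name

# Skip the first element (4 bytes) via the new offset argument:
tail = np.fromfile(path, dtype='<i4', offset=4)
print(tail)  # [1 2 3]
os.remove(path)
```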
+
+New mode "empty" for `pad`
+--------------------------
+This mode pads an array to a desired shape without initializing the new
+entries.
+
+`empty_like` and related functions now accept a ``shape`` argument
+------------------------------------------------------------------
+`empty_like`, `full_like`, `ones_like` and `zeros_like` now accept a ``shape``
+keyword argument, which can be used to create a new array
+as the prototype, overriding its shape as well. This is particularly useful
+when combined with the ``__array_function__`` protocol, allowing the creation
+of new arbitrary-shape arrays from NumPy-like libraries when such an array
+is used as the prototype.
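For example, a sketch that borrows only the dtype from the prototype while overriding its shape:

```python
import numpy as np

proto = np.ones((2, 3))
# Same dtype as the prototype, but a different shape:
arr = np.zeros_like(proto, shape=(4,))
print(arr.shape, arr.dtype)  # (4,) float64
```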
+
+Floating point scalars implement ``as_integer_ratio`` to match the builtin float
+--------------------------------------------------------------------------------
+This returns a (numerator, denominator) pair, which can be used to construct a
+`fractions.Fraction`.
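A short sketch constructing a `fractions.Fraction` this way:

```python
from fractions import Fraction

import numpy as np

x = np.float64(0.25)
num, den = x.as_integer_ratio()
print(num, den)            # 1 4
print(Fraction(num, den))  # 1/4
```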
+
+Structured ``dtype`` objects can be indexed with multiple fields names
+----------------------------------------------------------------------
+``arr.dtype[['a', 'b']]`` now returns a dtype that is equivalent to
+``arr[['a', 'b']].dtype``, for consistency with
+``arr.dtype['a'] == arr['a'].dtype``.
+
+Like the dtype of structured arrays indexed with a list of fields, this dtype
+has the same ``itemsize`` as the original, but only keeps a subset of the fields.
+
+This means that ``arr[['a', 'b']]`` and ``arr.view(arr.dtype[['a', 'b']])`` are
+equivalent.
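A sketch of the new indexing (the field names and dtypes here are illustrative):

```python
import numpy as np

dt = np.dtype([('a', 'i4'), ('b', 'f8'), ('c', 'i2')])
sub = dt[['a', 'c']]
# The subset dtype keeps only the requested fields but retains the
# original itemsize, so views of structured arrays line up:
print(sub.names)                    # ('a', 'c')
print(sub.itemsize == dt.itemsize)  # True
```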
+
+``.npy`` files support unicode field names
+------------------------------------------
+A new format version of 3.0 has been introduced, which enables structured types
+with non-latin1 field names. This is used automatically when needed.
+
+
+Improvements
+============
+
+Array comparison assertions include maximum differences
+-------------------------------------------------------
+Error messages from array comparison tests such as
+`testing.assert_allclose` now include "max absolute difference" and
+"max relative difference," in addition to the previous "mismatch" percentage.
+This information makes it easier to update absolute and relative error
+tolerances.
+
+Replacement of the fftpack based `fft` module by the pocketfft library
+----------------------------------------------------------------------
+Both implementations have the same ancestor (Fortran77 FFTPACK by Paul N.
+Swarztrauber), but pocketfft contains additional modifications which improve
+both accuracy and performance in some circumstances. For FFT lengths containing
+large prime factors, pocketfft uses Bluestein's algorithm, which maintains
+:math:`O(N \log N)` run time complexity instead of deteriorating towards
+:math:`O(N^2)` for prime lengths. Also, accuracy for real valued FFTs with near
+prime lengths has improved and is on par with complex valued FFTs.
+
+Further improvements to ``ctypes`` support in `numpy.ctypeslib`
+---------------------------------------------------------------
+A new `numpy.ctypeslib.as_ctypes_type` function has been added, which can be
+used to convert a `dtype` into a best-guess `ctypes` type. Thanks to this
+new function, `numpy.ctypeslib.as_ctypes` now supports a much wider range of
+array types, including structures, booleans, and integers of non-native
+endianness.
+
+`numpy.errstate` is now also a function decorator
+-------------------------------------------------
+Currently, if you have a function like::
+
+ def foo():
+ pass
+
+and you want to wrap the whole thing in `errstate`, you have to rewrite it
+like so::
+
+ def foo():
+ with np.errstate(...):
+ pass
+
+but with this change, you can do::
+
+ @np.errstate(...)
+ def foo():
+ pass
+
+thereby saving a level of indentation.
+
+`numpy.exp` and `numpy.log` speed up for float32 implementation
+---------------------------------------------------------------
+The float32 implementation of `exp` and `log` now benefits from AVX2/AVX512
+instruction sets, which are detected at runtime. `exp` has a max ulp
+error of 2.52 and `log` has a max ulp error of 3.83.
+
+Improve performance of `numpy.pad`
+----------------------------------
+The performance of the function has been improved for most cases by filling in
+a preallocated array with the desired padded shape instead of using
+concatenation.
+
+`numpy.interp` handles infinities more robustly
+-----------------------------------------------
+In some cases where `interp` would previously return `nan`, it now
+returns an appropriate infinity.
+
+Pathlib support for `fromfile`, `tofile` and `ndarray.dump`
+-----------------------------------------------------------
+`fromfile`, `ndarray.tofile` and `ndarray.dump` now support
+the `pathlib.Path` type for the ``file``/``fid`` parameter.
+
+Specialized `isnan`, `isinf`, and `isfinite` ufuncs for bool and int types
+--------------------------------------------------------------------------
+The boolean and integer types are incapable of storing `nan` and `inf` values,
+which allows us to provide specialized ufuncs that are up to 250x faster than
+the previous approach.
+
+`isfinite` supports ``datetime64`` and ``timedelta64`` types
+-----------------------------------------------------------------
+Previously, `isfinite` raised a `TypeError` when used on these
+two types.
+
+New keywords added to `nan_to_num`
+----------------------------------
+`nan_to_num` now accepts keywords ``nan``, ``posinf`` and ``neginf``
+allowing the user to define the value to replace the ``nan``, positive and
+negative ``np.inf`` values respectively.
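A short sketch of the new keywords (the replacement values chosen are arbitrary):

```python
import numpy as np

a = np.array([np.nan, np.inf, -np.inf, 1.0])
# Replace nan, +inf and -inf with custom values:
cleaned = np.nan_to_num(a, nan=0.0, posinf=1e6, neginf=-1e6)
print(cleaned.tolist())  # [0.0, 1000000.0, -1000000.0, 1.0]
```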
+
+MemoryErrors caused by allocating overly large arrays are more descriptive
+--------------------------------------------------------------------------
+Often the cause of a MemoryError is incorrect broadcasting, which results in a
+very large and incorrect shape. The message of the error now includes this
+shape to help diagnose the cause of failure.
+
+`floor`, `ceil`, and `trunc` now respect builtin magic methods
+--------------------------------------------------------------
+These ufuncs now call the ``__floor__``, ``__ceil__``, and ``__trunc__``
+methods when called on object arrays, making them compatible with
+`decimal.Decimal` and `fractions.Fraction` objects.
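For instance, a sketch with `fractions.Fraction` objects in an object array:

```python
from fractions import Fraction

import numpy as np

a = np.array([Fraction(5, 2), Fraction(-5, 2)], dtype=object)
# floor/ceil now dispatch to __floor__/__ceil__ on each element:
print(np.floor(a))  # [2 -3]
print(np.ceil(a))   # [3 -2]
```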
+
+`quantile` now works on `fraction.Fraction` and `decimal.Decimal` objects
+-------------------------------------------------------------------------
+In general, this handles object arrays more gracefully, and avoids
+floating-point operations if exact arithmetic types are used.
+
+Support of object arrays in `matmul`
+------------------------------------
+It is now possible to use `matmul` (or the ``@`` operator) with object arrays.
+For instance, it is now possible to do::
+
+ from fractions import Fraction
+ a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
+ b = a @ a
+
+
+Changes
+=======
+
+`median` and `percentile` family of functions no longer warn about ``nan``
+--------------------------------------------------------------------------
+`numpy.median`, `numpy.percentile`, and `numpy.quantile` used to emit a
+``RuntimeWarning`` when encountering an `nan`. Since they return the
+``nan`` value, the warning is redundant and has been removed.
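A sketch verifying that no warning is emitted (turning warnings into errors would surface one):

```python
import warnings

import numpy as np

a = np.array([1.0, np.nan, 3.0])
with warnings.catch_warnings():
    warnings.simplefilter("error")  # any emitted warning would raise
    m = np.median(a)
print(m)  # nan
```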
+
+``timedelta64 % 0`` behavior adjusted to return ``NaT``
+-------------------------------------------------------
+The modulus operation with two ``np.timedelta64`` operands now returns
+``NaT`` in the case of division by zero, rather than returning zero.
+
+NumPy functions now always support overrides with ``__array_function__``
+------------------------------------------------------------------------
+NumPy now always checks the ``__array_function__`` method to implement overrides
+of NumPy functions on non-NumPy arrays, as described in `NEP 18`_. The feature
+was available for testing with NumPy 1.16 if appropriate environment variables
+were set, but is now always enabled.
+
+.. _`NEP 18` : https://www.numpy.org/neps/nep-0018-array-function-protocol.html
+
+``lib.recfunctions.structured_to_unstructured`` does not squeeze single-field views
+-----------------------------------------------------------------------------------
+Previously ``structured_to_unstructured(arr[['a']])`` would produce a squeezed
+result inconsistent with ``structured_to_unstructured(arr[['a', 'b']])``. This
+was accidental. The old behavior can be retained with
+``structured_to_unstructured(arr[['a']]).squeeze(axis=-1)`` or far more simply,
+``arr['a']``.
+
+`clip` now uses a ufunc under the hood
+--------------------------------------
+This means that registering clip functions for custom dtypes in C via
+``descr->f->fastclip`` is deprecated; they should use the ufunc registration
+mechanism instead, attaching to the ``np.core.umath.clip`` ufunc.
+
+It also means that ``clip`` accepts ``where`` and ``casting`` arguments,
+and can be overridden with ``__array_ufunc__``.
+
+A consequence of this change is that some behaviors of the old ``clip`` have
+been deprecated:
+
+* Passing ``nan`` to mean "do not clip" as one or both bounds. This didn't work
+ in all cases anyway, and can be better handled by passing infinities of the
+ appropriate sign.
+* Using "unsafe" casting by default when an ``out`` argument is passed. Using
+ ``casting="unsafe"`` explicitly will silence this warning.
+
+Additionally, there are some corner cases with behavior changes:
+
+* Passing ``max < min`` has changed to be more consistent across dtypes, but
+ should not be relied upon.
+* Scalar ``min`` and ``max`` take part in promotion rules like they do in all
+ other ufuncs.
+
+``__array_interface__`` offset now works as documented
+------------------------------------------------------
+The interface may use an ``offset`` value that was previously mistakenly ignored.
+
+Pickle protocol in `savez` set to 3 for ``force_zip64`` flag
+-----------------------------------------------------------------
+`savez` was not using the ``force_zip64`` flag, which limited the size of
+the archive to 2GB. But using the flag requires us to use pickle protocol 3 to
+write ``object`` arrays. The protocol used was bumped to 3, meaning the archive
+will be unreadable by Python 2.
+
+Structured arrays indexed with non-existent fields raise ``KeyError`` not ``ValueError``
+----------------------------------------------------------------------------------------
+``arr['bad_field']`` on a structured type raises ``KeyError``, for consistency
+with ``dict['bad_field']``.
+
diff --git a/doc/release/template.rst b/doc/release/template.rst
index db9458ac1..fdfec2be9 100644
--- a/doc/release/template.rst
+++ b/doc/release/template.rst
@@ -11,16 +11,16 @@ New functions
=============
-New deprecations
-================
+Deprecations
+============
-Expired deprecations
-====================
+Future Changes
+==============
-Future changes
-==============
+Expired deprecations
+====================
Compatibility notes
diff --git a/doc/source/_templates/indexcontent.html b/doc/source/_templates/indexcontent.html
index 008eaaa7c..294d39233 100644
--- a/doc/source/_templates/indexcontent.html
+++ b/doc/source/_templates/indexcontent.html
@@ -7,6 +7,8 @@
<span class="linkdescr">start here</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("reference/index") }}">NumPy Reference</a><br/>
<span class="linkdescr">reference documentation</span></p>
+ <p class="biglink"><a class="biglink" href="{{ pathto("benchmarking") }}">Benchmarking</a><br/>
+ <span class="linkdescr">benchmarking NumPy</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("f2py/index") }}">F2Py Guide</a><br/>
<span class="linkdescr">f2py documentation</span></p>
<p class="biglink"><a class="biglink" href="{{ pathto("dev/index") }}">NumPy Developer Guide</a><br/>
diff --git a/doc/source/about.rst b/doc/source/about.rst
index 5ac4facbb..3e83833d1 100644
--- a/doc/source/about.rst
+++ b/doc/source/about.rst
@@ -8,7 +8,7 @@ needed for scientific computing with Python. This package contains:
- sophisticated :ref:`(broadcasting) functions <ufuncs>`
- basic :ref:`linear algebra functions <routines.linalg>`
- basic :ref:`Fourier transforms <routines.fft>`
-- sophisticated :ref:`random number capabilities <routines.random>`
+- sophisticated :ref:`random number capabilities <numpyrandom>`
- tools for integrating Fortran code
- tools for integrating C/C++ code
diff --git a/doc/source/benchmarking.rst b/doc/source/benchmarking.rst
new file mode 100644
index 000000000..9f0eeb03a
--- /dev/null
+++ b/doc/source/benchmarking.rst
@@ -0,0 +1 @@
+.. include:: ../../benchmarks/README.rst
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 072a3b44e..fa0c0e7e4 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -19,11 +19,19 @@ needs_sphinx = '1.0'
sys.path.insert(0, os.path.abspath('../sphinxext'))
-extensions = ['sphinx.ext.autodoc', 'numpydoc',
- 'sphinx.ext.intersphinx', 'sphinx.ext.coverage',
- 'sphinx.ext.doctest', 'sphinx.ext.autosummary',
- 'sphinx.ext.graphviz', 'sphinx.ext.ifconfig',
- 'matplotlib.sphinxext.plot_directive']
+extensions = [
+ 'sphinx.ext.autodoc',
+ 'numpydoc',
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.doctest',
+ 'sphinx.ext.autosummary',
+ 'sphinx.ext.graphviz',
+ 'sphinx.ext.ifconfig',
+ 'matplotlib.sphinxext.plot_directive',
+ 'IPython.sphinxext.ipython_console_highlighting',
+ 'IPython.sphinxext.ipython_directive',
+]
if sphinx.__version__ >= "1.4":
extensions.append('sphinx.ext.imgmath')
@@ -234,7 +242,7 @@ numpydoc_use_plots = True
# -----------------------------------------------------------------------------
import glob
-autosummary_generate = glob.glob("reference/*.rst")
+autosummary_generate = True
# -----------------------------------------------------------------------------
# Coverage checker
@@ -355,3 +363,21 @@ def linkcode_resolve(domain, info):
else:
return "https://github.com/numpy/numpy/blob/v%s/numpy/%s%s" % (
numpy.__version__, fn, linespec)
+
+from pygments.lexers import CLexer
+from pygments import token
+from sphinx.highlighting import lexers
+import copy
+
+class NumPyLexer(CLexer):
+ name = 'NUMPYLEXER'
+
+ tokens = copy.deepcopy(lexers['c'].tokens)
+ # Extend the regex for valid identifiers with @
+ for k, val in tokens.items():
+ for i, v in enumerate(val):
+ if isinstance(v, tuple):
+ if isinstance(v[0], str):
+ val[i] = (v[0].replace('a-zA-Z', 'a-zA-Z@'),) + v[1:]
+
+lexers['NumPyC'] = NumPyLexer(stripnl=False)
diff --git a/doc/source/dev/development_environment.rst b/doc/source/dev/development_environment.rst
index aa4326f63..bc491b711 100644
--- a/doc/source/dev/development_environment.rst
+++ b/doc/source/dev/development_environment.rst
@@ -8,7 +8,9 @@ Recommended development setup
Since NumPy contains parts written in C and Cython that need to be
compiled before use, make sure you have the necessary compilers and Python
-development headers installed - see :ref:`building-from-source`.
+development headers installed - see :ref:`building-from-source`. Building
+NumPy as of version ``1.17`` requires a C99 compliant compiler. For
+some older compilers this may require ``export CFLAGS='-std=c99'``.
Having compiled code also means that importing NumPy from the development
sources needs some additional steps, which are explained below. For the rest
@@ -125,6 +127,9 @@ the interpreter, tests can be run like this::
>>> np.test('full') # Also run tests marked as slow
>>> np.test('full', verbose=2) # Additionally print test name/file
+ An example of a successful test :
+ ``4686 passed, 362 skipped, 9 xfailed, 5 warnings in 213.99 seconds``
+
Or a similar way from the command line::
$ python -c "import numpy as np; np.test()"
@@ -142,9 +147,9 @@ That also takes extra arguments, like ``--pdb`` which drops you into the Python
debugger when a test fails or an exception is raised.
Running tests with `tox`_ is also supported. For example, to build NumPy and
-run the test suite with Python 3.4, use::
+run the test suite with Python 3.7, use::
- $ tox -e py34
+ $ tox -e py37
For more extensive information, see :ref:`testing-guidelines`
diff --git a/doc/source/dev/gitwash/development_workflow.rst b/doc/source/dev/development_workflow.rst
index 9561e25f7..200d95b92 100644
--- a/doc/source/dev/gitwash/development_workflow.rst
+++ b/doc/source/dev/development_workflow.rst
@@ -507,4 +507,4 @@ them to ``upstream`` as follows:
want.
-.. include:: git_links.inc
+.. include:: gitwash/git_links.inc
diff --git a/doc/source/dev/gitwash/following_latest.rst b/doc/source/dev/gitwash/following_latest.rst
index ad497bf9a..0e98b4ec4 100644
--- a/doc/source/dev/gitwash/following_latest.rst
+++ b/doc/source/dev/gitwash/following_latest.rst
@@ -1,9 +1,5 @@
.. _following-latest:
-=============================
- Following the latest source
-=============================
-
These are the instructions if you just want to follow the latest
*NumPy* source, but you don't need to do any development for now.
If you do want to contribute a patch (excellent!) or do more extensive
diff --git a/doc/source/dev/gitwash/git_development.rst b/doc/source/dev/gitwash/git_development.rst
deleted file mode 100644
index 5d7d47f89..000000000
--- a/doc/source/dev/gitwash/git_development.rst
+++ /dev/null
@@ -1,14 +0,0 @@
-.. _git-development:
-
-=====================
- Git for development
-=====================
-
-Contents:
-
-.. toctree::
- :maxdepth: 2
-
- development_setup
- configure_git
- dot2_dot3
diff --git a/doc/source/dev/gitwash/git_intro.rst b/doc/source/dev/gitwash/git_intro.rst
index 3ce322f8f..9d596d4d4 100644
--- a/doc/source/dev/gitwash/git_intro.rst
+++ b/doc/source/dev/gitwash/git_intro.rst
@@ -1,42 +1,8 @@
-============
-Introduction
-============
-
-These pages describe a git_ and github_ workflow for the NumPy_
-project.
-
-There are several different workflows here, for different ways of
-working with *NumPy*.
-
-This is not a comprehensive git_ reference, it's just a workflow for our
-own project. It's tailored to the github_ hosting service. You may well
-find better or quicker ways of getting stuff done with git_, but these
-should get you started.
-
-For general resources for learning git_ see :ref:`git-resources`.
-
-.. _install-git:
-
Install git
===========
-Overview
---------
-
-================ =============
-Debian / Ubuntu ``sudo apt-get install git-core``
-Fedora ``sudo yum install git-core``
-Windows Download and install msysGit_
-OS X Use the git-osx-installer_
-================ =============
-
-In detail
----------
-
-See the git_ page for the most recent information.
-
-Have a look at the github_ install help pages available from `github help`_
-
-There are good instructions here: http://book.git-scm.com/2_installing_git.html
+Developing with git can be done entirely without github. Git is a distributed
+version control system. In order to use git on your machine you must `install
+it`_.
.. include:: git_links.inc
diff --git a/doc/source/dev/gitwash/git_links.inc b/doc/source/dev/gitwash/git_links.inc
index cebbb3a67..f69a3cf62 100644
--- a/doc/source/dev/gitwash/git_links.inc
+++ b/doc/source/dev/gitwash/git_links.inc
@@ -10,10 +10,9 @@
.. git stuff
.. _git: https://git-scm.com/
-.. _github: https://github.com
+.. _github: https://github.com/numpy/numpy
.. _github help: https://help.github.com
-.. _msysgit: https://code.google.com/p/msysgit/downloads/list
-.. _git-osx-installer: https://code.google.com/p/git-osx-installer/downloads/list
+.. _`install it`: https://git-scm.com/downloads
.. _subversion: http://subversion.tigris.org/
.. _git cheat sheet: http://cheat.errtheblog.com/s/git
.. _pro git book: https://git-scm.com/book/
diff --git a/doc/source/dev/gitwash/index.rst b/doc/source/dev/gitwash/index.rst
index b867bbd97..afbb5e019 100644
--- a/doc/source/dev/gitwash/index.rst
+++ b/doc/source/dev/gitwash/index.rst
@@ -1,7 +1,22 @@
.. _using-git:
+.. _git-development:
+
+=====================
+ Git for development
+=====================
+
+These pages describe a general git_ and github_ workflow.
+
+This is not a comprehensive git_ reference. It's tailored to the github_
+hosting service. You may well find better or quicker ways of getting stuff done
+with git_, but these should get you started.
+
+For general resources for learning git_ see :ref:`git-resources`.
+
+Have a look at the github_ install help pages available from `github help`_.
+
+.. _install-git:
-Working with *NumPy* source code
-================================
Contents:
@@ -10,6 +25,9 @@ Contents:
git_intro
following_latest
- git_development
- development_workflow
+ development_setup
+ configure_git
+ dot2_dot3
git_resources
+
+.. include:: git_links.inc
diff --git a/doc/source/dev/governance/people.rst b/doc/source/dev/governance/people.rst
index 7b8d3cab0..40347f9bf 100644
--- a/doc/source/dev/governance/people.rst
+++ b/doc/source/dev/governance/people.rst
@@ -56,10 +56,7 @@ NumFOCUS Subcommittee
Institutional Partners
----------------------
-* UC Berkeley (Stefan van der Walt)
+* UC Berkeley (Stefan van der Walt, Matti Picus, Tyler Reddy, Sebastian Berg)
+* Quansight (Ralf Gommers, Hameer Abbasi)
-Document history
-----------------
-
-https://github.com/numpy/numpy/commits/master/doc/source/dev/governance/governance.rst
diff --git a/doc/source/dev/index.rst b/doc/source/dev/index.rst
index 825b93b53..f0b81ba5d 100644
--- a/doc/source/dev/index.rst
+++ b/doc/source/dev/index.rst
@@ -2,14 +2,230 @@
Contributing to NumPy
#####################
+Development process - summary
+=============================
+
+Here's the short summary; complete TOC links are below:
+
+1. If you are a first-time contributor:
+
+ * Go to `https://github.com/numpy/numpy
+ <https://github.com/numpy/numpy>`_ and click the
+ "fork" button to create your own copy of the project.
+
+ * Clone the project to your local computer::
+
+ git clone https://github.com/your-username/numpy.git
+
+ * Change into the new directory::
+
+ cd numpy
+
+ * Add the upstream repository::
+
+ git remote add upstream https://github.com/numpy/numpy.git
+
+ * Now, ``git remote -v`` will show two remote repositories named:
+
+ - ``upstream``, which refers to the ``numpy`` repository
+ - ``origin``, which refers to your personal fork
+
+2. Develop your contribution:
+
+ * Pull the latest changes from upstream::
+
+ git checkout master
+ git pull upstream master
+
+ * Create a branch for the feature you want to work on. Since the
+ branch name will appear in the merge message, use a sensible name
+ such as 'linspace-speedups'::
+
+ git checkout -b linspace-speedups
+
+ * Commit locally as you progress (``git add`` and ``git commit``).
+ Use a `properly formatted <writing-the-commit-message>` commit message,
+ write tests that fail before your change and pass afterward, and run all
+ the `tests locally <development-environment>`. Be sure to document any
+ changed behavior in docstrings, keeping to the NumPy docstring
+ `standard <howto-document>`.
+
+3. To submit your contribution:
+
+ * Push your changes back to your fork on GitHub::
+
+ git push origin linspace-speedups
+
+ * Enter your GitHub username and password (repeat contributors or advanced
+ users can remove this step by connecting to GitHub with
+ `SSH <set-up-and-configure-a-github-account>`).
+
+ * Go to GitHub. The new branch will show up with a green Pull Request
+ button. Make sure the title and message are clear, concise, and
+ self-explanatory. Then click the button to submit it.
+
+ * If your commit introduces a new feature or changes functionality, post on
+ the `mailing list`_ to explain your changes. For bug fixes, documentation
+ updates, etc., this is generally not necessary, though if you do not get
+ any reaction, do feel free to ask for review.
+
+4. Review process:
+
+ * Reviewers (the other developers and interested community members) will
+ write inline and/or general comments on your Pull Request (PR) to help
+ you improve its implementation, documentation and style. Every single
+ developer working on the project has their code reviewed, and we've come
+ to see it as friendly conversation from which we all learn and the
+ overall code quality benefits. Therefore, please don't let the review
+ discourage you from contributing: its only aim is to improve the quality
+ of the project, not to criticize (we are, after all, very grateful for the
+ time you're donating!).
+
+ * To update your PR, make your changes on your local repository, commit,
+ **run tests, and only if they succeed** push to your fork. As soon as
+ those changes are pushed up (to the same branch as before) the PR will
+ update automatically. If you have no idea how to fix the test failures,
+ you may push your changes anyway and ask for help in a PR comment.
+
+ * Various continuous integration (CI) services are triggered after each PR
+ update to build the code, run unit tests, measure code coverage and check
+ coding style of your branch. The CI tests must pass before your PR can be
+ merged. If CI fails, you can find out why by clicking on the "failed"
+ icon (red cross) and inspecting the build and test log. To avoid overuse
+ and waste of this resource, `test your work
+ <recommended-development-setup>` locally before committing.
+
+ * A PR must be **approved** by at least one core team member before merging.
+ Approval means the core team member has carefully reviewed the changes,
+ and the PR is ready for merging.
+
+5. Document changes
+
+ Beyond changes to a function's docstring and possible description in the
+ general documentation, if your change introduces any user-facing
+ modifications, update the current release notes under
+ ``doc/release/X.XX-notes.rst``.
+
+ If your change introduces a deprecation, make sure to discuss this first on
+ GitHub or the mailing list. If agreement on the deprecation is
+ reached, follow `NEP 23 deprecation policy <http://www.numpy.org/neps/
+ nep-0023-backwards-compatibility.html>`_ to add the deprecation.
+
+6. Cross-referencing issues
+
+ If the PR relates to any issues, you can add the text ``xref gh-xxxx``, where
+ ``xxxx`` is the issue number, to GitHub comments. Likewise, if the PR
+ solves an issue, replace the ``xref`` with ``closes``, ``fixes`` or any of
+ the other flavors `github accepts <https://help.github.com/en/articles/
+ closing-issues-using-keywords>`_.
+
+ In the source code, be sure to preface any issue or PR reference with
+ ``gh-xxxx``.
+
+For a more detailed discussion, read on and follow the links at the bottom of
+this page.
+
+Divergence between ``upstream/master`` and your feature branch
+--------------------------------------------------------------
+
+If GitHub indicates that the branch of your Pull Request can no longer
+be merged automatically, you have to incorporate changes that have been made
+since you started into your branch. Our recommended way to do this is to
+`rebase on master <rebasing-on-master>`.
+
+Guidelines
+----------
+
+* All code should have tests (see `test coverage`_ below for more details).
+* All code should be `documented <docstring-standard>`.
+* No changes are ever committed without review and approval by a core
+ team member. Please ask politely on the PR or on the `mailing list`_ if you
+ get no response to your pull request within a week.
+
+Stylistic Guidelines
+--------------------
+
+* Set up your editor to follow `PEP 8 <https://www.python.org/dev/peps/
+ pep-0008/>`_ (remove trailing white space, no tabs, etc.). Check code with
+ pyflakes / flake8.
+
+* Use numpy data types instead of strings (``np.uint8`` instead of
+ ``"uint8"``).
+
+* Use the following import conventions::
+
+ import numpy as np
+
+* For C code, see the `numpy-c-style-guide`.
+
+
+Test coverage
+-------------
+
+Pull requests (PRs) that modify code should either have new tests, or modify existing
+tests to fail before the PR and pass afterwards. You should `run the tests
+<development-environment>` before pushing a PR.
+
+Tests for a module should ideally cover all code in that module,
+i.e., statement coverage should be at 100%.
+
+To measure the test coverage, install
+`pytest-cov <https://pytest-cov.readthedocs.io/en/latest/>`__
+and then run::
+
+ $ python runtests.py --coverage
+
+This will create a report in ``build/coverage``, which can be viewed with::
+
+ $ firefox build/coverage/index.html
+
+Building docs
+-------------
+
+To build docs, run ``make`` from the ``doc`` directory. ``make help`` lists
+all targets. For example, to build the HTML documentation, you can run:
+
+.. code:: sh
+
+ make html
+
+Then, all the HTML files will be generated in ``doc/build/html/``.
+Since the documentation is based on docstrings, the appropriate version of
+NumPy must be installed in the Python environment used to run Sphinx.
+
+Requirements
+~~~~~~~~~~~~
+
+`Sphinx <http://www.sphinx-doc.org/en/stable/>`__ is needed to build
+the documentation. Matplotlib and SciPy are also required.
+
+Fixing Warnings
+~~~~~~~~~~~~~~~
+
+- "citation not found: R###" There is probably an underscore after a
+ reference in the first line of a docstring (e.g. [1]\_). Use this
+ method to find the source file: ``cd doc/build; grep -rin R####``
+
+- "Duplicate citation R###, other instance in..." There is probably a
+ [2] without a [1] in one of the docstrings.
+
+Development process - details
+=============================
+
+The rest of the story
+
.. toctree::
- :maxdepth: 3
+ :maxdepth: 2
conduct/code_of_conduct
- gitwash/index
+ Git Basics <gitwash/index>
development_environment
+ development_workflow
+ ../benchmarking
style_guide
releasing
governance/index
-For core developers: see :ref:`development-workflow`.
+The NumPy-specific workflow is described in `numpy-development-workflow`.
+
+.. _`mailing list`: https://mail.python.org/mailman/listinfo/numpy-devel
diff --git a/doc/source/dev/gitwash/pull_button.png b/doc/source/dev/pull_button.png
index e5031681b..e5031681b 100644
--- a/doc/source/dev/gitwash/pull_button.png
+++ b/doc/source/dev/pull_button.png
Binary files differ
diff --git a/doc/source/docs/howto_build_docs.rst b/doc/source/docs/howto_build_docs.rst
index cdf490c37..98d1b88ba 100644
--- a/doc/source/docs/howto_build_docs.rst
+++ b/doc/source/docs/howto_build_docs.rst
@@ -5,7 +5,7 @@ Building the NumPy API and reference docs
=========================================
We currently use Sphinx_ for generating the API and reference
-documentation for NumPy. You will need Sphinx 1.0.1 or newer.
+documentation for NumPy. You will need Sphinx 1.8.3 or newer.
If you only want to get the documentation, note that pre-built
versions can be found at
diff --git a/doc/source/reference/arrays.classes.rst b/doc/source/reference/arrays.classes.rst
index dc8669a2b..0fafc593e 100644
--- a/doc/source/reference/arrays.classes.rst
+++ b/doc/source/reference/arrays.classes.rst
@@ -50,10 +50,6 @@ NumPy provides several hooks that classes can customize:
.. versionadded:: 1.13
- .. note:: The API is `provisional
- <https://docs.python.org/3/glossary.html#term-provisional-api>`_,
- i.e., we do not yet guarantee backward compatibility.
-
Any class, ndarray subclass or not, can define this method or set it to
:obj:`None` in order to override the behavior of NumPy's ufuncs. This works
quite similarly to Python's ``__mul__`` and other binary operation routines.
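For illustration, a minimal sketch of the opt-out described above: a class that sets ``__array_ufunc__ = None`` makes NumPy's operators refuse to coerce it, so mixed operations raise ``TypeError`` instead of silently converting the object to an array (``NoUfuncs`` is a hypothetical name, not part of NumPy):

```python
import numpy as np

class NoUfuncs:
    # Opting out of ufuncs: NumPy operators involving this class raise
    # TypeError instead of coercing the object into an ndarray.
    __array_ufunc__ = None

try:
    np.arange(3) + NoUfuncs()
    raised = False
except TypeError:
    raised = True
assert raised
```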
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index f2072263f..ab743a8ee 100644
--- a/doc/source/reference/arrays.dtypes.rst
+++ b/doc/source/reference/arrays.dtypes.rst
@@ -14,7 +14,7 @@ following aspects of the data:
1. Type of the data (integer, float, Python object, etc.)
2. Size of the data (how many bytes is in *e.g.* the integer)
3. Byte order of the data (:term:`little-endian` or :term:`big-endian`)
-4. If the data type is :term:`structured`, an aggregate of other
+4. If the data type is a :term:`structured data type`, an aggregate of other
data types, (*e.g.*, describing an array item consisting of
an integer and a float),
@@ -42,7 +42,7 @@ needed in NumPy.
pair: dtype; field
Structured data types are formed by creating a data type whose
-:term:`fields` contain other data types. Each field has a name by
+:term:`fields <field>` contain other data types. Each field has a name by
which it can be :ref:`accessed <arrays.indexing.fields>`. The parent data
type should be of sufficient size to contain all its fields; the
parent is nearly always based on the :class:`void` type which allows
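As a quick sketch of the idea, a structured data type with two named fields can be built and accessed like this (field names ``x`` and ``y`` are arbitrary examples):

```python
import numpy as np

# Two fields: "x" stored as int32, "y" as float64
dt = np.dtype([('x', np.int32), ('y', np.float64)])
a = np.zeros(3, dtype=dt)
a['x'] = [1, 2, 3]                 # each field is accessed by its name

assert a.dtype.names == ('x', 'y')
assert a['y'].dtype == np.float64
assert a.itemsize == 12            # 4 + 8 bytes per item (packed layout)
```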
@@ -145,7 +145,7 @@ Array-scalar types
This is true for their sub-classes as well.
Note that not all data-type information can be supplied with a
- type-object: for example, :term:`flexible` data-types have
+ type-object: for example, `flexible` data-types have
a default *itemsize* of 0, and require an explicitly given size
to be useful.
@@ -511,7 +511,7 @@ Endianness of this data:
dtype.byteorder
-Information about sub-data-types in a :term:`structured` data type:
+Information about sub-data-types in a :term:`structured data type`:
.. autosummary::
:toctree: generated/
@@ -538,6 +538,7 @@ Attributes providing additional information:
dtype.isnative
dtype.descr
dtype.alignment
+ dtype.base
Methods
diff --git a/doc/source/reference/arrays.indexing.rst b/doc/source/reference/arrays.indexing.rst
index 027a57f26..0c0c8dff6 100644
--- a/doc/source/reference/arrays.indexing.rst
+++ b/doc/source/reference/arrays.indexing.rst
@@ -57,6 +57,17 @@ interpreted as counting from the end of the array (*i.e.*, if
All arrays generated by basic slicing are always :term:`views <view>`
of the original array.
+.. note::
+
+ NumPy slicing creates a :term:`view` instead of a copy, unlike slicing of
+ built-in Python sequences such as strings, tuples and lists.
+ Care must be taken when extracting a small portion from a large array
+ that is no longer needed after the extraction: the small portion
+ retains a reference to the large original array, whose memory will
+ not be released until all arrays derived from it are
+ garbage-collected. In such cases an
+ explicit ``copy()`` is recommended.
+
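A short sketch of the view-versus-copy behavior described in the note:

```python
import numpy as np

big = np.arange(1_000_000)
small_view = big[:3]            # a view: shares big's memory buffer
assert small_view.base is big   # keeps the whole million-element buffer alive

small_copy = big[:3].copy()     # an independent array owning its own data
assert small_copy.base is None  # no reference back to big
```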
The standard rules of sequence slicing apply to basic slicing on a
per-dimension basis (including using a step index). Some useful
concepts to remember include:
@@ -111,9 +122,10 @@ concepts to remember include:
[5],
[6]]])
-- :const:`Ellipsis` expand to the number of ``:`` objects needed to
- make a selection tuple of the same length as ``x.ndim``. There may
- only be a single ellipsis present.
+- :const:`Ellipsis` expands to the number of ``:`` objects needed for the
+ selection tuple to index all dimensions. In most cases, this means that the
+ length of the expanded selection tuple is ``x.ndim``. There may only be a
+ single ellipsis present.
.. admonition:: Example
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 306d22f43..8f431bc9c 100644
--- a/doc/source/reference/arrays.ndarray.rst
+++ b/doc/source/reference/arrays.ndarray.rst
@@ -9,7 +9,7 @@ The N-dimensional array (:class:`ndarray`)
An :class:`ndarray` is a (usually fixed-size) multidimensional
container of items of the same type and size. The number of dimensions
and items in an array is defined by its :attr:`shape <ndarray.shape>`,
-which is a :class:`tuple` of *N* positive integers that specify the
+which is a :class:`tuple` of *N* non-negative integers that specify the
sizes of each dimension. The type of items in the array is specified by
a separate :ref:`data-type object (dtype) <arrays.dtypes>`, one of which
is associated with each ndarray.
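"Non-negative" means that both zero-sized dimensions and the empty shape tuple (a 0-d array) are valid, which a couple of lines make concrete:

```python
import numpy as np

zero_d = np.array(3.0)            # 0-d array: shape is the empty tuple
assert zero_d.shape == ()
assert zero_d.ndim == 0

empty = np.empty((2, 0, 3))       # a dimension of size 0 is allowed
assert empty.shape == (2, 0, 3)
assert empty.size == 0            # no elements, but a perfectly valid array
```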
@@ -82,10 +82,12 @@ Indexing arrays
Arrays can be indexed using an extended Python slicing syntax,
``array[selection]``. Similar syntax is also used for accessing
-fields in a :ref:`structured array <arrays.dtypes.field>`.
+fields in a :term:`structured data type`.
.. seealso:: :ref:`Array Indexing <arrays.indexing>`.
+.. _memory-layout:
+
Internal memory layout of an ndarray
====================================
@@ -127,7 +129,7 @@ strided scheme, and correspond to memory that can be *addressed* by the strides:
where :math:`d_j` `= self.shape[j]`.
Both the C and Fortran orders are :term:`contiguous`, *i.e.,*
-:term:`single-segment`, memory layouts, in which every part of the
+single-segment, memory layouts, in which every part of the
memory block can be accessed by some combination of the indices.
While a C-style and Fortran-style contiguous array, which has the corresponding
@@ -143,14 +145,15 @@ different. This can happen in two cases:
considered C-style and Fortran-style contiguous.
Point 1. means that ``self`` and ``self.squeeze()`` always have the same
-contiguity and :term:`aligned` flags value. This also means that even a high
-dimensional array could be C-style and Fortran-style contiguous at the same
-time.
+contiguity and ``aligned`` flags value. This also means
+that even a high dimensional array could be C-style and Fortran-style
+contiguous at the same time.
.. index:: aligned
An array is considered aligned if the memory offsets for all elements and the
-base offset itself is a multiple of `self.itemsize`.
+base offset itself is a multiple of `self.itemsize`. Understanding
+`memory-alignment` helps in writing code that performs well on most hardware.
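Freshly allocated arrays are normally aligned, which can be checked through the flags, as a quick sketch:

```python
import numpy as np

a = np.arange(10, dtype=np.float64)
assert a.flags['ALIGNED']        # offsets are multiples of the itemsize
assert a.flags['C_CONTIGUOUS']   # single-segment, C-order layout
```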
.. note::
@@ -409,6 +412,7 @@ be performed.
.. autosummary::
:toctree: generated/
+ ndarray.max
ndarray.argmax
ndarray.min
ndarray.argmin
@@ -440,7 +444,7 @@ Each of the arithmetic operations (``+``, ``-``, ``*``, ``/``, ``//``,
``%``, ``divmod()``, ``**`` or ``pow()``, ``<<``, ``>>``, ``&``,
``^``, ``|``, ``~``) and the comparisons (``==``, ``<``, ``>``,
``<=``, ``>=``, ``!=``) is equivalent to the corresponding
-:term:`universal function` (or :term:`ufunc` for short) in NumPy. For
+universal function (or :term:`ufunc` for short) in NumPy. For
more information, see the section on :ref:`Universal Functions
<ufuncs>`.
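The operator-to-ufunc correspondence can be checked directly:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20, 30])

assert np.array_equal(a + b, np.add(a, b))       # + corresponds to np.add
assert np.array_equal(a ** 2, np.power(a, 2))    # ** corresponds to np.power
assert np.array_equal(a < b, np.less(a, b))      # < corresponds to np.less
```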
@@ -461,12 +465,12 @@ Truth value of an array (:func:`bool()`):
.. autosummary::
:toctree: generated/
- ndarray.__nonzero__
+ ndarray.__bool__
.. note::
Truth-value testing of an array invokes
- :meth:`ndarray.__nonzero__`, which raises an error if the number of
+ :meth:`ndarray.__bool__`, which raises an error if the number of
elements in the array is larger than 1, because the truth value
of such arrays is ambiguous. Use :meth:`.any() <ndarray.any>` and
:meth:`.all() <ndarray.all>` instead to be clear about what is meant
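A sketch of this truth-value behavior:

```python
import numpy as np

a = np.array([True, False, True])
try:
    bool(a)                     # size > 1: truth value is ambiguous
    ambiguous = False
except ValueError:
    ambiguous = True
assert ambiguous
assert a.any() and not a.all()  # be explicit about what is meant
assert bool(np.array([True]))   # size-1 arrays convert fine
```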
@@ -492,7 +496,6 @@ Arithmetic:
ndarray.__add__
ndarray.__sub__
ndarray.__mul__
- ndarray.__div__
ndarray.__truediv__
ndarray.__floordiv__
ndarray.__mod__
@@ -527,7 +530,6 @@ Arithmetic, in-place:
ndarray.__iadd__
ndarray.__isub__
ndarray.__imul__
- ndarray.__idiv__
ndarray.__itruediv__
ndarray.__ifloordiv__
ndarray.__imod__
@@ -597,19 +599,17 @@ Container customization: (see :ref:`Indexing <arrays.indexing>`)
ndarray.__setitem__
ndarray.__contains__
-Conversion; the operations :func:`complex()`, :func:`int()`,
-:func:`long()`, :func:`float()`, :func:`oct()`, and
-:func:`hex()`. They work only on arrays that have one element in them
+Conversion; the operations :func:`int()`, :func:`float()` and
+:func:`complex()`.
+They work only on arrays that have one element in them
and return the appropriate scalar.
.. autosummary::
:toctree: generated/
ndarray.__int__
- ndarray.__long__
ndarray.__float__
- ndarray.__oct__
- ndarray.__hex__
+ ndarray.__complex__
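As a quick sketch, these conversions succeed on single-element arrays and raise otherwise:

```python
import numpy as np

one = np.array([2.5])
assert float(one) == 2.5
assert int(one) == 2              # truncates toward zero
assert complex(one) == 2.5 + 0j

try:
    float(np.array([1.0, 2.0]))   # more than one element
    converted = True
except TypeError:
    converted = False
assert not converted
```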
String representations:
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index 9c4f05f75..d27d61e2c 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -177,7 +177,7 @@ Any Python object:
.. note::
- The data actually stored in :term:`object arrays <object array>`
+ The data actually stored in object arrays
(*i.e.*, arrays having dtype :class:`object_`) are references to
Python objects, not the objects themselves. Hence, object arrays
behave more like usual Python :class:`lists <list>`, in the sense
@@ -188,8 +188,10 @@ Any Python object:
on item access, but instead returns the actual object that
the array item refers to.
-The following data types are :term:`flexible`. They have no predefined
-size: the data they describe can be of different length in different
+.. index:: flexible
+
+The following data types are **flexible**: they have no predefined
+size and the data they describe can be of different length in different
arrays. (In the character codes ``#`` is an integer denoting how many
elements the data type consists of.)
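The zero default itemsize of flexible types is easy to see:

```python
import numpy as np

assert np.dtype('S').itemsize == 0    # flexible: no predefined size
assert np.dtype('S5').itemsize == 5   # an explicit size makes it usable
assert np.dtype('V8').itemsize == 8   # void also takes an explicit size
```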
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 76aa680ae..bd6062b16 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -33,7 +33,7 @@ sub-types).
Returns a pointer to the dimensions/shape of the array. The
number of elements matches the number of dimensions
- of the array.
+ of the array. Can return ``NULL`` for 0-dimensional arrays.
.. c:function:: npy_intp *PyArray_SHAPE(PyArrayObject *arr)
@@ -199,8 +199,8 @@ From scratch
^^^^^^^^^^^^
.. c:function:: PyObject* PyArray_NewFromDescr( \
- PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp* dims, \
- npy_intp* strides, void* data, int flags, PyObject* obj)
+ PyTypeObject* subtype, PyArray_Descr* descr, int nd, npy_intp const* dims, \
+ npy_intp const* strides, void* data, int flags, PyObject* obj)
This function steals a reference to *descr*. The easiest way to get one
is using :c:func:`PyArray_DescrFromType`.
@@ -219,7 +219,7 @@ From scratch
 If *data* is ``NULL``, then new uninitialized memory will be allocated and
*flags* can be non-zero to indicate a Fortran-style contiguous array. Use
- :c:ref:`PyArray_FILLWBYTE` to initialze the memory.
+ :c:func:`PyArray_FILLWBYTE` to initialize the memory.
If *data* is not ``NULL``, then it is assumed to point to the memory
to be used for the array and the *flags* argument is used as the
@@ -266,8 +266,9 @@ From scratch
base-class array.
.. c:function:: PyObject* PyArray_New( \
- PyTypeObject* subtype, int nd, npy_intp* dims, int type_num, \
- npy_intp* strides, void* data, int itemsize, int flags, PyObject* obj)
+ PyTypeObject* subtype, int nd, npy_intp const* dims, int type_num, \
+ npy_intp const* strides, void* data, int itemsize, int flags, \
+ PyObject* obj)
This is similar to :c:func:`PyArray_NewFromDescr` (...) except you
specify the data-type descriptor with *type_num* and *itemsize*,
@@ -288,29 +289,40 @@ From scratch
are passed in they must be consistent with the dimensions, the
itemsize, and the data of the array.
-.. c:function:: PyObject* PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
+.. c:function:: PyObject* PyArray_SimpleNew(int nd, npy_intp const* dims, int typenum)
Create a new uninitialized array of type, *typenum*, whose size in
- each of *nd* dimensions is given by the integer array, *dims*.
- This function cannot be used to create a flexible-type array (no
- itemsize given).
+ each of *nd* dimensions is given by the integer array, *dims*. The memory
+ for the array is uninitialized (unless typenum is :c:data:`NPY_OBJECT`,
+ in which case each element in the array is set to NULL). The
+ *typenum* argument allows specification of any of the builtin
+ data-types such as :c:data:`NPY_FLOAT` or :c:data:`NPY_LONG`. The
+ memory for the array can be set to zero if desired using
+ :c:func:`PyArray_FILLWBYTE` (return_object, 0). This function cannot be
+ used to create a flexible-type array (no itemsize given).
.. c:function:: PyObject* PyArray_SimpleNewFromData( \
- int nd, npy_intp* dims, int typenum, void* data)
+ int nd, npy_intp const* dims, int typenum, void* data)
Create an array wrapper around *data* pointed to by the given
pointer. The array flags will have a default that the data area is
well-behaved and C-style contiguous. The shape of the array is
given by the *dims* c-array of length *nd*. The data-type of the
- array is indicated by *typenum*.
+ array is indicated by *typenum*. If data comes from another
+ reference-counted Python object, the reference count on this object
+ should be increased after the pointer is passed in, and the base member
+ of the returned ndarray should point to the Python object that owns
+ the data. This will ensure that the provided memory is not
+ freed while the returned array is in existence. To free memory as soon
+ as the ndarray is deallocated, set the OWNDATA flag on the returned ndarray.
.. c:function:: PyObject* PyArray_SimpleNewFromDescr( \
- int nd, npy_intp* dims, PyArray_Descr* descr)
+ int nd, npy_intp const* dims, PyArray_Descr* descr)
- This function steals a reference to *descr* if it is not NULL.
+ This function steals a reference to *descr*.
- Create a new array with the provided data-type descriptor, *descr*
- , of the shape determined by *nd* and *dims*.
+ Create a new array with the provided data-type descriptor, *descr*,
+ of the shape determined by *nd* and *dims*.
.. c:function:: PyArray_FILLWBYTE(PyObject* obj, int val)
@@ -319,7 +331,7 @@ From scratch
This macro calls memset, so obj must be contiguous.
.. c:function:: PyObject* PyArray_Zeros( \
- int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
@@ -328,13 +340,13 @@ From scratch
corresponds to :c:type:`NPY_OBJECT` ).
.. c:function:: PyObject* PyArray_ZEROS( \
- int nd, npy_intp* dims, int type_num, int fortran)
+ int nd, npy_intp const* dims, int type_num, int fortran)
Macro form of :c:func:`PyArray_Zeros` which takes a type-number instead
of a data-type object.
.. c:function:: PyObject* PyArray_Empty( \
- int nd, npy_intp* dims, PyArray_Descr* dtype, int fortran)
+ int nd, npy_intp const* dims, PyArray_Descr* dtype, int fortran)
Construct a new *nd* -dimensional array with shape given by *dims*
and data type given by *dtype*. If *fortran* is non-zero, then a
@@ -344,7 +356,7 @@ From scratch
filled with :c:data:`Py_None`.
.. c:function:: PyObject* PyArray_EMPTY( \
- int nd, npy_intp* dims, int typenum, int fortran)
+ int nd, npy_intp const* dims, int typenum, int fortran)
Macro form of :c:func:`PyArray_Empty` which takes a type-number,
*typenum*, instead of a data-type object.
@@ -510,6 +522,11 @@ From other objects
:c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
:c:data:`NPY_ARRAY_ALIGNED`
+ .. c:var:: NPY_ARRAY_OUT_ARRAY
+
+ :c:data:`NPY_ARRAY_C_CONTIGUOUS` \| :c:data:`NPY_ARRAY_ALIGNED` \|
+ :c:data:`NPY_ARRAY_WRITEABLE`
+
.. c:var:: NPY_ARRAY_OUT_FARRAY
:c:data:`NPY_ARRAY_F_CONTIGUOUS` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
@@ -573,8 +590,9 @@ From other objects
return NULL;
}
if (arr == NULL) {
+ /*
... validate/change dtype, validate flags, ndim, etc ...
- // Could make custom strides here too
+ Could make custom strides here too */
arr = PyArray_NewFromDescr(&PyArray_Type, dtype, ndim,
dims, NULL,
fortran ? NPY_ARRAY_F_CONTIGUOUS : 0,
@@ -588,10 +606,14 @@ From other objects
}
}
else {
+ /*
... in this case the other parameters weren't filled, just
validate and possibly copy arr itself ...
+ */
}
+ /*
... use arr ...
+ */
.. c:function:: PyObject* PyArray_CheckFromAny( \
PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, \
@@ -784,7 +806,7 @@ From other objects
PyObject* obj, int typenum, int requirements)
Combination of :c:func:`PyArray_FROM_OF` and :c:func:`PyArray_FROM_OT`
- allowing both a *typenum* and a *flags* argument to be provided..
+ allowing both a *typenum* and a *flags* argument to be provided.
.. c:function:: PyObject* PyArray_FROMANY( \
PyObject* obj, int typenum, int min, int max, int requirements)
@@ -816,17 +838,17 @@ Dealing with types
General check of Python Type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_Check(op)
+.. c:function:: PyArray_Check(PyObject *op)
Evaluates true if *op* is a Python object whose type is a sub-type
of :c:data:`PyArray_Type`.
-.. c:function:: PyArray_CheckExact(op)
+.. c:function:: PyArray_CheckExact(PyObject *op)
Evaluates true if *op* is a Python object with type
:c:data:`PyArray_Type`.
-.. c:function:: PyArray_HasArrayInterface(op, out)
+.. c:function:: PyArray_HasArrayInterface(PyObject *op, PyObject *out)
If ``op`` implements any part of the array interface, then ``out``
will contain a new reference to the newly created ndarray using
@@ -1650,11 +1672,13 @@ Conversion
.. c:function:: PyObject* PyArray_GetField( \
PyArrayObject* self, PyArray_Descr* dtype, int offset)
- Equivalent to :meth:`ndarray.getfield<numpy.ndarray.getfield>` (*self*, *dtype*, *offset*). Return
- a new array of the given *dtype* using the data in the current
- array at a specified *offset* in bytes. The *offset* plus the
- itemsize of the new array type must be less than *self*
- ->descr->elsize or an error is raised. The same shape and strides
+ Equivalent to :meth:`ndarray.getfield<numpy.ndarray.getfield>`
+ (*self*, *dtype*, *offset*). This function `steals a reference
+ <https://docs.python.org/3/c-api/intro.html#reference-count-details>`_
+ to `PyArray_Descr` and returns a new array of the given `dtype` using
+ the data in the current array at a specified `offset` in bytes. The
+ `offset` plus the itemsize of the new array type must be less than ``self
+ ->descr->elsize`` or an error is raised. The same shape and strides
as the original array are used. Therefore, this function has the
effect of returning a field from a structured array. But, it can also
be used to select specific bytes or groups of bytes from any array
@@ -1904,22 +1928,23 @@ Item selection and manipulation
all values are clipped to the region [0, len(*op*) ).
-.. c:function:: PyObject* PyArray_Sort(PyArrayObject* self, int axis)
+.. c:function:: PyObject* PyArray_Sort(PyArrayObject* self, int axis, NPY_SORTKIND kind)
- Equivalent to :meth:`ndarray.sort<numpy.ndarray.sort>` (*self*, *axis*). Return an array with
- the items of *self* sorted along *axis*.
+ Equivalent to :meth:`ndarray.sort<numpy.ndarray.sort>` (*self*, *axis*, *kind*).
+ Return an array with the items of *self* sorted along *axis*. The array
+ is sorted using the algorithm denoted by *kind*, an enum value that
+ selects the sorting algorithm to use.
.. c:function:: PyObject* PyArray_ArgSort(PyArrayObject* self, int axis)
- Equivalent to :meth:`ndarray.argsort<numpy.ndarray.argsort>` (*self*, *axis*). Return an array of
- indices such that selection of these indices along the given
- ``axis`` would return a sorted version of *self*. If *self*
- ->descr is a data-type with fields defined, then
- self->descr->names is used to determine the sort order. A
- comparison where the first field is equal will use the second
- field and so on. To alter the sort order of a structured array, create
- a new data-type with a different order of names and construct a
- view of the array with that new data-type.
+ Equivalent to :meth:`ndarray.argsort<numpy.ndarray.argsort>` (*self*, *axis*).
+ Return an array of indices such that selection of these indices
+ along the given ``axis`` would return a sorted version of *self*. If
+ ``self->descr`` is a data-type with fields defined, then ``self->descr->names``
+ is used to determine the sort order. A comparison where the first field is equal
+ will use the second field and so on. To alter the sort order of a
+ structured array, create a new data-type with a different order of names
+ and construct a view of the array with that new data-type.
.. c:function:: PyObject* PyArray_LexSort(PyObject* sort_keys, int axis)
@@ -2333,8 +2358,8 @@ Other functions
^^^^^^^^^^^^^^^
.. c:function:: Bool PyArray_CheckStrides( \
- int elsize, int nd, npy_intp numbytes, npy_intp* dims, \
- npy_intp* newstrides)
+ int elsize, int nd, npy_intp numbytes, npy_intp const* dims, \
+ npy_intp const* newstrides)
Determine if *newstrides* is a strides array consistent with the
memory of an *nd* -dimensional array with shape ``dims`` and
@@ -2346,14 +2371,14 @@ Other functions
*elsize* refer to a single-segment array. Return :c:data:`NPY_TRUE` if
*newstrides* is acceptable, otherwise return :c:data:`NPY_FALSE`.
-.. c:function:: npy_intp PyArray_MultiplyList(npy_intp* seq, int n)
+.. c:function:: npy_intp PyArray_MultiplyList(npy_intp const* seq, int n)
-.. c:function:: int PyArray_MultiplyIntList(int* seq, int n)
+.. c:function:: int PyArray_MultiplyIntList(int const* seq, int n)
Both of these routines multiply an *n* -length array, *seq*, of
integers and return the result. No overflow checking is performed.
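What ``PyArray_MultiplyList`` computes for a dimensions array is, at the Python level, just the element count of a shape; a trivial sketch:

```python
import numpy as np
from math import prod

a = np.empty((3, 4, 5))
# the product of the shape entries is the total number of elements
assert prod(a.shape) == a.size == 60
```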
-.. c:function:: int PyArray_CompareLists(npy_intp* l1, npy_intp* l2, int n)
+.. c:function:: int PyArray_CompareLists(npy_intp const* l1, npy_intp const* l2, int n)
Given two *n* -length arrays of integers, *l1*, and *l2*, return
1 if the lists are identical; otherwise, return 0.
@@ -2659,22 +2684,22 @@ cost of a slight overhead.
.. code-block:: c
- PyArrayIterObject \*iter;
- PyArrayNeighborhoodIterObject \*neigh_iter;
+ PyArrayIterObject *iter;
+ PyArrayNeighborhoodIterObject *neigh_iter;
iter = PyArray_IterNew(x);
- //For a 3x3 kernel
+ /* For a 3x3 kernel */
bounds = {-1, 1, -1, 1};
neigh_iter = (PyArrayNeighborhoodIterObject*)PyArrayNeighborhoodIter_New(
iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);
for(i = 0; i < iter->size; ++i) {
for (j = 0; j < neigh_iter->size; ++j) {
- // Walk around the item currently pointed by iter->dataptr
+ /* Walk around the item currently pointed by iter->dataptr */
PyArrayNeighborhoodIter_Next(neigh_iter);
}
- // Move to the next point of iter
+ /* Move to the next point of iter */
PyArrayIter_Next(iter);
PyArrayNeighborhoodIter_Reset(neigh_iter);
}
@@ -2988,8 +3013,11 @@ to.
.. c:function:: int PyArray_SortkindConverter(PyObject* obj, NPY_SORTKIND* sort)
Convert Python strings into one of :c:data:`NPY_QUICKSORT` (starts
- with 'q' or 'Q') , :c:data:`NPY_HEAPSORT` (starts with 'h' or 'H'),
- or :c:data:`NPY_MERGESORT` (starts with 'm' or 'M').
+ with 'q' or 'Q'), :c:data:`NPY_HEAPSORT` (starts with 'h' or 'H'),
+ :c:data:`NPY_MERGESORT` (starts with 'm' or 'M') or :c:data:`NPY_STABLESORT`
+ (starts with 't' or 'T'). :c:data:`NPY_MERGESORT` and :c:data:`NPY_STABLESORT`
+ are aliased to each other for backwards compatibility and may refer to one
+ of several stable sorting algorithms depending on the data type.
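The aliasing of the merge and stable sorts is observable from Python (``kind='stable'`` requires NumPy >= 1.15); a small sketch:

```python
import numpy as np

a = np.random.default_rng(0).integers(0, 10, size=100)
# 'mergesort' and 'stable' select the same stable sorting algorithm
s1 = np.sort(a, kind='mergesort')
s2 = np.sort(a, kind='stable')
assert (s1 == s2).all()
assert (np.diff(s1) >= 0).all()   # result is sorted ascending
```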
.. c:function:: int PyArray_SearchsideConverter( \
PyObject* obj, NPY_SEARCHSIDE* side)
@@ -3252,19 +3280,19 @@ Memory management
Macros to allocate, free, and reallocate memory. These macros are used
internally to create arrays.
-.. c:function:: npy_intp* PyDimMem_NEW(nd)
+.. c:function:: npy_intp* PyDimMem_NEW(int nd)
-.. c:function:: PyDimMem_FREE(npy_intp* ptr)
+.. c:function:: PyDimMem_FREE(char* ptr)
-.. c:function:: npy_intp* PyDimMem_RENEW(npy_intp* ptr, npy_intp newnd)
+.. c:function:: npy_intp* PyDimMem_RENEW(void* ptr, size_t newnd)
Macros to allocate, free, and reallocate dimension and strides memory.
-.. c:function:: PyArray_malloc(nbytes)
+.. c:function:: void* PyArray_malloc(size_t nbytes)
-.. c:function:: PyArray_free(ptr)
+.. c:function:: PyArray_free(void* ptr)
-.. c:function:: PyArray_realloc(ptr, nbytes)
+.. c:function:: void* PyArray_realloc(npy_intp* ptr, size_t nbytes)
These macros use different memory allocators, depending on the
constant :c:data:`NPY_USE_PYMEM`. The system malloc is used when
@@ -3466,31 +3494,31 @@ Other constants
Miscellaneous Macros
^^^^^^^^^^^^^^^^^^^^
-.. c:function:: PyArray_SAMESHAPE(a1, a2)
+.. c:function:: PyArray_SAMESHAPE(PyArrayObject *a1, PyArrayObject *a2)
Evaluates as True if arrays *a1* and *a2* have the same shape.
-.. c:function:: PyArray_MAX(a,b)
+.. c:macro:: PyArray_MAX(a,b)
Returns the maximum of *a* and *b*. If (*a*) or (*b*) are
expressions they are evaluated twice.
-.. c:function:: PyArray_MIN(a,b)
+.. c:macro:: PyArray_MIN(a,b)
Returns the minimum of *a* and *b*. If (*a*) or (*b*) are
expressions they are evaluated twice.
-.. c:function:: PyArray_CLT(a,b)
+.. c:macro:: PyArray_CLT(a,b)
-.. c:function:: PyArray_CGT(a,b)
+.. c:macro:: PyArray_CGT(a,b)
-.. c:function:: PyArray_CLE(a,b)
+.. c:macro:: PyArray_CLE(a,b)
-.. c:function:: PyArray_CGE(a,b)
+.. c:macro:: PyArray_CGE(a,b)
-.. c:function:: PyArray_CEQ(a,b)
+.. c:macro:: PyArray_CEQ(a,b)
-.. c:function:: PyArray_CNE(a,b)
+.. c:macro:: PyArray_CNE(a,b)
Implements the complex comparisons between two complex numbers
(structures with a real and imag member) using NumPy's definition
@@ -3533,11 +3561,15 @@ Enumerated Types
A special variable-type which can take on the values :c:data:`NPY_{KIND}`
where ``{KIND}`` is
- **QUICKSORT**, **HEAPSORT**, **MERGESORT**
+ **QUICKSORT**, **HEAPSORT**, **MERGESORT**, **STABLESORT**
.. c:var:: NPY_NSORTS
- Defined to be the number of sorts.
+ Defined to be the number of sorts. It is fixed at three by the need for
+ backwards compatibility, and consequently :c:data:`NPY_MERGESORT` and
+ :c:data:`NPY_STABLESORT` are aliased to each other and may refer to one
+ of several stable sorting algorithms depending on the data type.
+
.. c:type:: NPY_SCALARKIND
diff --git a/doc/source/reference/c-api.config.rst b/doc/source/reference/c-api.config.rst
index 60bf61a32..05e6fe44d 100644
--- a/doc/source/reference/c-api.config.rst
+++ b/doc/source/reference/c-api.config.rst
@@ -101,3 +101,22 @@ Platform information
Returns the endianness of the current platform.
One of :c:data:`NPY_CPU_BIG`, :c:data:`NPY_CPU_LITTLE`,
or :c:data:`NPY_CPU_UNKNOWN_ENDIAN`.
+
+
+Compiler directives
+-------------------
+
+.. c:var:: NPY_LIKELY
+.. c:var:: NPY_UNLIKELY
+.. c:var:: NPY_UNUSED
+
+
+Interrupt Handling
+------------------
+
+.. c:var:: NPY_INTERRUPT_H
+.. c:var:: NPY_SIGSETJMP
+.. c:var:: NPY_SIGLONGJMP
+.. c:var:: NPY_SIGJMP_BUF
+.. c:var:: NPY_SIGINT_ON
+.. c:var:: NPY_SIGINT_OFF
diff --git a/doc/source/reference/c-api.coremath.rst b/doc/source/reference/c-api.coremath.rst
index 691f73287..7e00322f9 100644
--- a/doc/source/reference/c-api.coremath.rst
+++ b/doc/source/reference/c-api.coremath.rst
@@ -80,8 +80,9 @@ Floating point classification
Useful math constants
~~~~~~~~~~~~~~~~~~~~~
-The following math constants are available in npy_math.h. Single and extended
-precision are also available by adding the F and L suffixes respectively.
+The following math constants are available in ``npy_math.h``. Single
+and extended precision are also available by adding the ``f`` and
+``l`` suffixes respectively.
.. c:var:: NPY_E
@@ -184,7 +185,7 @@ Those can be useful for precise floating point comparison.
* NPY_FPE_INVALID
Note that :c:func:`npy_get_floatstatus_barrier` is preferable as it prevents
- agressive compiler optimizations reordering the call relative to
+ aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.
.. versionadded:: 1.9.0
@@ -192,7 +193,7 @@ Those can be useful for precise floating point comparison.
.. c:function:: int npy_get_floatstatus_barrier(char*)
Get floating point status. A pointer to a local variable is passed in to
- prevent aggresive compiler optimizations from reodering this function call
+ prevent aggressive compiler optimizations from reordering this function call
relative to the code setting the status, which could lead to incorrect
results.
@@ -210,7 +211,7 @@ Those can be useful for precise floating point comparison.
Clears the floating point status. Returns the previous status mask.
Note that :c:func:`npy_clear_floatstatus_barrier` is preferable as it
- prevents agressive compiler optimizations reordering the call relative to
+ prevents aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.
.. versionadded:: 1.9.0
@@ -218,7 +219,7 @@ Those can be useful for precise floating point comparison.
.. c:function:: int npy_clear_floatstatus_barrier(char*)
Clears the floating point status. A pointer to a local variable is passed in to
- prevent aggresive compiler optimizations from reodering this function call.
+ prevent aggressive compiler optimizations from reordering this function call.
Returns the previous status mask.
.. versionadded:: 1.15.0
@@ -258,7 +259,7 @@ and co.
Half-precision functions
~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 2.0.0
+.. versionadded:: 1.6.0
The header file <numpy/halffloat.h> provides functions to work with
IEEE 754-2008 16-bit floating point values. While this format is
diff --git a/doc/source/reference/c-api.dtype.rst b/doc/source/reference/c-api.dtype.rst
index 9ac46b284..72e908861 100644
--- a/doc/source/reference/c-api.dtype.rst
+++ b/doc/source/reference/c-api.dtype.rst
@@ -308,13 +308,45 @@ to the front of the integer name.
(unsigned) char
-.. c:type:: npy_(u)short
+.. c:type:: npy_short
- (unsigned) short
+ short
-.. c:type:: npy_(u)int
+.. c:type:: npy_ushort
- (unsigned) int
+ unsigned short
+
+.. c:type:: npy_uint
+
+ unsigned int
+
+.. c:type:: npy_int
+
+ int
+
+.. c:type:: npy_int16
+
+ 16-bit integer
+
+.. c:type:: npy_uint16
+
+ 16-bit unsigned integer
+
+.. c:type:: npy_int32
+
+ 32-bit integer
+
+.. c:type:: npy_uint32
+
+ 32-bit unsigned integer
+
+.. c:type:: npy_int64
+
+ 64-bit integer
+
+.. c:type:: npy_uint64
+
+ 64-bit unsigned integer
.. c:type:: npy_(u)long
@@ -324,22 +356,31 @@ to the front of the integer name.
(unsigned long long int)
-.. c:type:: npy_(u)intp
+.. c:type:: npy_intp
- (unsigned) Py_intptr_t (an integer that is the size of a pointer on
+ Py_intptr_t (an integer that is the size of a pointer on
+ the platform).
+
+.. c:type:: npy_uintp
+
+ unsigned Py_intptr_t (an integer that is the size of a pointer on
the platform).
(Complex) Floating point
^^^^^^^^^^^^^^^^^^^^^^^^
+.. c:type:: npy_half
+
+ 16-bit float
+
.. c:type:: npy_(c)float
- float
+ 32-bit float
.. c:type:: npy_(c)double
- double
+ 64-bit double
.. c:type:: npy_(c)longdouble
diff --git a/doc/source/reference/c-api.iterator.rst b/doc/source/reference/c-api.iterator.rst
index 940452d3c..b77d029cc 100644
--- a/doc/source/reference/c-api.iterator.rst
+++ b/doc/source/reference/c-api.iterator.rst
@@ -593,25 +593,23 @@ Construction and Destruction
code doing iteration can write to this operand to
control which elements will be untouched and which ones will be
modified. This is useful when the mask should be a combination
- of input masks, for example. Mask values can be created
- with the :c:func:`NpyMask_Create` function.
+ of input masks.
.. c:var:: NPY_ITER_WRITEMASKED
.. versionadded:: 1.7
- Indicates that only elements which the operand with
- the ARRAYMASK flag indicates are intended to be modified
- by the iteration. In general, the iterator does not enforce
- this, it is up to the code doing the iteration to follow
- that promise. Code can use the :c:func:`NpyMask_IsExposed`
- inline function to test whether the mask at a particular
- element allows writing.
+ This array is the mask for all `writemasked <numpy.nditer>`
+ operands. The ``writemasked`` flag indicates that only elements
+ where the chosen ARRAYMASK operand is True will be written to.
+ In general, the iterator does not enforce this; it is up to the
+ code doing the iteration to follow that promise.
- When this flag is used, and this operand is buffered, this
- changes how data is copied from the buffer into the array.
+ When the ``writemasked`` flag is used, and this operand is buffered,
+ this changes how data is copied from the buffer into the array.
A masked copying routine is used, which only copies the
- elements in the buffer for which :c:func:`NpyMask_IsExposed`
- returns true from the corresponding element in the ARRAYMASK
- operand.
+ elements in the buffer for which the corresponding element in
+ the ARRAYMASK operand is true.
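The ``writemasked``/``arraymask`` pairing described above is exposed at the Python level through :class:`numpy.nditer` operand flags; a minimal sketch (variable names are illustrative):

```python
import numpy as np

a = np.zeros(5)
mask = np.array([True, False, True, False, True])
# 'arraymask' marks the mask operand; 'writemasked' marks the guarded one
it = np.nditer([a, mask],
               op_flags=[['readwrite', 'writemasked'],
                         ['readonly', 'arraymask']])
with it:
    for x, m in it:
        if m:            # honor the promise: write only exposed elements
            x[...] = 7
print(a)                 # [7. 0. 7. 0. 7.]
```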
@@ -630,7 +628,7 @@ Construction and Destruction
.. c:function:: NpyIter* NpyIter_AdvancedNew( \
npy_intp nop, PyArrayObject** op, npy_uint32 flags, NPY_ORDER order, \
NPY_CASTING casting, npy_uint32* op_flags, PyArray_Descr** op_dtypes, \
- int oa_ndim, int** op_axes, npy_intp* itershape, npy_intp buffersize)
+ int oa_ndim, int** op_axes, npy_intp const* itershape, npy_intp buffersize)
Extends :c:func:`NpyIter_MultiNew` with several advanced options providing
more control over broadcasting and buffering.
@@ -867,7 +865,7 @@ Construction and Destruction
} while (iternext2(iter2));
} while (iternext1(iter1));
-.. c:function:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp* multi_index)
+.. c:function:: int NpyIter_GotoMultiIndex(NpyIter* iter, npy_intp const* multi_index)
Adjusts the iterator to point to the ``ndim`` indices
pointed to by ``multi_index``. Returns an error if a multi-index
@@ -974,19 +972,6 @@ Construction and Destruction
Returns the number of operands in the iterator.
- When :c:data:`NPY_ITER_USE_MASKNA` is used on an operand, a new
- operand is added to the end of the operand list in the iterator
- to track that operand's NA mask. Thus, this equals the number
- of construction operands plus the number of operands for
- which the flag :c:data:`NPY_ITER_USE_MASKNA` was specified.
-
-.. c:function:: int NpyIter_GetFirstMaskNAOp(NpyIter* iter)
-
- .. versionadded:: 1.7
-
- Returns the index of the first NA mask operand in the array. This
- value is equal to the number of operands passed into the constructor.
-
.. c:function:: npy_intp* NpyIter_GetAxisStrideArray(NpyIter* iter, int axis)
Gets the array of strides for the specified axis. Requires that
@@ -1023,16 +1008,6 @@ Construction and Destruction
that are being iterated. The result points into ``iter``,
so the caller does not gain any references to the PyObjects.
-.. c:function:: npy_int8* NpyIter_GetMaskNAIndexArray(NpyIter* iter)
-
- .. versionadded:: 1.7
-
- This gives back a pointer to the ``nop`` indices which map
- construction operands with :c:data:`NPY_ITER_USE_MASKNA` flagged
- to their corresponding NA mask operands and vice versa. For
- operands which were not flagged with :c:data:`NPY_ITER_USE_MASKNA`,
- this array contains negative values.
-
.. c:function:: PyObject* NpyIter_GetIterView(NpyIter* iter, npy_intp i)
This gives back a reference to a new ndarray view, which is a view
diff --git a/doc/source/reference/c-api.types-and-structures.rst b/doc/source/reference/c-api.types-and-structures.rst
index f04d65ee1..a716b5a06 100644
--- a/doc/source/reference/c-api.types-and-structures.rst
+++ b/doc/source/reference/c-api.types-and-structures.rst
@@ -57,8 +57,8 @@ types are place holders that allow the array scalars to fit into a
hierarchy of actual Python types.
-PyArray_Type
-------------
+PyArray_Type and PyArrayObject
+------------------------------
.. c:var:: PyArray_Type
@@ -74,7 +74,7 @@ PyArray_Type
subclasses) will have this structure. For future compatibility,
these structure members should normally be accessed using the
provided macros. If you need a shorter name, then you can make use
- of :c:type:`NPY_AO` which is defined to be equivalent to
+ of :c:type:`NPY_AO` (deprecated) which is defined to be equivalent to
:c:type:`PyArrayObject`.
.. code-block:: c
@@ -91,7 +91,7 @@ PyArray_Type
PyObject *weakreflist;
} PyArrayObject;
-.. c:macro: PyArrayObject.PyObject_HEAD
+.. c:macro:: PyArrayObject.PyObject_HEAD
This is needed by all Python objects. It consists of (at least)
a reference count member ( ``ob_refcnt`` ) and a pointer to the
@@ -130,14 +130,16 @@ PyArray_Type
.. c:member:: PyObject *PyArrayObject.base
This member is used to hold a pointer to another Python object that
- is related to this array. There are two use cases: 1) If this array
- does not own its own memory, then base points to the Python object
- that owns it (perhaps another array object), 2) If this array has
- the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
- :c:data:NPY_ARRAY_WRITEBACKIFCOPY`: flag set, then this array is
- a working copy of a "misbehaved" array. When
- ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to by base
- will be updated with the contents of this array.
+ is related to this array. There are two use cases:
+
+ - If this array does not own its own memory, then base points to the
+ Python object that owns it (perhaps another array object)
+ - If this array has the (deprecated) :c:data:`NPY_ARRAY_UPDATEIFCOPY` or
+ :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` flag set, then this array is a working
+ copy of a "misbehaved" array.
+
+ When ``PyArray_ResolveWritebackIfCopy`` is called, the array pointed to
+ by base will be updated with the contents of this array.
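The first use case is visible from Python through the :attr:`~numpy.ndarray.base` attribute; a minimal sketch:

```python
import numpy as np

owner = np.arange(10)
view = owner[2:5]          # a slice does not own its memory
print(view.base is owner)  # True: base points at the owning object
print(owner.base is None)  # True: owner allocated its own buffer
```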
.. c:member:: PyArray_Descr *PyArrayObject.descr
@@ -163,8 +165,8 @@ PyArray_Type
weakref module).
-PyArrayDescr_Type
------------------
+PyArrayDescr_Type and PyArray_Descr
+-----------------------------------
.. c:var:: PyArrayDescr_Type
@@ -203,14 +205,17 @@ PyArrayDescr_Type
char kind;
char type;
char byteorder;
- char unused;
- int flags;
+ char flags;
int type_num;
int elsize;
int alignment;
PyArray_ArrayDescr *subarray;
PyObject *fields;
+ PyObject *names;
PyArray_ArrFuncs *f;
+ PyObject *metadata;
+ NpyAuxData *c_metadata;
+ npy_hash_t hash;
} PyArray_Descr;
.. c:member:: PyTypeObject *PyArray_Descr.typeobj
@@ -242,7 +247,7 @@ PyArrayDescr_Type
endian), '=' (native), '\|' (irrelevant, ignore). All builtin data-
types have byteorder '='.
-.. c:member:: int PyArray_Descr.flags
+.. c:member:: char PyArray_Descr.flags
A data-type bit-flag that determines if the data-type exhibits object-
array like behavior. Each bit in this member is a flag which are named
@@ -250,11 +255,13 @@ PyArrayDescr_Type
.. c:var:: NPY_ITEM_REFCOUNT
- .. c:var:: NPY_ITEM_HASOBJECT
-
Indicates that items of this data-type must be reference
counted (using :c:func:`Py_INCREF` and :c:func:`Py_DECREF` ).
+ .. c:var:: NPY_ITEM_HASOBJECT
+
+ Same as :c:data:`NPY_ITEM_REFCOUNT`.
+
.. c:var:: NPY_LIST_PICKLE
Indicates arrays of this data-type must be converted to a list
@@ -377,6 +384,11 @@ PyArrayDescr_Type
normally a Python string. These tuples are placed in this
dictionary keyed by name (and also title if given).
+.. c:member:: PyObject *PyArray_Descr.names
+
+ An ordered tuple of field names. It is NULL if no field is
+ defined.
+
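From Python, the same information is exposed as :attr:`numpy.dtype.names` and :attr:`numpy.dtype.fields`; a short sketch:

```python
import numpy as np

dt = np.dtype([('x', np.int32), ('y', np.float64)])
print(dt.names)        # ('x', 'y') -- the ordered field-name tuple
print(dt.fields['y'])  # (field dtype, byte offset) for field 'y'
print(np.dtype(np.int32).names)  # None: no fields defined
```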
.. c:member:: PyArray_ArrFuncs *PyArray_Descr.f
A pointer to a structure containing functions that the type needs
@@ -384,6 +396,20 @@ PyArrayDescr_Type
thing as the universal functions (ufuncs) described later. Their
signatures can vary arbitrarily.
+.. c:member:: PyObject *PyArray_Descr.metadata
+
+ Metadata about this dtype.
+
+.. c:member:: NpyAuxData *PyArray_Descr.c_metadata
+
+ Metadata specific to the C implementation
+ of the particular dtype. Added for NumPy 1.7.0.
+
+.. c:member:: npy_hash_t PyArray_Descr.hash
+
+ Currently unused. Reserved for future use in caching
+ hash values.
+
.. c:type:: PyArray_ArrFuncs
Functions implementing internal features. Not all of these
@@ -508,20 +534,19 @@ PyArrayDescr_Type
and ``is2`` *bytes*, respectively. This function requires
behaved (though not necessarily contiguous) memory.
- .. c:member:: int scanfunc(FILE* fd, void* ip , void* sep , void* arr)
+ .. c:member:: int scanfunc(FILE* fd, void* ip, void* arr)
A pointer to a function that scans (scanf style) one element
of the corresponding type from the file descriptor ``fd`` into
the array memory pointed to by ``ip``. The array is assumed
- to be behaved. If ``sep`` is not NULL, then a separator string
- is also scanned from the file before returning. The last
- argument ``arr`` is the array to be scanned into. A 0 is
- returned if the scan is successful. A negative number
- indicates something went wrong: -1 means the end of file was
- reached before the separator string could be scanned, -4 means
- that the end of file was reached before the element could be
- scanned, and -3 means that the element could not be
- interpreted from the format string. Requires a behaved array.
+ to be behaved.
+ The last argument ``arr`` is the array to be scanned into.
+ Returns the number of receiving arguments successfully assigned
+ (which may be zero if a matching failure occurred before the first
+ receiving argument was assigned), or EOF if an input failure occurs
+ before the first receiving argument is assigned.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
.. c:member:: int fromstr(char* str, void* ip, char** endptr, void* arr)
@@ -532,6 +557,8 @@ PyArrayDescr_Type
string. The last argument ``arr`` is the array into which ip
points (needed for variable-size data- types). Returns 0 on
success or -1 on failure. Requires a behaved array.
+ This function should be called without holding the Python GIL, and
+ has to grab it for error reporting.
.. c:member:: Bool nonzero(void* data, void* arr)
@@ -653,25 +680,28 @@ PyArrayDescr_Type
The :c:data:`PyArray_Type` typeobject implements many of the features of
-Python objects including the tp_as_number, tp_as_sequence,
-tp_as_mapping, and tp_as_buffer interfaces. The rich comparison
-(tp_richcompare) is also used along with new-style attribute lookup
-for methods (tp_methods) and properties (tp_getset). The
-:c:data:`PyArray_Type` can also be sub-typed.
+:c:type:`Python objects <PyTypeObject>` including the :c:member:`tp_as_number
+<PyTypeObject.tp_as_number>`, :c:member:`tp_as_sequence
+<PyTypeObject.tp_as_sequence>`, :c:member:`tp_as_mapping
+<PyTypeObject.tp_as_mapping>`, and :c:member:`tp_as_buffer
+<PyTypeObject.tp_as_buffer>` interfaces. The :c:type:`rich comparison
+<richcmpfunc>` is also used along with new-style attribute lookup for
+members (:c:member:`tp_members <PyTypeObject.tp_members>`) and properties
+(:c:member:`tp_getset <PyTypeObject.tp_getset>`).
+The :c:data:`PyArray_Type` can also be sub-typed.
.. tip::
- The tp_as_number methods use a generic approach to call whatever
- function has been registered for handling the operation. The
- function PyNumeric_SetOps(..) can be used to register functions to
- handle particular mathematical operations (for all arrays). When
- the umath module is imported, it sets the numeric operations for
- all arrays to the corresponding ufuncs. The tp_str and tp_repr
- methods can also be altered using PyString_SetStringFunction(...).
+ The ``tp_as_number`` methods use a generic approach to call whatever
+ function has been registered for handling the operation. When the
+ ``_multiarray_umath`` module is imported, it sets the numeric operations
+ for all arrays to the corresponding ufuncs. This choice can be changed with
+ :c:func:`PyUFunc_ReplaceLoopBySignature`. The ``tp_str`` and ``tp_repr``
+ methods can also be altered using :c:func:`PyArray_SetStringFunction`.
-PyUFunc_Type
-------------
+PyUFunc_Type and PyUFuncObject
+------------------------------
.. c:var:: PyUFunc_Type
@@ -763,8 +793,8 @@ PyUFunc_Type
the identity for this operation. It is only used for a
reduce-like call on an empty array.
- .. c:member:: void PyUFuncObject.functions(char** args, npy_intp* dims,
- npy_intp* steps, void* extradata)
+ .. c:member:: void PyUFuncObject.functions( \
+ char** args, npy_intp* dims, npy_intp* steps, void* extradata)
An array of function pointers --- one for each data type
supported by the ufunc. This is the vector loop that is called
@@ -909,8 +939,8 @@ PyUFunc_Type
- :c:data:`UFUNC_CORE_DIM_SIZE_INFERRED` if the dim size will be
determined from the operands and not from a :ref:`frozen <frozen>` signature
-PyArrayIter_Type
-----------------
+PyArrayIter_Type and PyArrayIterObject
+--------------------------------------
.. c:var:: PyArrayIter_Type
@@ -1019,8 +1049,8 @@ with it through the use of the macros :c:func:`PyArray_ITER_NEXT` (it),
:c:type:`PyArrayIterObject *`.
-PyArrayMultiIter_Type
----------------------
+PyArrayMultiIter_Type and PyArrayMultiIterObject
+------------------------------------------------
.. c:var:: PyArrayMultiIter_Type
@@ -1081,8 +1111,8 @@ PyArrayMultiIter_Type
arrays to be broadcast together. On return, the iterators are
adjusted for broadcasting.
-PyArrayNeighborhoodIter_Type
-----------------------------
+PyArrayNeighborhoodIter_Type and PyArrayNeighborhoodIterObject
+--------------------------------------------------------------
.. c:var:: PyArrayNeighborhoodIter_Type
@@ -1095,8 +1125,33 @@ PyArrayNeighborhoodIter_Type
:c:data:`PyArrayNeighborhoodIter_Type` is the
:c:type:`PyArrayNeighborhoodIterObject`.
-PyArrayFlags_Type
------------------
+ .. code-block:: c
+
+ typedef struct {
+ PyObject_HEAD
+ int nd_m1;
+ npy_intp index, size;
+ npy_intp coordinates[NPY_MAXDIMS];
+ npy_intp dims_m1[NPY_MAXDIMS];
+ npy_intp strides[NPY_MAXDIMS];
+ npy_intp backstrides[NPY_MAXDIMS];
+ npy_intp factors[NPY_MAXDIMS];
+ PyArrayObject *ao;
+ char *dataptr;
+ npy_bool contiguous;
+ npy_intp bounds[NPY_MAXDIMS][2];
+ npy_intp limits[NPY_MAXDIMS][2];
+ npy_intp limits_sizes[NPY_MAXDIMS];
+ npy_iter_get_dataptr_t translate;
+ npy_intp nd;
+ npy_intp dimensions[NPY_MAXDIMS];
+ PyArrayIterObject* _internal_iter;
+ char* constant;
+ int mode;
+ } PyArrayNeighborhoodIterObject;
+
+PyArrayFlags_Type and PyArrayFlagsObject
+----------------------------------------
.. c:var:: PyArrayFlags_Type
@@ -1106,6 +1161,16 @@ PyArrayFlags_Type
attributes or by accessing them as if the object were a dictionary
with the flag names as entries.
+.. c:type:: PyArrayFlagsObject
+
+ .. code-block:: c
+
+ typedef struct PyArrayFlagsObject {
+ PyObject_HEAD
+ PyObject *arr;
+ int flags;
+ } PyArrayFlagsObject;
+
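The dual attribute/dictionary access that this object provides is visible from Python; a short sketch:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
f = a.flags                      # a PyArrayFlagsObject at the C level
print(f.c_contiguous)            # attribute access
print(f['C_CONTIGUOUS'])         # dictionary-style access to the same flag
print(a.T.flags.c_contiguous)    # False: the transpose is F-contiguous
```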
ScalarArrayTypes
----------------
diff --git a/doc/source/reference/distutils.rst b/doc/source/reference/distutils.rst
index 88e533832..46e5ec25e 100644
--- a/doc/source/reference/distutils.rst
+++ b/doc/source/reference/distutils.rst
@@ -214,102 +214,4 @@ template and placed in the build directory to be used instead. Two
forms of template conversion are supported. The first form occurs for
files named <file>.ext.src where ext is a recognized Fortran
extension (f, f90, f95, f77, for, ftn, pyf). The second form is used
-for all other cases.
-
-.. index::
- single: code generation
-
-Fortran files
--------------
-
-This template converter will replicate all **function** and
-**subroutine** blocks in the file with names that contain '<...>'
-according to the rules in '<...>'. The number of comma-separated words
-in '<...>' determines the number of times the block is repeated. What
-these words are indicates what that repeat rule, '<...>', should be
-replaced with in each block. All of the repeat rules in a block must
-contain the same number of comma-separated words indicating the number
-of times that block should be repeated. If the word in the repeat rule
-needs a comma, leftarrow, or rightarrow, then prepend it with a
-backslash ' \'. If a word in the repeat rule matches ' \\<index>' then
-it will be replaced with the <index>-th word in the same repeat
-specification. There are two forms for the repeat rule: named and
-short.
-
-
-Named repeat rule
-^^^^^^^^^^^^^^^^^
-
-A named repeat rule is useful when the same set of repeats must be
-used several times in a block. It is specified using <rule1=item1,
-item2, item3,..., itemN>, where N is the number of times the block
-should be repeated. On each repeat of the block, the entire
-expression, '<...>' will be replaced first with item1, and then with
-item2, and so forth until N repeats are accomplished. Once a named
-repeat specification has been introduced, the same repeat rule may be
-used **in the current block** by referring only to the name
-(i.e. <rule1>.
-
-
-Short repeat rule
-^^^^^^^^^^^^^^^^^
-
-A short repeat rule looks like <item1, item2, item3, ..., itemN>. The
-rule specifies that the entire expression, '<...>' should be replaced
-first with item1, and then with item2, and so forth until N repeats
-are accomplished.
-
-
-Pre-defined names
-^^^^^^^^^^^^^^^^^
-
-The following predefined named repeat rules are available:
-
-- <prefix=s,d,c,z>
-
-- <_c=s,d,c,z>
-
-- <_t=real, double precision, complex, double complex>
-
-- <ftype=real, double precision, complex, double complex>
-
-- <ctype=float, double, complex_float, complex_double>
-
-- <ftypereal=float, double precision, \\0, \\1>
-
-- <ctypereal=float, double, \\0, \\1>
-
-
-Other files
------------
-
-Non-Fortran files use a separate syntax for defining template blocks
-that should be repeated using a variable expansion similar to the
-named repeat rules of the Fortran-specific repeats. The template rules
-for these files are:
-
-1. "/\**begin repeat "on a line by itself marks the beginning of
- a segment that should be repeated.
-
-2. Named variable expansions are defined using #name=item1, item2, item3,
- ..., itemN# and placed on successive lines. These variables are
- replaced in each repeat block with corresponding word. All named
- variables in the same repeat block must define the same number of
- words.
-
-3. In specifying the repeat rule for a named variable, item*N is short-
- hand for item, item, ..., item repeated N times. In addition,
- parenthesis in combination with \*N can be used for grouping several
- items that should be repeated. Thus, #name=(item1, item2)*4# is
- equivalent to #name=item1, item2, item1, item2, item1, item2, item1,
- item2#
-
-4. "\*/ "on a line by itself marks the end of the variable expansion
- naming. The next line is the first line that will be repeated using
- the named rules.
-
-5. Inside the block to be repeated, the variables that should be expanded
- are specified as @name@.
-
-6. "/\**end repeat**/ "on a line by itself marks the previous line
- as the last line of the block to be repeated.
+for all other cases. See :ref:`templating`.
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index 427ad1536..204ebfe08 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -49,11 +49,11 @@ The :class:`MaskedArray` class
.. class:: MaskedArray
- A subclass of :class:`~numpy.ndarray` designed to manipulate numerical arrays with missing data.
+A subclass of :class:`~numpy.ndarray` designed to manipulate numerical arrays with missing data.
- An instance of :class:`MaskedArray` can be thought as the combination of several elements:
+An instance of :class:`MaskedArray` can be thought of as the combination of several elements:
* The :attr:`~MaskedArray.data`, as a regular :class:`numpy.ndarray` of any shape or datatype (the data).
* A boolean :attr:`~numpy.ma.MaskedArray.mask` with the same shape as the data, where a ``True`` value indicates that the corresponding element of the data is invalid.
@@ -62,89 +62,26 @@ The :class:`MaskedArray` class
+.. _ma-attributes:
+
Attributes and properties of masked arrays
------------------------------------------
.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>`
+.. autoattribute:: MaskedArray.data
-.. attribute:: MaskedArray.data
-
- Returns the underlying data, as a view of the masked array.
- If the underlying data is a subclass of :class:`numpy.ndarray`, it is
- returned as such.
-
- >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 1], [1, 0]])
- >>> x.data
- matrix([[1, 2],
- [3, 4]])
-
- The type of the data can be accessed through the :attr:`baseclass`
- attribute.
-
-.. attribute:: MaskedArray.mask
-
- Returns the underlying mask, as an array with the same shape and structure
- as the data, but where all fields are atomically booleans.
- A value of ``True`` indicates an invalid entry.
-
-
-.. attribute:: MaskedArray.recordmask
-
- Returns the mask of the array if it has no named fields. For structured
- arrays, returns a ndarray of booleans where entries are ``True`` if **all**
- the fields are masked, ``False`` otherwise::
-
- >>> x = ma.array([(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)],
- ... mask=[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)],
- ... dtype=[('a', int), ('b', int)])
- >>> x.recordmask
- array([False, False, True, False, False])
-
-
-.. attribute:: MaskedArray.fill_value
-
- Returns the value used to fill the invalid entries of a masked array.
- The value is either a scalar (if the masked array has no named fields),
- or a 0-D ndarray with the same :attr:`dtype` as the masked array if it has
- named fields.
-
- The default filling value depends on the datatype of the array:
-
- ======== ========
- datatype default
- ======== ========
- bool True
- int 999999
- float 1.e20
- complex 1.e20+0j
- object '?'
- string 'N/A'
- ======== ========
-
-
-
-.. attribute:: MaskedArray.baseclass
-
- Returns the class of the underlying data.
-
- >>> x = ma.array(np.matrix([[1, 2], [3, 4]]), mask=[[0, 0], [1, 0]])
- >>> x.baseclass
- <class 'numpy.matrixlib.defmatrix.matrix'>
-
-
-.. attribute:: MaskedArray.sharedmask
+.. autoattribute:: MaskedArray.mask
- Returns whether the mask of the array is shared between several masked arrays.
- If this is the case, any modification to the mask of one array will be
- propagated to the others.
+.. autoattribute:: MaskedArray.recordmask
+.. autoattribute:: MaskedArray.fill_value
-.. attribute:: MaskedArray.hardmask
+.. autoattribute:: MaskedArray.baseclass
- Returns whether the mask is hard (``True``) or soft (``False``).
- When the mask is hard, masked entries cannot be unmasked.
+.. autoattribute:: MaskedArray.sharedmask
+.. autoattribute:: MaskedArray.hardmask
As :class:`MaskedArray` is a subclass of :class:`~numpy.ndarray`, a masked array also inherits all the attributes and properties of a :class:`~numpy.ndarray` instance.
@@ -184,10 +121,8 @@ Conversion
:toctree: generated/
MaskedArray.__float__
- MaskedArray.__hex__
MaskedArray.__int__
MaskedArray.__long__
- MaskedArray.__oct__
MaskedArray.view
MaskedArray.astype
@@ -311,7 +246,7 @@ Truth value of an array (:func:`bool()`):
.. autosummary::
:toctree: generated/
- MaskedArray.__nonzero__
+ MaskedArray.__bool__
Arithmetic:
@@ -328,7 +263,6 @@ Arithmetic:
MaskedArray.__mul__
MaskedArray.__rmul__
MaskedArray.__div__
- MaskedArray.__rdiv__
MaskedArray.__truediv__
MaskedArray.__rtruediv__
MaskedArray.__floordiv__
diff --git a/doc/source/reference/random/bit_generators/bitgenerators.rst b/doc/source/reference/random/bit_generators/bitgenerators.rst
new file mode 100644
index 000000000..1474f7dac
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/bitgenerators.rst
@@ -0,0 +1,11 @@
+:orphan:
+
+BitGenerator
+------------
+
+.. currentmodule:: numpy.random.bit_generator
+
+.. autosummary::
+ :toctree: generated/
+
+ BitGenerator
diff --git a/doc/source/reference/random/bit_generators/index.rst b/doc/source/reference/random/bit_generators/index.rst
new file mode 100644
index 000000000..35d9e5d09
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/index.rst
@@ -0,0 +1,112 @@
+.. _bit_generator:
+
+.. currentmodule:: numpy.random
+
+Bit Generators
+--------------
+
+The random values produced by :class:`~Generator`
+originate in a BitGenerator.  The BitGenerators do not directly provide
+random numbers and only contain methods used for seeding, getting or
+setting the state, jumping or advancing the state, and for accessing
+low-level wrappers for consumption by code that can efficiently
+access the functions provided, e.g., `numba <https://numba.pydata.org>`_.
+
+Supported BitGenerators
+=======================
+
+The included BitGenerators are:
+
+* PCG-64 - The default. A fast generator that supports many parallel streams
+ and can be advanced by an arbitrary amount. See the documentation for
+ :meth:`~.PCG64.advance`. PCG-64 has a period of :math:`2^{128}`. See the `PCG
+ author's page`_ for more details about this class of PRNG.
+* MT19937 - The standard Python BitGenerator. Adds a `~mt19937.MT19937.jumped`
+  method that returns a new generator with state as if :math:`2^{128}` draws have
+  been made.
+* Philox - A counter-based generator capable of being advanced an
+ arbitrary number of steps or generating independent streams. See the
+ `Random123`_ page for more details about this class of bit generators.
+* SFC64 - A fast generator based on random invertible mappings. Usually the
+ fastest generator of the four. See the `SFC author's page`_ for (a little)
+ more detail.
+
+.. _`PCG author's page`: http://www.pcg-random.org/
+.. _`Random123`: https://www.deshawresearch.com/resources_random123.html
+.. _`SFC author's page`: http://pracrand.sourceforge.net/RNG_engines.txt
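The four BitGenerators listed above are interchangeable from the user's point of view: each can be wrapped by a ``Generator`` to draw from the same distributions. A minimal sketch (the seed value is arbitrary):

```python
from numpy.random import Generator, MT19937, PCG64, Philox, SFC64

# Any of the four BitGenerators can back a Generator; the distribution
# methods are identical regardless of the bit-stream source.
for bit_gen in (MT19937, PCG64, Philox, SFC64):
    rg = Generator(bit_gen(1234))
    sample = rg.random(3)
    print(bit_gen.__name__, sample)
```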
+
+.. toctree::
+ :maxdepth: 1
+
+ BitGenerator <bitgenerators>
+ MT19937 <mt19937>
+ PCG64 <pcg64>
+ Philox <philox>
+ SFC64 <sfc64>
+
+Seeding and Entropy
+-------------------
+
+A BitGenerator provides a stream of random values. In order to generate
+reproducible streams, BitGenerators support setting their initial state via a
+seed. All of the provided BitGenerators will take an arbitrary-sized
+non-negative integer, or a list of such integers, as a seed. BitGenerators
+need to take those inputs and process them into a high-quality internal state
+for the BitGenerator. All of the BitGenerators in numpy delegate that task to
+`~SeedSequence`, which uses hashing techniques to ensure that even low-quality
+seeds generate high-quality initial states.
+
+.. code-block:: python
+
+ from numpy.random import PCG64
+
+ bg = PCG64(12345678903141592653589793)
+
+.. end_block
+
+`~SeedSequence` is designed to be convenient for implementing best practices.
+We recommend that a stochastic program defaults to using entropy from the OS so
+that each run is different. The program should print out or log that entropy.
+In order to reproduce a past value, the program should allow the user to
+provide that value through some mechanism (a command-line argument is common)
+so that the user can then re-enter that entropy to reproduce the result.
+`~SeedSequence` can take care of everything except for communicating with the
+user, which is up to you.
+
+.. code-block:: python
+
+ from numpy.random import PCG64, SeedSequence
+
+ # Get the user's seed somehow, maybe through `argparse`.
+ # If the user did not provide a seed, it should return `None`.
+ seed = get_user_seed()
+ ss = SeedSequence(seed)
+ print('seed = {}'.format(ss.entropy))
+ bg = PCG64(ss)
+
+.. end_block
+
+We default to using a 128-bit integer drawn from the OS entropy source. This
+is a good amount of entropy to initialize all of the generators that we have in
+numpy. We do not recommend using small seeds below 32 bits for general use.
+Using just a small set of seeds to instantiate larger state spaces means that
+there are some initial states that are impossible to reach. This creates some
+biases if everyone uses such values.
+
+There will not be anything *wrong* with the results, per se; even a seed of
+0 is perfectly fine thanks to the processing that `~SeedSequence` does. If you
+just need *some* fixed value for unit tests or debugging, feel free to use
+whatever seed you like. But if you want to make inferences from the results or
+publish them, drawing from a larger set of seeds is good practice.
+
+If you need to generate a good seed "offline", then ``SeedSequence().entropy``
+or using ``secrets.randbits(128)`` from the standard library are both
+convenient ways.
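As a sketch of the "offline" workflow described above (the printed seed will differ on every run, which is the point):

```python
import secrets
from numpy.random import Generator, PCG64, SeedSequence

# Draw 128 bits of entropy once, record it somewhere durable.
seed = secrets.randbits(128)
print('seed =', seed)

# Anyone holding the recorded seed reproduces the identical stream.
rg1 = Generator(PCG64(SeedSequence(seed)))
rg2 = Generator(PCG64(SeedSequence(seed)))
assert rg1.random() == rg2.random()
```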
+
+.. autosummary::
+ :toctree: generated/
+
+ SeedSequence
+ bit_generator.ISeedSequence
+ bit_generator.ISpawnableSeedSequence
+ bit_generator.SeedlessSeedSequence
diff --git a/doc/source/reference/random/bit_generators/mt19937.rst b/doc/source/reference/random/bit_generators/mt19937.rst
new file mode 100644
index 000000000..25ba1d7b5
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/mt19937.rst
@@ -0,0 +1,34 @@
+Mersenne Twister (MT19937)
+--------------------------
+
+.. module:: numpy.random.mt19937
+
+.. currentmodule:: numpy.random.mt19937
+
+.. autoclass:: MT19937
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~MT19937.cffi
+ ~MT19937.ctypes
+
+
diff --git a/doc/source/reference/random/bit_generators/pcg64.rst b/doc/source/reference/random/bit_generators/pcg64.rst
new file mode 100644
index 000000000..7aef1e0dd
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/pcg64.rst
@@ -0,0 +1,33 @@
+Permuted Congruential Generator (64-bit, PCG64)
+-----------------------------------------------
+
+.. module:: numpy.random.pcg64
+
+.. currentmodule:: numpy.random.pcg64
+
+.. autoclass:: PCG64
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.advance
+ ~PCG64.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~PCG64.cffi
+ ~PCG64.ctypes
diff --git a/doc/source/reference/random/bit_generators/philox.rst b/doc/source/reference/random/bit_generators/philox.rst
new file mode 100644
index 000000000..5e581e094
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/philox.rst
@@ -0,0 +1,35 @@
+Philox Counter-based RNG
+------------------------
+
+.. module:: numpy.random.philox
+
+.. currentmodule:: numpy.random.philox
+
+.. autoclass:: Philox
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.state
+
+Parallel generation
+===================
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.advance
+ ~Philox.jumped
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~Philox.cffi
+ ~Philox.ctypes
+
+
diff --git a/doc/source/reference/random/bit_generators/sfc64.rst b/doc/source/reference/random/bit_generators/sfc64.rst
new file mode 100644
index 000000000..dc03820ae
--- /dev/null
+++ b/doc/source/reference/random/bit_generators/sfc64.rst
@@ -0,0 +1,28 @@
+SFC64 Small Fast Chaotic PRNG
+-----------------------------
+
+.. module:: numpy.random.sfc64
+
+.. currentmodule:: numpy.random.sfc64
+
+.. autoclass:: SFC64
+ :exclude-members:
+
+State
+=====
+
+.. autosummary::
+ :toctree: generated/
+
+ ~SFC64.state
+
+Extending
+=========
+.. autosummary::
+ :toctree: generated/
+
+ ~SFC64.cffi
+ ~SFC64.ctypes
+
+
+
diff --git a/doc/source/reference/random/entropy.rst b/doc/source/reference/random/entropy.rst
new file mode 100644
index 000000000..0664da6f9
--- /dev/null
+++ b/doc/source/reference/random/entropy.rst
@@ -0,0 +1,6 @@
+System Entropy
+==============
+
+.. module:: numpy.random.entropy
+
+.. autofunction:: random_entropy
diff --git a/doc/source/reference/random/extending.rst b/doc/source/reference/random/extending.rst
new file mode 100644
index 000000000..22f9cb7e4
--- /dev/null
+++ b/doc/source/reference/random/extending.rst
@@ -0,0 +1,165 @@
+.. currentmodule:: numpy.random
+
+Extending
+---------
+The BitGenerators have been designed to be extendable using standard tools for
+high-performance Python -- numba and Cython. The `~Generator` object can also
+be used with user-provided BitGenerators as long as these export a small set of
+required functions.
+
+Numba
+=====
+Numba can be used with either CTypes or CFFI. The current iteration of the
+BitGenerators all export a small set of functions through both interfaces.
+
+This example shows how numba can be used to produce Gaussian random numbers
+via the Marsaglia polar method (a Box-Muller variant) using a pure Python
+implementation which is then compiled. The random numbers are
+provided by ``ctypes.next_double``.
+
+.. code-block:: ipython
+
+ from numpy.random import PCG64
+ import numpy as np
+ import numba as nb
+
+ x = PCG64()
+ f = x.ctypes.next_double
+ s = x.ctypes.state
+ state_addr = x.ctypes.state_address
+
+ def normals(n, state):
+ out = np.empty(n)
+ for i in range((n+1)//2):
+ x1 = 2.0*f(state) - 1.0
+ x2 = 2.0*f(state) - 1.0
+ r2 = x1*x1 + x2*x2
+ while r2 >= 1.0 or r2 == 0.0:
+ x1 = 2.0*f(state) - 1.0
+ x2 = 2.0*f(state) - 1.0
+ r2 = x1*x1 + x2*x2
+ g = np.sqrt(-2.0*np.log(r2)/r2)
+ out[2*i] = g*x1
+ if 2*i+1 < n:
+ out[2*i+1] = g*x2
+ return out
+
+    # Check the pure Python implementation first
+    print(normals(10, s).var())
+    # Compile using Numba
+    normalsj = nb.jit(normals, nopython=True)
+    # Warm up; must use the state address, not the state, with numba
+    normalsj(1, state_addr)
+    %timeit normalsj(1000000, state_addr)
+    print('1,000,000 polar-method (numba/PCG64) randoms')
+    %timeit np.random.standard_normal(1000000)
+    print('1,000,000 Box-Muller (NumPy) randoms')
+
+
+Both CTypes and CFFI allow the more complicated distributions to be used
+directly in Numba after compiling the file distributions.c into a DLL or
+``.so`` shared library. An example showing the use of a more complicated
+distribution is in the examples folder.
+
+.. _randomgen_cython:
+
+Cython
+======
+
+Cython can be used to unpack the ``PyCapsule`` provided by a BitGenerator.
+This example uses `~pcg64.PCG64` and
+``random_gauss_zig``, the Ziggurat-based generator for normals, to fill an
+array. The usual caveats for writing high-performance code using Cython --
+removing bounds and wraparound checks, providing array alignment information
+-- still apply.
+
+.. code-block:: cython
+
+ import numpy as np
+ cimport numpy as np
+ cimport cython
+ from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer
+ from numpy.random.common cimport *
+ from numpy.random.distributions cimport random_gauss_zig
+ from numpy.random import PCG64
+
+
+ @cython.boundscheck(False)
+ @cython.wraparound(False)
+ def normals_zig(Py_ssize_t n):
+ cdef Py_ssize_t i
+ cdef bitgen_t *rng
+ cdef const char *capsule_name = "BitGenerator"
+ cdef double[::1] random_values
+
+ x = PCG64()
+ capsule = x.capsule
+ if not PyCapsule_IsValid(capsule, capsule_name):
+ raise ValueError("Invalid pointer to anon_func_state")
+ rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
+ random_values = np.empty(n)
+ # Best practice is to release GIL and acquire the lock
+ with x.lock, nogil:
+ for i in range(n):
+ random_values[i] = random_gauss_zig(rng)
+ randoms = np.asarray(random_values)
+ return randoms
+
+The BitGenerator can also be directly accessed using the members of the
+``bitgen_t`` structure.
+
+.. code-block:: cython
+
+ @cython.boundscheck(False)
+ @cython.wraparound(False)
+ def uniforms(Py_ssize_t n):
+ cdef Py_ssize_t i
+ cdef bitgen_t *rng
+ cdef const char *capsule_name = "BitGenerator"
+ cdef double[::1] random_values
+
+ x = PCG64()
+ capsule = x.capsule
+    # Optional check that the capsule is from a BitGenerator
+ if not PyCapsule_IsValid(capsule, capsule_name):
+ raise ValueError("Invalid pointer to anon_func_state")
+ # Cast the pointer
+ rng = <bitgen_t *> PyCapsule_GetPointer(capsule, capsule_name)
+ random_values = np.empty(n)
+ with x.lock, nogil:
+ for i in range(n):
+ # Call the function
+ random_values[i] = rng.next_double(rng.state)
+ randoms = np.asarray(random_values)
+ return randoms
+
+These functions along with a minimal setup file are included in the
+examples folder.
+
+New BitGenerators
+=================
+`~Generator` can be used with other user-provided BitGenerators. The simplest
+way to write a new BitGenerator is to examine the pyx file of one of the
+existing BitGenerators. The key structure that must be provided is the
+``capsule`` which contains a ``PyCapsule`` to a struct pointer of type
+``bitgen_t``,
+
+.. code-block:: c
+
+ typedef struct bitgen {
+ void *state;
+ uint64_t (*next_uint64)(void *st);
+ uint32_t (*next_uint32)(void *st);
+ double (*next_double)(void *st);
+ uint64_t (*next_raw)(void *st);
+ } bitgen_t;
+
+which provides 5 pointers. The first is an opaque pointer to the data structure
+used by the BitGenerators. The next three are function pointers which return
+the next 64- and 32-bit unsigned integers and the next random double. The
+final function pointer returns the next raw value; it is used for testing and
+so can be set to the next 64-bit unsigned integer function if not needed.
+Functions inside ``Generator`` use this structure as in
+
+.. code-block:: c
+
+ bitgen_state->next_uint64(bitgen_state->state)
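The same function pointers are also reachable from Python through the ``ctypes`` interface exposed by the bundled BitGenerators, which can be a convenient way to experiment with the raw bit stream before writing C or Cython (a sketch; the seed is arbitrary):

```python
from numpy.random import PCG64

# `next_uint64` consumes the opaque `state` pointer, mirroring the
# bitgen_state->next_uint64(bitgen_state->state) call in C.
bg = PCG64(1234)
next_uint64 = bg.ctypes.next_uint64
state = bg.ctypes.state
raw = [next_uint64(state) for _ in range(3)]
print(raw)
```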
diff --git a/doc/source/reference/random/generator.rst b/doc/source/reference/random/generator.rst
new file mode 100644
index 000000000..c3803bcab
--- /dev/null
+++ b/doc/source/reference/random/generator.rst
@@ -0,0 +1,84 @@
+.. currentmodule:: numpy.random
+
+Random Generator
+----------------
+The `~Generator` provides access to
+a wide range of distributions, and serves as a replacement for
+:class:`~numpy.random.RandomState`. The main difference between
+the two is that ``Generator`` relies on an additional BitGenerator to
+manage state and generate the random bits, which are then transformed into
+random values from useful distributions. The default BitGenerator used by
+``Generator`` is `~PCG64`. The BitGenerator
+can be changed by passing an instantiated BitGenerator to ``Generator``.
+
+
+.. autofunction:: default_rng
+
+.. autoclass:: Generator
+ :exclude-members:
+
+Accessing the BitGenerator
+==========================
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.bit_generator
+
+Simple random data
+==================
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.integers
+ ~Generator.random
+ ~Generator.choice
+ ~Generator.bytes
+
+Permutations
+============
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.shuffle
+ ~Generator.permutation
+
+Distributions
+=============
+.. autosummary::
+ :toctree: generated/
+
+ ~Generator.beta
+ ~Generator.binomial
+ ~Generator.chisquare
+ ~Generator.dirichlet
+ ~Generator.exponential
+ ~Generator.f
+ ~Generator.gamma
+ ~Generator.geometric
+ ~Generator.gumbel
+ ~Generator.hypergeometric
+ ~Generator.laplace
+ ~Generator.logistic
+ ~Generator.lognormal
+ ~Generator.logseries
+ ~Generator.multinomial
+ ~Generator.multivariate_normal
+ ~Generator.negative_binomial
+ ~Generator.noncentral_chisquare
+ ~Generator.noncentral_f
+ ~Generator.normal
+ ~Generator.pareto
+ ~Generator.poisson
+ ~Generator.power
+ ~Generator.rayleigh
+ ~Generator.standard_cauchy
+ ~Generator.standard_exponential
+ ~Generator.standard_gamma
+ ~Generator.standard_normal
+ ~Generator.standard_t
+ ~Generator.triangular
+ ~Generator.uniform
+ ~Generator.vonmises
+ ~Generator.wald
+ ~Generator.weibull
+ ~Generator.zipf
diff --git a/doc/source/reference/random/index.rst b/doc/source/reference/random/index.rst
new file mode 100644
index 000000000..01f9981a2
--- /dev/null
+++ b/doc/source/reference/random/index.rst
@@ -0,0 +1,212 @@
+.. _numpyrandom:
+
+.. py:module:: numpy.random
+
+.. currentmodule:: numpy.random
+
+Random sampling (:mod:`numpy.random`)
+=====================================
+
+NumPy's random number routines produce pseudo-random numbers using
+combinations of a `BitGenerator` to create sequences and a `Generator`
+to use those sequences to sample from different statistical distributions:
+
+* BitGenerators: Objects that generate random numbers. These are typically
+ unsigned integer words filled with sequences of either 32 or 64 random bits.
+* Generators: Objects that transform sequences of random bits from a
+ BitGenerator into sequences of numbers that follow a specific probability
+ distribution (such as uniform, Normal or Binomial) within a specified
+ interval.
+
+Since NumPy version 1.17.0 the Generator can be initialized with a
+number of different BitGenerators. It exposes many different probability
+distributions. See `NEP 19 <https://www.numpy.org/neps/
+nep-0019-rng-policy.html>`_ for context on the updated NumPy random
+number routines. The legacy `.RandomState` random number routines are still
+available, but limited to a single BitGenerator.
+
+For convenience and backward compatibility, a single `~.RandomState`
+instance's methods are imported into the numpy.random namespace; see
+:ref:`legacy` for the complete list.
+
+Quick Start
+-----------
+
+By default, `~Generator` uses bits provided by `~pcg64.PCG64`, which
+has better statistical properties than the legacy MT19937 random
+number generator used in `~.RandomState`.
+
+.. code-block:: python
+
+ # Uses the old numpy.random.RandomState
+ from numpy import random
+ random.standard_normal()
+
+`~Generator` can be used as a replacement for `~.RandomState`. Both class
+instances now hold an internal `BitGenerator` instance to provide the bit
+stream; it is accessible as ``gen.bit_generator``. Some long-overdue API
+cleanup means that legacy and compatibility methods have been removed from
+`~.Generator`:
+
+=================== ============== ============
+`~.RandomState` `~.Generator` Notes
+------------------- -------------- ------------
+``random_sample``, ``random`` Compatible with `random.random`
+``rand``
+------------------- -------------- ------------
+``randint``, ``integers`` Add an ``endpoint`` kwarg
+``random_integers``
+------------------- -------------- ------------
+``tomaxint`` removed Use ``integers(0, np.iinfo(np.int).max,``
+ ``endpoint=False)``
+------------------- -------------- ------------
+``seed`` removed Use `~.SeedSequence.spawn`
+=================== ============== ============
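The ``endpoint`` keyword noted in the table above replaces the closed-interval behavior of the removed ``random_integers``. A brief sketch (the seed is arbitrary):

```python
from numpy.random import default_rng

rg = default_rng(12345)
# Half-open interval [1, 6), like the legacy randint
draws = rg.integers(1, 6, size=10)
# Closed interval [1, 6], like the legacy random_integers
die = rg.integers(1, 6, size=10, endpoint=True)
print(draws)
print(die)
```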
+
+See :ref:`new-or-different` for more information.
+
+.. code-block:: python
+
+ # As replacement for RandomState(); default_rng() instantiates Generator with
+ # the default PCG64 BitGenerator.
+ from numpy.random import default_rng
+ rg = default_rng()
+ rg.standard_normal()
+ rg.bit_generator
+
+Something like the following code can be used to support both ``RandomState``
+and ``Generator``, with the understanding that the interfaces are slightly
+different:
+
+.. code-block:: python
+
+ try:
+ rg_integers = rg.integers
+ except AttributeError:
+ rg_integers = rg.randint
+ a = rg_integers(1000)
+
+Seeds can be passed to any of the BitGenerators. The provided value is mixed
+via `~.SeedSequence` to spread a possible sequence of seeds across a wider
+range of initialization states for the BitGenerator. Here `~.PCG64` is used and
+is wrapped with a `~.Generator`.
+
+.. code-block:: python
+
+ from numpy.random import Generator, PCG64
+ rg = Generator(PCG64(12345))
+ rg.standard_normal()
+
+Introduction
+------------
+The new infrastructure takes a different approach to producing random numbers
+from the `~.RandomState` object. Random number generation is separated into
+two components: a bit generator and a random generator.
+
+The `BitGenerator` has a limited set of responsibilities. It manages state
+and provides functions to produce random doubles and random unsigned 32- and
+64-bit values.
+
+The `random generator <Generator>` takes the
+bit generator-provided stream and transforms it into more useful
+distributions, e.g., simulated normal random values. This structure allows
+alternative bit generators to be used with little code duplication.
+
+The `Generator` is the user-facing object that is nearly identical to
+`.RandomState`. The canonical method to initialize a generator passes a
+`~.PCG64` bit generator as the sole argument.
+
+.. code-block:: python
+
+ from numpy.random import default_rng
+ rg = default_rng(12345)
+ rg.random()
+
+One can also instantiate `Generator` directly with a `BitGenerator` instance.
+To use the older `~mt19937.MT19937` algorithm, one can instantiate it directly
+and pass it to `Generator`.
+
+.. code-block:: python
+
+ from numpy.random import Generator, MT19937
+ rg = Generator(MT19937(12345))
+ rg.random()
+
+What's New or Different
+~~~~~~~~~~~~~~~~~~~~~~~
+.. warning::
+
+ The Box-Muller method used to produce NumPy's normals is no longer available
+ in `Generator`. It is not possible to reproduce the exact random
+ values using Generator for the normal distribution or any other
+ distribution that relies on the normal such as the `.RandomState.gamma` or
+ `.RandomState.standard_t`. If you require bitwise backward compatible
+ streams, use `.RandomState`.
+
+* The Generator's normal, exponential and gamma functions use 256-step Ziggurat
+ methods which are 2-10 times faster than NumPy's Box-Muller or inverse CDF
+ implementations.
+* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
+  to produce either single or double precision uniform random variables for
+  select distributions.
+* Optional ``out`` argument that allows existing arrays to be filled for
+  select distributions.
+* `~entropy.random_entropy` provides access to the system
+ source of randomness that is used in cryptographic applications (e.g.,
+ ``/dev/urandom`` on Unix).
+* All BitGenerators can produce doubles, uint64s and uint32s via CTypes
+ (`~.PCG64.ctypes`) and CFFI (`~.PCG64.cffi`). This allows the bit generators
+ to be used in numba.
+* The bit generators can be used in downstream projects via
+ :ref:`Cython <randomgen_cython>`.
+* `~.Generator.integers` is now the canonical way to generate integer
+ random numbers from a discrete uniform distribution. The ``rand`` and
+ ``randn`` methods are only available through the legacy `~.RandomState`.
+ The ``endpoint`` keyword can be used to specify open or closed intervals.
+ This replaces both ``randint`` and the deprecated ``random_integers``.
+* `~.Generator.random` is now the canonical way to generate floating-point
+ random numbers, which replaces `.RandomState.random_sample`,
+ `.RandomState.sample`, and `.RandomState.ranf`. This is consistent with
+ Python's `random.random`.
+* All BitGenerators in numpy use `~SeedSequence` to convert seeds into
+ initialized states.
+
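The ``dtype`` and ``out`` arguments mentioned above can be sketched as follows (the seed is arbitrary):

```python
import numpy as np
from numpy.random import default_rng

rg = default_rng(42)
# dtype selects single- or double-precision output
singles = rg.random(4, dtype=np.float32)
# out fills an existing, well-behaved (writable, aligned) array in place
buf = np.empty(4)
rg.standard_normal(out=buf)
print(singles.dtype, buf)
```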
+See :ref:`new-or-different` for a complete list of improvements and
+differences from the traditional ``RandomState``.
+
+Parallel Generation
+~~~~~~~~~~~~~~~~~~~
+
+The included generators can be used in parallel, distributed applications in
+one of three ways:
+
+* :ref:`seedsequence-spawn`
+* :ref:`independent-streams`
+* :ref:`parallel-jumped`
+
+Concepts
+--------
+.. toctree::
+ :maxdepth: 1
+
+ generator
+ legacy mtrand <legacy>
+ BitGenerators, SeedSequences <bit_generators/index>
+
+Features
+--------
+.. toctree::
+ :maxdepth: 2
+
+ Parallel Applications <parallel>
+ Multithreaded Generation <multithreading>
+ new-or-different
+ Comparing Performance <performance>
+ extending
+ Reading System Entropy <entropy>
+
+Original Source
+~~~~~~~~~~~~~~~
+
+This package was developed independently of NumPy and was integrated in version
+1.17.0. The original repo is at https://github.com/bashtage/randomgen.
diff --git a/doc/source/reference/random/legacy.rst b/doc/source/reference/random/legacy.rst
new file mode 100644
index 000000000..04d4d3569
--- /dev/null
+++ b/doc/source/reference/random/legacy.rst
@@ -0,0 +1,125 @@
+.. currentmodule:: numpy.random
+
+.. _legacy:
+
+Legacy Random Generation
+------------------------
+`~mtrand.RandomState` provides access to the
+legacy generators. This generator is considered frozen and will have
+no further improvements. It is guaranteed to produce the same values
+as the final point release of NumPy v1.16. These all depend on Box-Muller
+normals or inverse CDF exponentials or gammas. This class should only be used
+if it is essential to have randoms that are identical to what
+would have been produced by previous versions of NumPy.
+
+`~mtrand.RandomState` adds additional information
+to the state which is required when using Box-Muller normals since these
+are produced in pairs. It is important to use
+`~mtrand.RandomState.get_state`, and not the underlying bit generator's
+``state``, when accessing the state so that these extra values are saved.
+
+Although we provide the `~mt19937.MT19937` BitGenerator for use independent of
+`~mtrand.RandomState`, note that its default seeding uses `~SeedSequence`
+rather than the legacy seeding algorithm. `~mtrand.RandomState` will use the
+legacy seeding algorithm. The methods to use the legacy seeding algorithm are
+currently private as the main reason to use them is just to implement
+`~mtrand.RandomState`. However, one can reset the state of `~mt19937.MT19937`
+using the state of the `~mtrand.RandomState`:
+
+.. code-block:: python
+
+ from numpy.random import MT19937
+ from numpy.random import RandomState
+
+ rs = RandomState(12345)
+ mt19937 = MT19937()
+ mt19937.state = rs.get_state()
+ rs2 = RandomState(mt19937)
+
+ # Same output
+ rs.standard_normal()
+ rs2.standard_normal()
+
+ rs.random()
+ rs2.random()
+
+ rs.standard_exponential()
+ rs2.standard_exponential()
+
+
+.. currentmodule:: numpy.random.mtrand
+
+.. autoclass:: RandomState
+ :exclude-members:
+
+Seeding and State
+=================
+
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.get_state
+ ~RandomState.set_state
+ ~RandomState.seed
+
+Simple random data
+==================
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.rand
+ ~RandomState.randn
+ ~RandomState.randint
+ ~RandomState.random_integers
+ ~RandomState.random_sample
+ ~RandomState.choice
+ ~RandomState.bytes
+
+Permutations
+============
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.shuffle
+ ~RandomState.permutation
+
+Distributions
+=============
+.. autosummary::
+ :toctree: generated/
+
+ ~RandomState.beta
+ ~RandomState.binomial
+ ~RandomState.chisquare
+ ~RandomState.dirichlet
+ ~RandomState.exponential
+ ~RandomState.f
+ ~RandomState.gamma
+ ~RandomState.geometric
+ ~RandomState.gumbel
+ ~RandomState.hypergeometric
+ ~RandomState.laplace
+ ~RandomState.logistic
+ ~RandomState.lognormal
+ ~RandomState.logseries
+ ~RandomState.multinomial
+ ~RandomState.multivariate_normal
+ ~RandomState.negative_binomial
+ ~RandomState.noncentral_chisquare
+ ~RandomState.noncentral_f
+ ~RandomState.normal
+ ~RandomState.pareto
+ ~RandomState.poisson
+ ~RandomState.power
+ ~RandomState.rayleigh
+ ~RandomState.standard_cauchy
+ ~RandomState.standard_exponential
+ ~RandomState.standard_gamma
+ ~RandomState.standard_normal
+ ~RandomState.standard_t
+ ~RandomState.triangular
+ ~RandomState.uniform
+ ~RandomState.vonmises
+ ~RandomState.wald
+ ~RandomState.weibull
+ ~RandomState.zipf
diff --git a/doc/source/reference/random/multithreading.rst b/doc/source/reference/random/multithreading.rst
new file mode 100644
index 000000000..6883d3672
--- /dev/null
+++ b/doc/source/reference/random/multithreading.rst
@@ -0,0 +1,108 @@
+Multithreaded Generation
+========================
+
+The four core distributions (:meth:`~.Generator.random`,
+:meth:`~.Generator.standard_normal`, :meth:`~.Generator.standard_exponential`,
+and :meth:`~.Generator.standard_gamma`) all allow existing arrays to be filled
+using the ``out`` keyword argument. Existing arrays need to be contiguous and
+well-behaved (writable and aligned). Under normal circumstances, arrays
+created using the common constructors such as :func:`numpy.empty` will satisfy
+these requirements.
+
+This example makes use of Python 3 :mod:`concurrent.futures` to fill an array
+using multiple threads. Threads are long-lived so that repeated calls do not
+incur any additional overhead from thread creation. The underlying
+BitGenerator is `PCG64` which is fast, has a long period and supports
+using `PCG64.jumped` to return a new generator while advancing the
+state. The random numbers generated are reproducible in the sense that the same
+seed will produce the same outputs.
+
+.. code-block:: ipython
+
+ from numpy.random import Generator, PCG64
+ import multiprocessing
+ import concurrent.futures
+ import numpy as np
+
+    class MultithreadedRNG(object):
+        def __init__(self, n, seed=None, threads=None):
+            seed_rg = PCG64(seed)
+            if threads is None:
+                threads = multiprocessing.cpu_count()
+            self.threads = threads
+
+            # Wrap each jumped bit generator in a Generator so that
+            # standard_normal is available.
+            self._random_generators = [Generator(seed_rg)]
+            last_rg = seed_rg
+            for _ in range(0, threads-1):
+                new_rg = last_rg.jumped()
+                self._random_generators.append(Generator(new_rg))
+                last_rg = new_rg
+
+            self.n = n
+            self.executor = concurrent.futures.ThreadPoolExecutor(threads)
+            self.values = np.empty(n)
+            self.step = np.ceil(n / threads).astype(np.int_)
+
+ def fill(self):
+ def _fill(random_state, out, first, last):
+ random_state.standard_normal(out=out[first:last])
+
+ futures = {}
+ for i in range(self.threads):
+ args = (_fill,
+ self._random_generators[i],
+ self.values,
+ i * self.step,
+ (i + 1) * self.step)
+ futures[self.executor.submit(*args)] = i
+ concurrent.futures.wait(futures)
+
+ def __del__(self):
+ self.executor.shutdown(False)
+
+
+The multithreaded random number generator can be used to fill an array.
+The ``values`` attribute shows the zero value before the fill and the
+random value after.
+
+.. code-block:: ipython
+
+ In [2]: mrng = MultithreadedRNG(10000000, seed=0)
+ ...: print(mrng.values[-1])
+ 0.0
+
+ In [3]: mrng.fill()
+ ...: print(mrng.values[-1])
+ 3.296046120254392
+
+The time required to fill the array using multiple threads can be compared to
+the time required to generate the same values using a single thread.
+
+.. code-block:: ipython
+
+ In [4]: print(mrng.threads)
+ ...: %timeit mrng.fill()
+
+ 4
+ 32.8 ms ± 2.71 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
+
+The single-threaded call fills the same array using one ``Generator`` directly.
+
+.. code-block:: ipython
+
+ In [5]: values = np.empty(10000000)
+ ...: rg = Generator(PCG64())
+ ...: %timeit rg.standard_normal(out=values)
+
+ 99.6 ms ± 222 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
+
+The gains are substantial and the scaling is reasonable even for arrays that
+are only moderately large. The gains are even larger when compared to a call
+that does not use an existing array, due to array creation overhead.
+
+.. code-block:: ipython
+
+ In [6]: rg = Generator(PCG64())
+ ...: %timeit rg.standard_normal(10000000)
+
+ 125 ms ± 309 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
diff --git a/doc/source/reference/random/new-or-different.rst b/doc/source/reference/random/new-or-different.rst
new file mode 100644
index 000000000..5442f46c9
--- /dev/null
+++ b/doc/source/reference/random/new-or-different.rst
@@ -0,0 +1,118 @@
+.. _new-or-different:
+
+.. currentmodule:: numpy.random
+
+What's New or Different
+-----------------------
+
+.. warning::
+
+ The Box-Muller method used to produce NumPy's normals is no longer available
+ in `Generator`. It is not possible to reproduce the exact random
+ values using ``Generator`` for the normal distribution or any other
+ distribution that relies on the normal such as the `gamma` or
+ `standard_t`. If you require bitwise backward compatible
+ streams, use `RandomState`.
+
+Quick comparison of legacy `mtrand <legacy>`_ to the new `Generator`
+
+================== ==================== =============
+Feature Older Equivalent Notes
+------------------ -------------------- -------------
+`~.Generator` `~.RandomState` ``Generator`` requires a stream
+ source, called a `BitGenerator
+                                         <bit_generators>`. A number of these
+ are provided. ``RandomState`` uses
+ the Mersenne Twister `~.MT19937` by
+ default, but can also be instantiated
+ with any BitGenerator.
+------------------ -------------------- -------------
+``random`` ``random_sample``, Access the values in a BitGenerator,
+ ``rand`` convert them to ``float64`` in the
+                                         interval ``[0.0, 1.0)``.
+ In addition to the ``size`` kwarg, now
+ supports ``dtype='d'`` or ``dtype='f'``,
+ and an ``out`` kwarg to fill a user-
+ supplied array.
+
+ Many other distributions are also
+ supported.
+------------------ -------------------- -------------
+``integers`` ``randint``, Use the ``endpoint`` kwarg to adjust
+                   ``random_integers``  the inclusion or exclusion of the
+ ``high`` interval endpoint
+================== ==================== =============
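As a small illustrative sketch of the last row (the seed ``1234`` is arbitrary), ``endpoint`` controls whether ``high`` is included:

```python
import numpy as np
from numpy.random import Generator, PCG64

rng = Generator(PCG64(1234))

# endpoint=False (the default) excludes ``high``, like the legacy ``randint``
draws = rng.integers(0, 10, size=1000)

# endpoint=True includes ``high``, like the deprecated ``random_integers``
draws_inc = rng.integers(0, 10, size=1000, endpoint=True)

print(draws.max(), draws_inc.max())
```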
+
+And in more detail:
+
+* `~.entropy.random_entropy` provides access to the system
+ source of randomness that is used in cryptographic applications (e.g.,
+ ``/dev/urandom`` on Unix).
+* Simulate from the complex normal distribution
+ (`~.Generator.complex_normal`)
+* The normal, exponential and gamma generators in
+  `~.Generator.standard_normal`, `~.Generator.standard_exponential` and
+  `~.Generator.standard_gamma` use 256-step Ziggurat methods, which are
+  2-10 times faster than NumPy's legacy implementations.
+* `~.Generator.integers` is now the canonical way to generate integer
+ random numbers from a discrete uniform distribution. The ``rand`` and
+ ``randn`` methods are only available through the legacy `~.RandomState`.
+ This replaces both ``randint`` and the deprecated ``random_integers``.
+* The Box-Muller method used to produce NumPy's normals is no longer available.
+* All bit generators can produce doubles, uint64s and
+ uint32s via CTypes (`~PCG64.ctypes`) and CFFI (`~PCG64.cffi`).
+ This allows these bit generators to be used in numba.
+* The bit generators can be used in downstream projects via
+ Cython.
+
+
+.. ipython:: python
+
+ from numpy.random import Generator, PCG64
+ import numpy.random
+ rg = Generator(PCG64())
+ %timeit rg.standard_normal(100000)
+ %timeit numpy.random.standard_normal(100000)
+
+.. ipython:: python
+
+ %timeit rg.standard_exponential(100000)
+ %timeit numpy.random.standard_exponential(100000)
+
+.. ipython:: python
+
+ %timeit rg.standard_gamma(3.0, 100000)
+ %timeit numpy.random.standard_gamma(3.0, 100000)
+
+* Optional ``dtype`` argument that accepts ``np.float32`` or ``np.float64``
+  to produce either single or double precision random variables for
+  select distributions
+
+ * Uniforms (`~.Generator.random` and `~.Generator.integers`)
+ * Normals (`~.Generator.standard_normal`)
+ * Standard Gammas (`~.Generator.standard_gamma`)
+ * Standard Exponentials (`~.Generator.standard_exponential`)
+
+.. ipython:: python
+
+ rg = Generator(PCG64(0))
+ rg.random(3, dtype='d')
+ rg.random(3, dtype='f')
+
+* Optional ``out`` argument that allows existing arrays to be filled for
+ select distributions
+
+ * Uniforms (`~.Generator.random`)
+ * Normals (`~.Generator.standard_normal`)
+ * Standard Gammas (`~.Generator.standard_gamma`)
+ * Standard Exponentials (`~.Generator.standard_exponential`)
+
+ This allows multithreading to fill large arrays in chunks using suitable
+ BitGenerators in parallel.
+
+.. ipython:: python
+
+ existing = np.zeros(4)
+ rg.random(out=existing[:2])
+ print(existing)
+
diff --git a/doc/source/reference/random/parallel.rst b/doc/source/reference/random/parallel.rst
new file mode 100644
index 000000000..2f79f22d8
--- /dev/null
+++ b/doc/source/reference/random/parallel.rst
@@ -0,0 +1,193 @@
+Parallel Random Number Generation
+=================================
+
+There are three strategies implemented that can be used to produce
+repeatable pseudo-random numbers across multiple processes (local
+or distributed).
+
+.. currentmodule:: numpy.random
+
+.. _seedsequence-spawn:
+
+`~SeedSequence` spawning
+------------------------
+
+`~SeedSequence` `implements an algorithm`_ to process a user-provided seed,
+typically as an integer of some size, and to convert it into an initial state for
+a `~BitGenerator`. It uses hashing techniques to ensure that low-quality seeds
+are turned into high quality initial states (at least, with very high
+probability).
+
+For example, `~mt19937.MT19937` has a state consisting of 624
+`uint32` integers. A naive way to take a 32-bit integer seed would be to just set
+the last element of the state to the 32-bit seed and leave the rest 0s. This is
+a valid state for `~mt19937.MT19937`, but not a good one. The Mersenne Twister
+algorithm `suffers if there are too many 0s`_. Similarly, two adjacent 32-bit
+integer seeds (i.e. ``12345`` and ``12346``) would produce very similar
+streams.
+
+`~SeedSequence` avoids these problems by using successions of integer hashes
+with good `avalanche properties`_ to ensure that flipping any bit in the
+input has about a 50% chance of flipping any bit in the output. Two input seeds
+that are very close to each other will produce initial states that are very far
+from each other (with very high probability). It is also constructed in such
+a way that you can provide arbitrary-sized integers or lists of integers.
+`~SeedSequence` will take all of the bits that you provide and mix them
+together to produce however many bits the consuming `~BitGenerator` needs to
+initialize itself.
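A short sketch of this flexibility; ``generate_state`` is the method a consuming ``BitGenerator`` uses to pull out however many words of state it needs:

```python
from numpy.random import SeedSequence

# Either a (possibly very large) integer or a list of integers works
ss_int = SeedSequence(12345678901234567890123456789)
ss_list = SeedSequence([1, 2, 3])

# Pull out however many 32-bit words of state a consumer would need
state4 = ss_int.generate_state(4)
state8 = ss_list.generate_state(8)
print(state4.dtype, state8.shape)
```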
+
+These properties together mean that we can safely mix together the usual
+user-provided seed with simple incrementing counters to get `~BitGenerator`
+states that are (to very high probability) independent of each other. We can
+wrap this together into an API that is easy to use and difficult to misuse.
+
+.. code-block:: python
+
+ from numpy.random import SeedSequence, default_rng
+
+ ss = SeedSequence(12345)
+
+ # Spawn off 10 child SeedSequences to pass to child processes.
+ child_seeds = ss.spawn(10)
+ streams = [default_rng(s) for s in child_seeds]
+
+.. end_block
+
+Child `~SeedSequence` objects can also spawn to make grandchildren, and so on.
+Each `~SeedSequence` has its position in the tree of spawned `~SeedSequence`
+objects mixed in with the user-provided seed to generate independent (with very
+high probability) streams.
+
+.. code-block:: python
+
+ grandchildren = child_seeds[0].spawn(4)
+ grand_streams = [default_rng(s) for s in grandchildren]
+
+.. end_block
+
+This feature lets you make local decisions about when and how to split up
+streams without coordination between processes. You do not have to preallocate
+space to avoid overlapping or request streams from a common global service. This
+general "tree-hashing" scheme is `not unique to numpy`_ but is not yet widespread.
+Python has increasingly-flexible mechanisms for parallelization available, and
+this scheme fits in very well with that kind of use.
+
+Using this scheme, an upper bound on the probability of a collision can be
+estimated if you know the number of streams that you derive. `~SeedSequence`
+hashes its inputs, both the seed and the spawn-tree-path, down to a 128-bit
+pool by default. The probability that there is a collision in
+that pool, pessimistically-estimated ([1]_), will be about :math:`n^2 \cdot 2^{-128}` where
+`n` is the number of streams spawned. If a program uses an aggressive million
+streams, about :math:`2^{20}`, then the probability that at least one pair of
+them are identical is about :math:`2^{-88}`, which is in solidly-ignorable
+territory ([2]_).
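The arithmetic behind that estimate can be checked directly:

```python
import math

# Pessimistic birthday-style bound: P(collision) ~ n**2 * 2**-128
n = 2**20                 # an aggressive million streams
bound = n**2 / 2**128     # exactly 2**-88 in floating point
print(math.log2(bound))
```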
+
+.. [1] The algorithm is carefully designed to eliminate a number of possible
+ ways to collide. For example, if one only does one level of spawning, it
+ is guaranteed that all states will be unique. But it's easier to
+ estimate the naive upper bound on a napkin and take comfort knowing
+ that the probability is actually lower.
+
+.. [2] In this calculation, we can ignore the amount of numbers drawn from each
+ stream. Each of the PRNGs we provide has some extra protection built in
+ that avoids overlaps if the `~SeedSequence` pools differ in the
+ slightest bit. `~pcg64.PCG64` has :math:`2^{127}` separate cycles
+ determined by the seed in addition to the position in the
+ :math:`2^{128}` long period for each cycle, so one has to both get on or
+ near the same cycle *and* seed a nearby position in the cycle.
+ `~philox.Philox` has completely independent cycles determined by the seed.
+ `~sfc64.SFC64` incorporates a 64-bit counter so every unique seed is at
+ least :math:`2^{64}` iterations away from any other seed. And
+ finally, `~mt19937.MT19937` has just an unimaginably huge period. Getting
+ a collision internal to `~SeedSequence` is the way a failure would be
+ observed.
+
+.. _`implements an algorithm`: http://www.pcg-random.org/posts/developing-a-seed_seq-alternative.html
+.. _`suffers if there are too many 0s`: http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/emt19937ar.html
+.. _`avalanche properties`: https://en.wikipedia.org/wiki/Avalanche_effect
+.. _`not unique to numpy`: https://www.iro.umontreal.ca/~lecuyer/myftp/papers/parallel-rng-imacs.pdf
+
+
+.. _independent-streams:
+
+Independent Streams
+-------------------
+
+:class:`~philox.Philox` is a counter-based RNG which generates values by
+encrypting an incrementing counter using weak cryptographic primitives. The
+seed determines the key that is used for the encryption. Unique keys create
+unique, independent streams. :class:`~philox.Philox` lets you bypass the
+seeding algorithm to directly set the 128-bit key. Similar, but different, keys
+will still create independent streams.
+
+.. code-block:: python
+
+ import secrets
+ from numpy.random import Philox
+
+ # 128-bit number as a seed
+ root_seed = secrets.getrandbits(128)
+ streams = [Philox(key=root_seed + stream_id) for stream_id in range(10)]
+
+.. end_block
+
+This scheme does require that you avoid reusing stream IDs. This may require
+coordination between the parallel processes.
+
+
+.. _parallel-jumped:
+
+Jumping the BitGenerator state
+------------------------------
+
+``jumped`` advances the state of the BitGenerator *as-if* a large number of
+random numbers have been drawn, and returns a new instance with this state.
+The specific number of draws varies by BitGenerator, and ranges from
+:math:`2^{64}` to :math:`2^{128}`. Additionally, the *as-if* draws also depend
+on the size of the default random number produced by the specific BitGenerator.
+The BitGenerators that support ``jumped``, along with the period of the
+BitGenerator, the size of the jump and the bits in the default unsigned random
+are listed below.
+
++-----------------+-------------------------+-------------------------+-------------------------+
+| BitGenerator | Period | Jump Size | Bits |
++=================+=========================+=========================+=========================+
+| MT19937 | :math:`2^{19937}` | :math:`2^{128}` | 32 |
++-----------------+-------------------------+-------------------------+-------------------------+
+| PCG64 | :math:`2^{128}` | :math:`~2^{127}` ([3]_) | 64 |
++-----------------+-------------------------+-------------------------+-------------------------+
+| Philox | :math:`2^{256}` | :math:`2^{128}` | 64 |
++-----------------+-------------------------+-------------------------+-------------------------+
+
+.. [3] The jump size is :math:`(\phi-1)*2^{128}` where :math:`\phi` is the
+ golden ratio. As the jumps wrap around the period, the actual distances
+ between neighboring streams will slowly grow smaller than the jump size,
+ but using the golden ratio this way is a classic method of constructing
+ a low-discrepancy sequence that spreads out the states around the period
+ optimally. You will not be able to jump enough to make those distances
+ small enough to overlap in your lifetime.
+
+``jumped`` can be used to produce blocks of random numbers that are long
+enough not to overlap.
+
+.. code-block:: python
+
+ import secrets
+ from numpy.random import PCG64
+
+ seed = secrets.getrandbits(128)
+ blocked_rng = []
+ rng = PCG64(seed)
+ for i in range(10):
+ blocked_rng.append(rng.jumped(i))
+
+.. end_block
+
+When using ``jumped``, one does have to take care not to jump to a stream that
+was already used. In the above example, one could not later use
+``blocked_rng[0].jumped()`` as it would overlap with ``blocked_rng[1]``. As
+with the independent streams, if the main process here wants to split off 10
+more streams by jumping, then it needs to start with ``range(10, 20)``,
+otherwise it would recreate the same streams. On the other hand, if you
+carefully construct the streams, then you are guaranteed to have streams that
+do not overlap.
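Continuing the example above, a later round of splitting must use jump counts that have not been used yet (a sketch; the fixed seed is hypothetical, chosen for illustration):

```python
from numpy.random import PCG64

seed = 20190417  # a hypothetical fixed seed for illustration
rng = PCG64(seed)

# First batch of 10 non-overlapping streams
first_batch = [rng.jumped(i) for i in range(10)]

# To split off 10 more later, continue the jump counts instead of restarting
second_batch = [rng.jumped(i) for i in range(10, 20)]
```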
diff --git a/doc/source/reference/random/performance.py b/doc/source/reference/random/performance.py
new file mode 100644
index 000000000..28a42eb0d
--- /dev/null
+++ b/doc/source/reference/random/performance.py
@@ -0,0 +1,87 @@
+from collections import OrderedDict
+from timeit import repeat
+
+import pandas as pd
+
+import numpy as np
+from numpy.random import MT19937, PCG64, Philox, SFC64
+
+PRNGS = [MT19937, PCG64, Philox, SFC64]
+
+funcs = OrderedDict()
+integers = 'integers(0, 2**{bits},size=1000000, dtype="uint{bits}")'
+funcs['32-bit Unsigned Ints'] = integers.format(bits=32)
+funcs['64-bit Unsigned Ints'] = integers.format(bits=64)
+funcs['Uniforms'] = 'random(size=1000000)'
+funcs['Normals'] = 'standard_normal(size=1000000)'
+funcs['Exponentials'] = 'standard_exponential(size=1000000)'
+funcs['Gammas'] = 'standard_gamma(3.0,size=1000000)'
+funcs['Binomials'] = 'binomial(9, .1, size=1000000)'
+funcs['Laplaces'] = 'laplace(size=1000000)'
+funcs['Poissons'] = 'poisson(3.0, size=1000000)'
+
+setup = """
+from numpy.random import {prng}, Generator
+rg = Generator({prng}())
+"""
+
+test = "rg.{func}"
+table = OrderedDict()
+for prng in PRNGS:
+ print(prng)
+ col = OrderedDict()
+ for key in funcs:
+ t = repeat(test.format(func=funcs[key]),
+ setup.format(prng=prng().__class__.__name__),
+ number=1, repeat=3)
+ col[key] = 1000 * min(t)
+ col = pd.Series(col)
+ table[prng().__class__.__name__] = col
+
+npfuncs = OrderedDict()
+npfuncs.update(funcs)
+npfuncs['32-bit Unsigned Ints'] = 'randint(2**32,dtype="uint32",size=1000000)'
+npfuncs['64-bit Unsigned Ints'] = 'randint(2**64,dtype="uint64",size=1000000)'
+setup = """
+from numpy.random import RandomState
+rg = RandomState()
+"""
+col = {}
+for key in npfuncs:
+ t = repeat(test.format(func=npfuncs[key]),
+               setup,
+ number=1, repeat=3)
+ col[key] = 1000 * min(t)
+table['RandomState'] = pd.Series(col)
+
+columns = ['MT19937','PCG64','Philox','SFC64', 'RandomState']
+table = pd.DataFrame(table)
+order = np.log(table).mean().sort_values().index
+table = table.T
+table = table.reindex(columns)
+table = table.T
+table = table.reindex([k for k in funcs], axis=0)
+print(table.to_csv(float_format='%0.1f'))
+
+
+rel = table.loc[:, ['RandomState']].values @ np.ones(
+ (1, table.shape[1])) / table
+rel.pop('RandomState')
+rel = rel.T
+rel['Overall'] = np.exp(np.log(rel).mean(1))
+rel *= 100
+rel = np.round(rel)
+rel = rel.T
+print(rel.to_csv(float_format='%0d'))
+
+# Cross-platform table
+rows = ['32-bit Unsigned Ints','64-bit Unsigned Ints','Uniforms','Normals','Exponentials']
+xplat = rel.reindex(rows, axis=0)
+xplat = 100 * (xplat / xplat.MT19937.values[:,None])
+overall = np.exp(np.log(xplat).mean(0))
+xplat = xplat.T.copy()
+xplat['Overall']=overall
+print(xplat.T.round(1))
+
+
+
diff --git a/doc/source/reference/random/performance.rst b/doc/source/reference/random/performance.rst
new file mode 100644
index 000000000..2d5fca496
--- /dev/null
+++ b/doc/source/reference/random/performance.rst
@@ -0,0 +1,153 @@
+Performance
+-----------
+
+.. currentmodule:: numpy.random
+
+Recommendation
+**************
+The recommended generator for general use is :class:`~pcg64.PCG64`. It is
+statistically high quality, full-featured, and fast on most platforms, but
+somewhat slow when compiled for 32-bit processes.
+
+:class:`~philox.Philox` is fairly slow, but its statistical properties have
+very high quality, and it is easy to get assuredly-independent streams by using
+unique keys. If that is the style you wish to use for parallel streams, or you
+are porting from another system that uses that style, then
+:class:`~philox.Philox` is your choice.
+
+:class:`~sfc64.SFC64` is statistically high quality and very fast. However, it
+lacks jumpability. If you are not using that capability and want lots of speed,
+even on 32-bit processes, this is your choice.
+
+:class:`~mt19937.MT19937` `fails some statistical tests`_ and is not especially
+fast compared to modern PRNGs. For these reasons, we mostly do not recommend
+using it on its own, only through the legacy `~.RandomState` for
+reproducing old results. That said, it has a very long history as a default in
+many systems.
+
+.. _`fails some statistical tests`: https://www.iro.umontreal.ca/~lecuyer/myftp/papers/testu01.pdf
+
+Timings
+*******
+
+The timings below are the time in ns to produce 1 random value from a
+specific distribution. The original :class:`~mt19937.MT19937` generator is
+much slower since it requires two 32-bit values to equal the output of the
+faster generators.
+
+Integer performance has a similar ordering.
+
+The pattern is similar for other, more complex generators. The normal
+performance of the legacy :class:`~.RandomState` generator is much
+lower than the others since it uses the Box-Muller transformation rather
+than the Ziggurat method. The performance gap for Exponentials is also
+large due to the cost of computing the log function to invert the CDF.
+The column labeled MT19937 uses the same 32-bit generator as
+:class:`~.RandomState` but produces random values using
+:class:`~Generator`.
+
+.. csv-table::
+ :header: ,MT19937,PCG64,Philox,SFC64,RandomState
+ :widths: 14,14,14,14,14,14
+
+ 32-bit Unsigned Ints,3.2,2.7,4.9,2.7,3.2
+ 64-bit Unsigned Ints,5.6,3.7,6.3,2.9,5.7
+ Uniforms,7.3,4.1,8.1,3.1,7.3
+ Normals,13.1,10.2,13.5,7.8,34.6
+ Exponentials,7.9,5.4,8.5,4.1,40.3
+ Gammas,34.8,28.0,34.7,25.1,58.1
+ Binomials,25.0,21.4,26.1,19.5,25.2
+ Laplaces,45.1,40.7,45.5,38.1,45.6
+ Poissons,67.6,52.4,69.2,46.4,78.1
+
+The next table presents the performance in percentage relative to values
+generated by the legacy generator, `RandomState(MT19937())`. The overall
+performance was computed using a geometric mean.
+
+.. csv-table::
+ :header: ,MT19937,PCG64,Philox,SFC64
+ :widths: 14,14,14,14,14
+
+ 32-bit Unsigned Ints,101,121,67,121
+ 64-bit Unsigned Ints,102,156,91,199
+ Uniforms,100,179,90,235
+ Normals,263,338,257,443
+ Exponentials,507,752,474,985
+ Gammas,167,207,167,231
+ Binomials,101,118,96,129
+ Laplaces,101,112,100,120
+ Poissons,116,149,113,168
+ Overall,144,192,132,225
+
+.. note::
+
+   All timings were taken using Linux on an i5-3570 processor.
+
+Performance on different Operating Systems
+******************************************
+Performance differs across platforms due to differences in compilers and
+hardware capabilities (e.g., register width). The default bit generator has
+been chosen to perform well on 64-bit platforms. Performance on 32-bit
+operating systems is very different.
+
+The values reported are normalized relative to the speed of MT19937 in
+each table. A value of 100 indicates that the performance matches that of MT19937.
+Higher values indicate improved performance. These values cannot be compared
+across tables.
+
+64-bit Linux
+~~~~~~~~~~~~
+
+=================== ========= ======= ======== =======
+Distribution MT19937 PCG64 Philox SFC64
+=================== ========= ======= ======== =======
+32-bit Unsigned Int 100 119.8 67.7 120.2
+64-bit Unsigned Int 100 152.9 90.8 213.3
+Uniforms 100 179.0 87.0 232.0
+Normals 100 128.5 99.2 167.8
+Exponentials 100 148.3 93.0 189.3
+**Overall** 100 144.3 86.8 180.0
+=================== ========= ======= ======== =======
+
+
+64-bit Windows
+~~~~~~~~~~~~~~
+The relative performance on 64-bit Linux and 64-bit Windows is broadly similar.
+
+
+=================== ========= ======= ======== =======
+Distribution MT19937 PCG64 Philox SFC64
+=================== ========= ======= ======== =======
+32-bit Unsigned Int 100 129.1 35.0 135.0
+64-bit Unsigned Int 100 146.9 35.7 176.5
+Uniforms 100 165.0 37.0 192.0
+Normals 100 128.5 48.5 158.0
+Exponentials 100 151.6 39.0 172.8
+**Overall** 100 143.6 38.7 165.7
+=================== ========= ======= ======== =======
+
+
+32-bit Windows
+~~~~~~~~~~~~~~
+
+The performance of 64-bit generators on 32-bit Windows is much lower than on 64-bit
+operating systems due to register width. MT19937, the generator that has been
+in NumPy since 2005, operates on 32-bit integers.
+
+=================== ========= ======= ======== =======
+Distribution MT19937 PCG64 Philox SFC64
+=================== ========= ======= ======== =======
+32-bit Unsigned Int 100 30.5 21.1 77.9
+64-bit Unsigned Int 100 26.3 19.2 97.0
+Uniforms 100 28.0 23.0 106.0
+Normals 100 40.1 31.3 112.6
+Exponentials 100 33.7 26.3 109.8
+**Overall** 100 31.4 23.8 99.8
+=================== ========= ======= ======== =======
+
+
+.. note::
+
+ Linux timings used Ubuntu 18.04 and GCC 7.4. Windows timings were made on
+ Windows 10 using Microsoft C/C++ Optimizing Compiler Version 19 (Visual
+   Studio 2015). All timings were produced on an i5-3570 processor.
diff --git a/doc/source/reference/routines.char.rst b/doc/source/reference/routines.char.rst
index 7413e3615..513f975e7 100644
--- a/doc/source/reference/routines.char.rst
+++ b/doc/source/reference/routines.char.rst
@@ -1,11 +1,13 @@
String operations
*****************
-.. currentmodule:: numpy.core.defchararray
+.. currentmodule:: numpy.char
-This module provides a set of vectorized string operations for arrays
-of type `numpy.string_` or `numpy.unicode_`. All of them are based on
-the string methods in the Python standard library.
+.. module:: numpy.char
+
+The `numpy.char` module provides a set of vectorized string
+operations for arrays of type `numpy.string_` or `numpy.unicode_`.
+All of them are based on the string methods in the Python standard library.
String operations
-----------------
@@ -20,6 +22,7 @@ String operations
center
decode
encode
+ expandtabs
join
ljust
lower
@@ -63,9 +66,11 @@ String information
:toctree: generated/
count
+ endswith
find
index
isalpha
+ isalnum
isdecimal
isdigit
islower
@@ -76,6 +81,7 @@ String information
rfind
rindex
startswith
+ str_len
Convenience class
-----------------
@@ -83,4 +89,6 @@ Convenience class
.. autosummary::
:toctree: generated/
+ array
+ asarray
chararray
diff --git a/doc/source/reference/routines.dtype.rst b/doc/source/reference/routines.dtype.rst
index ec8d2981d..e9189ca07 100644
--- a/doc/source/reference/routines.dtype.rst
+++ b/doc/source/reference/routines.dtype.rst
@@ -17,11 +17,9 @@ Data type routines
Creating data types
-------------------
-
.. autosummary::
:toctree: generated/
-
dtype
format_parser
@@ -53,3 +51,4 @@ Miscellaneous
typename
sctype2char
mintypecode
+ maximum_sctype
diff --git a/doc/source/reference/routines.linalg.rst b/doc/source/reference/routines.linalg.rst
index c6bffc874..d42e77ad8 100644
--- a/doc/source/reference/routines.linalg.rst
+++ b/doc/source/reference/routines.linalg.rst
@@ -5,6 +5,19 @@
Linear algebra (:mod:`numpy.linalg`)
************************************
+The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient
+low level implementations of standard linear algebra algorithms. Those
+libraries may be provided by NumPy itself using C versions of a subset of their
+reference implementations but, when possible, highly optimized libraries that
+take advantage of specialized processor functionality are preferred. Examples
+of such libraries are OpenBLAS_, MKL (TM), and ATLAS. Because those libraries
+are multithreaded and processor dependent, environmental variables and external
+packages such as threadpoolctl_ may be needed to control the number of threads
+or specify the processor architecture.
+
+.. _OpenBLAS: https://www.openblas.net/
+.. _threadpoolctl: https://github.com/joblib/threadpoolctl
+
.. currentmodule:: numpy
Matrix and vector products
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index 15f2ba0a4..491bb6bff 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -68,9 +68,6 @@ Inspecting the array
ma.is_masked
ma.is_mask
- ma.MaskedArray.data
- ma.MaskedArray.mask
- ma.MaskedArray.recordmask
ma.MaskedArray.all
ma.MaskedArray.any
@@ -80,6 +77,12 @@ Inspecting the array
ma.size
+.. autosummary::
+
+ ma.MaskedArray.data
+ ma.MaskedArray.mask
+ ma.MaskedArray.recordmask
+
_____
Manipulating a MaskedArray
@@ -285,8 +288,10 @@ Filling a masked array
ma.MaskedArray.get_fill_value
ma.MaskedArray.set_fill_value
- ma.MaskedArray.fill_value
+.. autosummary::
+
+ ma.MaskedArray.fill_value
_____
diff --git a/doc/source/reference/routines.math.rst b/doc/source/reference/routines.math.rst
index 821363987..3c2f96830 100644
--- a/doc/source/reference/routines.math.rst
+++ b/doc/source/reference/routines.math.rst
@@ -141,6 +141,7 @@ Handling complex numbers
real
imag
conj
+ conjugate
Miscellaneous
diff --git a/doc/source/reference/routines.other.rst b/doc/source/reference/routines.other.rst
index 45b9ac3d9..0a3677904 100644
--- a/doc/source/reference/routines.other.rst
+++ b/doc/source/reference/routines.other.rst
@@ -5,14 +5,6 @@ Miscellaneous routines
.. currentmodule:: numpy
-Buffer objects
---------------
-.. autosummary::
- :toctree: generated/
-
- getbuffer
- newbuffer
-
Performance tuning
------------------
.. autosummary::
diff --git a/doc/source/reference/routines.random.rst b/doc/source/reference/routines.random.rst
deleted file mode 100644
index cda4e2b61..000000000
--- a/doc/source/reference/routines.random.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. _routines.random:
-
-.. module:: numpy.random
-
-Random sampling (:mod:`numpy.random`)
-*************************************
-
-.. currentmodule:: numpy.random
-
-Simple random data
-==================
-.. autosummary::
- :toctree: generated/
-
- rand
- randn
- randint
- random_integers
- random_sample
- random
- ranf
- sample
- choice
- bytes
-
-Permutations
-============
-.. autosummary::
- :toctree: generated/
-
- shuffle
- permutation
-
-Distributions
-=============
-.. autosummary::
- :toctree: generated/
-
- beta
- binomial
- chisquare
- dirichlet
- exponential
- f
- gamma
- geometric
- gumbel
- hypergeometric
- laplace
- logistic
- lognormal
- logseries
- multinomial
- multivariate_normal
- negative_binomial
- noncentral_chisquare
- noncentral_f
- normal
- pareto
- poisson
- power
- rayleigh
- standard_cauchy
- standard_exponential
- standard_gamma
- standard_normal
- standard_t
- triangular
- uniform
- vonmises
- wald
- weibull
- zipf
-
-Random generator
-================
-.. autosummary::
- :toctree: generated/
-
- RandomState
- seed
- get_state
- set_state
diff --git a/doc/source/reference/routines.rst b/doc/source/reference/routines.rst
index a9e80480b..7a9b97d77 100644
--- a/doc/source/reference/routines.rst
+++ b/doc/source/reference/routines.rst
@@ -41,7 +41,7 @@ indentation.
routines.other
routines.padding
routines.polynomials
- routines.random
+ random/index
routines.set
routines.sort
routines.statistics
diff --git a/doc/source/reference/routines.testing.rst b/doc/source/reference/routines.testing.rst
index 77c046768..c676dec07 100644
--- a/doc/source/reference/routines.testing.rst
+++ b/doc/source/reference/routines.testing.rst
@@ -1,5 +1,3 @@
-.. _numpy-testing:
-
.. module:: numpy.testing
Test Support (:mod:`numpy.testing`)
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 3cc956887..d00e88b34 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -16,7 +16,7 @@ A universal function (or :term:`ufunc` for short) is a function that
operates on :class:`ndarrays <ndarray>` in an element-by-element fashion,
supporting :ref:`array broadcasting <ufuncs.broadcasting>`, :ref:`type
casting <ufuncs.casting>`, and several other standard features. That
-is, a ufunc is a ":term:`vectorized`" wrapper for a function that
+is, a ufunc is a ":term:`vectorized <vectorization>`" wrapper for a function that
takes a fixed number of specific inputs and produces a fixed number of
specific outputs.
@@ -59,7 +59,7 @@ understood by four rules:
entry in that dimension will be used for all calculations along
that dimension. In other words, the stepping machinery of the
:term:`ufunc` will simply not step along that dimension (the
- :term:`stride` will be 0 for that dimension).
+ :ref:`stride <memory-layout>` will be 0 for that dimension).
Broadcasting is used throughout NumPy to decide how to handle
disparately shaped arrays; for example, all arithmetic operations (``+``,
@@ -70,7 +70,7 @@ arrays before operation.
.. index:: broadcastable
-A set of arrays is called ":term:`broadcastable`" to the same shape if
+A set of arrays is called "broadcastable" to the same shape if
the above rules produce a valid result, *i.e.*, one of the following
is true:
@@ -118,7 +118,7 @@ all output arrays will be passed to the :obj:`~class.__array_prepare__` and
the highest :obj:`~class.__array_priority__` of any other input to the
universal function. The default :obj:`~class.__array_priority__` of the
ndarray is 0.0, and the default :obj:`~class.__array_priority__` of a subtype
-is 1.0. Matrices have :obj:`~class.__array_priority__` equal to 10.0.
+is 0.0. Matrices have :obj:`~class.__array_priority__` equal to 10.0.
All ufuncs can also take output arguments. If necessary, output will
be cast to the data-type(s) of the provided output array(s). If a class
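The priority rule corrected in this hunk (default subtype priority 0.0) can be demonstrated with a minimal, hypothetical subclass; a sketch:

```python
import numpy as np

# A subclass with a higher __array_priority__ determines the type of a
# mixed ufunc result (plain ndarrays have priority 0.0, matrices 10.0).
class MyArray(np.ndarray):
    __array_priority__ = 15.0

a = np.arange(3).view(MyArray)
b = np.arange(3)
# the result is wrapped as the highest-priority input's type
assert type(b + a) is MyArray
```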
@@ -586,6 +586,7 @@ Math operations
sign
heaviside
conj
+ conjugate
exp
exp2
log
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 10a8caabd..f8d83726f 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -2,8 +2,7 @@
Release Notes
*************
-.. include:: ../release/1.16.6-notes.rst
-.. include:: ../release/1.16.5-notes.rst
+.. include:: ../release/1.17.0-notes.rst
.. include:: ../release/1.16.4-notes.rst
.. include:: ../release/1.16.3-notes.rst
.. include:: ../release/1.16.2-notes.rst
diff --git a/doc/source/user/basics.io.genfromtxt.rst b/doc/source/user/basics.io.genfromtxt.rst
index 21832e5aa..6ef80bf8e 100644
--- a/doc/source/user/basics.io.genfromtxt.rst
+++ b/doc/source/user/basics.io.genfromtxt.rst
@@ -521,12 +521,6 @@ provides several convenience functions derived from
:func:`~numpy.genfromtxt`. These functions work the same way as the
original, but they have different default values.
-:func:`~numpy.ndfromtxt`
- Always set ``usemask=False``.
- The output is always a standard :class:`numpy.ndarray`.
-:func:`~numpy.mafromtxt`
- Always set ``usemask=True``.
- The output is always a :class:`~numpy.ma.MaskedArray`
:func:`~numpy.recfromtxt`
Returns a standard :class:`numpy.recarray` (if ``usemask=False``) or a
:class:`~numpy.ma.MaskedRecords` array (if ``usemaske=True``). The
diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst
index d224951dd..a13e1160a 100644
--- a/doc/source/user/building.rst
+++ b/doc/source/user/building.rst
@@ -118,12 +118,71 @@ means that g77 has been used. If libgfortran.so is a dependency, gfortran
has been used. If both are dependencies, this means both have been used, which
is almost always a very bad idea.
+Accelerated BLAS/LAPACK libraries
+---------------------------------
+
+NumPy searches for optimized linear algebra libraries such as BLAS and LAPACK.
+Each is searched for in a specific order, as described below.
+
+BLAS
+~~~~
+
+The default order for the libraries is:
+
+1. MKL
+2. BLIS
+3. OpenBLAS
+4. ATLAS
+5. Accelerate (MacOS)
+6. BLAS (NetLIB)
+
+
+If you wish to build against OpenBLAS but you also have BLIS available, you
+may predefine the search order via the environment variable
+``NPY_BLAS_ORDER``, a comma-separated list of the above names that
+determines what to search for; for instance::
+
+    NPY_BLAS_ORDER=ATLAS,blis,openblas,MKL python setup.py build
+
+will prefer ATLAS, then BLIS, then OpenBLAS, and as a last resort MKL.
+If none of these is found, the build will fail (names are compared in
+lower case).
+
+LAPACK
+~~~~~~
+
+The default order for the libraries is:
+
+1. MKL
+2. OpenBLAS
+3. libFLAME
+4. ATLAS
+5. Accelerate (MacOS)
+6. LAPACK (NetLIB)
+
+
+If you wish to build against OpenBLAS but you also have MKL available, you
+may predefine the search order via the environment variable
+``NPY_LAPACK_ORDER``, a comma-separated list of the above names;
+for instance::
+
+    NPY_LAPACK_ORDER=ATLAS,openblas,MKL python setup.py build
+
+will prefer ATLAS, then OpenBLAS, and as a last resort MKL.
+If none of these is found, the build will fail (names are compared in
+lower case).
+
+
Disabling ATLAS and other accelerated libraries
------------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Usage of ATLAS and other accelerated libraries in NumPy can be disabled
via::
+ NPY_BLAS_ORDER= NPY_LAPACK_ORDER= python setup.py build
+
+or::
+
BLAS=None LAPACK=None ATLAS=None python setup.py build
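To confirm which accelerated libraries a finished build actually linked against, a quick check from Python (assuming the built NumPy is importable) is:

```python
import numpy as np

# show_config() prints the BLAS/LAPACK detection results recorded at
# build time (e.g. which of MKL/BLIS/OpenBLAS/ATLAS was found)
np.show_config()
```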
diff --git a/doc/source/user/c-info.how-to-extend.rst b/doc/source/user/c-info.how-to-extend.rst
index 9738168d2..3961325fb 100644
--- a/doc/source/user/c-info.how-to-extend.rst
+++ b/doc/source/user/c-info.how-to-extend.rst
@@ -362,8 +362,7 @@ specific builtin data-type ( *e.g.* float), while specifying a
particular set of requirements ( *e.g.* contiguous, aligned, and
writeable). The syntax is
-.. c:function:: PyObject *PyArray_FROM_OTF( \
- PyObject* obj, int typenum, int requirements)
+:c:func:`PyArray_FROM_OTF`
Return an ndarray from any Python object, *obj*, that can be
converted to an array. The number of dimensions in the returned
@@ -446,31 +445,23 @@ writeable). The syntax is
flags most commonly needed are :c:data:`NPY_ARRAY_IN_ARRAY`,
:c:data:`NPY_OUT_ARRAY`, and :c:data:`NPY_ARRAY_INOUT_ARRAY`:
- .. c:var:: NPY_ARRAY_IN_ARRAY
+ :c:data:`NPY_ARRAY_IN_ARRAY`
- Equivalent to :c:data:`NPY_ARRAY_C_CONTIGUOUS` \|
- :c:data:`NPY_ARRAY_ALIGNED`. This combination of flags is useful
- for arrays that must be in C-contiguous order and aligned.
- These kinds of arrays are usually input arrays for some
- algorithm.
+ This flag is useful for arrays that must be in C-contiguous
+ order and aligned. These kinds of arrays are usually input
+ arrays for some algorithm.
- .. c:var:: NPY_ARRAY_OUT_ARRAY
+ :c:data:`NPY_ARRAY_OUT_ARRAY`
- Equivalent to :c:data:`NPY_ARRAY_C_CONTIGUOUS` \|
- :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE`. This
- combination of flags is useful to specify an array that is
+ This flag is useful to specify an array that is
in C-contiguous order, is aligned, and can be written to
as well. Such an array is usually returned as output
(although normally such output arrays are created from
scratch).
- .. c:var:: NPY_ARRAY_INOUT_ARRAY
+ :c:data:`NPY_ARRAY_INOUT_ARRAY`
- Equivalent to :c:data:`NPY_ARRAY_C_CONTIGUOUS` \|
- :c:data:`NPY_ARRAY_ALIGNED` \| :c:data:`NPY_ARRAY_WRITEABLE` \|
- :c:data:`NPY_ARRAY_WRITEBACKIFCOPY` \|
- :c:data:`NPY_ARRAY_UPDATEIFCOPY`. This combination of flags is
- useful to specify an array that will be used for both
+ This flag is useful to specify an array that will be used for both
input and output. :c:func:`PyArray_ResolveWritebackIfCopy`
must be called before :func:`Py_DECREF` at
the end of the interface routine to write back the temporary data
@@ -487,16 +478,16 @@ writeable). The syntax is
Other useful flags that can be OR'd as additional requirements are:
- .. c:var:: NPY_ARRAY_FORCECAST
+ :c:data:`NPY_ARRAY_FORCECAST`
Cast to the desired type, even if it can't be done without losing
information.
- .. c:var:: NPY_ARRAY_ENSURECOPY
+ :c:data:`NPY_ARRAY_ENSURECOPY`
Make sure the resulting array is a copy of the original.
- .. c:var:: NPY_ARRAY_ENSUREARRAY
+ :c:data:`NPY_ARRAY_ENSUREARRAY`
Make sure the resulting object is an actual ndarray and not a sub-
class.
@@ -513,7 +504,7 @@ writeable). The syntax is
Creating a brand-new ndarray
----------------------------
-Quite often new arrays must be created from within extension-module
+Quite often, new arrays must be created from within extension-module
code. Perhaps an output array is needed and you don't want the caller
to have to supply it. Perhaps only a temporary array is needed to hold
an intermediate calculation. Whatever the need there are simple ways
@@ -521,43 +512,9 @@ to get an ndarray object of whatever data-type is needed. The most
general function for doing this is :c:func:`PyArray_NewFromDescr`. All array
creation functions go through this heavily re-used code. Because of
its flexibility, it can be somewhat confusing to use. As a result,
-simpler forms exist that are easier to use.
-
-.. c:function:: PyObject *PyArray_SimpleNew(int nd, npy_intp* dims, int typenum)
-
- This function allocates new memory and places it in an ndarray
- with *nd* dimensions whose shape is determined by the array of
- at least *nd* items pointed to by *dims*. The memory for the
- array is uninitialized (unless typenum is :c:data:`NPY_OBJECT` in
- which case each element in the array is set to NULL). The
- *typenum* argument allows specification of any of the builtin
- data-types such as :c:data:`NPY_FLOAT` or :c:data:`NPY_LONG`. The
- memory for the array can be set to zero if desired using
- :c:func:`PyArray_FILLWBYTE` (return_object, 0).
-
-.. c:function:: PyObject *PyArray_SimpleNewFromData( \
- int nd, npy_intp* dims, int typenum, void* data)
-
- Sometimes, you want to wrap memory allocated elsewhere into an
- ndarray object for downstream use. This routine makes it
- straightforward to do that. The first three arguments are the same
- as in :c:func:`PyArray_SimpleNew`, the final argument is a pointer to a
- block of contiguous memory that the ndarray should use as it's
- data-buffer which will be interpreted in C-style contiguous
- fashion. A new reference to an ndarray is returned, but the
- ndarray will not own its data. When this ndarray is deallocated,
- the pointer will not be freed.
-
- You should ensure that the provided memory is not freed while the
- returned array is in existence. The easiest way to handle this is
- if data comes from another reference-counted Python object. The
- reference count on this object should be increased after the
- pointer is passed in, and the base member of the returned ndarray
- should point to the Python object that owns the data. Then, when
- the ndarray is deallocated, the base-member will be DECREF'd
- appropriately. If you want the memory to be freed as soon as the
- ndarray is deallocated then simply set the OWNDATA flag on the
- returned ndarray.
+simpler forms exist that are easier to use. These forms are part of the
+:c:func:`PyArray_SimpleNew` family of functions, which simplify the interface
+by providing default values for common use cases.
Getting at ndarray memory and accessing elements of the ndarray
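The reference text removed above described the semantics of the ``PyArray_SimpleNew`` family (allocate uninitialized memory, optionally zero it with ``PyArray_FILLWBYTE``). At the Python level the analogue is ``np.empty`` followed by ``fill``; a sketch of those semantics:

```python
import numpy as np

# Python-level analogue of PyArray_SimpleNew: allocate an array of the
# requested shape/dtype without initializing its memory
a = np.empty((2, 3), dtype=np.float64)
assert a.shape == (2, 3) and a.dtype == np.float64

# analogue of PyArray_FILLWBYTE(arr, 0): zero the buffer explicitly
a.fill(0)
assert (a == 0).all()
```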
diff --git a/doc/source/user/numpy-for-matlab-users.rst b/doc/source/user/numpy-for-matlab-users.rst
index 399237c21..e53d1ca45 100644
--- a/doc/source/user/numpy-for-matlab-users.rst
+++ b/doc/source/user/numpy-for-matlab-users.rst
@@ -436,7 +436,7 @@ Linear Algebra Equivalents
``a``
* - ``rand(3,4)``
- - ``random.rand(3,4)``
+ - ``random.rand(3,4)`` or ``random.random_sample((3, 4))``
- random 3x4 matrix
* - ``linspace(1,3,4)``
@@ -547,7 +547,7 @@ Linear Algebra Equivalents
- eigenvalues and eigenvectors of ``a``
* - ``[V,D]=eig(a,b)``
- - ``V,D = np.linalg.eig(a,b)``
+ - ``D,V = scipy.linalg.eig(a,b)``
- eigenvalues and eigenvectors of ``a``, ``b``
* - ``[V,D]=eigs(a,k)``
@@ -693,19 +693,19 @@ this is just an example, not a statement of "best practices"):
::
- # Make all numpy available via shorter 'num' prefix
- import numpy as num
+ # Make all numpy available via shorter 'np' prefix
+ import numpy as np
# Make all matlib functions accessible at the top level via M.func()
import numpy.matlib as M
# Make some matlib functions accessible directly at the top level via, e.g. rand(3,3)
from numpy.matlib import rand,zeros,ones,empty,eye
# Define a Hermitian function
def hermitian(A, **kwargs):
- return num.transpose(A,**kwargs).conj()
+ return np.transpose(A,**kwargs).conj()
# Make some shortcuts for transpose,hermitian:
- # num.transpose(A) --> T(A)
+ # np.transpose(A) --> T(A)
# hermitian(A) --> H(A)
- T = num.transpose
+ T = np.transpose
H = hermitian
Links
diff --git a/doc/source/user/quickstart.rst b/doc/source/user/quickstart.rst
index 5ef8b145f..09647be86 100644
--- a/doc/source/user/quickstart.rst
+++ b/doc/source/user/quickstart.rst
@@ -884,6 +884,17 @@ The ``copy`` method makes a complete copy of the array and its data.
[ 8, 10, 10, 11]])
+Sometimes ``copy`` should be called after slicing if the original array is no longer required.
+For example, suppose ``a`` is a huge intermediate result and the final result ``b`` only contains
+a small fraction of ``a``; a deep copy should then be made when constructing ``b`` with slicing::
+
+ >>> a = np.arange(int(1e8))
+ >>> b = a[:100].copy()
+ >>> del a # the memory of ``a`` can be released.
+
+If ``b = a[:100]`` is used instead, ``a`` is referenced by ``b`` and will persist in memory
+even if ``del a`` is executed.
+
Functions and Methods Overview
------------------------------
@@ -1465,5 +1476,5 @@ Further reading
- The `Python tutorial <https://docs.python.org/tutorial/>`__
- :ref:`reference`
- `SciPy Tutorial <https://docs.scipy.org/doc/scipy/reference/tutorial/index.html>`__
-- `SciPy Lecture Notes <https://www.scipy-lectures.org>`__
+- `SciPy Lecture Notes <https://scipy-lectures.org>`__
- A `matlab, R, IDL, NumPy/SciPy dictionary <http://mathesaurus.sf.net/>`__
diff --git a/doc/source/user/whatisnumpy.rst b/doc/source/user/whatisnumpy.rst
index cd74a8de3..abaa2bfed 100644
--- a/doc/source/user/whatisnumpy.rst
+++ b/doc/source/user/whatisnumpy.rst
@@ -91,6 +91,11 @@ idiom is even simpler! This last example illustrates two of NumPy's
features which are the basis of much of its power: vectorization and
broadcasting.
+.. _whatis-vectorization:
+
+Why is NumPy Fast?
+------------------
+
Vectorization describes the absence of any explicit looping, indexing,
etc., in the code - these things are taking place, of course, just
"behind the scenes" in optimized, pre-compiled C code. Vectorized
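Vectorization as described in this hunk can be shown with a one-line sketch: the elementwise product below involves no explicit Python loop, since the looping happens in compiled code.

```python
import numpy as np

a = np.arange(5)       # [0, 1, 2, 3, 4]
b = np.arange(5, 10)   # [5, 6, 7, 8, 9]
c = a * b              # elementwise product, no explicit Python loop
assert list(c) == [0, 6, 14, 24, 36]
```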
@@ -120,9 +125,13 @@ the shape of the larger in such a way that the resulting broadcast is
unambiguous. For detailed "rules" of broadcasting see
`numpy.doc.broadcasting`.
+Who Else Uses NumPy?
+--------------------
+
NumPy fully supports an object-oriented approach, starting, once
again, with `ndarray`. For example, `ndarray` is a class, possessing
-numerous methods and attributes. Many of its methods mirror
-functions in the outer-most NumPy namespace, giving the programmer
-complete freedom to code in whichever paradigm she prefers and/or
-which seems most appropriate to the task at hand.
+numerous methods and attributes. Many of its methods are mirrored by
+functions in the outer-most NumPy namespace, allowing the programmer
+to code in whichever paradigm they prefer. This flexibility has allowed the
+NumPy array dialect and NumPy `ndarray` class to become the *de facto* language
+of multi-dimensional data interchange used in Python.