Example #1
def set_deterministic(d):
    r"""Sets a global flag to force all operations to use a deterministic
    implementation if available. If an operation that does not have a
    deterministic implementation is called while this setting is True, the
    operation will throw a RuntimeError.

    Note that deterministic operations tend to have worse performance than
    non-deterministic operations.

    Args:
        d (:class:`bool`): If True, force operations to be deterministic.
                           If False, allow non-deterministic operations.
    """
    _C._set_deterministic(d)
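
A minimal usage sketch of the API documented above (illustrative only; the tensors and the `torch.bmm` call are assumptions chosen to exercise the flag, not part of the original example):

import torch

a = torch.randn(2, 3, 4)
b = torch.randn(2, 4, 5)

# Force deterministic implementations globally, run the workload,
# then restore the default behavior.
torch.set_deterministic(True)
out = torch.bmm(a, b)  # uses a deterministic implementation if one exists
torch.set_deterministic(False)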
Example #2
def _set_deterministic(d):
    r"""Sets a global flag to force all operations to use a deterministic
    implementation if available. If an operation that does not have a
    deterministic implementation is called while this setting is True, the
    operation will throw a RuntimeError.

    Note that deterministic operations tend to have worse performance than
    non-deterministic operations.

    Args:
        d (:class:`bool`): If True, force operations to be deterministic.
                           If False, allow non-deterministic operations.

    .. warning::
        This feature is experimental and not complete. The above docstring
        represents what the future behavior is intended to be. Right now,
        `_set_deterministic` will only affect `torch.bmm` and convolution
        operators.
    """
    _C._set_deterministic(d)
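
Per the warning above, only `torch.bmm` and the convolution operators honor the flag at this stage. A hedged sketch exercising one covered operation (assuming the private wrapper is reachable as `torch._set_deterministic`):

import torch

torch._set_deterministic(True)  # experimental: affects only bmm and convolutions

x = torch.randn(8, 3, 16, 16)
conv = torch.nn.Conv2d(3, 8, kernel_size=3)
y = conv(x)  # the convolution picks a deterministic algorithm when available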
Example #3
def set_deterministic(d):
    r""" Sets whether native PyTorch operations must use deterministic
    algorithms. When True, operations without deterministic algorithms
    will throw a :class:RuntimeError when called.

    .. warning::
        This feature is in beta, so it does not yet cover every
        nondeterministic operation. Only the operations listed below are
        affected by this flag.

    The following normally-nondeterministic operations will act
    deterministically when `d=True`:

        * :class:`torch.nn.Conv1d` when called on a CUDA tensor
        * :class:`torch.nn.Conv2d` when called on a CUDA tensor
        * :class:`torch.nn.Conv3d` when called on a CUDA tensor
        * :class:`torch.nn.ConvTranspose1d` when called on a CUDA tensor
        * :class:`torch.nn.ConvTranspose2d` when called on a CUDA tensor
        * :class:`torch.nn.ConvTranspose3d` when called on a CUDA tensor
        * :func:`torch.bmm` when called on sparse-dense CUDA tensors

    The following normally-nondeterministic operations will throw a
    :class:`RuntimeError` when `d=True`:

        * :class:`torch.nn.AvgPool3d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.AdaptiveAvgPool2d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.AdaptiveAvgPool3d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.MaxPool3d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.AdaptiveMaxPool2d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.FractionalMaxPool2d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.FractionalMaxPool3d` when called on a CUDA tensor that requires grad
        * :func:`torch.nn.functional.interpolate` when called on a CUDA tensor that requires grad
          and one of the following modes is used:

          - `linear`
          - `bilinear`
          - `bicubic`
          - `trilinear`

        * :class:`torch.nn.ReflectionPad1d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.ReflectionPad2d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.ReplicationPad1d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.ReplicationPad2d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.ReplicationPad3d` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.NLLLoss` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.CTCLoss` when called on a CUDA tensor that requires grad
        * :class:`torch.nn.EmbeddingBag` when called on a CUDA tensor that requires grad
        * :func:`torch.Tensor.scatter_add_` when called on a CUDA tensor
        * :func:`torch.Tensor.index_add_` when called on a CUDA tensor
        * :func:`torch.index_select` when called on a CUDA tensor that requires grad
        * :func:`torch.repeat_interleave` when called on a CUDA tensor that requires grad
        * :func:`torch.histc` when called on a CUDA tensor
        * :func:`torch.bincount` when called on a CUDA tensor

    A handful of CUDA operations are nondeterministic if the CUDA version is
    10.2 or greater, unless the environment variable `CUBLAS_WORKSPACE_CONFIG=:4096:8`
    or `CUBLAS_WORKSPACE_CONFIG=:16:8` is set. See the CUDA documentation for more
    details: `<https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility>`_.
    If neither of these environment variable configurations is set, a :class:`RuntimeError`
    will be raised from these operations when they are called with CUDA tensors:

        * :func:`torch.mm`
        * :func:`torch.mv`
        * :func:`torch.bmm`

    Note that deterministic operations tend to have worse performance than
    non-deterministic operations.

    Args:
        d (:class:`bool`): If True, force operations to be deterministic.
                           If False, allow non-deterministic operations.
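
    Example (illustrative sketch; assumes a CUDA device is available and
    abridges the exact error text, which may vary by version)::

        >>> torch.set_deterministic(True)
        >>> torch.randint(0, 8, (100,), device='cuda').bincount()
        Traceback (most recent call last):
            ...
        RuntimeError: ...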
    """
    _C._set_deterministic(d)
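
The CUDA 10.2+ caveat in the docstring means the workspace variable has to be set before cuBLAS initializes. A minimal sketch under that assumption (the config value is copied verbatim from the docstring; the tensors are illustrative):

import os

# Must be set before CUDA/cuBLAS initialization to take effect.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

torch.set_deterministic(True)

if torch.cuda.is_available():
    a = torch.randn(4, 4, device="cuda")
    b = torch.randn(4, 4, device="cuda")
    # Without the workspace config, torch.mm on CUDA >= 10.2 would raise
    # a RuntimeError in deterministic mode.
    c = torch.mm(a, b)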