
[docs] fix some format issue #45752

Merged
merged 35 commits into develop on Sep 22, 2022
35 commits
0f965b4
fix some error
enkilee Sep 5, 2022
830747e
fix
enkilee Sep 5, 2022
e0a8c82
Merge branch 'PaddlePaddle:develop' into develop
enkilee Sep 5, 2022
a557956
fix some error
enkilee Sep 5, 2022
16c6acb
Merge branch 'PaddlePaddle:develop' into develop
enkilee Sep 5, 2022
19125af
fix bugs
enkilee Sep 5, 2022
31e7181
Merge branch 'PaddlePaddle:develop' into develop
enkilee Sep 5, 2022
c9ddf04
Merge branch 'develop' of https://github.com/enkilee/Paddle into develop
enkilee Sep 13, 2022
e967dfd
Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into…
enkilee Sep 13, 2022
65c84cc
Merge branch 'PaddlePaddle:develop' into develop
enkilee Sep 13, 2022
9cda38c
Merge branches 'develop' and 'develop' of https://github.com/enkilee/…
enkilee Sep 13, 2022
76a8d67
fix some errors
enkilee Sep 13, 2022
1742a4b
fix
enkilee Sep 13, 2022
f9aedbf
Update transform.py
enkilee Sep 16, 2022
f944edb
Update normal.py
enkilee Sep 16, 2022
e9de501
Update uniform.py
enkilee Sep 16, 2022
4081d99
Update kl.py
enkilee Sep 16, 2022
4332987
Update math.py
enkilee Sep 16, 2022
3b821f7
Update math.py
enkilee Sep 16, 2022
e0b1bdd
Update loss.py
enkilee Sep 16, 2022
5d87ad1
Update transform.py
enkilee Sep 16, 2022
fd359b7
Update math.py
enkilee Sep 16, 2022
d11a735
Merge branch 'develop' into pr/enkilee/45752
SigureMo Sep 17, 2022
3891a3b
fix some format issue
SigureMo Sep 17, 2022
da7f003
Update normal.py
SigureMo Sep 17, 2022
8ffe585
fix missing np
SigureMo Sep 17, 2022
0b67ff0
order imports
SigureMo Sep 17, 2022
35b0174
fix some flake8 warning
SigureMo Sep 17, 2022
6633af5
Update python/paddle/tensor/math.py
Ligoml Sep 21, 2022
669cfa9
fix OP-->API
Ligoml Sep 21, 2022
272284e
fix op
Ligoml Sep 21, 2022
f6819bf
fix grid_sample format
Ligoml Sep 21, 2022
ba0b018
trim trailing whitespace
SigureMo Sep 21, 2022
551473b
empty commit, test=document_fix
SigureMo Sep 21, 2022
1f4ed90
empty commit
SigureMo Sep 21, 2022
Changes from all commits
10 changes: 5 additions & 5 deletions python/paddle/distribution/kl.py
@@ -38,11 +38,11 @@ def kl_divergence(p, q):
KL(p||q) = \int p(x)log\frac{p(x)}{q(x)} \mathrm{d}x

Args:
- p (Distribution): ``Distribution`` object.
- q (Distribution): ``Distribution`` object.
+ p (Distribution): ``Distribution`` object. Inherits from the Distribution Base class.
+ q (Distribution): ``Distribution`` object. Inherits from the Distribution Base class.

Returns:
- Tensor: Batchwise KL-divergence between distribution p and q.
+ Tensor, Batchwise KL-divergence between distribution p and q.

Examples:

@@ -71,8 +71,8 @@ def register_kl(cls_p, cls_q):
implementation function by the decorator.

Args:
- cls_p(Distribution): Subclass derived from ``Distribution``.
- cls_q(Distribution): Subclass derived from ``Distribution``.
+ cls_p (Distribution): The Distribution type of Instance p. Subclass derived from ``Distribution``.
+ cls_q (Distribution): The Distribution type of Instance q. Subclass derived from ``Distribution``.

Examples:
.. code-block:: python
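
The docstring examples themselves are collapsed in this diff view. As context, here is a minimal usage sketch of the two APIs (illustrative only, not part of this PR; it assumes the public `paddle.distribution` interface these docstrings describe):

.. code-block:: python

    from paddle.distribution import Beta, kl_divergence, register_kl

    # Batchwise KL-divergence between two registered distributions.
    p = Beta(alpha=0.5, beta=0.5)
    q = Beta(alpha=0.3, beta=0.7)
    print(kl_divergence(p, q))  # a Tensor holding KL(p||q)

    # register_kl installs a dispatch rule for a (cls_p, cls_q) pair;
    # kl_divergence then selects the rule from the types of its arguments.
    # Beta/Beta is already registered in Paddle, so this stub is purely
    # illustrative.
    @register_kl(Beta, Beta)
    def kl_beta_beta(p, q):
        pass  # the closed-form KL implementation goes here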
86 changes: 43 additions & 43 deletions python/paddle/distribution/normal.py
@@ -36,7 +36,7 @@ class Normal(distribution.Distribution):

.. math::

- pdf(x; \mu, \sigma) = \\frac{1}{Z}e^{\\frac {-0.5 (x - \mu)^2} {\sigma^2} }
+ pdf(x; \mu, \sigma) = \frac{1}{Z}e^{\frac {-0.5 (x - \mu)^2} {\sigma^2} }

.. math::

@@ -49,43 +49,43 @@ class Normal(distribution.Distribution):
* :math:`Z`: is the normalization constant.

Args:
- loc(int|float|list|tuple|numpy.ndarray|Tensor): The mean of normal distribution.The data type is int, float, list, numpy.ndarray or Tensor.
- scale(int|float|list|tuple|numpy.ndarray|Tensor): The std of normal distribution.The data type is int, float, list, numpy.ndarray or Tensor.
+ loc(int|float|list|tuple|numpy.ndarray|Tensor): The mean of normal distribution.The data type is float32 and float64.
+ scale(int|float|list|tuple|numpy.ndarray|Tensor): The std of normal distribution.The data type is float32 and float64.
name(str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Examples:
.. code-block:: python

import paddle
from paddle.distribution import Normal

# Define a single scalar Normal distribution.
dist = Normal(loc=0., scale=3.)
# Define a batch of two scalar valued Normals.
# The first has mean 1 and standard deviation 11, the second 2 and 22.
dist = Normal(loc=[1., 2.], scale=[11., 22.])
# Get 3 samples, returning a 3 x 2 tensor.
dist.sample([3])

# Define a batch of two scalar valued Normals.
# Both have mean 1, but different standard deviations.
dist = Normal(loc=1., scale=[11., 22.])

# Complete example
value_tensor = paddle.to_tensor([0.8], dtype="float32")

normal_a = Normal([0.], [1.])
normal_b = Normal([0.5], [2.])
sample = normal_a.sample([2])
# a random tensor created by normal distribution with shape: [2, 1]
entropy = normal_a.entropy()
# [1.4189385] with shape: [1]
lp = normal_a.log_prob(value_tensor)
# [-1.2389386] with shape: [1]
p = normal_a.probs(value_tensor)
# [0.28969154] with shape: [1]
kl = normal_a.kl_divergence(normal_b)
# [0.34939718] with shape: [1]
"""

def __init__(self, loc, scale, name=None):
@@ -132,11 +132,11 @@ def sample(self, shape, seed=0):
"""Generate samples of the specified shape.

Args:
shape (list): 1D `int32`. Shape of the generated samples.
seed (int): Python integer number.

Returns:
- Tensor: A tensor with prepended dimensions shape.The data type is float32.
+ Tensor, A tensor with prepended dimensions shape.The data type is float32.

"""
if not _non_static_mode():
@@ -177,14 +177,14 @@ def entropy(self):

.. math::

- entropy(\sigma) = 0.5 \\log (2 \pi e \sigma^2)
+ entropy(\sigma) = 0.5 \log (2 \pi e \sigma^2)

In the above equation:

* :math:`scale = \sigma`: is the std.

Returns:
- Tensor: Shannon entropy of normal distribution.The data type is float32.
+ Tensor, Shannon entropy of normal distribution.The data type is float32.

"""
name = self.name + '_entropy'
@@ -221,10 +221,10 @@ def probs(self, value):
"""Probability density/mass function.

Args:
value (Tensor): The input tensor.

Returns:
- Tensor: probability.The data type is same with value.
+ Tensor, probability. The data type is same with value.

"""
name = self.name + '_probs'
@@ -243,11 +243,11 @@ def kl_divergence(self, other):

.. math::

- KL\_divergence(\mu_0, \sigma_0; \mu_1, \sigma_1) = 0.5 (ratio^2 + (\\frac{diff}{\sigma_1})^2 - 1 - 2 \\ln {ratio})
+ KL\_divergence(\mu_0, \sigma_0; \mu_1, \sigma_1) = 0.5 (ratio^2 + (\frac{diff}{\sigma_1})^2 - 1 - 2 \ln {ratio})

.. math::

- ratio = \\frac{\sigma_0}{\sigma_1}
+ ratio = \frac{\sigma_0}{\sigma_1}

.. math::

@@ -266,7 +266,7 @@ def kl_divergence(self, other):
other (Normal): instance of Normal.

Returns:
- Tensor: kl-divergence between two normal distributions.The data type is float32.
+ Tensor, kl-divergence between two normal distributions.The data type is float32.

"""
if not _non_static_mode():
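
A quick numeric check of the formulas above (a minimal sketch, not part of this PR, assuming the `Normal` API documented in this file); it reproduces the values quoted in the example comments:

.. code-block:: python

    import math

    from paddle.distribution import Normal

    # entropy(sigma) = 0.5 * log(2 * pi * e * sigma^2); for sigma = 1 this is
    # 0.5 * log(2 * pi * e) ~= 1.4189385, the value in the docstring example.
    a = Normal([0.], [1.])
    print(a.entropy())                           # [1.4189385]
    print(0.5 * math.log(2 * math.pi * math.e))  # 1.4189385...

    # KL(a||b) = 0.5 * (ratio^2 + (diff / sigma_1)^2 - 1 - 2 * ln(ratio))
    # with ratio = sigma_0 / sigma_1; diff enters squared, so its sign is moot.
    b = Normal([0.5], [2.])
    ratio, diff = 1.0 / 2.0, 0.5
    print(a.kl_divergence(b))  # [0.34939718]
    print(0.5 * (ratio**2 + (diff / 2.0)**2 - 1) - math.log(ratio))  # 0.3493971...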
10 changes: 6 additions & 4 deletions python/paddle/distribution/transform.py
@@ -58,7 +58,7 @@ class Transform(object):
Suppose :math:`X` is a K-dimensional random variable with probability
density function :math:`p_X(x)`. A new random variable :math:`Y = f(X)` may
be defined by transforming :math:`X` with a suitably well-behaved function
- :math:`f`. It suffices for what follows to note that if f is one-to-one and
+ :math:`f`. It suffices for what follows to note that if `f` is one-to-one and
its inverse :math:`f^{-1}` have a well-defined Jacobian, then the density of
:math:`Y` is

@@ -1001,16 +1001,16 @@ class StackTransform(Transform):
specific axis.

Args:
- transforms(Sequence[Transform]): The sequence of transformations.
- axis(int): The axis along which will be transformed.
+ transforms (Sequence[Transform]): The sequence of transformations.
+ axis (int, optional): The axis along which will be transformed. Default
+   value is 0.

Examples:

.. code-block:: python

import paddle


x = paddle.stack(
(paddle.to_tensor([1., 2., 3.]), paddle.to_tensor([1, 2., 3.])), 1)
t = paddle.distribution.StackTransform(
Expand All @@ -1023,11 +1023,13 @@ class StackTransform(Transform):
# [[2.71828175 , 1. ],
# [7.38905621 , 4. ],
# [20.08553696, 9. ]])

print(t.inverse(t.forward(x)))
# Tensor(shape=[3, 2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
# [[1., 1.],
# [2., 2.],
# [3., 3.]])

print(t.forward_log_det_jacobian(x))
# Tensor(shape=[3, 2], dtype=float32, place=Place(gpu:0), stop_gradient=True,
# [[1. , 0.69314718],
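
The change-of-variables rule described in the `Transform` docstring can be traced end to end with a short sketch (not from this PR; it assumes `ExpTransform` and `TransformedDistribution` from the same module):

.. code-block:: python

    import paddle
    from paddle.distribution import ExpTransform, Normal, TransformedDistribution

    # For Y = exp(X), log p_Y(y) = log p_X(x) - log|d exp(x)/dx| with x = log y,
    # and the log-det Jacobian of exp at x is simply x.
    t = ExpTransform()
    x = paddle.to_tensor([0.5])
    y = t.forward(x)                      # exp(0.5)
    print(t.forward_log_det_jacobian(x))  # [0.5]

    # TransformedDistribution applies the same correction internally:
    # its log_prob(y) equals base.log_prob(x) - 0.5 here.
    base = Normal(loc=[0.], scale=[1.])
    lognormal = TransformedDistribution(base, [t])
    print(lognormal.log_prob(y))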
82 changes: 42 additions & 40 deletions python/paddle/distribution/uniform.py
@@ -37,7 +37,7 @@ class Uniform(distribution.Distribution):

.. math::

- pdf(x; a, b) = \\frac{1}{Z}, \ a <=x <b
+ pdf(x; a, b) = \frac{1}{Z}, \ a <=x <b

.. math::

@@ -50,43 +50,45 @@ class Uniform(distribution.Distribution):
* :math:`Z`: is the normalizing constant.

The parameters `low` and `high` must be shaped in a way that supports
- [broadcasting](https://www.paddlepaddle.org.cn/documentation/docs/en/develop/beginners_guide/basic_concept/broadcasting_en.html) (e.g., `high - low` is a valid operation).
+ :ref:`user_guide_broadcasting` (e.g., `high - low` is a valid operation).

Args:
- low(int|float|list|tuple|numpy.ndarray|Tensor): The lower boundary of uniform distribution.The data type is int, float, list, numpy.ndarray or Tensor
- high(int|float|list|tuple|numpy.ndarray|Tensor): The higher boundary of uniform distribution.The data type is int, float, list, numpy.ndarray or Tensor
- name(str, optional): Name for the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.
+ low(int|float|list|tuple|numpy.ndarray|Tensor): The lower boundary of
+   uniform distribution.The data type is float32 and float64.
+ high(int|float|list|tuple|numpy.ndarray|Tensor): The higher boundary
+   of uniform distribution.The data type is float32 and float64.
+ name (str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.

Examples:
.. code-block:: python

import paddle
from paddle.distribution import Uniform

# Without broadcasting, a single uniform distribution [3, 4]:
u1 = Uniform(low=3.0, high=4.0)
# 2 distributions [1, 3], [2, 4]
u2 = Uniform(low=[1.0, 2.0], high=[3.0, 4.0])
# 4 distributions
u3 = Uniform(low=[[1.0, 2.0], [3.0, 4.0]],
high=[[1.5, 2.5], [3.5, 4.5]])

# With broadcasting:
u4 = Uniform(low=3.0, high=[5.0, 6.0, 7.0])

# Complete example
value_tensor = paddle.to_tensor([0.8], dtype="float32")

uniform = Uniform([0.], [2.])

sample = uniform.sample([2])
# a random tensor created by uniform distribution with shape: [2, 1]
entropy = uniform.entropy()
# [0.6931472] with shape: [1]
lp = uniform.log_prob(value_tensor)
# [-0.6931472] with shape: [1]
p = uniform.probs(value_tensor)
# [0.5] with shape: [1]
"""

def __init__(self, low, high, name=None):
@@ -132,11 +134,11 @@ def sample(self, shape, seed=0):
"""Generate samples of the specified shape.

Args:
shape (list): 1D `int32`. Shape of the generated samples.
seed (int): Python integer number.

Returns:
- Tensor: A tensor with prepended dimensions shape.The data type is float32.
+ Tensor, A tensor with prepended dimensions shape. The data type is float32.

"""
if not _non_static_mode():
@@ -179,10 +181,10 @@ def log_prob(self, value):
"""Log probability density/mass function.

Args:
value (Tensor): The input tensor.

Returns:
- Tensor: log probability.The data type is same with value.
+ Tensor, log probability.The data type is same with value.

"""
value = self._check_values_dtype_in_probs(self.low, value)
@@ -216,10 +218,10 @@ def probs(self, value):
"""Probability density/mass function.

Args:
value (Tensor): The input tensor.

Returns:
- Tensor: probability.The data type is same with value.
+ Tensor, probability. The data type is same with value.

"""
value = self._check_values_dtype_in_probs(self.low, value)
@@ -256,7 +258,7 @@ def entropy(self):
entropy(low, high) = \\log (high - low)

Returns:
- Tensor: Shannon entropy of uniform distribution.The data type is float32.
+ Tensor, Shannon entropy of uniform distribution.The data type is float32.

"""
name = self.name + '_entropy'
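
And a matching check for `Uniform` (a minimal sketch, not part of this PR): entropy is log(high - low) and the density is 1 / (high - low) on the support, which gives exactly the numbers quoted in the example:

.. code-block:: python

    import math

    import paddle
    from paddle.distribution import Uniform

    # entropy(low, high) = log(high - low); for Uniform([0.], [2.]) this is log 2.
    uniform = Uniform([0.], [2.])
    print(uniform.entropy())  # [0.6931472]
    print(math.log(2.0))      # 0.6931471...

    # Inside [low, high), probs(x) = 1 / (high - low) and
    # log_prob(x) = -log(high - low).
    value = paddle.to_tensor([0.8], dtype="float32")
    print(uniform.probs(value))     # [0.5]
    print(uniform.log_prob(value))  # [-0.6931472]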