PaddlePaddle Functions Explained: paddle.to_tensor

Category: master index of the "PaddlePaddle Functions Explained" series
Related articles:
· TensorFlow2 Functions Explained: tf.constant
· PyTorch Functions Explained: torch.tensor
· PyTorch Functions Explained: torch.as_tensor
· PyTorch Functions Explained: torch.Tensor
· PaddlePaddle Functions Explained: paddle.to_tensor
· PaddlePaddle Functions Explained: paddle.Tensor


Creates a Tensor of type paddle.Tensor from the given data. data can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor. If data is already a Tensor and its dtype and place do not change, no copy is made and the original Tensor is returned; otherwise a new Tensor is created, and the original computation graph is not retained.
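This copy behavior is easy to check for yourself. The minimal sketch below (the exact result may differ across PaddlePaddle versions) passes an existing Tensor back into paddle.to_tensor and compares the two objects:

import paddle

x = paddle.to_tensor([1.0, 2.0, 3.0])   # dtype and place are inferred
y = paddle.to_tensor(x)                  # same dtype, same place

# If no copy happened, both names refer to the same Tensor object;
# otherwise y is a new Tensor detached from x's computation graph.
print(x is y)
print(x.dtype, y.dtype, x.place, y.place)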

Syntax

paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True)

Parameters

  • data: [scalar/tuple/list/ndarray/Tensor] The data used to initialize the Tensor; it can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor.
  • dtype: [optional, str] The data type of the created Tensor; it can be bool, float16, float32, float64, int8, int16, int32, int64, uint8, complex64, or complex128. Defaults to None; if data is a Python float, the type is taken from get_default_dtype, otherwise the type is inferred from data automatically (see the sketch after this list).
  • place: [optional, CPUPlace/CUDAPinnedPlace/CUDAPlace] The device on which the Tensor is created; it can be CPUPlace, CUDAPinnedPlace, or CUDAPlace. Defaults to None, which uses the global place.
  • stop_gradient: [optional, bool] Whether to block gradient propagation in Autograd. Defaults to True, meaning gradients are not propagated.
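To make the dtype rules concrete, here is a small sketch (assuming a default CPU setup) that prints the inferred types and then sets dtype, place, and stop_gradient explicitly:

import paddle

# A Python float picks up the global default dtype (usually 'float32').
print(paddle.get_default_dtype())       # float32
print(paddle.to_tensor(3.14).dtype)     # paddle.float32

# Other inputs have their dtype inferred: a Python int becomes int64.
print(paddle.to_tensor(3).dtype)        # paddle.int64

# dtype, place and stop_gradient can all be set explicitly.
t = paddle.to_tensor([1, 2, 3],
                     dtype='float64',
                     place=paddle.CPUPlace(),
                     stop_gradient=False)
print(t.dtype, t.place, t.stop_gradient)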

Return value

A Tensor created from data.

Examples

import paddle

type(paddle.to_tensor(1))
# <class 'paddle.Tensor'>

paddle.to_tensor(1)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

x = paddle.to_tensor(1, stop_gradient=False)
print(x)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
#        [1])

paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
# Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
#        [[0.10000000, 0.20000000],
#         [0.30000001, 0.40000001]])

type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
# <class 'paddle.Tensor'>

paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
# Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
#        [[(1+1j), (2+0j)],
#         [(3+2j), (4+0j)]])
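The parameter list also mentions numpy.ndarray input, which the examples above do not cover. A minimal sketch (the dtype is taken from the array):

import numpy as np
import paddle

arr = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32)
t = paddle.to_tensor(arr)
print(t.dtype)    # paddle.float32
print(t.shape)    # [2, 2]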

Function implementation

def to_tensor(data, dtype=None, place=None, stop_gradient=True):
    r"""
    Constructs a ``paddle.Tensor`` from ``data`` ,
    which can be scalar, tuple, list, numpy\.ndarray, paddle\.Tensor.
    If the ``data`` is already a Tensor, copy will be performed and return a new tensor.
    If you only want to change stop_gradient property, please call ``Tensor.stop_gradient = stop_gradient`` directly.

    Args:
        data(scalar|tuple|list|ndarray|Tensor): Initial data for the tensor.
            Can be a scalar, list, tuple, numpy\.ndarray, paddle\.Tensor.
        dtype(str|np.dtype, optional): The desired data type of returned tensor. Can be 'bool', 'float16',
            'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8', 'complex64', 'complex128'.
            Default: None, infers dtype from ``data`` except for python float number which gets dtype
            from ``get_default_type`` .
        place(CPUPlace|CUDAPinnedPlace|CUDAPlace|str, optional): The place to allocate Tensor. Can be
            CPUPlace, CUDAPinnedPlace, CUDAPlace. Default: None, means global place. If ``place`` is
            string, It can be ``cpu``, ``gpu:x`` and ``gpu_pinned``, where ``x`` is the index of the GPUs.
        stop_gradient(bool, optional): Whether to block the gradient propagation of Autograd. Default: True.

    Returns:
        Tensor: A Tensor constructed from ``data`` .

    Examples:

    .. code-block:: python

        import paddle

        type(paddle.to_tensor(1))
        # <class 'paddle.Tensor'>

        paddle.to_tensor(1)
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
        #        [1])

        x = paddle.to_tensor(1, stop_gradient=False)
        print(x)
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
        #        [1])

        paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
        # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
        #        [1])

        paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
        # Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
        #        [[0.10000000, 0.20000000],
        #         [0.30000001, 0.40000001]])

        type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
        # <class 'paddle.Tensor'>

        paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
        # Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
        #        [[(1+1j), (2+0j)],
        #         [(3+2j), (4+0j)]])
    """
    place = _get_paddle_place(place)
    if place is None:
        place = _current_expected_place()
    if _non_static_mode():
        return _to_tensor_non_static(data, dtype, place, stop_gradient)

    # call assign for static graph
    else:
        re_exp = re.compile(r'[(](.+?)[)]', re.S)
        place_str = re.findall(re_exp, str(place))[0]

        with paddle.static.device_guard(place_str):
            return _to_tensor_static(data, dtype, stop_gradient)
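In the static-graph branch, the regular expression simply pulls the text between parentheses out of the place's string form so it can be passed to paddle.static.device_guard. A standalone sketch of that extraction, using hypothetical repr strings rather than real place objects (the actual repr format depends on the Paddle version):

import re

re_exp = re.compile(r'[(](.+?)[)]', re.S)

# Hypothetical place reprs for illustration only.
for place_repr in ['Place(cpu)', 'Place(gpu:0)', 'Place(gpu_pinned)']:
    place_str = re.findall(re_exp, place_repr)[0]
    print(place_str)    # cpu / gpu:0 / gpu_pinned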

def full_like(x, fill_value, dtype=None, name=None):
    """
    This function creates a tensor filled with ``fill_value`` which has identical shape of ``x`` and ``dtype``.
    If the ``dtype`` is None, the data type of Tensor is same with ``x``.

    Args:
        x(Tensor): The input tensor which specifies shape and data type. The data type can be bool, float16, float32, float64, int32, int64.
        fill_value(bool|float|int): The value to fill the tensor with. Note: this value shouldn't exceed the range of the output data type.
        dtype(np.dtype|str, optional): The data type of output. The data type can be one
            of bool, float16, float32, float64, int32, int64. The default value is None, which means the output
            data type is the same as input.
        name(str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.

    Returns:
        Tensor: Tensor which is created according to ``x``, ``fill_value`` and ``dtype``.

    Examples:

    .. code-block:: python

          import paddle

          input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32', name='input')
          output = paddle.full_like(input, 2.0)
          # [[2. 2. 2.]
          #  [2. 2. 2.]]
    """
    if dtype is None:
        dtype = x.dtype
    else:
        if not isinstance(dtype, core.VarDesc.VarType):
            dtype = convert_np_dtype_to_dtype_(dtype)

    if in_dygraph_mode():
        return _C_ops.full_like(x, fill_value, dtype, x.place)

    if _in_legacy_dygraph():
        return _legacy_C_ops.fill_any_like(
            x, 'value', fill_value, 'dtype', dtype
        )

    helper = LayerHelper("full_like", **locals())
    check_variable_and_dtype(
        x,
        'x',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like',
    )
    check_dtype(
        dtype,
        'dtype',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like/zeros_like/ones_like',
    )
    out = helper.create_variable_for_type_inference(dtype=dtype)

    helper.append_op(
        type='fill_any_like',
        inputs={'X': [x]},
        attrs={'value': fill_value, "dtype": dtype},
        outputs={'Out': [out]},
    )
    out.stop_gradient = True
    return out
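As a brief usage note (a sketch under a default CPU setup), full_like pairs naturally with to_tensor, and zeros_like / ones_like go through the same code path, which is why the dtype check above is labeled 'full_like/zeros_like/ones_like':

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])

# full_like keeps the shape of x; the dtype can be overridden.
print(paddle.full_like(x, 7, dtype='int32'))

# zeros_like and ones_like are the corresponding special cases.
print(paddle.zeros_like(x))
print(paddle.ones_like(x))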

This article is reposted from: https://blog.csdn.net/hy592070616/article/details/129392411
Copyright belongs to the original author, von Neumann. If there is any infringement, please contact us for removal.
