

Simulating the sin Function with PyTorch



1. Introduction

This article shows two ways to approximate the sin function. Both treat the problem as a small machine-learning task: using Python's torch module, we fit the coefficients of a polynomial to sin(x) by gradient descent, so that the polynomial reproduces the function.
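Concretely, both methods below fit a cubic y ≈ a + b·x + c·x² + d·x³ by minimizing the summed squared error over a grid of points in [-π, π]. A minimal sketch of just the model and the loss (the helper names poly and squared_error are illustrative, not from the original code):

import math
import torch

# Training data: 2000 points on [-pi, pi] and their sine values.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)

def poly(a, b, c, d, x):
    # Cubic polynomial model: y = a + b x + c x^2 + d x^3
    return a + b * x + c * x ** 2 + d * x ** 3

def squared_error(y_pred, y):
    # Summed squared error, the loss minimized in both methods below.
    return (y_pred - y).pow(2).sum()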

2. Method One

# This example uses torch to approximate the sin function:
# a cubic polynomial is fitted, which amounts to a simple machine-learning loop.
import torch
import math

dtype = torch.float                 # data type
device = torch.device("cpu")        # device type
# device = torch.device("cuda:0")   # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)  # like numpy's linspace
y = torch.sin(x)                    # target tensor

# Randomly initialize weights from a standard normal distribution;
# the parameters are then improved by learning.
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y with the cubic polynomial (also a tensor)
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop by hand to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent, once per iteration
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

# Final result
print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Output:

99 676.0404663085938
199 478.38140869140625
299 339.39117431640625
399 241.61537170410156
499 172.80801391601562
599 124.37007904052734
699 90.26084899902344
799 66.23435974121094
899 49.30537033081055
999 37.37403106689453
1099 28.96288299560547
1199 23.031932830810547
1299 18.848905563354492
1399 15.898048400878906
1499 13.81600570678711
1599 12.34669017791748
1699 11.309612274169922
1799 10.57749080657959
1899 10.060576438903809
1999 9.695555686950684
Result: y = -0.03098311647772789 + 0.852223813533783 x + 0.005345103796571493 x^2 + -0.09268788248300552 x^3
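As a quick sanity check (not part of the original article), the fitted cubic can be evaluated on the same grid and compared with torch.sin; this sketch assumes the variables a, b, c, d, x, y from the script above are still in scope:

# Evaluate the learned polynomial on the training grid and compare with sin(x).
y_fit = a + b * x + c * x ** 2 + d * x ** 3
print('max abs error :', (y_fit - y).abs().max().item())
print('mean abs error:', (y_fit - y).abs().mean().item())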

3. Method Two

import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create Tensors to hold input and outputs.
# By default, requires_grad=False, which indicates that we do not need to
# compute gradients with respect to these Tensors during the backward pass.
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Create random Tensors for weights. For a third order polynomial, we need
# 4 weights: y = a + b x + c x^2 + d x^3
# Setting requires_grad=True indicates that we want to compute gradients with
# respect to these Tensors during the backward pass.
a = torch.randn((), device=device, dtype=dtype, requires_grad=True)
b = torch.randn((), device=device, dtype=dtype, requires_grad=True)
c = torch.randn((), device=device, dtype=dtype, requires_grad=True)
d = torch.randn((), device=device, dtype=dtype, requires_grad=True)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y using operations on Tensors.
    y_pred = a + b * x + c * x ** 2 + d * x ** 3
    # Compute and print loss; loss.item() gets the scalar value it holds.
    loss = (y_pred - y).pow(2).sum()
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass. This call computes the
    # gradient of loss with respect to all Tensors with requires_grad=True;
    # afterwards a.grad, b.grad, c.grad and d.grad hold those gradients.
    loss.backward()

    # Manually update weights using gradient descent. Wrap in torch.no_grad()
    # because the weights have requires_grad=True, but we do not need to
    # track the update itself in autograd.
    with torch.no_grad():
        a -= learning_rate * a.grad
        b -= learning_rate * b.grad
        c -= learning_rate * c.grad
        d -= learning_rate * d.grad

        # Manually zero the gradients after updating weights
        a.grad = None
        b.grad = None
        c.grad = None
        d.grad = None

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')

Output:

99 1702.320556640625
199 1140.3609619140625
299 765.3402709960938
399 514.934326171875
499 347.6383972167969
599 235.80038452148438
699 160.98876953125
799 110.91152954101562
899 77.36819458007812
999 54.883243560791016
1099 39.79965591430664
1199 29.673206329345703
1299 22.869291305541992
1399 18.293842315673828
1499 15.214327812194824
1599 13.1397705078125
1699 11.740955352783203
1799 10.796865463256836
1899 10.159022331237793
1999 9.727652549743652
Result: y = 0.019909318536520004 + 0.8338049650192261 x + -0.0034346890170127153 x^2 + -0.09006795287132263 x^3
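As a variation (not in the original article), the manual parameter update and gradient zeroing inside torch.no_grad() can also be handed to an optimizer such as torch.optim.SGD. A minimal sketch of the equivalent training loop, reusing the tensors defined above:

optimizer = torch.optim.SGD([a, b, c, d], lr=1e-6)
for t in range(2000):
    y_pred = a + b * x + c * x ** 2 + d * x ** 3   # forward pass
    loss = (y_pred - y).pow(2).sum()               # summed squared error
    optimizer.zero_grad()                          # clear old gradients
    loss.backward()                                # autograd fills a.grad, b.grad, c.grad, d.grad
    optimizer.step()                               # gradient-descent update of a, b, c, d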

4. Summary

Both methods only fit up to the third power, so the approximation is reasonable only when x is fairly small, i.e. within the interval [-π, π] the model is trained on; outside that range the cubic quickly diverges from sin. In addition, since the coefficients are initialized randomly, each run may give slightly different results.
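To illustrate this, the learned polynomial can be compared with math.sin at points inside and outside the fitted interval. A small check assuming a, b, c, d from either script above (the helper fitted is just for illustration):

def fitted(v):
    # Evaluate the learned cubic at a single point v.
    return a.item() + b.item() * v + c.item() * v ** 2 + d.item() * v ** 3

for v in (0.5, 1.5, math.pi, 2 * math.pi):
    # Inside [-pi, pi] the values stay close; at 2*pi the cubic drifts far from sin.
    print(f'x = {v:6.3f}   sin(x) = {math.sin(v):8.4f}   cubic = {fitted(v):8.4f}')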

Thanks for reading and for your support.


Reposted from: https://blog.csdn.net/m0_54218263/article/details/122391053
Copyright belongs to the original author hhh江月. In case of infringement, please contact us for removal.
