
Deep Learning Framework PyTorch: Introduction and Practice, Learning Notes (1)

Date: 2020-11-15 14:43:18


Starting today I am learning PyTorch from scratch, and these are my notes.

First Steps with PyTorch

Tensor

import torch as t

x = t.Tensor(5, 3)      # create a 5x3 matrix (uninitialized)
print(x.size())         # print the dimensions of x
print(x.size()[0])      # print the 0th dimension of x
print(x.size(0))        # print the 0th dimension of x

y = t.rand(5, 3)        # matrix of random numbers in [0, 1)

# addition
x + y
t.add(x, y)
result = t.rand(5, 3)         # pre-allocate the output tensor
t.add(x, y, out=result)       # write the sum into result

In-place operations

y.add(x)     # does not change y (returns a new Tensor)
y.add_(x)    # changes y in place

Functions whose names end with an underscore modify the Tensor itself: x.add(y) and x.t() return a new Tensor and leave x unchanged, while x.add_(y) changes x.
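For example, a quick check of the difference (the values here are just illustrative):

import torch as t

x = t.ones(2, 2)
y = t.ones(2, 2)

z = x.add(y)    # returns a new Tensor, x is unchanged
print(x)        # still all ones
print(z)        # all twos

x.add_(y)       # modifies x in place
print(x)        # now all twos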

Tensor and NumPy

Tensor to NumPy

a = t.Tensor(5, 3)
b = a.numpy()

NumPy to Tensor

import numpy as np

a = np.ones(5)
b = t.from_numpy(a)

After the conversion, the Tensor and the NumPy array share the same memory, so changing one also changes the other.
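A minimal sketch of this shared-memory behaviour, continuing the example above:

import numpy as np
import torch as t

a = np.ones(5)
b = t.from_numpy(a)

a += 1        # modify the NumPy array in place
print(b)      # the Tensor also shows 2s

b.add_(1)     # modify the Tensor in place
print(a)      # the NumPy array also shows 3s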

Autograd: Automatic Differentiation

Variable

data: the wrapped Tensor

grad: stores the gradient of the Variable; it is itself a Variable

grad_fn: the Function that computes the gradients of the inputs during backpropagation

from torch.autograd import Variable

x = Variable(t.ones(2, 2), requires_grad=True)
y = x.sum()
y.backward()
print(x.grad)
# 1, 1
# 1, 1
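A quick look at the three attributes listed above, continuing the same example (the exact repr of grad_fn depends on the PyTorch version):

from torch.autograd import Variable
import torch as t

x = Variable(t.ones(2, 2), requires_grad=True)
y = x.sum()

print(x.data)      # the wrapped tensor: all ones
print(y.grad_fn)   # the Function that created y (a sum backward node)

y.backward()
print(x.grad)      # the gradient: all ones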

Gradients are accumulated during backpropagation: each backward pass adds to the gradients left from previous passes, so the gradients must be cleared to zero before each new backward pass.

x.grad.data.zero_()
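A small sketch showing the accumulation and the effect of clearing the gradient:

from torch.autograd import Variable
import torch as t

x = Variable(t.ones(2, 2), requires_grad=True)

y = x.sum()
y.backward()
print(x.grad)          # all ones

y = x.sum()
y.backward()
print(x.grad)          # all twos: the new gradient was added to the old one

x.grad.data.zero_()    # clear the gradient
y = x.sum()
y.backward()
print(x.grad)          # back to all ones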

Neural Networks

Defining the network

To define a network, subclass nn.Module and implement its forward method. Layers with learnable parameters go in the constructor __init__() (layers without learnable parameters may be placed there or not).

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class MyNet(nn.Module):
    def __init__(self):
        # a subclass of nn.Module must call the parent constructor in its own constructor
        super(MyNet, self).__init__()
        self.first = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
        self.second = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3)
        self.Linear1 = nn.Linear(in_features=32*12*12, out_features=20)
        self.Linear2 = nn.Linear(in_features=20, out_features=1)

    def forward(self, x):
        x = self.first(x)
        x = F.relu(x)
        x = self.second(x)
        x = F.relu(x)
        x = x.view(x.size()[0], -1)
        x = self.Linear1(x)
        x = self.Linear2(x)
        return x

Net = MyNet()
print(Net)

Once the forward function is defined, backward is implemented automatically by autograd.

# network parameters
params = list(Net.parameters())
print(len(params))

# Net.named_parameters() returns the learnable parameters together with their names
for name, parameters in Net.named_parameters():
    print(name, ":", parameters.size())

Both the input and the output of the forward function are Variables.

input = Variable(torch.rand((1, 3, 16, 16)))
out = Net(input)
print(out.size())

torch.nn only supports mini-batches, so the input must be 4-dimensional: batch_size x channels x height x width.
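If there is only a single sample, a batch dimension can be added first, for example with unsqueeze (a minimal sketch):

import torch

sample = torch.rand(3, 16, 16)    # a single 3-channel 16x16 image, no batch dimension
batch = sample.unsqueeze(0)       # shape becomes (1, 3, 16, 16)
print(batch.size())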

Loss function

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
print(loss)

Net.zero_grad()
print("gradient before backward:", Net.first.bias.grad)
loss.backward()
print("gradient after backward:", Net.first.bias.grad)

Optimizer

import torch.optim as optim

optimizer = optim.Adam(Net.parameters(), lr=0.01)

# clear the gradients first
optimizer.zero_grad()

out = Net(input)
print(out.size())

target = torch.rand((1, 1))
criterion = nn.MSELoss()
loss = criterion(out, target)
loss.backward()

# update the parameters
optimizer.step()

Data loading and preprocessing

A DataLoader is an iterable object: it collates the individual samples returned by a Dataset into batches, and provides multi-process loading for speed as well as data shuffling. When the program has gone through the whole dataset once, one iteration over the DataLoader is also complete.

import torchvision as tv
import torch as t
import torchvision.transforms as transforms
from torchvision.transforms import ToPILImage
import torch.nn as nn
import torch
import torch.nn.functional as F
from torch.autograd import Variable

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

trainset = tv.datasets.CIFAR10(root='F:/code/EDSR-PyTorch-master-png/',
                               train=True, download=True, transform=transform)
trainloader = t.utils.data.DataLoader(trainset, batch_size=4,
                                      shuffle=True, num_workers=0)

testset = tv.datasets.CIFAR10(root='F:/code/EDSR-PyTorch-master-png/',
                              train=False, download=True, transform=transform)
testloader = t.utils.data.DataLoader(testset, batch_size=4,
                                     shuffle=False, num_workers=0)
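A minimal sketch of iterating over the trainloader defined above: each step yields one batch of 4 images and their labels. (CIFAR-10 images are 3x32x32, so they would not fit the MyNet defined earlier without changing its layer sizes.)

# fetch a single batch
dataiter = iter(trainloader)
images, labels = next(dataiter)
print(images.size())   # torch.Size([4, 3, 32, 32])
print(labels)          # 4 class labels

# one full pass over the dataset is one iteration over the DataLoader
for i, (images, labels) in enumerate(trainloader):
    pass               # training code would go here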
