
Deep Learning Framework PyTorch: Introduction and Practice, Study Notes (2)

Posted: 2021-05-05 05:59:35


Tensor and Autograd: Tensor

Tensor

Creating tensors:

Tensor(*sizes): memory is not allocated when the tensor is created; it is only allocated when the tensor is first used. All of the other constructors below allocate memory immediately.
ones(*sizes) / zeros(*sizes): all ones / all zeros.
eye(*sizes): 1 on the diagonal, 0 elsewhere.
arange(s, e, step): values from s to e with step size step.
linspace(s, e, steps): values from s to e, evenly split into steps points.
rand(*sizes) / randn(*sizes): uniform / standard normal distribution.
normal(mean, std) / uniform(from, to): normal / uniform distribution.
randperm(m): a random permutation of 0..m-1.
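A minimal demo of a few of these constructors (standard torch calls; the values in the comments are what they produce):

import torch

print(torch.ones(2, 3))          # 2 x 3 tensor of ones
print(torch.zeros(2, 3))         # 2 x 3 tensor of zeros
print(torch.eye(3))              # 1 on the diagonal, 0 elsewhere
print(torch.arange(1, 6, 2))     # 1, 3, 5
print(torch.linspace(1, 10, 3))  # 1, 5.5, 10
print(torch.randperm(5))         # a random permutation of 0..4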

Creating a tensor from a Python list

import torch

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(a)

b = a.tolist()     # convert back to a nested Python list
print(b)

print(a.size())

print(a.numel())   # the number of elements

# create a Tensor with the same size as a
c = torch.Tensor(a.size())
print(c.size())
# inspect the shape of c
print(c.shape)

Common tensor operations:

Tensor.view() reshapes a tensor without changing its data; the original tensor and the view share the same underlying memory.

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(a)
b = a.view(3, 2)
print(b)

# -1 lets PyTorch infer the size of that dimension automatically
c = a.view(-1, 6)
print(c)
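A quick check of the shared-memory claim above: modifying an element through the view also changes the original tensor.

import torch

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
b = a.view(3, 2)
b[0, 0] = 100
print(a)   # a[0, 0] is now 100 as well, because a and b share the same storage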

squeeze / unsqueeze: remove / insert dimensions of size 1.

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
b = a.view(3, 2)
# insert a new dimension at position 1
b = b.unsqueeze(1)
print(b.size())
# remove the second-to-last dimension (which has size 1)
c = b.squeeze(-2)
print(c.size())

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(a.size())
b = a.view(1, 1, 1, 2, 3)
print(b.size())
c = b.squeeze(0)
print(c.size())
# remove every dimension of size 1
c = c.squeeze()
print(c.size())

resize_: changes the size of a tensor. If the new size is larger than the original, new memory is allocated; if it is smaller, the data is still kept in the underlying storage.

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(a)
print(a.size())
b = a.resize_(1, 3)
print(b.size())
print(b)
b = a.resize_(3, 3)
print(b.size())
print(b)
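A small check of the claim that shrinking does not discard data: the hidden elements are still visible through the tensor's underlying storage (storage() is a standard PyTorch accessor).

import torch

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
a.resize_(1, 3)
print(a)            # only the first three values are exposed
print(a.storage())  # all six original values remain in the underlying storage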

Indexing:

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
print(a)
# element-wise comparison: 1 where the condition holds, 0 otherwise
print(a > 1)
# select the elements that satisfy the condition
print(a[a > 1])
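For reference, the same selection can also be written with masked_select, a standard PyTorch function; a minimal sketch:

import torch

a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
mask = a > 1
print(torch.masked_select(a, mask))   # same result as a[a > 1], flattened to 1-D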

gather(input, dim, index): selects values along dimension dim according to index; the output has the same size as index. For dim=0, out[i][j] = input[index[i][j]][j]; for dim=1, out[i][j] = input[i][index[i][j]].

a = torch.arange(0, 16).view(4, 4)
print(a)
# dim=0: pick one element from each column, row chosen by index -> the main diagonal
index = torch.LongTensor([[0, 1, 2, 3]])
b = a.gather(0, index)
print(b)
# dim=1: pick one element from each row, column chosen by index -> the anti-diagonal
index = torch.LongTensor([[3], [2], [1], [0]])
c = a.gather(1, index)
print(c)

scatter_ is the inverse of gather: it writes values back to the positions specified by index.

# write the gathered diagonal b back into a zero tensor at the same positions
c = torch.zeros(4, 4)
index = torch.LongTensor([[0, 1, 2, 3]])
c.scatter_(0, index, b)
print(c.size())
print(c)

Advanced indexing:

x[[1, 2], [1, 2], [2, 0]]    # picks x[1, 1, 2] and x[2, 2, 0]
x[[2, 1, 0], [0], [1]]       # picks x[2, 0, 1], x[1, 0, 1], x[0, 0, 1]
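A runnable version of the two expressions above; the tensor x and its shape (3, 3, 3) are assumptions chosen only for illustration.

import torch

x = torch.arange(0, 27).view(3, 3, 3)
print(x[[1, 2], [1, 2], [2, 0]])   # the elements x[1, 1, 2] and x[2, 2, 0]
print(x[[2, 1, 0], [0], [1]])      # the elements x[2, 0, 1], x[1, 0, 1], x[0, 0, 1]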

Linear regression

import torch as t
from matplotlib import pyplot as plt
from IPython import display

t.manual_seed(100)

def get_fake_data(batch_size=8):
    # y = 2x + 3, plus Gaussian noise
    x = t.rand(batch_size, 1) * 20
    y = x * 2 + (1 + t.randn(batch_size, 1)) * 3
    return x, y

x, y = get_fake_data()
plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())

w = t.rand(1, 1)
b = t.zeros(1, 1)
lr = 0.001

for ii in range(20000):
    x, y = get_fake_data()

    # forward pass
    y_pred = x.mul(w) + b.expand_as(y)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()

    # manually computed gradients (backward pass)
    dloss = 1
    dy_pred = dloss * (y_pred - y)
    dw = x * dy_pred
    db = dy_pred.sum()

    # gradient descent update
    w.sub_((lr * dw).sum())
    b.sub_(lr * db)

    if ii % 1000 == 0:
        # plot the current fit against freshly sampled data
        display.clear_output(wait=True)
        x = t.arange(0, 20).view(-1, 1)
        y = x.mul(w) + b.expand_as(x)
        plt.plot(x.numpy(), y.numpy())
        x2, y2 = get_fake_data(batch_size=20)
        plt.scatter(x2.numpy(), y2.numpy())
        plt.xlim(0, 20)
        plt.ylim(0, 41)
        plt.show()
        plt.pause(0.5)

Tensor and Autograd: Autograd

Variable does not support some in-place functions, because they modify the tensor itself, while backpropagation needs the original tensor to be cached in order to compute gradients.
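A minimal sketch of the failure mode described above (the exact error message depends on the PyTorch version): exp() caches its output for the backward pass, so modifying that output in place breaks the gradient computation.

import torch as t
from torch.autograd import Variable as V

x = V(t.ones(3), requires_grad=True)
y = x.exp()    # exp saves its output for use in backward
y.add_(1)      # in-place modification invalidates the saved value
try:
    y.sum().backward()
except RuntimeError as e:
    print(e)   # a variable needed for gradient computation was modified in place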

torch.autograd.grad(z, y)

returns the gradient of z with respect to y.
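A small example of this call, assuming a PyTorch version where torch.autograd.grad is available: z is built from y, and the call returns a tuple containing dz/dy.

import torch as t
from torch.autograd import Variable as V

x = V(t.ones(3), requires_grad=True)
y = x * 2
z = (y ** 2).sum()
print(t.autograd.grad(z, y))   # (dz/dy,) = (2 * y,), i.e. a tensor of 4s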

autograd builds the computation graph from the user's operations on Variables; every operation on a Variable is abstracted as a Function.
Nodes created directly by the user are called leaf nodes; the grad_fn of a leaf node is None.
Leaf Variables that require gradients carry an AccumulateGrad function, because their gradients are accumulated rather than overwritten.
Variables default to requires_grad=False. Once a node's requires_grad is set to True, every node that depends on it also has requires_grad=True.
volatile=True is propagated to every node that depends on it, takes priority over requires_grad=True, and volatile nodes neither compute gradients nor support backpropagation.
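A short sketch of these properties (leaf nodes, grad_fn, and requires_grad propagation); volatile is omitted since it only exists in older PyTorch versions.

import torch as t
from torch.autograd import Variable as V

a = V(t.ones(3, 4))                       # user-created leaf node, requires_grad defaults to False
b = V(t.zeros(3, 4), requires_grad=True)  # leaf node that requires grad
c = a + b                                 # produced by an operation, so it carries a grad_fn
print(a.requires_grad, b.requires_grad, c.requires_grad)   # False True True
print(a.grad_fn, b.grad_fn)               # None None (leaf nodes)
print(c.grad_fn)                          # an AddBackward function node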

Linear regression with Variable

import torch as t
from torch.autograd import Variable as V
from matplotlib import pyplot as plt
from IPython import display

t.manual_seed(1000)

def get_fake_data(batch_size=16):
    x = t.rand(batch_size, 1) * 20
    y = x * 2 + (1 + t.randn(batch_size, 1)) * 3
    return x, y

x, y = get_fake_data()
plt.scatter(x.squeeze().numpy(), y.squeeze().numpy())

w = V(t.rand(1, 1), requires_grad=True)
b = V(t.zeros(1, 1), requires_grad=True)
lr = 0.0001

for ii in range(8000):
    x, y = get_fake_data()
    x = V(x)
    y = V(y)

    # forward pass
    y_pred = x.mul(w) + b.expand_as(y)
    loss = 0.5 * (y_pred - y) ** 2
    loss = loss.sum()

    # backward pass: autograd fills in w.grad and b.grad
    loss.backward()

    # gradient descent update on the underlying data
    w.data = w.data - lr * w.grad.data
    b.data = b.data - lr * b.grad.data

    # clear the gradients, since autograd accumulates them
    w.grad.data.zero_()
    b.grad.data.zero_()

    if ii % 1000 == 0:
        display.clear_output(wait=True)
        x = t.arange(0, 20).view(-1, 1)
        y = x.mul(w.data) + b.data.expand_as(x)
        plt.plot(x.numpy(), y.numpy())
        x2, y2 = get_fake_data(batch_size=20)
        plt.scatter(x2.numpy(), y2.numpy())
        plt.xlim(0, 20)
        plt.ylim(0, 41)
        plt.show()
        plt.pause(0.5)

print(w.data.squeeze()[0], b.data.squeeze()[0])
