

Deep Learning: Visualization Methods (Model Visualization, Training Process Visualization, Feature Extraction Visualization)

0. Environment

python 3.8.5 + pytorch

1. Model Structure Visualization

1.1 netron

Step 1: install netron in the virtual environment

```shell
pip install netron
```

Step 2: launch netron from the virtual environment


Step 3: open http://localhost:8080/ in a browser

Step 4: select a saved model file (xxx.pt)

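To have a `.pt` file to open in netron, you first need to save a model to disk. A minimal sketch, with an illustrative toy model and file name that are not from the original post; saving a traced TorchScript module gives netron a fixed graph to render:

```python
import torch
from torch import nn

# Toy model just for illustration
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

# Tracing freezes the computation graph, which netron can then display
traced = torch.jit.trace(model, torch.randn(1, 4))
traced.save("model.pt")  # open this file from the netron UI
```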

1.2 Using tensorboard

Step 1: install tensorboard; the simplest way is to install tensorflow directly

```shell
pip install tensorflow==1.15.0 -i https://mirrors.aliyun.com/pypi/simple
```

Step 2: set it up in code

```python
from torch.utils.tensorboard import SummaryWriter

'''Set up after the model is built'''
writer = SummaryWriter(log_dir='./output/log')
# Match this to your actual training samples: the batch size (10) is arbitrary,
# but the feature length (4) must equal the feature length of your samples
writer.add_graph(model, torch.empty(10, 4))

'''During backpropagation, record loss and acc'''  # visualization output
writer.add_scalar('loss', _loss, train_step)
writer.add_scalar('acc', _acc, train_step)
train_step += 1

'''Plot train and test loss together on one chart; note the s in add_scalars'''
writer.add_scalars('epoch_loss', {'train': train_loss, 'test': test_loss}, epoch)
```

Step 3:

  1. Open a cmd window;
  2. Switch the current drive to the one that holds the events file;
  3. Make sure the path to the events file contains no Chinese characters;
  4. Run: tensorboard --logdir C:\Users\...\output\log
  5. Open http://localhost:6006/ in a browser

2. Training Process Visualization

2.1 tensorboard

As mentioned above, you only need to add the scalars during training.

```python
'''During backpropagation, record loss and acc'''  # visualization output
writer.add_scalar('loss', _loss, train_step)
writer.add_scalar('acc', _acc, train_step)
train_step += 1
```


2.2 Plain code

```python
if batch_idx % 100 == 0:
    print(f"Train Epoch:{epoch} [{batch_idx*len(data)}/{len(train_loader.dataset)} "
          f"({100.*batch_idx/len(train_loader):.0f}%)]\tloss:{loss.item():.6f}")
```

Result: a progress line with the current loss is printed every 100 batches.
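The one-line print above can be factored into a small helper so the format is easy to check in isolation. A plain-Python sketch; the function name is mine, not from the original post:

```python
def progress_line(epoch, batch_idx, batch_size, dataset_size, loss):
    """Format a training progress line like the print statement above."""
    seen = batch_idx * batch_size              # samples processed so far
    pct = 100.0 * seen / dataset_size          # percentage of the dataset
    return f"Train Epoch:{epoch} [{seen}/{dataset_size} ({pct:.0f}%)]\tloss:{loss:.6f}"

print(progress_line(1, 100, 32, 60000, 0.123456))
# → Train Epoch:1 [3200/60000 (5%)]	loss:0.123456
```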

3. Feature Extraction Visualization

This requires tensorboard combined with forward hooks; straight to the code.
model: LeNet
data: MNIST

```python
import torch
from torch import nn
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from torch.utils.tensorboard import SummaryWriter
from myutils.metrics import Acc_Score  # self-written accuracy metric class


class LeNet_BN(nn.Module):
    def __init__(self, in_chanel) -> None:
        super(LeNet_BN, self).__init__()
        self.feature_hook_img = {}
        self.features = nn.Sequential(
            nn.Conv2d(in_chanel, 6, kernel_size=5), nn.BatchNorm2d(6), nn.Sigmoid(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, kernel_size=5), nn.BatchNorm2d(16), nn.Sigmoid())
        self.classifi = nn.Sequential(
            nn.AvgPool2d(kernel_size=2, stride=2), nn.Flatten(),
            nn.Linear(256, 120), nn.BatchNorm1d(120), nn.Sigmoid(),
            nn.Linear(120, 84), nn.BatchNorm1d(84), nn.Sigmoid(),
            nn.Linear(84, 10))

    def forward(self, X):
        X = self.features(X)  # feature extraction and classification are kept separate
        X = self.classifi(X)
        return X

    def add_hooks(self):
        # visualization hooks
        def create_hook_fn(idx):
            def hook_fn(model, input, output):
                self.feature_hook_img[idx] = output.cpu()
            return hook_fn
        for _idx, _layer in enumerate(self.features):
            _layer.register_forward_hook(create_hook_fn(_idx))

    def add_image_summary(self, writer, step, prefix=None):
        if len(self.feature_hook_img) == 0:
            return
        if prefix is None:
            prefix = 'layer'
        else:
            prefix = f"{prefix}_layer"
        for _k in self.feature_hook_img:
            _v = self.feature_hook_img[_k][0:1, ...]  # only the first image of the batch
            # (1,c,h,w) -> (c,1,h,w): swap the channels so each channel's
            # extracted feature map is shown as a separate image
            _v = torch.permute(_v, (1, 0, 2, 3))
            writer.add_images(f"{prefix}_{_k}", _v, step)


if __name__ == '__main__':
    # load data
    # device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    tsf = transforms.Compose([transforms.ToTensor()])
    train_data = datasets.MNIST(root='dataset/mnist_train', train=True, transform=tsf, download=True)
    train_data_loader = DataLoader(train_data, batch_size=32, shuffle=True)
    test_data = datasets.MNIST(root='dataset/mnist_test', train=False, transform=tsf, download=True)
    test_data_loader = DataLoader(test_data, batch_size=32, shuffle=False)
    model = LeNet_BN(1)
    model.add_hooks()
    lr = 1e-2
    epochs = 10
    loss_f = torch.nn.CrossEntropyLoss()
    acc_f = Acc_Score()
    opt = torch.optim.SGD(model.parameters(), lr)
    writer = SummaryWriter(log_dir='./output/log')
    writer.add_graph(model, torch.empty(10, 1, 28, 28))
    for epoch in range(epochs):
        model.train()
        for idx, data in enumerate(train_data_loader):
            X, y = data
            y = y.to(torch.long)
            # forward pass
            y_pred = model(X)
            train_loss = loss_f(y_pred, y)
            train_acc = acc_f(y_pred, y)
            # backward pass
            opt.zero_grad()
            train_loss.backward()
            opt.step()
            if (idx + 1) % 100 == 0:
                print(f"epoch:{epoch} |{(idx+1)*32}/{len(train_data)}"
                      f"({100.*(idx+1)*32/len(train_data):.2f}%)|"
                      f"\tloss:{train_loss.item():.3f}\tacc:{train_acc.item():.2f}")
        model.add_image_summary(writer, epoch, 'train')  # log this epoch's feature maps
        test_loss = 0
        test_acc = 0
        test_numbers = len(test_data) / 32
        model.eval()
        for data in test_data_loader:
            X, y = data
            y = y.to(torch.long)
            y_pred = model(X)
            test_loss += loss_f(y_pred, y).item()
            test_acc += acc_f(y_pred, y).item()
        test_loss = test_loss / test_numbers
        test_acc = test_acc / test_numbers
        print('test res:')
        print(f"epoch:{epoch} \tloss:{test_loss:.3f}\tacc:{test_acc:.2f}")
        print('-' * 80)
        writer.add_scalars('epoch_loss', {'train': train_loss.item(), 'test': test_loss}, epoch)
        writer.add_scalars('epoch_acc', {'train': train_acc.item(), 'test': test_acc}, epoch)
    writer.close()
```
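The hook mechanism used above can be isolated into a stripped-down sketch: register a forward hook on each layer of a tiny network and capture its outputs in a dict, exactly as `add_hooks` does. The network and variable names here are illustrative, not from the original post:

```python
import torch
from torch import nn

features = {}

def make_hook(idx):
    # Closure over idx so each layer writes to its own slot
    def hook(module, inputs, output):
        features[idx] = output.detach()
    return hook

net = nn.Sequential(nn.Conv2d(1, 6, kernel_size=5), nn.Sigmoid())
for idx, layer in enumerate(net):
    layer.register_forward_hook(make_hook(idx))

# A single forward pass fills the dict with every layer's output
net(torch.randn(1, 1, 28, 28))
print(features[0].shape)  # conv output: (1, 6, 24, 24)
```

These captured tensors are what `add_image_summary` then permutes to `(c, 1, h, w)` and sends to tensorboard with `writer.add_images`.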

Reposted from: https://blog.csdn.net/qq_42911863/article/details/126160153
Copyright belongs to the original author ZERO_pan; in case of infringement, please contact us for removal.
