

Advanced tqdm usage (Keras-style progress bars)

Introduction

In many scenarios we want a progress bar to be labeled with what it is currently running (via set_description), and we also want extra information shown in the bar, such as the model's training accuracy. Building on tqdm, this article shows how to enrich the progress bar in practical applications.

1. Simple example

    import time
    from tqdm import tqdm

    tq_bar = tqdm(range(10))
    for idx, i in enumerate(tq_bar):
        time.sleep(0.1)  # simulate work; added so the loop matches the ~8.3 it/s in the sample output
        acc_ = i * 10
        loss = 1 / ((i + 1) * 10)
        tq_bar.set_description(f'SimpleLoop [{idx+1}]')                  # label on the left of the bar
        tq_bar.set_postfix(dict(acc=f'{acc_}%', loss=f'{loss:.3f}'))     # metrics on the right of the bar
  • Result

    SimpleLoop [10]: 100%|██████████████████████████████████| 10/10 [00:01<00:00, 8.30it/s, acc=90%, loss=0.010]
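Beyond wrapping an iterable, a bar can also be driven by hand with a fixed total and explicit update() calls, which helps when one loop iteration does not equal one unit of progress. Below is a minimal sketch, not from the original post: the batches list and its sizes are made up for illustration, and only standard tqdm calls (tqdm(total=...), update, set_postfix_str) are used.

    import time
    from tqdm import tqdm

    batches = [32, 32, 32, 4]  # hypothetical chunk sizes totalling 100 samples
    with tqdm(total=sum(batches)) as pbar:
        pbar.set_description('ManualLoop')
        seen = 0
        for n in batches:
            time.sleep(0.2)                       # simulate processing one chunk
            seen += n
            pbar.update(n)                        # advance the bar by n units
            pbar.set_postfix_str(f'seen={seen}')  # free-form postfix string

set_postfix_str takes a preformatted string, whereas set_postfix takes keyword arguments or a dict and formats them for you.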

2. Use in deep-learning training (PyTorch, Keras-like)

    import torch
    from torch import nn
    from torch.nn import functional as F
    from torch.optim import AdamW, Adam
    from torch.utils.data import Dataset, TensorDataset, DataLoader
    import torchvision as tv
    from torchvision import transforms
    from tqdm import tqdm

    class simpleCNN(nn.Module):
        def __init__(self, input_dim=3, n_class=10):
            super(simpleCNN, self).__init__()
            self.features = nn.Sequential(
                nn.Conv2d(input_dim, 32, kernel_size=7, padding=2, dilation=2, bias=False),
                nn.BatchNorm2d(32),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(3, 2, 1))
            self.clf = nn.Sequential(
                nn.Linear(4608, 128),
                nn.ReLU(inplace=True),
                nn.Dropout(0.2),
                nn.Linear(128, 64),
                nn.ReLU(inplace=True),
                nn.Dropout(0.2),
                nn.Linear(64, n_class))

        def forward(self, x):
            out = self.features(x)
            out = out.view(out.size(0), -1)
            return self.clf(out)

    transform = transforms.Compose([
        transforms.ToTensor(),                                   # convert to tensor
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # normalize
    ])
    dt = tv.datasets.CIFAR10(train=True, download=True, root=r'D:\work\my_project\play_data', transform=transform)
    # dt = tv.datasets.CIFAR10(train=True, download=False, root=r'D:\work\my_project\play_data', transform=transform)
    dt_loader = DataLoader(dt, batch_size=256)

    model = simpleCNN(3, 10)
    loss_func = nn.CrossEntropyLoss()
    optm = AdamW(model.parameters(), lr=1e-3)

    for ep in range(5):
        one_batch_bar = tqdm(dt_loader)
        one_batch_bar.set_description(f'[ epoch: {ep+1} ]')
        step_counts = 0
        step_loss_sum = 0
        step_right = 0
        step_samples = 0
        for tmp_x, tmp_y in one_batch_bar:
            # forward
            optm.zero_grad()
            step_pred = model(tmp_x)
            step_loss = loss_func(step_pred, tmp_y)
            loss_print = step_loss.detach().numpy()
            step_right_i = (torch.argmax(step_pred, dim=1) == tmp_y).detach().numpy().sum()
            # backward
            step_loss.backward()
            optm.step()
            # info: running means over the epoch so far
            step_counts += 1
            step_loss_sum += loss_print
            step_right += step_right_i
            step_samples += len(tmp_y)
            one_batch_bar.set_postfix(dict(
                loss=f'{step_loss_sum/step_counts:.5f}',
                acc=f'{step_right/step_samples*100:.2f}%'))
  • Result

    [ epoch: 1 ]: 100%|██████████████████████████████████████████████████████| 196/196 [00:39<00:00, 4.95it/s, loss=1.73634, acc=36.93%]
    [ epoch: 2 ]: 100%|██████████████████████████████████████████████████████| 196/196 [00:38<00:00, 5.13it/s, loss=1.43507, acc=48.46%]
    [ epoch: 3 ]: 100%|██████████████████████████████████████████████████████| 196/196 [00:45<00:00, 4.34it/s, loss=1.30025, acc=53.85%]
    [ epoch: 4 ]: 100%|██████████████████████████████████████████████████████| 196/196 [00:37<00:00, 5.28it/s, loss=1.22050, acc=57.06%]
    [ epoch: 5 ]: 100%|██████████████████████████████████████████████████████| 196/196 [00:42<00:00, 4.65it/s, loss=1.16387, acc=58.78%]
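Keras prints one persistent summary line per epoch while the inner batch progress disappears once the epoch finishes. You can approximate that with two nested bars, creating the inner one with leave=False and copying its final metrics onto the persistent outer bar. The sketch below is self-contained under assumptions not in the original post: a synthetic TensorDataset and a toy linear model stand in for the CIFAR-10 pipeline above.

    import torch
    from torch import nn
    from torch.utils.data import TensorDataset, DataLoader
    from tqdm import tqdm

    # synthetic stand-in data (hypothetical; replaces the CIFAR-10 loader)
    X = torch.randn(1024, 16)
    y = torch.randint(0, 2, (1024,))
    loader = DataLoader(TensorDataset(X, y), batch_size=64)

    model = nn.Linear(16, 2)  # toy model, an assumption for this sketch
    loss_func = nn.CrossEntropyLoss()
    optm = torch.optim.AdamW(model.parameters(), lr=1e-3)

    epoch_bar = tqdm(range(3), desc='training', position=0)
    for ep in epoch_bar:
        # inner bar vanishes when the epoch ends (leave=False)
        batch_bar = tqdm(loader, desc=f'epoch {ep+1}', position=1, leave=False)
        steps, loss_sum, right, seen = 0, 0.0, 0, 0
        for xb, yb in batch_bar:
            optm.zero_grad()
            pred = model(xb)
            loss = loss_func(pred, yb)
            loss.backward()
            optm.step()
            steps += 1
            loss_sum += loss.item()
            right += (pred.argmax(dim=1) == yb).sum().item()
            seen += len(yb)
            batch_bar.set_postfix(loss=f'{loss_sum/steps:.5f}', acc=f'{right/seen*100:.2f}%')
        # promote the epoch's final metrics to the persistent outer bar
        epoch_bar.set_postfix(loss=f'{loss_sum/steps:.5f}', acc=f'{right/seen*100:.2f}%')

Note that .item() is used here instead of the .detach().numpy() calls above; both extract a plain Python number, but .item() also works unchanged if the tensors are later moved to a GPU.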

Reposted from: https://blog.csdn.net/Scc_hy/article/details/126256530
Copyright belongs to the original author, Scc_hy. If this infringes your rights, please contact us for removal.
