An Introduction to SimMIM, a Simpler Masked Image Modeling Framework, with a PyTorch Implementation

Since the release of MAE, a variety of self-supervised models built on masking techniques have pushed the research further. In this article, we explore a work contemporaneous with MAE from Microsoft Research Asia, SimMIM: A Simple Framework for Masked Image Modeling, and implement it in PyTorch; the full code is provided at the end.

SimMIM uses a ViT as its backbone, so familiarity with vision transformers and with the basics of self-supervised learning will be very helpful, as will fluency in PyTorch, since that is what we use to implement the model.

Masking in Images

Over the past few years, contrastive and non-contrastive methods have been the dominant forms of self-supervised learning (SSL) in computer vision (CV), with their state-of-the-art (SOTA) models on par with supervised learning. Fundamentally, contrastive learning aims to teach a neural network to pull similar data points (positive pairs) together and push dissimilar ones (negative pairs) apart, a task that requires learning visual patterns. Non-contrastive learning overcomes some of the obstacles associated with contrastive learning (for example, the need for large numbers of negative pairs).

Natural language processing (NLP), on the other hand, uses masked modeling for SSL: a random segment of the input is masked, and the model's objective is to recover it from the remaining information, the idea being that this teaches the model the structure of language. Networks such as BERT fall into this category and have achieved astonishing performance.

There are, however, differences between NLP and vision. Locality is very strong in images: nearby pixels are highly correlated, so even if a pixel is masked, its value can be inferred relatively easily by analyzing its neighbours. Moreover, images are continuous, unlike the discrete tokens of NLP, and pixels are low-level raw features, whereas words are high-level concepts constructed by humans.

With the advent of ViT, masked modeling has recently entered computer vision, achieving competitive scores on downstream tasks such as ImageNet classification. But these methods tend to be tricky and rely on delicate components, such as pixel clustering as in iGPT, or tokenization through an extra discrete variational autoencoder (dVAE), a technique used by BEiT.

SimMIM is a simple masked image modeling framework that surpasses previous SOTA baselines while remaining efficient and free of complex components. Concretely, after extracting an image's tokens, SimMIM randomly masks some of them by replacing them with a learnable mask token, and encodes the data with a ViT. The missing parts are then reconstructed by passing the encoded representations of the masked tokens through a linear layer, and the loss is the L1 loss between the predicted and actual pixels, divided by the number of masked tokens.
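Restated as a formula (our notation, not the paper's), the training objective implemented below is

$$\mathcal{L} = \frac{\lVert \hat{x}_M - x_M \rVert_1}{N_{\text{masked}}}$$

where $x_M$ are the ground-truth pixel values of the masked patches, $\hat{x}_M$ are the model's predictions for them, and $N_{\text{masked}}$ is the number of masked tokens.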

PyTorch Implementation

SimMIM is simple and involves no particularly complex operations. Let us assume we start with a set of tokens of shape batch_size X n_tokens X token_dim.

from torch import (
    randn,
)

# tokens is currently a dummy tensor.
# Later, it will be replaced by the actual tokens.
tokens = randn(batch_size, n_tokens, token_dim)

First, we must decide which tokens to mask. One strategy is to generate, for each sample, a set of indices in the range [0, n_tokens-1] (zero-based); these will be the indices of the tokens masked for that row.

from torch import (
    randn,
)

tokens = randn(batch_size, n_tokens, token_dim)
indices_to_mask = randn(batch_size, n_tokens)

# Number of tokens to mask.
# 50% of the total number of tokens performs well on average.
# However, for smaller patch sizes, a higher masking ratio is generally better.
# For example, for a patch size of 32, 0.5 performs well, but for
# a patch size of 16, it would be worthwhile to increase it to 0.8.
n_masked_tokens = int(0.5 * n_tokens)

# topk returns the k largest elements as well as their indices.
# dim=1 tells it to find the maximum values and their indices
# on a per-row basis.
# The indices of the tokens that are to be masked are going
# to be the indices of the n_masked_tokens largest values.
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)

# The largest values can be accessed via indices_to_mask.values,
# and their indices can be accessed via indices_to_mask.indices.
indices_to_mask = indices_to_mask.indices

indices_to_mask is of shape batch_size X n_masked_tokens, and each row contains the indices of the tokens to be masked for that particular data point. Indexing tokens with indices_to_mask directly would be somewhat convoluted, so a better approach is to construct a bitmask of shape batch_size X n_tokens, where bitmask[i][j] is True if tokens[i][j] is to be masked and False otherwise.

from torch import (
    randn,
    zeros,
)

tokens = randn(batch_size, n_tokens, token_dim)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices

# Initially, bitmask is simply full of zeros (i.e., False).
bitmask = zeros(batch_size, n_tokens)

# What this line does is as follows:
# For every row i, bitmask[i][j] is replaced
# by the value argument (in this case 1), where j takes every value
# in indices_to_mask[i].
# For example, if indices_to_mask[3] is
# [2, 4, 7], then bitmask[3][2], bitmask[3][4], and bitmask[3][7]
# are all set to 1.
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()

To use the bitmask, we first need to produce tokens from the input with a ViT. The ViT used here comes from the timm package, but the code can easily be adapted to other implementations.

from torch import (
    randn,
    zeros,
)

# vit is assumed to be a vision transformer from timm.
# To get tokens from a timm ViT, one must call its patch_embed method.
# tokens is now of shape batch_size X n_tokens X token_dim.
# Keep in mind that input is image data of size
# batch_size X n_channels X height X width.
tokens = vit.patch_embed(input)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()

The next step is to replace the selected tokens with the mask token. Assigning the mask token directly to tokens[bitmask] would be an in-place modification, which does not play well with PyTorch's autograd; instead, a tensor of the same shape as tokens, batch_size X n_tokens X token_dim, is filled with copies of the mask token.

from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Parameter,
)

tokens = vit.patch_embed(input)

# The mask token itself is simply a vector of dimension token_dim.
mask_token = Parameter(randn(token_dim))

# mask_token is repeated to make it the same shape as tokens.
# mask_tokens is now of size batch_size X n_tokens X token_dim.
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()

Masking itself can now be carried out.

from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()

# bitmask must have the same number of axes as tokens and mask_tokens.
# Therefore, unsqueeze(2) adds an axis to it, and it is now of shape
# batch_size X n_tokens X 1.
bitmask = bitmask.unsqueeze(2)

# ~bitmask turns True to False and False to True.
# Here, (~bitmask) is multiplied by tokens to zero out every token that
# is supposed to be masked, and the result is added to bitmask*mask_tokens,
# in which everything is 0 except the tokens that are supposed to be masked.
tokens = (~bitmask) * tokens + bitmask * mask_tokens

Next come the position embeddings.

from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens

# In timm, a ViT's position embedding is accessible via vit.pos_embed.
# The reason for vit.pos_embed[:, 1:] in place of simply vit.pos_embed
# is that the first position embedding vector is for the class token,
# which is not used for self-supervised learning.
tokens = tokens + vit.pos_embed[:, 1:]

The tokens can now be fed to the ViT's transformer blocks to obtain their encoded representations.

from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens
tokens = tokens + vit.pos_embed[:, 1:]

# The encoded representation of the tokens
encoded = vit.blocks(tokens)

The masked tokens are then extracted from the encoded output and passed through a linear layer to reconstruct the pixel values.

from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Linear,
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens
tokens = tokens + vit.pos_embed[:, 1:]
encoded = vit.blocks(tokens)

# To index input and encoded with bitmask,
# the axis that was added must be removed.
# This reverts bitmask to a size of batch_size X n_tokens.
bitmask = bitmask.squeeze(2)

# The encoded mask tokens; boolean indexing flattens the batch and token
# axes, so this is of shape batch_size*n_masked_tokens X token_dim.
masked_tokens_encoded = encoded[bitmask]

# In timm, a ViT's patch size is accessible via vit.patch_embed.patch_size
# (a (height, width) tuple in recent timm versions).
patch_height, patch_width = vit.patch_embed.patch_size

# The input is the tokens,
# and the output is the reconstructed raw pixel values.
# Therefore, the output dimension is 3 (for 3 channels)
# multiplied by patch_height*patch_width, which is the original shape
# of the patches before they were tokenized.
decoder_out_dim = 3 * patch_height * patch_width
decoder = Linear(
    in_features=token_dim,
    out_features=decoder_out_dim,
)

# The reconstructed pixels, of shape
# batch_size*n_masked_tokens X 3*patch_height*patch_width
masked_patches_reconstructed = decoder(masked_tokens_encoded)

Finally, masked_patches_reconstructed is compared against the original pixels of the input data. Since the patches of the input are not readily available, the input must be patchified first. PyTorch's reshape has limitations here: naively reshaping with torch would produce incorrectly ordered output. A simple solution is einops, a handy, framework-agnostic library for manipulating tensors.

Note that patches and tokens are not the same thing: patches are the data reshaped from batch_size X 3 X height X width to batch_size X n_tokens X 3*patch_height*patch_width, whereas tokens are created by linearly transforming the patches along the final axis.
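To make the distinction concrete, here is a hypothetical sketch of the relationship (timm actually implements the patch embedding as a strided convolution, which is mathematically equivalent to a linear layer applied to flattened patches):

from torch.nn import Linear

# Illustration only, not timm's actual internals:
# patches is of shape batch_size X n_tokens X 3*patch_height*patch_width,
# and a linear transformation along the last axis turns patches into
# tokens of shape batch_size X n_tokens X token_dim.
patch_to_token = Linear(3 * patch_height * patch_width, token_dim)
tokens_from_patches = patch_to_token(patches)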

from einops import (
    rearrange,
)
from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Linear,
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens
tokens = tokens + vit.pos_embed[:, 1:]
encoded = vit.blocks(tokens)
bitmask = bitmask.squeeze(2)
masked_tokens_encoded = encoded[bitmask]
patch_height, patch_width = vit.patch_embed.patch_size
decoder_out_dim = 3 * patch_height * patch_width
decoder = Linear(
    in_features=token_dim,
    out_features=decoder_out_dim,
)
masked_patches_reconstructed = decoder(masked_tokens_encoded)

# pattern tells einops how to rearrange the tensor.
# Its layout is as follows: 'shape_before -> shape_after'.
# In this case, the shape before would be batch_size X n_channels X height X width,
# and the shape after would be batch_size X n_tokens X n_channels*patch_height*patch_width.
# However, in einops, variables that are in shape_before must be in shape_after
# as well and vice versa.
# For example, in this case, height is in shape_before but not shape_after.
# Therefore, shape_before and shape_after must be restructured.
# Particularly, two new variables can be introduced, n_patches_height and n_patches_width,
# that say how many patches are along the height and width axes respectively.
# Thus, height = n_patches_height * patch_height,
# width = n_patches_width * patch_width, and
# n_tokens = n_patches_height * n_patches_width.
# Multiplying two variables in einops is denoted by (x y).
pattern = (
    'batch_size n_channels (n_patches_height patch_height) (n_patches_width patch_width) -> '
    'batch_size (n_patches_height n_patches_width) (n_channels patch_height patch_width)'
)

# einops.rearrange is like torch.reshape.
# einops cannot infer patch_height and patch_width,
# so they must be passed manually.
# patches is now of shape batch_size X n_tokens X 3*patch_height*patch_width.
patches = rearrange(
    tensor=input,
    pattern=pattern,
    patch_height=patch_height,
    patch_width=patch_width,
)

Next, we get the patches corresponding to masked_patches_reconstructed.

from einops import (
    rearrange,
)
from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Linear,
    Parameter,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens
tokens = tokens + vit.pos_embed[:, 1:]
encoded = vit.blocks(tokens)
bitmask = bitmask.squeeze(2)
masked_tokens_encoded = encoded[bitmask]
patch_height, patch_width = vit.patch_embed.patch_size
decoder_out_dim = 3 * patch_height * patch_width
decoder = Linear(
    in_features=token_dim,
    out_features=decoder_out_dim,
)
masked_patches_reconstructed = decoder(masked_tokens_encoded)
pattern = (
    'batch_size n_channels (n_patches_height patch_height) (n_patches_width patch_width) -> '
    'batch_size (n_patches_height n_patches_width) (n_channels patch_height patch_width)'
)
patches = rearrange(
    tensor=input,
    pattern=pattern,
    patch_height=patch_height,
    patch_width=patch_width,
)

# Similar to how masked_tokens_encoded was computed
masked_patches_original = patches[bitmask]

Finally, we evaluate the loss.

from einops import (
    rearrange,
)
from torch import (
    randn,
    zeros,
)
from torch.nn import (
    Linear,
    Parameter,
)
from torch.nn.functional import (
    l1_loss,
)

tokens = vit.patch_embed(input)
mask_token = Parameter(randn(token_dim))
mask_tokens = mask_token.repeat(batch_size, n_tokens, 1)
indices_to_mask = randn(batch_size, n_tokens)
n_masked_tokens = int(0.5 * n_tokens)
indices_to_mask = indices_to_mask.topk(
    k=n_masked_tokens,
    dim=1,
)
indices_to_mask = indices_to_mask.indices
bitmask = zeros(batch_size, n_tokens)
bitmask = bitmask.scatter(
    dim=1,
    index=indices_to_mask,
    value=1,
)
bitmask = bitmask.bool()
bitmask = bitmask.unsqueeze(2)
tokens = (~bitmask) * tokens + bitmask * mask_tokens
tokens = tokens + vit.pos_embed[:, 1:]
encoded = vit.blocks(tokens)
bitmask = bitmask.squeeze(2)
masked_tokens_encoded = encoded[bitmask]
patch_height, patch_width = vit.patch_embed.patch_size
decoder_out_dim = 3 * patch_height * patch_width
decoder = Linear(
    in_features=token_dim,
    out_features=decoder_out_dim,
)
masked_patches_reconstructed = decoder(masked_tokens_encoded)
pattern = (
    'batch_size n_channels (n_patches_height patch_height) (n_patches_width patch_width) -> '
    'batch_size (n_patches_height n_patches_width) (n_channels patch_height patch_width)'
)
patches = rearrange(
    tensor=input,
    pattern=pattern,
    patch_height=patch_height,
    patch_width=patch_width,
)
masked_patches_original = patches[bitmask]

# The loss is the L1 difference between
# the predicted pixel values and the ground truth,
# divided by the number of masked tokens.
loss = l1_loss(
    input=masked_patches_reconstructed,
    target=masked_patches_original,
) / n_masked_tokens

The code above can be wrapped into a class with a few helper functions. The full version is not reproduced here (see the source code linked at the end), but a minimal sketch of what such a wrapper might look like is shown below, followed by how it is used.
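This is only a sketch, stitched together from the snippets above; it assumes a timm ViT (in particular, that vit.embed_dim exists and that vit.patch_embed.patch_size is a (height, width) tuple, as in recent timm versions) and may differ in details from the actual source code.

from einops import rearrange
from torch import randn, zeros
from torch.nn import Linear, Module, Parameter

class SimMIM(Module):
    def __init__(self, vit, masking_ratio=0.5):
        super().__init__()
        self.vit = vit
        self.masking_ratio = masking_ratio
        token_dim = vit.embed_dim
        self.patch_height, self.patch_width = vit.patch_embed.patch_size
        self.mask_token = Parameter(randn(token_dim))
        self.decoder = Linear(
            in_features=token_dim,
            out_features=3 * self.patch_height * self.patch_width,
        )

    def forward(self, input):
        # Tokenize and mask, exactly as in the walkthrough above
        tokens = self.vit.patch_embed(input)
        batch_size, n_tokens, _ = tokens.shape
        mask_tokens = self.mask_token.repeat(batch_size, n_tokens, 1)
        n_masked_tokens = int(self.masking_ratio * n_tokens)
        indices_to_mask = randn(batch_size, n_tokens, device=tokens.device)
        indices_to_mask = indices_to_mask.topk(k=n_masked_tokens, dim=1).indices
        bitmask = zeros(batch_size, n_tokens, device=tokens.device)
        bitmask = bitmask.scatter(dim=1, index=indices_to_mask, value=1)
        bitmask = bitmask.bool().unsqueeze(2)
        tokens = (~bitmask) * tokens + bitmask * mask_tokens

        # Encode and reconstruct the masked patches
        tokens = tokens + self.vit.pos_embed[:, 1:]
        encoded = self.vit.blocks(tokens)
        bitmask = bitmask.squeeze(2)
        masked_tokens_encoded = encoded[bitmask]
        masked_patches_reconstructed = self.decoder(masked_tokens_encoded)

        # Patchify the input and extract the original masked patches
        patches = rearrange(
            input,
            'b c (nh ph) (nw pw) -> b (nh nw) (c ph pw)',
            ph=self.patch_height,
            pw=self.patch_width,
        )
        masked_patches_original = patches[bitmask]
        return n_masked_tokens, masked_patches_reconstructed, masked_patches_original

It can then be used as follows: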

from timm import (
    create_model,
)
from torch.nn.functional import (
    l1_loss,
)
from torch.optim import (
    AdamW,
)

vit = create_model(
    'vit_small_patch32_224',
    num_classes=0,
)
simmim = SimMIM(
    vit=vit,
    masking_ratio=0.5,
)
optimizer = AdamW(
    params=simmim.parameters(),
    lr=1e-4,
    weight_decay=5e-2,
)

for epoch in range(n_epochs):
    for input in dataloader:
        n_masked_tokens, masked_patches_reconstructed, masked_patches_original = simmim(input)
        loss = l1_loss(
            input=masked_patches_reconstructed,
            target=masked_patches_original,
        )
        loss /= n_masked_tokens
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

Various hyperparameters can of course be configured on top of the code above (for example, the learning rate schedule; cosine annealing was used but is omitted here for simplicity). Our dataloader simply returns images that have been randomly resize-cropped, randomly horizontally flipped, and normalized.
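For reference, a minimal sketch of such a pipeline (the normalization statistics below are the common ImageNet values, an assumption rather than something specified in the article):

from torch.optim.lr_scheduler import CosineAnnealingLR
from torchvision import transforms

# Random resized crops, random horizontal flips, and normalization,
# as described above. The mean/std values are assumed, not from the source.
transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=(0.485, 0.456, 0.406),
        std=(0.229, 0.224, 0.225),
    ),
])

# The cosine annealing schedule mentioned above, stepped once per epoch
scheduler = CosineAnnealingLR(optimizer, T_max=n_epochs)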

With the code above, any ViT can be trained on large amounts of unannotated data and will learn representations that transfer well to downstream tasks. It really is that simple, which is where the "Simple" in the paper's title comes from.

Summary

In this article, we introduced SimMIM, a powerful SSL algorithm inspired by masked modeling, in which a portion of the input data is masked and the model's objective is to minimize the reconstruction loss. To become more familiar with how the model works, we also implemented it in PyTorch, which helps in understanding its details.

References:

A Simple Framework for Contrastive Learning of Visual Representations

https://arxiv.org/abs/2002.05709

Exploring Simple Siamese Representation Learning

https://arxiv.org/abs/2011.10566

SimMIM: A Simple Framework for Masked Image Modeling

https://arxiv.org/abs/2111.09886

Code for this article:

https://github.com/BobMcDear/PyTorch-SimMIM

Author: Borna Ahmadzadeh
