Building a Transformer-Based Recommendation System

Using a BERT-style architecture to build an item-based collaborative filtering recommendation model

Self-attention-based Transformer encoders are very good at predicting the next token in natural language generation tasks, because they can attend to how important the surrounding tokens are to a given token. Why not apply the same idea to predict the next item in a sequence of items a user has liked? This kind of recommendation problem falls under item-based collaborative filtering.

In item-based collaborative filtering, we try to find relationships or patterns between a given set of items and the preferences of different users. As an example, suppose we have two users, Alice and Bob. Every time Alice comes to our site to buy her monthly groceries, she buys milk, bread, cheese, pasta, and tomato sauce.

Now suppose a completely unknown user, call him Guest, arrives and adds bread to his shopping cart. Having observed that the Guest user added bread, we can suggest he also add milk or cheese, because we know from other users' histories that those items tend to be bought together.

We don't care about what type of user they are, such as their background, where they place their orders from, or their gender. We only look at the set of items each user has bought or liked.
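To make item-based collaborative filtering concrete, here is a toy sketch (not from the original article) that suggests items from simple co-occurrence counts; the baskets and the suggest helper are hypothetical:

from collections import Counter
from itertools import permutations

# hypothetical purchase histories; each basket is one user's items
baskets = [
    ["milk", "bread", "cheese", "pasta", "tomato sauce"],  # Alice
    ["milk", "bread", "eggs"],                             # another user
]

# count how often each ordered pair of items shares a basket
co_counts = Counter(pair for basket in baskets
                    for pair in permutations(basket, 2))

def suggest(item, k=2):
    # return the k items most often bought together with `item`
    scores = {b: c for (a, b), c in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(suggest("bread"))  # e.g. ['milk', 'cheese']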

We will reformulate the recommendation problem as predicting the next item in a given sequence of items. The problem then becomes closely, even exactly, analogous to next-token prediction, i.e. language modeling. We can add more variety by randomly masking items in a given sequence and training an encoder-based Transformer model to predict the masked items. Such a model can predict an item using context from both directions, left and right.

Why would we want to predict in both directions? Let's go back to the example discussed above.

Suppose the Guest user adds cheese directly to their cart. If we predicted in only one direction, we could suggest tomato sauce or pasta, but buying those might make no sense for this user.

But if our model is trained to predict masked items within a given sequence, it can predict on both sides, and for the Guest user we could suggest adding milk, bread, or eggs.
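As a rough sketch of what this looks like at the tensor level, suppose we use the id convention introduced in the preprocessing below (0 = [PAD], 1 = [MASK]) and hypothetical item ids. An encoder sees tokens on both sides of the mask, so filling a gap in the middle of a basket can use the whole context:

import torch

PAD, MASK = 0, 1  # special ids, matching the convention used below

# milk, bread, [MASK], pasta, tomato sauce (hypothetical item ids)
sequence = torch.tensor([[12, 47, MASK, 8, 91]])

# with a trained model (see below), the prediction at position 2 attends to
# items on both sides of the mask; a strictly left-to-right model could not:
# logits = model(sequence, None)            # shape: (1, vocab_size, 5)
# predicted_item = logits[0, :, 2].argmax().item()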

Let's use this idea to build and train a model that predicts the masked items in a given sequence. We will walk through the code below with some abstractions, using the MovieLens-25M dataset.

Data Preprocessing

Similar to the token ids we assign to characters in language modeling, we convert each unique movie to an id, starting from 2 and going up to the number of movies plus one. We reserve id 0 for [PAD] and id 1 for [MASK].

import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import random
import sys

sys.path.append("../")
from constants import *

movies_df = pd.read_csv("../data/ml-25m/ml-25m/movies.csv")
ratings_df = pd.read_csv("../data/ml-25m/ml-25m/ratings.csv")

# sort interactions chronologically so each user's movie list forms a true sequence
ratings_df.sort_values(by=["timestamp"], inplace=True)
grouped_ratings = ratings_df.groupby(by="userId").agg(list)

# map every unique movieId to a token id starting at 2 (0 is [PAD], 1 is [MASK])
movieIdMapping = {k: i + 2 for i, k in enumerate(sorted(ratings_df.movieId.unique()))}
ratings_df["movieId_mapped"] = ratings_df.movieId.map(movieIdMapping)
movies_df["movieId_mapped"] = movies_df.movieId.map(movieIdMapping)
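The Bert4RecDataset consumed by the training pipeline below is imported from the project but not listed in this article. A minimal sketch of the random-masking logic such a dataset presumably implements (the class name, masking probability, and helper details here are my assumptions):

import random
import torch
from torch.utils.data import Dataset

PAD, MASK = 0, 1

class MaskedSequenceDataset(Dataset):
    # hypothetical stand-in for Bert4RecDataset: yields (masked, target) pairs

    def __init__(self, sequences, history=120, mask_prob=0.15):
        self.sequences = sequences  # one list of mapped movie ids per user
        self.history = history
        self.mask_prob = mask_prob

    def __len__(self):
        return len(self.sequences)

    def __getitem__(self, idx):
        seq = self.sequences[idx][-self.history:]  # keep the most recent items
        source = [MASK if random.random() < self.mask_prob else s for s in seq]
        pad = [PAD] * (self.history - len(seq))    # right padding, as configured later
        return torch.LongTensor(source + pad), torch.LongTensor(seq + pad)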

The Model

import os
import torch as T
import torch.nn as nn
import torch.nn.functional as F
from modules import Encoder, Decoder


class RecommendationTransformer(nn.Module):
    """Sequential recommendation model architecture"""

    def __init__(self,
                 vocab_size,
                 heads=4,
                 layers=6,
                 emb_dim=256,
                 pad_id=0,
                 num_pos=128):
        """Recommendation model initializer

        Args:
            vocab_size (int): Number of unique tokens/items
            heads (int, optional): Number of heads in the multi-head self-attention layers. Defaults to 4.
            layers (int, optional): Number of encoder layers. Defaults to 6.
            emb_dim (int, optional): Embedding dimension. Defaults to 256.
            pad_id (int, optional): Token used to pad tensors. Defaults to 0.
            num_pos (int, optional): Fixed sequence length for the positional embedding. Defaults to 128.
        """
        super().__init__()
        self.emb_dim = emb_dim
        self.pad_id = pad_id
        self.num_pos = num_pos
        self.vocab_size = vocab_size
        self.encoder = Encoder(source_vocab_size=vocab_size,
                               emb_dim=emb_dim,
                               layers=layers,
                               heads=heads,
                               dim_model=emb_dim,
                               dim_inner=4 * emb_dim,
                               dim_value=emb_dim,
                               dim_key=emb_dim,
                               pad_id=self.pad_id,
                               num_pos=num_pos)
        # project each position's encoding onto the item vocabulary
        self.rec = nn.Linear(emb_dim, vocab_size)

    def forward(self, source, source_mask):
        enc_op = self.encoder(source, source_mask)
        op = self.rec(enc_op)
        # (batch, seq, vocab) -> (batch, vocab, seq), the layout nn.CrossEntropyLoss expects
        return op.permute(0, 2, 1)
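The Encoder here comes from the author's AttentionTransformer package, which is not listed in this article. To check the input/output contract without it, here is a rough self-contained equivalent built on PyTorch's stock nn.TransformerEncoder (my stand-in, not the article's implementation):

import torch
import torch.nn as nn

class TinyRecTransformer(nn.Module):
    # same contract as RecommendationTransformer: ids in, (batch, vocab, seq) logits out

    def __init__(self, vocab_size, heads=4, layers=2, emb_dim=256,
                 pad_id=0, num_pos=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, emb_dim, padding_idx=pad_id)
        self.pos = nn.Embedding(num_pos, emb_dim)
        layer = nn.TransformerEncoderLayer(emb_dim, heads, 4 * emb_dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, layers)
        self.rec = nn.Linear(emb_dim, vocab_size)

    def forward(self, source, source_mask=None):
        positions = torch.arange(source.size(1), device=source.device)
        hidden = self.encoder(self.tok(source) + self.pos(positions))
        return self.rec(hidden).permute(0, 2, 1)

model = TinyRecTransformer(vocab_size=59049)
out = model(torch.randint(2, 59049, (2, 120)))
print(out.shape)  # torch.Size([2, 59049, 120])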

The Training Pipeline

import os
import re
import pandas as pd
from tqdm import trange, tnrange
import torch as T
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from bert4rec_dataset import Bert4RecDataset
from bert4rec_model import RecommendationModel, RecommendationTransformer
from rich.table import Column, Table
from rich import box
from rich.console import Console
from torch import cuda
from train_validate import train_step, validate_step
from sklearn.model_selection import train_test_split
from AttentionTransformer.ScheduledOptimizer import ScheduledOptimizer
from IPython.display import clear_output
from AttentionTransformer.utilities import count_model_parameters
import random
import numpy as np

device = T.device("cuda") if cuda.is_available() else T.device("cpu")


def trainer(data_params,
            model_params,
            loggers,
            optimizer_params=None,
            warmup_steps=False,
            output_dir="./models/",
            modify_last_fc=False,
            validation=5):
    # console instance
    console = loggers.get("CONSOLE")
    # tables
    train_logger = loggers.get("TRAIN_LOGGER")
    valid_logger = loggers.get("VALID_LOGGER")

    # check whether output_dir/model_files exists; if not, create it
    if not os.path.exists(output_dir):
        console.log("OUTPUT DIRECTORY DOES NOT EXIST. CREATING...")
        os.mkdir(output_dir)
        os.mkdir(os.path.join(output_dir, "model_files"))
        os.mkdir(os.path.join(output_dir, "model_files_initial"))
    else:
        console.log("OUTPUT DIRECTORY EXISTS. CHECKING CHILD DIRECTORY...")
        if not os.path.exists(os.path.join(output_dir, "model_files")):
            os.mkdir(os.path.join(output_dir, "model_files"))
            os.mkdir(os.path.join(output_dir, "model_files_initial"))

    # seed everything for reproducibility
    console.log("SEED WITH: ", model_params.get("SEED"))
    T.manual_seed(model_params["SEED"])
    T.cuda.manual_seed(model_params["SEED"])
    np.random.seed(model_params.get("SEED"))
    random.seed(model_params.get("SEED"))
    T.backends.cudnn.deterministic = True

    # initialize model
    console.log("MODEL PARAMS: ", model_params)
    console.log("INITIALIZING MODEL: ", model_params)
    model = RecommendationTransformer(
        vocab_size=model_params.get("VOCAB_SIZE"),
        heads=model_params.get("heads", 4),
        layers=model_params.get("layers", 6),
        emb_dim=model_params.get("emb_dim", 512),
        pad_id=model_params.get("pad_id", 0),
        num_pos=model_params.get("history", 120))
    if model_params.get("trained"):
        # load the already trained model
        console.log("TRAINED MODEL AVAILABLE. LOADING...")
        model.load_state_dict(
            T.load(model_params.get("trained"))["state_dict"])
        console.log("MODEL LOADED")
    console.log(f"MOVING MODEL TO DEVICE: {device}")

    if modify_last_fc:
        # grow the embedding and output layers to a new vocabulary size,
        # copying over the already-trained weights
        # (note: the model class above names its output layer `rec`;
        # this branch assumes an attribute named `lin_op`)
        new_word_embedding = nn.Embedding(model_params.get("NEW_VOCAB_SIZE"),
                                          model_params.get("emb_dim"), 0)
        new_word_embedding.weight.requires_grad = False
        console.log(
            f"REQUIRES GRAD for `NEW WORD EMBEDDING` set to {new_word_embedding.weight.requires_grad}"
        )
        new_word_embedding.weight[:model.encoder.word_embedding.weight.size(
            0)] = model.encoder.word_embedding.weight.clone().detach()
        model.encoder.word_embedding = new_word_embedding
        console.log(
            f"WORD EMBEDDING MODIFIED TO `{model.encoder.word_embedding}`")
        model.encoder.word_embedding.weight.requires_grad = True
        new_lin_layer = nn.Linear(model_params.get("emb_dim"),
                                  model_params.get("NEW_VOCAB_SIZE"))
        new_lin_layer.weight.requires_grad = False
        new_lin_layer.weight[:model.lin_op.weight.
                             size(0)] = model.lin_op.weight.clone().detach()
        model.lin_op = new_lin_layer
        model.lin_op.weight.requires_grad = True
        console.log("MODEL LIN OP: ", model.lin_op.out_features)

    model = model.to(device)
    console.log(
        f"TOTAL NUMBER OF MODEL PARAMETERS: {round(count_model_parameters(model)/1e6, 2)} Million"
    )

    # optimizer
    optim_name = optimizer_params.get("OPTIM_NAME")
    if optim_name == "SGD":
        optimizer = T.optim.SGD(params=model.parameters(),
                                **optimizer_params.get("PARAMS"))
    elif optim_name == "ADAM":
        optimizer = T.optim.Adam(params=model.parameters(),
                                 **optimizer_params.get("PARAMS"))
    else:
        optimizer = T.optim.SGD(params=model.parameters(),
                                lr=model_params.get("LEARNING_RATE"),
                                momentum=0.8,
                                nesterov=True)
    if warmup_steps:
        optimizer = ScheduledOptimizer(optimizer, 1e-6,
                                       model_params.get("emb_dim"))
    console.log("OPTIMIZER AND MODEL DONE")

    # dataset and dataloader
    console.log("CONFIGURING DATASET AND DATALOADER")
    console.log("DATA PARAMETERS: ", data_params)
    data = pd.read_csv(data_params.get("path"))
    train_data, valid_data = train_test_split(
        data, test_size=0.25, random_state=model_params.get("SEED"))
    console.log("LEN OF TRAIN DATASET: ", len(train_data))
    console.log("LEN OF VALID DATASET: ", len(valid_data))
    train_dataset = Bert4RecDataset(train_data,
                                    data_params.get("group_by_col"),
                                    data_params.get("data_col"),
                                    data_params.get("train_history", 120),
                                    data_params.get("valid_history", 5),
                                    data_params.get("padding_mode", "right"),
                                    "train",
                                    data_params.get("threshold_column"),
                                    data_params.get("threshold"),
                                    data_params.get("timestamp_col"))
    train_dl = DataLoader(train_dataset,
                          **data_params.get("LOADERS").get("TRAIN"))
    console.save_text(os.path.join(output_dir,
                                   "logs_model_initialization.txt"),
                      clear=True)

    losses = []
    for epoch in tnrange(1, model_params.get("EPOCHS") + 1):
        if epoch % 3 == 0:
            clear_output(wait=True)
        train_loss, train_acc = train_step(model, device, train_dl,
                                           optimizer, warmup_steps,
                                           data_params.get("MASK"),
                                           model_params.get("CLIP"),
                                           data_params.get("chunkify"))
        train_logger.add_row(str(epoch), str(train_loss), str(train_acc))
        console.log(train_logger)
        if epoch == 1:
            console.log("Saving Initial Model")
            T.save(
                model,
                os.path.join(output_dir, "model_files_initial",
                             model_params.get("SAVE_NAME")))
            T.save(
                dict(state_dict=model.state_dict(),
                     epoch=epoch,
                     train_loss=train_loss,
                     train_acc=train_acc,
                     optimizer_dict=optimizer._optimizer.state_dict()
                     if warmup_steps else optimizer.state_dict()),
                os.path.join(output_dir, "model_files_initial",
                             model_params.get("SAVE_STATE_DICT_NAME")))
        if epoch > 1 and min(losses) > train_loss:
            # save the best model so far
            console.log("SAVING BEST MODEL AT EPOCH -> ", epoch)
            console.log("LOSS OF BEST MODEL: ", train_loss)
            console.log("ACCURACY OF BEST MODEL: ", train_acc)
            T.save(
                model,
                os.path.join(output_dir, "model_files",
                             model_params.get("SAVE_NAME")))
            T.save(
                dict(state_dict=model.state_dict(),
                     epoch=epoch,
                     train_acc=train_acc,
                     train_loss=train_loss,
                     optimizer_dict=optimizer._optimizer.state_dict()
                     if warmup_steps else optimizer.state_dict()),
                os.path.join(output_dir, "model_files",
                             model_params.get("SAVE_STATE_DICT_NAME")))
        losses.append(train_loss)
        if validation and epoch > 1 and epoch % validation == 0:
            valid_dataset = Bert4RecDataset(
                valid_data, data_params.get("group_by_col"),
                data_params.get("data_col"),
                data_params.get("train_history", 120),
                data_params.get("valid_history", 5),
                data_params.get("padding_mode", "right"), "valid")
            valid_dl = DataLoader(valid_dataset,
                                  **data_params.get("LOADERS").get("VALID"))
            valid_loss, valid_acc = validate_step(model, valid_dl, device,
                                                  data_params.get("MASK"))
            valid_logger.add_row(str(epoch), str(valid_loss), str(valid_acc))
            console.log(valid_logger)
            del valid_dataset, valid_dl
            console.log("VALIDATION DONE AT EPOCH ", epoch)
            console.save_text(os.path.join(output_dir, "logs_training.txt"),
                              clear=True)
    console.save_text(os.path.join(output_dir, "logs_training.txt"),
                      clear=True)
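train_step and validate_step are imported from the author's train_validate module and are not listed in this article. A plausible minimal version of train_step, assuming the dataloader yields (masked source, full target) batches and that the loss ignores padded positions (BERT4Rec-style training typically scores only the masked positions, which would just change the ignore mask); these are my assumptions, not the article's exact code:

import torch as T
import torch.nn as nn

def train_step_sketch(model, device, dataloader, optimizer, pad_id=0):
    # hypothetical masked-item training step; returns mean loss and accuracy
    criterion = nn.CrossEntropyLoss(ignore_index=pad_id)
    model.train()
    total_loss, correct, seen = 0.0, 0, 0
    for source, target in dataloader:
        source, target = source.to(device), target.to(device)
        optimizer.zero_grad()
        logits = model(source, None)      # (batch, vocab, seq), as returned above
        loss = criterion(logits, target)  # [PAD] positions are ignored
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
        preds = logits.argmax(dim=1)
        valid = target != pad_id
        correct += (preds[valid] == target[valid]).sum().item()
        seen += valid.sum().item()
    return total_loss / len(dataloader), correct / max(seen, 1)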

Training

from train_pipeline import trainer
from constants import TRAIN_CONSTANTS
from rich.table import Column, Table
from rich import box
from rich.console import Console

console = Console(record=True)
training_logger = Table(
    Column("Epoch", justify="center"),
    Column("Loss", justify="center"),
    Column("Accuracy", justify="center"),
    title="Training Status",
    pad_edge=False,
    box=box.ASCII,
)
valid_logger = Table(
    Column("Epoch", justify="center"),
    Column("Loss", justify="center"),
    Column("Accuracy", justify="center"),
    title="Validation Status",
    pad_edge=False,
    box=box.ASCII,
)
loggers = dict(CONSOLE=console,
               TRAIN_LOGGER=training_logger,
               VALID_LOGGER=valid_logger)

model_params = dict(
    SEED=3007,
    VOCAB_SIZE=59049,
    heads=4,
    layers=6,
    emb_dim=256,
    pad_id=TRAIN_CONSTANTS.PAD,
    history=TRAIN_CONSTANTS.HISTORY,
    # trained="/content/drive/MyDrive/bert4rec/models/rec-transformer-model-9/model_files/bert4rec-state-dict.pth",
    trained=None,
    LEARNING_RATE=0.1,
    EPOCHS=5000,
    SAVE_NAME="bert4rec.pt",
    SAVE_STATE_DICT_NAME="bert4rec-state-dict.pth",
    CLIP=2,
    # NEW_VOCAB_SIZE=59049
)
data_params = dict(
    # path="/content/bert4rec/data/ratings_mapped.csv",
    # path="drive/MyDrive/bert4rec/data/ml-25m/ratings_mapped.csv",
    path="/content/drive/MyDrive/bert4rec/data/ml-25m/ratings_mapped.csv",
    group_by_col="userId",
    data_col="movieId_mapped",
    train_history=TRAIN_CONSTANTS.HISTORY,
    valid_history=5,
    padding_mode="right",
    MASK=TRAIN_CONSTANTS.MASK,
    chunkify=False,
    threshold_column="rating",
    threshold=3.5,
    timestamp_col="timestamp",
    LOADERS=dict(TRAIN=dict(batch_size=512, shuffle=True, num_workers=0),
                 VALID=dict(batch_size=32, shuffle=False, num_workers=0)))
optimizer_params = {
    "OPTIM_NAME": "SGD",
    "PARAMS": {
        "lr": 0.142,
        "momentum": 0.85,
    }
}
output_dir = "/content/drive/MyDrive/bert4rec/models/rec-transformer-model-10/"

trainer(data_params=data_params,
        model_params=model_params,
        loggers=loggers,
        warmup_steps=False,
        output_dir=output_dir,
        modify_last_fc=False,
        validation=False,
        optimizer_params=optimizer_params)

Prediction

import torch as T
import torch.nn.functional as F
import torch.nn as nn
import numpy as np
import os
import re
from bert4rec_model import RecommendationTransformer
from constants import TRAIN_CONSTANTS
from typing import List, Dict, Tuple
import random

T.manual_seed(3007)
T.cuda.manual_seed(3007)


class Recommender:
    """Recommender Object"""

    def __init__(self, model_path: str):
        """Recommender object to predict sequential recommendations

        Args:
            model_path (str): Path to the model
        """
        self.model = RecommendationTransformer(
            vocab_size=TRAIN_CONSTANTS.VOCAB_SIZE,
            heads=TRAIN_CONSTANTS.HEADS,
            layers=TRAIN_CONSTANTS.LAYERS,
            emb_dim=TRAIN_CONSTANTS.EMB_DIM,
            pad_id=TRAIN_CONSTANTS.PAD,
            num_pos=TRAIN_CONSTANTS.HISTORY)
        state_dict = T.load(model_path, map_location="cpu")
        self.model.load_state_dict(state_dict["state_dict"])
        self.model.eval()
        self.max_length = 25

    def predict(self, inp_tnsr: T.LongTensor, mode="post"):
        """Predict and return the next or previous item in the sequence based on the mode

        Args:
            inp_tnsr (T.LongTensor): Input tensor of items in the sequence
            mode (str, optional): Predict the start or end item based on the mode. Defaults to "post".

        Returns:
            int: Item ID
        """
        with T.no_grad():
            op = self.model(inp_tnsr.unsqueeze(0), None)
        _, pred = op.max(1)
        if mode == "post":
            # the [MASK] sits at the end of the sequence
            pred = pred.flatten().tolist()[-1]
        elif mode == "pre":
            # the [MASK] sits at the start of the sequence
            pred = pred.flatten().tolist()[0]
        else:
            pred = pred.flatten().tolist()[-1]
        return pred

    def recommendPre(self, sequence: List[int], num_recs: int = 5):
        """Predict items at the start of the sequence

        Args:
            sequence (List[int]): Input list of items
            num_recs (int, optional): Total number of items to predict. Defaults to 5.

        Returns:
            Tuple: Returns the sequence, and the history if there were more predictions than the max length
        """
        history = []
        predict_hist = 0
        while predict_hist < num_recs:
            if len(sequence) > TRAIN_CONSTANTS.HISTORY - 1:
                history.extend(sequence)
                sequence = sequence[:TRAIN_CONSTANTS.HISTORY - 1]
            inp_seq = T.LongTensor(sequence)
            # prepend the [MASK] token (id 1) and let the model fill it in
            inp_tnsr = T.ones((inp_seq.size(0) + 1), dtype=T.long)
            inp_tnsr[1:] = inp_seq
            pred = self.predict(inp_tnsr, mode="pre")
            sequence = [pred] + sequence
            predict_hist += 1
        return sequence, history

    def recommendPost(self, sequence: List[int], num_recs: int = 5):
        """Predict items at the end of the sequence

        Args:
            sequence (List[int]): Input list of items
            num_recs (int, optional): Total number of items to predict. Defaults to 5.

        Returns:
            Tuple: Returns the sequence, and the history if there were more predictions than the max length
        """
        history = []
        predict_hist = 0
        while predict_hist < num_recs:
            if len(sequence) > TRAIN_CONSTANTS.HISTORY - 1:
                history.extend(sequence)
                # keep only the most recent HISTORY - 1 items
                sequence = sequence[::-1][:TRAIN_CONSTANTS.HISTORY - 1][::-1]
            inp_seq = T.LongTensor(sequence)
            # append the [MASK] token (id 1) and let the model fill it in
            inp_tnsr = T.ones((inp_seq.size(0) + 1), dtype=T.long)
            inp_tnsr[:inp_seq.size(0)] = inp_seq
            pred = self.predict(inp_tnsr)
            sequence.append(pred)
            predict_hist += 1
        return sequence, history

    def recommendSequential(self, sequence: List[int], num_recs: int = 5):
        """Predicts both start and end items, chosen randomly

        Args:
            sequence (List[int]): Input list of items
            num_recs (int, optional): Total number of items to predict. Defaults to 5.

        Returns:
            Tuple: Returns the sequence and history (always empty)
        """
        assert num_recs < (
            self.max_length / 2
        ) - 1, f"Can only recommend fewer than {self.max_length / 2 - 1} items with sequential recommendation"
        history = []
        predict_hist = 0
        while predict_hist < num_recs:
            if bool(random.choice([0, 1])):
                sequence, hist = self.recommendPost(sequence, 1)
                if len(hist) > 0:
                    history.extend(hist)
            else:
                sequence, hist = self.recommendPre(sequence, 1)
                if len(hist) > 0:
                    history.extend(hist)
            predict_hist += 1
        return sequence, []

    def cleanHistory(self, history: List[int]):
        """The history might contain repetitions; clean it while
        maintaining the sequence order

        Args:
            history (List[int]): Predicted item ids

        Returns:
            List[int]: Cleaned item ids
        """
        history = history[::-1]
        history = [
            h for ix, h in enumerate(history) if h not in history[ix + 1:]
        ]
        return history[::-1]

    def recommend(self,
                  sequence: List[int],
                  num_recs: int = 5,
                  mode: str = "post"):
        """Recommend items

        Args:
            sequence (List[int]): Input list of items
            num_recs (int, optional): Total number of items to predict. Defaults to 5.
            mode (str, optional): Predict start or end items, or build a random sequence around the input sequence. Defaults to "post".

        Returns:
            List[int]: Recommended items
        """
        if mode == "post":
            seq, hist = self.recommendPost(sequence, num_recs)
        elif mode == "pre":
            seq, hist = self.recommendPre(sequence, num_recs)
        else:
            seq, hist = self.recommendSequential(sequence, num_recs)
        hist = self.cleanHistory(hist)
        if len(hist) > 0 and len(hist) > len(seq):
            return hist
        return seq


if __name__ == "__main__":
    rec_obj = Recommender(TRAIN_CONSTANTS.MODEL_PATH)
    rec = rec_obj.recommend(sequence=[2, 3], num_recs=10)
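The Recommender works on mapped token ids, so to display actual movie titles you can invert the movieIdMapping built during preprocessing. A short sketch (assumes movies_df and movieIdMapping from the preprocessing step are in scope):

# token id -> original movieId, then movieId -> title
inv_mapping = {v: k for k, v in movieIdMapping.items()}
id_to_title = dict(zip(movies_df.movieId, movies_df.title))

rec_obj = Recommender(TRAIN_CONSTANTS.MODEL_PATH)
for token_id in rec_obj.recommend(sequence=[2, 3], num_recs=10):
    movie_id = inv_mapping.get(token_id)
    if movie_id is not None:  # skip [PAD]/[MASK] ids
        print(id_to_title.get(movie_id, f"movieId {movie_id}"))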

Results

In the code above, we saw how a Transformer model, one of the most popular architectures in NLP, can be used to build an item-based collaborative filtering model, and we trained it from scratch.

A live demo of this article's example: https://www.moviebert.ml/

Source code: https://github.com/vatsalsaglani/bert4rec/tree/main/scripts

Author: Vatsal Saglani
