Fine-Tuning a Large Language Model: Translating Natural Language into SQL with T5

Turning natural language into SQL is no longer out of reach. Advances in NLP mean we can not only use large language models (LLMs) but also teach them new skills, a technique known as transfer learning: a pretrained model is used as the starting point and further trained on a smaller labeled dataset, yielding better performance than training on that data alone.

In this article, we will apply transfer learning to T5, Google's text-to-text generation model, using our own custom data so that it can turn simple questions into SQL queries. We will add a new task to T5 named translate English to SQL, which converts example queries like this one:

    Cars built after 2020 and manufactured in Italy

into the following SQL statement:

    SELECT name FROM cars WHERE location = 'Italy' AND date > 2020
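T5 frames every problem as text-to-text and selects the task with a plain-text prefix, which is why we can bolt on a new task simply by training on pairs prefixed with "translate English to SQL:". As a minimal sketch of the prefix mechanism (assuming the transformers library is installed), here is the stock t5-small checkpoint handling one of its built-in tasks:

    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    model = T5ForConditionalGeneration.from_pretrained("t5-small")

    # The plain-text prefix tells T5 which task to perform
    input_ids = tokenizer("translate English to German: The house is wonderful.",
                          return_tensors="pt").input_ids
    outputs = model.generate(input_ids, max_length=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    # Expected output along the lines of: "Das Haus ist wunderbar."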

Creating the training data

Unlike a translation dataset, our training dataset can be built programmatically with the help of templates. Here are the templates we put together:

    import random

    templates = [
        ["[prop1] of [nns]", "SELECT [prop1] FROM [nns]"],
        ["[agg] [prop1] for each [breakdown]", "SELECT [agg]([prop1]) , [breakdown] FROM [prop1] GROUP BY [breakdown]"],
        ["[prop1] of [nns] by [breakdown]", "SELECT [prop1] , [breakdown] FROM [nns] GROUP BY [breakdown]"],
        ["[prop1] of [nns] in [location] by [breakdown]", "SELECT [prop1] , [breakdown] FROM [nns] WHERE location = '[location]' GROUP BY [breakdown]"],
        ["[nns] having [prop1] between [number1] and [number2]", "SELECT name FROM [nns] WHERE [prop1] > [number1] and [prop1] < [number2]"],
        ["[prop] by [breakdown]", "SELECT name , [breakdown] FROM [prop] GROUP BY [breakdown]"],
        ["[agg] of [prop1] of [nn]", "SELECT [agg]([prop1]) FROM [nn]"],
        ["[prop1] of [nns] before [year]", "SELECT [prop1] FROM [nns] WHERE date < [year]"],
        ["[prop1] of [nns] after [year] in [location]", "SELECT [prop1] FROM [nns] WHERE date > [year] AND location='[location]'"],
        ["[nns] [verb] after [year] in [location]", "SELECT name FROM [nns] WHERE location = '[location]' AND date > [year]"],
        # "between" means greater than [number1] and less than [number2]
        ["[nns] having [prop1] between [number1] and [number2] by [breakdown]", "SELECT name , [breakdown] FROM [nns] WHERE [prop1] > [number1] AND [prop1] < [number2] GROUP BY [breakdown]"],
        ["[nns] with a [prop1] of maximum [number1] by their [breakdown]", "SELECT name , [breakdown] FROM [nns] WHERE [prop1] <= [number1] GROUP BY [breakdown]"],
        ["[prop1] and [prop2] of [nns] since [year]", "SELECT [prop1] , [prop2] FROM [nns] WHERE date > [year]"],
        ["[nns] which have both [prop1] and [prop2]", "SELECT name FROM [nns] WHERE [prop1] IS true AND [prop2] IS true"],
        ["Top [number1] [nns] by [prop1]", "SELECT name FROM [nns] ORDER BY [prop1] DESC LIMIT [number1]"]
    ]

    template = random.choice(templates)
    print("Sample Query Template :", template[0])
    print("SQL Translation :", template[1])
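To make the expansion concrete, here is one template filled in by hand; the values are chosen for illustration and mirror the cars example from the introduction rather than the wine-themed vocabulary used by the generator below:

    # Manual expansion of templates[9]: "[nns] [verb] after [year] in [location]"
    nl_tmpl, sql_tmpl = templates[9]
    fills = {"[nns]": "cars", "[verb]": "built", "[year]": "2020", "[location]": "Italy"}
    for key, value in fills.items():
        nl_tmpl = nl_tmpl.replace(key, value)
        sql_tmpl = sql_tmpl.replace(key, value)
    print(nl_tmpl)   # cars built after 2020 in Italy
    print(sql_tmpl)  # SELECT name FROM cars WHERE location = 'Italy' AND date > 2020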

Next, we build a function that uses these templates to generate our dataset:

    import re

    import pandas as pd
    from layer.decorators import dataset  # Layer SDK decorator

    objects = ["countries","wines","wineries","tasters","provinces","grapes","cities","bottles","deliveries"]
    object_single = ["country","wine","winery","taster","province","grape","city","bottle","delivery"]
    properties = ["points","price","taste","title","texture","age","duration","acidity","flavor","level"]
    aggs = [["average","avg"], ["total","sum"], ["count","count"], ["minimum","min"], ["maximum","max"]]
    breakdowns = ["quality","price","province","country","point","variety","flavor","age"]
    locations = ["Italy","US","Portugal","Spain","Chile","Turkey","Canada"]
    verbs = ["produced","bottled"]
    regex = r"\[([a-z0-9]*)\]"
    number_of_samples = 2500

    @dataset("english_sql_translations")
    def build_dataset():
        rows = []
        for index in range(0, number_of_samples):
            template = random.choice(templates)
            nl = template[0]
            sql = template[1]
            # Replace every [placeholder] in the template with a random value
            matches = re.finditer(regex, nl, re.MULTILINE)
            for matchNum, match in enumerate(matches, start=1):
                key = match.group()
                prop = None
                prop_sql = None
                if key.startswith("[prop"):
                    prop = random.choice(properties)
                    prop_sql = prop.replace(" ", "_").lower()
                if key == "[nns]":
                    prop = random.choice(objects)
                    prop_sql = prop
                if key == "[nn]":
                    prop = random.choice(object_single)
                    prop_sql = prop.replace(" ", "_").lower()
                if key == "[breakdown]":
                    prop = random.choice(breakdowns)
                    prop_sql = prop.replace(" ", "_").lower()
                if key == "[verb]":
                    prop = random.choice(verbs)
                    prop_sql = prop.replace(" ", "_").lower()
                if key == "[agg]":
                    aggregation = random.choice(aggs)
                    prop = aggregation[0]
                    prop_sql = aggregation[1]
                if key == "[location]":
                    prop = random.choice(locations)
                    prop_sql = prop
                if key.startswith("[number"):
                    prop = str(random.randint(1, 1000))
                    prop_sql = prop
                if key.startswith("[year"):
                    prop = str(random.randint(1950, 2022))
                    prop_sql = prop
                if prop is not None:
                    nl = nl.replace(key, prop)
                    sql = sql.replace(key, prop_sql)
            # Occasionally prepend a natural-sounding prefix to the query
            prefix = random.randint(1, 20)
            if prefix == 1:
                nl = "Show me " + nl
            elif prefix == 2:
                nl = "List " + nl
            elif prefix == 3:
                nl = "List of " + nl
            elif prefix == 4:
                nl = "Find " + nl
            rows.append([nl, sql])
        df = pd.DataFrame(rows, columns=["query", "sql"])
        return df

Note the @dataset decorator, which tells Layer that this function builds a dataset. We can now run the function on Layer with:

    layer.run([build_dataset])
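The run materializes the dataset under our Layer project. As a quick spot check (a sketch assuming we are logged in to the same Layer project), we can pull it back as a pandas DataFrame using the same layer.get_dataset call that appears later in the training function:

    import layer

    df = layer.get_dataset("english_sql_translations").to_pandas()
    print(df.shape)      # (2500, 2) -- number_of_samples rows, query/sql columns
    print(df.sample(3))  # eyeball a few random query/sql pairs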

Now we can start building our custom dataset loader.

Creating the data loader

Our dataset needs to be wrapped in a PyTorch Dataset implementation so that it can be loaded with a DataLoader:

    import torch
    from torch.utils.data import Dataset

    class EnglishToSQLDataSet(Dataset):
        def __init__(self, dataframe, tokenizer, source_len, target_len, source_text, target_text):
            self.tokenizer = tokenizer
            self.data = dataframe
            self.source_len = source_len
            self.target_len = target_len
            # Prepend the task prefix to every query and wrap the SQL target
            # with T5's decoder start (<pad>) and end (</s>) tokens.
            # This must happen before the columns are captured below.
            self.data["query"] = "translate English to SQL: " + self.data["query"]
            self.data["sql"] = "<pad>" + self.data["sql"] + "</s>"
            self.target_text = self.data[target_text]
            self.source_text = self.data[source_text]

        def __len__(self):
            return len(self.target_text)

        def __getitem__(self, index):
            source_text = str(self.source_text[index])
            target_text = str(self.target_text[index])
            # Collapse any extra whitespace
            source_text = ' '.join(source_text.split())
            target_text = ' '.join(target_text.split())
            source = self.tokenizer.batch_encode_plus([source_text], max_length=self.source_len, truncation=True, padding="max_length", return_tensors='pt')
            target = self.tokenizer.batch_encode_plus([target_text], max_length=self.target_len, truncation=True, padding="max_length", return_tensors='pt')
            source_ids = source['input_ids'].squeeze()
            source_mask = source['attention_mask'].squeeze()
            target_ids = target['input_ids'].squeeze()
            target_mask = target['attention_mask'].squeeze()
            return {
                'source_ids': source_ids.to(dtype=torch.long),
                'source_mask': source_mask.to(dtype=torch.long),
                'target_ids': target_ids.to(dtype=torch.long),
                'target_ids_y': target_ids.to(dtype=torch.long)
            }
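As a sanity check, we can wrap the DataFrame in this class and inspect the shape of a single batch. This is a sketch assuming df holds the generated query/sql pairs from earlier and tokenizer is a T5 tokenizer (for example T5Tokenizer.from_pretrained("t5-small")):

    from torch.utils.data import DataLoader

    demo_set = EnglishToSQLDataSet(df, tokenizer, 75, 75, "query", "sql")
    demo_loader = DataLoader(demo_set, batch_size=4, shuffle=True)

    batch = next(iter(demo_loader))
    print(batch["source_ids"].shape)  # torch.Size([4, 75])
    print(batch["target_ids"].shape)  # torch.Size([4, 75])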

Fine-tuning T5

The dataset is ready, so we can now write the fine-tuning logic. We decorate the function with @model and pass it to Layer.

    import layer

    def train(epoch, tokenizer, model, device, loader, optimizer):
        import torch
        model.train()
        for step_in_epoch, data in enumerate(loader, 0):
            y = data['target_ids'].to(device, dtype=torch.long)
            # Teacher forcing: decoder input is the target shifted right by one
            y_ids = y[:, :-1].contiguous()
            lm_labels = y[:, 1:].clone().detach()
            # Ignore pad tokens when computing the loss
            lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100
            ids = data['source_ids'].to(device, dtype=torch.long)
            mask = data['source_mask'].to(device, dtype=torch.long)
            outputs = model(input_ids=ids, attention_mask=mask, decoder_input_ids=y_ids, labels=lm_labels)
            loss = outputs[0]
            # Log the loss to Layer at a global step index
            step = (epoch * len(loader)) + step_in_epoch
            layer.log({"loss": float(loss)}, step)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
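The slicing at the top of the loop is teacher forcing: the decoder receives the target shifted right by one position (this is why the dataset prepends <pad>, which doubles as T5's decoder start token) and learns to predict the next token, with pad positions set to -100 so the loss ignores them. A minimal illustration with made-up token ids:

    import torch

    y = torch.tensor([[0, 71, 12, 99, 0]])  # hypothetical target ids; 0 is T5's pad id
    y_ids = y[:, :-1]                        # decoder input:      [[0, 71, 12, 99]]
    lm_labels = y[:, 1:].clone()             # prediction targets: [[71, 12, 99, 0]]
    lm_labels[y[:, 1:] == 0] = -100          # mask the padding:   [[71, 12, 99, -100]]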

Here we use three separate decorators:

@model: tells Layer that this function trains an ML model.

@fabric: tells Layer what compute resources (CPU, GPU, etc.) the model needs for training. T5 is a large model, so we need a GPU to fine-tune it.

@pip_requirements: the Python packages required to fine-tune our model.

    import numpy as np

    import layer
    from layer.decorators import fabric, model, pip_requirements

    @model("t5-tokenizer")
    @fabric("f-medium")
    @pip_requirements(packages=["torch", "transformers", "sentencepiece"])
    def build_tokenizer():
        from transformers import T5Tokenizer
        # Load tokenizer from Hugging Face
        tokenizer = T5Tokenizer.from_pretrained("t5-small")
        return tokenizer

    @model("t5-english-to-sql")
    @fabric("f-gpu-small")
    @pip_requirements(packages=["torch", "transformers", "sentencepiece"])
    def build_model():
        from torch.utils.data import DataLoader
        from transformers import T5ForConditionalGeneration
        from torch import cuda
        import torch

        parameters = {
            "BATCH_SIZE": 8,
            "EPOCHS": 3,
            "LEARNING_RATE": 2e-05,
            "MAX_SOURCE_TEXT_LENGTH": 75,
            "MAX_TARGET_TEXT_LENGTH": 75,
            "SEED": 42
        }
        # Log parameters to Layer
        layer.log(parameters)

        # Set seeds for reproducibility
        torch.manual_seed(parameters["SEED"])
        np.random.seed(parameters["SEED"])
        torch.backends.cudnn.deterministic = True

        # Load tokenizer from Layer
        tokenizer = layer.get_model("t5-tokenizer").get_train()

        # Load pretrained model from Hugging Face
        model = T5ForConditionalGeneration.from_pretrained("t5-small")
        device = 'cuda' if cuda.is_available() else 'cpu'
        model.to(device)

        # Load our generated dataset from Layer
        dataframe = layer.get_dataset("english_sql_translations").to_pandas()
        source_text = "query"
        target_text = "sql"
        dataframe = dataframe[[source_text, target_text]]
        train_dataset = dataframe.sample(frac=0.8, random_state=parameters["SEED"])
        train_dataset = train_dataset.reset_index(drop=True)
        layer.log({"FULL Dataset": str(dataframe.shape),
                   "TRAIN Dataset": str(train_dataset.shape)})

        training_set = EnglishToSQLDataSet(train_dataset, tokenizer, parameters["MAX_SOURCE_TEXT_LENGTH"], parameters["MAX_TARGET_TEXT_LENGTH"], source_text, target_text)
        dataloader_parameters = {
            'batch_size': parameters["BATCH_SIZE"],
            'shuffle': True,
            'num_workers': 0
        }
        training_loader = DataLoader(training_set, **dataloader_parameters)
        optimizer = torch.optim.Adam(params=model.parameters(), lr=parameters["LEARNING_RATE"])

        for epoch in range(parameters["EPOCHS"]):
            train(epoch, tokenizer, model, device, training_loader, optimizer)
        return model

Now we can ship this code to a remote GPU instance to train our model:

    layer.run([build_tokenizer, build_model], debug=True)

Once training completes, we can find our model and its metrics in the Layer UI, including the loss curve logged by our train function.

Building a demo with Gradio

Gradio is one of the fastest ways to demo a machine learning model, with an interface anyone can use from anywhere. We will build an interactive demo with Gradio to give people a UI for trying out our model.

Let's start coding. Create a Python file named app.py and paste in the following code:

    import gradio as gr
    import layer

    # Load the fine-tuned model and tokenizer from our public Layer project
    model = layer.get_model('layer/t5-fine-tuning-with-layer/models/t5-english-to-sql').get_train()
    tokenizer = layer.get_model('layer/t5-fine-tuning-with-layer/models/t5-tokenizer').get_train()

    def greet(query):
        # Apply the same task prefix used during fine-tuning
        input_ids = tokenizer.encode(f"translate English to SQL: {query}", return_tensors="pt")
        outputs = model.generate(input_ids, max_length=1024)
        sql = tokenizer.decode(outputs[0], skip_special_tokens=True)
        return sql

    iface = gr.Interface(fn=greet, inputs="text", outputs="text", examples=[
        "Show me the average price of wines in Italy by provinces",
        "Cars built after 2020 and manufactured in Italy",
        "Top 10 cities by their population"
    ])
    iface.launch()

The code above uses Gradio to create a simple UI: an input text field for the natural-language query and an output text field that displays the predicted SQL query.
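Before deploying, you can also try the app locally by installing the dependencies (the requirements.txt below, plus gradio and transformers) and running python app.py; by default, Gradio serves the interface at http://127.0.0.1:7860.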

We will need a few extra libraries, so create a requirements.txt file with the following contents:

    layer==0.9.350435
    torch==1.11.0
    sentencepiece==0.1.96

Now we can publish our Gradio app:

  1. Create a new Space on Hugging Face
  2. Select Gradio as the Space SDK

Next, clone your Hugging Face Space into a local directory:

    $ git clone [YOUR_HUGGINGFACE_SPACE_URL]

Put the requirements.txt and app.py files into the cloned directory, then run the following commands in a terminal:

    $ git add app.py
    $ git add requirements.txt
    $ git commit -m "Add application files"
    $ git push

Now head over to your Hugging Face Space; once the app has been deployed, you will see the running interface.

Conclusion

This article showed how to fine-tune a large language model to teach it a new skill. You can design a task of your own and fine-tune T5 for your own use case.

The demo and the complete code for this project are available here:

https://huggingface.co/spaces/mecevit/english-to-sql

https://app.layer.ai/layer/t5-fine-tuning-with-layer

Author: Mehmet Ecevit
