

LangChain-10(2) Bonus: Writing an Agent to Check Local Docker Status (no real technical depth, just the idea)

You may want to read the previous section first; it will make this one easier to follow.

Install dependencies

pip install langchainhub
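langchainhub is what backs the hub.pull(...) call used later to fetch the agent prompt. A quick way to confirm the dependency works (a minimal sketch, assuming langchain is already installed and you have network access; depending on your langchain version the pull may go through langchainhub or langsmith under the hood):

from langchain import hub

# Fetch the XML agent conversation prompt from the LangChain Hub.
# If the hub client package is missing, this call fails at runtime.
prompt = hub.pull("hwchase17/xml-agent-convo")
print(prompt)  # inspect the prompt template the agent will use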

Write the code

Core code

@tool
def get_docker_info(docker_name: str) -> str:
    """Get information about a docker pod container info."""
    result = subprocess.run(['docker', 'inspect', str(docker_name)], capture_output=True, text=True)
    return result.stdout

The status is obtained by executing a shell command: running the docker instruction returns a large block of text, which is handed to the large model. The model extracts and reasons over that content and then gives us the final answer.
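Because docker inspect can return a lot of JSON, it helps to look at the raw tool output first. A minimal sketch (the container name is just an example, it assumes the imports from the full script below are in place, and depending on your LangChain version you may need .run() instead of .invoke()):

# Call the tool directly to see the raw text the model will have to digest.
raw = get_docker_info.invoke("lobe-chat-wzk")
print(len(raw))   # docker inspect output is typically a long JSON array
print(raw[:300])  # preview the beginning of it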

  • Note the @tool decorator: without it, the function cannot be used as a tool.
  • Be sure to write a docstring ("""...""") describing what the tool does; the model decides whether to call the tool based on this description (see the quick check after this list).
  • If gpt-3.5 does not perform well, try gpt-4.
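A quick check of the first two points (a minimal sketch; the exact description string can vary slightly between LangChain versions):

# The @tool decorator turns the function into a Tool object; its name and
# description (taken from the docstring) are what convert_tools() later
# injects into the prompt, so the model can decide when to call it.
print(get_docker_info.name)         # get_docker_info
print(get_docker_info.description)  # derived from the docstring above

The full script: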
from langchain import hub
from langchain.agents import AgentExecutor, tool
from langchain.agents.output_parsers import XMLAgentOutputParser
from langchain_openai import ChatOpenAI
import subprocess

model = ChatOpenAI(
    model="gpt-3.5-turbo",
)

@tool
def search(query: str) -> str:
    """Search things about current events."""
    return "32 degrees"

@tool
def get_docker_info(docker_name: str) -> str:
    """Get information about a docker pod container info."""
    result = subprocess.run(['docker', 'inspect', str(docker_name)], capture_output=True, text=True)
    return result.stdout

tool_list = [search, get_docker_info]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/xml-agent-convo")

# Logic for going from intermediate steps to a string to pass into model
# This is pretty tied to the prompt
def convert_intermediate_steps(intermediate_steps):
    log = ""
    for action, observation in intermediate_steps:
        log += (f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
                f"</tool_input><observation>{observation}</observation>")
    return log

# Logic for converting tools to string to go in prompt
def convert_tools(tools):
    return "\n".join([f"{tool.name}: {tool.description}" for tool in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_intermediate_steps(x["intermediate_steps"]),
    }
    | prompt.partial(tools=convert_tools(tool_list))
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tool_list)

message1 = agent_executor.invoke({"input": "whats the weather in New york?"})
print(f"message1: {message1}")

message2 = agent_executor.invoke({"input": "what is docker pod which name 'lobe-chat-wzk' info? I want to know it 'Image' url"})
print(f"message2: {message2}")
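To see what the agent_scratchpad string actually looks like, you can feed convert_intermediate_steps a hand-made step (a sketch for illustration only; the AgentAction values below are made up):

from langchain_core.agents import AgentAction

# One fake intermediate step: the agent called `search` and observed "32 degrees".
steps = [(AgentAction(tool="search", tool_input="weather in New York", log=""), "32 degrees")]
print(convert_intermediate_steps(steps))
# <tool>search</tool><tool_input>weather in New York</tool_input><observation>32 degrees</observation>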

Run the code

python3 test10.py

message1: {'input': 'whats the weather in New york?', 'output': 'The weather in New York is 32 degrees'}
message2: {'input': "what is docker pod which name 'lobe-chat-wzk' info? I want to know it 'Image' url", 'output': 'The Image URL for the docker pod named \'lobe-chat-wzk\' is "lobehub/lobe-chat"'}
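You can cross-check the agent's answer against docker itself by using the --format flag instead of reading the full inspect output (a small verification sketch, not part of the original script):

import subprocess

# Ask docker directly for the image of the container the agent was asked about.
out = subprocess.run(
    ['docker', 'inspect', '--format', '{{.Config.Image}}', 'lobe-chat-wzk'],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # expected: lobehub/lobe-chat, matching the agent's answer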


Tags: langchain docker python

This article is reposted from: https://blog.csdn.net/w776341482/article/details/137449020
Copyright belongs to the original author 武子康. If there is any infringement, please contact us for removal.
