

Installing Ollama online on Ubuntu 22.04 from within China, and configuring Open-WebUI & Dify

Configuring proxy access for Docker


Create or edit the Docker drop-in configuration file so the Docker daemon uses the proxy:

  sudo mkdir -p /etc/systemd/system/docker.service.d
  sudo vim /etc/systemd/system/docker.service.d/http-proxy.conf

and add the following content:

  [Service]
  Environment="HTTP_PROXY=http://10.10.9.232:30809"
  Environment="HTTPS_PROXY=http://10.10.9.232:30809"
  Environment="NO_PROXY=localhost,127.0.0.1"

Reload the systemd configuration and restart the Docker service:

  sudo systemctl daemon-reload
  sudo systemctl restart docker

Verify that the configuration took effect:

  sudo systemctl show --property=Environment docker
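The steps above can be wrapped into one small function. This is a minimal sketch of my own (not part of the original post); the target directory is a parameter so it can be exercised outside `/etc`, and the proxy address is the example one from above:

```shell
#!/bin/sh
# Write a systemd drop-in that makes the Docker daemon use an HTTP proxy.
write_docker_proxy_conf() {
    dir=$1      # e.g. /etc/systemd/system/docker.service.d
    proxy=$2    # e.g. http://10.10.9.232:30809
    mkdir -p "$dir"
    cat >"$dir/http-proxy.conf" <<EOF
[Service]
Environment="HTTP_PROXY=$proxy"
Environment="HTTPS_PROXY=$proxy"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
}

# Usage (as root):
#   write_docker_proxy_conf /etc/systemd/system/docker.service.d http://10.10.9.232:30809
#   systemctl daemon-reload && systemctl restart docker
```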

Installing Docker on Ubuntu 22.04


  curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
  systemctl enable --now docker

Installing docker-compose


  curl -L https://github.com/docker/compose/releases/download/v2.20.3/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
  chmod +x /usr/local/bin/docker-compose
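The download URL is assembled from `uname`, so it helps to print it before downloading. A small sketch that pins the same v2.20.3 release used above (the version number is carried over from that command, not a recommendation):

```shell
#!/bin/sh
# Compose the docker-compose download URL for this machine and print it,
# so a typo or wrong architecture is caught before curl runs.
COMPOSE_VERSION=v2.20.3
COMPOSE_URL="https://github.com/docker/compose/releases/download/${COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$COMPOSE_URL"

# Then, e.g.:
#   sudo curl -L "$COMPOSE_URL" -o /usr/local/bin/docker-compose
#   sudo chmod +x /usr/local/bin/docker-compose
```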

Verifying the installation


  docker -v
  docker-compose -v

Installing Ollama on Ubuntu 22.04

  • Before starting, you must have working proxy access, with proxychains installed and configured.
  • Once proxychains is configured, prefix every curl command in the online install script with `proxychains`.
  • Then run the install script as usual; as long as the proxy stays up, the online installation should succeed without surprises.
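Prefixing every curl call by hand is error-prone; a sed one-liner can patch the downloaded script instead. A sketch of my own (back up `install.sh` first; the pattern is a simple heuristic, not a full shell parser):

```shell
#!/bin/sh
# Prefix every curl invocation in a script with proxychains.
# The second expression makes the patch idempotent if run twice.
patch_with_proxychains() {
    file=$1
    sed -i 's/\(^\|[^a-zA-Z_]\)curl /\1proxychains curl /g; s/proxychains proxychains/proxychains/g' "$file"
}

# Usage: cp install.sh install.sh.bak && patch_with_proxychains install.sh
```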

Installing Ollama online


In China the online install times out because the download is too slow, so download the install script first and run it through the proxy:

  wget https://ollama.com/install.sh
  chmod +x install.sh

or:

  curl -fsSL https://ollama.com/install.sh -o ollama_install.sh
  chmod +x ollama_install.sh

or:

  curl -O https://ollama.com/install.sh
  chmod +x install.sh
  • The modified version of the original install script:


#!/bin/sh
# This script installs Ollama on Linux.
# It detects the current operating system architecture and installs the appropriate version of Ollama.

set -eu

status() { echo ">>> $*" >&2; }
error() { echo "ERROR $*"; exit 1; }
warning() { echo "WARNING: $*"; }

TEMP_DIR=$(mktemp -d)
cleanup() { rm -rf $TEMP_DIR; }
trap cleanup EXIT

available() { command -v $1 >/dev/null; }
require() {
    local MISSING=''
    for TOOL in $*; do
        if ! available $TOOL; then
            MISSING="$MISSING $TOOL"
        fi
    done
    echo $MISSING
}

[ "$(uname -s)" = "Linux" ] || error 'This script is intended to run on Linux only.'

ARCH=$(uname -m)
case "$ARCH" in
    x86_64) ARCH="amd64" ;;
    aarch64|arm64) ARCH="arm64" ;;
    *) error "Unsupported architecture: $ARCH" ;;
esac

IS_WSL2=false

KERN=$(uname -r)
case "$KERN" in
    *icrosoft*WSL2 | *icrosoft*wsl2) IS_WSL2=true;;
    *icrosoft) error "Microsoft WSL1 is not currently supported. Please use WSL2 with 'wsl --set-version <distro> 2'" ;;
    *) ;;
esac

VER_PARAM="${OLLAMA_VERSION:+?version=$OLLAMA_VERSION}"

SUDO=
if [ "$(id -u)" -ne 0 ]; then
    # Not running as root, so sudo is required
    if ! available sudo; then
        error "This script requires superuser permissions. Please re-run as root."
    fi
    SUDO="sudo"
fi

NEEDS=$(require curl awk grep sed tee xargs)
if [ -n "$NEEDS" ]; then
    status "ERROR: The following tools are required but missing:"
    for NEED in $NEEDS; do
        echo " - $NEED"
    done
    exit 1
fi

for BINDIR in /usr/local/bin /usr/bin /bin; do
    echo $PATH | grep -q $BINDIR && break || continue
done
OLLAMA_INSTALL_DIR=$(dirname ${BINDIR})

status "Installing ollama to $OLLAMA_INSTALL_DIR"
$SUDO install -o0 -g0 -m755 -d $BINDIR
$SUDO install -o0 -g0 -m755 -d "$OLLAMA_INSTALL_DIR"
if proxychains curl -I --silent --fail --location "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" >/dev/null ; then
    status "Downloading Linux ${ARCH} bundle"
    proxychains curl --fail --show-error --location --progress-bar \
        "https://ollama.com/download/ollama-linux-${ARCH}.tgz${VER_PARAM}" | \
        $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
    BUNDLE=1
    if [ "$OLLAMA_INSTALL_DIR/bin/ollama" != "$BINDIR/ollama" ] ; then
        status "Making ollama accessible in the PATH in $BINDIR"
        $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
    fi
else
    status "Downloading Linux ${ARCH} CLI"
    proxychains curl --fail --show-error --location --progress-bar -o "$TEMP_DIR/ollama" \
        "https://ollama.com/download/ollama-linux-${ARCH}${VER_PARAM}"
    $SUDO install -o0 -g0 -m755 $TEMP_DIR/ollama $OLLAMA_INSTALL_DIR/ollama
    BUNDLE=0
    if [ "$OLLAMA_INSTALL_DIR/ollama" != "$BINDIR/ollama" ] ; then
        status "Making ollama accessible in the PATH in $BINDIR"
        $SUDO ln -sf "$OLLAMA_INSTALL_DIR/ollama" "$BINDIR/ollama"
    fi
fi

install_success() {
    status 'The Ollama API is now available at 127.0.0.1:11434.'
    status 'Install complete. Run "ollama" from the command line.'
}
trap install_success EXIT

# Everything from this point onwards is optional.

configure_systemd() {
    if ! id ollama >/dev/null 2>&1; then
        status "Creating ollama user..."
        $SUDO useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
    fi
    if getent group render >/dev/null 2>&1; then
        status "Adding ollama user to render group..."
        $SUDO usermod -a -G render ollama
    fi
    if getent group video >/dev/null 2>&1; then
        status "Adding ollama user to video group..."
        $SUDO usermod -a -G video ollama
    fi

    status "Adding current user to ollama group..."
    $SUDO usermod -a -G ollama $(whoami)

    status "Creating ollama systemd service..."
    cat <<EOF | $SUDO tee /etc/systemd/system/ollama.service >/dev/null
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=$BINDIR/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"

[Install]
WantedBy=default.target
EOF
    SYSTEMCTL_RUNNING="$(systemctl is-system-running || true)"
    case $SYSTEMCTL_RUNNING in
        running|degraded)
            status "Enabling and starting ollama service..."
            $SUDO systemctl daemon-reload
            $SUDO systemctl enable ollama

            start_service() { $SUDO systemctl restart ollama; }
            trap start_service EXIT
            ;;
    esac
}

if available systemctl; then
    configure_systemd
fi

# WSL2 only supports GPUs via nvidia passthrough
# so check for nvidia-smi to determine if GPU is available
if [ "$IS_WSL2" = true ]; then
    if available nvidia-smi && [ -n "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
        status "Nvidia GPU detected."
    fi
    install_success
    exit 0
fi

# Install GPU dependencies on Linux
if ! available lspci && ! available lshw; then
    warning "Unable to detect NVIDIA/AMD GPU. Install lspci or lshw to automatically detect and install GPU dependencies."
    exit 0
fi

check_gpu() {
    # Look for devices based on vendor ID for NVIDIA and AMD
    case $1 in
        lspci)
            case $2 in
                nvidia) available lspci && lspci -d '10de:' | grep -q 'NVIDIA' || return 1 ;;
                amdgpu) available lspci && lspci -d '1002:' | grep -q 'AMD' || return 1 ;;
            esac ;;
        lshw)
            case $2 in
                nvidia) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[10DE\]' || return 1 ;;
                amdgpu) available lshw && $SUDO lshw -c display -numeric -disable network | grep -q 'vendor: .* \[1002\]' || return 1 ;;
            esac ;;
        nvidia-smi) available nvidia-smi || return 1 ;;
    esac
}

if check_gpu nvidia-smi; then
    status "NVIDIA GPU installed."
    exit 0
fi

if ! check_gpu lspci nvidia && ! check_gpu lshw nvidia && ! check_gpu lspci amdgpu && ! check_gpu lshw amdgpu; then
    install_success
    warning "No NVIDIA/AMD GPU detected. Ollama will run in CPU-only mode."
    exit 0
fi

if check_gpu lspci amdgpu || check_gpu lshw amdgpu; then
    if [ $BUNDLE -ne 0 ]; then
        status "Downloading Linux ROCm ${ARCH} bundle"
        proxychains curl --fail --show-error --location --progress-bar \
            "https://ollama.com/download/ollama-linux-${ARCH}-rocm.tgz${VER_PARAM}" | \
            $SUDO tar -xzf - -C "$OLLAMA_INSTALL_DIR"
        install_success
        status "AMD GPU ready."
        exit 0
    fi
    # Look for pre-existing ROCm v6 before downloading the dependencies
    for search in "${HIP_PATH:-''}" "${ROCM_PATH:-''}" "/opt/rocm" "/usr/lib64"; do
        if [ -n "${search}" ] && [ -e "${search}/libhipblas.so.2" -o -e "${search}/lib/libhipblas.so.2" ]; then
            status "Compatible AMD GPU ROCm library detected at ${search}"
            install_success
            exit 0
        fi
    done

    status "Downloading AMD GPU dependencies..."
    $SUDO rm -rf /usr/share/ollama/lib
    $SUDO chmod o+x /usr/share/ollama
    $SUDO install -o ollama -g ollama -m 755 -d /usr/share/ollama/lib/rocm
    proxychains curl --fail --show-error --location --progress-bar "https://ollama.com/download/ollama-linux-amd64-rocm.tgz${VER_PARAM}" \
        | $SUDO tar zx --owner ollama --group ollama -C /usr/share/ollama/lib/rocm .
    install_success
    status "AMD GPU ready."
    exit 0
fi

CUDA_REPO_ERR_MSG="NVIDIA GPU detected, but your OS and Architecture are not supported by NVIDIA. Please install the CUDA driver manually https://docs.nvidia.com/cuda/cuda-installation-guide-linux/"
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-7-centos-7
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-8-rocky-8
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#rhel-9-rocky-9
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#fedora
install_cuda_driver_yum() {
    status 'Installing NVIDIA repository...'
    case $PACKAGE_MANAGER in
        yum)
            $SUDO $PACKAGE_MANAGER -y install yum-utils
            if proxychains curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER-config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
        dnf)
            if proxychains curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo" >/dev/null ; then
                $SUDO $PACKAGE_MANAGER config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-$1$2.repo
            else
                error $CUDA_REPO_ERR_MSG
            fi
            ;;
    esac

    case $1 in
        rhel)
            status 'Installing EPEL repository...'
            # EPEL is required for third-party dependencies such as dkms and libvdpau
            $SUDO $PACKAGE_MANAGER -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-$2.noarch.rpm || true
            ;;
    esac

    status 'Installing CUDA driver...'
    if [ "$1" = 'centos' ] || [ "$1$2" = 'rhel7' ]; then
        $SUDO $PACKAGE_MANAGER -y install nvidia-driver-latest-dkms
    fi
    $SUDO $PACKAGE_MANAGER -y install cuda-drivers
}

# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#ubuntu
# ref: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#debian
install_cuda_driver_apt() {
    status 'Installing NVIDIA repository...'
    if proxychains curl -I --silent --fail --location "https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb" >/dev/null ; then
        proxychains curl -fsSL -o $TEMP_DIR/cuda-keyring.deb https://developer.download.nvidia.com/compute/cuda/repos/$1$2/$(uname -m | sed -e 's/aarch64/sbsa/')/cuda-keyring_1.1-1_all.deb
    else
        error $CUDA_REPO_ERR_MSG
    fi

    case $1 in
        debian)
            status 'Enabling contrib sources...'
            $SUDO sed 's/main/contrib/' < /etc/apt/sources.list | $SUDO tee /etc/apt/sources.list.d/contrib.list > /dev/null
            if [ -f "/etc/apt/sources.list.d/debian.sources" ]; then
                $SUDO sed 's/main/contrib/' < /etc/apt/sources.list.d/debian.sources | $SUDO tee /etc/apt/sources.list.d/contrib.sources > /dev/null
            fi
            ;;
    esac

    status 'Installing CUDA driver...'
    $SUDO dpkg -i $TEMP_DIR/cuda-keyring.deb
    $SUDO apt-get update

    [ -n "$SUDO" ] && SUDO_E="$SUDO -E" || SUDO_E=
    DEBIAN_FRONTEND=noninteractive $SUDO_E apt-get -y install cuda-drivers -q
}

if [ ! -f "/etc/os-release" ]; then
    error "Unknown distribution. Skipping CUDA installation."
fi

. /etc/os-release

OS_NAME=$ID
OS_VERSION=$VERSION_ID

PACKAGE_MANAGER=
for PACKAGE_MANAGER in dnf yum apt-get; do
    if available $PACKAGE_MANAGER; then
        break
    fi
done

if [ -z "$PACKAGE_MANAGER" ]; then
    error "Unknown package manager. Skipping CUDA installation."
fi

if ! check_gpu nvidia-smi || [ -z "$(nvidia-smi | grep -o "CUDA Version: [0-9]*\.[0-9]*")" ]; then
    case $OS_NAME in
        centos|rhel) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -d '.' -f 1) ;;
        rocky) install_cuda_driver_yum 'rhel' $(echo $OS_VERSION | cut -c1) ;;
        fedora) [ $OS_VERSION -lt '39' ] && install_cuda_driver_yum $OS_NAME $OS_VERSION || install_cuda_driver_yum $OS_NAME '39';;
        amzn) install_cuda_driver_yum 'fedora' '37' ;;
        debian) install_cuda_driver_apt $OS_NAME $OS_VERSION ;;
        ubuntu) install_cuda_driver_apt $OS_NAME $(echo $OS_VERSION | sed 's/\.//') ;;
        *) exit ;;
    esac
fi

if ! lsmod | grep -q nvidia || ! lsmod | grep -q nvidia_uvm; then
    KERNEL_RELEASE="$(uname -r)"
    case $OS_NAME in
        rocky) $SUDO $PACKAGE_MANAGER -y install kernel-devel kernel-headers ;;
        centos|rhel|amzn) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE kernel-headers-$KERNEL_RELEASE ;;
        fedora) $SUDO $PACKAGE_MANAGER -y install kernel-devel-$KERNEL_RELEASE ;;
        debian|ubuntu) $SUDO apt-get -y install linux-headers-$KERNEL_RELEASE ;;
        *) exit ;;
    esac

    NVIDIA_CUDA_VERSION=$($SUDO dkms status | awk -F: '/added/ { print $1 }')
    if [ -n "$NVIDIA_CUDA_VERSION" ]; then
        $SUDO dkms install $NVIDIA_CUDA_VERSION
    fi

    if lsmod | grep -q nouveau; then
        status 'Reboot to complete NVIDIA CUDA driver install.'
        exit 0
    fi

    $SUDO modprobe nvidia
    $SUDO modprobe nvidia_uvm
fi

# make sure the NVIDIA modules are loaded on boot with nvidia-persistenced
if available nvidia-persistenced; then
    $SUDO touch /etc/modules-load.d/nvidia.conf
    MODULES="nvidia nvidia-uvm"
    for MODULE in $MODULES; do
        if ! grep -qxF "$MODULE" /etc/modules-load.d/nvidia.conf; then
            echo "$MODULE" | $SUDO tee -a /etc/modules-load.d/nvidia.conf > /dev/null
        fi
    done
fi

status "NVIDIA GPU ready."
install_success


  • Manual installation reference:

Configuring environment variables


Edit /home/viadmin/.bashrc (vim /home/viadmin/.bashrc) and add:

  export OLLAMA_HOST=http://10.10.16.60:11434

Then set the same value for systemd, apply the shell change, and restart the service:

  systemctl set-environment OLLAMA_HOST=http://10.10.16.60:11434
  source .bashrc
  systemctl restart ollama
  systemctl status ollama
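A malformed OLLAMA_HOST fails silently, so a quick format check before restarting the service is useful. A minimal sketch of my own (pure shell, no network access; once the service is up, the endpoint itself can be probed with `curl -s "$OLLAMA_HOST/api/tags"`):

```shell
#!/bin/sh
# Check that OLLAMA_HOST looks like http://host:port before using it.
valid_ollama_host() {
    case $1 in
        http://*:[0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

# e.g. valid_ollama_host "http://10.10.16.60:11434" && echo "looks ok"
```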


Ollama command-line options


  viadmin@ollama-pro:~$ ollama --help
  Large language model runner

  Usage:
    ollama [flags]
    ollama [command]

  Available Commands:
    serve       Start ollama
    create      Create a model from a Modelfile
    show        Show information for a model
    run         Run a model
    stop        Stop a running model
    pull        Pull a model from a registry
    push        Push a model to a registry
    list        List models
    ps          List running models
    cp          Copy a model
    rm          Remove a model
    help        Help about any command

  Flags:
    -h, --help      help for ollama
    -v, --version   Show version information

  Use "ollama [command] --help" for more information about a command.

Error when installing a model



  Error: could not connect to ollama app, is it running?

  • Some digging showed that the ollama app has to be started first: sudo ollama serve


  • The command above runs interactively in the foreground; you can use the screen command and run it inside a screen session instead.
  • Once it is running, you can download models.
  • https://github.com/ollama/ollama
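Instead of keeping a screen session around, a small helper can start the server detached with nohup and log to a file. A sketch of my own (the helper name `run_detached` is hypothetical; managing the service through systemd, as described later, is the cleaner option):

```shell
#!/bin/sh
# Start a long-running command in the background, detached from the terminal,
# with its output captured in a log file. A lightweight alternative to screen.
run_detached() {
    name=$1; shift
    nohup "$@" >"/tmp/${name}.log" 2>&1 &
    echo $! >"/tmp/${name}.pid"
}

# e.g. run_detached ollama ollama serve
#      tail -f /tmp/ollama.log
#      kill "$(cat /tmp/ollama.pid)"   # to stop it
```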


Final result



Installing Open-WebUI with Docker

Official documentation


Official:

  sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Custom install, changing port 3000 to port 80:

  docker run -d -p 80:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main


  • In testing, although Open-WebUI started successfully with the commands above, it could not detect the locally installed Ollama models, so it has to be started as follows instead:


  sudo docker run -d --network=host -e OLLAMA_BASE_URL=http://127.0.0.1:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Result



Setting up Dify with Docker

  1. Clone the Dify source code locally: git clone https://github.com/langgenius/dify.git
  2. Enter the Docker directory of the Dify source: cd dify/docker
  3. Copy the environment configuration file: cp .env.example .env
  4. Start the Docker containers
  • Choose the right command for the Docker Compose version on your system. You can check it with $ docker compose version; see the official Docker documentation for details.

  • If you have Docker Compose V2, use: docker compose up -d

  • If you have Docker Compose V1, use: docker-compose up -d

  • Finally, check that all containers are running: docker compose ps

  • Reference:

  • https://docs.dify.ai/zh-hans/getting-started/install-self-hosted/docker-compose

  • Once the above is done, browse to the server's IP address as usual; you will be asked to initialize an account and password, and can then log in. Below is what it looks like after logging in.
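The V1/V2 branching above can be folded into one wrapper so later steps do not care which Compose entry point is installed. A sketch of my own (the function name `compose` is hypothetical):

```shell
#!/bin/sh
# Run Compose through whichever entry point this machine has:
# the V2 plugin ("docker compose") or the standalone V1 binary ("docker-compose").
compose() {
    if docker compose version >/dev/null 2>&1; then
        docker compose "$@"
    elif command -v docker-compose >/dev/null 2>&1; then
        docker-compose "$@"
    else
        echo "neither 'docker compose' nor 'docker-compose' is available" >&2
        return 1
    fi
}

# e.g. compose up -d && compose ps
```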


Loading Ollama in Dify

  • I hit a pitfall here at first: by default, recent versions of Ollama run as a systemd service as soon as installation finishes. I did not look closely and started the service manually with ollama serve as well, so the two instances got mixed up; the end result was that no matter how I configured things, Dify could not see any of the models Ollama had loaded.
  • The fix was to stop the instance I had started by hand with ollama serve, manage and configure the service through systemd instead, and make it listen on all IP addresses.
  • Because of the mix-up, the models had been downloaded to the wrong path, so the model path also had to be changed.

Parameters to add to the systemd unit

  • sudo vim /etc/systemd/system/ollama.service


  Environment="OLLAMA_HOST=0.0.0.0"
  Environment="OLLAMA_MODELS=/home/viadmin/.ollama/models"

  • The final configuration:


  viadmin@ollama-pro:/etc/systemd/system$ cat ollama.service
  [Unit]
  Description=Ollama Service
  After=network-online.target

  [Service]
  Environment="OLLAMA_HOST=0.0.0.0"
  ExecStart=/usr/local/bin/ollama serve
  User=ollama
  Group=ollama
  Restart=always
  RestartSec=3
  Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
  Environment="OLLAMA_MODELS=/home/viadmin/.ollama/models"

  [Install]
  WantedBy=default.target
  • Make sure the directory permissions are generous enough; as a last resort, use 777.
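Rather than falling back to 777, it is usually enough to give the ollama service user ownership and group access. A sketch of my own, assuming the model path from the unit file above (the helper name `fix_model_perms` is hypothetical):

```shell
#!/bin/sh
# Grant owner/group read-write access on a models directory without 777:
# a 700 directory becomes 775, a 600 file becomes 664; execute bits are
# only added where they already exist or where needed for traversal (X).
fix_model_perms() {
    dir=$1
    chmod -R u+rwX,g+rwX,o+rX "$dir"
}

# On the real host:
#   sudo chown -R ollama:ollama /home/viadmin/.ollama/models
#   sudo sh -c 'chmod -R u+rwX,g+rwX,o+rX /home/viadmin/.ollama/models'
#   sudo chmod o+x /home/viadmin /home/viadmin/.ollama   # parents need search permission
```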

  • Restart the service:


  sudo systemctl daemon-reload
  sudo systemctl restart ollama.service
  sudo systemctl status ollama.service


A drifting life takes constant effort before the blurry ambitions in the distance come into focus!

Tags: ubuntu linux ops

Reposted from: https://blog.csdn.net/qq_35485206/article/details/143459935
Copyright belongs to the original author 心上之秋. If there is any infringement, please contact us for removal.
