1. Create the elk, elasticsearch, kibana, and filebeat directories
sudo mkdir -p /usr/local/elk/elasticsearch/config
sudo mkdir -p /usr/local/elk/elasticsearch/data
sudo mkdir -p /usr/local/elk/elasticsearch/logs
sudo mkdir -p /usr/local/elk/kibana/config
sudo mkdir -p /usr/local/elk/kibana/data
sudo mkdir -p /usr/local/elk/kibana/logs
sudo mkdir -p /usr/local/elk/filebeat/config
sudo mkdir -p /usr/local/elk/filebeat/data
sudo mkdir -p /usr/local/elk/filebeat/logs
Set permissions: make /usr/local/elk and all of its subdirectories owned by the current user's UID and GID.
sudo chown -R $(id -u):$(id -g) /usr/local/elk
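To confirm the ownership change took effect, a minimal check like the one below (using the paths created above; stat's %U:%G prints the owner and group) should now show your own user on each directory:
stat -c '%U:%G %n' /usr/local/elk /usr/local/elk/*/*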
2. Create docker-compose.yml (if Docker and Docker Compose are not installed on your Linux host yet, look up how to install them first)
cd /usr/local/elk/ && vi docker-compose.yml
Paste the following content into the file and save it.
Note: for now, keep every volume mount except the time-sync mount (/etc/localtime) commented out, as shown below.
#docker-compose.yml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2
    container_name: elasticsearch
    environment:
      - cluster.name=es-app-cluster
      - bootstrap.memory_lock=true
      - node.name=node-01
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - ingest.geoip.downloader.enabled=false # the correct setting name for disabling the GeoIP downloader
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=elastic
      - "ES_JAVA_OPTS=-Xms128m -Xmx128m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      #- /usr/local/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      #- /usr/local/elk/elasticsearch/data:/usr/share/elasticsearch/data
      #- /usr/local/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk-net
    restart: always
    privileged: true
  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=elastic
      - XPACK_SECURITY_ENABLED=true
      - SERVER_NAME=kibana
    volumes:
      #- /usr/local/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      #- /usr/local/elk/kibana/data:/usr/share/kibana/data
      #- /usr/local/elk/kibana/logs:/usr/share/kibana/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 5601:5601
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.6.2
    container_name: filebeat
    volumes:
      #- ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      #- ./filebeat/data:/usr/share/filebeat/data
      #- ./filebeat/logs:/usr/share/filebeat/logs
      #- /usr/workspace/logs/wclflow:/host/var/log/wclflow # host app log directory; adjust to where your logs actually live
      #- /usr/nginx/logs/access.log:/host/var/log/nginx/logs/access.log
      #- /usr/nginx/logs/error.log:/host/var/log/nginx/logs/error.log
      - /etc/localtime:/etc/localtime:ro
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
    user: root
networks:
  elk-net:
    driver: bridge
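Before bringing anything up, it is worth validating the file. Assuming you saved it at /usr/local/elk/docker-compose.yml as above, this prints the parsed configuration (or a clear parse error) without starting any containers:
cd /usr/local/elk && docker-compose config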
3. Start the containers and check their status
Start the containers (the image downloads are slow; be patient while they finish and the containers come up):
docker-compose up -d
If docker-compose up -d reports that the command was not found, replace every docker-compose in this article with docker compose.
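A quick way to check which variant your host actually has; whichever of the two commands succeeds is the one to use throughout:
docker-compose version || docker compose version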
Check whether the services started:
docker-compose ps
If all three containers show a status of Up, they are running.
4. Copy the elasticsearch, kibana, and filebeat configuration files
Copy the config, data, and logs directories out of the containers into the host directories we created earlier:
- Copy the elasticsearch container directories to the host:
docker cp elasticsearch:/usr/share/elasticsearch/config /usr/local/elk/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/data /usr/local/elk/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs /usr/local/elk/elasticsearch/
- Copy the kibana container directories to the host:
docker cp kibana:/usr/share/kibana/config /usr/local/elk/kibana/
docker cp kibana:/usr/share/kibana/data /usr/local/elk/kibana/
docker cp kibana:/usr/share/kibana/logs /usr/local/elk/kibana/
- Copy the filebeat container files to the host:
docker cp filebeat:/usr/share/filebeat/filebeat.yml /usr/local/elk/filebeat/config/
docker cp filebeat:/usr/share/filebeat/data /usr/local/elk/filebeat/
docker cp filebeat:/usr/share/filebeat/logs /usr/local/elk/filebeat/
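After the copies finish, the host-side config directories should no longer be empty; a quick sanity check:
ls /usr/local/elk/elasticsearch/config /usr/local/elk/kibana/config /usr/local/elk/filebeat/config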
5. Edit the elasticsearch, kibana, and filebeat configuration files
- Edit the elasticsearch configuration file: cd elasticsearch/config/ && rm -rf elasticsearch.yml && vi elasticsearch.yml, then enter the following content and save.
# elasticsearch.yml
cluster.name: "es-app-cluster"
# make Elasticsearch listen on all interfaces
network.host: 0.0.0.0
node.name: node-01
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
http.port: 9200
discovery.type: single-node
xpack.security.enabled: true
bootstrap.memory_lock: true
# disable TLS
xpack.security.http.ssl.enabled: false
xpack.security.transport.ssl.enabled: false
# the GeoIP database maps IP addresses to geographic locations; turn the download off
ingest.geoip.downloader.enabled: false
- Edit the kibana configuration file: cd ../../kibana/config/ && rm -rf kibana.yml && vi kibana.yml, then enter the following content and save.
server.host: "0.0.0.0"
server.shutdownTimeout: "10s"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: "zh-CN"
xpack.reporting.roles.enabled: false
- Edit the filebeat configuration file: cd ../../filebeat/config/ && rm -rf filebeat.yml && vi filebeat.yml, then enter the following content and save.
filebeat.inputs:
  - type: filestream
    id: filestream-wclflow # id must be unique
    enabled: true
    paths:
      - /host/var/log/wclflow/*.log
    fields_under_root: true
    fields:
      type: wclflow
      project: alllogs
      app: wclflow
  - type: filestream
    id: filestream-nginx-access # id must be unique
    enabled: true
    paths:
      - /host/var/log/nginx/logs/access.log
    fields_under_root: true
    fields:
      type: nginx_access
      project: access
      app: nginx
  - type: filestream
    id: filestream-nginx-error # id must be unique
    enabled: true
    paths:
      - /host/var/log/nginx/logs/error.log
    fields_under_root: true
    fields:
      type: nginx_error
      project: error
      app: nginx

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  username: elastic
  password: elastic
  index: "wclflow-%{+yyyy.MM.dd}"
  indices:
    - index: "nginx-logs-access-%{+yyyy.MM.dd}"
      when.contains:
        type: "nginx_access"
    - index: "nginx-logs-error-%{+yyyy.MM.dd}"
      when.contains:
        type: "nginx_error"

setup.template.name: "nginx-logs" # template name
setup.template.pattern: "nginx-logs-*" # template pattern
setup.ilm.enabled: false # recommended off if you do not need automatic index lifecycle management, or the cluster has no ILM policy configured

setup.kibana:
  host: "kibana:5601"
A note on the configuration above: it simulates log collection for three projects. Project 1 uses the wclflow project's log directory, project 2 the nginx access log, and project 3 the nginx error log. Add one filestream input block like those above for each project log directory you want to collect; the type, enabled, and fields_under_root keys stay as-is, and you customize the other values to your situation. (If Filebeat later reports a YAML format error for this file, the usual fix is to indent each "- type" block two more spaces to the right; see the check below.)
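As a sanity check once this file is mounted into the container (section 6) and the stack has been restarted (section 7), the Filebeat binary in the stock image can validate both the YAML syntax and the connection to Elasticsearch:
docker exec filebeat filebeat test config
docker exec filebeat filebeat test output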
6. With the configuration files in place, update docker-compose.yml
Go to the elk directory:
cd /usr/local/elk
Edit docker-compose.yml and uncomment the volume mounts that were commented out earlier; additionally, so that filebeat has sufficient permissions, the filebeat container is configured to run as the root user.
The updated file is below; use this version as the authoritative one (for convenience, just copy and paste it).
#docker-compose.yml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.2
    container_name: elasticsearch
    environment:
      - cluster.name=es-app-cluster
      - bootstrap.memory_lock=true
      - node.name=node-01
      - discovery.type=single-node
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false
      - xpack.security.transport.ssl.enabled=false
      - ingest.geoip.downloader.enabled=false # the correct setting name for disabling the GeoIP downloader
      - ELASTIC_USERNAME=elastic
      - ELASTIC_PASSWORD=elastic
      - "ES_JAVA_OPTS=-Xms128m -Xmx128m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - /usr/local/elk/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /usr/local/elk/elasticsearch/data:/usr/share/elasticsearch/data
      - /usr/local/elk/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elk-net
    restart: always
    privileged: true
  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.2
    container_name: kibana
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=elastic
      - XPACK_SECURITY_ENABLED=true
      - SERVER_NAME=kibana
    volumes:
      - /usr/local/elk/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
      - /usr/local/elk/kibana/data:/usr/share/kibana/data
      - /usr/local/elk/kibana/logs:/usr/share/kibana/logs
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 5601:5601
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
  filebeat:
    image: docker.elastic.co/beats/filebeat:8.6.2
    container_name: filebeat
    volumes:
      - ./filebeat/config/filebeat.yml:/usr/share/filebeat/filebeat.yml
      - ./filebeat/data:/usr/share/filebeat/data
      - ./filebeat/logs:/usr/share/filebeat/logs
      - /usr/workspace/logs/wclflow:/host/var/log/wclflow # host app log directory; adjust to where your logs actually live
      - /usr/nginx/logs/access.log:/host/var/log/nginx/logs/access.log
      - /usr/nginx/logs/error.log:/host/var/log/nginx/logs/error.log
      - /etc/localtime:/etc/localtime:ro
    networks:
      - elk-net
    depends_on:
      - elasticsearch
    restart: always
    privileged: true
    user: root
networks:
  elk-net:
    driver: bridge
7. Restart ELK
Shut the services down first:
docker-compose down
Then start them again:
docker-compose up -d
Check whether the services started:
docker-compose ps
If all three containers show a status of Up, everything is running.
8. Change the passwords of the built-in Elasticsearch users
Enter the elasticsearch container:
docker exec -it elasticsearch /bin/bash
Run the following:
./bin/elasticsearch-setup-passwords interactive
After pressing Enter, answer "y", then work through the long sequence of enter-password / confirm-password prompts for every built-in user; be patient and keep going until it finishes.
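If you only want to (re)set a single account rather than walking through all of them, Elasticsearch 8.x also ships an elasticsearch-reset-password tool; for example, run from inside the container, this sets the elastic user's password interactively:
./bin/elasticsearch-reset-password -u elastic -i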
When you are done changing passwords, run exit to leave the container.
Shut down and restart the services:
docker-compose down   # stop the containers
docker-compose up -d  # start the containers
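Once the stack is back up, you can confirm that Elasticsearch accepts the new credentials straight from the shell (substitute the password you chose for 123456):
curl -u elastic:123456 'http://localhost:9200/_cluster/health?pretty'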
9. Access the services in a browser
My server IP is 192.168.7.46.
Open http://<server IP>:9200/ in a browser to check the Elasticsearch status; you will be prompted to log in.
Enter the password you just set; for example, I set the elastic user's password to 123456. After logging in you will see the cluster information.
Open http://<server IP>:5601/ (your server's IP, port 5601). On the first visit Kibana asks you to configure Elastic; choose manual configuration and set the Elastic service address to your own IP and port, in my case 192.168.7.46:9200. You will then be prompted to log in; as in the previous step, log in as the elastic user (with my password of 123456).
After logging in you land on the home page.
View the indices
Go to index management, where you can see the indices, data streams, and index templates we configured.
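The same list is available without the Kibana UI through the _cat API, which is handy for scripting (again substituting your own password):
curl -u elastic:123456 'http://localhost:9200/_cat/indices/wclflow-*,nginx-logs-*?v'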
Configure Kibana for log viewing
Create the data views
Following the same steps used for the first view, create the nginx-logs-access and nginx-logs-error data views in turn.
With the views created, open Discover from the left-hand menu to reach the log query screen.
In Discover you can switch between data views to inspect the logs of the different projects.
To view a specific project's logs, select its data view; for example, select the wclflow view to see the wclflow service logs.
At this point the logs are flowing into ELK.
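If Discover stays empty, you can push a test entry through the pipeline by hand: appending a line to one of the mounted host log files (the wclflow host path from docker-compose.yml, assuming it exists on your machine) should show up in Kibana within a few seconds:
echo "$(date) test log entry" >> /usr/workspace/logs/wclflow/test.log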
10. Other useful commands
If you later need to change the configuration of a single container, you can do it as follows; I'll use the kibana container as an example.
Stop the kibana container:
docker-compose stop kibana
Then edit kibana.yml or the kibana-related parts of docker-compose.yml. Once done, run the following to recreate and start only the kibana container, leaving the other containers untouched:
docker-compose up -d --force-recreate --no-deps kibana
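To confirm the recreated container comes up cleanly, you can follow its logs until the startup messages settle:
docker-compose logs -f kibana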