Setting Up ELK on K8S (Elasticsearch, Kibana, Logstash and Filebeat)

Prerequisites:

1. Set up a K8S cluster. Reference:

Building a Kubernetes cluster on ECS cloud hosts, detailed walkthrough: https://blog.csdn.net/Soft_Engneer/article/details/124517916?spm=1001.2014.3001.5502

2. Deploy GlusterFS shared storage. Reference:

Installing GlusterFS on CentOS, deployment and testing: https://blog.csdn.net/Soft_Engneer/article/details/124554384?spm=1001.2014.3001.5502

3. Pull the images used in this deployment:

  docker pull docker.elastic.co/elasticsearch/elasticsearch:7.17.2
  docker pull docker.elastic.co/kibana/kibana:7.17.2
  docker pull docker.elastic.co/logstash/logstash:7.17.2
  docker pull docker.elastic.co/beats/filebeat:7.17.2

1. Setting up elasticsearch + kibana

elasticsearch configuration file:

  [root@k8s-node01 elk]# more elasticsearch.yml
  cluster.name: my-es
  node.name: "node-1"
  path.data: /usr/share/elasticsearch/data
  #path.logs: /var/log/elasticsearch
  bootstrap.memory_lock: false
  network.host: 0.0.0.0
  http.port: 9200
  # IP addresses of the cluster nodes; names such as els or els.shuaiguoxia.com also work, as long as every node can resolve them
  #discovery.zen.ping.unicast.hosts: ["172.16.30.11", "172.17.77.12"]
  # Number of master-eligible nodes in the cluster
  #discovery.zen.minimum_master_nodes: 2
  discovery.seed_hosts: ["127.0.0.1", "[::1]"]
  cluster.initial_master_nodes: ["node-1"]
  # Extra settings so the head plugin can access es
  http.cors.enabled: true
  http.cors.allow-origin: "*"
  http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type
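
The http.cors.* lines are what allow the elasticsearch-head plugin to reach the cluster from a browser. Once the es-kibana Pod is running (it is deployed later in this section), a quick sanity check is to send a request with an Origin header and look for the Access-Control-* response headers; the Pod IP below is a placeholder:

  # Placeholder IP: substitute the Pod IP reported by `kubectl get pod -o wide`
  ES_POD_IP=10.244.1.22
  # With http.cors.enabled, the response should include an Access-Control-Allow-Origin header
  curl -s -i -H "Origin: http://head.example.com" "http://${ES_POD_IP}:9200/" | grep -i "access-control"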

kibana configuration file:

The elasticsearch host that kibana connects to is addressed by a DNS name, i.e. the Pod created by the StatefulSet, in the form <pod-name>.<service-name>.<namespace>:

  [root@k8s-node01 elk]# more kibana.yml
  server.port: 5601
  server.host: "0.0.0.0"
  elasticsearch.hosts: "http://es-kibana-0.es-kibana.kube-system:9200"
  kibana.index: ".kibana"

Create ConfigMaps from the elasticsearch and kibana configuration files:

  kubectl create configmap es-config -n kube-system --from-file=elasticsearch.yml
  kubectl create configmap kibana-config -n kube-system --from-file=kibana.yml
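
To confirm the ConfigMaps were created and actually contain the files, list and inspect them (standard kubectl, nothing beyond what was created above):

  # List the two ConfigMaps just created in kube-system
  kubectl get configmap es-config kibana-config -n kube-system
  # Dump the stored elasticsearch.yml to make sure the content is intact
  kubectl describe configmap es-config -n kube-system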

Create the GlusterFS volume that will back the PV.
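
The original post does not show the GlusterFS commands, so the following is only a sketch of how the es-volume replica volume might be created; the peer hostnames and brick paths (glusterfs01/glusterfs02 and /data/brick/es-volume) are assumptions and must be replaced with the ones from your own GlusterFS deployment:

  # Run on one of the GlusterFS nodes (hostnames and brick paths are placeholders)
  gluster volume create es-volume replica 2 \
    glusterfs01:/data/brick/es-volume \
    glusterfs02:/data/brick/es-volume
  # Start the volume and confirm its status
  gluster volume start es-volume
  gluster volume info es-volume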

With the es-volume GlusterFS volume in place, create the Endpoints and Service for it:

es-endpoints.yaml

es-glusterfs-svc.yaml

  [root@k8s-node01 elk]# more es-endpoints.yaml
  apiVersion: v1
  kind: Endpoints
  metadata:
    name: glusterfs-es
    namespace: kube-system
  subsets:
  - addresses:
    - ip: 192.168.16.5
    ports:
    - port: 49155
  - addresses:
    - ip: 192.168.16.4
    ports:
    - port: 49155
  - addresses:
    - ip: 172.17.22.4
    ports:
    - port: 49155
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# more es-glusterfs-svc.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: glusterfs-es
    namespace: kube-system
  spec:
    ports:
    - port: 49155
  [root@k8s-node01 elk]#

Create these resources:

  [root@k8s-node01 elk]# kubectl create -f es-endpoints.yaml
  [root@k8s-node01 elk]# kubectl create -f es-glusterfs-svc.yaml
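
A quick check that the Endpoints object points at the GlusterFS nodes and that the Service picked it up:

  # The ENDPOINTS column should list the three GlusterFS addresses on port 49155
  kubectl get endpoints glusterfs-es -n kube-system
  kubectl get svc glusterfs-es -n kube-system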

PV and PVC configuration files for the es storage: es-pv.yaml and es-pvc.yaml

  [root@k8s-node01 elk]# more es-pv.yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: es-pv
    namespace: kube-system
  spec:
    capacity:
      storage: 5Gi
    accessModes:
    - ReadWriteMany
    glusterfs:
      endpoints: "glusterfs-es"
      path: "es-volume"
      readOnly: false
  [root@k8s-node01 elk]# more es-pvc.yaml
  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: es-pv-claim
    namespace: kube-system
    labels:
      app: es
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 5Gi
  [root@k8s-node01 elk]#

Create the PV and PVC:

  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl apply -f es-pv.yaml
  persistentvolume/es-pv created
  [root@k8s-node01 elk]# kubectl apply -f es-pvc.yaml
  persistentvolumeclaim/es-pv-claim created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get pv,pvc -A
  NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
  persistentvolume/es-pv        5Gi        RWX            Retain           Bound    kube-system/es-pv-claim                           26s
  persistentvolume/prometheus   4Gi        RWX            Retain           Bound    prome-system/prometheus                           23h
  NAMESPACE      NAME                                 STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  kube-system    persistentvolumeclaim/es-pv-claim    Bound    es-pv        5Gi        RWX                           22s
  prome-system   persistentvolumeclaim/prometheus     Bound    prometheus   4Gi        RWX                           23h

The es-kibana StatefulSet manifest: es-statefulset.yaml

  [root@k8s-node01 elk]# more es-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    labels:
      app: es-kibana
    name: es-kibana
    namespace: kube-system
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: es-kibana
    serviceName: "es-kibana"
    template:
      metadata:
        labels:
          app: es-kibana
      spec:
        imagePullSecrets:
        - name: registry-pull-secret
        containers:
        - image: elasticsearch:7.17.2
          imagePullPolicy: IfNotPresent
          lifecycle:
            postStart:
              exec:
                command: [ "/bin/bash", "-c", "sysctl -w vm.max_map_count=262144; ulimit -l unlimited; chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data;" ]
          name: elasticsearch
          resources:
            requests:
              memory: "800Mi"
              cpu: "800m"
            limits:
              memory: "1Gi"
              cpu: "1000m"
          ports:
          - containerPort: 9200
          - containerPort: 9300
          volumeMounts:
          - name: es-config
            mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
            subPath: elasticsearch.yml
          - name: es-persistent-storage
            mountPath: /usr/share/elasticsearch/data
          env:
          - name: TZ
            value: Asia/Shanghai
        - image: kibana:7.17.2
          imagePullPolicy: IfNotPresent
          #command: [ "/bin/bash", "-ce", "tail -f /dev/null" ]
          name: kibana
          env:
          - name: TZ
            value: Asia/Shanghai
          volumeMounts:
          - name: kibana-config
            mountPath: /usr/share/kibana/config/kibana.yml
            subPath: kibana.yml
        volumes:
        - name: es-config
          configMap:
            name: es-config
        - name: kibana-config
          configMap:
            name: kibana-config
        - name: es-persistent-storage
          persistentVolumeClaim:
            claimName: es-pv-claim
        #hostNetwork: true
        #dnsPolicy: ClusterFirstWithHostNet
        #nodeSelector:
        #  kubernetes.io/hostname: 172.16.30.1

Create the es-kibana application:

  [root@k8s-node01 elk]# kubectl create -f es-statefulset.yaml
  statefulset.apps/es-kibana created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get pod -o wide -n kube-system|grep es
  es-kibana-0   2/2   Running   0   18s   10.244.1.22   k8s-node01   <none>   <none>
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]#
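
The postStart hook in the StatefulSet raises vm.max_map_count and fixes the ownership of the data directory. If elasticsearch fails to start, these are the first things to verify inside the container (the commands below assume the pod name es-kibana-0 shown above):

  # vm.max_map_count must be at least 262144 for elasticsearch
  kubectl exec -n kube-system es-kibana-0 -c elasticsearch -- sysctl vm.max_map_count
  # The data directory should be owned by the elasticsearch user
  kubectl exec -n kube-system es-kibana-0 -c elasticsearch -- ls -ld /usr/share/elasticsearch/data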

Use curl to check that elasticsearch is healthy (the IP of pod es-kibana-0 is 10.244.1.22):

  [root@k8s-node01 elk]# kubectl get pod -o wide -n kube-system|grep es
  es-kibana-0   2/2   Running   0   18s   10.244.1.22   k8s-node01   <none>   <none>
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# curl 10.244.1.22:9200
  {
    "name" : "node-1",
    "cluster_name" : "my-es",
    "cluster_uuid" : "0kCaXU_CSpi4yByyW0utsA",
    "version" : {
      "number" : "7.17.2",
      "build_flavor" : "default",
      "build_type" : "docker",
      "build_hash" : "de7261de50d90919ae53b0eff9413fd7e5307301",
      "build_date" : "2022-03-28T15:12:21.446567561Z",
      "build_snapshot" : false,
      "lucene_version" : "8.11.1",
      "minimum_wire_compatibility_version" : "6.8.0",
      "minimum_index_compatibility_version" : "6.0.0-beta1"
    },
    "tagline" : "You Know, for Search"
  }
  [root@k8s-node01 elk]#
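
Beyond the banner above, the cluster health endpoint gives a quicker pass/fail signal; with a single data node a yellow status is expected because replica shards cannot be allocated:

  # green/yellow is fine for a single-node cluster; red means something is wrong
  curl -s "10.244.1.22:9200/_cluster/health?pretty"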

Create the headless ClusterIP Service for es-kibana: es-cluster-none-svc.yaml

  [root@k8s-node01 elk]# more es-cluster-none-svc.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: es-kibana
    name: es-kibana
    namespace: kube-system
  spec:
    ports:
    - name: es9200
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: es9300
      port: 9300
      protocol: TCP
      targetPort: 9300
    clusterIP: None
    selector:
      app: es-kibana
    type: ClusterIP
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl apply -f es-cluster-none-svc.yaml
  service/es-kibana created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get svc -n kube-system|grep es-kibana
  es-kibana   ClusterIP   None   <none>   9200/TCP,9300/TCP   29s
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]#

Once this headless Service exists, kibana can resolve the elasticsearch host name from kibana.yml and connect to it.
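
To double-check that the DNS name used in kibana.yml really resolves now, a throwaway busybox pod with nslookup is enough (busybox:1.28 is used because its nslookup output is reliable for this kind of test):

  # Resolve the StatefulSet pod DNS name created by the headless Service
  kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup es-kibana-0.es-kibana.kube-system.svc.cluster.local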

For convenient access from outside the cluster, create a NodePort Service: es-nodeport-svc.yaml

  [root@k8s-node01 elk]# more es-nodeport-svc.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: es-kibana
    name: es-kibana-nodeport-svc
    namespace: kube-system
  spec:
    ports:
    - name: 9200-9200
      port: 9200
      protocol: TCP
      targetPort: 9200
      #nodePort: 9200
    - name: 5601-5601
      port: 5601
      protocol: TCP
      targetPort: 5601
      #nodePort: 5601
    selector:
      app: es-kibana
    type: NodePort
  [root@k8s-node01 elk]# kubectl apply -f es-nodeport-svc.yaml
  service/es-kibana-nodeport-svc created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get svc -n kube-system|grep es-kibana
  es-kibana                ClusterIP   None             <none>   9200/TCP,9300/TCP               3m39s
  es-kibana-nodeport-svc   NodePort    10.110.159.163   <none>   9200:30519/TCP,5601:32150/TCP   13s
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]#

Access Kibana using a node IP plus the NodePort; in this run the port is 32150.

If the Kibana page loads normally, this part is done.
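
The same check can be scripted from any machine that can reach a node; the node IP below is a placeholder, and the ports are the ones allocated above (30519 for elasticsearch, 32150 for kibana):

  NODE_IP=<node-ip>    # placeholder: use one of your node IPs
  # elasticsearch should answer with its JSON banner
  curl -s http://${NODE_IP}:30519
  # kibana normally answers with a redirect (HTTP 302) to its app path
  curl -s -o /dev/null -w "%{http_code}\n" http://${NODE_IP}:32150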

2. Creating the logstash service

logstash.yml configuration file; the monitoring output to es uses the DNS name:

  [root@k8s-node01 elk]# more logstash.yml
  http.host: "0.0.0.0"
  xpack.monitoring.elasticsearch.hosts: http://es-kibana-0.es-kibana.kube-system:9200

logstash.conf pipeline configuration:

  [root@k8s-node01 elk]# more logstash.conf
  input {
    beats {
      port => 5044
      client_inactivity_timeout => 36000
    }
  }
  filter {
    # Required: otherwise host is a JSON object rather than plain text and cannot be written to elasticsearch
    mutate {
      rename => { "[host][name]" => "host" }
    }
  }
  output {
    elasticsearch {
      hosts => ["http://es-kibana-0.es-kibana.kube-system:9200"]
      index => "k8s-system-log-%{+YYYY.MM.dd}"
    }
    stdout {
      codec => rubydebug
    }
  }
  [root@k8s-node01 elk]#

Create the two ConfigMaps:

  kubectl create configmap logstash-yml-config -n kube-system --from-file=logstash.yml
  kubectl create configmap logstash-config -n kube-system --from-file=logstash.conf

The logstash StatefulSet manifest:

  [root@k8s-node01 elk]# more logstash-statefulset.yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    labels:
      app: logstash
    name: logstash
    namespace: kube-system
  spec:
    serviceName: "logstash"
    replicas: 1
    selector:
      matchLabels:
        app: logstash
    template:
      metadata:
        labels:
          app: logstash
      spec:
        imagePullSecrets:
        - name: registry-pull-secret
        containers:
        - image: logstash:7.17.2
          name: logstash
          resources:
            requests:
              memory: "500Mi"
              cpu: "400m"
            limits:
              memory: "800Mi"
              cpu: "800m"
          volumeMounts:
          - name: logstash-yml-config
            mountPath: /usr/share/logstash/config/logstash.yml
            subPath: logstash.yml
          - name: logstash-config
            mountPath: /usr/share/logstash/pipeline/logstash.conf
            subPath: logstash.conf
          env:
          - name: TZ
            value: Asia/Shanghai
        volumes:
        - name: logstash-yml-config
          configMap:
            name: logstash-yml-config
        - name: logstash-config
          configMap:
            name: logstash-config
        #nodeSelector:
        #  kubernetes.io/hostname: 172.16.30.1
  [root@k8s-node01 elk]#

Create the logstash application:

  [root@k8s-node01 elk]# kubectl create -f logstash-statefulset.yaml
  statefulset.apps/logstash created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get pod -o wide -n kube-system|grep logstash
  logstash-0   1/1   Running   0   24s   10.244.1.23   k8s-node01   <none>   <none>
  [root@k8s-node01 elk]#

Note: logstash's HTTP API listens on a port in the 9600-9700 range by default, usually 9600; check the logstash logs to confirm the actual port.
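
That 9600 port is logstash's monitoring API, so it can be used to confirm the process is up (pod IP 10.244.1.23 taken from the output above); the beats input itself listens on 5044 as defined in logstash.conf:

  # Query the logstash monitoring API on its default port
  curl -s "10.244.1.23:9600/?pretty"
  # The startup log should also show the beats input binding to port 5044 (exact wording may vary by version)
  kubectl logs -n kube-system logstash-0 | grep -i "5044"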

The logstash Service configuration file:

  [root@k8s-node01 elk]# more logstash-none-svc.yaml
  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: logstash
    name: logstash
    namespace: kube-system
  spec:
    ports:
    - name: logstsh
      port: 5044
      protocol: TCP
      targetPort: 9600
    clusterIP: None
    selector:
      app: logstash
    type: ClusterIP
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl create -f logstash-none-svc.yaml
  service/logstash created
  [root@k8s-node01 elk]#
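
Because this is a headless Service, the name logstash-0.logstash.kube-system that filebeat uses later resolves straight to the logstash Pod IP, so filebeat connects to port 5044 on the Pod itself rather than going through the Service's port mapping. A quick resolution check, reusing the busybox approach from earlier:

  kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
    nslookup logstash-0.logstash.kube-system.svc.cluster.local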

3. Creating the filebeat service

filebeat.yml configuration file:

  [root@k8s-node01 elk]# more filebeat.yml
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
    - /messages
    fields:
      app: k8s
      type: module
  filebeat.config.modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
  setup.template.settings:
    index.number_of_shards: 3
  setup.kibana:
  output.logstash:
    hosts: ["logstash-0.logstash.kube-system:5044"]
  processors:
  - add_host_metadata:
  - add_cloud_metadata:
  [root@k8s-node01 elk]#

Notes:

  The log path inside the container is /messages, so the corresponding host path must be mounted into the Pod at startup.

  Kubernetes internal DNS names are used to address the logstash output (and, further upstream, the elasticsearch service).

Create the filebeat ConfigMap:

  kubectl create configmap filebeat-config -n kube-system --from-file=filebeat.yml

The filebeat DaemonSet manifest:

  [root@k8s-node01 elk]# more filebeat-daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    labels:
      app: filebeat
    name: filebeat
    namespace: kube-system
  spec:
    selector:
      matchLabels:
        app: filebeat
    template:
      metadata:
        labels:
          app: filebeat
      spec:
        imagePullSecrets:
        - name: registry-pull-secret
        containers:
        - image: elastic/filebeat:7.17.2
          name: filebeat
          volumeMounts:
          - name: filebeat-config
            mountPath: /etc/filebeat.yml
            subPath: filebeat.yml
          - name: k8s-system-logs
            mountPath: /messages
          args: [
            "-c", "/etc/filebeat.yml",
            "-e",
          ]
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: 500m
              memory: 500Mi
          securityContext:
            runAsUser: 0
          env:
          - name: TZ
            value: "CST-8"
        volumes:
        - name: filebeat-config
          configMap:
            name: filebeat-config
        - name: k8s-system-logs
          hostPath:
            path: /var/log/messages
            type: File
  [root@k8s-node01 elk]#

A DaemonSet is used so that exactly one Pod is scheduled on each node to collect that node's /var/log/messages log.

  [root@k8s-node01 elk]# kubectl apply -f filebeat-daemonset.yaml
  daemonset.apps/filebeat created
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]#
  [root@k8s-node01 elk]# kubectl get pod -o wide -n kube-system|grep filebeat
  filebeat-5h58b   1/1   Running   0   24s   10.244.0.17   k8s-master   <none>   <none>
  filebeat-7x9xm   1/1   Running   0   24s   10.244.1.24   k8s-node01   <none>   <none>
  filebeat-z6lwv   1/1   Running   0   24s   10.244.2.6    k8s-node02   <none>   <none>
  [root@k8s-node01 elk]#
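
Once the filebeat Pods are running, logs should start flowing through logstash into elasticsearch. The quickest end-to-end check is to look for the daily k8s-system-log index (the elasticsearch Pod IP 10.244.1.22 is reused from earlier):

  # A k8s-system-log-YYYY.MM.dd index should appear within a minute or two
  curl -s "10.244.1.22:9200/_cat/indices?v" | grep k8s-system-log
  # Optionally confirm a filebeat Pod can reach its logstash output (pod name taken from the list above)
  kubectl exec -n kube-system filebeat-7x9xm -- filebeat test output -c /etc/filebeat.yml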

View the logs in Kibana after adding the log index:

Go to the Discover page, create an index pattern "k8s-system-log-*", select "@timestamp" as the time field, and the logs will show up.

The ELK logging stack is now up and running!

This article draws on a well-written post: "k8s之使用k8s搭建ELK日志收集系统" (Building an ELK log collection system on k8s) - minseo - cnblogs.


Reposted from: https://blog.csdn.net/Soft_Engneer/article/details/124553616
Copyright belongs to the original author, Soft_Engneer.
