Cloud-Native Security Solution NeuVector 5.X: Deployment in Practice

NeuVector 5.X can be deployed on Docker, Rancher, OpenShift, and Kubernetes clusters on the major public clouds, and supports installation via YAML manifests, an Operator, or Helm. This article walks through deploying NeuVector 5.X on a local Kubernetes cluster.

1. Deployment Options Overview

YAML-based deployment

Kubernetes is used to deploy separate manager, controller, and enforcer containers and to ensure that an enforcer is deployed on every new node. NeuVector requires and supports a Kubernetes network plugin such as flannel, weave, or calico.

The sample file deploys one manager and three controllers. It deploys an enforcer on every node as a DaemonSet. By default, the samples below also deploy to the master node.

See the section at the bottom for how to use node labels to pin the manager or controllers to dedicated nodes.

Note:

Because of potential session-state issues, scaling out multiple managers behind a load balancer is not recommended. If you plan to use a PersistentVolume claim to store backups of the NeuVector configuration files, see the Backup/Persistent Data section in the Deploy NeuVector overview.

If your deployment needs a load balancer, change the console service type in the yaml file below from NodePort to LoadBalancer.

NeuVector images on Docker Hub

The images are hosted in the NeuVector Docker Hub registry. Use the appropriate version tag for the manager, controller, and enforcer, and keep the scanner and updater at the "latest" tag. For example:

  • neuvector/manager:5.4.0
  • neuvector/controller:5.4.0
  • neuvector/enforcer:5.4.0
  • neuvector/scanner:latest
  • neuvector/updater:latest

Make sure to update the image references in the corresponding YAML files.
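A quick sanity check for this step: list the unique image references in a manifest to confirm they carry the intended tags. The snippet below is a self-contained sketch that inlines a few sample lines; in practice, point grep at your real NeuVector yaml file instead.

```shell
# List unique image references in a manifest (sample lines inlined here;
# replace /tmp/nv-images.yaml with your actual NeuVector manifest).
cat > /tmp/nv-images.yaml <<'EOF'
          image: neuvector/manager:5.4.0
          image: neuvector/controller:5.4.0
          image: neuvector/scanner:latest
EOF
grep -o 'neuvector/[a-z-]*:[A-Za-z0-9.]*' /tmp/nv-images.yaml | sort -u
```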

You can download the NeuVector images from Docker Hub and push them to a private registry, or pull them directly from the docker.io registry:

[root@harbor neuvector]# cat images.txt
neuvector/manager:5.4.0
neuvector/controller:5.4.0
neuvector/enforcer:5.4.0
neuvector/scanner:latest
neuvector/updater:latest
[root@harbor neuvector]# cat pull_push.sh
while IFS= read -r image; do
    docker pull "$image"
    new_tag="harbor.jdzx.com/${image}"   # replace with your private registry address
    docker tag "$image" "$new_tag"
    echo "$new_tag"
    docker push "$new_tag"
done < images.txt
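Before pushing for real, the loop in pull_push.sh can be dry-run to preview the source-to-target tag mapping (pure shell, no docker required; harbor.jdzx.com is the example registry from the script above and should be replaced with yours):

```shell
# Preview the retag mapping produced by pull_push.sh without calling docker.
cat > /tmp/images.txt <<'EOF'
neuvector/manager:5.4.0
neuvector/controller:5.4.0
neuvector/enforcer:5.4.0
neuvector/scanner:latest
neuvector/updater:latest
EOF
while IFS= read -r image; do
    echo "$image -> harbor.jdzx.com/${image}"
done < /tmp/images.txt
```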

Helm-based deployment

NeuVector supports Helm-based deployment; the Helm chart is at https://github.com/neuvector/neuvector-helm. With a current NeuVector Helm chart (v1.8.9 or later), make the following changes to the values.yml file:

  • Update the registry to docker.io
  • Update the image names/tags to the appropriate Docker Hub versions shown above
  • Leave imagePullSecrets empty
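Taken together, the three changes above correspond to a values.yml fragment roughly like the following (a sketch only; the exact key names should be checked against your neuvector-helm chart version):

```yaml
registry: docker.io          # pull from Docker Hub
tag: 5.4.0                   # manager/controller/enforcer version tag
controller:
  image:
    repository: neuvector/controller
manager:
  image:
    repository: neuvector/manager
enforcer:
  image:
    repository: neuvector/enforcer
imagePullSecrets:            # leave empty
```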

Rancher-based deployment

The NeuVector images are also mirrored to the Rancher registry for deployment from Rancher; see the Rancher deployment section for more information. Allow a few days after each release for the images to be mirrored into the Rancher registry.

Note:

When the NeuVector chart is deployed from Rancher Manager 2.6.5 or later, the images are pulled from the rancher-mirrored repository and deployed into the cattle-neuvector-system namespace.

2. Deploying NeuVector on Kubernetes

This section records a YAML-based NeuVector deployment on Kubernetes v1.27.6.

Deploy NeuVector

  1. Create the NeuVector namespace and the required service accounts:
kubectl create namespace neuvector
kubectl create sa controller -n neuvector
kubectl create sa enforcer -n neuvector
kubectl create sa basic -n neuvector
kubectl create sa updater -n neuvector
kubectl create sa scanner -n neuvector
kubectl create sa registry-adapter -n neuvector
kubectl create sa cert-upgrader -n neuvector
  2. (Optional) Create NeuVector Pod Security Admission (PSA) or Pod Security Policy (PSP) settings.

If you have Pod Security Admission (PSA, i.e. Pod Security Standards) enabled on Kubernetes 1.25+, or Pod Security Policies (PSP) enabled on a pre-1.25 Kubernetes cluster, add the following for NeuVector (e.g. as nv_psp.yaml).

Note 1: PSP was deprecated in Kubernetes 1.21 and removed entirely in 1.25.

Note 2: The Manager and Scanner pods run without a user ID. If your PSP rules include Run As User: Rule: MustRunAsNonRoot, add the following to the sample yaml below (replacing ### with a suitable value):

securityContext:
    runAsUser: ###
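The nv_psp.yaml mentioned above is not reproduced in this article. For pre-1.25 clusters, a permissive policy along these lines illustrates the general shape (a sketch only; nv-psp is a hypothetical name, and the rules should be tightened to your security baseline):

```yaml
# Sketch of a permissive PSP for the NeuVector namespace (hypothetical
# name nv-psp; the policy/v1beta1 API only exists on Kubernetes < 1.25).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: nv-psp
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostPID: true
  runAsUser:
    rule: RunAsAny        # Manager/Scanner pods run without a user ID
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```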

For PSA on Kubernetes 1.25+, label the NeuVector namespace with the privileged profile when deploying on a PSA-enabled cluster:

kubectl label namespace neuvector "pod-security.kubernetes.io/enforce=privileged"
  3. Create the custom resource definitions (CRDs) for NeuVector security rules. For Kubernetes 1.19+:
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/waf-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/dlp-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/com-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/vul-crd-k8s-1.19.yaml
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/admission-crd-k8s-1.19.yaml
  4. Add read permission to access the Kubernetes API.

Important

Standard NeuVector 5.2+ deployments use least-privileged service accounts rather than the default service account. If you are upgrading from a version earlier than 5.3, see below.

Note

If you are upgrading to 5.3.0 or later, run the following commands according to your current version:

# For version 5.2.0
kubectl delete clusterrole neuvector-binding-nvsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules neuvector-binding-nvwafsecurityrules

# For versions before 5.2.0
kubectl delete clusterrolebinding neuvector-binding-app neuvector-binding-rbac neuvector-binding-admission neuvector-binding-customresourcedefinition neuvector-binding-nvsecurityrules neuvector-binding-view neuvector-binding-nvwafsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules
kubectl delete rolebinding neuvector-admin -n neuvector

Apply the read permissions with the following "create clusterrole" commands:

kubectl create clusterrole neuvector-binding-app --verb=get,list,watch,update --resource=nodes,pods,services,namespaces
kubectl create clusterrole neuvector-binding-rbac --verb=get,list,watch --resource=rolebindings.rbac.authorization.k8s.io,roles.rbac.authorization.k8s.io,clusterrolebindings.rbac.authorization.k8s.io,clusterroles.rbac.authorization.k8s.io
kubectl create clusterrolebinding neuvector-binding-app --clusterrole=neuvector-binding-app --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-rbac --clusterrole=neuvector-binding-rbac --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-admission --verb=get,list,watch,create,update,delete --resource=validatingwebhookconfigurations,mutatingwebhookconfigurations
kubectl create clusterrolebinding neuvector-binding-admission --clusterrole=neuvector-binding-admission --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-customresourcedefinition --verb=watch,create,get,update --resource=customresourcedefinitions
kubectl create clusterrolebinding neuvector-binding-customresourcedefinition --clusterrole=neuvector-binding-customresourcedefinition --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-nvsecurityrules --verb=get,list,delete --resource=nvsecurityrules,nvclustersecurityrules
kubectl create clusterrole neuvector-binding-nvadmissioncontrolsecurityrules --verb=get,list,delete --resource=nvadmissioncontrolsecurityrules
kubectl create clusterrole neuvector-binding-nvdlpsecurityrules --verb=get,list,delete --resource=nvdlpsecurityrules
kubectl create clusterrole neuvector-binding-nvwafsecurityrules --verb=get,list,delete --resource=nvwafsecurityrules
kubectl create clusterrolebinding neuvector-binding-nvsecurityrules --clusterrole=neuvector-binding-nvsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-view --clusterrole=view --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvwafsecurityrules --clusterrole=neuvector-binding-nvwafsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvadmissioncontrolsecurityrules --clusterrole=neuvector-binding-nvadmissioncontrolsecurityrules --serviceaccount=neuvector:controller
kubectl create clusterrolebinding neuvector-binding-nvdlpsecurityrules --clusterrole=neuvector-binding-nvdlpsecurityrules --serviceaccount=neuvector:controller
kubectl create role neuvector-binding-scanner --verb=get,patch,update,watch --resource=deployments -n neuvector
kubectl create rolebinding neuvector-binding-scanner --role=neuvector-binding-scanner --serviceaccount=neuvector:updater --serviceaccount=neuvector:controller -n neuvector
kubectl create role neuvector-binding-secret --verb=get,list,watch --resource=secrets -n neuvector
kubectl create rolebinding neuvector-binding-secret --role=neuvector-binding-secret --serviceaccount=neuvector:controller --serviceaccount=neuvector:enforcer --serviceaccount=neuvector:scanner --serviceaccount=neuvector:registry-adapter -n neuvector
kubectl create clusterrole neuvector-binding-nvcomplianceprofiles --verb=get,list,delete --resource=nvcomplianceprofiles
kubectl create clusterrolebinding neuvector-binding-nvcomplianceprofiles --clusterrole=neuvector-binding-nvcomplianceprofiles --serviceaccount=neuvector:controller
kubectl create clusterrole neuvector-binding-nvvulnerabilityprofiles --verb=get,list,delete --resource=nvvulnerabilityprofiles
kubectl create clusterrolebinding neuvector-binding-nvvulnerabilityprofiles --clusterrole=neuvector-binding-nvvulnerabilityprofiles --serviceaccount=neuvector:controller

kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/neuvector-roles-k8s.yaml

kubectl create role neuvector-binding-lease --verb=create,get,update --resource=leases -n neuvector
kubectl create rolebinding neuvector-binding-cert-upgrader --role=neuvector-binding-cert-upgrader --serviceaccount=neuvector:cert-upgrader -n neuvector
kubectl create rolebinding neuvector-binding-job-creation --role=neuvector-binding-job-creation --serviceaccount=neuvector:controller -n neuvector
kubectl create rolebinding neuvector-binding-lease --role=neuvector-binding-lease --serviceaccount=neuvector:controller --serviceaccount=neuvector:cert-upgrader -n neuvector
  5. Run the following command to check that the neuvector/controller and neuvector/updater service accounts were added successfully.
kubectl get ClusterRoleBinding neuvector-binding-app neuvector-binding-rbac neuvector-binding-admission neuvector-binding-customresourcedefinition neuvector-binding-nvsecurityrules neuvector-binding-view neuvector-binding-nvwafsecurityrules neuvector-binding-nvadmissioncontrolsecurityrules neuvector-binding-nvdlpsecurityrules -o wide

Sample output:

NAME                                                ROLE                                                            AGE   USERS   GROUPS   SERVICEACCOUNTS
neuvector-binding-app                               ClusterRole/neuvector-binding-app                               56d                    neuvector/controller
neuvector-binding-rbac                              ClusterRole/neuvector-binding-rbac                              34d                    neuvector/controller
neuvector-binding-admission                         ClusterRole/neuvector-binding-admission                         72d                    neuvector/controller
neuvector-binding-customresourcedefinition          ClusterRole/neuvector-binding-customresourcedefinition          72d                    neuvector/controller
neuvector-binding-nvsecurityrules                   ClusterRole/neuvector-binding-nvsecurityrules                   72d                    neuvector/controller
neuvector-binding-view                              ClusterRole/view                                                72d                    neuvector/controller
neuvector-binding-nvwafsecurityrules                ClusterRole/neuvector-binding-nvwafsecurityrules                72d                    neuvector/controller
neuvector-binding-nvadmissioncontrolsecurityrules   ClusterRole/neuvector-binding-nvadmissioncontrolsecurityrules   72d                    neuvector/controller
neuvector-binding-nvdlpsecurityrules                ClusterRole/neuvector-binding-nvdlpsecurityrules                72d                    neuvector/controller

Also run the following command:

kubectl get RoleBinding neuvector-binding-scanner neuvector-binding-cert-upgrader neuvector-binding-job-creation neuvector-binding-lease neuvector-binding-secret -n neuvector -o wide

Sample output:

NAME                              ROLE                                   AGE    USERS   GROUPS   SERVICEACCOUNTS
neuvector-binding-scanner         Role/neuvector-binding-scanner         8m8s                    neuvector/controller, neuvector/updater
neuvector-binding-cert-upgrader   Role/neuvector-binding-cert-upgrader   8m8s                    neuvector/cert-upgrader
neuvector-binding-job-creation    Role/neuvector-binding-job-creation    8m8s                    neuvector/controller
neuvector-binding-lease           Role/neuvector-binding-lease           8m8s                    neuvector/controller, neuvector/cert-upgrader
neuvector-binding-secret          Role/neuvector-binding-secret          8m8s                    neuvector/controller, neuvector/enforcer, neuvector/scanner, neuvector/registry-adapter
  6. (Optional) Create the federation master and/or remote multi-cluster management services. If you plan to use NeuVector's multi-cluster management features, one cluster must deploy the federation master service and every remote cluster must run the federation worker service. For flexibility, you may choose to deploy both the master and worker services on every cluster, so that any cluster can later become the master or a remote.

Federated cluster management services YAML:

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-master
  namespace: neuvector
spec:
  ports:
    - port: 11443
      name: fed
      protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod

---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-controller-fed-worker
  namespace: neuvector
spec:
  ports:
    - port: 10443
      name: fed
      protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod

Create the services:

kubectl create -f nv_master_worker.yaml
  7. Create the main NeuVector services and pods with the preset-version command, or modify the sample yaml file below. The preset version provisions a LoadBalancer for the NeuVector console. If you use the sample yaml file below, replace the image names and <version> tags in the manager, controller, and enforcer image references. Also make any other modifications required by your deployment environment (e.g. LoadBalancer/NodePort/Ingress for manager access, etc.).
kubectl apply -f https://raw.githubusercontent.com/neuvector/manifests/main/kubernetes/5.4.0/neuvector-k8s.yaml

Or modify the sample yaml file as needed:

# Change the webui service type to NodePort...
---

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-webui
  namespace: neuvector
spec:
  ports:
    - port: 8443
      name: manager
      protocol: TCP
#  type: LoadBalancer
  type: NodePort
  selector:
    app: neuvector-manager-pod

---
...

# Apply the YAML file
kubectl create -f neuvector.yaml

Confirm the related pods are running; sample output:

# pods
[root@k8s-master1 neuvector]# kubectl get pod -n neuvector
NAME                                       READY   STATUS      RESTARTS   AGE
neuvector-controller-pod-9558f8954-28478   1/1     Running     0          10h
neuvector-controller-pod-9558f8954-28jnm   1/1     Running     0          10h
neuvector-controller-pod-9558f8954-lw8cm   1/1     Running     0          10h
neuvector-enforcer-pod-jrfwf               1/1     Running     0          10h
neuvector-enforcer-pod-mvhp9               1/1     Running     0          10h
neuvector-enforcer-pod-rhp7t               1/1     Running     0          10h
neuvector-enforcer-pod-wcnnn               1/1     Running     0          10h
neuvector-manager-pod-7c44bfbd4c-w4pnl     1/1     Running     0          9h
neuvector-scanner-pod-7cfc95b64f-74dr6     1/1     Running     0          72m
neuvector-scanner-pod-7cfc95b64f-76psj     1/1     Running     0          73m
neuvector-updater-pod-28851840-kmk66       0/1     Completed   0          73m

# services
[root@k8s-master1 neuvector]# kubectl get svc -n neuvector
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
neuvector-service-webui           NodePort    10.97.188.186   <none>        8443:31585/TCP                  10h
neuvector-svc-admission-webhook   ClusterIP   10.109.45.10    <none>        443/TCP                         10h
neuvector-svc-controller          ClusterIP   None            <none>        18300/TCP,18301/TCP,18301/UDP   10h
neuvector-svc-crd-webhook         ClusterIP   10.106.99.188   <none>        443/TCP                         10h

Access the NeuVector console at https://<public-ip>:8443. The default username and password are admin/admin.

Note

The NodePort service specified in the neuvector.yaml file opens a random port on all Kubernetes nodes for the NeuVector management web console. Alternatively, you can use a LoadBalancer or Ingress with a public IP and the default port 8443. For NodePort, make sure access to that port is opened through the firewall if needed.

To see which port was opened on the host nodes, run:

kubectl get svc -n neuvector

The output will be similar to:

NAME                          CLUSTER-IP      EXTERNAL-IP   PORT(S)                                          AGE
neuvector-service-webui     10.100.195.99     <nodes>       8443:31585/TCP

For a NodePort-type service, access the console at https://<public-ip>:<nodeport>:

(Screenshot: NeuVector dashboard)
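The node port (31585 in the sample output above) can also be pulled out of the kubectl output with a little awk. The snippet below inlines the sample line so the parsing runs stand-alone; in practice, pipe `kubectl get svc -n neuvector` through the same awk:

```shell
# Extract the NodePort from a `kubectl get svc` output line
# (field 4 is PORT(S), e.g. "8443:31585/TCP").
line='neuvector-service-webui     10.100.195.99     <nodes>       8443:31585/TCP'
printf '%s\n' "$line" | awk '{split($4, p, "[:/]"); print p[2]}'
```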

PKS changes (optional)

Note

PKS [VMware Enterprise PKS (Pivotal Container Service)] has been field-tested; it requires enabling privileged containers in the plan/tile and changing the hostPath in the Allinone, Controller, and Enforcer yaml files as follows:

hostPath:
  path: /var/vcap/sys/run/docker/docker.sock

Master node taints and tolerations (optional)

To schedule the Enforcer onto a node, all of that node's taints must be matched. Check the taints on a node (e.g. a master node) with:

kubectl get node taintnodename -o yaml

Sample output:

spec:
  taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
    # there may be additional taints, for example:
    - effect: NoSchedule
      key: mykey
      value: myvalue

If there are additional taints like the ones above, add them to the tolerations section of the yaml file:

spec:
  template:
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
        # If there are additional taints as above, add them here. All taints
        # defined on a tainted node must be matched, otherwise the Enforcer
        # cannot be deployed onto that node.
        - effect: NoSchedule
          key: mykey
          value: myvalue

Using node labels to pin the Manager and Controller to specific nodes (optional)

To control which nodes the Manager and Controller are deployed on, add a label to each such node. Replace nodename with the appropriate node name (see kubectl get nodes).

Note: by default, Kubernetes does not schedule pods on the master node.

kubectl label nodes nodename nvcontroller=true

Then add a nodeSelector to the Manager and Controller sections of the deployment yaml file, for example:

          - mountPath: /host/cgroup
            name: cgroup-vol
            readOnly: true
      nodeSelector:
        nvcontroller: "true"
      restartPolicy: Always

If the Controller nodes are dedicated management nodes (with no application containers to monitor) and you want to keep the Enforcer off of them, add a nodeAffinity to the Enforcer section of the yaml file. For example:

      app: neuvector-enforcer-pod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nvcontroller
                    operator: NotIn
                    values: ["true"]
      imagePullSecrets:

Rolling updates (optional)

Orchestration tools such as Kubernetes, Red Hat OpenShift, and Rancher support rolling updates with configurable policies, and this feature can be used to update the NeuVector containers. Most importantly, make sure at least one Controller (or Allinone) is running at all times so that policy, log, and connection data are not lost. Allow at least 120 seconds between container updates, so that a new leader can be elected and data can synchronize between controllers.
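In Deployment terms, that 120-second spacing can be expressed with a rolling-update strategy fragment like the following (a sketch; the sample manifest later in this article uses minReadySeconds: 60, so adjust to your tolerance):

```yaml
spec:
  minReadySeconds: 120        # wait after each new controller becomes ready
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # replace one controller at a time
      maxUnavailable: 0       # keep the full replica count serving
```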

The sample deployment yaml files provided already configure a rolling-update policy. If you update through the NeuVector Helm chart, pull the latest chart so that new features (such as admission control) are configured correctly, and delete the old NeuVector cluster roles and cluster role bindings. If you update through Kubernetes directly, the sample commands below update to a new version manually.

Kubernetes rolling-update examples

For upgrades that only need to move to a new image version, you can use the following simple approach.

If your Deployment or DaemonSet is already running, change the yaml file to the new version, then apply the update:

kubectl apply -f <yaml file>

To update to a new NeuVector version from the command line:

  1. For the Controller run as a Deployment (the Manager is handled similarly):
kubectl set image deployment/neuvector-controller-pod neuvector-controller-pod=neuvector/controller:<version> -n neuvector
  2. For any container run as a DaemonSet:
kubectl set image -n neuvector ds/neuvector-enforcer-pod neuvector-enforcer-pod=neuvector/enforcer:<version>

Check the status of a rolling update:

kubectl rollout status -n neuvector ds/neuvector-enforcer-pod
kubectl rollout status -n neuvector deployment/neuvector-controller-pod

Roll back an update:

kubectl rollout undo -n neuvector ds/neuvector-enforcer-pod
kubectl rollout undo -n neuvector deployment/neuvector-controller-pod
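The set-image commands above follow a fixed naming pattern, so they can be composed before being executed. The helper below is illustrative only (not part of NeuVector); it builds the command string for review or logging without running it:

```shell
# Compose the `kubectl set image` command for a NeuVector component
# (kind: deployment|ds) without executing it.
nv_set_image_cmd() {
    kind="$1"; component="$2"; version="$3"
    echo "kubectl set image -n neuvector ${kind}/neuvector-${component}-pod neuvector-${component}-pod=neuvector/${component}:${version}"
}
nv_set_image_cmd deployment controller 5.4.0
nv_set_image_cmd ds enforcer 5.4.0
```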

Exposing the REST API in Kubernetes (optional)

To access the REST API from outside the Kubernetes cluster, here is a sample yaml file:

apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-rest
  namespace: neuvector
spec:
  ports:
    - port: 10443
      name: controller
      protocol: TCP
  type: LoadBalancer
  selector:
    app: neuvector-controller-pod

For more information on the REST API, see the Automation section.

Deploying in non-privileged mode (optional)

The following instructions can be used to deploy NeuVector without privileged-mode containers. The controller already runs unprivileged; only the enforcer deployment requires changes, shown in the snippets below.

Enforcer:

spec:
  template:
    metadata:
      annotations:
        container.apparmor.security.beta.kubernetes.io/neuvector-enforcer-pod: unconfined
        # this line is required to be added if k8s version is pre-v1.19
        # container.seccomp.security.alpha.kubernetes.io/neuvector-enforcer-pod: unconfined
    spec:
      containers:
          securityContext:
            # the following two lines are required for k8s v1.19+. pls comment out both lines if version is pre-1.19. Otherwise, a validating data error message will show
            seccompProfile:
              type: Unconfined
            capabilities:
              add:
              - SYS_ADMIN
              - NET_ADMIN
              - SYS_PTRACE
              - IPC_LOCK

The following is a complete deployment reference (Kubernetes 1.19+).

apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-crd-webhook
  namespace: neuvector
spec:
  ports:
    - port: 443
      targetPort: 30443
      protocol: TCP
      name: crd-webhook
  type: ClusterIP
  selector:
    app: neuvector-controller-pod

---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-admission-webhook
  namespace: neuvector
spec:
  ports:
    - port: 443
      targetPort: 20443
      protocol: TCP
      name: admission-webhook
  type: ClusterIP
  selector:
    app: neuvector-controller-pod

---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-service-webui
  namespace: neuvector
spec:
  ports:
    - port: 8443
      name: manager
      protocol: TCP
  type: LoadBalancer
  # type: NodePort
  selector:
    app: neuvector-manager-pod

---
apiVersion: v1
kind: Service
metadata:
  name: neuvector-svc-controller
  namespace: neuvector
spec:
  ports:
    - port: 18300
      protocol: "TCP"
      name: "cluster-tcp-18300"
    - port: 18301
      protocol: "TCP"
      name: "cluster-tcp-18301"
    - port: 18301
      protocol: "UDP"
      name: "cluster-udp-18301"
  clusterIP: None
  selector:
    app: neuvector-controller-pod

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-manager-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-manager-pod
  replicas: 1
  template:
    metadata:
      labels:
        app: neuvector-manager-pod
    spec:
      serviceAccountName: basic
      serviceAccount: basic
      containers:
        - name: neuvector-manager-pod
          image: neuvector/manager:5.4.0
          env:
            - name: CTRL_SERVER_IP
              value: neuvector-svc-controller.neuvector
      restartPolicy: Always

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-controller-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-controller-pod
  minReadySeconds: 60
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 3
  template:
    metadata:
      labels:
        app: neuvector-controller-pod
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - neuvector-controller-pod
                topologyKey: "kubernetes.io/hostname"
      serviceAccountName: controller
      serviceAccount: controller
      containers:
        - name: neuvector-controller-pod
          image: neuvector/controller:5.4.0
          securityContext:
            runAsUser: 0
          readinessProbe:
            exec:
              command:
                - cat
                - /tmp/ready
            initialDelaySeconds: 5
            periodSeconds: 5
          env:
            - name: CLUSTER_JOIN_ADDR
              value: neuvector-svc-controller.neuvector
            - name: CLUSTER_ADVERTISED_ADDR
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CLUSTER_BIND_ADDR
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # - name: CTRL_PERSIST_CONFIG
            #   value: "1"
          volumeMounts:
            # - mountPath: /var/neuvector
            #   name: nv-share
            #   readOnly: false
            - mountPath: /etc/config
              name: config-volume
              readOnly: true
      terminationGracePeriodSeconds: 300
      restartPolicy: Always
      volumes:
        # - name: nv-share
        #   persistentVolumeClaim:
        #     claimName: neuvector-data
        - name: config-volume
          projected:
            sources:
              - configMap:
                  name: neuvector-init
                  optional: true
              - secret:
                  name: neuvector-init
                  optional: true
              - secret:
                  name: neuvector-secret
                  optional: true

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: neuvector-enforcer-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-enforcer-pod
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: neuvector-enforcer-pod
      annotations:
        container.apparmor.security.beta.kubernetes.io/neuvector-enforcer-pod: unconfined
        # Add the following for pre-v1.19
        # container.seccomp.security.alpha.kubernetes.io/neuvector-enforcer-pod: unconfined
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          key: node-role.kubernetes.io/control-plane
      hostPID: true
      serviceAccountName: enforcer
      serviceAccount: enforcer
      containers:
        - name: neuvector-enforcer-pod
          image: neuvector/enforcer:5.4.0
          securityContext:
            # the following two lines are required for k8s v1.19+; comment out
            # both lines if the version is pre-1.19, otherwise a validating
            # data error message will show
            seccompProfile:
              type: Unconfined
            capabilities:
              add:
                - SYS_ADMIN
                - NET_ADMIN
                - SYS_PTRACE
                - IPC_LOCK
          env:
            - name: CLUSTER_JOIN_ADDR
              value: neuvector-svc-controller.neuvector
            - name: CLUSTER_ADVERTISED_ADDR
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CLUSTER_BIND_ADDR
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          volumeMounts:
            - mountPath: /lib/modules
              name: modules-vol
              readOnly: true
            # - mountPath: /run/runtime.sock
            #   name: runtime-sock
            #   readOnly: true
            # - mountPath: /host/proc
            #   name: proc-vol
            #   readOnly: true
            # - mountPath: /host/cgroup
            #   name: cgroup-vol
            #   readOnly: true
            - mountPath: /var/nv_debug
              name: nv-debug
              readOnly: false
      terminationGracePeriodSeconds: 1200
      restartPolicy: Always
      volumes:
        - name: modules-vol
          hostPath:
            path: /lib/modules
        # - name: runtime-sock
        #   hostPath:
        #     path: /var/run/docker.sock
        #     path: /var/run/containerd/containerd.sock
        #     path: /run/dockershim.sock
        #     path: /run/k3s/containerd/containerd.sock
        #     path: /var/run/crio/crio.sock
        #     path: /var/vcap/sys/run/docker/docker.sock
        # - name: proc-vol
        #   hostPath:
        #     path: /proc
        # - name: cgroup-vol
        #   hostPath:
        #     path: /sys/fs/cgroup
        - name: nv-debug
          hostPath:
            path: /var/nv_debug

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuvector-scanner-pod
  namespace: neuvector
spec:
  selector:
    matchLabels:
      app: neuvector-scanner-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  replicas: 2
  template:
    metadata:
      labels:
        app: neuvector-scanner-pod
    spec:
      serviceAccountName: scanner
      serviceAccount: scanner
      containers:
        - name: neuvector-scanner-pod
          image: neuvector/scanner:latest
          imagePullPolicy: Always
          env:
            - name: CLUSTER_JOIN_ADDR
              value: neuvector-svc-controller.neuvector
      restartPolicy: Always

---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: neuvector-updater-pod
  namespace: neuvector
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: neuvector-updater-pod
        spec:
          serviceAccountName: updater
          serviceAccount: updater
          containers:
            - name: neuvector-updater-pod
              image: neuvector/updater:latest
              imagePullPolicy: Always
              command:
                - /bin/sh
                - -c
                - TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; /usr/bin/curl -kv -X PATCH -H "Authorization:Bearer $TOKEN" -H "Content-Type:application/strategic-merge-patch+json" -d '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"'`date +%Y-%m-%dT%H:%M:%S%z`'"}}}}}' 'https://kubernetes.default/apis/apps/v1/namespaces/neuvector/deployments/neuvector-scanner-pod'
          restartPolicy: Never

PKS changes

Note

PKS has been field-tested and requires privileged containers to be enabled in the plan/tile, as well as the following hostPath change in the yaml files for the Allinone and Enforcer:

hostPath:
  path: /var/vcap/sys/run/docker/docker.sock

Tags: cloud-native, security

Reprinted from: https://blog.csdn.net/codelearning/article/details/143639776
Copyright belongs to the original author, lldhsds. If there is any infringement, please contact us for removal.
