I. Container Networking
1. Docker Native Network Drivers
1. Docker provides the following five native network drivers
| Driver | Description |
|--------|-------------|
| bridge | The default network driver. Used mainly for communication between containers on the same Docker host (new containers attach to bridge by default) |
| host | The container joins the host's network namespace and uses the host's network directly (beware of port conflicts); it sees the same NICs as the physical host |
| none | Containers on the none network cannot communicate with the outside world (only a lo interface) |
| overlay | Built on Linux bridges and VXLAN; enables container communication across hosts |
| macvlan | Used for cross-host communication scenarios |
2. When Docker is installed, it automatically creates the following three networks on the host: bridge, host, and none
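The default networks can be confirmed with `docker network ls`. A minimal sketch, which falls back to printing the expected default names when no Docker daemon is reachable:

```shell
# List the networks Docker creates at install time.
# Falls back to the expected default names when docker is unavailable.
list_default_networks() {
    if docker network ls >/dev/null 2>&1; then
        docker network ls --format '{{.Name}}'
    else
        printf 'bridge\nhost\nnone\n'
    fi
}
list_default_networks
```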
2. The none Network
**1.** The none network's driver type is null, and its IPAM Config field is empty
A container attached to the none network has only a lo interface and cannot communicate with the outside world
```
# docker inspect none
[
    {
        "Name": "none",
        "Id": "8a84fded05e5362b29b80ea97f793528b04c85d78f61de261fa63b34f574d6b6",
        "Created": "2022-08-21T08:05:09.923418335Z",
        "Scope": "local",
        "Driver": "null",        # driver type
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []         # field is empty
        },
...output omitted
```
2. Test
```
# docker run -itd --network none centos
# docker ps        # check that the container is running
CONTAINER ID   IMAGE     COMMAND       CREATED          STATUS          PORTS     NAMES
b355f02ca0d6   centos    "/bin/bash"   19 seconds ago   Up 15 seconds             bold_williamson
# docker exec -it b355 /bin/bash
[root@b355f02ca0d6 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
```
3. The host Network
1. A container attached to the host network shares the host's network namespace,
i.e. the container's network configuration is identical to the host's
```
# docker run -itd --network host --name h1 centos    # run a container on the host network
# docker run -itd --network host --name h2 centos
# docker ps        # check that the containers are running
CONTAINER ID   IMAGE     COMMAND       CREATED         STATUS         PORTS     NAMES
8a1164f599a4   centos    "/bin/bash"   5 minutes ago   Up 5 minutes             h2
158ba0eb2438   centos    "/bin/bash"   5 minutes ago   Up 5 minutes             h1
~# ip a            # view the host's current network
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:e3:d1:38 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.147.102/24 brd 192.168.147.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee3:d138/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:58:c8:a7:47 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
```

```
k8s-master:~# docker exec -it h1 /bin/bash    # the interface information matches the host's
[root@k8s-master /]# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:e3:d1:38 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.147.102/24 brd 192.168.147.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee3:d138/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:58:c8:a7:47 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
```
Container h2 shows the same interfaces as h1.
4. The bridge Network
**1.** The docker0 network
① When a container is created, it is attached to docker0 by default
② docker0 is a Linux bridge
③ The docker0 network comes with a default Subnet configured at creation
2. View docker0 on the host
```
# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:58:c8:a7:47  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
```
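Since docker0 is an ordinary Linux bridge, it can also be examined with the iproute2 tools. A minimal sketch, assuming `ip` and `bridge` from iproute2 are available (on a host without docker0 it simply reports that):

```shell
# Verify that docker0 is a Linux bridge and list the interfaces
# (container veth ends) currently attached to it.
show_docker0() {
    if ip link show docker0 >/dev/null 2>&1; then
        ip -o link show type bridge                            # docker0 should be listed here
        bridge link show 2>/dev/null | grep docker0 || true    # attached veths, if any
    else
        echo "docker0 not present on this host"
    fi
}
show_docker0
```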
3. View the docker0 network configuration
```
# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "7a84c0c796f04413629020321d2adc0d35fbfdef419d5b9cee78998b0c494274",   # the bridge ID matches the containers' NetworkID
        "Created": "2022-10-26T00:43:27.914929427Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
...output omitted
```
4. Run an httpd container named httpd1 in the background
# docker run -itd --name httpd1 httpd
5. View the container's network configuration
Confirm that "NetworkID" matches docker0's ID and that "IPAddress" is in the same subnet
```
# docker inspect httpd1
...output omitted
        "Networks": {
            "bridge": {
                "IPAMConfig": null,
                "Links": null,
                "Aliases": null,
                "NetworkID": "7a84c0c796f04413629020321d2adc0d35fbfdef419d5b9cee78998b0c494274",   # matches the bridge ID
                "EndpointID": "94c254fa5a2cb2da2c475ee4a24f5f00a8a6975d7c8cdb338e0fc0226389a7b9",
                "Gateway": "172.17.0.1",
                "IPAddress": "172.17.0.2",   # same subnet as the bridge
                "IPPrefixLen": 16,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:ac:11:00:02",
                "DriverOpts": null
...output omitted
```
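The ID comparison above can also be scripted with Docker's `--format` Go templates instead of reading the JSON by eye. A sketch, assuming a container named httpd1 is running on the default bridge (it skips gracefully otherwise):

```shell
# Compare the bridge network's Id with the NetworkID recorded in httpd1.
compare_ids() {
    if docker inspect httpd1 >/dev/null 2>&1; then
        net_id=$(docker network inspect bridge --format '{{.Id}}')
        ctr_id=$(docker inspect httpd1 --format '{{.NetworkSettings.Networks.bridge.NetworkID}}')
        if [ "$net_id" = "$ctr_id" ]; then
            echo "IDs match"
        else
            echo "IDs differ"
        fi
    else
        echo "httpd1 not running; skipping"
    fi
}
compare_ids
```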
5. User-defined Bridge Networks
1. Users can create bridge networks on demand; these are called user-defined bridges.
Multiple bridges can be created as the situation requires
2. Create a user-defined bridge named net1
# docker network create --driver bridge net1    # create net1 with the bridge driver
324c90cd97719e363e2e2c2ce0508f8a2d964bc41898ebcaf8bc827db3627fa8
3. View the net1 bridge; a subnet and gateway have been assigned automatically
```
root@k8s-master:~# docker network inspect net1
...output omitted
        "Name": "net1",
        "Id": "324c90cd97719e363e2e2c2ce0508f8a2d964bc41898ebcaf8bc827db3627fa8",
        "Created": "2022-10-26T08:10:28.784844522Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
...output omitted
```
4. Create a second bridge named net2, specifying its subnet and gateway
# docker network create --driver bridge --subnet 172.10.10.0/24 --gateway 172.10.10.1 net2
6eecbf049df44ecbd7aad0978115aac0cad4942831fa76af2e34f99f8d09fcd5
5. Start three centos containers named c1, c2, and c3:
c1 joins net1; c2 joins net2; c3 joins net2 with a static IP
```
# docker run -itd --name c1 --network net1 centos
# docker run -itd --name c2 --network net2 centos
# docker run -itd --name c3 --network net2 --ip 172.10.10.10 centos
```
6. View the three containers' IP addresses
# docker inspect c1 c2 c3 | grep -A 12 Networks | grep "IPAddress"    # lines correspond to c1, c2, c3 in order
"IPAddress": "172.18.0.2",
"IPAddress": "172.10.10.2",
"IPAddress": "172.10.10.10",
7. Test connectivity from container c2
Conclusion: c2 and c3 can communicate (they are on the same bridge), but c1 cannot be reached
```
~# docker exec -it c2 ping -c 3 172.10.10.10    # c2 can reach c3
PING 172.10.10.10 (172.10.10.10) 56(84) bytes of data.
64 bytes from 172.10.10.10: icmp_seq=1 ttl=64 time=0.151 ms
64 bytes from 172.10.10.10: icmp_seq=2 ttl=64 time=0.050 ms
64 bytes from 172.10.10.10: icmp_seq=3 ttl=64 time=0.071 ms

--- 172.10.10.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2033ms
rtt min/avg/max/mdev = 0.050/0.090/0.151/0.044 ms
# docker exec -it c2 ping -c 3 172.18.0.2       # c2 cannot reach c1
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data.

--- 172.18.0.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2035ms
```
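The cross-bridge failure is not routing but deliberate isolation: Docker installs iptables rules that drop traffic between different bridge networks. A hedged sketch for inspecting them; the chain name DOCKER-ISOLATION-STAGE-2 is the one used by recent Docker releases, and listing it normally requires root:

```shell
# Show Docker's bridge-isolation rules, if visible from this shell.
show_isolation() {
    if iptables -nL DOCKER-ISOLATION-STAGE-2 >/dev/null 2>&1; then
        iptables -nL DOCKER-ISOLATION-STAGE-2
    else
        echo "isolation chain not visible (needs root and a running Docker daemon)"
    fi
}
show_isolation
```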
8. Add a second network interface to c1 by connecting it to net2
~# docker network connect net2 c1
List the networks the container is attached to:
docker inspect c1 | grep -wA `docker inspect c1|wc -l` "Networks" | sed -n '/".*{$/ s/": {//p'|sed -n '2,3s/.*"//p'
net1
net2
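A simpler alternative to the sed pipeline above is a Go template that ranges over `.NetworkSettings.Networks`. A sketch, assuming container c1 exists:

```shell
# List the networks a container is attached to, one per line.
networks_of() {
    if docker inspect "$1" >/dev/null 2>&1; then
        docker inspect "$1" --format '{{range $name, $v := .NetworkSettings.Networks}}{{$name}}{{"\n"}}{{end}}'
    else
        echo "container $1 not found"
    fi
}
networks_of c1
```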
9. From c1, verify connectivity to the net2 gateway
# docker exec -it c1 ping -c 3 172.10.10.1
PING 172.10.10.1 (172.10.10.1) 56(84) bytes of data.
64 bytes from 172.10.10.1: icmp_seq=1 ttl=64 time=0.170 ms
64 bytes from 172.10.10.1: icmp_seq=2 ttl=64 time=0.073 ms
64 bytes from 172.10.10.1: icmp_seq=3 ttl=64 time=0.042 ms
--- 172.10.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.042/0.095/0.170/0.054 ms
10. Remove the containers and networks
Mind the stop → remove order; otherwise the networks cannot be removed.
# docker stop c1 c2 c3 h1 h2    # stop the containers
# docker rm c1 c2 c3 h1 h2      # remove the containers
# docker network rm net1 net2   # remove the bridges
Copyright belongs to the original author *_花非人陌_*; in case of infringement, please contact us for removal.