

Configuring Linux NICs: VLAN, bond, bridge, macvlan, ipvlan, and macvtap modes

Linux NIC modes

Linux NICs can operate in non-VLAN, VLAN, bond, bridge, macvlan, ipvlan, and other modes. The sections below walk through matching configuration examples on both the switch side and the server side.

Prerequisites:

  • A physical switch; an H3C S5130 Layer 3 switch is used in the examples
  • A physical server; Ubuntu 22.04 LTS is used as the operating system

On the switch, create two example VLANs (vlan 10 and vlan 20) along with their VLAN interfaces.

  <H3C>system-view
  [H3C]vlan 10 20
  [H3C]interface Vlan-interface 10
  [H3C-Vlan-interface10]ip address 172.16.10.1 24
  [H3C-Vlan-interface10]undo shutdown
  [H3C-Vlan-interface10]exit
  [H3C]interface Vlan-interface 20
  [H3C-Vlan-interface20]ip address 172.16.20.1 24
  [H3C-Vlan-interface20]undo shutdown
  [H3C-Vlan-interface20]exit
  [H3C]

Non-VLAN mode

In non-VLAN mode the NIC is usually given an IP address directly, and the uplink switch port is configured as an access port. Access ports are typically used to connect bare-metal servers or office terminal devices.

Topology diagram (image omitted).

Switch configuration: set each port to access mode and assign it to the corresponding VLAN.

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type access
  [H3C-GigabitEthernet1/0/1]port access vlan 10
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]interface GigabitEthernet 1/0/2
  [H3C-GigabitEthernet1/0/2]port link-type access
  [H3C-GigabitEthernet1/0/2]port access vlan 20
  [H3C-GigabitEthernet1/0/2]exit
  [H3C]

Server 1 configuration: assign an IP address directly to the NIC.

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
    version: 2
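For a quick non-persistent test, the same settings can also be applied with plain iproute2 commands (a sketch mirroring the server1 addressing; requires root, and the settings are lost on reboot):

```shell
# Assign the address, bring the link up, and add the default route
# (same values as the netplan config for server1; not persistent):
ip addr add 172.16.10.10/24 dev enp1s0
ip link set enp1s0 up
ip route add default via 172.16.10.1
```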

Server 2 configuration: assign an IP address directly to the NIC.

  root@server2:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.20.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.20.1
    version: 2

Apply the network configuration:

  netplan apply

Check the server's interfaces:

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
         valid_lft forever preferred_lft forever
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever

Test connectivity by pinging server2 from server1. The Layer 3 switch provides routing, so the two L2-isolated VLAN subnets can reach each other.

  root@server1:~# ping 172.16.20.10 -c 4
  PING 172.16.20.10 (172.16.20.10) 56(84) bytes of data.
  64 bytes from 172.16.20.10: icmp_seq=1 ttl=64 time=0.033 ms
  64 bytes from 172.16.20.10: icmp_seq=2 ttl=64 time=0.048 ms
  64 bytes from 172.16.20.10: icmp_seq=3 ttl=64 time=0.048 ms
  64 bytes from 172.16.20.10: icmp_seq=4 ttl=64 time=0.047 ms

  --- 172.16.20.10 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3061ms
  rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

VLAN mode

In VLAN mode the uplink switch port must be configured as a trunk port that permits the relevant VLANs.

Topology diagram (image omitted).

Switch configuration: set the port to trunk mode and permit the VLANs.

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type trunk
  [H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]

Server configuration: create VLAN sub-interfaces.

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: true
    vlans:
      vlan10:
        id: 10
        link: enp1s0
        addresses: ["172.16.10.10/24"]
        routes:
          - to: default
            via: 172.16.10.1
            metric: 200
      vlan20:
        id: 20
        link: enp1s0
        addresses: ["172.16.20.10/24"]
        routes:
          - to: default
            via: 172.16.20.1
            metric: 300
    version: 2
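For a quick non-persistent test, equivalent 802.1Q sub-interfaces can also be created directly with iproute2 (a sketch using the interface names and VLAN IDs from this article; requires root):

```shell
# Frames leaving vlan10/vlan20 are tagged with VLAN ID 10/20 on the wire
# (root required, not persisted across reboots):
ip link add link enp1s0 name vlan10 type vlan id 10
ip link add link enp1s0 name vlan20 type vlan id 20
ip addr add 172.16.10.10/24 dev vlan10
ip addr add 172.16.20.10/24 dev vlan20
ip link set vlan10 up
ip link set vlan20 up
```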

Check the interfaces: two VLAN sub-interfaces, vlan10 and vlan20, have been created.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever
  10: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
         valid_lft forever preferred_lft forever
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever
  11: vlan20@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
         valid_lft forever preferred_lft forever
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever

Test connectivity from vlan10 and vlan20 to their gateways:

  root@server1:~# ping 172.16.10.1 -c 4
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=64 time=0.033 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=64 time=0.048 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=64 time=0.048 ms
  64 bytes from 172.16.10.1: icmp_seq=4 ttl=64 time=0.047 ms

  --- 172.16.10.1 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3061ms
  rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms
  root@server1:~#
  root@server1:~# ping 172.16.20.1 -c 4
  PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
  64 bytes from 172.16.20.1: icmp_seq=1 ttl=64 time=0.033 ms
  64 bytes from 172.16.20.1: icmp_seq=2 ttl=64 time=0.048 ms
  64 bytes from 172.16.20.1: icmp_seq=3 ttl=64 time=0.048 ms
  64 bytes from 172.16.20.1: icmp_seq=4 ttl=64 time=0.047 ms

  --- 172.16.20.1 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3061ms
  rtt min/avg/max/mdev = 0.033/0.044/0.048/0.006 ms

Bond mode

In bond mode, the peer switch must be configured with a matching link aggregation group.

Topology diagram (image omitted).

Switch configuration: create a dynamic (LACP) link aggregation group, add ports g1/0/1 and g1/0/3 to it, then set the aggregate interface to trunk mode.

  <H3C>system-view
  [H3C]interface Bridge-Aggregation 1
  [H3C-Bridge-Aggregation1]link-aggregation mode dynamic
  [H3C-Bridge-Aggregation1]quit
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-aggregation group 1
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]interface GigabitEthernet 1/0/3
  [H3C-GigabitEthernet1/0/3]port link-aggregation group 1
  [H3C-GigabitEthernet1/0/3]exit
  [H3C]interface Bridge-Aggregation 1
  [H3C-Bridge-Aggregation1]port link-type trunk
  [H3C-Bridge-Aggregation1]port trunk permit vlan 10 20
  [H3C-Bridge-Aggregation1]exit

Server configuration:

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    version: 2
    ethernets:
      enp1s0:
        dhcp4: no
      enp2s0:
        dhcp4: no
    bonds:
      bond0:
        interfaces:
          - enp1s0
          - enp2s0
        parameters:
          mode: 802.3ad
          lacp-rate: fast
          mii-monitor-interval: 100
          transmit-hash-policy: layer2+3
    vlans:
      vlan10:
        id: 10
        link: bond0
        addresses: ["172.16.10.10/24"]
        routes:
          - to: default
            via: 172.16.10.1
            metric: 200
      vlan20:
        id: 20
        link: bond0
        addresses: ["172.16.20.10/24"]
        routes:
          - to: default
            via: 172.16.20.1
            metric: 300
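A non-persistent sketch of the same LACP bond with iproute2 (requires root; member interfaces must be down before they can be enslaved):

```shell
# Create an 802.3ad bond with the same parameters as the netplan config
# (root required, not persisted across reboots):
ip link add bond0 type bond mode 802.3ad lacp_rate fast miimon 100 xmit_hash_policy layer2+3
ip link set enp1s0 down
ip link set enp2s0 down
ip link set enp1s0 master bond0
ip link set enp2s0 master bond0
ip link set bond0 up
```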

Check the interfaces: a bond0 interface has been created, with the two VLAN sub-interfaces vlan10 and vlan20 on top of it. enp1s0 and enp2s0 both show "master bond0", indicating that they are member interfaces of bond0.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
  3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
  7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
      inet6 fe80::acfd:60ff:fe48:841a/64 scope link
         valid_lft forever preferred_lft forever
  8: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
         valid_lft forever preferred_lft forever
      inet6 fe80::acfd:60ff:fe48:841a/64 scope link
         valid_lft forever preferred_lft forever
  9: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
      inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
         valid_lft forever preferred_lft forever
      inet6 fe80::acfd:60ff:fe48:841a/64 scope link
         valid_lft forever preferred_lft forever

Check the bond status: "Bonding Mode" reads "IEEE 802.3ad Dynamic link aggregation", and each member interface is listed below under its own "Slave Interface" section.

  root@server1:~# cat /proc/net/bonding/bond0
  Ethernet Channel Bonding Driver: v5.15.0-60-generic

  Bonding Mode: IEEE 802.3ad Dynamic link aggregation
  Transmit Hash Policy: layer2+3 (2)
  MII Status: up
  MII Polling Interval (ms): 100
  Up Delay (ms): 0
  Down Delay (ms): 0
  Peer Notification Delay (ms): 0

  802.3ad info
  LACP active: on
  LACP rate: fast
  Min links: 0
  Aggregator selection policy (ad_select): stable
  System priority: 65535
  System MAC address: ae:fd:60:48:84:1a
  Active Aggregator Info:
          Aggregator ID: 1
          Number of ports: 2
          Actor Key: 9
          Partner Key: 1
          Partner Mac Address: fc:60:9b:35:ad:18

  Slave Interface: enp1s0
  MII Status: up
  Speed: 1000 Mbps
  Duplex: full
  Link Failure Count: 2
  Permanent HW addr: 7c:b5:9b:59:0a:71
  Slave queue ID: 0
  Aggregator ID: 1
  Actor Churn State: none
  Partner Churn State: none
  Actor Churned Count: 0
  Partner Churned Count: 0
  details actor lacp pdu:
      system priority: 65535
      system mac address: ae:fd:60:48:84:1a
      port key: 9
      port priority: 255
      port number: 1
      port state: 63
  details partner lacp pdu:
      system priority: 32768
      system mac address: fc:60:9b:35:ad:18
      oper key: 1
      port priority: 32768
      port number: 2
      port state: 61

  Slave Interface: enp2s0
  MII Status: up
  Speed: 1000 Mbps
  Duplex: full
  Link Failure Count: 3
  Permanent HW addr: e4:54:e8:dc:e5:88
  Slave queue ID: 0
  Aggregator ID: 1
  Actor Churn State: none
  Partner Churn State: none
  Actor Churned Count: 0
  Partner Churned Count: 0
  details actor lacp pdu:
      system priority: 65535
      system mac address: ae:fd:60:48:84:1a
      port key: 9
      port priority: 255
      port number: 2
      port state: 63
  details partner lacp pdu:
      system priority: 32768
      system mac address: fc:60:9b:35:ad:18
      oper key: 1
      port priority: 32768
      port number: 1
      port state: 61
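The member list in this kind of output can be summarized with a short awk one-liner over /proc/net/bonding/bond0. To keep the example runnable anywhere, it is shown here against an inline sample of that file; the parsing logic is the point, not the sample data:

```shell
# Print each slave interface together with its MII link state.
# On a real system, replace the printf with: cat /proc/net/bonding/bond0
sample='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: enp1s0
MII Status: up
Slave Interface: enp2s0
MII Status: up'
printf '%s\n' "$sample" |
  awk '/^Slave Interface:/ {iface=$3} /^MII Status:/ && iface {print iface, $3; iface=""}'
# prints:
# enp1s0 up
# enp2s0 up
```

Guarding on `iface` being set ensures that the bond's global "MII Status" line (which precedes any slave section in the real file) is not reported as a member.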

Test connectivity to the switch gateway address:

  root@server1:~# ping 172.16.10.1 -c 4
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.64 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.59 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.95 ms
  64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.93 ms

  --- 172.16.10.1 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3006ms
  rtt min/avg/max/mdev = 1.589/1.776/1.953/0.165 ms
  root@server1:~#

Shut down one member interface and test again; the ping still succeeds.

  root@server1:~# ip link set dev enp2s0 down
  root@server1:~# ip link show enp2s0
  3: enp2s0: <BROADCAST,MULTICAST,SLAVE> mtu 1500 qdisc fq_codel master bond0 state DOWN mode DEFAULT group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
  root@server1:~#
  root@server1:~# ping 172.16.10.1 -c 4
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.54 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.64 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.73 ms
  64 bytes from 172.16.10.1: icmp_seq=4 ttl=255 time=1.47 ms

  --- 172.16.10.1 ping statistics ---
  4 packets transmitted, 4 received, 0% packet loss, time 3006ms
  rtt min/avg/max/mdev = 1.470/1.844/2.732/0.516 ms

Bridge mode

In bridge mode, the peer switch port can be configured in either access or trunk mode.

Topology diagram (image omitted).

Switch configuration (access mode is used as the example): set the port to access mode and assign it to the corresponding VLAN.

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type access
  [H3C-GigabitEthernet1/0/1]port access vlan 10
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]

Server configuration: the physical NIC joins the bridge, and the IP address is configured on the bridge interface br0.

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    version: 2
    ethernets:
      enp1s0:
        dhcp4: no
        dhcp6: no
    bridges:
      br0:
        interfaces: [enp1s0]
        addresses: [172.16.10.10/24]
        routes:
          - to: default
            via: 172.16.10.1
            metric: 100
            on-link: true
        mtu: 1500
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        parameters:
          stp: true
          forward-delay: 4
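The same bridge can be built non-persistently with iproute2 for a quick test (a sketch; requires root):

```shell
# Create the bridge with STP enabled, enslave the NIC, and move the IP
# address onto the bridge (root required, not persistent):
ip link add br0 type bridge stp_state 1
ip link set enp1s0 master br0
ip link set br0 up
ip addr add 172.16.10.10/24 dev br0
ip route add default via 172.16.10.1
```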

Check the interfaces:

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
  12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 0e:d0:7e:31:9c:74 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global br0
         valid_lft forever preferred_lft forever
      inet6 fe80::cd0:7eff:fe31:9c74/64 scope link
         valid_lft forever preferred_lft forever

Inspect the bridge and its ports; at this point the only port on the bridge is the physical interface enp1s0.

  root@server1:~# apt install -y bridge-utils
  root@ubuntu:~# brctl show
  bridge name     bridge id               STP enabled     interfaces
  br0             8000.0ed07e319c74       yes             enp1s0
  root@server1:~#

With this setup, in a KVM virtualization environment, virtual machines attached to the bridge can be given IP addresses in the same subnet as the physical NIC, so a VM can be reached as conveniently as a physical host.

macvlan mode

macvlan (MAC Virtual LAN) is a network virtualization technique provided by the Linux kernel. It allows multiple virtual interfaces to be created on top of one physical NIC; each virtual interface has its own MAC address and can be given its own IP address for communication. VMs or containers attached via macvlan sit in the same subnet and share the same broadcast domain as the host.

In macvlan mode the peer switch port can be configured in access or trunk mode; with a trunk port, macvlan combines well with VLANs.
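A common use of macvlan is giving a container its own MAC and IP address on the physical network. Below is a minimal sketch with a network namespace standing in for a container (requires root; the namespace name, interface name, and the .21 address are illustrative):

```shell
# Create a namespace, give it a macvlan child of enp1s0, and configure it:
ip netns add ns1
ip link add mv1 link enp1s0 type macvlan mode bridge
ip link set mv1 netns ns1
ip netns exec ns1 ip addr add 172.16.10.21/24 dev mv1
ip netns exec ns1 ip link set mv1 up
ip netns exec ns1 ip route add default via 172.16.10.1
```

One caveat: the parent interface and its macvlan children cannot reach each other directly; host-to-namespace traffic must hairpin through the external switch or go via a dedicated macvlan interface on the host.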

Topology diagram (image omitted).

macvlan IP mode

In this variant the uplink switch port is in access mode, and the macvlan parent interface and its sub-interfaces are all given IP addresses in the same subnet.

Switch configuration:

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type access
  [H3C-GigabitEthernet1/0/1]port access vlan 10
  [H3C-GigabitEthernet1/0/1]exit

Server configuration: macvlan supports several modes; bridge mode is used here, and the interface creation is persisted with a networkd-dispatcher hook.

  cat > /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh <<EOF
  #!/bin/bash
  ip link add macvlan0 link enp1s0 type macvlan mode bridge
  ip link add macvlan1 link enp1s0 type macvlan mode bridge
  EOF
  chmod u+x,g+x,o+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan:

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
      macvlan0:
        addresses:
          - 172.16.10.11/24
      macvlan1:
        addresses:
          - 172.16.10.12/24
    version: 2

Apply the network configuration:

  netplan apply

Check the interfaces: two macvlan interfaces have been created, each with an IP address in the parent NIC's subnet and its own independent MAC address.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
         valid_lft forever preferred_lft forever
  13: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.11/24 brd 172.16.10.255 scope global macvlan0
         valid_lft forever preferred_lft forever
      inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
         valid_lft forever preferred_lft forever
  14: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.12/24 brd 172.16.10.255 scope global macvlan1
         valid_lft forever preferred_lft forever
      inet6 fe80::d073:75ff:fe14:b204/64 scope link
         valid_lft forever preferred_lft forever

Test connectivity to the gateway:

  root@server1:~# ping -c 3 172.16.10.1
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

  --- 172.16.10.1 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
  root@server1:~#

macvlan VLAN mode

In this variant the uplink switch port is in trunk mode, the macvlan parent interface carries no IP address, and each macvlan interface is configured with a different VLAN sub-interface.

Switch configuration:

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type trunk
  [H3C-GigabitEthernet1/0/1]port trunk permit vlan 10 20
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]

Server configuration: as before, macvlan bridge mode is used, persisted with a networkd-dispatcher hook.

  cat > /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh <<EOF
  #!/bin/bash
  ip link add macvlan0 link enp1s0 type macvlan mode bridge
  ip link add macvlan1 link enp1s0 type macvlan mode bridge
  EOF
  chmod u+x,g+x,o+x /etc/networkd-dispatcher/routable.d/10-macvlan-interfaces.sh

Configure netplan: VLAN sub-interfaces vlan10 and vlan20 are created on macvlan0 and macvlan1 respectively.

  root@ubuntu:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
      macvlan0:
        dhcp4: false
      macvlan1:
        dhcp4: false
    vlans:
      vlan10:
        id: 10
        link: macvlan0
        addresses: ["172.16.10.10/24"]
        routes:
          - to: default
            via: 172.16.10.1
            metric: 200
      vlan20:
        id: 20
        link: macvlan1
        addresses: ["172.16.20.10/24"]
        routes:
          - to: default
            via: 172.16.20.1
            metric: 300
    version: 2

Apply the network configuration:

  netplan apply

Check the interfaces: the two macvlan interfaces and their corresponding VLAN sub-interfaces have been created.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever
  11: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
         valid_lft forever preferred_lft forever
  12: macvlan1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::d073:75ff:fe14:b204/64 scope link
         valid_lft forever preferred_lft forever
  13: vlan10@macvlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 32:e8:b4:0a:47:62 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global vlan10
         valid_lft forever preferred_lft forever
      inet6 fe80::30e8:b4ff:fe0a:4762/64 scope link
         valid_lft forever preferred_lft forever
  14: vlan20@macvlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether d2:73:75:14:b2:04 brd ff:ff:ff:ff:ff:ff
      inet 172.16.20.10/24 brd 172.16.20.255 scope global vlan20
         valid_lft forever preferred_lft forever
      inet6 fe80::d073:75ff:fe14:b204/64 scope link
         valid_lft forever preferred_lft forever

Test connectivity from both VLAN interfaces to the external gateways:

  root@server1:~# ping -c 3 172.16.10.1
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

  --- 172.16.10.1 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
  root@server1:~#
  root@server1:~# ping -c 3 172.16.20.1
  PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
  64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.35 ms
  64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=1.48 ms
  64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=1.46 ms

  --- 172.16.20.1 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 1.353/1.429/1.477/0.054 ms
  root@server1:~#

ipvlan mode

ipvlan (IP Virtual LAN) is another network virtualization technique in the Linux kernel. It creates multiple virtual interfaces on top of one physical NIC, each with its own IP address.

ipvlan is similar to macvlan in that both derive multiple virtual interfaces from a single parent interface. The one significant difference is that all ipvlan sub-interfaces share the parent interface's MAC address while carrying different IP addresses.

In ipvlan mode the peer switch port can again be configured in access or trunk mode; with a trunk port, ipvlan combines well with VLANs.
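The operating mode is fixed when the ipvlan link is created. A sketch of the two common variants (requires root; interface names are illustrative — roughly, l2 forwards like macvlan but with the shared parent MAC, while l3 routes between sub-interfaces and does not participate in L2 broadcast):

```shell
# Create one ipvlan interface in each mode on the same parent:
ip link add ipv-l2 link enp1s0 type ipvlan mode l2
ip link add ipv-l3 link enp1s0 type ipvlan mode l3
# "ip -d link show" prints the mode in the ipvlan details:
ip -d link show ipv-l3
```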

Topology diagram (image omitted).

Switch configuration:

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type access
  [H3C-GigabitEthernet1/0/1]port access vlan 10
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]

Server configuration: ipvlan supports three modes (l2, l3, l3s); l3 mode is used here, persisted with a networkd-dispatcher hook.

  cat > /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh <<EOF
  #!/bin/bash
  ip link add ipvlan0 link enp1s0 type ipvlan mode l3
  ip link add ipvlan1 link enp1s0 type ipvlan mode l3
  EOF
  chmod u+x,g+x,o+x /etc/networkd-dispatcher/routable.d/10-ipvlan-interfaces.sh

Configure netplan:

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
      ipvlan0:
        addresses:
          - 172.16.10.11/24
      ipvlan1:
        addresses:
          - 172.16.10.12/24
    version: 2

Apply the network configuration:

  netplan apply

Check the interfaces: two ipvlan interfaces have been created, with IP addresses in the parent NIC's subnet, and each sharing the parent NIC's MAC address.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
         valid_lft forever preferred_lft forever
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever
  9: ipvlan0@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.11/24 brd 172.16.10.255 scope global ipvlan0
         valid_lft forever preferred_lft forever
      inet6 fe80::7cb5:9b00:159:a71/64 scope link
         valid_lft forever preferred_lft forever
  10: ipvlan1@enp1s0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.12/24 brd 172.16.10.255 scope global ipvlan1
         valid_lft forever preferred_lft forever
      inet6 fe80::7cb5:9b00:259:a71/64 scope link
         valid_lft forever preferred_lft forever

Test connectivity to the gateway:

  root@server1:~# ping -c 3 172.16.10.1
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=3.60 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.45 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=1.44 ms

  --- 172.16.10.1 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 1.441/2.163/3.602/1.017 ms
  root@server1:~#

macvtap mode

An alternative to a bridge for giving KVM virtual machines external connectivity is the Linux macvtap driver. macvtap is useful when you don't want to create a regular bridge but still want hosts on the local network to reach the VMs.

The main difference from a bridge is that macvtap attaches directly to a network interface on the KVM host. This direct attachment bypasses most of the code and components involved in connecting to and using a software bridge, effectively shortening the code path. The shorter path typically improves throughput and reduces latency to external systems.
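A macvtap interface can also be created by hand; each one exposes a character device node that QEMU/KVM opens in place of a classic tap device (a sketch; requires root, and the /dev/tapN node is named after the interface index):

```shell
# Create a macvtap child of enp1s0 in bridge mode:
ip link add link enp1s0 name macvtap0 type macvtap mode bridge
ip link set macvtap0 up
# The matching character device is /dev/tapN, where N is the ifindex:
ls -l /dev/tap$(cat /sys/class/net/macvtap0/ifindex)
```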

Topology diagram (image omitted).

Switch configuration:

  <H3C>system-view
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-type access
  [H3C-GigabitEthernet1/0/1]port access vlan 10
  [H3C-GigabitEthernet1/0/1]exit

Host NIC configuration:

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  # This is the network config written by 'subiquity'
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
    version: 2

Install the KVM virtualization stack and create two virtual machines, each given a macvtap sub-interface allocated from the enp1s0 parent NIC.

  virt-install \
    --name vm1 \
    --vcpus 1 \
    --memory 2048 \
    --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
    --os-variant ubuntu22.04 \
    --noautoconsole \
    --import \
    --autostart \
    --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

  virt-install \
    --name vm2 \
    --vcpus 1 \
    --memory 2048 \
    --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
    --os-variant ubuntu22.04 \
    --noautoconsole \
    --import \
    --autostart \
    --network type=direct,source=enp1s0,source_mode=bridge,model=virtio

Check the interfaces: two macvtap interfaces have been created.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
      link/ether 7c:b5:9b:59:0a:71 brd ff:ff:ff:ff:ff:ff
      inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
         valid_lft forever preferred_lft forever
      inet6 fe80::7eb5:9bff:fe59:a71/64 scope link
         valid_lft forever preferred_lft forever
  5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
      link/ether 52:54:00:bb:15:22 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
         valid_lft forever preferred_lft forever
  6: macvtap0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
      link/ether 52:54:00:41:8f:a3 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::5054:ff:fe41:8fa3/64 scope link
         valid_lft forever preferred_lft forever
  7: macvtap1@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
      link/ether 52:54:00:93:2c:4a brd ff:ff:ff:ff:ff:ff
      inet6 fe80::5054:ff:fe93:2c4a/64 scope link
         valid_lft forever preferred_lft forever

Configure an IP address in VM 1:

  root@vm1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.11/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
    version: 2

Configure an IP address in VM 2:

  root@vm2:~# cat /etc/netplan/00-installer-config.yaml
  network:
    ethernets:
      enp1s0:
        dhcp4: false
        addresses:
          - 172.16.10.12/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.10.1
    version: 2

Test connectivity to the gateway:

  root@vm1:~# ping 172.16.10.1 -c 3
  PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
  64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.38 ms
  64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=1.75 ms
  64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=4.34 ms

  --- 172.16.10.1 ping statistics ---
  3 packets transmitted, 3 received, 0% packet loss, time 2004ms
  rtt min/avg/max/mdev = 1.382/2.491/4.344/1.318 ms

Combining bond, VLAN, and bridge

Bond the server's two NICs, create two VLAN sub-interfaces on top of the bond, and attach each sub-interface to its own Linux bridge. Virtual machines created under the different bridges will then belong to different VLANs.
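As a design note: an alternative to the one-bridge-per-VLAN layout is a single VLAN-aware bridge with VLAN filtering enabled, where per-port VLAN membership replaces the separate vlan10/vlan20 bridges. A non-persistent sketch (requires root; names match this article):

```shell
# One VLAN-aware bridge on top of the bond instead of br10/br20:
ip link add br0 type bridge vlan_filtering 1
ip link set bond0 master br0
ip link set br0 up
# Allow tagged VLANs 10 and 20 on the uplink (bond) port:
bridge vlan add dev bond0 vid 10
bridge vlan add dev bond0 vid 20
# A VM tap port would then be enslaved to br0 and given an untagged VLAN,
# e.g.: bridge vlan add dev <tap> vid 10 pvid untagged
```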

Topology diagram (image omitted).

Switch configuration: create a dynamic link aggregation group containing ports g1/0/1 and g1/0/3, set the aggregate interface to trunk mode permitting VLANs 8, 10, and 20, and make VLAN 8 the native (PVID) VLAN of the trunk for management use.

  <H3C>system-view
  [H3C]interface Vlan-interface 8
  [H3C-Vlan-interface8]ip address 172.16.8.1 24
  [H3C-Vlan-interface8]exit
  [H3C]interface Bridge-Aggregation 1
  [H3C-Bridge-Aggregation1]link-aggregation mode dynamic
  [H3C-Bridge-Aggregation1]quit
  [H3C]interface GigabitEthernet 1/0/1
  [H3C-GigabitEthernet1/0/1]port link-aggregation group 1
  [H3C-GigabitEthernet1/0/1]exit
  [H3C]interface GigabitEthernet 1/0/3
  [H3C-GigabitEthernet1/0/3]port link-aggregation group 1
  [H3C-GigabitEthernet1/0/3]exit
  [H3C]interface Bridge-Aggregation 1
  [H3C-Bridge-Aggregation1]port link-type trunk
  [H3C-Bridge-Aggregation1]port trunk permit vlan 8 10 20
  [H3C-Bridge-Aggregation1]port trunk pvid vlan 8
  [H3C-Bridge-Aggregation1]undo port trunk permit vlan 1
  [H3C-Bridge-Aggregation1]exit
  [H3C]

Server NIC configuration. Note that bond0 carries the management IP address, matching the switch's native VLAN 8.

  root@server1:~# cat /etc/netplan/00-installer-config.yaml
  network:
    version: 2
    ethernets:
      enp1s0:
        dhcp4: false
      enp2s0:
        dhcp4: false
    bonds:
      bond0:
        dhcp4: false
        dhcp6: false
        interfaces:
          - enp1s0
          - enp2s0
        addresses:
          - 172.16.8.10/24
        nameservers:
          addresses:
            - 223.5.5.5
            - 223.6.6.6
        routes:
          - to: default
            via: 172.16.8.1
        parameters:
          mode: 802.3ad
          lacp-rate: fast
          mii-monitor-interval: 100
          transmit-hash-policy: layer2+3
    bridges:
      br10:
        interfaces: [ vlan10 ]
      br20:
        interfaces: [ vlan20 ]
    vlans:
      vlan10:
        id: 10
        link: bond0
      vlan20:
        id: 20
        link: bond0

Check the interfaces: bond0 has been created, with VLAN sub-interfaces vlan10 and vlan20 on top of it, each enslaved to its bridge.

  root@server1:~# ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
      link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      inet 127.0.0.1/8 scope host lo
         valid_lft forever preferred_lft forever
      inet6 ::1/128 scope host
         valid_lft forever preferred_lft forever
  2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr 7c:b5:9b:59:0a:71
  3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff permaddr e4:54:e8:dc:e5:88
  15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
      inet 172.16.8.10/24 brd 172.16.8.255 scope global bond0
         valid_lft forever preferred_lft forever
      inet6 fe80::acfd:60ff:fe48:841a/64 scope link
         valid_lft forever preferred_lft forever
  16: br10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether ee:df:66:ab:c2:4b brd ff:ff:ff:ff:ff:ff
      inet6 fe80::ecdf:66ff:feab:c24b/64 scope link
         valid_lft forever preferred_lft forever
  17: br20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
      link/ether 9e:4d:f4:0a:6d:13 brd ff:ff:ff:ff:ff:ff
      inet6 fe80::9c4d:f4ff:fe0a:6d13/64 scope link
         valid_lft forever preferred_lft forever
  18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff
  19: vlan20@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br20 state UP group default qlen 1000
      link/ether ae:fd:60:48:84:1a brd ff:ff:ff:ff:ff:ff

Check the bridges that were created:

```shell
root@server1:~# brctl show
bridge name     bridge id               STP enabled     interfaces
br10            8000.eedf66abc24b       no              vlan10
br20            8000.9e4df40a6d13       no              vlan20
```
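`brctl` comes from the legacy bridge-utils package; the same information is available from iproute2 via `bridge link show` or `ip link show master br10`. As a sketch, bridge membership can also be read out of `ip link` output; a captured sample line from the listing above is inlined here:

```shell
# Sketch: extract which port is enslaved to br10 from `ip link`-style output.
# On the host you would pipe the live command, e.g.: ip -o link | awk '/master br10/...'
sample='18: vlan10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br10 state UP'
port=$(printf '%s\n' "$sample" | awk '/master br10/{sub(/@.*/, "", $2); print $2}')
echo "$port"   # prints: vlan10
```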

Test connectivity from the bond0 IP to the external gateway:

```shell
root@server1:~# ping 172.16.8.1 -c 3
PING 172.16.8.1 (172.16.8.1) 56(84) bytes of data.
64 bytes from 172.16.8.1: icmp_seq=1 ttl=255 time=1.55 ms
64 bytes from 172.16.8.1: icmp_seq=2 ttl=255 time=1.61 ms
64 bytes from 172.16.8.1: icmp_seq=3 ttl=255 time=1.62 ms

--- 172.16.8.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 1.550/1.593/1.620/0.030 ms
root@server1:~#
```
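ping confirms reachability, but the LACP negotiation itself is reported by the bonding driver in `/proc/net/bonding/bond0`. A small parsing sketch; a captured fragment of that file is inlined, on the server you would `cat /proc/net/bonding/bond0` directly:

```shell
# Sketch: confirm the bond negotiated 802.3ad and the link is up.
# Sample fragment of /proc/net/bonding/bond0; on the host, read the real file.
status='Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
LACP rate: fast'
mode=$(printf '%s\n' "$status" | awk -F': ' '/^Bonding Mode/{print $2}')
mii=$(printf '%s\n' "$status" | awk -F': ' '/^MII Status/{print $2}')
echo "mode=$mode mii=$mii"
```

If the switch-side LACP configuration is wrong, MII Status typically still shows `up` per slave, but the aggregator information in the same file will show the slaves failing to join one aggregator, which is the faster place to diagnose than ping alone.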

Install a KVM virtualization environment on server1, then create two new KVM networks, each bound to a different bridge:

```shell
cat > br10-network.xml <<EOF
<network>
  <name>br10-net</name>
  <forward mode="bridge"/>
  <bridge name="br10"/>
</network>
EOF
cat > br20-network.xml <<EOF
<network>
  <name>br20-net</name>
  <forward mode="bridge"/>
  <bridge name="br20"/>
</network>
EOF
virsh net-define br10-network.xml
virsh net-define br20-network.xml
virsh net-start br10-net
virsh net-start br20-net
virsh net-autostart br10-net
virsh net-autostart br20-net
```
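Before `virsh net-define`, the XML can be checked to make sure each network points at the intended bridge. A sketch assuming python3 is available on the host; the XML body matches the br10 file generated above:

```shell
# Sketch: parse the libvirt network XML and print the name/bridge pairing.
# Assumes python3; the XML string matches br10-network.xml written above.
pairing=$(python3 - <<'EOF'
import xml.etree.ElementTree as ET
xml = '<network><name>br10-net</name><forward mode="bridge"/><bridge name="br10"/></network>'
root = ET.fromstring(xml)
print(root.findtext('name'), root.find('bridge').get('name'))
EOF
)
echo "$pairing"   # prints: br10-net br10
```

With `forward mode="bridge"`, libvirt does no NAT or DHCP for the network; guest traffic goes straight onto the named host bridge and picks up its VLAN from the vlan sub-interface attached there.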

Check the newly created networks:

```shell
root@server1:~# virsh net-list
 Name       State    Autostart   Persistent
---------------------------------------------
 br10-net   active   yes         yes
 br20-net   active   yes         yes
 default    active   yes         yes
```

Create two virtual machines, each attached to a different network:

```shell
virt-install \
  --name vm1 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm1/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br10-net

virt-install \
  --name vm2 \
  --vcpus 1 \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/vm2/jammy-server-cloudimg-amd64.img \
  --os-variant ubuntu22.04 \
  --import \
  --autostart \
  --noautoconsole \
  --network network=br20-net
```

Check the created virtual machines:

```shell
root@server1:~# virsh list
 Id   Name   State
----------------------
 13   vm1    running
 14   vm2    running
```

Configure a vlan10 IP address for vm1:

```shell
virsh console vm1
cat > /etc/netplan/00-installer-config.yaml <<EOF
network:
  ethernets:
    enp1s0:
      addresses:
        - 172.16.10.10/24
      nameservers:
        addresses:
          - 223.5.5.5
      routes:
        - to: default
          via: 172.16.10.1
  version: 2
EOF
netplan apply
```

Configure a vlan20 IP address for vm2:

```shell
virsh console vm2
cat > /etc/netplan/00-installer-config.yaml <<EOF
network:
  ethernets:
    enp1s0:
      addresses:
        - 172.16.20.10/24
      nameservers:
        addresses:
          - 223.5.5.5
      routes:
        - to: default
          via: 172.16.20.1
  version: 2
EOF
netplan apply
```

Log in to vm1 and test connectivity from vm1 to the external gateway:

```shell
root@vm1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a4:aa:9d brd ff:ff:ff:ff:ff:ff
    inet 172.16.10.10/24 brd 172.16.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fea4:aa9d/64 scope link
       valid_lft forever preferred_lft forever
root@vm1:~#
root@vm1:~# ping 172.16.10.1 -c 3
PING 172.16.10.1 (172.16.10.1) 56(84) bytes of data.
64 bytes from 172.16.10.1: icmp_seq=1 ttl=255 time=1.51 ms
64 bytes from 172.16.10.1: icmp_seq=2 ttl=255 time=7.10 ms
64 bytes from 172.16.10.1: icmp_seq=3 ttl=255 time=2.10 ms

--- 172.16.10.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.505/3.568/7.101/2.509 ms
root@vm1:~#
```

Log in to vm2 and test connectivity from vm2 to the external gateway:

```shell
root@vm2:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:89:61:da brd ff:ff:ff:ff:ff:ff
    inet 172.16.20.10/24 brd 172.16.20.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe89:61da/64 scope link
       valid_lft forever preferred_lft forever
root@vm2:~#
root@vm2:~# ping 172.16.20.1 -c 3
PING 172.16.20.1 (172.16.20.1) 56(84) bytes of data.
64 bytes from 172.16.20.1: icmp_seq=1 ttl=255 time=1.73 ms
64 bytes from 172.16.20.1: icmp_seq=2 ttl=255 time=2.00 ms
64 bytes from 172.16.20.1: icmp_seq=3 ttl=255 time=2.00 ms

--- 172.16.20.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.732/1.911/2.003/0.126 ms
root@vm2:~#
```

Reprinted from: https://blog.csdn.net/networken/article/details/137021517
Copyright belongs to the original author, willops. If there is any infringement, please contact us for removal.
