Notes on NIC Teaming (teamd) in Linux for Load Sharing and Failover


Foreword

Well, I am preparing for the RHCA, so I studied and organized this part of the material.

This post covers:

A network teaming (teamd) configuration demo and a demonstration of the NIC failover mechanism

A demo of the common commands for managing a network team


"I am not sad because you deceived me; I am sad because I can no longer trust you." -- Nietzsche

Keeping application services highly available presupposes that the network itself is available. Only when the network is available does it make sense to scale services horizontally and rely on cluster load balancing to avoid single points of failure. So how do we make the network highly available?

At the link layer, fault tolerance is generally achieved through redundancy, avoiding single points of failure:

Between switches, link aggregation (Eth-Trunk) bundles several cables into one logical link. This balances load across the lines and increases bandwidth, while also providing fault tolerance: if one line fails, the whole network is not interrupted.

For an Eth-Trunk configuration on Huawei switches, note that both switches must be configured:

# A new cable was added between the switches; both ends use interface Ethernet 0/0/8
system-view                                            # enter the system view
[Huawei]clear configuration interface Ethernet 0/0/7   # if the interface was configured before, clear it first
[Huawei]interface Ethernet0/0/7
[Huawei-Ethernet0/0/7]display this                     # the interface is shut down and needs to be enabled
[Huawei-Ethernet0/0/7]undo shutdown                    # enable the interface
[Huawei]interface Eth-Trunk 1                          # entering also creates the link-aggregation interface
[Huawei-Eth-Trunk1]trunkport Ethernet 0/0/7 0/0/8      # add both interfaces to Eth-Trunk 1
[Huawei-Eth-Trunk1]port link-type trunk
[Huawei-Eth-Trunk1]port trunk allow-pass vlan all
[Huawei-Eth-Trunk1]display this                        # verify that all VLANs are allowed
[Huawei]display current-configuration                  # verify the configuration
save                                                   # save the configuration
# After configuring, test with ping; remove one cable and ping again -- both tests succeed

But how is link-layer fault tolerance achieved between the switch and the server?

That is what I want to share today: a server avoids a single point of failure by using redundant network interfaces (NICs). Multiple network interfaces are linked together into one aggregated logical interface, and traffic is spread across the underlying interfaces, providing both fault tolerance and higher throughput, much like link aggregation on the switch side.

There are several technologies for network interface link aggregation. In Red Hat distributions, RHEL 5 and RHEL 6 use bonding, while RHEL 7 and RHEL 8 use teaming for the same purpose. In RHEL 7 and RHEL 8, teaming and bonding coexist, so we can choose either one.

Teaming (a network team) binds several physical NICs of the same server into one virtual NIC in software (similarly, inside a virtual machine, several virtual NICs can be bound into one logical NIC).

To the outside network, the server exposes only one usable network interface. To any application and to the network, the server has a single network link, or in other words a single reachable IP address.


Inside the server, the team spreads network traffic across multiple network interfaces, providing failover or higher throughput.


Configuring a Network Team

The required package is teamd:

┌──[root@workstation.lab.example.com]-[~/web]
└─$yum list teamd
Last metadata expiration check: 0:41:27 ago on Sun 17 Apr 2022 08:34:23 PM CST.
Installed Packages
teamd.x86_64    1.28-4.el8    @anaconda
Available Packages
teamd.i686      1.28-4.el8    rhel-8.1-for-x86_64-baseos-rpms
┌──[root@workstation.lab.example.com]-[~/web]
└─$rpm -qc teamd
/etc/dbus-1/system.d/teamd.conf
/usr/lib/systemd/system/teamd@.service
┌──[root@workstation.lab.example.com]-[~/web]
└─$nmcli connection add type
team        team-slave

A network team is implemented by a kernel driver plus a userspace daemon (teamd):

The kernel driver handles network packets efficiently.

The teamd daemon handles the logic and the interfaces.
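
Once a team connection is active, the userspace daemon is visible as an ordinary process. A minimal check, assuming a team named team0 has already been brought up as in the demo below:

ps -o pid,cmd -C teamd      # list the teamd daemon(s) started for the team devices
lsmod | grep '^team'        # the team kernel module should be loaded as well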

Since we have redundancy, traffic distribution is necessarily involved. A team defines this through its runner; the supported runner (load-sharing) types can be seen from tab completion:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0 ifname team0 type team team.runner
activebackup   broadcast   lacp   loadbalance   random   roundrobin

All network interaction goes through the team interface (also called the master interface). A team interface contains multiple port interfaces (ports, or slave interfaces).

When controlling team interfaces with NetworkManager, especially while troubleshooting, keep the following points in mind (a short sketch demonstrating them follows the list):

Starting the team interface does not automatically start its port interfaces.

Stopping the team interface always stops its port interfaces.

Starting a port interface always starts the team interface.

Stopping a port interface does not stop the team interface.

A team interface without ports can start static IP connections.

When starting a DHCP connection, a team interface without port interfaces waits for a port.

If the team interface has a DHCP connection and is waiting for ports, adding a port interface with a carrier completes the activation.

If the team interface has a DHCP connection and is waiting for ports, adding a port without a carrier leaves it waiting.
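
A minimal sketch of those dependency rules, assuming the connection names team0, team0-port1 and team0-port2 that are created later in this post:

nmcli con up team0                      # starts only team0; its ports stay down
nmcli con up team0-port1                # starting a port also brings up team0
nmcli con down team0-port1              # stopping a port leaves team0 running
nmcli con down team0                    # stopping the team stops all of its ports
nmcli -f GENERAL.STATE con show team0   # check the resulting state after each step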

Configuring and managing the team interface and its port interfaces with nmcli involves four steps: create the team connection, configure its IP settings, add the port connections, and activate them. A condensed sketch is shown below, followed by the detailed walkthrough.
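
A condensed sketch of those four steps (device names and the IP address are taken from the walkthrough below):

# 1. Create the team connection with the activebackup runner
nmcli con add con-name team0 ifname team0 type team team.runner activebackup
# 2. Configure the IPv4 settings of the team interface
nmcli con modify team0 ipv4.method manual ipv4.addresses 192.168.0.100/24
# 3. Add the port connections for the member NICs
nmcli con add con-name team0-port1 type team-slave ifname eth1 master team0
nmcli con add con-name team0-port2 type team-slave ifname eth2 master team0
# 4. Activate the team and its ports
nmcli con up team0 && nmcli con up team0-port1 && nmcli con up team0-port2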

The current environment:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME                UUID                                  TYPE      DEVICE
eth0-static         7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
Wired connection 1  4ae4bb9e-8f2d-3774-95f8-868d74edcc3c  ethernet  --
Wired connection 2  c0e6d328-fcb8-3715-8d82-f8c37cb42152  ethernet  --
Wired connection 3  9b5ac87b-572c-3632-b8a2-ca242f22733d  ethernet  --
┌──[root@servera.lab.example.com]-[~]
└─$nmcli dev
DEVICE  TYPE      STATE         CONNECTION
eth0    ethernet  connected     eth0-static
eth1    ethernet  disconnected  --
eth2    ethernet  disconnected  --
lo      loopback  unmanaged     --

We will create a team link-aggregation interface and then set the master of the eth1 and eth2 interfaces to that team interface.

Create the team-type interface team0 and set the runner to activebackup (active/backup mode):

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0 ifname team0 type team team.runner activebackup
Connection 'team0' (2053fe72-6785-4b16-90f0-256c2bf8c4f3) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME                UUID                                  TYPE      DEVICE
team0               2053fe72-6785-4b16-90f0-256c2bf8c4f3  team      team0
Wired connection 2  c0e6d328-fcb8-3715-8d82-f8c37cb42152  ethernet  eth1
Wired connection 3  9b5ac87b-572c-3632-b8a2-ca242f22733d  ethernet  eth2
eth0-static         7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
Wired connection 1  4ae4bb9e-8f2d-3774-95f8-868d74edcc3c  ethernet  --

We will run a ping test against serverd, so first check serverd's IP in order to choose an address for the team on the same subnet:

┌──[root@workstation.lab.example.com]-[~/web]
└─$ansible serverd -m shell -a 'ip a'
serverd | CHANGED | rc=0 >>
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
......
3: eth1: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:01:fa:0d brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.254/24 brd 192.168.0.255 scope global noprefixroute eth1
......
┌──[root@workstation.lab.example.com]-[~/web]
└─$

Modify team0 to add the IPv4 address and prefix 192.168.0.100/24 (this becomes the server's NIC address once the team is in place), then activate the interface:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con modify team0 ipv4.method manual ipv4.addresses 192.168.0.100/24
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/33)

Add the connection team0-port1 for the eth1 interface, set its master to team0, and activate it:


┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0-port1 type team-slave ifname eth1 master team0
Connection 'team0-port1' (fd24db64-6f9a-41d3-87a5-79f825731d7f) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0-port1
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/29)

Add the connection team0-port2 for the eth2 interface, set its master to team0, and activate it:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con add con-name team0-port2 type team-slave ifname eth2 master team0
Connection 'team0-port2' (16a95c2a-b581-4b99-ab5a-b9d5ea6b3a87) successfully added.
┌──[root@servera.lab.example.com]-[~]
└─$nmcli con up team0-port2
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/32)

Check the current connection information:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con show
NAME         UUID                                  TYPE      DEVICE
team0        2053fe72-6785-4b16-90f0-256c2bf8c4f3  team      team0
eth0-static  7c6d44fe-8349-45ea-beb5-226fe674225b  ethernet  eth0
team0-port1  fd24db64-6f9a-41d3-87a5-79f825731d7f  ethernet  eth1
team0-port2  16a95c2a-b581-4b99-ab5a-b9d5ea6b3a87  ethernet  eth2
......
┌──[root@servera.lab.example.com]-[~]
└─$

Test with ping 192.168.0.254 (serverd). When the team interface is specified explicitly, the traffic can be seen leaving from the 192.168.0.100 address we configured:

┌──[root@servera.lab.example.com]-[~]
└─$ping -I team0 -c 4 192.168.0.254
PING 192.168.0.254 (192.168.0.254) from 192.168.0.100 team0: 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=1 ttl=64 time=0.663 ms
64 bytes from 192.168.0.254: icmp_seq=2 ttl=64 time=0.675 ms
64 bytes from 192.168.0.254: icmp_seq=3 ttl=64 time=0.670 ms
64 bytes from 192.168.0.254: icmp_seq=4 ttl=64 time=0.701 ms

--- 192.168.0.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 69ms
rtt min/avg/max/mdev = 0.663/0.677/0.701/0.023 ms
┌──[root@servera.lab.example.com]-[~]
└─$ping -c 4 192.168.0.254
PING 192.168.0.254 (192.168.0.254) 56(84) bytes of data.
64 bytes from 192.168.0.254: icmp_seq=1 ttl=64 time=1.32 ms
64 bytes from 192.168.0.254: icmp_seq=2 ttl=64 time=0.891 ms
64 bytes from 192.168.0.254: icmp_seq=3 ttl=64 time=1.10 ms
64 bytes from 192.168.0.254: icmp_seq=4 ttl=64 time=0.677 ms

--- 192.168.0.254 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 0.677/0.997/1.323/0.240 ms
┌──[root@servera.lab.example.com]-[~]
└─$

View the team interface state with the teamdctl command:

┌──[root@servera.lab.example.com]-[~]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth1
┌──[root@servera.lab.example.com]-[~]
└─$

We can see that the runner is activebackup: eth1 is currently the active interface and eth2 is the backup, so all traffic currently flows through eth1.
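
As a side note (not part of this failover demo), the active port can also be switched by hand. A hedged sketch, assuming the runner.active_port item shown in the state output is writable via teamdctl's state item set command (see man teamdctl on your system):

teamdctl team0 state item get runner.active_port     # currently eth1
teamdctl team0 state item set runner.active_port eth2
teamdctl team0 state                                 # verify that eth2 is now active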

Keep pinging 192.168.0.254 in the background:

┌──[root@servera.lab.example.com]-[~]
└─$ping 192.168.0.254 > /dev/null &
[1] 3324

Monitoring the interface traffic shows that eth1 is continuously sending ICMP packets:

┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
01:55:45.049388 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 47, length 64
01:55:45.050152 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 47, length 64
01:55:45.833842 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:55:46.073251 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 48, length 64
01:55:46.073964 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 48, length 64
01:55:47.097140 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 49, length 64
01:55:47.098096 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 49, length 64
^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$

No traffic goes through eth2; it is idle:

┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
01:56:11.817347 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:56:13.801263 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:56:15.121093 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:02:fa:0d (oui Unknown), length 286
01:56:15.849784 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$

Use nmcli con down to take eth1's connection down, simulating a failure of the eth1 interface:

┌──[root@servera.lab.example.com]-[~]
└─$nmcli con down team0-port1
Connection 'team0-port1' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/34)

Monitoring the traffic again shows that eth2 is now continuously sending the ICMP packets, while eth1 is idle.

┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth1
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
01:57:15.817487 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:17.802205 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:19.849582 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
01:57:21.834175 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8004, length 35
^C
4 packets captured
4 packets received by filter
0 packets dropped by kernel

┌──[root@servera.lab.example.com]-[~]
└─$tcpdump -i eth2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 262144 bytes
01:57:25.721245 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 146, length 64
01:57:25.722066 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 146, length 64
01:57:25.801453 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
01:57:26.722599 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 147, length 64
01:57:26.723161 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 147, length 64
01:57:27.737269 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from 52:54:00:01:fa:0a (oui Unknown), length 286
01:57:27.769125 IP servera.lab.example.com > 192.168.0.254: ICMP echo request, id 3324, seq 148, length 64
01:57:27.769796 IP 192.168.0.254 > servera.lab.example.com: ICMP echo reply, id 3324, seq 148, length 64
01:57:27.850081 STP 802.1d, Config, Flags [none], bridge-id 8000.52:54:00:d7:bc:61.8005, length 35
^C
9 packets captured
9 packets received by filter
0 packets dropped by kernel
┌──[root@servera.lab.example.com]-[~]
└─$

Checking the state with teamdctl team0 state shows that the active port is now eth2 and that the eth1 port is no longer in the team:

┌──[root@servera.lab.example.com]-[~]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
┌──[root@servera.lab.example.com]-[~]
└─$
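
Deactivating the connection with nmcli is only one way to provoke a failover. An alternative sketch, using standard iproute2 commands so that the ethtool link watcher itself sees the carrier drop (restore steps included):

ip link set dev eth1 down   # simulate a link failure on the active port
teamdctl team0 state        # the backup port should have taken over
ip link set dev eth1 up     # bring the link back
nmcli con up team0-port1    # re-activate the port connection if necessary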

Managing a Network Team

The network team's configuration files live in /etc/sysconfig/network-scripts and cover both the team interface and the port interfaces.

┌──[root@servera.lab.example.com]-[~]
└─$cd /etc/sysconfig/network-scripts/
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$ls
ifcfg-eth0-static  ifcfg-team0  ifcfg-team0-port1  ifcfg-team0-port2  ifcfg-Wired_connection_1

View the ifcfg-team0 configuration file:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat ifcfg-team0
TEAM_CONFIG="{ \"runner\": { \"name\": \"activebackup\" } }"
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=team0
UUID=2053fe72-6785-4b16-90f0-256c2bf8c4f3
DEVICE=team0
ONBOOT=yes
DEVICETYPE=Team
IPADDR=192.168.0.100
PREFIX=24

The port interface's configuration file, ifcfg-team0-port1:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat ifcfg-team0-port1
NAME=team0-port1
UUID=fd24db64-6f9a-41d3-87a5-79f825731d7f
DEVICE=eth1
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$

When a team interface is created, the runner defaults to roundrobin; a different value can be specified with team.runner.
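
For illustration, a hedged sketch that creates a second team using the loadbalance runner instead; the team1 name and the eth3/eth4 devices are hypothetical:

nmcli con add con-name team1 ifname team1 type team team.runner loadbalance
nmcli con add con-name team1-port1 type team-slave ifname eth3 master team1
nmcli con add con-name team1-port2 type team-slave ifname eth4 master team1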

「Modifying the team configuration from the command line」

nmcli con mod CONN_NAME team.config JSON-configuration-file-or-string

For the format of JSON-configuration-file-or-string, see the EXAMPLES section of man teamd.conf.

The team.* fields that can be modified (shown here via tab completion):

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con modify team0 team.
team.config                  team.runner                      team.runner-sys-prio
team.link-watchers           team.runner-active               team.runner-tx-balancer
team.mcast-rejoin-count      team.runner-agg-select-policy    team.runner-tx-balancer-interval
team.mcast-rejoin-interval   team.runner-fast-rate            team.runner-tx-hash
team.notify-peers-count      team.runner-hwaddr-policy
team.notify-peers-interval   team.runner-min-ports

Modify the configuration with a JSON string:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con modify team0 team.config '{ "runner": { "name": "activebackup" } }'
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con show team0 | grep run
team.config:                            { "runner": { "name": "activebackup" } }
team.runner:                            activebackup
team.runner-hwaddr-policy:              --
team.runner-tx-hash:                    --
team.runner-tx-balancer:                --
team.runner-tx-balancer-interval:       -1 (unset)
team.runner-active:                     yes
team.runner-fast-rate:                  no
team.runner-sys-prio:                   -1 (unset)
team.runner-min-ports:                  -1 (unset)
team.runner-agg-select-policy:          --
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
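
man teamd.conf also documents link watchers other than ethtool. A sketch, for illustration only, that switches team0 to an arp_ping link watcher; the interval, missed_max and the source/target addresses below are assumptions, not values from this lab:

nmcli con modify team0 team.config '{ "runner": { "name": "activebackup" }, "link_watch": { "name": "arp_ping", "interval": 100, "missed_max": 30, "source_host": "192.168.0.100", "target_host": "192.168.0.254" } }'
nmcli con up team0    # re-activate so the new link watcher takes effect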

The teamdctl command:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl
No team device specified.
teamdctl [options] teamdevname command [command args]
    -h --help                Show this help
    -v --verbose             Increase output verbosity
    -o --oneline             Force output to one line if possible
    -D --force-dbus          Force to use D-Bus interface
    -Z --force-zmq=ADDRESS   Force to use ZeroMQ interface [-Z[Address]]
    -U --force-usock         Force to use UNIX domain socket interface
Commands:
    config dump
    config dump noports
    config dump actual
    state
    state dump
    state view
    state item get ITEMPATH
    state item set ITEMPATH VALUE
    port add PORTDEV
    port remove PORTDEV
    port present PORTDEV
    port config update PORTDEV PORTCONFIG
    port config dump PORTDEV

「Modifying the team configuration via a team.conf configuration file」

Export the configuration file:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 config dump >team.conf

Edit the configuration file:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$cat team.conf
{
    "device": "team0",
    "mcast_rejoin": {
        "count": 1
    },
    "notify_peers": {
        "count": 1
    },
    "ports": {
        "eth1": {
            "link_watch": {
                "name": "ethtool"    # link monitoring method
            }
        },
        "eth2": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "activebackup"
    }
}

Load the configuration:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con mod team0 team.config team.conf

Activate the team interface:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/51)

Use teamdctl to operate on the team interface, adding and removing ports:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2

Remove the port eth2:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 port remove eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
runner:
  active port:

Add the port eth2 back:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 port add eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamdctl team0 state
setup:
  runner: activebackup
ports:
  eth2
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
        down count: 0
runner:
  active port: eth2
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$

Operating on the team with the teamnl command:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl
No team device specified.
teamnl [options] teamdevname command [command args]
    -h --help          Show this help
    -p --port_name     team slave port name
    -a --array_index   team option array index
Commands:
    ports
    options
    getoption OPT_NAME
    setoption OPT_NAME OPT_VALUE
    monitor OPT_STYLE

View the port status and its ID:

┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl team0 port
 4: eth2: up 4294967295Mbit FD
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$teamnl team0 getoption activeport
4
┌──[root@servera.lab.example.com]-[/etc/sysconfig/network-scripts]
└─$
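
The activeport value is the interface index of the active port (4 here, matching eth2 in the port listing). Going by the setoption command shown in the teamnl help above, the active port could presumably be switched by writing another port's index; a hedged sketch with an assumed index of 3:

teamnl team0 ports                    # note the index in front of each port name
teamnl team0 setoption activeport 3   # 3 is an assumed ifindex of the other port
teamnl team0 getoption activeport     # confirm the change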
