Must-Have Core Skills for the Cloud-Native Era: Docker Advanced (Docker Networking in Depth)

      Reader contribution · 2022-05-30

      Earlier we walked through the fundamentals of Docker:

      Docker Basics

      Next comes a systematic look at the advanced topics: networking internals, Docker in practice, Docker Compose, Harbor, and Swarm. Stay tuned.

      Introduction to Docker networking

      Docker packages Linux kernel technologies such as namespaces, cgroups, and union file systems into a custom container format, providing a virtualized runtime environment.

      namespace: provides isolation, e.g. pid (processes), net (network), mnt (mount points)

      CGroups (Control Groups): resource limiting, e.g. memory and CPU

      Union File Systems: layering for images and containers
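To make the namespace idea concrete, every process exposes its namespace memberships as files under /proc. A minimal sketch (Linux only; these are standard kernel interfaces, not Docker-specific):

```shell
#!/bin/sh
# Every process has one handle per namespace type under /proc/<pid>/ns
# (net, mnt, pid, ...). Two processes that share a namespace show the
# same inode number here, which is how tools like `lsns` group them.
ls -l /proc/self/ns

# The cgroup hierarchy that enforces CPU/memory limits is mounted here:
ls /sys/fs/cgroup 2>/dev/null || echo "cgroup filesystem not mounted"
```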

      1. Computer network models

      Docker networking docs: https://docs.docker.com/network/

      OSI: the Open Systems Interconnection reference model.

      TCP/IP: Transmission Control Protocol / Internet Protocol, a protocol suite for carrying information across multiple different networks. TCP/IP does not mean just the TCP and IP protocols; it is a whole family including FTP, SMTP, TCP, UDP, IP and others. It is called TCP/IP simply because TCP and IP are its most representative members.

      The idea of layering: each layer offers higher-level, value-added services on top of the services provided by the layer below it, and the topmost layer offers the services on which distributed applications run.

      The client sends a request: (figure omitted)

      The server receives the request: (figure omitted)

      2. Network interfaces in Linux

      2.1 Viewing interface information

      Command to view interfaces: ip a

      [vagrant@localhost ~]$ ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
          inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
             valid_lft 85987sec preferred_lft 85987sec
          inet6 fe80::5054:ff:fe4d:77d3/64 scope link
             valid_lft forever preferred_lft forever
      3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 08:00:27:6e:31:45 brd ff:ff:ff:ff:ff:ff
          inet 192.168.56.10/24 brd 192.168.56.255 scope global noprefixroute eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::a00:27ff:fe6e:3145/64 scope link
             valid_lft forever preferred_lft forever
      4: docker0: mtu 1500 qdisc noqueue state DOWN group default
          link/ether 02:42:bf:79:9f:de brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever

      The ip a output shows four interfaces on this CentOS host, serving respectively as: lo (the loopback device), eth0 and eth1 (the VM's NICs), and docker0 (Docker's default bridge).

      ip link show:

      [vagrant@localhost ~]$ ip link show
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: eth0: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
          link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
      3: eth1: mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
          link/ether 08:00:27:6e:31:45 brd ff:ff:ff:ff:ff:ff
      4: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
          link/ether 02:42:bf:79:9f:de brd ff:ff:ff:ff:ff:ff

      You can also view the interfaces as files: ls /sys/class/net

      [vagrant@localhost ~]$ ls /sys/class/net
      docker0  eth0  eth1  lo

      2.2 Configuration files

      In Linux a network interface is backed by a file, so you just need to find the corresponding interface file. They live under:

      [vagrant@localhost network-scripts]$ cd /etc/sysconfig/network-scripts/
      [vagrant@localhost network-scripts]$ ls
      ifcfg-eth0   ifdown-eth   ifdown-ppp      ifdown-tunnel  ifup-ippp  ifup-post    ifup-TeamPort     network-functions-ipv6
      ifcfg-eth1   ifdown-ippp  ifdown-routes   ifup           ifup-ipv6  ifup-ppp     ifup-tunnel
      ifcfg-lo     ifdown-ipv6  ifdown-sit      ifup-aliases   ifup-isdn  ifup-routes  ifup-wireless
      ifdown       ifdown-isdn  ifdown-Team     ifup-bnep      ifup-plip  ifup-sit     init.ipv6-global
      ifdown-bnep  ifdown-post  ifdown-TeamPort ifup-eth       ifup-plusb ifup-Team    network-functions


      2.3 Interface operations

      Add an IP address to an interface:

      [root@localhost ~]# ip addr add 192.168.100.120/24 dev eth0
      [root@localhost ~]# ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
          inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
             valid_lft 84918sec preferred_lft 84918sec
          inet 192.168.100.120/24 scope global eth0    #### the newly added IP address
             valid_lft forever preferred_lft forever
          inet6 fe80::5054:ff:fe4d:77d3/64 scope link
             valid_lft forever preferred_lft forever
      3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 08:00:27:6e:31:45 brd ff:ff:ff:ff:ff:ff
          inet 192.168.56.10/24 brd 192.168.56.255 scope global noprefixroute eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::a00:27ff:fe6e:3145/64 scope link
             valid_lft forever preferred_lft forever
      4: docker0: mtu 1500 qdisc noqueue state DOWN group default
          link/ether 02:42:bf:79:9f:de brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever

      Delete the IP address: ip addr delete 192.168.100.120/24 dev eth0

      [root@localhost ~]# ip addr delete 192.168.100.120/24 dev eth0
      [root@localhost ~]# ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
          inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
             valid_lft 84847sec preferred_lft 84847sec
          inet6 fe80::5054:ff:fe4d:77d3/64 scope link
             valid_lft forever preferred_lft forever
      3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 08:00:27:6e:31:45 brd ff:ff:ff:ff:ff:ff
          inet 192.168.56.10/24 brd 192.168.56.255 scope global noprefixroute eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::a00:27ff:fe6e:3145/64 scope link
             valid_lft forever preferred_lft forever
      4: docker0: mtu 1500 qdisc noqueue state DOWN group default
          link/ether 02:42:bf:79:9f:de brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever

      2.4 Interpreting the output

      State: UP / DOWN / UNKNOWN, etc.

      link/ether: the MAC address

      inet: the bound IP address

      3. Network namespaces

      A network namespace is the key feature behind network virtualization. It lets you create multiple isolated network spaces, each with its own network stack; whether it is a VM or a container, at runtime it appears to sit on its own independent network.

      3.1 Network namespaces in practice

      Add a namespace:

      ip netns add ns1

      List the existing namespaces:

      ip netns list

      [root@localhost ~]# ip netns add ns1
      [root@localhost ~]# ip netns list
      ns1

      Delete a namespace:

      ip netns delete ns1

      [root@localhost ~]# ip netns add ns1
      [root@localhost ~]# ip netns list
      ns1
      [root@localhost ~]# ip netns delete ns1
      [root@localhost ~]# ip netns list
      [root@localhost ~]#

      View the interfaces inside namespace ns1:

      ip netns exec ns1 ip a

      [root@localhost ~]# ip netns exec ns1 ip a
      1: lo: mtu 65536 qdisc noop state DOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

      Bring an interface up:

      ip netns exec ns1 ifup lo

      [root@localhost ~]# ip netns exec ns1 ip link show
      1: lo: mtu 65536 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      [root@localhost ~]# ip netns exec ns1 ifup lo
      [root@localhost ~]# ip netns exec ns1 ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      [root@localhost ~]#

      Bring an interface down:

      [root@localhost ~]# ip netns exec ns1 ifdown lo
      [root@localhost ~]# ip netns exec ns1 ip a
      1: lo: mtu 65536 qdisc noqueue state DOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

      You can also set the state with ip link:

      [root@localhost ~]# ip netns exec ns1 ip link set lo up
      [root@localhost ~]# ip netns exec ns1 ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      [root@localhost ~]# ip netns exec ns1 ip link set lo down
      [root@localhost ~]# ip netns exec ns1 ip a
      1: lo: mtu 65536 qdisc noqueue state DOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
      [root@localhost ~]#

      Add another namespace, ns2:

      [root@localhost ~]# ip netns add ns2
      [root@localhost ~]# ip netns list
      ns2
      ns1

      Now we want these two namespaces to talk to each other.

      To connect two network namespaces, the technique we need is:

      veth pair (Virtual Ethernet Pair): a pair of linked virtual ports; traffic entering one end comes out the other.

      Create a pair of links, i.e. the links the veth pair will connect:

      ip link add veth-ns1 type veth peer name veth-ns2

      A new pair of interfaces then shows up on the host.

      Next, hand veth-ns1 to namespace ns1 and veth-ns2 to ns2:

      ip link set veth-ns1 netns ns1
      ip link set veth-ns2 netns ns2

      Check the links in ns1 and ns2 again:

      [root@localhost ~]# ip netns exec ns1 ip link
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      6: veth-ns1@if5: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 7e:bb:ee:13:a2:9a brd ff:ff:ff:ff:ff:ff link-netnsid 1
      [root@localhost ~]# ip netns exec ns2 ip link
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      5: veth-ns2@if6: mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
          link/ether 7e:f8:18:5a:ef:1f brd ff:ff:ff:ff:ff:ff link-netnsid 0

      At this point veth-ns1 and veth-ns2 still have no IP addresses, so something is clearly missing for communication:

      ip netns exec ns1 ip addr add 192.168.0.11/24 dev veth-ns1
      ip netns exec ns2 ip addr add 192.168.0.12/24 dev veth-ns2

      Checking again, the state is still DOWN, so we need to bring the interfaces up:

      [root@localhost ~]# ip netns exec ns1 ip link set veth-ns1 up
      [root@localhost ~]# ip netns exec ns2 ip link set veth-ns2 up

      Then check the state again.

      Now the two ends can ping each other:

      ip netns exec ns1 ping 192.168.0.12
      ip netns exec ns2 ping 192.168.0.11
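The whole veth-pair walkthrough above condenses into a single script. This is only a sketch of exactly the commands used in this section (it must run as root on a Linux host with the iproute2 tools; ns1/ns2 and the 192.168.0.0/24 addresses are just this section's choices):

```shell
#!/bin/sh
set -e                                   # abort on the first failed command

ip netns add ns1                         # two isolated network namespaces
ip netns add ns2

ip link add veth-ns1 type veth peer name veth-ns2   # a linked veth pair
ip link set veth-ns1 netns ns1           # move one end into each namespace
ip link set veth-ns2 netns ns2

ip netns exec ns1 ip addr add 192.168.0.11/24 dev veth-ns1
ip netns exec ns2 ip addr add 192.168.0.12/24 dev veth-ns2
ip netns exec ns1 ip link set veth-ns1 up
ip netns exec ns2 ip link set veth-ns2 up

ip netns exec ns1 ping -c 1 192.168.0.12 # traffic crosses the pair

ip netns delete ns1                      # clean up
ip netns delete ns2
```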

      3.2 A container's network namespace

      As described above, every container in fact gets its own independent network namespace. We can verify that from inside the containers.

      Create two Tomcat containers:

      docker run -d --name tomcat01 -p 8081:8080 tomcat

      docker run -d --name tomcat02 -p 8082:8080 tomcat

      Enter each container and check its IP:

      docker exec -it tomcat01 ip a

      docker exec -it tomcat02 ip a

      The two containers can ping each other.

      Question: tomcat01 and tomcat02 sit in two different network namespaces, so how can they ping each other? You might assume it works exactly like the namespace exercise above, but note that we never created a veth pair here ourselves.

      4. Container networking in depth: bridge

      4.1 Docker's default bridge

      First check the host's current network situation with ip a:

      [root@localhost tomcat]# ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 52:54:00:4d:77:d3 brd ff:ff:ff:ff:ff:ff
          inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
             valid_lft 66199sec preferred_lft 66199sec
          inet6 fe80::5054:ff:fe4d:77d3/64 scope link
             valid_lft forever preferred_lft forever
      3: eth1: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
          link/ether 08:00:27:6e:31:45 brd ff:ff:ff:ff:ff:ff
          inet 192.168.56.10/24 brd 192.168.56.255 scope global noprefixroute eth1
             valid_lft forever preferred_lft forever
          inet6 fe80::a00:27ff:fe6e:3145/64 scope link
             valid_lft forever preferred_lft forever
      4: docker0: mtu 1500 qdisc noqueue state UP group default
          link/ether 02:42:52:d4:0a:9f brd ff:ff:ff:ff:ff:ff
          inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
             valid_lft forever preferred_lft forever
          inet6 fe80::42:52ff:fed4:a9f/64 scope link
             valid_lft forever preferred_lft forever
      24: veth78a90d0@if23: mtu 1500 qdisc noqueue master docker0 state UP group default
          link/ether 7e:6b:8c:bf:7e:30 brd ff:ff:ff:ff:ff:ff link-netnsid 2
          inet6 fe80::7c6b:8cff:febf:7e30/64 scope link
             valid_lft forever preferred_lft forever
      26: vetha2bfbf4@if25: mtu 1500 qdisc noqueue master docker0 state UP group default
          link/ether ce:2f:ed:e5:61:32 brd ff:ff:ff:ff:ff:ff link-netnsid 3
          inet6 fe80::cc2f:edff:fee5:6132/64 scope link
             valid_lft forever preferred_lft forever

      Then check tomcat01's network with docker exec -it tomcat01 ip a; we find:

      [root@localhost tomcat]# docker exec -it tomcat01 ip a
      1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
      23: eth0@if24: mtu 1500 qdisc noqueue state UP group default
          link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
          inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
             valid_lft forever preferred_lft forever

      We find that the host can ping tomcat01's address:

      [root@localhost tomcat]# ping 172.17.0.2
      PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
      64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.038 ms
      64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.038 ms
      ^C
      --- 172.17.0.2 ping statistics ---
      2 packets transmitted, 2 received, 0% packet loss, time 999ms
      rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms

      The ping succeeds even though the CentOS host and tomcat01 belong to two different network namespaces, so how are they connected? See the diagram.

      In fact, eth0 inside tomcat01 is paired with a veth interface attached to docker0 on the host, just like veth-ns1 and veth-ns2 in the earlier exercise. Confirming this is easy:

      yum install bridge-utils
      brctl show

      Run it:

      [root@localhost tomcat]# brctl show
      bridge name   bridge id           STP enabled   interfaces
      docker0       8000.024252d40a9f   no            veth78a90d0
                                                      vetha2bfbf4

      Compare this against the ip a output.

      Diagrammed:

      This way of wiring containers is called bridge. You can also list Docker's network modes with docker network ls; bridge is Docker's default.

      [root@localhost tomcat]# docker network ls
      NETWORK ID     NAME      DRIVER    SCOPE
      92242fc0f805   bridge    bridge    local
      96b999d7fcc2   host      host      local
      17b86f9caa33   none      null      local

      Let's inspect the bridge network: docker network inspect bridge

      "Containers": { "4b3500fed6b99c00b3ed1ae46bd6bc33040c77efdab343175363f32fbcf42e63": { "Name": "tomcat01", "EndpointID": "40fc0925fcb59c9bb002779580107ab9601640188bf157fa57b1c2de9478053a", "MacAddress": "02:42:ac:11:00:02", "IPv4Address": "172.17.0.2/16", "IPv6Address": "" }, "92d2ff3e9be523099ac4b45058c5bf4652a77a27b7053a9115ea565ab43f9ab0": { "Name": "tomcat02", "EndpointID": "1d6c3bd73e3727dd368edf3cc74d2f01b5c458223f844d6188486cb26ea255bc", "MacAddress": "02:42:ac:11:00:03", "IPv4Address": "172.17.0.3/16", "IPv6Address": "" } }

      1

      2

      3

      4

      5

      6

      7

      8

      9

      10

      11

      12

      13

      14

      15

      16
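Rather than eyeballing the full JSON, docker network inspect accepts a Go template via -f. A small sketch for pulling just the attached containers' addresses (assumes a running Docker daemon with tomcat01 attached to the default bridge):

```shell
# Print "name address" for each container attached to the default bridge.
docker network inspect bridge \
  -f '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'

# Or read one container's bridge IP directly:
docker inspect -f '{{.NetworkSettings.IPAddress}}' tomcat01
```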

      From inside the tomcat01 container the internet is reachable as well (worth adding to the diagram); that NAT is implemented with iptables.
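Since the NAT is done with iptables, you can see the rule Docker installs on the host (requires root; 172.17.0.0/16 is the default bridge subnet):

```shell
# Docker adds a MASQUERADE rule to the nat table's POSTROUTING chain:
# traffic leaving 172.17.0.0/16 for the outside world gets source-NATed
# to the host's own address on the way out.
iptables -t nat -L POSTROUTING -n -v | grep -i masquerade
```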

      4.2 Custom networks

      Create a network of type bridge:

      docker network create tomcat-net
      # or, with an explicit subnet:
      docker network create --subnet=172.18.0.0/24 tomcat-net

      List the existing networks: docker network ls

      [root@localhost ~]# docker network create tomcat-net
      43915cba1f9204751b48896d7d28b83b4b6cf35f06fac6ff158ced5fb9ddb5b3
      [root@localhost ~]# docker network ls
      NETWORK ID     NAME         DRIVER    SCOPE
      b5c9cfbc0410   bridge       bridge    local
      96b999d7fcc2   host         host      local
      17b86f9caa33   none         null      local
      43915cba1f92   tomcat-net   bridge    local

      View tomcat-net's details: docker network inspect tomcat-net

      [root@localhost ~]# docker network inspect tomcat-net
      [
          {
              "Name": "tomcat-net",
              "Id": "43915cba1f9204751b48896d7d28b83b4b6cf35f06fac6ff158ced5fb9ddb5b3",
              "Created": "2021-10-11T12:10:19.543766962Z",
              "Scope": "local",
              "Driver": "bridge",
              "EnableIPv6": false,
              "IPAM": {
                  "Driver": "default",
                  "Options": {},
                  "Config": [
                      {
                          "Subnet": "172.18.0.0/16",
                          "Gateway": "172.18.0.1"
                      }
                  ]
              },
              "Internal": false,
              "Attachable": false,
              "Ingress": false,
              "ConfigFrom": {
                  "Network": ""
              },
              "ConfigOnly": false,
              "Containers": {},
              "Options": {},
              "Labels": {}
          }
      ]

      Delete a network: docker network rm tomcat-net

      Create a Tomcat container that uses tomcat-net:

      [root@localhost ~]# docker run -d --name custom-net-tomcat --network tomcat-net tomcat-ip:1.0
      264b3901f8f12fd7f4cc69810be6a24de48f82402b1e5b0df364bd1ee72d8f0e

      Check custom-net-tomcat's network info on the host (only the relevant part is shown):

      12: br-43915cba1f92: mtu 1500 qdisc noqueue state UP group default
          link/ether 02:42:71:a6:67:c7 brd ff:ff:ff:ff:ff:ff
          inet 172.18.0.1/16 brd 172.18.255.255 scope global br-43915cba1f92
             valid_lft forever preferred_lft forever
          inet6 fe80::42:71ff:fea6:67c7/64 scope link
             valid_lft forever preferred_lft forever
      14: veth282a555@if13: mtu 1500 qdisc noqueue master br-43915cba1f92 state UP group default
          link/ether 3a:3d:83:15:3f:ed brd ff:ff:ff:ff:ff:ff link-netnsid 3
          inet6 fe80::383d:83ff:fe15:3fed/64 scope link
             valid_lft forever preferred_lft forever

      Check the bridge interfaces:

      [root@localhost ~]# brctl show
      bridge name       bridge id           STP enabled   interfaces
      br-43915cba1f92   8000.024271a667c7   no            veth282a555
      docker0           8000.02423964f095   no            veth4526c0c
                                                          vethaa2f6f4
                                                          vethc6ad4c2

      Pinging tomcat01 (172.17.0.2) from the custom-net-tomcat container now fails:

      [root@localhost ~]# docker exec -it custom-net-tomcat ping 172.17.0.2
      PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
      ^C
      --- 172.17.0.2 ping statistics ---
      3 packets transmitted, 0 received, 100% packet loss, time 2000ms

      If we also connect tomcat01 to tomcat-net, it should work:

      docker network connect tomcat-net tomcat01

      [root@localhost ~]# docker exec -it tomcat01 ping custom-net-tomcat
      PING custom-net-tomcat (172.18.0.2) 56(84) bytes of data.
      64 bytes from custom-net-tomcat.tomcat-net (172.18.0.2): icmp_seq=1 ttl=64 time=0.138 ms
      ^C
      --- custom-net-tomcat ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
      [root@localhost ~]# docker exec -it custom-net-tomcat ping tomcat01
      PING tomcat01 (172.18.0.3) 56(84) bytes of data.
      64 bytes from tomcat01.tomcat-net (172.18.0.3): icmp_seq=1 ttl=64 time=0.031 ms

      5. Container networking in depth: host & none

      5.1 Host

      In host mode the container shares the host's network stack, and all of the host's interfaces are available to the container. The container's hostname matches the host's.

      Create a container on the host network:

      docker run -d --name my-tomcat-host --network host tomcat-ip:1.0

      Check the IP address:

      docker exec -it my-tomcat-host ip a

      Inspect the host network:

      docker network inspect host

      "Containers": { "f495a6892d422e61daab01e3fcfa4abb515753e5f9390af44c93cae376ca7464": { "Name": "my-tomcat-host", "EndpointID": "77012b1ac5d15bde3105d2eb2fe0e58a5ef78fb44a88dc8b655d373d36cde5da", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }

      1

      2

      3

      4

      5

      6

      7

      8

      9

      5.2 None

      In none mode the container gets no IP configuration at all: it cannot reach external networks or other containers. It has only a loopback address, which can be enough for running batch jobs.

      Create a Tomcat container on the none network:

      docker run -d --name my-tomcat-none --network none tomcat-ip:1.0

      Check the IP address:

      docker exec -it my-tomcat-none ip a

      Inspect the none network:

      docker network inspect none

      "Containers": { "c957b61dae93fbb9275acf73c370e5df1aaf44a986579ee43ab751f790220807": { "Name": "my-tomcat-none", "EndpointID": "16bf30fb7328ceb433b55574dc071bf346efa58e2eb92b6f40d7a902ddc94293", "MacAddress": "", "IPv4Address": "", "IPv6Address": "" } }

      1

      2

      3

      4

      5

      6

      7

      8

      9

      6. Port mapping

      Create a Tomcat container named port-tomcat:

      docker run -d --name port-tomcat tomcat-ip:1.0

      Think about how to reach the Tomcat service:

      docker exec -it port-tomcat bash
      curl localhost:8080

      What if we want to reach it from the CentOS 7 host itself?

      docker exec -it port-tomcat ip a
      curl 172.17.0.4:8080

      And what if we want to use localhost on the CentOS host? Then port 8080 of port-tomcat has to be mapped onto the host:

      docker rm -f port-tomcat
      docker run -d --name port-tomcat -p 8090:8080 tomcat-ip:1.0
      curl localhost:8090
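Putting this together: with the -p 8090:8080 mapping in place, the service should answer on the host's port 8090. A sketch of the verification (it assumes the tomcat-ip:1.0 image used throughout this series and a running Docker daemon):

```shell
docker rm -f port-tomcat 2>/dev/null     # remove any old copy first
docker run -d --name port-tomcat -p 8090:8080 tomcat-ip:1.0

# -p HOST:CONTAINER publishes the container port: Docker installs an
# iptables DNAT rule forwarding host port 8090 to the container's 8080.
curl -s localhost:8090 >/dev/null && echo "reachable on host port 8090"
```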

      Here CentOS 7 is a VM running on Windows 10. What if we want to access the service from Windows via ip:port?

      # The CentOS VM and the Windows host must be on the same network segment,
      # so in the Vagrantfile:
      # This is equivalent to bridged networking. You can also pin the bridge to a
      # specific physical NIC, e.g.
      # config.vm.network "public_network", :bridge => 'en1: Wi-Fi (AirPort)'
      config.vm.network "public_network"

      centos7: ip a  ---> 192.168.8.118
      win10:   browse to 192.168.8.118:9080

      7. Multi-host communication

      This is covered in depth with Docker Swarm; here is just a quick preview.

      On a single CentOS 7 machine there is always some way to get two containers talking. But what if the containers run on two different CentOS 7 machines? Picture the setup.

      One implementation is VXLAN (Virtual Extensible LAN).

      P.S. Once you have Docker networking down, you have the core of the whole subject. If this article helped, feel free to follow along.

      Next up: Docker in practice

      Tags: Docker, Cloud Native, Networking
