Building a Linux + Docker + Ansible + Kubernetes Learning Environment from Scratch (1 Master + 3 Nodes)
I had wanted to learn K8s for a while, but had no environment for it, and K8s itself is fairly heavy. Before school started I rented an Alibaba Cloud ECS instance with a single core and 2 GB of RAM; a single-node K8s would just barely install on it, a multi-node setup was out of the question, and the demos in the books could not be followed. Multiple nodes mean operating on multiple machines, so this is also a good chance to brush up on Ansible.
This is a tutorial for building a learning environment from scratch on Win10. It covers:
Installing four Linux virtual machines with VMware Workstation: one Master management node and three Node compute nodes.
Using bridged networking, so the VMs can reach the Internet and can be accessed over SSH from the Win10 physical host.
Passwordless SSH from the Master machine to any Node machine.
Configuring Ansible with the Master as the controller node: a role to set up time synchronization, and playbooks to install and configure Docker, K8s, and so on.
Installing the Docker and K8s cluster packages, network configuration, and more.
VMware Workstation and the Linux ISO image are assumed to be in hand, and VMware Workstation is assumed to be installed already; if not, it can be downloaded online.
All I ever wanted was to try to live out that which wanted to emerge from within me. Why was that so very difficult? ------ Demian
Here we assume VMware Workstation (VMware-workstation-full-15.5.6-16341506.exe) is already installed and a Linux installation disc image (CentOS-7-x86_64-DVD-1810.iso) is ready; the versions I used are in parentheses. Our approach:
Install one Node machine first, then clone it to get the remaining two Node machines and one Master machine.
1. System installation
2. Network configuration
Set the NIC to DHCP mode (automatic IP address assignment)
```
┌──[root@localhost.localdomain]-[~]
└─$ nmcli connection modify 'ens33' ipv4.method auto connection.autoconnect yes
┌──[root@localhost.localdomain]-[~]
└─$ nmcli connection up 'ens33'
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)
┌──[root@localhost.localdomain]-[~]
└─$ ifconfig | head -2
ens33: flags=4163
```
```
┌──[root@192.168.1.7]-[~]
└─$ ifconfig
ens33: flags=4163
```
If this network setup feels inconvenient, you can use NAT mode instead, i.e. let VMnet1 or VMnet8 act as a virtual switch, so you don't have to worry about IP addressing.
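Since the rest of this guide relies on fixed addresses such as 192.168.1.10, you may prefer to pin a static IP instead of DHCP. A sketch using standard nmcli options; the address, gateway, and DNS values below are examples for this LAN, not taken from the original setup:

```
nmcli connection modify 'ens33' ipv4.method manual \
    ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 \
    ipv4.dns 223.5.5.5 connection.autoconnect yes
nmcli connection up 'ens33'
```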
3. Cloning the machines
In the same way, we clone the remaining Node machines and the Master machine; this is not shown here.
4. Hostname resolution from the management node to the compute nodes
```
┌──[root@192.168.1.10]-[~]
└─$ vim /etc/hosts
┌──[root@192.168.1.10]-[~]
└─$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.7  node0
192.168.1.9  node1
192.168.1.11 node2
192.168.1.10 master
┌──[root@192.168.1.10]-[~]
└─$
```
5. Passwordless SSH from the management node to the compute nodes
Generate a key pair with ssh-keygen, hitting Enter at every prompt
```
┌──[root@192.168.1.10]-[~]
└─$ ssh
usage: ssh [-1246AaCfGgKkMNnqsTtVvXxYy] [-b bind_address] [-c cipher_spec]
           [-D [bind_address:]port] [-E log_file] [-e escape_char]
           [-F configfile] [-I pkcs11] [-i identity_file]
           [-J [user@]host[:port]] [-L address] [-l login_name] [-m mac_spec]
           [-O ctl_cmd] [-o option] [-p port] [-Q query_option] [-R address]
           [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]]
           [user@]hostname [command]
┌──[root@192.168.1.10]-[~]
└─$ ls -ls ~/.ssh/
ls: cannot access /root/.ssh/: No such file or directory
┌──[root@192.168.1.10]-[~]
└─$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qHboVj/WfMTYCDFDZ5ISf3wEcmfsz0EXJH19U6SnxbY root@node0
The key's randomart image is:
+---[RSA 2048]----+
|      .o+.=o+.o+*|
|      ..=B +. o==|
|       ..+o.....O|
|       ... .. .=.|
|      . S. = o.E |
|       o. o + o  |
|      +... o .   |
|     o.. + o .   |
|      .. . . .   |
+----[SHA256]-----+
```
Passwordless SSH setup: copy the key over with ssh-copy-id
```
ssh-copy-id root@node0
ssh-copy-id root@node1
ssh-copy-id root@node2
```
Test passwordless login
```
ssh root@node0
ssh root@node1
ssh root@node2
```
At this point the Linux environment is ready, and anyone wanting to learn Linux can start from here. These are the notes I put together along my Linux learning path, with some hands-on practice; take a look if you are interested.
For convenience, we work directly from the physical machine from here on, since SSH is already configured. My machine does not have enough memory, so I can only run three VMs.
## 1. SSH to the control node (192.168.1.10), configure the yum repos, and install Ansible
```
┌──(liruilong@Liruilong)-[/mnt/e/docker]
└─$ ssh root@192.168.1.10
Last login: Sat Sep 11 00:23:10 2021
┌──[root@master]-[~]
└─$ ls
anaconda-ks.cfg  initial-setup-ks.cfg  下載  公共  圖片  文檔  桌面  模板  視頻  音樂
┌──[root@master]-[~]
└─$ cd /etc/yum.repos.d/
┌──[root@master]-[/etc/yum.repos.d]
└─$ ls
CentOS-Base.repo  CentOS-CR.repo  CentOS-Debuginfo.repo  CentOS-fasttrack.repo
CentOS-Media.repo  CentOS-Sources.repo  CentOS-Vault.repo  CentOS-x86_64-kernel.repo
┌──[root@master]-[/etc/yum.repos.d]
└─$ mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
┌──[root@master]-[/etc/yum.repos.d]
└─$ wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
```
Search for the ansible package
```
┌──[root@master]-[/etc/yum.repos.d]
└─$ yum list | grep ansible
ansible-collection-microsoft-sql.noarch     1.1.0-1.el7_9        extras
centos-release-ansible-27.noarch            1-1.el7              extras
centos-release-ansible-28.noarch            1-1.el7              extras
centos-release-ansible-29.noarch            1-1.el7              extras
centos-release-ansible26.noarch             1-3.el7.centos       extras
┌──[root@master]-[/etc/yum.repos.d]
```
The Aliyun yum mirror does not carry the ansible package, so we install it from EPEL.
```
┌──[root@master]-[/etc/yum.repos.d]
└─$ wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
--2021-09-11 00:40:11--  http://mirrors.aliyun.com/repo/epel-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.237, 1.180.13.236, 1.180.13.240, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.237|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 664 [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/epel.repo’

100%[==========================================>] 664  --.-K/s  in 0s

2021-09-11 00:40:12 (91.9 MB/s) - ‘/etc/yum.repos.d/epel.repo’ saved [664/664]

┌──[root@master]-[/etc/yum.repos.d]
└─$ yum install -y epel-release
```
Search for the ansible package again, and install it
```
┌──[root@master]-[/etc/yum.repos.d]
└─$ yum list|grep ansible
Existing lock /var/run/yum.pid: another copy is running as pid 12522.
Another app is currently holding the yum lock; waiting for it to exit...
  The other application is: PackageKit
    Memory :  28 M RSS (373 MB VSZ)
    Started: Sat Sep 11 00:40:41 2021 - 00:06 ago
    State  : Sleeping, pid: 12522
ansible.noarch                              2.9.25-1.el7               epel
ansible-collection-microsoft-sql.noarch     1.1.0-1.el7_9              extras
ansible-doc.noarch                          2.9.25-1.el7               epel
ansible-inventory-grapher.noarch            2.4.4-1.el7                epel
ansible-lint.noarch                         3.5.1-1.el7                epel
ansible-openstack-modules.noarch            0-20140902git79d751a.el7   epel
ansible-python3.noarch                      2.9.25-1.el7               epel
ansible-review.noarch                       0.13.4-1.el7               epel
ansible-test.noarch                         2.9.25-1.el7               epel
centos-release-ansible-27.noarch            1-1.el7                    extras
centos-release-ansible-28.noarch            1-1.el7                    extras
centos-release-ansible-29.noarch            1-1.el7                    extras
centos-release-ansible26.noarch             1-3.el7.centos             extras
kubernetes-ansible.noarch                   0.6.0-0.1.gitd65ebd5.el7   epel
python2-ansible-runner.noarch               1.0.1-1.el7                epel
python2-ansible-tower-cli.noarch            3.3.9-1.el7                epel
vim-ansible.noarch                          3.2-1.el7                  epel
┌──[root@master]-[/etc/yum.repos.d]
└─$ yum install -y ansible
```
```
┌──[root@master]-[/etc/yum.repos.d]
└─$ ansible --version
ansible 2.9.25
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
┌──[root@master]-[/etc/yum.repos.d]
└─$
```
View the host list
```
┌──[root@master]-[/etc/yum.repos.d]
└─$ ansible 127.0.0.1 --list-hosts
  hosts (1):
    127.0.0.1
┌──[root@master]-[/etc/yum.repos.d]
```
2. Ansible environment configuration
Here we use liruilong, the regular account created during installation. In production you would configure a dedicated user rather than using root.
1. Writing the main configuration file ansible.cfg
```
┌──[root@master]-[/home/liruilong]
└─$ su liruilong
[liruilong@master ~]$ pwd
/home/liruilong
[liruilong@master ~]$ mkdir ansible;cd ansible;vim ansible.cfg
[liruilong@master ansible]$ cat ansible.cfg
[defaults]
# host inventory file: the list of machines to manage
inventory=inventory
# remote user for connecting to the managed machines
remote_user=liruilong
# roles directory
roles_path=roles
# privilege escalation for the user (sudo)
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[liruilong@master ansible]$
```
2. Host inventory:
The list of managed machines. Entries can be domain names, IPs, groups ([groupname]), or aggregations ([groupname:children]); you can also set the username and password explicitly.
```
[liruilong@master ansible]$ vim inventory
[liruilong@master ansible]$ cat inventory
[nodes]
node1
node2
[liruilong@master ansible]$ ansible all --list-hosts
  hosts (2):
    node1
    node2
[liruilong@master ansible]$ ansible nodes --list-hosts
  hosts (2):
    node1
    node2
[liruilong@master ansible]$ ls
ansible.cfg  inventory
[liruilong@master ansible]$
```
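For reference, the group, aggregation, and per-host credential forms mentioned above look roughly like this in an inventory file; the masters/k8s groups and the commented node3 entry are illustrative, not part of this setup:

```ini
[masters]
master

[nodes]
node1
node2

# aggregation: the k8s group contains both groups above
[k8s:children]
masters
nodes

# a host entry with explicit connection settings (illustrative only)
# node3 ansible_host=192.168.1.12 ansible_user=liruilong ansible_password=...
```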
3. Passwordless SSH for the liruilong user
On the master node, as the liruilong user, configure each of the three nodes in turn
```
[liruilong@master ansible]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/liruilong/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/liruilong/.ssh/id_rsa.
Your public key has been saved in /home/liruilong/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:cJ+SHgfMk00X99oCwEVPi1Rjoep7Agfz8DTjvtQv0T0 liruilong@master
The key's randomart image is:
+---[RSA 2048]----+
|  .oo*oB.        |
| o +.+ B +       |
|. . B . + o  .   |
|  o o+=+o .  o   |
|    SO=o    .o.. |
|    ..==..   .E. |
|     .+o  ..   . |
|     .o.o.       |
|      o+ ..      |
+----[SHA256]-----+
```
```
[liruilong@master ansible]$ ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/liruilong/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liruilong@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.
```
node2 and master need the same configuration as well
```
[liruilong@master ansible]$ ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/liruilong/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
liruilong@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.
[liruilong@master ansible]$
```
4. Privilege escalation for the regular user liruilong
One snag here: I had configured passwordless sudo on my machines, but it did not take effect the first time; a password was required once, then no longer, and Ansible still failed. It turned out that creating a file named after the regular user under /etc/sudoers.d with the same authorization line fixed it, though I never figured out why.
node1
```
┌──[root@node1]-[~]
└─$ visudo
┌──[root@node1]-[~]
└─$ cat /etc/sudoers | grep liruilong
liruilong ALL=(ALL) NOPASSWD:ALL
┌──[root@node1]-[/etc/sudoers.d]
└─$ cd /etc/sudoers.d/
┌──[root@node1]-[/etc/sudoers.d]
└─$ vim liruilong
┌──[root@node1]-[/etc/sudoers.d]
└─$ cat liruilong
liruilong ALL=(ALL) NOPASSWD:ALL
┌──[root@node1]-[/etc/sudoers.d]
└─$
```
```
┌──[root@node2]-[~]
└─$ vim /etc/sudoers.d/liruilong
```
Set up node2 and master the same way
5. Testing ad-hoc commands
ansible <inventory host pattern> -m <module name> [-a '<task arguments>']
```
[liruilong@master ansible]$ ansible all -m ping
node2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
[liruilong@master ansible]$ ansible nodes -m command -a 'ip a list ens33'
node2 | CHANGED | rc=0 >>
2: ens33:
```
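A few more ad-hoc examples in the same pattern, using the standard copy, yum, and service modules; the particular arguments are only illustrations, not steps from this setup:

```
ansible nodes -m copy -a 'src=/etc/hosts dest=/etc/hosts'   # push a file to the nodes
ansible nodes -m yum -a 'name=tree state=present'           # install a package
ansible nodes -m service -a 'name=sshd state=started'       # manage a service
```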
At this point Ansible is fully configured, and you can study Ansible in this environment. These are the notes from my Ansible learning, mainly RHCE exam notes, with some hands-on practice; take a look if you are interested.
Docker and K8s can be installed via roles from rhel-system-roles, via custom roles, or directly with playbooks. Here we write Ansible playbooks directly and build things up step by step.
For Docker itself, interested readers can check my container technology study notes. Our focus here is K8s.
1. Deploying Docker with Ansible
For the deployment, one option is to run a ready-made script someone else has written; the other is to do it step by step yourself. We take the second approach.
The machines we have now
1. Configure the yum repos on the node machines
這里因?yàn)槲覀円霉?jié)點(diǎn)機(jī)裝包,所以需要配置yum源,ansible配置的方式有很多,可以通過yum_repository配置,我們這里為了方便,直接使用執(zhí)行shell的方式。
```
[liruilong@master ansible]$ ansible nodes -m shell -a 'mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup;wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo'
node2 | CHANGED | rc=0 >>
--2021-09-11 11:40:20--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.241, 1.180.13.238, 1.180.13.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.241|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

     0K ..                                  100% 3.99M=0.001s

2021-09-11 11:40:20 (3.99 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
node1 | CHANGED | rc=0 >>
--2021-09-11 11:40:20--  http://mirrors.aliyun.com/repo/Centos-7.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 1.180.13.241, 1.180.13.238, 1.180.13.237, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|1.180.13.241|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2523 (2.5K) [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/CentOS-Base.repo’

     0K ..                                  100%  346M=0s

2021-09-11 11:40:20 (346 MB/s) - ‘/etc/yum.repos.d/CentOS-Base.repo’ saved [2523/2523]
[liruilong@master ansible]$
```
With the repos configured, let's verify:
```
[liruilong@master ansible]$ ansible all -m shell -a 'yum repolist | grep aliyun'
[liruilong@master ansible]$
```
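For comparison, the same repo swap can be expressed declaratively with the yum_repository module. This is only a sketch; the repo id, description, and baseurl layout are assumptions, not what was actually run above:

```yaml
- name: configure aliyun base repo on nodes
  hosts: nodes
  tasks:
    - yum_repository:
        name: aliyun-base
        description: Aliyun CentOS-7 base
        baseurl: http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
        gpgcheck: no
```

Unlike the shell approach, re-running such a play changes nothing once the repo file is already in place.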
2. Configure time synchronization
For convenience we use an Ansible role here: install the RHEL system roles package, copy the timesync role into our roles directory, and create the playbook timesync.yml
```
┌──[root@master]-[/home/liruilong/ansible]
└─$ yum -y install rhel-system-roles
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                       | 3.6 kB  00:00:00
epel                                       | 4.7 kB  00:00:00
extras                                     | 2.9 kB  00:00:00
updates                                    | 2.9 kB  00:00:00
(1/2): epel/x86_64/updateinfo              | 1.0 MB  00:00:00
(2/2): epel/x86_64/primary_db              | 7.0 MB  00:00:01
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package rhel-system-roles.noarch.0.1.0.1-4.el7_9 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================
 Package               Arch      Version            Repository     Size
========================================================================
Installing:
 rhel-system-roles     noarch    1.0.1-4.el7_9      extras        988 k

Transaction Summary
========================================================================
Install  1 Package

Total download size: 988 k
Installed size: 4.8 M
Downloading packages:
rhel-system-roles-1.0.1-4.el7_9.noarch.rpm  | 988 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : rhel-system-roles-1.0.1-4.el7_9.noarch        1/1
  Verifying  : rhel-system-roles-1.0.1-4.el7_9.noarch        1/1

Installed:
  rhel-system-roles.noarch 0:1.0.1-4.el7_9

Complete!
┌──[root@master]-[/home/liruilong/ansible]
└─$ su - liruilong
Last login: Sat Sep 11 13:16:23 CST 2021 on pts/2
[liruilong@master ~]$ cd /home/liruilong/ansible/
[liruilong@master ansible]$ ls
ansible.cfg  inventory
[liruilong@master ansible]$ cp -r /usr/share/ansible/roles/rhel-system-roles.timesync roles/
```
```
[liruilong@master ansible]$ ls
ansible.cfg  inventory  roles  timesync.yml
[liruilong@master ansible]$ cat timesync.yml
- name: timesync
  hosts: all
  vars:
    - timesync_ntp_servers:
        - hostname: 192.168.1.10
          iburst: yes
  roles:
    - rhel-system-roles.timesync
[liruilong@master ansible]$
```
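The playbook can then be run and the result spot-checked. On CentOS 7 the timesync role configures chronyd, so chronyc is one way to verify; the exact check command here is a suggestion, not from the original session:

```
ansible-playbook timesync.yml
ansible all -m shell -a 'chronyc sources'   # each host should list 192.168.1.10 as a time source
```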
3. Docker environment initialization
Write the Docker environment initialization playbook install_docker_playbook.yml
```yaml
- name: install docker on node1,node2
  hosts: node1,node2
  tasks:
    - yum: name=docker state=absent
    - yum: name=docker state=present
    - yum: name=firewalld state=absent
    - shell: echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    - shell: sysctl -p
    - shell: sed -i '18 i ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /lib/systemd/system/docker.service
    - shell: cat /lib/systemd/system/docker.service
    - shell: systemctl daemon-reload
    - service: name=docker state=restarted enabled=yes
```
Run the playbook
```
[liruilong@master ansible]$ cat install_docker_playbook.yml
- name: install docker on node1,node2
  hosts: node1,node2
  tasks:
    - yum: name=docker state=absent
    - yum: name=docker state=present
    - yum: name=firewalld state=absent
    - shell: echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
    - shell: sysctl -p
    - shell: sed -i '18 i ExecStartPost=/sbin/iptables -P FORWARD ACCEPT' /lib/systemd/system/docker.service
    - shell: cat /lib/systemd/system/docker.service
    - shell: systemctl daemon-reload
    - service: name=docker state=restarted enabled=yes
[liruilong@master ansible]$ ls
ansible.cfg  install_docker_check.yml  install_docker_playbook.yml  inventory  roles  timesync.yml
[liruilong@master ansible]$ ansible-playbook install_docker_playbook.yml
```
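One caveat: the raw shell append to /etc/sysctl.conf is not idempotent; every run of the playbook adds another net.ipv4.ip_forward line. A grep guard (or Ansible's sysctl module) avoids that. A standalone sketch of the guard on a scratch file, with a hypothetical path and starting content:

```shell
# scratch file standing in for /etc/sysctl.conf (path and content hypothetical)
F=/tmp/sysctl.conf.demo
printf 'vm.swappiness = 10\n' > "$F"

# append the setting only if the key is not already present
grep -q '^net.ipv4.ip_forward' "$F" || echo 'net.ipv4.ip_forward = 1' >> "$F"
# a second run is a no-op thanks to the guard
grep -q '^net.ipv4.ip_forward' "$F" || echo 'net.ipv4.ip_forward = 1' >> "$F"

grep -c '^net.ipv4.ip_forward' "$F"   # prints 1
```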
Then we write a check playbook, install_docker_check.yml, to verify the Docker installation
```yaml
- name: install_docker-check
  hosts: node1,node2
  ignore_errors: yes
  tasks:
    - shell: docker info
      register: out
    - debug: msg="{{out}}"
    - shell: systemctl -all | grep firewalld
      register: out1
    - debug: msg="{{out1}}"
    - shell: cat /etc/sysctl.conf
      register: out2
    - debug: msg="{{out2}}"
    - shell: cat /lib/systemd/system/docker.service
      register: out3
    - debug: msg="{{out3}}"
```
```
[liruilong@master ansible]$ ls
ansible.cfg  install_docker_check.yml  install_docker_playbook.yml  inventory  roles  timesync.yml
[liruilong@master ansible]$ cat install_docker_check.yml
- name: install_docker-check
  hosts: node1,node2
  ignore_errors: yes
  tasks:
    - shell: docker info
      register: out
    - debug: msg="{{out}}"
    - shell: systemctl -all | grep firewalld
      register: out1
    - debug: msg="{{out1}}"
    - shell: cat /etc/sysctl.conf
      register: out2
    - debug: msg="{{out2}}"
    - shell: cat /lib/systemd/system/docker.service
      register: out3
    - debug: msg="{{out3}}"
[liruilong@master ansible]$ ansible-playbook install_docker_check.yml
```
2. Installing etcd
Install etcd (a key-value database) on the kube-master machine, then create the network configuration
Write the Ansible playbook install_etcd_playbook.yml
```yaml
- name: install etcd on master
  hosts: 127.0.0.1
  tasks:
    - yum: name=etcd state=present
    - lineinfile: path=/etc/etcd/etcd.conf regexp=ETCD_LISTEN_CLIENT_URLS="http://localhost:2379" line=ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    - shell: cat /etc/etcd/etcd.conf
      register: out
    - debug: msg="{{out}}"
    - service: name=etcd state=restarted enabled=yes
```
```
[liruilong@master ansible]$ ls
ansible.cfg               install_docker_playbook.yml  inventory  timesync.yml
install_docker_check.yml  install_etcd_playbook.yml    roles
[liruilong@master ansible]$ cat install_etcd_playbook.yml
- name: install etcd on master
  hosts: 127.0.0.1
  tasks:
    - yum: name=etcd state=present
    - lineinfile: path=/etc/etcd/etcd.conf regexp=ETCD_LISTEN_CLIENT_URLS="http://localhost:2379" line=ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
    - shell: cat /etc/etcd/etcd.conf
      register: out
    - debug: msg="{{out}}"
    - service: name=etcd state=restarted enabled=yes
[liruilong@master ansible]$ ansible-playbook install_etcd_playbook.yml
```
1. Create the network configuration: 10.254.0.0/16
```
[liruilong@master ansible]$ etcdctl ls /
[liruilong@master ansible]$ etcdctl mk /atomic.io/network/config '{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}}'
{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}}
[liruilong@master ansible]$ etcdctl ls /
/atomic.io
[liruilong@master ansible]$ etcdctl ls /atomic.io
/atomic.io/network
[liruilong@master ansible]$ etcdctl ls /atomic.io/network
/atomic.io/network/config
[liruilong@master ansible]$ etcdctl get /atomic.io/network/config
{"Network": "10.254.0.0/16", "Backend": {"Type":"vxlan"}}
[liruilong@master ansible]$
```
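Before moving on it is worth confirming that etcd itself reports healthy. Both commands below are standard etcdctl v2 usage, though this check was not part of the original session:

```
etcdctl cluster-health
etcdctl get /atomic.io/network/config   # flanneld will read its subnet config from this key
```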
3. Installing and configuring flannel (on all k8s machines)
flannel is a network fabric service. Its job is to give Docker containers created on different hosts in the k8s cluster virtual IP addresses that are unique across the cluster. flannel also builds an overlay network between these virtual IPs, through which containers on different hosts can reach one another; roughly the role a VLAN plays.
The kube-master management host runs no Docker; it only needs flannel installed, configured, started, and enabled at boot.
1. Add the master node to the Ansible host inventory
Since the master machine also needs packages installed and configured, we add master to the host inventory
```
[liruilong@master ansible]$ sudo cat /etc/hosts
192.168.1.11 node2
192.168.1.9  node1
192.168.1.10 master
[liruilong@master ansible]$ ls
ansible.cfg               install_docker_playbook.yml  inventory  timesync.yml
install_docker_check.yml  install_etcd_playbook.yml    roles
[liruilong@master ansible]$ cat inventory
master

[nodes]
node1
node2
[liruilong@master ansible]$ ansible master -m ping
master | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
```
2. Installing and configuring flannel
Write the playbook install_flannel_playbook.yml:
```yaml
- name: install flannel on all
  hosts: all
  vars:
    group_node: nodes
  tasks:
    - yum:
        name: flannel
        state: present
    - lineinfile:
        path: /etc/sysconfig/flanneld
        regexp: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
        line: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.10:2379"
    - service:
        name: docker
        state: stopped
      when: group_node in group_names
    - service:
        name: flanneld
        state: restarted
        enabled: yes
    - service:
        name: docker
        state: restarted
      when: group_node in group_names
```
Before running the playbook, turn off firewalld on master, or open port 2379.
```
[liruilong@master ansible]$ su root
Password:
┌──[root@master]-[/home/liruilong/ansible]
└─$ systemctl disable flanneld.service --now
Removed symlink /etc/systemd/system/multi-user.target.wants/flanneld.service.
Removed symlink /etc/systemd/system/docker.service.wants/flanneld.service.
┌──[root@master]-[/home/liruilong/ansible]
└─$ systemctl status flanneld.service
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: inactive (dead)

Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.046900 50344 manager.go:149] Using interface with name ens33 and address 192.168.1.10
Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.046958 50344 manager.go:166] Defaulting external address to interface address (192.168.1.10)
Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.056681 50344 local_manager.go:134] Found lease (10.254.68.0/24) for current IP (192..., reusing
Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.060343 50344 manager.go:250] Lease acquired: 10.254.68.0/24
Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.062427 50344 network.go:58] Watching for L3 misses
Sep 12 18:34:24 master flanneld-start[50344]: I0912 18:34:24.062462 50344 network.go:66] Watching for new subnet leases
Sep 12 18:34:24 master systemd[1]: Started Flanneld overlay address etcd agent.
Sep 12 18:40:42 master systemd[1]: Stopping Flanneld overlay address etcd agent...
Sep 12 18:40:42 master flanneld-start[50344]: I0912 18:40:42.194559 50344 main.go:172] Exiting...
Sep 12 18:40:42 master systemd[1]: Stopped Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
┌──[root@master]-[/home/liruilong/ansible]
└─$
```
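If you would rather keep firewalld running on master, opening the etcd client port is enough; a sketch using standard firewall-cmd flags:

```
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --reload
```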
```
┌──[root@master]-[/home/liruilong/ansible]
└─$ cat install_flannel_playbook.yml
- name: install flannel on all
  hosts: all
  vars:
    group_node: nodes
  tasks:
    - yum:
        name: flannel
        state: present
    - lineinfile:
        path: /etc/sysconfig/flanneld
        regexp: FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
        line: FLANNEL_ETCD_ENDPOINTS="http://192.168.1.10:2379"
    - service:
        name: docker
        state: stopped
      when: group_node in group_names
    - service:
        name: flanneld
        state: restarted
        enabled: yes
    - service:
        name: docker
        state: restarted
      when: group_node in group_names
┌──[root@master]-[/home/liruilong/ansible]
└─$ ansible-playbook install_flannel_playbook.yml
```
3. Testing flannel
This could also be done with Ansible's docker-related modules; for convenience we use the shell module directly.
Write install_flannel_check.yml
```yaml
- name: flannel config check
  hosts: all
  vars:
    nodes: nodes
  tasks:
    - block:
        - shell: ifconfig docker0 | head -2
          register: out
        - debug: msg="{{out}}"
        - shell: docker rm -f {{inventory_hostname}}
        - shell: docker run -itd --name {{inventory_hostname}} centos
          register: out1
        - debug: msg="{{out1}}"
      when: nodes in group_names
    - shell: ifconfig flannel.1 | head -2
      register: out
    - debug: msg="{{out}}"
```
Run the playbook
```
[liruilong@master ansible]$ cat install_flannel_check.yml
- name: flannel config check
  hosts: all
  vars:
    nodes: nodes
  tasks:
    - block:
        - shell: ifconfig docker0 | head -2
          register: out
        - debug: msg="{{out}}"
        - shell: docker rm -f {{inventory_hostname}}
        - shell: docker run -itd --name {{inventory_hostname}} centos
          register: out1
        - debug: msg="{{out1}}"
      when: nodes in group_names
    - shell: ifconfig flannel.1 | head -2
      register: out
    - debug: msg="{{out}}"
[liruilong@master ansible]$
[liruilong@master ansible]$ ansible-playbook install_flannel_check.yml

PLAY [flannel config check] *****************************************************

TASK [Gathering Facts] **********************************************************
ok: [master]
ok: [node2]
ok: [node1]

TASK [shell] ********************************************************************
skipping: [master]
changed: [node2]
changed: [node1]

TASK [debug] ********************************************************************
skipping: [master]
ok: [node1] => {
    "msg": {
        "changed": true,
        "cmd": "ifconfig docker0 | head -2",
        "delta": "0:00:00.021769",
        "end": "2021-09-12 21:51:44.826682",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 21:51:44.804913",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "docker0: flags=4163
```
Verify that the centos container on node1 can ping the centos container on node2
```
[liruilong@master ansible]$ ssh node1
Last login: Sun Sep 12 21:58:49 2021 from 192.168.1.10
[liruilong@node1 ~]$ sudo docker exec -it node1 /bin/bash
[root@1c0628dcb7e7 /]# ip a
1: lo:
```
The ping succeeds. At this point the flannel network is configured, and containers on different machines can reach one another.
4. Installing and deploying kube-master
With the network configured, we install and configure the kube-master components on the master management node. First check whether the packages are available:
```
[liruilong@master ansible]$ yum list kubernetes-*
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
kubernetes.x86_64            1.5.2-0.7.git269f928.el7    extras
kubernetes-ansible.noarch    0.6.0-0.1.gitd65ebd5.el7    epel
kubernetes-client.x86_64     1.5.2-0.7.git269f928.el7    extras
kubernetes-master.x86_64     1.5.2-0.7.git269f928.el7    extras
kubernetes-node.x86_64       1.5.2-0.7.git269f928.el7    extras
[liruilong@master ansible]$ ls /etc/yum.repos.d/
```
If 1.10 packages were available it would be best to use them; here only 1.5 is available, so we try 1.5 first. I could not find a yum repo for 1.10.
Write the install_kube-master_playbook.yml playbook
```yaml
- name: install kube-master on master
  hosts: master
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-master,kubernetes-client
        state: absent
    - yum:
        name: kubernetes-master
        state: present
    - yum:
        name: kubernetes-client
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line: KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
        line: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
        line: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
        line: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
        line: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    - service:
        name: kube-apiserver
        state: restarted
        enabled: yes
    - service:
        name: kube-controller-manager
        state: restarted
        enabled: yes
    - service:
        name: kube-scheduler
        state: restarted
        enabled: yes
    - shell: kubectl get cs
      register: out
    - debug: msg="{{out}}"
```
Run the playbook
```
[liruilong@master ansible]$ ansible-playbook install_kube-master_playbook.yml
............
TASK [debug] ********************************************************************
ok: [master] => {
    "msg": {
        "changed": true,
        "cmd": "kubectl get cs",
        "delta": "0:00:05.653524",
        "end": "2021-09-12 23:44:58.030756",
        "failed": false,
        "rc": 0,
        "start": "2021-09-12 23:44:52.377232",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "NAME                 STATUS    MESSAGE             ERROR\nscheduler            Healthy   ok                  \ncontroller-manager   Healthy   ok                  \netcd-0               Healthy   {\"health\":\"true\"}   ",
        "stdout_lines": [
            "NAME                 STATUS    MESSAGE             ERROR",
            "scheduler            Healthy   ok                  ",
            "controller-manager   Healthy   ok                  ",
            "etcd-0               Healthy   {\"health\":\"true\"}   "
        ]
    }
}

PLAY RECAP **********************************************************************
master  : ok=13  changed=4  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

[liruilong@master ansible]$ cat install_kube-master_playbook.yml
- name: install kube-master on master
  hosts: master
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-master,kubernetes-client
        state: absent
    - yum:
        name: kubernetes-master
        state: present
    - yum:
        name: kubernetes-client
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line: KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
        line: KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
        line: KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.1.10:2379"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
        line: KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    - lineinfile:
        path: /etc/kubernetes/apiserver
        regexp: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
        line: KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
    - service:
        name: kube-apiserver
        state: restarted
        enabled: yes
    - service:
        name: kube-controller-manager
        state: restarted
        enabled: yes
    - service:
        name: kube-scheduler
        state: restarted
        enabled: yes
    - shell: kubectl get cs
      register: out
    - debug: msg="{{out}}"
[liruilong@master ansible]$
```
5. Install and Deploy kube-node
Once the master is installed, we deploy the corresponding compute nodes by installing kube-node (performed on all Node machines).
Writing the playbook: install_kube-node_playbook.yml
[liruilong@master ansible]$ cat install_kube-node_playbook.yml
- name: install kube-node or nodes
  hosts: nodes
  tasks:
    - shell: swapoff -a
    - replace:
        path: /etc/fstab
        regexp: "/dev/mapper/centos-swap"
        replace: "#/dev/mapper/centos-swap"
    - shell: cat /etc/fstab
      register: out
    - debug: msg="{{out}}"
    - shell: getenforce
      register: out
    - debug: msg="{{out}}"
    - shell: setenforce 0
      when: out.stdout != "Disabled"
    - replace:
        path: /etc/selinux/config
        regexp: "SELINUX=enforcing"
        replace: "SELINUX=disabled"
    - shell: cat /etc/selinux/config
      register: out
    - debug: msg="{{out}}"
    - yum_repository:
        name: Kubernetes
        description: K8s aliyun yum
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: yes
        gpgkey: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
        repo_gpgcheck: yes
        enabled: yes
    - yum:
        name: kubernetes-node
        state: absent
    - yum:
        name: kubernetes-node
        state: present
    - lineinfile:
        path: /etc/kubernetes/config
        regexp: KUBE_MASTER="--master=http://127.0.0.1:8080"
        line: KUBE_MASTER="--master=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_ADDRESS="--address=127.0.0.1"
        line: KUBELET_ADDRESS="--address=0.0.0.0"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
        line: KUBELET_HOSTNAME="--hostname-override={{inventory_hostname}}"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
        line: KUBELET_API_SERVER="--api-servers=http://192.168.1.10:8080"
    - lineinfile:
        path: /etc/kubernetes/kubelet
        regexp: KUBELET_ARGS=""
        line: KUBELET_ARGS="--cgroup-driver=systemd --kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
    - shell: kubectl config set-cluster local --server="http://192.168.1.10:8080"
    - shell: kubectl config set-context --cluster="local" local
    - shell: kubectl config set current-context local
    - shell: kubectl config view
      register: out
    - debug: msg="{{out}}"
    - copy:
        dest: /etc/kubernetes/kubelet.kubeconfig
        content: "{{out.stdout}}"
        force: yes
    - shell: docker pull tianyebj/pod-infrastructure:latest
    - service:
        name: kubelet
        state: restarted
        enabled: yes
    - service:
        name: kube-proxy
        state: restarted
        enabled: yes

- name: service check
  hosts: master
  tasks:
    - shell: sleep 10
      async: 11
    - shell: kubectl get node
      register: out
    - debug: msg="{{out}}"
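The three `kubectl config` commands near the end build the kubelet's kubeconfig, which the `copy` task then writes to /etc/kubernetes/kubelet.kubeconfig from the registered `kubectl config view` output. A rough sketch of what that file ends up containing; the YAML below is hand-written with a heredoc purely as an illustration (the exact field layout is an assumption, since the real file comes from `kubectl config view` on each node):

```shell
#!/bin/sh
# Illustrative reconstruction of the kubeconfig the playbook generates:
# a minimal config pointing the kubelet at the master's insecure API port.
set -e
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.1.10:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
EOF
# The one field that matters for this tutorial: the kubelet talks to the
# master over plain HTTP on 8080, matching KUBELET_API_SERVER above.
grep -q 'server: http://192.168.1.10:8080' "$kubeconfig" && echo "kubeconfig points at the master"
```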
Run the playbook install_kube-node_playbook.yml:
[liruilong@master ansible]$ ansible-playbook install_kube-node_playbook.yml
...........
TASK [debug] **************************************************************************************************************************************************
ok: [master] => {
    "msg": {
        "changed": true,
        "cmd": "kubectl get node",
        "delta": "0:00:00.579772",
        "end": "2021-09-15 02:00:34.829752",
        "failed": false,
        "rc": 0,
        "start": "2021-09-15 02:00:34.249980",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "NAME      STATUS    AGE\nnode1     Ready     1d\nnode2     Ready     1d",
        "stdout_lines": [
            "NAME      STATUS    AGE",
            "node1     Ready     1d",
            "node2     Ready     1d"
        ]
    }
}

PLAY RECAP ****************************************************************************************************************************************************
master                     : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
node1                      : ok=27   changed=19   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
node2                      : ok=27   changed=19   unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

[liruilong@master ansible]$
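A quick way to sanity-check the registered `kubectl get node` output is to count the rows reporting Ready. The sketch below runs awk over the sample output captured above; on the master you would pipe `kubectl get node` in directly instead of using a hard-coded string:

```shell
#!/bin/sh
# Count Ready nodes in `kubectl get node` output (sample data from the play).
out='NAME      STATUS    AGE
node1     Ready     1d
node2     Ready     1d'

# Skip the header row (NR > 1) and keep lines whose STATUS column is Ready.
ready=$(printf '%s\n' "$out" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "ready nodes: $ready"
# prints: ready nodes: 2
```

With three Node machines planned, seeing only node1 and node2 here would be the cue to check the remaining node's kubelet and kube-proxy services.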
6. Install and Deploy kube-dashboard
Installing the dashboard image: kubernetes-dashboard is the web management panel for Kubernetes. Note that its version, including the config file, must match the K8s version.
[liruilong@master ansible]$ ansible node1 -m shell -a 'docker search kubernetes-dashboard'
[liruilong@master ansible]$ ansible node1 -m shell -a 'docker pull docker.io/rainf/kubernetes-dashboard-amd64'
The kube-dashboard.yaml file: modify the dashboard's yaml file (done on kube-master).
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      # Comment the following annotation if Dashboard must not be deployed on master
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
        image: docker.io/rainf/kubernetes-dashboard-amd64  # the default image comes from Google's registry; switched here to one on Docker Hub
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          - --apiserver-host=http://192.168.1.10:8080  # note: this is the API server address
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
  selector:
    app: kubernetes-dashboard
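Three port numbers cooperate in the Service above: a client hits any node on nodePort 30090, the Service forwards that to its own port 80, and port 80 targets the container's 9090. A small sketch that pulls the nodePort out of an abbreviated copy of the manifest (saved to a temp file purely for illustration) with awk:

```shell
#!/bin/sh
# Extract the externally reachable port from the Service definition.
set -e
svc=$(mktemp)
cat > "$svc" <<'EOF'
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30090
EOF
# nodePort is what you put in the browser URL: http://<node-ip>:<nodePort>/
awk '/nodePort:/ {print $2}' "$svc"
# prints: 30090
```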
Create the dashboard container from the yaml file (on kube-master):
[liruilong@master ansible]$ vim kube-dashboard.yaml
[liruilong@master ansible]$ kubectl create -f kube-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[liruilong@master ansible]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-1953799730-jjdfj   1/1       Running   0          6s
[liruilong@master ansible]$
Check which node it is running on, then try to access it:
[liruilong@master ansible]$ ansible nodes -a "docker ps"
node2 | CHANGED | rc=0 >>
CONTAINER ID        IMAGE                                                        COMMAND                  CREATED             STATUS              PORTS               NAMES
14433d421746        docker.io/rainf/kubernetes-dashboard-amd64                   "/dashboard --port..."   10 minutes ago      Up 10 minutes                           k8s_kubernetes-dashboard.c82dac6b_kubernetes-dashboard-1953799730-jjdfj_kube-system_ea2ec370-1594-11ec-bbb1-000c294efe34_9c65bb2a
afc4d4a56eab        registry.access.redhat.com/rhel7/pod-infrastructure:latest   "/usr/bin/pod"           10 minutes ago      Up 10 minutes                           k8s_POD.28c50bab_kubernetes-dashboard-1953799730-jjdfj_kube-system_ea2ec370-1594-11ec-bbb1-000c294efe34_6851b7ee
node1 | CHANGED | rc=0 >>
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[liruilong@master ansible]$
It is running on node2, so it can be reached at http://192.168.1.11:30090/ . Let's give it a try.
Well, that completes the whole Linux+Docker+Ansible+K8S learning environment. This way of deploying k8s is somewhat dated, but we're just starting out, so one step at a time. Now on to some happy K8S learning.
Copyright notice: this content was contributed by an internet user; copyright remains with the original author, and this site neither owns the copyright nor assumes the corresponding legal liability. If you find suspected plagiarism or misrepresentation on this site, please contact jiasou666@gmail.com; once verified, the infringing content will be removed within 24 hours.