## Manually Setting Up Kubernetes 1.11
This experiment builds an internal k8s cluster by hand, i.e. without authentication:
1. Build the VMs with Vagrant and VirtualBox.
2. One etcd instance and one worker node; initially the master does not also serve as a node, so for now the cluster is one master plus one node.
- Preface
I had long wanted a simple tutorial on building k8s by hand (without authentication), as a way to learn the basics and form a simple mental model. But the guides I found either had no matching version or relied on automated tooling, so in the end I chose the manual route and to work through the problems it brings.
- Cluster info
node1 is the master; node2 is the cluster's worker node.
name | ip
---|---
master/node1 | 192.168.59.11
node2 | 192.168.59.12
- Note:
- For an even simpler setup, build a single-node k8s environment
- i.e., skip node2 and deploy the master and node1 roles on one machine
- node1 is then configured the same way as node2, changing only the relevant parameters (e.g. the kubelet address)
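The name-to-IP mapping above can optionally be made resolvable by hostname. A minimal sketch, assuming you want `/etc/hosts` entries on each VM (the guide itself uses raw IPs, so this is a convenience, not a requirement):

```shell
# Append the cluster hosts (shown against a temp file; on the real VMs the
# target would be /etc/hosts).
hosts_file=$(mktemp)
cat >> "$hosts_file" <<'EOF'
192.168.59.11 node1
192.168.59.12 node2
EOF
cat "$hosts_file"
```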
- Setup process
- Overview
>1. Obtain the Kubernetes binaries; here they are built from source
>2. Obtain the etcd binary, also built from source
>3. Start the VMs; VirtualBox, Vagrant, and Ubuntu 16.04 are used for the hosts
>4. On the master, configure etcd, kube-apiserver, kube-controller-manager, and kube-scheduler
>5. On node2, configure kubelet and kube-proxy
>6. Verify the cluster: run ```kubectl get nodes``` on the master to check its state
- For the detailed steps, see:
- [The Definitive Guide to Kubernetes, Chapter 2](https://item.jd.com/12230380.html)
- [domac's blog — Learning Kubernetes in Depth (3): Installing Kubernetes by Hand](http://lihaoquan.me/2017/3/12/install-kubernetes-cluster--on-centos.html)
- Result:
```bash
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready
```
- QA
- Q: k8s changes significantly between versions; the references target pre-1.8 releases, and many startup flags have since changed
- A: in k8s 1.8+ the kubelet's --api-servers flag is gone; the API server address now comes from a kubeconfig file (kubelet.kubeconfig) passed via --kubeconfig
- kubelet startup parameters
```bash
#kubelet.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS $KUBELET_ADDRESS
#/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=192.168.59.12"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--logtostderr=false --log-dir=/var/log/kubernetes --v=2"
```
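Before (re)starting the service, the drop-in environment file can be sanity-checked by sourcing it in a shell and printing what ExecStart will see. A minimal sketch (the temp path stands in for /etc/kubernetes/kubelet):

```shell
# Both systemd's EnvironmentFile and the shell treat the trailing backslash
# as a line continuation, so KUBELET_ARGS expands to a single line of flags.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
KUBELET_ADDRESS="--address=192.168.59.12"
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \
--logtostderr=false --log-dir=/var/log/kubernetes --v=2"
EOF
. "$envfile"
echo "$KUBELET_ARGS $KUBELET_ADDRESS"
```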
- bootstrap.kubeconfig (with authentication enabled, the SSL certificates etc. would be generated and filled in here)
```yaml
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
server: https://192.168.59.11:8080
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubelet-bootstrap
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
user:
token:
```
- Q: ```kubectl get node``` on the master returns ```No resources found.```
- A: the cause is failed authentication: this experiment runs without authentication, but the server address in bootstrap.kubeconfig uses https, which expects TLS; changing the scheme to http makes the API reachable
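The fix is one line in bootstrap.kubeconfig, sketched here against a temp copy (on node2, apply the same edit to /etc/kubernetes/bootstrap.kubeconfig and then restart kubelet):

```shell
# Rewrite the server scheme from https to http; this cluster runs without TLS.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    server: https://192.168.59.11:8080
EOF
sed -i 's|server: https://|server: http://|' "$cfg"
grep 'server:' "$cfg"
# afterwards: systemctl restart kubelet
```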
- ```kubectl get nodes```
```bash
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready
```
- Troubleshooting
- Check the kubelet service's error log
```bash
kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-08-25 13:49:07 UTC; 12h ago
Main PID: 14611 (kubelet)
Tasks: 12
Memory: 43.4M
CPU: 42.624s
CGroup: /system.slice/kubelet.service
└─14611 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --address=192.168.59.12
Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.960652   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:14 node2 kubelet[14611]: E0826 02:13:14.966460   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.016605   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.59.11:8080/api/v1/services?limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
Aug 26 02:13:15 node2 kubelet[14611]: E0826 02:13:15.963891   14611 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.59.11:8080/api/v1/pods?fieldSelector=spec.nodeName%3Dnode2&limit=500&resourceVersion=0: http: server gave HTTP response to HTTPS client
```
These are failed-request errors; note the message ```http: server gave HTTP response to HTTPS client```: the apiserver's insecure port (8080) serves plain HTTP, yet kubelet is calling it over https. Test the endpoint directly:
```https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2```
- Test the endpoint with the `http` client (HTTPie): an SSL error appears, i.e. the authentication mismatch, so change the scheme to http and retry
- http https://192.168.59.11:8080/...
```bash
root@node2:/etc/systemd/system# http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
[1] 22638
[2] 22639
root@node2:/etc/systemd/system#
http: error: SSLError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
[1]-  Exit 1                  http https://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
[2]+  Done                    limit=500
root@node2:/etc/systemd/system#
```
- http http://192.168.59.11:8080/... , which gets a response
```bash
root@node2:/etc/systemd/system# http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0
[1] 22833
[2] 22834
root@node2:/etc/systemd/system# HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 26 Aug 2018 03:53:20 GMT
Transfer-Encoding: chunked
{
"apiVersion": "v1",
"items": [
{
"metadata": {
"annotations": {
"node.alpha.kubernetes.io/ttl": "0",
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
},
"creationTimestamp": "2018-08-26T02:33:27Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/hostname": "node2"
},
"name": "node2",
"resourceVersion": "15170",
"selfLink": "/api/v1/nodes/node2",
"uid": "69b8f2ca-a8d8-11e8-a889-02483e15b50c"
},
"spec": {},
"status": {
"addresses": [
{
"address": "192.168.59.12",
"type": "InternalIP"
},
{
"address": "node2",
"type": "Hostname"
}
],
"allocatable": {
"cpu": "1",
"ephemeral-storage": "9306748094",
"hugepages-2Mi": "0",
"memory": "1945760Ki",
"pods": "110"
},
"capacity": {
"cpu": "1",
"ephemeral-storage": "10098468Ki",
"hugepages-2Mi": "0",
"memory": "2048160Ki",
"pods": "110"
},
"conditions": [
{
"lastHeartbeatTime": "2018-08-26T03:53:13Z",
"lastTransitionTime": "2018-08-26T03:15:11Z",
"message": "kubelet has sufficient disk space available",
"reason": "KubeletHasSufficientDisk",
"status": "False",
"type": "OutOfDisk"
},
{
"lastHeartbeatTime": "2018-08-26T03:53:13Z",
"lastTransitionTime": "2018-08-26T03:15:11Z",
"message": "kubelet has sufficient memory available",
"reason": "KubeletHasSufficientMemory",
"status": "False",
"type": "MemoryPressure"
},
{
"lastHeartbeatTime": "2018-08-26T03:53:13Z",
"lastTransitionTime": "2018-08-26T03:15:11Z",
"message": "kubelet has no disk pressure",
"reason": "KubeletHasNoDiskPressure",
"status": "False",
"type": "DiskPressure"
},
{
"lastHeartbeatTime": "2018-08-26T03:53:13Z",
"lastTransitionTime": "2018-08-26T02:33:27Z",
"message": "kubelet has sufficient PID available",
"reason": "KubeletHasSufficientPID",
"status": "False",
"type": "PIDPressure"
},
{
"lastHeartbeatTime": "2018-08-26T03:53:13Z",
"lastTransitionTime": "2018-08-26T03:15:21Z",
"message": "kubelet is posting ready status. AppArmor enabled",
"reason": "KubeletReady",
"status": "True",
"type": "Ready"
}
],
"daemonEndpoints": {
"kubeletEndpoint": {
"Port": 10250
}
},
"nodeInfo": {
"architecture": "amd64",
"bootID": "f4cb0a01-e5b9-4851-83d9-ea6556bd285e",
"containerRuntimeVersion": "docker://17.3.2",
"kernelVersion": "4.4.0-133-generic",
"kubeProxyVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2",
"kubeletVersion": "v1.11.3-beta.0.3+798ca4d3ceb5b2",
"machineID": "fe02b8afeb1041cfa61a6b1d40371316",
"operatingSystem": "linux",
"osImage": "Ubuntu 16.04.5 LTS",
"systemUUID": "98A4443F-059B-462C-900A-AFA32971670D"
}
}
}
],
"kind": "NodeList",
"metadata": {
"resourceVersion": "15179",
"selfLink": "/api/v1/nodes"
}
}
[1]-  Done                    http http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2
[2]+  Done                    limit=500
root@node2:/etc/systemd/system#
```
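A side note on the `[1] 22833` and `[2]+ Done limit=500` lines in the output above: the URL was passed unquoted, so the shell treated each `&` as a job separator. Only the first query parameter reached the client, while `limit=500` and `resourceVersion=0` ran as separate background jobs. Quoting the URL keeps the whole query string together:

```shell
# Quoted, the full query string is one argument; echo stands in here for the
# http/curl invocation.
url='http://192.168.59.11:8080/api/v1/nodes?fieldSelector=metadata.name%3Dnode2&limit=500&resourceVersion=0'
echo "$url"
```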
- Run ```kubectl get nodes``` on the master again; the resource is now found
```bash
root@node1:/etc/kubernetes# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node2     Ready
```
In summary: k8s versions change, startup flags change with them, and building a cluster from older material therefore runs into problems; the one resolved here is the removal of the kubelet's --api-servers flag. Recommendation: follow the classic references for the overall procedure, but whenever something fails, always check the official documentation for your exact version.
- Next steps
- Run a demo workload on the cluster
- Add authentication