Elasticsearch + Filebeat + Kibana + Metricbeat: a unified cluster logging platform demo on K8s
Preface
I'm learning K8s, so I'm organizing my notes and sharing them.
A word of warning: this stack is very hardware-hungry; you need a fairly high-spec machine to run it.
My laptop has 16 GB of RAM, and the cluster is deployed on three VMs, with each worker node at 3 cores / 5 GB.
I spent two days failing to get it running, then gave up; after some digging it turned out to be a resource problem. Two days of holiday wasted :(
Then I slept on it, and when I got up it worked. It runs, barely, but it is painfully slow.
What this post covers
A demo of a helm-based log management stack (Elasticsearch + Filebeat + Kibana + Metricbeat) for a k8s cluster
Notes on pitfalls encountered during the setup
Some of the content draws on
CKA/CKAD Exam Guide (《CKA/CKAD應(yīng)試指南》)
Container Cloud Platforms in Practice with Kubernetes (《基于Kubernetes的容器云平臺實(shí)戰(zhàn)》)
The Definitive Guide to Kubernetes (《Kubernetes權(quán)威指南》)
"All I wanted was to try to live in accord with the promptings that came from my true self. Why was that so very difficult?" ------ Hermann Hesse, Demian
Overview of the approach
Once a Kubernetes platform is deployed, it runs a large number of applications. The platform itself and those applications produce large volumes of system and application logs of every kind. Collecting, shipping, storing, and analyzing these logs lets you understand how the system is behaving, and is extremely valuable for troubleshooting, security auditing, and usage statistics. Below I walk through a concrete approach to, and practice of, log collection, transport, storage, and analysis.
From an operations point of view, logs on a Kubernetes platform fall into application logs and platform logs.
Platform logs are the system logs produced by the container cloud platform itself as it runs.
Application logs are the logs produced by the containerized applications deployed by tenants. Logs inside a container are generally written to standard output and end up on the host,
where the container engine, via the Docker daemon, receives and redirects the log data.
Logs a container writes to the console are saved under /var/lib/docker/containers/ in files named *-json.log
┌──[root@vms81.liruilongs.github.io]-[/]
└─$cd /var/lib/docker/containers/
┌──[root@vms81.liruilongs.github.io]-[/var/lib/docker/containers]
└─$ls
0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0  5fde81847fefef953a4575a98e850ce59e1f12c870c9f3e0707a993b8fbecdf0
a72407dce4ac3cd294f8cd39e3fe9b882cbab98d97ffcfad5595f85fb15fec86  06b93b6329012fe79c746bd1136a73396dfbb616b5cecc6ed36baf06fa5c0ba1
622f9edfcadbe129624c57d36e0f30f0857e1fca942bf3fa43ef8c1ebfcd66dd  a9e28c6b2799f4ee6a3c9e3a5aa87b221c005de255b2e26af6269e05e029ca2c
0f527574d23415f46c572eeeb567e91a0e9f8e7257b77a3b12f8c730a4ce02d3  63eb2da8d7b2fa0228b8ab96cf7d324021ef86671d9274671e5cb2742d0ef82c
aca4e90b9098278f464a958d867deed97902017021821ce2b4c7a19d101654a6  18915947e8df200e4dee89681d9494caabf123d706a2620b13834bfd66f6d78b
6800a57d9800e8f7a9003b1703874ecb6901e5a405e756615e067ae12d2e73cd  cc630cbd652e029ca5e31bef39a9c5ee34edac9bd42290eb3453bb418daa13b6
19f3460f872892d06e0ebae8edf23205f1ee414b50d3369f8fd43c6c1c56f5ca  68932b255da53c87bde7267f4e0cdce62354233ff4edb4c2c5ea3d52678cc7eb
d317fec9c72bc6ba8ea269e7d051469812c21fe34ae8f485d771fc0442e726eb  1e0290e55ef51f8244b22fd06ec7ab92ba710bb4020ccbe7e17680fef7123217
717a70f583304f964fe1e49cb70ae5ed0736cc07d3ae99c317d603b6c32a4d7b  d935bf0aa2729e97af8cec9f068add2204af58fc274fe0be16de78767f45ae25
2e045fb8b9d0cf9d11942e4d354e5a227d719eb89833c00f10a13ab754c60550  79d7cd4921310c2ebc67f9b991c978315c9976a2fd40259c679a896b360a79c4
df71f8784e0e6edc8ff5d97b75aa02e748afbdf469e887c7b26869918c170057  39155bfd5218c7646d944b91287473c4327df1cf65a75376180243dccbb3d993
8e0790b31b66978013df49d57adeaad6e494fc8300ef60545a09227988610fc2  ea17614942ba0d562064f93601a976887ee2f376a2c53362baf880dbc2c17f9e
3a83887643718df6e59ccdb2fe487dd02ebe0215d23b1c2b7188964efee648d4  9d6e87acdc3a86ed570380210fde566dd3a8238d5229841a5c58bddd70b3164c
eebf0c7156b0d092bde382b7faf14b5ce398d18f97709234ff3d2f66ec0bc2f9  3bd8ffe600611bcb97d2e4b3c698092b684a89dfc4217872807a982508c604eb
9e603394e21c0d9217c0d25feae9e728bc028fa350d413dadea9b2b91817fa49  ef6b34f07e2d4c70885729a3055a55d918bd0e4d653ba0b5d81ae51fb4697d1f
3daa58413a70f73224dafcfeb71bd1dae1664e6e83e37524ecf1bd26a0aca8fd  9ebf4fbf13c0c45033123edb3d5f6d4254897739a37295e9b7699e0d40c1005d
f106a4acfdb893407bc12a5a2467be0097bbfbadd737cf8f0dc99e111b477d9d  46db748a9a51a0b04fbf96c1f0203db0667bea1fbaf488c7c65f5d3451a2ec7c
a12a02457513345604e6251d2c8a286f617e972a800f19e4d04b8888114ea6e8  fcc0a0be4353a25adcf5e6016d5a29d06e27d5945c559c811d19799b340fc962
5983afe06986e8958e417b036e673586358309e87b0e76a407c788a88348e6d7  a66a7eee79ced84e6d9201ee38f9ab887f7bfc0103e1172e107b337108a63638
ff545c3a08264a28968800fe0fb2bbd5f4381029f089e9098c9e1484d310fcc1
┌──[root@vms81.liruilongs.github.io]-[/var/lib/docker/containers]
└─$cd 0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0;ls
0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0-json.log  checkpoints  config.v2.json  hostconfig.json  mounts
┌──[root@vms81.liruilongs.github.io]-[/var/lib/docker/containers/0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0]
└─$
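Each line of such a *-json.log file is one JSON object with log, stream, and time fields. A minimal sketch of pulling a field out of one line; the sample line below is made up for illustration (a real pipeline would read the file itself and likely use jq):

```shell
# A made-up line in the format Docker's json-file logging driver writes;
# real lines live in /var/lib/docker/containers/<id>/<id>-json.log.
sample='{"log":"hello from the app\n","stream":"stdout","time":"2022-02-05T03:15:45.0Z"}'

# Extract the "stream" field with sed; prints: stdout
echo "$sample" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p'
```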
For high-volume data like application logs, consider redirecting the data straight to a collector over the network rather than going through files on disk, which can speed up data flow to some degree. Because containers are dynamically created and released, it is especially important for application log collection that each record carries identifying information about its source, such as topology information and IP address. Given their sheer volume, application logs should be retained only for a period appropriate to the system, with protection against exhausting the storage space.
In a Kubernetes cluster, a complete application or service involves a large number of components, and both the nodes they run on and the number of instances are variable. Without centralized management, the logging subsystem makes operational support very difficult, so it is necessary to collect and search logs in a unified way at the cluster level. This is where ELK and EFK come in.
ELK
ELK is short for Elasticsearch, Logstash, and Kibana, and is the core suite for container log management.
ElasticSearch:
A real-time full-text search and analytics engine built on Apache Lucene, providing the three core capabilities of collecting, analyzing, and storing structured data. It exposes both a REST interface and a Java API.
Logstash:
A log collection and processing framework, and a tool for log forwarding and filtering. It supports many kinds of logs, including system logs, error logs, and custom application logs. It can receive logs from many sources, including syslog, messaging systems (such as RabbitMQ), and JMX, and it can output data in many ways, including email, WebSockets, and Elasticsearch.
Kibana:
A graphical web application that calls the REST or Java API provided by Elasticsearch to search the logs stored there, analyzes them around a chosen topic, and visualizes the results in the UI as needed. Kibana also supports customizable dashboards, so different views can be built to serve different needs.
EFK
EFK:
Logstash has relatively poor performance and high resource consumption, lacks message-queue buffering, and can lose data, so it is generally replaced with fluentd or filebeat.
For the components involved in this logging stack, the official sites are listed here for anyone who wants to dig deeper.
Deploying a unified log management system on a K8s cluster has two prerequisites:
The API Server is correctly configured with a CA certificate.
A DNS service and cross-host networking are up and running.
Note that this stack involves some RBAC-related resource objects, and the resource files in the helm charts must be reconciled with the resource object versions supported by your cluster. RBAC was introduced in Kubernetes 1.5, promoted to Beta in 1.6, and to GA in 1.8. We use chart version 7.9.1 here, whose manifests still use the Beta RBAC API version, but the cluster runs 1.22, so the resource files need to be modified.
┌──[root@vms81.liruilongs.github.io]-[~/efk]
└─$kubectl api-versions | grep rbac
rbac.authorization.k8s.io/v1
┌──[root@vms81.liruilongs.github.io]-[~/efk]
└─$
Architecture overview
A quick sketch of the architecture, which is quite similar to the k8s cluster monitoring setup I covered earlier. Below is the pod list of the finished EFK deployment. As you can see, the elasticsearch cluster serves as the log storage platform; above it sits kibana (a web application that queries the elasticsearch cluster's data through its API), and below it sit metricbeat and filebeat, the two components that collect logs on the worker nodes and ship them to the elasticsearch cluster.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS        AGE     IP             NODE                         NOMINATED NODE   READINESS GATES
....
elasticsearch-master-0   1/1     Running   2 (3h21m ago)   3h55m   10.244.70.40   vms83.liruilongs.github.io
Environment
helm version
┌──[root@vms81.liruilongs.github.io]-[/var/lib/docker/containers/0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0]
└─$helm version
version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
k8s cluster version
┌──[root@vms81.liruilongs.github.io]-[/var/lib/docker/containers/0606dd217f4f2f315f86fb378ffa65ed0c59ba7580869b9a88634dd6e171fdc0]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   54d   v1.22.2
vms82.liruilongs.github.io   Ready
Adding the EFK helm repo
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm repo add elastic https://helm.elastic.co
"elastic" has been added to your repositories
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm repo list
NAME                  URL
azure                 http://mirror.azure.cn/kubernetes/charts/
ali                   https://apphub.aliyuncs.com
liruilong_repo        http://192.168.26.83:8080/charts
stable                https://charts.helm.sh/stable
prometheus-community  https://prometheus-community.github.io/helm-charts
elastic               https://helm.elastic.co
The version installed is 7.9.1 (--version=7.9.1).
Downloading the EFK charts
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm pull elastic/elasticsearch --version=7.9.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm pull elastic/filebeat --version=7.9.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm pull elastic/metricbeat --version=7.9.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm pull elastic/kibana --version=7.9.1
Check the list of downloaded charts; once downloaded, just extract them with tar zxf
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$ls *7.9.1*
elasticsearch-7.9.1.tgz  filebeat-7.9.1.tgz  kibana-7.9.1.tgz  metricbeat-7.9.1.tgz
Importing the required images
Note that pulling these images takes a long time, so it is best to download them ahead of time. Below are the commands I ran to import them on the worker nodes; be careful that every node needs the images imported.
┌──[root@vms82.liruilongs.github.io]-[/]
└─$docker load -i elastic7.9.1.tar
....
Loaded image: docker.elastic.co/elasticsearch/elasticsearch:7.9.1
┌──[root@vms82.liruilongs.github.io]-[/]
└─$docker load -i filebeat7.9.1.tar
....
Loaded image: docker.elastic.co/beats/filebeat:7.9.1
┌──[root@vms82.liruilongs.github.io]-[/]
└─$docker load -i kibana7.9.1.tar
....
Loaded image: docker.elastic.co/kibana/kibana:7.9.1
┌──[root@vms82.liruilongs.github.io]-[/]
└─$docker load -i metricbeat7.9.1.tar
.....
Loaded image: docker.elastic.co/beats/metricbeat:7.9.1
┌──[root@vms82.liruilongs.github.io]-[/]
└─$
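Repeating the imports on every node is easy to get wrong by hand. A small sketch that generates the per-node docker load commands (the node and tarball names are the ones from this post; the loop only prints the commands, so you can review them and then feed each line to ssh or an ansible ad-hoc task yourself):

```shell
# Nodes and image tarballs used in this post.
nodes="vms82.liruilongs.github.io vms83.liruilongs.github.io"
tars="elastic7.9.1.tar filebeat7.9.1.tar kibana7.9.1.tar metricbeat7.9.1.tar"

# Print one "docker load" command per node per tarball (8 lines total).
for node in $nodes; do
  for tar in $tars; do
    echo "ssh root@$node docker load -i $tar"
  done
done
```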
EFK helm install
Note that some configuration files still need to be modified to match the cluster's situation.
elasticsearch install
Adjusting the es cluster parameters
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$cd elasticsearch/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$ls
Chart.yaml  examples  Makefile  README.md  templates  values.yaml
Change the cluster replica count to 2; we only have 2 worker nodes, so we can only run two replicas
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$cat values.yaml | grep repli
replicas: 3
#  # Add a template to adjust number of shards/replicas
#  curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$sed 's#replicas: 3#replicas: 2#g' values.yaml | grep replicas:
replicas: 2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$sed -i 's#replicas: 3#replicas: 2#g' values.yaml
Change the cluster's minimum number of master nodes to 1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$cat values.yaml | grep mini
minimumMasterNodes: 2
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$sed -i 's#minimumMasterNodes: 2#minimumMasterNodes: 1#g' values.yaml
Change the data persistence setting; here we disable persistence
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$cat values.yaml | grep -A 2 persistence:
persistence:
  enabled: false
  labels:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/elasticsearch]
└─$
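The three edits above (replicas, minimumMasterNodes, persistence) can be sketched as one small script. To keep it self-contained it runs here against a minimal stand-in values.yaml rather than the real chart file, which has many more keys:

```shell
# Stand-in for the chart's values.yaml: only the three keys we edit.
cat > /tmp/values-demo.yaml <<'EOF'
replicas: 3
minimumMasterNodes: 2
persistence:
  enabled: true
EOF

# Apply the same in-place substitutions used above.
sed -i 's#replicas: 3#replicas: 2#g' /tmp/values-demo.yaml
sed -i 's#minimumMasterNodes: 2#minimumMasterNodes: 1#g' /tmp/values-demo.yaml
sed -i 's#enabled: true#enabled: false#g' /tmp/values-demo.yaml

# Show the edited keys.
grep -E 'replicas|minimumMasterNodes|enabled' /tmp/values-demo.yaml
```

On the real chart, run the same sed commands inside the extracted elasticsearch/ directory; alternatively the same values can be passed at install time with helm install --set.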
Installing elasticsearch
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm install elasticsearch elasticsearch
NAME: elasticsearch
LAST DEPLOYED: Sat Feb  5 03:15:45 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=kube-system -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch --cleanup
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$kubectl get pods --namespace=kube-system -l app=elasticsearch-master -w
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-master-0   0/1     Running   0          41s
elasticsearch-master-1   0/1     Running   0          41s
After waiting a while, check the pod status: the elasticsearch cluster installed successfully
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$kubectl get pods --namespace=kube-system -l app=elasticsearch-master
NAME                     READY   STATUS    RESTARTS   AGE
elasticsearch-master-0   1/1     Running   0          2m23s
elasticsearch-master-1   1/1     Running   0          2m23s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$
metricbeat install
The metricbeat install is where the resource file version problem comes up; installing it as-is reports the following error
Fixing the error
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$helm install metricbeat .
Error: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1", unable to recognize "": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"]
The fix: directly edit the affected version strings
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat/charts/kube-state-metrics/templates]
└─$sed -i 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' *.yaml
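The same fix, sketched against a minimal stand-in manifest so the substitution can be seen in isolation (the real files live under the chart's charts/kube-state-metrics/templates/ directory):

```shell
# Stand-in ClusterRole manifest using the removed beta RBAC API version.
cat > /tmp/clusterrole-demo.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: demo
EOF

# Bump the apiVersion to the GA version served by Kubernetes 1.22+.
sed -i 's#rbac.authorization.k8s.io/v1beta1#rbac.authorization.k8s.io/v1#g' /tmp/clusterrole-demo.yaml

# Prints: apiVersion: rbac.authorization.k8s.io/v1
head -n 1 /tmp/clusterrole-demo.yaml
```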
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$helm install metricbeat .
NAME: metricbeat
LAST DEPLOYED: Sat Feb  5 03:42:48 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
  $ kubectl get pods --namespace=kube-system -l app=metricbeat-metricbeat -w
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$kubectl get pods --namespace=kube-system -l app=metricbeat-metricbeat
NAME                          READY   STATUS    RESTARTS   AGE
metricbeat-metricbeat-cvqsm   1/1     Running   0          65s
metricbeat-metricbeat-gfdqz   1/1     Running   0          65s
filebeat install
Installing filebeat
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/filebeat]
└─$helm install filebeat .
NAME: filebeat
LAST DEPLOYED: Sat Feb  5 03:27:13 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch all containers come up.
  $ kubectl get pods --namespace=kube-system -l app=filebeat-filebeat -w
After a little while, check the pod status: installed successfully
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/filebeat]
└─$kubectl get pods --namespace=kube-system -l app=filebeat-filebeat -w
NAME                      READY   STATUS    RESTARTS   AGE
filebeat-filebeat-df4s4   1/1     Running   0          20s
filebeat-filebeat-hw9xh   1/1     Running   0          21s
kibana install
Changing the SVC type
Note that we need to access kibana from outside the cluster, so the SVC type must be changed to NodePort
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$cat values.yaml | grep -A 2 "type: ClusterIP"
  type: ClusterIP
  loadBalancerIP: ""
  port: 5601
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$sed -i 's#type: ClusterIP#type: NodePort#g' values.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$cat values.yaml | grep -A 3 service:
service:
  type: NodePort
  loadBalancerIP: ""
  port: 5601
Installing kibana
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$helm install kibana .
NAME: kibana
LAST DEPLOYED: Sat Feb  5 03:47:07 2022
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
Fixing the Pending pod
The pod kept failing to come up; looking at the events, there was not enough CPU and memory, so the only option was to resize the VMs
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$kubectl get pods kibana-kibana-f88767f86-fsbhf
NAME                            READY   STATUS    RESTARTS   AGE
kibana-kibana-f88767f86-fsbhf   0/1     Pending   0          6m14s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$kubectl describe pods kibana-kibana-f88767f86-fsbhf | grep -A 10 -i events
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  4s (x7 over 6m33s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu, 2 Insufficient memory.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/kibana]
└─$
After resizing you may still see the error below; I looked into it and it was still a resource problem. I could not free up any more resources, so I took a break, and when I woke up it worked:
Readiness probe failed: Error: Got HTTP code 000 but expected a 200
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$kubectl describe pods kibana-kibana-f88767f86-qkrk7 | grep -i -A 10 event
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  8m21s                  default-scheduler  Successfully assigned kube-system/kibana-kibana-f88767f86-qkrk7 to vms83.liruilongs.github.io
  Normal   Pulled     7m42s                  kubelet            Container image "docker.elastic.co/kibana/kibana:7.9.1" already present on machine
  Normal   Created    7m42s                  kubelet            Created container kibana
  Normal   Started    7m41s                  kubelet            Started container kibana
  Warning  Unhealthy  4m17s (x19 over 7m30s) kubelet            Readiness probe failed: Error: Got HTTP code 000 but expected a 200
  Warning  Unhealthy  2m19s (x6 over 7m5s)   kubelet            Readiness probe failed:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$
Below are some troubleshooting references; take a look if you run into the same issue
https://stackoverflow.com/questions/68831025/kibana-health-prob-fails-when-elasticsearch-host-is-added-to-fluentd-forwarder-c
https://stackoverflow.com/questions/48540929/kubernetes-readiness-probe-failed-error
helm ls shows that our EFK stack is fully installed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$helm ls
NAME           NAMESPACE    REVISION  UPDATED                                  STATUS    CHART                APP VERSION
elasticsearch  kube-system  1         2022-02-05 03:15:45.827750596 +0800 CST  deployed  elasticsearch-7.9.1  7.9.1
filebeat       kube-system  1         2022-02-05 03:27:13.473157636 +0800 CST  deployed  filebeat-7.9.1       7.9.1
kibana         kube-system  1         2022-02-05 03:47:07.618651858 +0800 CST  deployed  kibana-7.9.1         7.9.1
metricbeat     kube-system  1         2022-02-05 03:42:48.807772112 +0800 CST  deployed  metricbeat-7.9.1     7.9.1
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create]
└─$
The pod list shows all pods Running
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$kubectl get pods -o wide
NAME                                       READY   STATUS    RESTARTS        AGE   IP             NODE                         NOMINATED NODE   READINESS GATES
calico-kube-controllers-78d6f96c7b-85rv9   1/1     Running   327 (172m ago)  51d   10.244.88.83   vms81.liruilongs.github.io
Checking node core metrics
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$kubectl top nodes
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vms81.liruilongs.github.io   335m         16%    1443Mi          46%
vms82.liruilongs.github.io   372m         12%    2727Mi          57%
vms83.liruilongs.github.io   554m         13%    2513Mi          56%
Testing EFK
Find the port kibana is exposed on and access it to test
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-helm-create/metricbeat]
└─$kubectl get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
elasticsearch-master   ClusterIP   10.96.232.233
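The kibana NodePort can be pulled out of the kubectl get svc output with a little text processing. A sketch against a made-up sample of that output (the service name matches this deployment, but the 31xxx port number here is illustrative, not the one my cluster assigned):

```shell
# One sample line of "kubectl get svc" output; the NodePort is invented.
svc_output='kibana-kibana   NodePort   10.104.77.10   <none>   5601:31335/TCP   3h'

# The PORT(S) column is field 5; the node port sits between ":" and "/".
# Prints: 31335
echo "$svc_output" | awk '{print $5}' | sed 's#.*:\([0-9]*\)/.*#\1#'
```

With that port in hand, kibana is reachable from outside the cluster at http://<any-node-ip>:<node-port>.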