Using Ceph RBD Block Storage with a Kubernetes 1.13.1 Cluster


References

https://github.com/Kubernetes/examples/tree/master/staging/volumes/rbd
http://docs.Ceph.com/docs/mimic/rados/operations/pools/
https://blog.csdn.net/aixiaoyang168/article/details/78999851
https://www.cnblogs.com/keithtt/p/6410302.html
https://kubernetes.io/docs/concepts/storage/volumes/
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
https://blog.csdn.net/wenwenxiong/article/details/78406136
http://www.mamicode.com/info-detail-1701743.html

Series index

kubernetes1.13.1 + etcd3.3.10 + flanneld0.10 cluster deployment

Deploying kubernetes-dashboard v1.10.1 on kubernetes 1.13.1

Deploying CoreDNS on kubernetes 1.13.1

Deploying ingress-nginx on kubernetes 1.13.1 and configuring HTTPS forwarding for the dashboard

Deploying metrics-server 0.3.1 on kubernetes 1.13.1

Using Ceph RBD block storage with a kubernetes 1.13.1 cluster

Deploying the latest Jenkins on a kubernetes 1.13.1 cluster with Ceph RBD

Overview

Ceph provides object storage, a file system, and block storage in a single system. The Kubernetes examples cover two ways of consuming it: CephFS and RBD. CephFS requires ceph to be installed on the nodes, while RBD only requires ceph-common on the nodes.

The differences in supported access modes are as follows:

Volume Plugin   ReadWriteOnce   ReadOnlyMany   ReadWriteMany
CephFS          ✓               ✓              ✓
RBD             ✓               ✓              -

Basic environment

Kubernetes cluster, version 1.13.1

[root@elasticsearch01 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
10.2.8.34   Ready    <none>   24d   v1.13.1
10.2.8.65   Ready    <none>   24d   v1.13.1

Ceph cluster, Luminous release

[root@ceph01 ~]# ceph -s
  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active), standbys: ceph02, ceph01
    osd: 24 osds: 24 up, 24 in
    rgw: 3 daemons active

Steps

On the Ceph cluster, first create a pool for Kubernetes and the RBD images that will be used later:

[root@ceph01 ~]# ceph osd pool create rbd-k8s 1024 1024
For better initial performance on pools expected to store a large number of objects, consider supplying the expected_num_objects parameter when creating the pool.
[root@ceph01 ~]# ceph osd lspools
1 rbd-es,2 .rgw.root,3 default.rgw.control,4 default.rgw.meta,5 default.rgw.log,6 default.rgw.buckets.index,7 default.rgw.buckets.data,8 default.rgw.buckets.non-ec,9 rbd-k8s,
[root@ceph01 ~]# rbd create rbd-k8s/cephimage1 --size 10240
[root@ceph01 ~]# rbd create rbd-k8s/cephimage2 --size 20480
[root@ceph01 ~]# rbd create rbd-k8s/cephimage3 --size 40960
[root@ceph01 ~]# rbd list rbd-k8s
cephimage1
cephimage2
cephimage3

1. Download the examples

[root@elasticsearch01 ~]# git clone https://github.com/kubernetes/examples.git
Cloning into 'examples'...
remote: Enumerating objects: 11475, done.
remote: Total 11475 (delta 0), reused 0 (delta 0), pack-reused 11475
Receiving objects: 100% (11475/11475), 16.94 MiB | 6.00 MiB/s, done.
Resolving deltas: 100% (6122/6122), done.
[root@elasticsearch01 ~]# cd examples/staging/volumes/rbd
[root@elasticsearch01 rbd]# ls
rbd-with-secret.yaml  rbd.yaml  README.md  secret
[root@elasticsearch01 rbd]# cp -a ./rbd /k8s/yaml/volumes/

2. Install the Ceph client on the Kubernetes nodes

[root@elasticsearch01 ceph]# yum install ceph-common

3. Edit the rbd-with-secret.yaml configuration file

The modified configuration is as follows:

[root@elasticsearch01 rbd]# cat rbd-with-secret.yaml
apiVersion: v1
kind: Pod
metadata:
  name: rbd2
spec:
  containers:
    - image: kubernetes/pause
      name: rbd-rw
      volumeMounts:
      - name: rbdpd
        mountPath: /mnt/rbd
  volumes:
    - name: rbdpd
      rbd:
        monitors:
        - '10.0.4.10:6789'
        - '10.0.4.13:6789'
        - '10.0.4.15:6789'
        pool: rbd-k8s
        image: cephimage1
        fsType: ext4
        readOnly: true
        user: admin
        secretRef:
          name: ceph-secret

Adjust the following parameters to match your environment:

monitors: the Ceph cluster monitors. A Ceph cluster can run multiple monitors; three are configured here.

pool: the pool used to group data in the Ceph cluster; the pool here is rbd-k8s.

image: the disk image (RBD image) in the Ceph cluster; cephimage1 is used here.

fsType: the file system type; the default ext4 is fine.

readOnly: whether the volume is read-only; read-only is fine for this test.

user: the user name the Ceph client uses to access the Ceph storage cluster; admin is used here.

keyring: the keyring required for Ceph cluster authentication, i.e. the ceph.client.admin.keyring generated when the Ceph cluster was built.

imageformat: the disk image format; use 2, or the older 1 on hosts with older kernels.

imagefeatures: the features of the disk image; check what the node kernel supports with uname -r. Here CentOS 7.4 with kernel 3.10.0-693.el7.x86_64 supports only layering, as the sketch below illustrates.
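As a hedged illustration (cephimage4 below is a hypothetical extra image, not one created above), you can check which features an existing image carries and create a new image restricted to layering so that a 3.10 kernel can map it:

[root@ceph01 ~]# rbd info rbd-k8s/cephimage1
# the "features:" line in the output lists the features enabled on the image
[root@ceph01 ~]# rbd create rbd-k8s/cephimage4 --size 10240 --image-format 2 --image-feature layering
# --image-feature layering creates the image with only the layering feature enabled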

4. Use the Ceph authentication key

Using a Secret in the cluster is more convenient, easier to scale, and more secure.

[root@ceph01 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQBHVp9bPirBCRAAUt6Mjw5PUjiy/RDHyHZrUw==
[root@ceph01 ~]# grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64
QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
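As an alternative sketch, assuming the client.admin user, the base64-encoded key can also be pulled straight from the cluster's auth database instead of parsing the keyring file:

[root@ceph01 ~]# ceph auth get-key client.admin | base64
# prints the same base64 string that goes into the Secret below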

5. Create the ceph-secret

[root@elasticsearch01 rbd]# cat secret/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFCSFZwOWJQaXJCQ1JBQVV0Nk1qdzVQVWppeS9SREh5SFpyVXc9PQ==
[root@elasticsearch01 rbd]# kubectl create -f secret/ceph-secret.yaml
secret/ceph-secret created

6. Create a pod to test RBD

Simply create the pod following the official example:

[root@elasticsearch01 rbd]# kubectl create -f rbd-with-secret.yaml
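A quick, hedged way to confirm that the pod started and the RBD volume was attached (rbd2 is the pod name from the manifest above):

[root@elasticsearch01 rbd]# kubectl get pod rbd2
[root@elasticsearch01 rbd]# kubectl describe pod rbd2
# the Events section at the end of the describe output shows volume attach and mount progress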

In production, however, volumes are not used directly like this: the volume is created along with the pod and deleted along with it, so the data is not preserved. To keep data from being lost, use a PV and PVC instead.

7. Create the ceph-pv

Note that RBD supports ReadWriteOnce and ReadOnlyMany; it does not yet support ReadWriteMany. In day-to-day use an RBD image is likewise mapped to only one client at a time. CephFS, by contrast, does support ReadWriteMany.

[root@elasticsearch01 rbd]# cat rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-rbd-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '10.0.4.10:6789'
      - '10.0.4.13:6789'
      - '10.0.4.15:6789'
    pool: rbd-k8s
    image: cephimage2
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv.yaml
persistentvolume/ceph-rbd-pv created
[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Available

8. Create the ceph-pvc


[root@elasticsearch01 rbd]# cat rbd-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-claim.yaml
persistentvolumeclaim/ceph-rbd-pv-claim created
[root@elasticsearch01 rbd]# kubectl get pvc
NAME                STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-rbd-pv-claim   Bound    ceph-rbd-pv   20Gi       RWO                           6s
[root@elasticsearch01 rbd]# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                       STORAGECLASS   REASON   AGE
ceph-rbd-pv   20Gi       RWO            Recycle          Bound    default/ceph-rbd-pv-claim                           5m28s

9. Create a pod to test RBD via the PV and PVC

Because the RBD image has to be formatted before it is mounted and the volume is fairly large (10 G), this takes a while, roughly a few minutes.
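While the pod is still in ContainerCreating, a hedged way to follow the attach and format progress (the pod name comes from the manifest that follows):

[root@elasticsearch01 rbd]# kubectl describe pod ceph-rbd-pv-pod1
# check the Events section for volume attach, format and mount messages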

[root@elasticsearch01 rbd]# cat rbd-pv-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-pv-pod1
spec:
  containers:
  - name: ceph-rbd-pv-busybox
    image: busybox
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-rbd-vol1
      mountPath: /mnt/ceph-rbd-pvc/busybox
      readOnly: false
  volumes:
  - name: ceph-rbd-vol1
    persistentVolumeClaim:
      claimName: ceph-rbd-pv-claim
[root@elasticsearch01 rbd]# kubectl create -f rbd-pv-pod.yaml
pod/ceph-rbd-pv-pod1 created
[root@elasticsearch01 rbd]# kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
busybox            1/1     Running             432        18d
ceph-rbd-pv-pod1   0/1     ContainerCreating   0          19s

The following error was reported:

MountVolume.WaitForAttach failed for volume "ceph-rbd-pv" : rbd: map failed exit status 6, rbd output: rbd: sysfs write failed
RBD image feature set mismatch. Try disabling features unsupported by the kernel with "rbd feature disable".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address

Solution

Disable the image features that the CentOS 7.4 kernel does not support. For production it is best to run Kubernetes and the Ceph clients on an operating system with a newer kernel.

[root@ceph01 ~]# rbd feature disable rbd-k8s/cephimage2 exclusive-lock object-map fast-diff deep-flatten
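To avoid disabling features image by image, one option (a sketch, not from the original article) is to set the default feature set in ceph.conf on the host that creates the images, so new images are created with layering only:

# /etc/ceph/ceph.conf on the client that creates RBD images (sketch)
[client]
rbd default features = 1
# 1 is the feature bitmask for layering only, which 3.10 kernels can map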

1. Verification on the Kubernetes side

[root@elasticsearch01 rbd]# kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
busybox            1/1     Running   432        18d     10.254.35.3   10.2.8.65   <none>           <none>
ceph-rbd-pv-pod1   1/1     Running   0          3m39s   10.254.35.8   10.2.8.65   <none>           <none>

[root@elasticsearch02 ceph]# df -h | grep rbd
/dev/rbd0       493G  162G  306G  35% /data
/dev/rbd1        20G   45M   20G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
lost+found

[root@elasticsearch01 rbd]# kubectl exec -ti ceph-rbd-pv-pod1 sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  49.1G      7.4G     39.1G  16% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                     7.8G         0      7.8G   0% /sys/fs/cgroup
/dev/vda1                49.1G      7.4G     39.1G  16% /dev/termination-log
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/resolv.conf
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hostname
/dev/vda1                49.1G      7.4G     39.1G  16% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/rbd1                19.6G     44.0M     19.5G   0% /mnt/ceph-rbd-pvc/busybox
tmpfs                     7.8G     12.0K      7.8G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     7.8G         0      7.8G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/timer_stats
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                     7.8G         0      7.8G   0% /proc/scsi
tmpfs                     7.8G         0      7.8G   0% /sys/firmware
/ # cd /mnt/ceph-rbd-pvc/busybox/
/mnt/ceph-rbd-pvc/busybox # ls
lost+found
/mnt/ceph-rbd-pvc/busybox # touch ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # ls
ceph-rbd-pods  lost+found
/mnt/ceph-rbd-pvc/busybox # echo busbox > ceph-rbd-pods
/mnt/ceph-rbd-pvc/busybox # cat ceph-rbd-pods
busbox

[root@elasticsearch02 ceph]# cd /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-image-cephimage2
[root@elasticsearch02 rbd-k8s-image-cephimage2]# ls
ceph-rbd-pods  lost+found

2. Verification on the Ceph side

[root@ceph01 ~]# ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    65.9TiB     58.3TiB      7.53TiB         11.43
POOLS:
    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
    rbd-es                         1      1.38TiB      7.08       18.1TiB      362911
    .rgw.root                      2      1.14KiB         0       18.1TiB           4
    default.rgw.control            3           0B         0       18.1TiB           8
    default.rgw.meta               4      46.9KiB         0        104GiB         157
    default.rgw.log                5           0B         0       18.1TiB         345
    default.rgw.buckets.index      6           0B         0        104GiB        2012
    default.rgw.buckets.data       7      1.01TiB      5.30       18.1TiB     2090721
    default.rgw.buckets.non-ec     8           0B         0       18.1TiB           0
    rbd-k8s                        9       137MiB         0       18.1TiB          67
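As an extra hedged check on the Ceph side, you can confirm that the image backing the PV is mapped by a client; the watcher address should be the Kubernetes node:

[root@ceph01 ~]# rbd status rbd-k8s/cephimage2
# lists watchers, i.e. clients that currently have the image mapped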

      ---------------------------------------

This article is reposted from Sanbeishui's (三杯水) 51CTO blog.


