Deploying the Rook Ceph Distributed Storage Plugin on a Kubernetes Cluster


Foreword
Introduction to Rook
Overview
Rook Architecture
Deploying Rook
Planning
Prerequisites
Getting the YAML
Deploying the Rook Operator
Deploying the Cluster
Deploying the Toolbox
Testing Rook
Setting Up the Dashboard
Deploying the NodePort Service
Verification
Using Ceph Block Storage
Creating the StorageClass
Creating the PVC
Consuming the Block Device
Testing Persistence
Troubleshooting
Clicking the dashboard overview returns 500 Internal Server Error

Foreword

We often say that containers and Pods are ephemeral: their lifecycles can be short, and they are frequently destroyed and recreated. When a container is destroyed, any data stored in its internal filesystem is lost. To persist a container's data, a storage plugin can mount a remote volume, backed by the network or some other mechanism, into the container. Files created inside the container are then actually stored on a remote storage server, or distributed across multiple nodes, with no binding to the current host. That way, no matter which node a new container starts on, it can request and mount the same persistent volume.

Thanks to Kubernetes' loosely coupled design, most storage projects, such as Ceph, GlusterFS, and NFS, can provide persistent storage for Kubernetes. For this deployment walkthrough we pick an important, production-grade storage plugin project: Rook.

Introduction to Rook

Overview

Rook is a Kubernetes storage plugin built on Ceph (support for more storage backends is being added over time). It is more than a thin wrapper around Ceph, however: Rook adds a large set of enterprise features of its own, such as horizontal scaling, migration, disaster recovery, and monitoring, turning it into a highly scalable distributed storage solution that offers object, file, and block storage.

Rook can currently deploy Ceph, NFS, Minio Object Store, EdgeFS, Cassandra, and CockroachDB.

How Rook works:

Rook provides a volume plugin that extends the Kubernetes storage system, so that Pods, via the kubelet, can mount block devices and filesystems managed by Rook.

The Rook Operator starts and monitors the entire underlying storage system, for example the Ceph mon Pods and Ceph OSDs, and it also manages the CRDs, object stores, and filesystems.

A Rook Agent runs as a Pod on every Kubernetes node. Each agent Pod is configured with a Flexvolume driver that integrates with the Kubernetes volume control framework; node-local operations such as attaching storage devices, mounting, formatting, and removing storage are carried out by this agent.
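As a concrete illustration of that mechanism, once the Operator is installed (see the deployment steps below) you can list the CustomResourceDefinitions it registers and the per-node helper pods it launches. This is a minimal sketch; exact resource names vary slightly between Rook releases:

# Ceph-related CRDs registered by the Rook Operator
kubectl get crd | grep ceph.rook.io

# Per-node rook-ceph-agent / rook-discover pods in the operator's namespace
kubectl -n rook-ceph get pods -o wide | grep -E 'rook-ceph-agent|rook-discover'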

For more details, see the official sites:

      https://rook.io

      https://ceph.com/

Rook Architecture
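In outline: the Rook Operator runs as a regular deployment inside the Kubernetes cluster, watches the Ceph custom resources (such as CephCluster and CephBlockPool), and launches the Ceph daemons (mon, mgr, osd, and optionally mds/rgw) as Pods to match that desired state. The per-node agent and discover pods, together with the volume drivers, handle device discovery and the mounting of Rook-managed volumes into application Pods.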

Deploying Rook

Planning
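This walkthrough uses a small lab cluster: a k8s-master control-plane node plus three workers (k8s-node1, k8s-node2, k8s-node3). Each worker contributes its raw /dev/vda4 partition as an OSD device, all three workers run a Ceph mon, and k8s-node1 additionally runs the mgr (and therefore the dashboard).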

Prerequisites

To configure the Ceph storage cluster, at least one of the following types of local storage is required:

Raw devices (no partitions or formatted filesystem)

Raw partitions (no formatted filesystem)

PVs available from a storage class in block mode

You can use the following command to check whether a partition or device already has a formatted filesystem on it:

$ lsblk -f
NAME            FSTYPE      LABEL UUID                                   MOUNTPOINT
vda
├─vda1          xfs               e16ad84e-8cef-4cb1-b19b-9105c57f97b1   /boot
├─vda2          LVM2_member       Vg3nyB-iW9Q-4xp0-LEIO-gzHc-2eax-D1razB
│ └─centos-root xfs               0bb4bfa4-b315-43ca-a789-2b43e726c10c   /
├─vda3          LVM2_member       VZMibm-DJ8e-apig-YhR3-a1dF-wHYQ-8pjKan
│ └─centos-root xfs               0bb4bfa4-b315-43ca-a789-2b43e726c10c   /
└─vda4

If the FSTYPE field is not empty, there is a filesystem on top of the corresponding device. In this example vda4 has no FSTYPE, so vda4 can be used for Ceph.
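If a candidate device still carries an old filesystem or LVM signature, Ceph will refuse to prepare it as an OSD. Assuming any data on it is disposable, the signatures can be cleared so that it shows up as a raw device again (a hedged sketch; substitute your own device for /dev/vda4 and double-check before running, since this is destructive):

# DESTRUCTIVE: remove all filesystem/LVM signatures from the device
wipefs -af /dev/vda4
# Optionally also zero out the first megabyte of the device
dd if=/dev/zero of=/dev/vda4 bs=1M count=1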

Getting the YAML

      git clone --single-branch --branch master https://github.com/rook/rook.git
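The command above tracks the master branch, so the manifests you get may drift from the ones shown in this post. If you want to stay closer to the version used here (the Ceph v15.2.3 era), you can pin the clone to a release branch instead; the branch name below is an assumption, so pick whichever release you actually intend to run:

# Example: clone a fixed release branch instead of master
git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph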

Deploying the Rook Operator

This lab uses the three nodes k8s-node1, k8s-node2, and k8s-node3, so the nodes need to be labeled as follows:

kubectl label nodes {k8s-node1,k8s-node2,k8s-node3} ceph-osd=enabled
kubectl label nodes {k8s-node1,k8s-node2,k8s-node3} ceph-mon=enabled
kubectl label nodes k8s-node1 ceph-mgr=enabled

Note: in the current version of Rook, the mgr can only run on a single node.

Run the manifests:

cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml

Note: this creates the supporting resources (such as serviceaccounts), and rook-ceph-operator will start a rook-ceph-agent and a rook-discover pod on every node.
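Before moving on to the cluster, it is worth double-checking that the node labels landed and that the operator, agent, and discover pods are all running. A quick check (pod names and counts will differ in your environment):

# Confirm the scheduling labels used by the placement rules later on
kubectl get nodes --show-labels | grep ceph

# The operator, plus one rook-ceph-agent and one rook-discover pod per node
kubectl -n rook-ceph get pods -o wide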

Deploying the Cluster

Edit cluster.yaml:

      vi cluster.yaml

The modified file looks like this:

#################################################################################################################
# Define the settings for the rook-ceph cluster with common settings for a production cluster.
# All nodes with available raw devices will be used for the Ceph cluster. At least three nodes are required
# in this example. See the documentation for more details on storage settings available.
# For example, to create the cluster:
#   kubectl create -f common.yaml
#   kubectl create -f operator.yaml
#   kubectl create -f cluster.yaml
#################################################################################################################
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
    # v13 is mimic, v14 is nautilus, and v15 is octopus.
    # RECOMMENDATION: In production, use a specific version tag instead of the general v14 flag, which pulls the latest release
    # and could result in different versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
    # If you want to be more precise, you can always use a timestamp tag such as ceph/ceph:v14.2.5-20190917.
    # This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities.
    image: ceph/ceph:v15.2.3
    # Whether to allow unsupported versions of Ceph. Currently mimic and nautilus are supported, with the recommendation to upgrade to nautilus.
    # Octopus is the version allowed when this is set to true.
    # Do not set to true in production.
    allowUnsupported: false
  # The path on the host where configuration files will be persisted. Must be specified.
  # Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in a Minikube environment.
  dataDirHostPath: /var/lib/rook
  # Whether or not upgrade should continue even if a check fails.
  # This means Ceph's status could be degraded and we don't recommend upgrading, but you might decide otherwise.
  # Use at your OWN risk.
  # To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/master/ceph-upgrade.html#ceph-version-upgrades
  skipUpgradeChecks: false
  # Whether or not to continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # set the amount of mons to be started
  mon:
    count: 3
    allowMultiplePerNode: false
  mgr:
    modules:
    # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
    # are already enabled by other settings in the cluster CR and the "rook" module is always enabled.
    - name: pg_autoscaler
      enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: true
  # enable prometheus alerting for the cluster
  monitoring:
    # requires Prometheus to be pre-installed
    enabled: false
    # namespace to deploy prometheusRule in. If empty, the namespace of the cluster will be used.
    # Recommended:
    # If you have a single rook-ceph cluster, set the rulesNamespace to the same namespace as the cluster or keep it empty.
    # If you have multiple rook-ceph clusters in the same k8s cluster, choose the same namespace (ideally, the namespace with prometheus
    # deployed) to set rulesNamespace for all the clusters. Otherwise, you will get duplicate alerts with multiple alert definitions.
    rulesNamespace: rook-ceph
  network:
    # enable host networking
    #provider: host
    # EXPERIMENTAL: enable the Multus network provider
    #provider: multus
    #selectors:
      # The selector keys are required to be `public` and `cluster`.
      # Based on the configuration, the operator will do the following:
      #   1. if only the `public` selector key is specified, both public_network and cluster_network Ceph settings will listen on that interface
      #   2. if both `public` and `cluster` selector keys are specified, the first one will point to 'public_network' and the second one to 'cluster_network'
      #
      # In order to work, each selector value must match a NetworkAttachmentDefinition object in Multus
      #
      #public: public-conf --> NetworkAttachmentDefinition object name in Multus
      #cluster: cluster-conf --> NetworkAttachmentDefinition object name in Multus
  # enable the crash collector for ceph daemon crash collection
  crashCollector:
    disable: false
  cleanupPolicy:
    # cleanup should only be added to the cluster when the cluster is about to be deleted.
    # After any field of the cleanup policy is set, Rook will stop configuring the cluster as if the cluster is about
    # to be destroyed in order to prevent these settings from being deployed unintentionally.
    # To signify that automatic deletion is desired, use the value "yes-really-destroy-data". Only this and an empty
    # string are valid values for this field.
    confirmation: ""
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
  # tolerate taints with a key of 'storage-node'.
  placement:
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mon
              operator: In
              values:
              - enabled
      podAffinity:
      podAntiAffinity:
      topologySpreadConstraints:
      tolerations:
      - key: ceph-mon
        operator: Exists
    osd:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-osd
              operator: In
              values:
              - enabled
      podAffinity:
      podAntiAffinity:
      topologySpreadConstraints:
      tolerations:
      - key: ceph-osd
        operator: Exists
    mgr:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: ceph-mgr
              operator: In
              values:
              - enabled
      podAffinity:
      podAntiAffinity:
      topologySpreadConstraints:
      tolerations:
      - key: ceph-mgr
        operator: Exists
    # cleanup:
  annotations:
    # all:
    # mon:
    # osd:
    # cleanup:
    # If no mgr annotations are set, prometheus scrape annotations will be set by default.
    # mgr:
  resources:
    # The requests and limits set here allow the mgr pod to use half of one CPU core and 1 gigabyte of memory
    # mgr:
    #   limits:
    #     cpu: "500m"
    #     memory: "1024Mi"
    #   requests:
    #     cpu: "500m"
    #     memory: "1024Mi"
    # The above example requests/limits can also be added to the mon and osd components
    # mon:
    # osd:
    # prepareosd:
    # crashcollector:
    # cleanup:
  # The option to automatically remove OSDs that are out and are safe to destroy.
  removeOSDsIfOutAndSafeToRemove: false
  # priorityClassNames:
  #   all: rook-ceph-default-priority-class
  #   mon: rook-ceph-mon-priority-class
  #   osd: rook-ceph-osd-priority-class
  #   mgr: rook-ceph-mgr-priority-class
  storage: # cluster level storage configuration and selection
    useAllNodes: false       # do not consume every node in the cluster
    useAllDevices: false     # do not consume every device found on a node
    deviceFilter: vda4
    config:
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
      # osdsPerDevice: "1" # this value can be overridden at the node or device level
      # encryptedDevice: "true" # the default value for this option is "false"
    # Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
    # nodes below will be used as storage resources. Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
    - name: "k8s-node1"      # storage node hostname
      devices:
      - name: "vda4"         # the raw partition to use on this node
        config:
          storeType: bluestore
    - name: "k8s-node2"
      devices:
      - name: "vda4"
        config:
          storeType: bluestore
    - name: "k8s-node3"
      devices:
      - name: "vda4"
        config:
          storeType: bluestore
  # The section for configuring management of daemon disruptions during upgrade or fencing.
  disruptionManagement:
    # If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
    # via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
    # block eviction of OSDs by default and unblock them safely when drains are detected.
    managePodBudgets: false
    # A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
    # default DOWN/OUT interval) when it is draining. This is only relevant when `managePodBudgets` is `true`. The default value is `30` minutes.
    osdMaintenanceTimeout: 30
    # If true, the operator will create and manage MachineDisruptionBudgets to ensure OSDs are only fenced when the cluster is healthy.
    # Only available on OpenShift.
    manageMachineDisruptionBudgets: false
    # Namespace in which to watch for the MachineDisruptionBudgets.
    machineDisruptionBudgetNamespace: openshift-machine-api

For more options in the cluster CRD, see:

      https://github.com/rook/rook/blob/master/Documentation/ceph-cluster-crd.md

      https://blog.gmem.cc/rook-based-k8s-storage-solution

Apply cluster.yaml:

kubectl create -f cluster.yaml

# Watch the deployment log
$ kubectl logs -f -n rook-ceph rook-ceph-operator-567d7945d6-t9rd4

# Wait a while; some intermediate containers may come and go
[7d@k8s-master ceph]$ kubectl get pods -n rook-ceph -o wide
NAME                                            READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
csi-cephfsplugin-dr4dq                          3/3     Running   0          24h   172.16.106.239   k8s-node2
csi-cephfsplugin-provisioner-6bcb7cdd75-dtzmn   5/5     .......
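Besides watching the pods, the CephCluster resource itself reports the provisioning phase and overall health, which is a convenient way to tell when the OSD prepare jobs have finished. A small sketch (the columns shown depend on the Rook version):

# High-level status of the cluster custom resource
kubectl -n rook-ceph get cephcluster rook-ceph

# The per-node osd-prepare jobs should all end up Completed
kubectl -n rook-ceph get pods | grep osd-prepare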

Note: if the deployment fails, run the following on the master node:

      kubectl delete -f ./

and then run the following cleanup steps on every node:

rm -rf /var/lib/rook /dev/mapper/ceph-*
dmsetup ls
dmsetup remove_all
dd if=/dev/zero of=/dev/vda4 bs=512k count=1
wipefs -af /dev/vda4

Deploying the Toolbox

The toolbox is a container bundling Rook's utility commands. The commands inside it are used to debug and test Rook; ad-hoc Ceph test operations are generally run from this container.

# Start the rook-ceph-tools pod:
$ kubectl create -f toolbox.yaml
deployment.apps/rook-ceph-tools created

# Wait for the rook-ceph-tools pod to pull its image and reach Running:
$ kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME                               READY   STATUS    RESTARTS   AGE
rook-ceph-tools-6d659f5579-knt6x   1/1     Running   0          7s

Testing Rook

$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.

# Check the Ceph status
[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph status
  cluster:
    id:     550e2978-26a6-4f3b-b101-10369ab63cf4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 2m)
    mgr: a(active, since 85s)
    osd: 3 osds: 3 up (since 114s), 3 in (since 114s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 147 GiB / 150 GiB avail
    pgs:     1 active+clean

[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph osd status
ID  HOST        USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-node2  1026M  48.9G      0        0       0        0   exists,up
 1  k8s-node1  1026M  48.9G      0        0       0        0   exists,up
 2  k8s-node3  1026M  48.9G      0        0       0        0   exists,up

[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    150 GiB  147 GiB  6.2 MiB   3.0 GiB       2.00
TOTAL  150 GiB  147 GiB  6.2 MiB   3.0 GiB       2.00

--- POOLS ---
POOL                   ID  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1     0 B        0   0 B      0     46 GiB

[root@rook-ceph-tools-6d659f5579-knt6x /]# rados df
POOL_NAME              USED  OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD   WR_OPS  WR   USED COMPR  UNDER COMPR
device_health_metrics   0 B        0       0       0                   0        0         0       0  0 B       0  0 B         0 B          0 B

total_objects    0
total_used       3.0 GiB
total_avail      147 GiB
total_space      150 GiB

# List all Ceph keyrings
[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph auth ls
installed auth entries:

osd.0
        key: AQAjofFe1j9pGhAABnjTXAYZeZdwo2FGHIFv+g==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQAjofFeY0LaHhAAMVLxrH1lqXqyYsZE9yJ5dg==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.2
        key: AQAkofFeBjtoDBAAVYW7FursqpbttekW54u2rA==
        caps: [mgr] allow profile osd
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQDhoPFeqLE4ORAAvBeGwV7p1YY25owP8nS02Q==
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQABofFeQclcCxAAEtA9Y4+yF3I6H9RM0/f1DQ==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
        key: AQABofFeaeBcCxAA9SEnt+RV7neC4uy/xQb5qg==
        caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
        key: AQABofFe0/NcCxAAqCKwJpzPlav8MuajRk8xmw==
        caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
        key: AQABofFesAZdCxAAZyWJg+Pa3F0g5Toy4LamPw==
        caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
        key: AQABofFejRpdCxAA/9NbTQDJILdSoYJZdol7bQ==
        caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
        key: AQABofFeAi5dCxAAKu67ZyM8PRRPcluTXR3YRw==
        caps: [mon] allow profile bootstrap-rgw
client.crash
        key: AQAcofFeLDr3KBAAw9UowFd26JiQSGjCFyhx8w==
        caps: [mgr] allow profile crash
        caps: [mon] allow profile crash
client.csi-cephfs-node
        key: AQAcofFeFRh4DBAA7Z8kgcHGM92vHj6cvGbXXg==
        caps: [mds] allow rw
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs *=*
client.csi-cephfs-provisioner
        key: AQAbofFemLuJMRAA4WlGWBjONb1av48rox1q6g==
        caps: [mgr] allow rw
        caps: [mon] allow r
        caps: [osd] allow rw tag cephfs metadata=*
client.csi-rbd-node
        key: AQAbofFepu7rFhAA+vdit2ipDgVFc/yKUpHHug==
        caps: [mon] profile rbd
        caps: [osd] profile rbd
client.csi-rbd-provisioner
        key: AQAaofFe3Yw9OxAAiJzZ6HQne/e9Zob5G311OA==
        caps: [mgr] allow rw
        caps: [mon] profile rbd
        caps: [osd] profile rbd
mgr.a
        key: AQAdofFeh4VZHhAA8VL9gH5jgOxzjTDtEaFWBQ==
        caps: [mds] allow *
        caps: [mon] allow profile mgr
        caps: [osd] allow *

[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph version
ceph version 15.2.3 (d289bbdec69ed7c1f516e0a093594580a76b78d0) octopus (stable)

[root@rook-ceph-tools-6d659f5579-knt6x /]# exit
exit

At this point, a Rook-backed persistent storage cluster is running entirely in containers. Every Pod created in this Kubernetes cluster from now on can mount data volumes provided by Ceph through PersistentVolumes (PV) and PersistentVolumeClaims (PVC), while Rook takes care of operational work such as lifecycle management and disaster recovery for those volumes.

Setting Up the Dashboard

The dashboard is a very useful tool that gives you an overview of the state of the Ceph cluster: overall health, mon quorum status, the status of the mgr, OSDs, and other Ceph daemons, pool and PG status, daemon logs, and more. Rook makes enabling the dashboard simple.

Deploying the NodePort Service

Edit dashboard-external-https.yaml:

$ vi dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
  - name: dashboard
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort

Create the NodePort Service:

$ kubectl create -f dashboard-external-https.yaml
service/rook-ceph-mgr-dashboard-external-https created

$ kubectl get svc -n rook-ceph
NAME                                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics                 ClusterIP   10.102.32.77     <none>        8080/TCP,8081/TCP   17m
csi-rbdplugin-metrics                    ClusterIP   10.101.121.5     <none>        8080/TCP,8081/TCP   17m
rook-ceph-mgr                            ClusterIP   10.99.155.138    <none>        9283/TCP            16m
rook-ceph-mgr-dashboard                  ClusterIP   10.97.61.135     <none>        8443/TCP            16m
rook-ceph-mgr-dashboard-external-https   NodePort    10.108.210.25    <none>        8443:32364/TCP      6s
rook-ceph-mon-a                          ClusterIP   10.102.116.81    <none>        6789/TCP,3300/TCP   16m
rook-ceph-mon-b                          ClusterIP   10.101.141.241   <none>        6789/TCP,3300/TCP   16m
rook-ceph-mon-c                          ClusterIP   10.101.157.247   <none>        6789/TCP,3300/TCP   16m

The Rook operator enables the ceph-mgr dashboard module and creates a Service object to expose the port inside the Kubernetes cluster. Rook serves the dashboard over HTTPS on port 8443.
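With the NodePort service in place, the dashboard is reachable on any node's IP at the mapped port (32364 in the output above; yours will differ). A minimal way to look the port up and probe the endpoint from the CLI, assuming curl is available on your workstation:

# Discover the NodePort mapped to the dashboard's 8443 port
kubectl -n rook-ceph get svc rook-ceph-mgr-dashboard-external-https -o jsonpath='{.spec.ports[0].nodePort}'; echo

# Then open https://<any-node-ip>:<node-port> in a browser, or probe it first:
curl -k -I https://k8s-node1:32364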

Verification

Logging in to the dashboard requires credentials. Rook creates a default user named admin in the namespace where the Rook Ceph cluster is running and generates a secret called rook-ceph-dashboard-password.

To retrieve the generated password, run the following command:

      kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

Using Ceph Block Storage

Creating the StorageClass

Before provisioning block storage, a StorageClass and a storage pool must be created. Kubernetes needs these two resources in order to interact with Rook and dynamically provision persistent volumes (PVs).

cd rook/cluster/examples/kubernetes/ceph
kubectl create -f csi/rbd/storageclass.yaml

Explanation: the manifest below creates a storage pool named replicapool and a StorageClass named rook-ceph-block.

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3
    # Disallow setting pool with replica 1, this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
    # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #targetSizeRatio: .5
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  # clusterID is the namespace where the rook cluster is running
  # If you change this namespace, also change the namespace below where the secret namespaces are defined
  clusterID: rook-ceph
  # If you want to use erasure coded pool with RBD, you need to create
  # two pools. one erasure coded and one replicated.
  # You need to specify the replicated pool here in the `pool` parameter, it is
  # used for the metadata of the images.
  # The erasure coded pool must be set as the `dataPool` parameter below.
  #dataPool: ec-data-pool
  pool: replicapool
  # RBD image format. Defaults to "2".
  imageFormat: "2"
  # RBD image features. Available for imageFormat: "2". CSI RBD currently supports only `layering` feature.
  imageFeatures: layering
  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  # Specify the filesystem type of the volume. If not specified, csi-provisioner
  # will set default as `ext4`.
  csi.storage.k8s.io/fstype: ext4
  # uncomment the following to use rbd-nbd as mounter on supported nodes
  # **IMPORTANT**: If you are using rbd-nbd as the mounter, during upgrade you will be hit a ceph-csi
  # issue that causes the mount to be disconnected. You will need to follow special upgrade steps
  # to restart your application pods. Therefore, this option is not recommended.
  #mounter: rbd-nbd
allowVolumeExpansion: true
reclaimPolicy: Delete

$ kubectl get storageclasses.storage.k8s.io
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   44m
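It is also easy to confirm from the toolbox pod that the CephBlockPool actually created the replicapool pool behind the StorageClass. A quick check (run inside rook-ceph-tools; output will vary):

# List pools; replicapool should appear next to device_health_metrics
ceph osd pool ls

# The pool should report the replica size requested in the manifest (3)
ceph osd pool get replicapool size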

Creating the PVC

$ kubectl create -f pvc.yaml

$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-b2b7ce1d-7cad-4b7b-afac-dcf6cd597e88   1Gi        RWO            rook-ceph-block   45m

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-b2b7ce1d-7cad-4b7b-afac-dcf6cd597e88   1Gi        RWO            Delete           Bound    default/rbd-pvc   rook-ceph-block            45m

Explanation: this creates a PVC whose storageClassName is rook-ceph-block, the class backed by the Rook Ceph cluster.

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
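Behind the scenes, the CSI provisioner creates one RBD image in replicapool for every bound PVC. If you are curious, the image can be inspected from the toolbox pod (illustrative only; the image name is generated by the provisioner, so yours will differ):

# Inside rook-ceph-tools: list the images backing the PVCs and inspect one
rbd ls -p replicapool
rbd info replicapool/$(rbd ls -p replicapool | head -n 1)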

Consuming the Block Device

$ kubectl create -f rookpod01.yaml

$ kubectl get pods
NAME        READY   STATUS      RESTARTS   AGE
rookpod01   0/1     Completed   0          4m1s

Explanation: create the Pod above, which mounts the previously created PVC, and wait for it to run to completion.

rookpod01.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: rookpod01
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'echo "Hello World" > /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: rbd-pvc
      readOnly: false

Testing Persistence

# Delete rookpod01
$ kubectl delete -f rookpod01.yaml
pod "rookpod01" deleted

# Create rookpod02
$ kubectl create -f rookpod02.yaml
pod/rookpod02 created

$ kubectl get pods
NAME        READY   STATUS      RESTARTS   AGE
rookpod02   0/1     Completed   0          59s

$ kubectl logs rookpod02
Hello World

Explanation: create rookpod02, which reuses the same PVC, to verify that the data written by rookpod01 has persisted.

rookpod02.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: rookpod02
spec:
  restartPolicy: OnFailure
  containers:
  - name: test-container
    image: busybox
    volumeMounts:
    - name: block-pvc
      mountPath: /var/test
    command: ['sh', '-c', 'cat /var/test/data; exit 0']
  volumes:
  - name: block-pvc
    persistentVolumeClaim:
      claimName: rbd-pvc
      readOnly: false
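When you are finished with the test, delete the pod and then the claim; because the StorageClass uses reclaimPolicy: Delete, the dynamically provisioned PV and its backing RBD image are released automatically. A sketch of the teardown order:

# Remove the consumer first, then the claim
kubectl delete -f rookpod02.yaml
kubectl delete -f pvc.yaml

# The dynamically provisioned PV should disappear shortly afterwards
kubectl get pv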

Troubleshooting

Clicking the dashboard overview returns 500 Internal Server Error

Workaround:

Create a copy of the built-in administrator role, remove the iscsi permission from the copy, and assign this new role to the admin user. Once the issue is fixed upstream, the new role can presumably be deleted and the administrator role assigned back to the admin user.

ceph dashboard ac-role-create admin-no-iscsi

for scope in dashboard-settings log rgw prometheus grafana nfs-ganesha manager hosts rbd-image config-opt rbd-mirroring cephfs user osd pool monitor; do
    ceph dashboard ac-role-add-scope-perms admin-no-iscsi ${scope} create delete read update;
done

ceph dashboard ac-user-set-roles admin admin-no-iscsi

[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph dashboard ac-role-create admin-no-iscsi
{"name": "admin-no-iscsi", "description": null, "scopes_permissions": {}}
[root@rook-ceph-tools-6d659f5579-knt6x /]# for scope in dashboard-settings log rgw prometheus grafana nfs-ganesha manager hosts rbd-image config-opt rbd-mirroring cephfs user osd pool monitor; do
>     ceph dashboard ac-role-add-scope-perms admin-no-iscsi ${scope} create delete read update;
> done
[root@rook-ceph-tools-6d659f5579-knt6x /]# ceph dashboard ac-user-set-roles admin admin-no-iscsi
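Once the underlying issue is fixed upstream, the workaround can be reverted by putting the admin user back on the built-in administrator role and dropping the temporary one. A hedged sketch, to be run inside the toolbox pod:

# Restore the built-in role and delete the temporary copy
ceph dashboard ac-user-set-roles admin administrator
ceph dashboard ac-role-delete admin-no-iscsi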

Source code:

      https://github.com/zuozewei/blog-example/tree/master/Kubernetes/k8s-rook-ceph
