Notes on Volumes in Kubernetes (Part 2)

      網(wǎng)友投稿 1026 2022-05-27

Preface

I have just finished this part of my K8s studies and am organizing my notes. They are light on theory and heavy on hands-on practice, well suited for review.

This post covers:

Common volume types: nfs, hostPath, and emptyDir

Creating PVs and PVCs

Persistent storage and dynamic volume provisioning

In matters of the heart, whoever falls first loses, and the more one loses, the harder it is to forget. In the end, is it the other person one loves, or oneself? Nobody can tell, yet the answer lies with the other person. That is why it is said that from love arises sorrow. (Jian Lai, 《劍來》)

Persistent Storage (Persistent Volume)

A Volume is defined on a Pod and belongs to its "compute resources", whereas "network storage" is in fact an entity that exists independently of compute resources. For example, when using virtual machines we usually define network storage first, then carve a "disk" out of it and attach that disk to the VM.

Persistent Volume (PV) and its associated Persistent Volume Claim (PVC) play a similar role. A PV can be understood as a piece of storage carved out of some network storage in the Kubernetes cluster. It is similar to a Volume, though there are differences (for example, a PV is a cluster-level resource rather than being defined on a Pod).

This can also be understood by analogy with physical and logical volumes: a PV is like a physical volume, and a PVC is like a logical volume carved out of it.

Creating a PV

The accessModes attribute of a PV currently supports the following values:

ReadWriteOnce: read-write; can be mounted by a single node only.

ReadOnlyMany: read-only; can be mounted by multiple nodes.

ReadWriteMany: read-write; can be mounted by multiple nodes.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  #storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$echo "/tmp *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$exportfs -avr
exporting *:/tmp
exporting *:/liruilong

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -o wide
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
pv0003   5Gi        RWO            Recycle          Available                                   16s   Filesystem

Creating a PVC

If a Pod wants to claim a certain type of PV, it first needs to define a PersistentVolumeClaim (PVC) object:

PVCs are namespaced; PVCs in different namespaces are isolated from each other. A PVC matches a PV through the constraints on accessModes and storage, with no explicit binding declared: the accessModes must be identical, and the PVC's requested storage must be less than or equal to the PV's capacity.
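The size rule relies on comparing Kubernetes quantity strings such as 4Gi and 5Gi. As a rough illustration (a sketch with made-up helper names, not the real resource.Quantity implementation), the comparison can be modeled like this:

```python
# Sketch only: parse binary-suffix quantity strings ("4Gi", "20Mi")
# into bytes, then apply the "requested storage <= PV capacity" rule.
# Helper names here are invented for illustration.
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def parse_quantity(q: str) -> int:
    """Convert a quantity string like '5Gi' into a number of bytes."""
    for suffix, factor in SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count, no suffix

def storage_fits(pvc_request: str, pv_capacity: str) -> bool:
    """A PV can satisfy a claim only if its capacity covers the request."""
    return parse_quantity(pvc_request) <= parse_quantity(pv_capacity)

# The 4Gi claim below binds to the 5Gi pv0003; a 6Gi claim would not.
print(storage_fits("4Gi", "5Gi"))  # True
print(storage_fits("6Gi", "5Gi"))  # False
```

This is also why, once bound, the 4Gi claim in this post reports the PV's full 5Gi capacity: the claim gets the whole PV, not a slice of it.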

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
No resources found in liruilong-volume-create namespace.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volumes-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  #storageClassName: slow

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -o wide
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
mypvc01   Bound    pv0003   5Gi        RWO                           10s   Filesystem
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

storageClassName controls which PVC can bind to which PV: storage and accessModes are matched only when the storageClassName values are the same.
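Putting the three constraints together (storageClassName equal, accessModes equal, requested storage no larger than the capacity), the binding check can be sketched as follows. This is an illustrative model of the rules described in this post, not the actual controller logic:

```python
# Illustrative model of PV/PVC binding: storageClassName must match
# first; only then are accessModes and storage compared. Sizes are
# plain integers in Gi here for simplicity.
def can_bind(pv: dict, pvc: dict) -> bool:
    if pv.get("storageClassName") != pvc.get("storageClassName"):
        return False  # different classes never bind
    if pv["accessModes"] != pvc["accessModes"]:
        return False  # access modes must agree
    return pvc["storage"] <= pv["storage"]  # PV must be big enough

pv0003 = {"storageClassName": "slow", "accessModes": ["ReadWriteOnce"], "storage": 5}
mypvc01 = {"storageClassName": "slow", "accessModes": ["ReadWriteOnce"], "storage": 4}
print(can_bind(pv0003, mypvc01))  # True

# Without a matching class name, the same sizes and modes do not bind:
other = dict(mypvc01, storageClassName=None)
print(can_bind(pv0003, other))  # False
```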

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$vim pod_volunms-pv.yaml

      pod_volunms-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: vms81.liruilongs.github.io

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volunms-pv.yaml
persistentvolume/pv0003 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv -A
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Available           slow                    8s

      pod_volumes-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc01
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: slow

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumes-pvc.yaml
persistentvolumeclaim/mypvc01 created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc -A
NAMESPACE                 NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
liruilong-volume-create   mypvc01   Bound    pv0003   5Gi        RWO            slow           5s

Using persistent storage

Using the PVC inside a Pod:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: mypvc01
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_volumespvc.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods -owide
NAME           READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
podvolumepvc   1/1     Running   0          15s   10.244.171.184   vms82.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl exec -it podvolumepvc -- sh
# ls
bin   dev                  docker-entrypoint.sh  home  lib64      media  opt   root  sbin  sys  usr
boot  docker-entrypoint.d  etc                   lib   liruilong  mnt    proc  run   srv   tmp  var
# cd liruilong
# ls
runc-process838092734
systemd-private-66344110bb03430193d445f816f4f4c4-chronyd.service-SzL7id
systemd-private-6cf1f72056ed4482a65bf89ec2a130a9-chronyd.service-5m7c2i
systemd-private-b1dc4ffda1d74bb3bec5ab11e5832635-chronyd.service-cPC3Bv
systemd-private-bb19f3d6802e46ab8dcb5b88a38b41b8-chronyd.service-cjnt04
#

PV reclaim policy

      persistentVolumeReclaimPolicy: Recycle

With Recycle, deleting the PVC spawns a recycler pod that scrubs the data on the volume, after which the PV becomes Available again.
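For reference, the recycler is just a pod that mounts the volume and wipes it. The Kubernetes docs show a customizable recycler pod template along these lines (a sketch; names and the exact scrub command may differ in your cluster, and the Recycle policy is deprecated in newer releases in favor of dynamic provisioning):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pv-recycler
  namespace: default
spec:
  restartPolicy: Never
  volumes:
  - name: vol
    hostPath:
      path: /any/path/it/will/be/replaced
  containers:
  - name: pv-recycler
    image: busybox:1.27
    # Remove everything under the mount, then verify it is empty
    command: ["/bin/sh", "-c", "test -e /scrub && rm -rf /scrub/..?* /scrub/.[!.]* /scrub/* && test -z \"$(ls -A /scrub)\" || exit 1"]
    volumeMounts:
    - name: vol
      mountPath: /scrub
```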

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Bound    liruilong-volume-create/mypvc01   slow                    131m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pv pv0003
..................
Events:
  Type    Reason       Age   From                         Message
  ----    ------       ----  ----                         -------
  Normal  RecyclerPod  53s   persistentvolume-controller  Recycler pod: Successfully assigned default/recycler-for-pv0003 to vms82.liruilongs.github.io
  Normal  RecyclerPod  51s   persistentvolume-controller  Recycler pod: Pulling image "busybox:1.27"
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0003   5Gi        RWO            Recycle          Available           slow                    136m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$

Dynamic volume provisioning: storageClass

With a storageClass, PV creation is handled dynamically: the administrator only needs to create the storageClass, and when a user creates a PVC the PV is created automatically. When a PVC is created, the system notifies the storageClass, which obtains the backend storage type from its associated provisioner and dynamically creates a PV to bind to that PVC.
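As a toy model of this flow (plain Python with hypothetical names; nothing here is real Kubernetes code): creating a PVC that names a storageClass invokes the class's provisioner, which creates a PV and binds it to the claim.

```python
# Toy simulation of dynamic provisioning: a storageClass maps to a
# provisioner function; creating a PVC that names the class triggers
# the provisioner to create a PV and bind it to the claim.
import itertools

_pv_counter = itertools.count(1)

def provision_nfs(pvc):
    """Hypothetical provisioner: returns a PV dict sized to the claim."""
    return {"name": f"pvc-auto-{next(_pv_counter)}",
            "storage": pvc["storage"],
            "boundTo": pvc["name"]}

storage_classes = {"managed-nfs-storage": provision_nfs}

def create_pvc(pvc):
    """Creating a PVC dynamically creates and binds a PV via its class."""
    provisioner = storage_classes[pvc["storageClassName"]]
    pv = provisioner(pvc)
    pvc["status"], pvc["volume"] = "Bound", pv["name"]
    return pv

pvc = {"name": "pvc-nfs", "storageClassName": "managed-nfs-storage", "storage": "20Mi"}
pv = create_pvc(pvc)
print(pvc["status"], "->", pv["name"])  # Bound -> pvc-auto-1
```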


A storageClass definition must include a provisioner; different provisioners specify which backend storage is used when a PV is created dynamically.

A provisioner that uses AWS EBS as the PV backend storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: io1
  iopsPerGB: "10"
  fsType: ext4

A provisioner that uses LVM as the PV backend storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-lvm
provisioner: lvmplugin.csi.alibabacloud.com
parameters:
  vgName: volumegroup1
  fsType: ext4
reclaimPolicy: Delete

Using hostPath as the PV backend storage:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
#volumeBindingMode: Immediate
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

Of the provisioners used in the three examples above, some are built into Kubernetes, such as kubernetes.io/aws-ebs; the other two are not shipped with Kubernetes. The built-in provisioners are:

      kubernetes.io/aws-ebs

      kubernetes.io/gce-pd

      kubernetes.io/glusterfs

      kubernetes.io/cinder

      kubernetes.io/vsphere-volume

      kubernetes.io/rbd

      kubernetes.io/quobyte

      kubernetes.io/azure-disk

      kubernetes.io/azure-file

      kubernetes.io/portworx-volume

      kubernetes.io/scaleio

      kubernetes.io/storageos

      kubernetes.io/no-provisioner

When creating PVs dynamically, choose a provisioner appropriate to the backend storage in use. Provisioners such as lvmplugin.csi.alibabacloud.com and hostpath.csi.k8s.io are not shipped with Kubernetes; they are called external provisioners. They are provided by third parties and are implemented through a custom CSIDriver (Container Storage Interface driver).

So the overall flow is: when the administrator creates the storageClass, the provisioner field specifies the provisioner. Once the storageClass exists, a user defining a PVC selects which storageClass to use via .spec.storageClassName.

Dynamic volume provisioning with NFS

Create a directory /vdisk and export it over NFS.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$cat /etc/exports
/liruilong *(rw,sync,no_root_squash)
/tmp *(rw,sync,no_root_squash)
┌──[root@vms81.liruilongs.github.io]-[~]
└─$echo "/vdisk *(rw,sync,no_root_squash)" >>/etc/exports
┌──[root@vms81.liruilongs.github.io]-[~]
└─$exportfs -avr
exporting *:/vdisk
exportfs: Failed to stat /vdisk: No such file or directory
exporting *:/tmp
exporting *:/liruilong
┌──[root@vms81.liruilongs.github.io]-[/]
└─$mkdir vdisks

Because Kubernetes has no built-in NFS provisioner, the relevant plugin must be downloaded to create an NFS external provisioner.

Plugin repository:

      https://github.com/kubernetes-incubator/external-storage.git

rbac.yaml deploys the RBAC permissions; replace the namespace with your own.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: liruilong-volume-create
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: liruilong-volume-create
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Because the NFS provisioner is not built in, it has to be created first.

Configuration note: on Kubernetes 1.20 and later, the kube-apiserver needs the flag - --feature-gates=RemoveSelfLink=false for this provisioner to work.

┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$pwd
/etc/kubernetes/manifests
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$head -n 20 kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.26.81:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.26.81
    - --feature-gates=RemoveSelfLink=false
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
┌──[root@vms81.liruilongs.github.io]-[/etc/kubernetes/manifests]
└─$

      deployment.yaml

Because we are working in the liruilong-volume-create namespace, change the namespace value to liruilong-volume-create.

The image referenced after image must be pulled on all nodes in advance, and the image pull policy adjusted accordingly.

In the env section, PROVISIONER_NAME names the provisioner (here fuseim.pri/ifs), while NFS_SERVER and NFS_PATH specify the storage this provisioner uses.

In volumes, server and path specify the NFS server and the shared directory.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: liruilong-volume-create
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 192.168.26.81
            - name: NFS_PATH
              value: /vdisk
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.26.81
            path: /vdisk

Deploy the NFS provisioner and check that the pod is running:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f deployment.yaml
deployment.apps/nfs-client-provisioner created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-cz6hh   1/1     Running   0          73s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

With the NFS provisioner created, now create a storageClass that uses it.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
No resources found
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl apply -f class.yaml
storageclass.storage.k8s.io/managed-nfs-storage created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  3s

      class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Here the provisioner value fuseim.pri/ifs is the provisioner name specified in deployment.yaml. This manifest creates a storageClass named managed-nfs-storage that uses the fuseim.pri/ifs provisioner.

Next, create the PVC.

      pvc_nfs.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Mi
  storageClassName: "managed-nfs-storage"

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f ./pvc_nfs.yaml
persistentvolumeclaim/pvc-nfs created

Check what was created:

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-7k6gm   1/1     Running   0          35s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get sc
NAME                  PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   fuseim.pri/ifs   Delete          Immediate           false                  30s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
pvc-nfs   Bound    pvc-b12e988a-8b55-4d48-87cf-998500df16f8   20Mi       RWX            managed-nfs-storage   28s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                             STORAGECLASS          REASON   AGE
pvc-b12e988a-8b55-4d48-87cf-998500df16f8   20Mi       RWX            Delete           Bound    liruilong-volume-create/pvc-nfs   managed-nfs-storage            126m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create/nfsdy]
└─$

Using the claimed PVC

      pod_storageclass.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podvolumepvc
  name: podvolumepvc
spec:
  volumes:
  - name: volumes1
    persistentVolumeClaim:
      claimName: pvc-nfs
  containers:
  - image: nginx
    name: podvolumehostpath
    resources: {}
    volumeMounts:
    - mountPath: /liruilong
      name: volumes1
    imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl apply -f pod_storageclass.yaml
pod/podvolumepvc created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl get pods
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-65b5569d76-7k6gm   1/1     Running   0          140m
podvolumepvc                              1/1     Running   0          7s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-volume-create]
└─$kubectl describe pods podvolumepvc | grep -A 4 Volumes:
Volumes:
  volumes1:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  pvc-nfs
    ReadOnly:   false

Other volume types

      gcePersistentDisk

This volume type stores the Volume's data on a persistent disk (PersistentDisk, PD) provided by Google's public cloud. Unlike emptyDir, the contents of a PD are preserved permanently: when the Pod is deleted, the PD is merely unmounted, not deleted. Note that you must create the PD before you can use gcePersistentDisk.
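A minimal sketch of a Pod using such a disk (the pod and disk names here are invented for illustration; the disk my-data-disk would have to exist already in the same GCE project and zone as the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    gcePersistentDisk:
      pdName: my-data-disk  # pre-created persistent disk
      fsType: ext4
```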

      awsElasticBlockStore

Similar to GCE, this volume type stores data on an Amazon EBS Volume provided by AWS; the EBS Volume must be created before awsElasticBlockStore can be used.
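Analogously, a hedged sketch for EBS (the volumeID placeholder stays as a placeholder and must be replaced with the ID of an EBS volume created beforehand in the same availability zone as the node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /test-ebs
      name: test-volume
  volumes:
  - name: test-volume
    awsElasticBlockStore:
      volumeID: "<volume-id>"  # pre-created EBS volume
      fsType: ext4
```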


