Overview

Anyone who has worked with PV and PVC will quickly hit the same question: do we really have to create a PV by hand every time we request a PVC? That is far too inconvenient. So how do we get shared storage that behaves the way it does on a public or private cloud? Kubernetes provides the StorageClass concept for exactly this, so let's take a closer look.
A diagram makes the overall flow easier to grasp:
Environment

Kubernetes cluster: every node (the host machines) must have the NFS client installed.
```bash
[root@node-1 ~]# yum -y install nfs-utils
[root@node-2 ~]# yum -y install nfs-utils
```
NFS environment
Set up the NFS server:

```bash
yum -y install rpcbind nfs-utils
systemctl start rpcbind
systemctl start nfs
systemctl enable rpcbind
systemctl enable nfs
mkdir /home/nfsfile                 # directory to be exported
chmod -R 777 /home/nfsfile
cd /home/nfsfile
echo "This is a test file" > /home/nfsfile/test.txt   # test file used later to verify the mount
```
Then edit the exports file:

```bash
vi /etc/exports
```
```
/home/nfsfile *(rw,sync,root_squash,insecure)
```
This line exports the shared directory /home/nfsfile to * (i.e. any client IP). The options in parentheses are permission parameters:

rw: the directory is readable and writable.
sync: data is written synchronously to memory and disk; async, by contrast, buffers data in memory first instead of writing it straight to disk.
root_squash: when an NFS client connects as root, it is mapped to an anonymous user on the exported directory; the opposite, no_root_squash, would let a root client keep root privileges on the share.
no_all_squash: whichever user the NFS client connects as, it is not mapped to the anonymous user on the exported directory.
insecure: allows client connections originating from ports above 1024.

Check the export list on the server:

```bash
showmount -e localhost
```
```
Export list for localhost:
/home/nfsfile *
```
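If showmount does not list the new export after editing /etc/exports, the export table usually needs to be reloaded first; a minimal sketch:

```bash
exportfs -ra    # re-read /etc/exports without restarting the NFS service
exportfs -v     # show what is currently exported and with which options
```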
Verify NFS from the client

Run the following command on the client:

```bash
showmount -e 10.8.111.153
```
```
Export list for 10.8.111.153:
/home/nfsfile *
```
Mount the shared directory on the client:

```bash
mkdir /root/nfsfile                                       # create a mount point on the client
mount -t nfs 10.8.111.153:/home/nfsfile /root/nfsfile     # mount the server's shared directory onto the new mount point
```
Verify that the mount succeeded:

```bash
cd /root/nfsfile    # you should see the test.txt file created earlier on the server
cat test.txt        # the content matches the server-side file, so the NFS share works
```
Finally, if you want the share mounted permanently (i.e. mounted automatically at boot), you can do it like this:

```bash
echo "mount -t nfs 10.8.111.153:/home/nfsfile /root/nfsfile" >> /etc/rc.d/rc.local   # append the mount command to rc.local
```
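Note that on CentOS 7 the rc.local approach only takes effect if the script is executable; an fstab entry is a common alternative. A sketch:

```bash
chmod +x /etc/rc.d/rc.local    # rc.local is not executable by default on CentOS 7

# Alternative: mount at boot via fstab instead of rc.local
echo "10.8.111.153:/home/nfsfile /root/nfsfile nfs defaults 0 0" >> /etc/fstab
```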
Mount NFS directly in a Pod

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
      volumes:
      - name: wwwroot
        nfs:
          server: 10.8.111.153
          path: "/home/nfsfile/www"
```
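The path /home/nfsfile/www must already exist on the NFS server, otherwise the Pod will fail to mount the volume. Assuming the manifest is saved as nginx-nfs.yaml (a file name chosen here for illustration), a quick check might look like:

```bash
mkdir -p /home/nfsfile/www                          # on the NFS server: create the exported subdirectory
kubectl apply -f nginx-nfs.yaml                     # deploy the nginx Deployment
kubectl get pods -l name=nginx                      # wait for the Pod to reach Running
echo "hello nfs" > /home/nfsfile/www/index.html     # on the NFS server: write a test page served by nginx
```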
Using StorageClass, PV, and PVC

rbac.yaml
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storages
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: storages
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storages
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: storages
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: storages
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
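These manifests place everything in the storages namespace, so that namespace has to exist before they are applied; a minimal sketch:

```bash
kubectl create namespace storages                               # the namespace referenced in rbac.yaml
kubectl apply -f rbac.yaml
kubectl -n storages get serviceaccount nfs-client-provisioner   # confirm the ServiceAccount was created
```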
nfs-subdir-external-provisioner.yaml
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          # image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          image: k8s.dockerproxy.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              # value: <YOUR NFS SERVER HOSTNAME>
              value: 10.8.111.153
            - name: NFS_PATH
              # value: /var/nfs
              value: /home/nfsfile
      volumes:
        - name: nfs-client-root
          nfs:
            # server: <YOUR NFS SERVER HOSTNAME>
            server: 10.8.111.153
            path: /home/nfsfile
```
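The Deployment above sets no namespace, but its serviceAccountName refers to the ServiceAccount created in the storages namespace, so it should be applied into that namespace; a sketch of deploying and checking it:

```bash
kubectl -n storages apply -f nfs-subdir-external-provisioner.yaml
kubectl -n storages get pods -l app=nfs-client-provisioner   # should show 1/1 Running
kubectl -n storages logs deploy/nfs-client-provisioner       # provisioner logs, useful if PVCs stay Pending
```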
nfs-storage-class.yaml
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, but it must match the deployment's PROVISIONER_NAME env
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # "${.PVC.namespace}/${.PVC.name}" can be used instead to name the real directory on the NFS server after the PVC
  onDelete: delete
```
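Apply it and confirm the class is registered; assuming the file name above:

```bash
kubectl apply -f nfs-storage-class.yaml
kubectl get storageclass nfs-client   # PROVISIONER column should show k8s-sigs.io/nfs-subdir-external-provisioner
```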
nfs-test-pvc.yaml
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  annotations:
    nfs.io/storage-path: "test-path" # not required, depending on whether this annotation was shown in the storage class description
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```
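Once the PVC is applied, the provisioner should create and bind a PV for it automatically. Because of the pathPattern above, the backing directory on the NFS server ends up at <namespace>/<storage-path annotation>, e.g. default/test-path if the PVC is created in the default namespace (assumed here):

```bash
kubectl apply -f nfs-test-pvc.yaml
kubectl get pvc test-pvc     # STATUS should change from Pending to Bound
kubectl get pv               # a dynamically provisioned PV appears for the claim
ls /home/nfsfile/default/    # on the NFS server: the test-path directory should exist
```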
nfs-test-nginx-pod.yaml
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nginx-pod
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: nginx-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nginx-data
      persistentVolumeClaim:
        claimName: test-pvc
```
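A quick end-to-end check, assuming the file name above and a PVC in the default namespace: write a page through the Pod and confirm it lands in the PVC's directory on the NFS server:

```bash
kubectl apply -f nfs-test-nginx-pod.yaml
kubectl exec test-nginx-pod -- sh -c 'echo "hello from storageclass" > /usr/share/nginx/html/index.html'
cat /home/nfsfile/default/test-path/index.html   # on the NFS server: the file written inside the Pod shows up here
```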
Other references

https://blog.51cto.com/u_16175526/6718397
https://blog.51cto.com/u_16213459/7344688
https://blog.csdn.net/qq_30051761/article/details/131055705