Kubernetes StorageClass

PVs are static: to use a PVC, someone must first hand-create a matching PV. This largely fails to meet real needs. For example, one application may need high storage concurrency while another needs high read/write throughput, and for StatefulSet workloads in particular, hand-crafted static PVs quickly become unmanageable. In those cases we need dynamically provisioned PVs, which is what StorageClass provides.
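For contrast, this is roughly what would have to be hand-written for every single claim under the static model (the name, capacity, and subdirectory below are illustrative; the NFS server matches the one used later in this doc):

```yaml
# A hand-created static PV -- one of these per PVC when no StorageClass is used
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv-1            # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.11.11      # same NFS server used in the provisioner below
    path: /data/k8s/manual-pv-1  # illustrative subdirectory
```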

Official documentation: https://kubernetes.io/zh/docs/concepts/storage/storage-classes/

To use a StorageClass, a matching automatic provisioner must be installed. Since the storage backend here is NFS, we need the nfs-client automatic provisioner (also called the Provisioner). It uses the NFS server we have already configured to create persistent volumes automatically, i.e., it creates PVs for us.
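Before deploying the provisioner, it is worth confirming that the NFS export is reachable from the cluster nodes. A sketch, assuming the server and path used throughout this doc (192.168.11.11:/data/k8s):

```shell
# On the NFS server: an illustrative /etc/exports line for the shared directory
#   /data/k8s  *(rw,sync,no_root_squash)

# On each Kubernetes node: verify the export is visible and mountable
showmount -e 192.168.11.11
mount -t nfs 192.168.11.11:/data/k8s /mnt && umount /mnt
```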

Write the Deployment, replacing the relevant parameters with our own NFS configuration:

vim nfs-client.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # note this value
            - name: NFS_SERVER
              value: 192.168.11.11
            - name: NFS_PATH
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.11.11
            path: /data/k8s

Replace the NFS_SERVER and NFS_PATH environment variables (and the nfs volume configuration below them) with your own values. Note that the Deployment uses a ServiceAccount named nfs-client-provisioner, so we also need to create that ServiceAccount and bind it to the appropriate permissions:

vim nfs-client-sa.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update","patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch","patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

# This creates a ServiceAccount named nfs-client-provisioner and binds it to a ClusterRole named nfs-client-provisioner-runner. That ClusterRole grants the permissions the provisioner needs, including create/delete/get/list/watch on persistentvolumes, which is what allows this ServiceAccount to create PVs automatically.
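You can sanity-check the binding with kubectl auth can-i, impersonating the ServiceAccount (assuming it lives in the default namespace, as the ClusterRoleBinding above declares):

```shell
# Both should print "yes" once the ClusterRoleBinding is in place
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner
kubectl auth can-i update persistentvolumeclaims \
  --as=system:serviceaccount:default:nfs-client-provisioner
```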

With the nfs-client Deployment declared, we can now create a StorageClass object:

vim nfs-client-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs # must match the PROVISIONER_NAME defined in the Deployment above

Apply all three manifests:

kubectl create -f nfs-client.yaml
kubectl create -f nfs-client-sa.yaml
kubectl create -f nfs-client-class.yaml

kubectl get storageclass
NAME                 PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
course-nfs-storage   fuseim.pri/ifs     Delete          Immediate              false                  111s
local (default)      openebs.io/local   Delete          WaitForFirstConsumer   false                  17d
# Immediate mode means volume binding and dynamic provisioning happen as soon as the PersistentVolumeClaim is created
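By contrast, WaitForFirstConsumer delays binding and provisioning until a Pod actually uses the PVC, which matters for topology-constrained storage. A sketch of how it would be set on a StorageClass (illustrative only; the nfs-client provisioner is normally fine with the default Immediate mode):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: delayed-nfs-storage        # illustrative name
provisioner: fuseim.pri/ifs
volumeBindingMode: WaitForFirstConsumer  # provision only once a Pod consumes the PVC
```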

The StorageClass resource object was created successfully; let's test dynamic PV provisioning.

There are two ways to use the StorageClass object created above to automatically provision a suitable PV:

Method 1: declare the StorageClass on the PVC object itself. Here this is done with an annotation (the volume.beta.kubernetes.io/storage-class annotation is the legacy form; on current clusters you can set spec.storageClassName instead), as follows:

vim test-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
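After applying this PVC, the provisioner should create a matching PV within seconds. A quick way to check (resource names in your cluster will differ):

```shell
kubectl create -f test-pvc.yaml
# The PVC should reach Bound, backed by an auto-generated pvc-<uid> PV
kubectl get pvc test-pvc
kubectl get pv
```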

Method 2: make the course-nfs-storage StorageClass the default storage backend for Kubernetes, using kubectl patch to update it (not recommended):

kubectl patch storageclass course-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
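Once a default StorageClass is set, any PVC that names no class at all is provisioned by it. A minimal sketch (the PVC name is illustrative):

```yaml
# With course-nfs-storage as the cluster default, this PVC needs no
# storage-class annotation or storageClassName -- the default class serves it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc-default   # illustrative name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```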

Troubleshooting:

You may find the PVC stuck in Pending; the cause can be found in the provisioner's logs:

[root@k8s-master StorageClass]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
jenkins-0                                 2/2     Running   1          5d21h
nfs-client-provisioner-7644c576cb-tsb7m   1/1     Running   0          10m
[root@k8s-master StorageClass]# kubectl logs nfs-client-provisioner-7644c576cb-tsb7m
E0918 14:57:33.663695       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"fuseim.pri-ifs", GenerateName:"", Namespace:"default", SelfLink:"", UID:"03f29cb7-363a-47b9-b182-6b58fde2dd66", ResourceVersion:"2445168", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830645854, loc:(*time.Location)(0x1956800)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"nfs-client-provisioner-7644c576cb-tsb7m_b2b489ea-5633-11ee-831f-0a67566a45da\",\"leaseDurationSeconds\":15,\"acquireTime\":\"2023-09-18T14:57:33Z\",\"renewTime\":\"2023-09-18T14:57:33Z\",\"leaderTransitions\":0}"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'nfs-client-provisioner-7644c576cb-tsb7m_b2b489ea-5633-11ee-831f-0a67566a45da became leader'

The relevant part of the error is: can't make reference
The apiserver in Kubernetes 1.20 deprecated the selfLink field, which breaks this (older) nfs-client provisioner and leaves dynamic provisioning unusable.
Workaround: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/issues/25

vim /etc/kubernetes/manifests/kube-apiserver.yaml
- kube-apiserver
- --feature-gates=RemoveSelfLink=false    # add this line

Note: the RemoveSelfLink feature gate was removed in Kubernetes 1.24, so this workaround only applies up to 1.23; on newer clusters, use a current nfs-subdir-external-provisioner image, which no longer depends on selfLink.
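Since kube-apiserver runs as a static Pod, kubelet recreates it automatically once the manifest file is saved. You can confirm the restart and then re-check the claim (assuming the PVC from earlier was named test-pvc):

```shell
# kubelet restarts the static Pod on its own after the manifest changes
kubectl -n kube-system get pod -l component=kube-apiserver
kubectl get pvc test-pvc   # should now move from Pending to Bound
```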