Cloud Native Storage (CNS) on vSphere

This topic explains how to integrate Cloud Native Storage (CNS) with VMware Tanzu Kubernetes Grid Integrated Edition using the Container Storage Interface (CSI). This integration enables TKGI clusters to use external container storage.

This topic provides procedures for installing CSI on a TKGI cluster, verifying the installation, and resizing persistent volumes.

Overview

Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle.

vSphere storage backs the volumes, and you can set a storage policy directly on the volumes. After you create the volumes, you can use the vSphere client to review the volumes and their backing virtual disks, and monitor their storage policy compliance.

To create persistent volumes using CNS on vSphere, see Manually Install CSI on a TKGI Cluster below.

To resize a persistent volume, see Resize a Persistent Volume below.

For more information, see Getting Started with VMware Cloud Native Storage.

Note: The following procedures describe configuration of CNS v2.0 and later. To configure CNS v1.0.2, see Cloud Native Storage (CNS) on vSphere in the TKGI v1.8 documentation.

Prerequisites for CNS with TKGI

  • CNS v2.0 or later
  • vSphere v6.7U3 or later, or vSphere v7.0 or later
  • TKGI v1.7 or later
    • Supports upgrading the virtual hardware version on Kubernetes cluster VMs
  • Firewall and network configuration:
    • Enable the following components to access vCenter:
      • Cluster master nodes
      • Cluster worker nodes, so their CSI components can provision their disks
      • All Pods running CSI components
  • TKGI plan configuration:
    • In the TKGI tile, configure a Plan with the Allow Privileged checkbox enabled, so containers run in privileged mode

Manually Install CSI on a TKGI Cluster

This section provides procedures for manually installing CSI on a TKGI cluster. Installing CSI on a TKGI cluster requires the following steps:

Create a TKGI cluster

To create a TKGI-provisioned Kubernetes cluster:

  1. Create a cluster using tkgi create-cluster:

    tkgi create-cluster CLUSTER-NAME --external-hostname EXTERNAL-HOSTNAME  \
    --plan PLAN-NAME --num-nodes 3 --network-profile single-tier-profile
    

    Where:

    • CLUSTER-NAME is the name you want to apply to your new cluster.
    • EXTERNAL-HOSTNAME is the address from which to access the Kubernetes API.
    • PLAN-NAME is the name of the plan to base the new cluster on. To provide the ability to resize the cluster’s persistent volumes, choose a plan where Allow Privileged is selected. For more information, see Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere with NSX-T.

    For example:

    > tkgi create-cluster tkgi-cluster-5-shared-t1 --external-hostname tkgi-cluster-5-shared-t1 --plan large --num-nodes 3 --network-profile single-tier-profile
    
  2. Confirm that all of the VMs in the Kubernetes cluster use virtual hardware version 15.

    VM Instance Summary pane in vCenter
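
    If you prefer a command-line check over the vSphere Client, the following is a minimal sketch using govc. It assumes govc is installed and that the GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD, and GOVC_INSECURE environment variables are set; VM-NAME is a placeholder for a cluster VM name, and because JSON field names vary across govc versions, the command only greps for the hardware version string:

    # Print the VM's virtual hardware version string, for example "vmx-15"
    govc vm.info -json VM-NAME | grep -o '"vmx-[0-9]*"' | head -1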

Create a CSI Secret

To create a CSI secret:

  1. Create the following configuration file csi-vsphere.conf anywhere in your system:

    [Global]
    cluster-id = CLUSTER-ID
    
    [VirtualCenter "VCENTER-ADDRESS"]
    insecure-flag = "true"
    user = "USERNAME"
    password = "PASSWORD"
    port = "PORT"
    datacenters = "DATACENTERS"
    

    Where:

    • CLUSTER-ID is a unique identifier for the cluster, such as the cluster name.
    • VCENTER-ADDRESS is the IP address of your vCenter.
    • insecure-flag, when set to "true", disables validation of the vCenter server certificate.
    • USERNAME is the username of the account used to manage vCenter resources.
    • PASSWORD is the password for the vCenter user.
    • PORT is the port used to reach vCenter.
    • DATACENTERS is the name of your datacenter.

    For example:

    [Global]
    cluster-id = CLUSTER-ID
    
    [VirtualCenter "10.1.1.1"]
    insecure-flag = "true"
    user = "administrator@vsphere.local"
    password = "abc123!"
    port = "443"
    datacenters = "vSAN_Datacenter"
    
  2. Create a secret based on the configuration file:

    kubectl create secret generic vsphere-config-secret \
    --from-file=csi-vsphere.conf --namespace=kube-system
    

    For example:

    > kubectl create secret generic vsphere-config-secret \
    --from-file=csi-vsphere.conf --namespace=kube-system
    secret/vsphere-config-secret created

  3. Confirm that the secret exists:

    kubectl get secret/vsphere-config-secret -n kube-system
    

    For example:

    > kubectl get secret/vsphere-config-secret -n kube-system
    NAME                    TYPE     DATA   AGE
    vsphere-config-secret   Opaque   1      37s

For more information about secrets, see Secrets in the Kubernetes documentation.
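
If you want to confirm the contents of the secret, the following optional check is a minimal sketch; it assumes the secret was created from a file named csi-vsphere.conf, as in the step above, and decodes that key from the secret:

    kubectl get secret vsphere-config-secret -n kube-system \
      -o jsonpath='{.data.csi-vsphere\.conf}' | base64 --decode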

Create RBAC for CSI Access

To create role-based access control (RBAC) for the CSI driver:

  1. Create a manifest vsphere-csi-controller-rbac.yaml that defines RBAC for the CSI driver with a ServiceAccount, ClusterRole, and ClusterRoleBinding.

    • For an example manifest template, use the RBAC Manifest template below.
  2. Create the RBAC objects:

    kubectl apply -f vsphere-csi-controller-rbac.yaml 
    

    For example:

    > kubectl apply -f vsphere-csi-controller-rbac.yaml
    serviceaccount/vsphere-csi-controller created
    clusterrole.rbac.authorization.k8s.io/vsphere-csi-controller-role created
    clusterrolebinding.rbac.authorization.k8s.io/vsphere-csi-controller-binding created
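
    To optionally confirm that the RBAC objects exist, list them by the names defined in the RBAC Manifest below:

    kubectl get serviceaccount vsphere-csi-controller -n kube-system
    kubectl get clusterrole vsphere-csi-controller-role
    kubectl get clusterrolebinding vsphere-csi-controller-binding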

Install the vSphere CSI Driver

To install the vSphere CSI driver:

  1. Create a manifest vsphere-csi-controller-deployment.yaml that defines the Deployment and CSIDriver objects for installing the CSI controller.

    • For an example manifest template, use the CSI Driver Manifest template below.

  2. Create the CSI driver objects:

    kubectl apply -f vsphere-csi-controller-deployment.yaml
    

    For example:

    > kubectl apply -f vsphere-csi-controller-deployment.yaml
    deployment.apps/vsphere-csi-controller created
    csidriver.storage.k8s.io/csi.vsphere.vmware.com created

  3. Create a manifest vsphere-csi-node-ds.yaml that defines the DaemonSet for the vSphere CSI node driver.

    • For an example manifest template, use the DaemonSet Manifest template below.

  4. Create the DaemonSet:

    kubectl apply -f vsphere-csi-node-ds.yaml
    

    For example:

    > kubectl apply -f vsphere-csi-node-ds.yaml
    daemonset.apps/vsphere-csi-node created

Verify a CSI Installation

This section provides procedures for verifying the CSI installation. Verifying CSI on a TKGI cluster requires the following steps:

Verify that CSI Deployed Successfully

To verify that CSI was successfully deployed:

  1. Run the following kubectl commands:

    kubectl get deployment --namespace=kube-system
    kubectl get daemonsets DAEMONSET-NAME --namespace=kube-system
    kubectl get pods --namespace=kube-system
    

    Where DAEMONSET-NAME is the name of the CSI node DaemonSet, for example vsphere-csi-node.

    For example:

    > kubectl get deployment --namespace=kube-system 
    
    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    coredns                  3/3     3            3           45h
    metrics-server           1/1     1            1           45h
    vsphere-csi-controller   1/1     1            1           7m32s
    
    > kubectl get daemonsets vsphere-csi-node --namespace=kube-system  
    NAME               DESIRED   CURRENT   READY   UP-TO-DATE  AVAILABLE   NODE SELECTOR   AGE  
    vsphere-csi-node   3         3         3       3           3           <none>          13m
    
    > kubectl get pods --namespace=kube-system  
    NAME                                    READY   STATUS    RESTARTS  AGE  
    coredns-6f9bcd8956-9x7wn                1/1     Running   0         22h  
    coredns-6f9bcd8956-clksx                1/1     Running   0         53m  
    coredns-6f9bcd8956-rjmfl                1/1     Running   0         22h  
    kubernetes-dashboard-5fc4ccc79f-kwnd4   1/1     Running   0         22h  
    metrics-server-7f85c59675-vz4bs         1/1     Running   0         22h  
    vsphere-csi-controller-0                5/5     Running   0         14m  
    vsphere-csi-node-64dmx                  3/3     Running   0         13m
    vsphere-csi-node-nnldx                  3/3     Running   0         13m  
    vsphere-csi-node-pdbb5                  3/3     Running   0         13m
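
    If vsphere-csi-controller-0 does not reach the Running state, you can inspect the driver container's logs. This sketch uses the controller Pod name from the example output above and the vsphere-csi-controller container name defined in the CSI Driver Manifest below; in your cluster, use the Pod name returned by kubectl get pods:

    kubectl logs vsphere-csi-controller-0 -n kube-system -c vsphere-csi-controller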
    

Verify that All Pods Can Access vCenter

To identify the CSI Pods in the kube-system namespace that must be able to access vCenter:

  1. Run the following kubectl command:

    kubectl get pod -n kube-system -o wide
    

    For example:

    > kubectl get pod -n kube-system -o wide
    NAME                     READY   STATUS   RESTARTS  AGE     IP             NODE                                  NOMINATED NODE   READINESS GATES
    vsphere-csi-controller-0 5/5     Running  5         24h     172.16.30.3    c4e3819f-00fc-457b-beda-26fbdd589c53  <none>           <none>
    vsphere-csi-node-64dmx   3/3     Running  3         24h     172.16.30.9    c9b4f441-4c08-43cf-bb17-8be80ed676a4  <none>           <none>
    vsphere-csi-node-6bp4x   3/3     Running  0         5h15m   172.16.30.13   80dc0ceb-d460-4538-99b6-d7d8eacc4b74  <none>           <none>
    vsphere-csi-node-92tvh   3/3     Running  3         5h21m   172.16.30.11   651bfb0d-084d-481f-98f6-f811284676ef  <none>           <none>
    vsphere-csi-node-l4v2q   3/3     Running  0         4h33m   172.16.30.17   9233bd53-7e94-4b94-a197-f459ccdc25dd  <none>           <none>
    vsphere-csi-node-nnldx   3/3     Running  3         24h     172.16.30.8    c4e3819f-00fc-457b-beda-26fbdd589c53  <none>           <none>
    vsphere-csi-node-pw7hw   3/3     Running  3         5h18m   172.16.30.12   1b84ae77-45bf-4fa1-9f0b-cbc13bb75894  <none>           <none>
    
  2. Review the returned list for the following Pods:

    • vsphere-csi-controller-0
    • vsphere-csi-node-64dmx
    • vsphere-csi-node-6bp4x
    • vsphere-csi-node-92tvh
    • vsphere-csi-node-l4v2q
    • vsphere-csi-node-nnldx
    • vsphere-csi-node-pw7hw

    All of the Pods listed above must be able to access vCenter. This means that the floating IP address allocated to the SNAT rule for this namespace on the T0 router (or the T1 router, if the shared Tier-1 model is used) must be able to reach vCenter.
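
    One way to spot-check this connectivity is to attempt an HTTPS connection to vCenter from inside the controller Pod. The following is only a rough sketch: it assumes the container image provides a shell and curl, which is not guaranteed, and it uses the vCenter address from the example configuration above (10.1.1.1):

    kubectl exec -n kube-system vsphere-csi-controller-0 -c vsphere-csi-controller -- \
      curl -k -s -o /dev/null -w "%{http_code}\n" https://10.1.1.1:443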

Verify that CSI Custom Resource Definitions are Working

To validate that the CSI Custom Resource definitions are working:

  1. To validate the status of the CSI nodes:

    kubectl get CSINode
    

    For example:

    > kubectl get CSINode
    NAME                                   CREATED AT
    3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b   2020-03-05T22:29:00Z
    c4e3819f-00fc-457b-beda-26fbdd589c53   2020-03-05T22:28:57Z
    c9b4f441-4c08-43cf-bb17-8be80ed676a4   2020-03-05T22:29:00Z

  2. To validate the status of the CSI drivers:

    kubectl get csidrivers
    kubectl describe csidrivers
    

    For example:

    > kubectl get csidrivers
    NAME                     CREATED AT
    csi.vsphere.vmware.com   2020-03-05T22:28:21Z  

    > kubectl describe csidrivers
    Name:         csi.vsphere.vmware.com
    Namespace:
    Labels:
    Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                    {"apiVersion":"storage.k8s.io/v1beta1","kind":"CSIDriver","metadata":{"annotations":{},"name":"csi.vsphere.vmware.com"},"spec":{"attachReq...
    API Version:  storage.k8s.io/v1beta1
    Kind:         CSIDriver
    Metadata:
      Creation Timestamp:  2020-03-05T22:28:21Z
      Resource Version:    153505
      Self Link:           /apis/storage.k8s.io/v1beta1/csidrivers/csi.vsphere.vmware.com
      UID:                 906e512c-c897-40cb-8c97-9975fce2fcf8
    Spec:
      Attach Required:    true
      Pod Info On Mount:  false
    Events:
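
    To confirm that the driver is registered on a specific node, you can also read the drivers list from that node's CSINode object, where NODE-NAME is one of the node names returned by kubectl get CSINode; the expected output is csi.vsphere.vmware.com:

    kubectl get csinode NODE-NAME -o jsonpath='{.spec.drivers[*].name}'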

Verify that ProviderID was Added to Nodes

To verify that a ProviderID was added to your nodes:

  1. Run the following:

    kubectl describe nodes | grep "ProviderID"
    

    For example:

    > kubectl describe nodes | grep "ProviderID"
    ProviderID:                  vsphere://421025c3-0ce4-8cff-8229-1a2ec0bf2d97
    ProviderID:                  vsphere://42109234-71ec-3f26-5ddd-9c97c5a02fe9
    ProviderID:                  vsphere://4210ecc1-e7d8-a130-19e5-f20804b5b36e 
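
    Alternatively, the following command prints each node name together with its ProviderID in one pass:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'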
    

Create a vSphere Storage Class

To create a vSphere Storage Class:

  1. Open vCenter.
  2. Open the vSAN Datastore Summary pane.

    vSAN Datastore Summary pane in vCenter

  3. Determine the datastoreurl value for your Datastore.

  4. Create the following YAML:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: ALLOW-EXPANSION
    parameters:
      datastoreurl: "DATASTORE-URL"
    

    Where:

    • ALLOW-EXPANSION defines whether persistent volumes created with this StorageClass can be resized. Set it to true to allow resizing or false for a fixed size.
    • DATASTORE-URL is the URL to your Datastore. For a non-vSAN datastore, the datastoreurl value looks like ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/.

    For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: true
    parameters:
      datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
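
    After you save the StorageClass YAML above (for example as demo-sts-storageclass.yaml, an assumed file name), apply it with kubectl apply -f demo-sts-storageclass.yaml. You can then exercise dynamic provisioning with a PersistentVolumeClaim that references the StorageClass. The claim below is a hypothetical illustration: demo-sts-storageclass is the StorageClass name from the example above, while the claim name demo-pvc and the 1Gi request are assumptions for this sketch:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: demo-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: demo-sts-storageclass
      resources:
        requests:
          storage: 1Gi
    

    After you apply the claim and CNS provisions the backing virtual disk, kubectl get pvc demo-pvc reports the claim as Bound, and you can review the volume and its storage policy compliance in the vSphere client as described in the Overview above.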
    

Resize a Persistent Volume

Persistent volumes created using the CNS v2.0 StorageClass procedure above can be resized. CNS supports resizing offline volumes only, so you must stop all Pods that use a volume before resizing it.

To resize a persistent volume:

  1. Stop all pods connected to the persistent volume.
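
    For a StatefulSet workload such as the example used in this procedure, one way to stop its Pods is to scale the StatefulSet to zero replicas. This sketch uses the example names postgres-wl and csi-namespace from the steps below:

    kubectl scale statefulset postgres-wl -n csi-namespace --replicas=0
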
  2. Confirm that the Pods have stopped running:

    kubectl get statefulset -n NAMESPACE
    

    Where NAMESPACE is the namespace the persistent volume is in.

    For example:

    > kubectl  get statefulset -n csi-namespace
    NAME          READY   AGE
    postgres-wl   0/0     15m
    > kubectl get pods -n csi-namespace
    No resources found in csi-namespace namespace.
    
    > kubectl get pvc -n csi-namespace
    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    postgredb-postgres-wl-0   Bound    pvc-c22811a5-0650-4492-a7f0-0d21a4971c5a   1Gi        RWO            postgres-sc-csi-resize   15m

  3. Wait until all of the Pods have stopped before continuing.

  4. Edit the PersistentVolumeClaim:

    kubectl edit pvc PVC-NAME -n NAMESPACE
    

    Where:

    • PVC-NAME is the name of the PersistentVolumeClaim bound to the persistent volume you are resizing.
    • NAMESPACE is the namespace the PersistentVolumeClaim is in.

    For example:

    > kubectl edit pvc postgredb-postgres-wl-0 -n csi-namespace
    
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: postgres-sc-csi-resize
      volumeMode: Filesystem
      volumeName: pvc-c22811a5-0650-4492-a7f0-0d21a4971c5a
    
  5. Modify the storage size.

    Note: Kubernetes does not support shrinking of persistent volumes.
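
    If you prefer a non-interactive change instead of kubectl edit, an equivalent patch can set the new size directly. This sketch uses the example PersistentVolumeClaim name and namespace from this procedure and the 5Gi target size shown in the verification output below:

    kubectl patch pvc postgredb-postgres-wl-0 -n csi-namespace \
      -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'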

  6. Restart the Pod by scaling the StatefulSet back up using kubectl scale:

    kubectl scale statefulset STATEFUL-SET-NAME -n NAMESPACE --replicas=1
    

    Where:

    • STATEFUL-SET-NAME is the name of the StatefulSet that uses the persistent volume you are resizing.
    • NAMESPACE is the namespace the StatefulSet is in.

    For example:

    > kubectl scale statefulset postgres-wl  -n csi-namespace --replicas=1
    
  7. Confirm the Pod is running:

    kubectl get statefulset -n NAMESPACE
    kubectl get pods -n NAMESPACE
    kubectl get pvc -n NAMESPACE
    kubectl get pv | grep STATEFUL-SET-NAME
    

    Where:

    • STATEFUL-SET-NAME is the name of the StatefulSet that uses the persistent volume you are resizing.
    • NAMESPACE is the namespace the StatefulSet is in.

    For example:

    > kubectl  get statefulset -n csi-namespace
    NAME          READY   AGE
    postgres-wl   1/1     24m  
    
    > kubectl get pods -n csi-namespace
    NAME            READY   STATUS    RESTARTS   AGE
    postgres-wl-0   2/2     Running   0          49s  
    
    > kubectl  get pvc -n csi-namespace
    NAME                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
    postgredb-postgres-wl-0   Bound    pvc-c22811a5-0650-4492-a7f0-0d21a4971c5a   5Gi        RWO            postgres-sc-csi-resize   24m  
    
    > kubectl get pv | grep postgres-wl
    pvc-c22811a5-0650-4492-a7f0-0d21a4971c5a   5Gi        RWO            Delete           Bound    csi-namespace/postgredb-postgres-wl-0   postgres-sc-csi-resize            23m
    
  8. Confirm the Pod’s persistent volume has been resized:

    kubectl exec -it POD-NAME -n NAMESPACE -- sh
    

    Where:

    • POD-NAME is the name of the Pod that mounts the persistent volume you resized.
    • NAMESPACE is the namespace the Pod is in.

    For example:

    > kubectl  exec -it postgres-wl-0 -n csi-namespace -- sh
    # df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay          30G  1.8G   27G   7% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs           7.9G     0  7.9G   0% /sys/fs/cgroup
    /dev/sdb2        48G  3.1G   42G   7% /etc/hosts
    /dev/sdc1        30G  1.8G   27G   7% /etc/hostname
    shm              64M  8.0K   64M   1% /dev/shm
    /dev/sdd        4.9G  847M  3.9G  18% /var/lib/postgresql/data
    tmpfs           7.9G   12K  7.9G   1% /run/secrets/kubernetes.io/serviceaccount
    
    

For more information about resizing persistent volumes, see Resizing Persistent Volumes using Kubernetes in the Kubernetes documentation.

Configuration File Templates

For the CNS v2.0 configuration file templates, see the template files below:

Example manifests for v2.0.0 are also provided in the kubernetes-sigs/vsphere-csi-driver GitHub repository.

RBAC Manifest

To define RBAC for CSI, create a file named vsphere-csi-controller-rbac.yaml populated with the following:

kind: ServiceAccount
apiVersion: v1
metadata:
  name: vsphere-csi-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-binding
subjects:
  - kind: ServiceAccount
    name: vsphere-csi-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: vsphere-csi-controller-role
  apiGroup: rbac.authorization.k8s.io

CSI Driver Manifest

To define a CSI driver, create a file named vsphere-csi-controller-deployment.yaml populated with the following:

# Minimum Kubernetes version - 1.16
# For prior releases make sure to add required --feature-gates flags
kind: Deployment
apiVersion: apps/v1
metadata:
  name: vsphere-csi-controller
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0
  selector:
    matchLabels:
      app: vsphere-csi-controller
  template:
    metadata:
      labels:
        app: vsphere-csi-controller
        role: vsphere-csi
    spec:
      serviceAccountName: vsphere-csi-controller
      dnsPolicy: "Default"
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v2.0.0
          args:
            - "--v=4"
            - "--timeout=300s"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - name: csi-resizer
          image: quay.io/k8scsi/csi-resizer:v0.5.0
          args:
            - "--v=4"
            - "--csiTimeout=300s"
            - "--csi-address=$(ADDRESS)"
            - "--leader-election"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - name: vsphere-csi-controller
          image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.0.0
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /var/vcap/data/csi/sockets/pluginproxy/csi.vsphere.vmware.com"]
          imagePullPolicy: "Always"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/vcap/data/csi/sockets/pluginproxy/csi.sock
            - name: X_CSI_MODE
              value: "controller"
            - name: VSPHERE_CSI_CONFIG
              value: "/etc/cloud/csi-vsphere.conf"
            - name: LOGGER_LEVEL
              value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
          volumeMounts:
            - mountPath: /etc/cloud
              name: vsphere-config-volume
              readOnly: true
            - mountPath: /var/vcap/data/csi/sockets/pluginproxy/
              name: socket-dir
          ports:
            - name: healthz
              containerPort: 9808
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 5
            failureThreshold: 3
        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v2.0.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/vcap/data/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - mountPath: /var/vcap/data/csi/sockets/pluginproxy/
              name: socket-dir
        - name: vsphere-syncer
          image: gcr.io/cloud-provider-vsphere/csi/release/syncer:v2.0.0
          args:
            - "--leader-election"
          imagePullPolicy: "Always"
          env:
            - name: FULL_SYNC_INTERVAL_MINUTES
              value: "30"
            - name: VSPHERE_CSI_CONFIG
              value: "/etc/cloud/csi-vsphere.conf"
            - name: LOGGER_LEVEL
              value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
          volumeMounts:
            - mountPath: /etc/cloud
              name: vsphere-config-volume
              readOnly: true
        - name: csi-provisioner
          image: k8s.gcr.io/sig-storage/csi-provisioner:v2.0.0
          args:
            - "--v=4"
            - "--timeout=300s"
            - "--csi-address=$(ADDRESS)"
            - "--strict-topology"
            - "--leader-election"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      volumes:
      - name: vsphere-config-volume
        secret:
          secretName: vsphere-config-secret
      - name: socket-dir
        hostPath:
          path: /var/vcap/data/csi/sockets/pluginproxy/csi.vsphere.vmware.com
          type: DirectoryOrCreate
---
apiVersion: v1
data:
  "csi-migration": "false"
kind: ConfigMap
metadata:
  name: csi-feature-states
  namespace: kube-system
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  attachRequired: true
  podInfoOnMount: false

DaemonSet Manifest

To define a DaemonSet manifest, create a file named vsphere-csi-node-ds.yaml populated with the following:

# Minimum Kubernetes version - 1.16
# For prior releases make sure to add required --feature-gates flags
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: vsphere-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: vsphere-csi-node
  updateStrategy:
    type: "RollingUpdate"
  template:
    metadata:
      labels:
        app: vsphere-csi-node
        role: vsphere-csi
    spec:
      dnsPolicy: "Default"
      containers:
      - name: node-driver-registrar
        image: quay.io/k8scsi/csi-node-driver-registrar:v1.2.0
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "rm -rf /registration/csi.vsphere.vmware.com /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com-reg.sock"]
        args:
        - "--v=5"
        - "--csi-address=$(ADDRESS)"
        - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
        env:
        - name: ADDRESS
          value: /csi/csi.sock
        - name: DRIVER_REG_SOCK_PATH
          value: /var/vcap/data/kubelet/plugins/csi.vsphere.vmware.com/csi.sock
        securityContext:
          privileged: true
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
        - name: registration-dir
          mountPath: /registration
      - name: vsphere-csi-node
        image: gcr.io/cloud-provider-vsphere/csi/release/driver:v2.0.0
        imagePullPolicy: "Always"
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CSI_ENDPOINT
          value: unix:///csi/csi.sock
        - name: X_CSI_MODE
          value: "node"
        - name: X_CSI_SPEC_REQ_VALIDATION
          value: "false"
        - name: VSPHERE_CSI_CONFIG
          value: "/etc/cloud/csi-vsphere.conf" # here csi-vsphere.conf is the name of the file used for creating secret using "--from-file" flag
        - name: X_CSI_DEBUG
          value: "true"
        - name: LOGGER_LEVEL
          value: "PRODUCTION" # Options: DEVELOPMENT, PRODUCTION
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
          allowPrivilegeEscalation: true
        volumeMounts:
        - name: vsphere-config-volume
          mountPath: /etc/cloud
          readOnly: true
        - name: plugin-dir
          mountPath: /csi
        - name: pods-mount-dir
          mountPath: /var/vcap/data/kubelet
          # needed so that any mounts setup inside this container are
          # propagated back to the host machine.
          mountPropagation: "Bidirectional"
        - name: device-dir
          mountPath: /dev
        ports:
          - name: healthz
            containerPort: 9808
            protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: healthz
          initialDelaySeconds: 10
          timeoutSeconds: 3
          periodSeconds: 5
          failureThreshold: 3
      - name: liveness-probe
        image: quay.io/k8scsi/livenessprobe:v1.1.0
        args:
        - --csi-address=/csi/csi.sock
        volumeMounts:
        - name: plugin-dir
          mountPath: /csi
      volumes:
      - name: vsphere-config-volume
        secret:
          secretName: vsphere-config-secret
      - name: registration-dir
        hostPath:
          path: /var/vcap/data/kubelet/plugins_registry
          type: Directory
      - name: plugin-dir
        hostPath:
          path: /var/vcap/data/kubelet/plugins/csi.vsphere.vmware.com/
          type: DirectoryOrCreate
      - name: pods-mount-dir
        hostPath:
          path: /var/vcap/data/kubelet
          type: Directory
      - name: device-dir
        hostPath:
          path: /dev
      tolerations:
        - effect: NoExecute
          operator: Exists
        - effect: NoSchedule
          operator: Exists

Please send any feedback you have to pks-feedback@pivotal.io.