Deploying and Managing Cloud Native Storage (CNS) on vSphere

This topic describes how to use the vSphere Container Storage Interface (CSI) Driver that is automatically installed to clusters by VMware Tanzu Kubernetes Grid Integrated Edition (TKGI).

If your administrator has not enabled automatic vSphere CSI Driver installation, you must manually install a vSphere CSI Driver on your clusters. For more information, see Deploying and Managing Cloud Native Storage (CNS) on vSphere in the Tanzu Kubernetes Grid Integrated Edition v1.10 documentation.

Overview

vSphere Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle.

You can install vSphere CNS on TKGI-provisioned clusters by configuring TKGI to automatically install a vSphere CSI Driver. To enable automatic CSI driver installation on your clusters, see Storage in Installing TKGI on vSphere.

When automatic vSphere CSI Driver installation is enabled, your clusters use your tile Kubernetes Cloud Provider storage settings as the default vSphere CNS configuration.

You can customize, deploy, and manage vSphere CNS volumes using the vSphere CSI Driver.

The automatically deployed vSphere CSI Driver supports high availability (HA) configurations. HA support is automatically enabled on clusters with multiple master nodes and uses only one active CSI Controller.

Use the vSphere client to review your cluster storage volumes and their backing virtual disks, to set a storage policy on your storage volumes, and to monitor policy compliance. Your cluster volumes are backed by vSphere storage.

Note: If you have an existing cluster with a manually installed vSphere CSI driver and your administrator has enabled automatic vSphere CSI Driver installation, you must uninstall the manually installed vSphere CSI Driver from your cluster. For more information, see Uninstall a Manually Installed vSphere CSI Driver below.

For more information about VMware CNS, see Getting Started with VMware Cloud Native Storage.

For more information about using the Kubernetes CSI Driver, see Persistent Volumes in the Kubernetes documentation.

vSphere CSI Driver Supported Features and Requirements

The vSphere CSI Driver supports different features depending on driver version, environment and storage type.

TKGI supports only the following vSphere CSI Driver features:

  • Enhanced Object Health in UI for vSAN Datastores
  • Dynamic Block PV support*
  • Dynamic Virtual Volume (vVol) PV support
  • Static PV Provisioning
  • Kubernetes Multi-node Control Plane support
  • Encryption support via VMcrypt*
  • Dynamic File PV support*

    *For information on the usage limitations and environment and version requirements of these vSphere CSI Driver features, see vSphere CSI Driver - Supported Features Matrix in the Kubernetes vSphere CSI Driver documentation.


For information on the vCenter, datastore, and cluster types supported by the vSphere CSI Driver, see the Notes in Compatibility Matrix for vSphere CSI Driver in the Kubernetes vSphere CSI Driver documentation.

Customize vSphere File Volumes

To create, modify, or remove a customized vSphere file volume, use the following procedures.

Prerequisites

To use file volumes, you must enable vSAN File Services in the vSphere vCenter. For information about enabling vSAN File Services, see Configure File Services in the VMware vSphere documentation.

Create a Cluster With Customized File Volume Parameters

To create a new cluster with a vSphere file volume:

  1. Create a file volume configuration file. For information, see File Volume Configuration below.
  2. To create a cluster with attached file volumes:

    tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

    For example:

    $ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json
    

Modify a Cluster With Customized File Volume Parameters

To modify an existing cluster with a vSphere file volume:

  1. Create a file volume configuration file. For information, see File Volume Configuration below.
  2. To update your cluster with file volumes:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

Remove File Volume Parameters from a Cluster

To remove a vSphere file volume configuration from a cluster:

  1. Create a file volume configuration file that sets the disable_target_vsan_fileshare_datastore_urls parameter, the disable_net_permissions parameter, or both to true to disable the corresponding file volume settings. For information, see File Volume Configuration below.

    For example:

    {
        "disable_target_vsan_fileshare_datastore_urls": true,
        "disable_net_permissions": true
    }
    
  2. To remove the configured file volume parameter from your cluster:

    tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE 
    

    Where:

    • CLUSTER-NAME is the name of your cluster.
    • CONFIG-FILE is the name of your config file.

File Volume Configuration

To customize a vSphere file volume, create a JSON or YAML formatted file volume configuration file using the supported file volume parameters below.

For example:

{
  "target_vsan_fileshare_datastore_urls": "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/",
  "net_permissions": [
    {
      "name": "demo1",
      "ips": "192.168.0.0/16",
      "permissions": "READ_WRITE",
      "rootsquash": false
    },
    {
      "name": "demo2",
      "ips": "10.0.0.0/8",
      "permissions": "READ_ONLY",
      "rootsquash": false
    }
  ]
}
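
The configuration file can also be written in YAML. The JSON example above, expressed as YAML:

```yaml
target_vsan_fileshare_datastore_urls: "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/"
net_permissions:
- name: "demo1"
  ips: "192.168.0.0/16"
  permissions: "READ_WRITE"
  rootsquash: false
- name: "demo2"
  ips: "10.0.0.0/8"
  permissions: "READ_ONLY"
  rootsquash: false
```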

The following are accepted File Volume configuration file parameters:

  • target_vsan_fileshare_datastore_urls (string): A comma-separated list of datastores for deploying file share volumes.
  • disable_target_vsan_fileshare_datastore_urls (Boolean): Disables target_vsan_fileshare_datastore_urls. Values: true, false. Default value: false.
  • net_permissions (Array): Properties defining a NetPermissions object.
  • disable_net_permissions (Boolean): Disables net_permissions. Values: true, false. Default value: false.

The following are supported NetPermission object configuration file parameters:

  • name (string): Name of the NetPermission object.
  • ips (string): IP range or IP subnet affected by the NetPermission restrictions. Default value: "*".
  • permissions (string): Access permission to the file share volume. Values: "READ_WRITE", "READ_ONLY", "NO_ACCESS". Default value: "READ_WRITE".
  • rootsquash (Boolean): Security access level for the file share volume. Values: true, false. Default value: false.

Create or Use CNS Block Volumes

To dynamically provision a block volume using the vSphere CSI Driver:

  1. Create a vSphere Storage Class
  2. Create a PersistentVolumeClaim
  3. Create Workloads Using Persistent Volumes

For more information on vSphere CSI Driver configuration, see the example/vanilla-k8s-block-driver configuration for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.

Create a vSphere Storage Class

To create a vSphere Storage Class:

  1. Open vCenter.
  2. Open the vSAN Datastore Summary pane.

    vSAN Datastore Summary pane in vCenter

  3. Determine the datastoreurl value for your Datastore.

  4. Create the following YAML:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: ALLOW-EXPANSION
    parameters:
      datastoreurl: "DATASTORE-URL"
    

    Where:

    • ALLOW-EXPANSION defines whether the cluster’s persistent volumes can be resized. Set to true to allow volume expansion, or false to keep volume sizes fixed.
    • DATASTORE-URL is the URL to your Datastore. For a non-vSAN datastore, the datastoreurl value looks like ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/.

    For example:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: demo-sts-storageclass
      annotations:
          storageclass.kubernetes.io/is-default-class: "true"
    provisioner: csi.vsphere.vmware.com
    allowVolumeExpansion: true
    parameters:
      datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
    

    For more information about StorageClass, see Storage Classes in the Kubernetes documentation.

Create a PersistentVolumeClaim

To create a Persistent Volume using the vSphere CSI Driver:

  1. Create a Storage Class. For more information, see Create a vSphere Storage Class below.
  2. To apply the StorageClass configuration:

    kubectl apply -f CONFIG-FILE
    

    Where CONFIG-FILE is the name of your StorageClass configuration file.
  3. Create the PersistentVolumeClaim configuration for the file volume. For information about configuring a PVC, see Persistent Volumes in the Kubernetes documentation.

    For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-vanilla-block-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      storageClassName: example-vanilla-block-sc
    
  4. To apply the PVC configuration:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your PVC configuration file.
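
    After applying the claim, you can confirm that dynamic provisioning succeeded. A typical check, using the claim name from the example above:

    ```shell
    # Check that the claim has reached the Bound phase
    kubectl get pvc example-vanilla-block-pvc

    # If the claim stays Pending, inspect the provisioning events
    kubectl describe pvc example-vanilla-block-pvc
    ```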

Create Workloads Using Persistent Volumes

  1. Create a Pod configuration file containing volumeMounts and volumes parameters.

    For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-vanilla-block-pod
    spec:
      containers:
        - name: test-container
          image: gcr.io/google_containers/busybox:1.24
          command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html  && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
          volumeMounts:
            - name: test-volume
              mountPath: /mnt/volume1
      restartPolicy: Never
      volumes:
        - name: test-volume
          persistentVolumeClaim:
            claimName: example-vanilla-block-pvc
    
  2. To apply the Pod configuration to your workload:

    kubectl apply -f CONFIG-FILE 
    

    Where CONFIG-FILE is the name of your configuration file.
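
    Once the example Pod above is Running, you can check that the volume mounted correctly. The commands below assume the Pod and file names from the example:

    ```shell
    # Confirm the Pod reached the Running state
    kubectl get pod example-vanilla-block-pod

    # Read the file the container wrote to the mounted volume;
    # per the example command, it should contain "hello"
    kubectl exec example-vanilla-block-pod -- cat /mnt/volume1/index.html
    ```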

For more information and examples of Pod configurations, see the example configurations for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.

Uninstall a Manually Installed vSphere CSI Driver

If your administrator has enabled automatic vSphere CSI Driver Integration and you have a cluster that uses a manually installed vSphere CSI Driver, the manually installed driver will no longer function after upgrading the cluster.

To uninstall a manually installed CSI driver:

  1. Confirm your TKGI administrator has enabled automatic vSphere CSI Driver Integration on the TKGI tile.
  2. Upgrade your Kubernetes cluster to TKGI v1.11.
  3. Remove the manually installed CSI driver from the cluster. For more information, see Remove a Manually Installed vSphere CSI Driver below.
  4. To restart CSI jobs, restart csi-node-registrar on each worker node:

    bosh -d DEPLOYMENT ssh WORKER-NODE
    sudo bash
    monit restart csi-node-registrar
    

    Where:

    • DEPLOYMENT is the name of the deployment.
    • WORKER-NODE is the worker node VM.
  5. To verify that the CSI jobs are running, run the following on each master node:

    bosh -d DEPLOYMENT ssh MASTER-NODE
    sudo bash
    monit summary | grep csi
    

    Where:

    • DEPLOYMENT is the name of the deployment.
    • MASTER-NODE is the Master node VM.
  6. To restart a CSI job that is not running:

    monit restart JOB-NAME
    

    Where JOB-NAME is the name of the CSI job to restart.

Remove a Manually Installed vSphere CSI Driver

If you have a cluster that uses a manually installed vSphere CSI Driver, and you upgrade the cluster after your administrator has enabled automatic vSphere CSI Driver Integration, you should remove the manually installed driver. While automatic vSphere CSI Driver Integration is enabled, TKGI enables the integrated driver for a cluster after upgrading it, and the cluster’s manually installed driver no longer functions.

To remove a manually installed vSphere CSI driver:

  1. Create a manifest remove-vsphere-csi-node-ds.yaml that identifies the CSI node DaemonSet to remove.

  2. Remove the DaemonSet:

    kubectl delete -f remove-vsphere-csi-node-ds.yaml
    
  3. Create a manifest remove-vsphere-csi-controller-deployment.yaml that identifies the Deployment and CSIDriver objects for the CSI controller.

  4. Remove the CSI driver objects:

    kubectl delete -f remove-vsphere-csi-controller-deployment.yaml
    
  5. Create a manifest remove-vsphere-csi-controller-rbac.yaml that identifies the RBAC rules to remove.

  6. Remove the RBAC resources:

    kubectl delete -f remove-vsphere-csi-controller-rbac.yaml
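
    As a sketch, a removal manifest only needs enough metadata to identify the objects to delete. A hypothetical remove-vsphere-csi-node-ds.yaml, assuming the driver was installed from the upstream vsphere-csi-driver manifests (object names and namespace are assumptions; match them to your installation):

    ```yaml
    # Hypothetical stub: identifies the manually installed CSI node DaemonSet.
    # The name and namespace below come from the upstream vsphere-csi-driver
    # manifests and may differ in your installation.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: vsphere-csi-node
      namespace: kube-system
    ```

    The manifests for the controller Deployment, CSIDriver object, and RBAC rules follow the same pattern.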
    

Please send any feedback you have to pks-feedback@pivotal.io.