Deploying and Managing Cloud Native Storage (CNS) on vSphere
This topic describes how to use the vSphere Container Storage Interface (CSI) Driver that VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) automatically installs on clusters on vSphere.
If your TKGI environment is on vSphere and your administrator has not enabled automatic vSphere CSI Driver installation, you must manually install a vSphere CSI Driver on your clusters. For more information, see Manually Installing the vSphere CSI Driver.
Overview
vSphere Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volume, persistent volume, and dynamic provisioning, independent of VM and container lifecycle.
You can install vSphere CNS on TKGI-provisioned clusters by configuring TKGI to automatically install a vSphere CSI Driver. To enable automatic CSI driver installation on your clusters, see Storage in Installing TKGI on vSphere.
When automatic vSphere CSI Driver installation is enabled, your clusters use your tile Kubernetes Cloud Provider storage settings as the default vSphere CNS configuration.
You can customize, deploy, and manage vSphere CNS volumes using the vSphere CSI Driver:
- To customize file volumes, see Customize vSphere File Volumes below.
- To use and customize CNS Volumes, see Create or Use CNS Block Volumes below.
The automatically deployed vSphere CSI Driver supports high availability (HA) configurations. HA support is automatically enabled on clusters with multiple control plane nodes and uses only one active CSI Controller.
Use the vSphere client to review your cluster storage volumes and their backing virtual disks, to set a storage policy on your storage volumes, and to monitor policy compliance. vSphere storage backs your cluster volumes.
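For example, you can list a cluster's persistent volumes with kubectl before cross-referencing them in the vSphere client (a minimal sketch; `PV-NAME` is a placeholder for one of the listed volume names):

```
# List persistent volumes with their capacity, status, and storage class.
kubectl get pv

# Show details for one volume, including the volume handle that
# identifies the backing CNS volume in the vSphere client.
kubectl describe pv PV-NAME
```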
Note: If you have an existing cluster with a manually installed vSphere CSI driver and your administrator has enabled automatic vSphere CSI Driver installation, you must uninstall the manually installed vSphere CSI Driver from your cluster. For more information, see Uninstall a Manually Installed vSphere CSI Driver below.
For more information about VMware CNS, see Getting Started with VMware Cloud Native Storage.
For more information about using the Kubernetes CSI Driver, see Persistent Volumes in the Kubernetes documentation.
vSphere CSI Driver Supported Features and Requirements
The vSphere CSI Driver supports different features depending on driver version, environment, and storage type.
TKGI supports only the following vSphere CSI Driver features:
- Enhanced Object Health in UI for vSAN Datastores
- Dynamic Block PV support*
- Dynamic Virtual Volume (vVOL) PV support
- Static PV Provisioning
- Kubernetes Multi-node Control Plane support
- Encryption support via VMcrypt*
- Dynamic File PV support*
*For information on the usage limitations and environment and version requirements of these vSphere CSI Driver features, see Functionality Supported by vSphere Container Storage Plug-in in Supported Kubernetes Functionality and Limitations in the VMware vSphere Container Storage Plug-in documentation.
For information on the vCenter, datastore, and cluster types supported by the vSphere CSI Driver, see vSphere Functionality Supported by vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.
For information on the scaling limitations of the vSphere CSI Driver, see Configuration Maximums for vSphere Container Storage Plug-in in the VMware vSphere Container Storage Plug-in documentation.
Customize vSphere File Volumes
To create, modify, or remove a customized vSphere file volume:
- Create a Cluster With Customized File Volume Parameters
- Modify a Cluster With Customized File Volume Parameters
- Remove File Volume Parameters from a Cluster
Prerequisites
To use file volumes, you must enable vSAN File Services in the vSphere vCenter. For information about enabling vSAN File Services, see Configure File Services in the VMware vSphere documentation.
Create a Cluster With Customized File Volume Parameters
To create a new cluster with a vSphere file volume:
- Create a file volume configuration file. For information, see File Volume Configuration below.
To create a cluster with attached file volumes:
tkgi create-cluster CLUSTER-NAME --config-file CONFIG-FILE
Where:
- `CLUSTER-NAME` is the name of your cluster.
- `CONFIG-FILE` is the name of your config file.
For example:
$ tkgi create-cluster demo -e demo.cluster --plan Small --config-file ./conf1.json
Modify a Cluster With Customized File Volume Parameters
To modify an existing cluster with a vSphere file volume:
- Create a file volume configuration file. For information, see File Volume Configuration below.
To update your cluster with file volumes:
tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE
Where:
- `CLUSTER-NAME` is the name of your cluster.
- `CONFIG-FILE` is the name of your config file.
Remove File Volume Parameters from a Cluster
To remove a vSphere file volume configuration from a cluster:
- Create a file volume configuration file containing either the `disable_target_vsan_fileshare_datastore_urls` or `disable_net_permissions` parameter set to `true` to disable an existing file volume parameter. For information, see File Volume Configuration below.
For example:

```
{
  "disable_target_vsan_fileshare_datastore_urls": true,
  "disable_net_permissions": true
}
```
To remove the configured file volume parameter from your cluster:
tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE
Where:
- `CLUSTER-NAME` is the name of your cluster.
- `CONFIG-FILE` is the name of your config file.
File Volume Configuration
To customize a vSphere file volume, create a JSON or YAML formatted file volume configuration file using the supported file volume parameters below.
For example:
```
{
  "target_vsan_fileshare_datastore_urls": "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/",
  "net_permissions": [
    {
      "name": "demo1",
      "ips": "192.168.0.0/16",
      "permissions": "READ_WRITE",
      "rootsquash": false
    },
    {
      "name": "demo2",
      "ips": "10.0.0.0/8",
      "permissions": "READ_ONLY",
      "rootsquash": false
    }
  ]
}
```
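Because the configuration file can be JSON or YAML formatted, the same configuration can equivalently be written in YAML (a direct rendering of the JSON example above):

```
target_vsan_fileshare_datastore_urls: "ds:///vmfs/volumes/vsan:52635b9067079319-95a7473222c4c9cd/"
net_permissions:
- name: "demo1"
  ips: "192.168.0.0/16"
  permissions: "READ_WRITE"
  rootsquash: false
- name: "demo2"
  ips: "10.0.0.0/8"
  permissions: "READ_ONLY"
  rootsquash: false
```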
The following are accepted File Volume configuration file parameters:
Name | Type | Description |
---|---|---|
`target_vsan_fileshare_datastore_urls` | string | A comma-separated list of datastores for deploying file share volumes. |
`disable_target_vsan_fileshare_datastore_urls` | Boolean | Disables `target_vsan_fileshare_datastore_urls`. Values: `true`, `false`. Default Value: `false`. |
`net_permissions` | Array | Properties defining a NetPermissions object. |
`disable_net_permissions` | Boolean | Disables `net_permissions`. Values: `true`, `false`. Default Value: `false`. |
The following are supported NetPermission object configuration file parameters:
Name | Type | Description |
---|---|---|
`name` | string | Name of the NetPermission object. |
`ips` | string | IP range or IP subnet affected by the NetPermission restrictions. Default Value: `"*"`. |
`permissions` | string | Access permission to the file share volume. Values: `"READ_WRITE"`, `"READ_ONLY"`, `"NO_ACCESS"`. Default Value: `"READ_WRITE"`. |
`rootsquash` | Boolean | Security access level for the file share volume. Values: `true`, `false`. Default Value: `false`. |
Create or Use CNS Block Volumes
To dynamically provision a block volume using the vSphere CSI Driver:
- Create a vSphere Storage Class
- Create a PersistentVolumeClaim
- Create Workloads Using Persistent Volumes
For more information on vSphere CSI Driver configuration, see the `example/vanilla-k8s-block-driver` configuration for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.
Create a vSphere Storage Class
To create a vSphere Storage Class:
- Open vCenter.
- Open the vSAN Datastore Summary pane.
- Determine the `datastoreurl` value for your Datastore.
- Create the following YAML:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-sts-storageclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: ALLOW-EXPANSION
parameters:
  datastoreurl: "DATASTORE-URL"
```
Where:
- `ALLOW-EXPANSION` defines whether the cluster's persistent volume size is resizable or static. Set to `true` for resizable and `false` for static size.
- `DATASTORE-URL` is the URL to your Datastore. For a non-vSAN datastore, the `datastoreurl` value looks like `ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/`.
For example:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-sts-storageclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
```
For more information about StorageClass, see Storage Classes in the Kubernetes documentation.
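After you apply the StorageClass configuration, as described in Create a PersistentVolumeClaim below, you can confirm that the class exists and is marked as the cluster default (a quick check using the example class name above):

```
# The default class is flagged "(default)" in the NAME column.
kubectl get storageclass demo-sts-storageclass
```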
Create a PersistentVolumeClaim
To create a Persistent Volume using the vSphere CSI Driver:
- Create a Storage Class. For more information, see Create a vSphere Storage Class above.
- To apply the StorageClass configuration:
tkgi apply is not used here; run:
kubectl apply -f CONFIG-FILE
Where `CONFIG-FILE` is the name of your StorageClass configuration file.
- Create the PersistentVolumeClaim configuration for the volume. For information about configuring a PVC, see Persistent Volumes in the Kubernetes documentation.

For example:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-vanilla-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: example-vanilla-block-sc
```
To apply the PVC configuration:
kubectl apply -f CONFIG-FILE
Where
CONFIG-FILE
is the name of your PVC configuration file.
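Once the PVC configuration is applied, you can confirm that dynamic provisioning succeeded by checking that the claim reaches the Bound state (using the names from the example above):

```
# STATUS changes to Bound after the CSI driver provisions the volume.
kubectl get pvc example-vanilla-block-pvc
```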
Create Workloads Using Persistent Volumes
- Create a Pod configuration file containing `volumeMounts` and `volumes` parameters.

For example:

```
apiVersion: v1
kind: Pod
metadata:
  name: example-vanilla-block-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox:1.24
      command: ["/bin/sh", "-c", "echo 'hello' > /mnt/volume1/index.html && chmod o+rX /mnt /mnt/volume1/index.html && while true ; do sleep 2 ; done"]
      volumeMounts:
        - name: test-volume
          mountPath: /mnt/volume1
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: example-vanilla-block-pvc
```
To apply the Pod configuration to your workload:
kubectl apply -f CONFIG-FILE
Where:
- `CONFIG-FILE` is the name of your configuration file.
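To confirm that the workload started and the persistent volume mounted correctly, you can check the Pod status and read back the file that the example container writes (a quick smoke test based on the example Pod above):

```
# Wait for the Pod to reach the Running state.
kubectl get pod example-vanilla-block-pod

# Read the file written to the mounted volume.
kubectl exec example-vanilla-block-pod -- cat /mnt/volume1/index.html
```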
For more information and examples of Pod configurations, see the example configurations for the CSI driver version you are using in vsphere-csi-driver in the VMware kubernetes-sigs GitHub repo.
Uninstall a Manually Installed vSphere CSI Driver
If your administrator has enabled automatic vSphere CSI Driver Integration and you have a cluster that uses a manually installed vSphere CSI Driver, the manually installed driver will no longer function after upgrading the cluster.
To uninstall a manually installed CSI driver:
- Confirm your TKGI administrator has enabled automatic vSphere CSI Driver Integration on the TKGI tile.
- Upgrade your Kubernetes cluster to the TKGI version of the TKGI tile.
- Remove the manually installed CSI driver from the cluster.
For more information, see Remove a Manually Installed vSphere CSI Driver below.
To restart CSI jobs on all worker nodes:
bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node"
bosh -d DEPLOYMENT ssh worker "sudo monit restart csi-node-registrar"
Where:
- `DEPLOYMENT` is the name of the deployment.
To verify that the CSI jobs on all control plane nodes are in a running state:
bosh -d DEPLOYMENT ssh master "sudo monit summary | grep csi"
Where:
- `DEPLOYMENT` is the name of the deployment.
If a CSI job is not in a running state, start the CSI job:
bosh -d DEPLOYMENT ssh NODE-VM "sudo monit start JOB-NAME"
Where:
- `DEPLOYMENT` is the name of the deployment.
- `NODE-VM` is the control plane node VM.
- `JOB-NAME` is the name of the CSI job to start.
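For example, to start a stopped controller job on the first control plane node (the deployment and job names below are illustrative; use the names reported by `monit summary` in your environment):

```
bosh -d service-instance_aa11bb22-cc33-dd44-ee55-ff6677889900 ssh master/0 "sudo monit start csi-controller"
```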
Remove a Manually Installed vSphere CSI Driver
If you have a cluster that uses a manually installed vSphere CSI Driver, and you upgrade the cluster after your administrator has enabled automatic vSphere CSI Driver Integration, you should remove the manually installed driver. While automatic vSphere CSI Driver Integration is enabled, TKGI enables the integrated driver for a cluster after upgrading it, and the cluster’s manually installed driver no longer functions.
To remove a manually installed vSphere CSI driver:
Run the following command:
kubectl delete -f vsphere-csi-driver.yaml
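After deleting the manifest, you can verify that the manually installed driver's components are gone. For example, assuming the driver was deployed to the `kube-system` namespace (the namespace your manual installation used may differ):

```
# No manually installed vSphere CSI pods should remain.
kubectl get pods -n kube-system | grep vsphere-csi
```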
Migrate an In-Tree vSphere Storage Volume to the vSphere CSI Driver
You can use `tkgi update-cluster` to migrate the PersistentVolume (PV) and PersistentVolumeClaim (PVC) on an existing TKGI cluster from the In-Tree vSphere Storage Driver to the automatically installed vSphere CSI Driver.
Warning: Due to Known Issues in the vSphere CSI Driver, VMware recommends that you migrate to the vSphere CSI Driver after upgrading to TKGI v1.12.7 or later. For more information, see VMDKs Are Deleted during Migration from In-Tree Storage to CSI in Release Notes.
Migrating a TKGI cluster from the In-Tree vSphere Storage Driver to the vSphere CSI Driver requires the following:
- You must use TKGI CLI v1.12 or later.
- TKGI automatic vSphere CSI Driver integration must be enabled.
For information on enabling the vSphere CSI Driver Integration option on the TKGI tile, see Storage in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.
- TKGI must be installed on vSphere v7.0 U2 or later.
- The cluster must be a Linux TKGI cluster.
To migrate a cluster from an In-Tree vSphere Storage Driver to the vSphere CSI Driver:
- Upgrade your Kubernetes cluster to the current TKGI version of the TKGI tile.
- Review and complete all relevant steps documented in the vSphere CSI Migration documentation:
- Prerequisites for Installing the vSphere Container Storage Plug-in
- Migrating In-Tree vSphere Volumes to vSphere Container Storage Plug-in
- vSphere Container Storage Plug-in Upgrade Considerations and Guidelines
Warning: Before migrating to the vSphere CSI driver, confirm your cluster’s volume storage is configured as described in Things to consider before turning on Migration.
- Create a configuration file containing the following:

```
{
  "enable_csi_migration": "true"
}
```
To migrate your cluster to the vSphere CSI Driver:
tkgi update-cluster CLUSTER-NAME --config-file CONFIG-FILE
Where:
- `CLUSTER-NAME` is the name of your cluster.
- `CONFIG-FILE` is the name of the config file you created in the preceding steps.
Note: You cannot migrate the PV or the PVC on a cluster from the vSphere CSI Driver to the In-Tree vSphere Storage Driver.
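After the migration completes, you can confirm that an existing volume is now served by the CSI driver by checking for the `pv.kubernetes.io/migrated-to` annotation that Kubernetes adds to migrated in-tree volumes (`PV-NAME` is a placeholder for one of your volume names):

```
# Migrated volumes are annotated with the target CSI driver name,
# csi.vsphere.vmware.com.
kubectl get pv PV-NAME -o yaml | grep migrated-to
```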