Cloud Native Storage (CNS) on vSphere
Warning: VMware Enterprise PKS v1.7 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
This topic explains how to integrate Cloud Native Storage (CNS) with VMware Enterprise PKS through the Container Storage Interface (CSI). This integration enables PKS clusters to use external container storage.
Overview
Cloud Native Storage (CNS) provides comprehensive data management for stateful, containerized apps, enabling apps to survive restarts and outages. Stateful containers can use vSphere storage primitives such as standard volumes, persistent volumes, and dynamic provisioning, independent of the VM and container lifecycle.
vSphere storage backs the volumes, and you can set a storage policy directly on the volumes. After you create the volumes, you can use the vSphere client to review the volumes and their backing virtual disks, and monitor their storage policy compliance.
For more information, see Getting Started with VMware Cloud Native Storage.
Prerequisites for CNS with PKS
- vSphere v6.7U3 or later
- An NSX-T version compatible with the vSphere version above:
  - NSX-T v2.4.0 and later are compatible with vSphere v6.7U3
  - See the VMware Product Interoperability Matrices for other version compatibilities
- PKS v1.7 or later
  - Supports upgrading the virtual hardware version on Kubernetes cluster VMs
- Firewall and network configuration:
  - Enable the following components to access vCenter:
    - Cluster master nodes
    - Cluster worker nodes, so that their CSI components can provision their disks
    - All Pods running CSI components
- PKS plan configuration:
  - In the PKS tile, configure a Plan with the Allow Privileged checkbox enabled, so that containers can run in privileged mode
Manually Install CSI on a PKS Cluster
Step 1: Create a PKS Cluster
Create a PKS cluster:
pks create-cluster pks-cluster-5-shared-t1 --external-hostname pks-cluster-5-shared-t1 --plan large --num-nodes 3 --network-profile single-tier-profile

Make sure that all VMs in the Kubernetes cluster use virtual hardware version 15.
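The kubectl commands in the later steps run against this cluster, so make sure your kubectl context points at it. A minimal sketch using the PKS CLI, assuming the cluster name used above:

> pks get-credentials pks-cluster-5-shared-t1
> kubectl config current-context

pks get-credentials fetches the cluster credentials into your kubeconfig and sets the current context to the new cluster.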

Step 2: Create a CSI Secret
Create the following configuration file, csi-vsphere.conf, anywhere on your system:

[Global]
cluster-id = CLUSTER-ID

[VirtualCenter "10.1.1.1"]
insecure-flag = "true"
user = "administrator@vsphere.local"
password = "VMware1!"
port = "443"
datacenters = "vSAN_Datacenter"

Where CLUSTER-ID is a unique identifier, such as the cluster name.

Create a secret based on the configuration file:

> kubectl create secret generic vsphere-config-secret \
    --from-file=csi-vsphere.conf --namespace=kube-system
secret/vsphere-config-secret created

Confirm that the secret exists:

> kubectl get secret/vsphere-config-secret -n kube-system
NAME                    TYPE     DATA   AGE
vsphere-config-secret   Opaque   1      37s
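If you want to double-check what the secret contains, one optional check (not part of the original procedure) is to decode it with kubectl:

> kubectl get secret vsphere-config-secret -n kube-system \
    -o jsonpath='{.data.csi-vsphere\.conf}' | base64 --decode

The decoded output should match the csi-vsphere.conf file you created above.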
Step 3: Create RBAC for CSI Access
Create a manifest vsphere-csi-controller-rbac.yaml that defines role-based access control (RBAC) for the CSI driver with a ServiceAccount, ClusterRole, and ClusterRoleBinding. As a template, use the RBAC Manifest template below.
Create the RBAC objects:
> kubectl apply -f vsphere-csi-controller-rbac.yaml
serviceaccount/vsphere-csi-controller created
clusterrole.rbac.authorization.k8s.io/vsphere-csi-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/vsphere-csi-controller-binding created
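Optionally, confirm that the RBAC objects exist before moving on, for example:

> kubectl get serviceaccount vsphere-csi-controller -n kube-system
> kubectl get clusterrole vsphere-csi-controller-role
> kubectl get clusterrolebinding vsphere-csi-controller-binding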
Step 4: Install the vSphere CSI Driver
Create a manifest vsphere-csi-controller-ss.yaml that defines StatefulSet and CSIDriver objects for installing the CSI controller. As a template, use the CSI Driver Manifest template below.
Create the CSI driver objects:
> kubectl apply -f vsphere-csi-controller-ss.yaml
statefulset.apps/vsphere-csi-controller created
csidriver.storage.k8s.io/csi.vsphere.vmware.com created
Create a manifest vsphere-csi-node-ds.yaml that defines the DaemonSet for the vSphere CSI node driver. As a template, use the DaemonSet Manifest template below.
Create the DaemonSet:
> kubectl apply -f vsphere-csi-node-ds.yaml
daemonset.apps/vsphere-csi-node created
Verify that CNS works for the cluster by following the Verify a CSI Installation steps below.
Verify a CSI Installation
Verify that CSI Deployed Successfully
> kubectl get statefulset --namespace=kube-system
NAME                     READY   AGE

> kubectl get daemonsets vsphere-csi-node --namespace=kube-system
NAME               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
vsphere-csi-node   3         3         3       3            3           <none>          13m

> kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6f9bcd8956-9x7wn                1/1     Running   0          22h
coredns-6f9bcd8956-clksx                1/1     Running   0          53m
coredns-6f9bcd8956-rjmfl                1/1     Running   0          22h
kubernetes-dashboard-5fc4ccc79f-kwnd4   1/1     Running   0          22h
metrics-server-7f85c59675-vz4bs         1/1     Running   0          22h
vsphere-csi-controller-0                5/5     Running   0          14m
vsphere-csi-node-64dmx                  3/3     Running   0          13m
vsphere-csi-node-nnldx                  3/3     Running   0          13m
vsphere-csi-node-pdbb5                  3/3     Running   0          13m
Verify that All Pods Can Access vCenter
After CSI is installed in the Kubernetes cluster, you see these Pods in the kube-system namespace:
> kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP             NODE                                   NOMINATED NODE   READINESS GATES
vsphere-csi-controller-0   5/5     Running   5          24h     172.16.30.3    c4e3819f-00fc-457b-beda-26fbdd589c53   <none>           <none>
vsphere-csi-node-64dmx     3/3     Running   3          24h     172.16.30.9    c9b4f441-4c08-43cf-bb17-8be80ed676a4   <none>           <none>
vsphere-csi-node-6bp4x     3/3     Running   0          5h15m   172.16.30.13   80dc0ceb-d460-4538-99b6-d7d8eacc4b74   <none>           <none>
vsphere-csi-node-92tvh     3/3     Running   3          5h21m   172.16.30.11   651bfb0d-084d-481f-98f6-f811284676ef   <none>           <none>
vsphere-csi-node-l4v2q     3/3     Running   0          4h33m   172.16.30.17   9233bd53-7e94-4b94-a197-f459ccdc25dd   <none>           <none>
vsphere-csi-node-nnldx     3/3     Running   3          24h     172.16.30.8    c4e3819f-00fc-457b-beda-26fbdd589c53   <none>           <none>
vsphere-csi-node-pw7hw     3/3     Running   3          5h18m   172.16.30.12   1b84ae77-45bf-4fa1-9f0b-cbc13bb75894   <none>           <none>
All of these Pods must be able to access vCenter. This means that the floating IP address allocated to the SNAT rule for this namespace on the Tier-0 router (or on the Tier-1 router, if the shared Tier-1 model is used) must be able to reach vCenter.
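If a Pod cannot reach vCenter, volume operations fail. One way to spot this (a suggested troubleshooting step, not part of the original procedure) is to check the CSI controller logs for connection or authentication errors against the vCenter address configured in csi-vsphere.conf:

> kubectl logs vsphere-csi-controller-0 -n kube-system -c vsphere-csi-controller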
Verify that CSI Custom Resource Definitions are Working
> kubectl get CSINode
NAME                                   CREATED AT
3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b   2020-03-05T22:29:00Z
c4e3819f-00fc-457b-beda-26fbdd589c53   2020-03-05T22:28:57Z
c9b4f441-4c08-43cf-bb17-8be80ed676a4   2020-03-05T22:29:00Z

> kubectl describe CSINode
Name:         3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b
Namespace:
Labels:
Annotations:
API Version:  storage.k8s.io/v1beta1
Kind:         CSINode
Metadata:
  Creation Timestamp:  2020-03-05T22:29:00Z
  Owner References:
    API Version:     v1
    Kind:            Node
    Name:            3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b
    UID:             2ab6f1cb-a2f7-41a9-87b8-4197177c6b70
  Resource Version:  153666
  Self Link:         /apis/storage.k8s.io/v1beta1/csinodes/3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b
  UID:               79144892-45b3-4fa7-8242-a2468993260a
Spec:
  Drivers:
    Name:           csi.vsphere.vmware.com
    Node ID:        3a0ea98b-879e-4f7a-abbd-e3ad426c8a1b
    Topology Keys:
Events:

Name:         c4e3819f-00fc-457b-beda-26fbdd589c53
Namespace:
Labels:
Annotations:
API Version:  storage.k8s.io/v1beta1
Kind:         CSINode
Metadata:
  Creation Timestamp:  2020-03-05T22:28:57Z
  Owner References:
    API Version:     v1
    Kind:            Node
    Name:            c4e3819f-00fc-457b-beda-26fbdd589c53
    UID:             442cdc30-2e2b-4e7f-ac2b-17e667f18688
  Resource Version:  153646
  Self Link:         /apis/storage.k8s.io/v1beta1/csinodes/c4e3819f-00fc-457b-beda-26fbdd589c53
  UID:               acbc421b-cf7f-415d-bc10-1316b28dbd47
Spec:
  Drivers:
    Name:           csi.vsphere.vmware.com
    Node ID:        c4e3819f-00fc-457b-beda-26fbdd589c53
    Topology Keys:
Events:

Name:         c9b4f441-4c08-43cf-bb17-8be80ed676a4
Namespace:
Labels:
Annotations:
API Version:  storage.k8s.io/v1beta1
Kind:         CSINode
Metadata:
  Creation Timestamp:  2020-03-05T22:29:00Z
  Owner References:
    API Version:     v1
    Kind:            Node
    Name:            c9b4f441-4c08-43cf-bb17-8be80ed676a4
    UID:             c4d8e053-8a3f-466e-bd1f-d491f58cabc8
  Resource Version:  153663
  Self Link:         /apis/storage.k8s.io/v1beta1/csinodes/c9b4f441-4c08-43cf-bb17-8be80ed676a4
  UID:               69958fca-95ac-4e70-b690-af81c2434fc5
Spec:
  Drivers:
    Name:           csi.vsphere.vmware.com
    Node ID:        c9b4f441-4c08-43cf-bb17-8be80ed676a4
    Topology Keys:
Events:
> kubectl get csidrivers
NAME                     CREATED AT
csi.vsphere.vmware.com   2020-03-05T22:28:21Z

> kubectl describe csidrivers
Name:         csi.vsphere.vmware.com
Namespace:
Labels:
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"storage.k8s.io/v1beta1","kind":"CSIDriver","metadata":{"annotations":{},"name":"csi.vsphere.vmware.com"},"spec":{"attachReq...
API Version:  storage.k8s.io/v1beta1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2020-03-05T22:28:21Z
  Resource Version:    153505
  Self Link:           /apis/storage.k8s.io/v1beta1/csidrivers/csi.vsphere.vmware.com
  UID:                 906e512c-c897-40cb-8c97-9975fce2fcf8
Spec:
  Attach Required:    true
  Pod Info On Mount:  false
Events:
Verify that ProviderID was Added to Nodes
> kubectl describe nodes | grep "ProviderID"
ProviderID:  vsphere://421025c3-0ce4-8cff-8229-1a2ec0bf2d97
ProviderID:  vsphere://42109234-71ec-3f26-5ddd-9c97c5a02fe9
ProviderID:  vsphere://4210ecc1-e7d8-a130-19e5-f20804b5b36e
Create a vSphere Storage Class
Create the following YAML:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: demo-sts-storageclass
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  datastoreurl: "ds:///vmfs/volumes/vsan:52d8eb4842dbf493-41523be9cd4ff7b7/"
For a non-vSAN datastore, the datastoreurl value looks like ds:///vmfs/volumes/5e66e525-8e46bd39-c184-005056ae28de/.
You can find the datastoreurl value in vCenter, on the datastore's Summary tab.

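To exercise the new storage class, you can create a PersistentVolumeClaim against it. The following is a minimal sketch; the claim name and requested size are illustrative and not part of the original procedure:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-sts-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: demo-sts-storageclass
  resources:
    requests:
      storage: 1Gi

After you apply this YAML, kubectl get pvc should show the claim as Bound, and the backing virtual disk and its storage policy compliance are visible in the vSphere client, as described in the Overview.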
Configuration File Templates
RBAC Manifest
Define RBAC for CSI with a vsphere-csi-controller-rbac.yaml file that looks like this:
kind: ServiceAccount
apiVersion: v1
metadata:
  name: vsphere-csi-controller
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-role
rules:
  - apiGroups: [""]
    resources: ["nodes", "persistentvolumeclaims", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vsphere-csi-controller-binding
subjects:
  - kind: ServiceAccount
    name: vsphere-csi-controller
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: vsphere-csi-controller-role
  apiGroup: rbac.authorization.k8s.io
CSI Driver Manifest
Define a CSI driver with a vsphere-csi-controller-ss.yaml file that looks like this:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: vsphere-csi-controller
  namespace: kube-system
spec:
  serviceName: vsphere-csi-controller
  replicas: 1
  updateStrategy:
    type: "RollingUpdate"
  selector:
    matchLabels:
      app: vsphere-csi-controller
  template:
    metadata:
      labels:
        app: vsphere-csi-controller
        role: vsphere-csi
    spec:
      serviceAccountName: vsphere-csi-controller
      dnsPolicy: "Default"
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.1.1
          args:
            - "--v=4"
            - "--timeout=300s"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
        - name: vsphere-csi-controller
          image: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.2
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /var/vcap/data/csi/sockets/pluginproxy/csi.vsphere.vmware.com"]
          args:
            - "--v=4"
          imagePullPolicy: "Always"
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/vcap/data/csi/sockets/pluginproxy/csi.sock
            - name: X_CSI_MODE
              value: "controller"
            - name: VSPHERE_CSI_CONFIG
              value: "/etc/cloud/csi-vsphere.conf"
          volumeMounts:
            - mountPath: /etc/cloud
              name: vsphere-config-volume
              readOnly: true
            - mountPath: /var/vcap/data/csi/sockets/pluginproxy/
              name: socket-dir
          ports:
            - name: healthz
              containerPort: 9808
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 5
            failureThreshold: 3
        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v1.1.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/vcap/data/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - mountPath: /var/vcap/data/csi/sockets/pluginproxy/
              name: socket-dir
        - name: vsphere-syncer
          image: gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.2
          args:
            - "--v=2"
          imagePullPolicy: "Always"
          env:
            - name: FULL_SYNC_INTERVAL_MINUTES
              value: "30"
            - name: VSPHERE_CSI_CONFIG
              value: "/etc/cloud/csi-vsphere.conf"
          volumeMounts:
            - mountPath: /etc/cloud
              name: vsphere-config-volume
              readOnly: true
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.2.2
          args:
            - "--v=4"
            - "--timeout=300s"
            - "--csi-address=$(ADDRESS)"
            - "--feature-gates=Topology=true"
            - "--strict-topology"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - mountPath: /csi
              name: socket-dir
      volumes:
        - name: vsphere-config-volume
          secret:
            secretName: vsphere-config-secret
        - name: socket-dir
          hostPath:
            path: /var/vcap/data/csi/sockets/pluginproxy/csi.vsphere.vmware.com
            type: DirectoryOrCreate
---
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  attachRequired: true
  podInfoOnMount: false
DaemonSet Manifest
Define a DaemonSet with a vsphere-csi-node-ds.yaml file that looks like this:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: vsphere-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: vsphere-csi-node
  updateStrategy:
    type: "RollingUpdate"
  template:
    metadata:
      labels:
        app: vsphere-csi-node
        role: vsphere-csi
    spec:
      dnsPolicy: "Default"
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/csi.vsphere.vmware.com /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com-reg.sock"]
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com/csi.sock
          securityContext:
            privileged: true
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: vsphere-csi-node
          image: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.2
          imagePullPolicy: "Always"
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix:///csi/csi.sock
            - name: X_CSI_MODE
              value: "node"
            - name: X_CSI_SPEC_REQ_VALIDATION
              value: "false"
            - name: VSPHERE_CSI_CONFIG
              value: "/etc/cloud/csi-vsphere.conf" # csi-vsphere.conf is the name of the file used to create the secret with the "--from-file" flag
          args:
            - "--v=4"
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          volumeMounts:
            - name: vsphere-config-volume
              mountPath: /etc/cloud
              readOnly: true
            - name: plugin-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/vcap/data/kubelet
              # needed so that any mounts set up inside this container are
              # propagated back to the host machine.
              mountPropagation: "Bidirectional"
            - name: device-dir
              mountPath: /dev
          ports:
            - name: healthz
              containerPort: 9808
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 5
            failureThreshold: 3
        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v1.1.0
          args:
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /csi/csi.sock
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
      volumes:
        - name: vsphere-config-volume
          secret:
            secretName: vsphere-config-secret
        - name: registration-dir
          hostPath:
            path: /var/vcap/data/kubelet/plugins_registry
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /var/vcap/data/kubelet/plugins_registry/csi.vsphere.vmware.com
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /var/vcap/data/kubelet
            type: Directory
        - name: device-dir
          hostPath:
            path: /dev
Please send any feedback you have to pks-feedback@pivotal.io.
