Adding Custom Linux Workloads
Warning: VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) v1.8 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.
Note: As of v1.8, Enterprise PKS has been renamed to VMware Tanzu Kubernetes Grid Integrated Edition. Some screenshots in this documentation do not yet reflect the change.
This topic describes how to add custom workloads to VMware Tanzu Kubernetes Grid Integrated Edition clusters.
Custom workloads define what every cluster includes out of the box. For example, you can use custom workloads to deploy metrics collection or logging components on each cluster at creation time.
Create a YAML configuration for your custom workloads. Consult the following example from the Kubernetes documentation:
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
To apply custom Kubernetes workloads to every cluster created from a plan, enter your YAML configuration in the **(Optional) Add-ons - Use with caution** field of the plan configuration pane in the Tanzu Kubernetes Grid Integrated Edition tile.
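Before pasting YAML into the tile, you can sanity-check it locally. The following is a minimal sketch, assuming `kubectl` is installed and the manifest above is saved to a hypothetical file named `workload.yaml`:

```shell
# Validate the manifest client-side without creating any resources
kubectl apply --dry-run=client -f workload.yaml

# After a cluster is created from the plan, confirm the add-on is running
kubectl get deployment nginx-deployment
```

A client-side dry run catches YAML syntax errors and malformed fields before the add-on is applied to every cluster created from the plan.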
For more information, see the Plans section of the Installing Tanzu Kubernetes Grid Integrated Edition topic for your IaaS. For example, Plans in Installing Tanzu Kubernetes Grid Integrated Edition on vSphere.