
Deploy and Access Basic Workloads

This topic describes how to deploy and access basic workloads in Pivotal Container Service (PKS).

If you use Google Cloud Platform (GCP) or vSphere with NSX-T integration, your cloud provider can configure a load balancer for your workload.

If you use vSphere without NSX-T, you can choose to configure your own external load balancer or expose static ports to access your workload without a load balancer.

Access Workloads Using an Internal Load Balancer

If you use GCP or vSphere with NSX-T, follow the steps below to deploy and access basic workloads using a load balancer configured by your cloud provider.

Note: This approach creates a dedicated load balancer for each workload. This may be an inefficient use of resources in clusters with many apps.

  1. Expose the workload using a Service with type: LoadBalancer. See the Kubernetes documentation for more information about the LoadBalancer Service type. A sample Service manifest and sample kubectl get svc output appear after this list.

  2. Download the spec for a basic NGINX app from the cloudfoundry-incubator/kubo-ci GitHub repository.

  3. Run kubectl create -f nginx.yml to deploy the basic NGINX app. This command creates three pods (replicas) that span three worker nodes.

  4. Wait until your cloud provider creates a dedicated load balancer and connects it to the worker nodes on a specific port.

  5. Run kubectl get svc nginx and retrieve the load balancer IP address and port number.

  6. On the command line of a server with network connectivity to the load balancer, run curl http://EXTERNAL-IP:PORT to access the app. Replace EXTERNAL-IP with the IP address of the load balancer and PORT with the port number.
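
For reference, the following is a minimal sketch of a Service manifest with type: LoadBalancer for the NGINX app. The Service name, the app: nginx selector, and port 80 are assumptions; match them to the labels and ports defined in the nginx.yml spec you downloaded.

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: LoadBalancer
      ports:
      - port: 80          # port the load balancer listens on
        targetPort: 80    # port the NGINX pods listen on
      selector:
        app: nginx        # assumed pod label; use the labels from nginx.yml

After your cloud provider provisions the load balancer, kubectl get svc nginx returns output similar to the following. The addresses and ports shown are examples only; in this case, you would run curl http://203.0.113.10:80 to access the app.

    NAME    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
    nginx   LoadBalancer   10.100.200.20   203.0.113.10   80:30642/TCP   3m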

Access Workloads Using an External Load Balancer

All deployments can use an external load balancer. To do so, follow the steps below to deploy and access basic workloads.

  1. Expose each workload that you want to access using a Service with type: NodePort. See the Kubernetes documentation for more information about the NodePort Service type. A sample Service manifest appears after this list.

  2. Map each node port that you need from the worker nodes to an external port on your external load balancer. How you map these ports depends on your load balancer; see your external load balancer documentation for more information.

  3. For each app, run curl http://LOAD-BALANCER-IP:EXTERNAL-PORT. Replace LOAD-BALANCER-IP with the IP address of your external load balancer and EXTERNAL-PORT with the external port number.
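
For reference, the following is a minimal sketch of a Service manifest with type: NodePort for an app such as NGINX. The Service name, the app: nginx selector, and port 80 are assumptions; match them to your workload. Because nodePort is not set, Kubernetes assigns a port from its default 30000-32767 range; run kubectl get svc to see the assigned port, then map that port in your external load balancer.

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx
    spec:
      type: NodePort
      ports:
      - port: 80          # cluster-internal port for the Service
        targetPort: 80    # port the pods listen on
      selector:
        app: nginx        # assumed pod label; use the labels from your app spec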

Access Workloads without a Load Balancer

If you use vSphere without NSX-T integration, you do not have a load balancer configured by your cloud provider. You can choose to configure your own external load balancer or follow the procedures in this section to access your workloads without a load balancer.

If you do not use an external load balancer, you can configure the NGINX service to expose a static port on each worker node. From outside the cluster, you can reach the service at http://NODE-IP:NODE-PORT.
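
If you want a predictable port instead of one that Kubernetes assigns, you can pin nodePort in the Service spec. A minimal sketch of that spec section follows; the value 30080 is illustrative and must fall within the cluster's node port range, which defaults to 30000-32767. With this setting, NGINX is reachable at http://NODE-IP:30080.

    spec:
      type: NodePort
      ports:
      - port: 80          # cluster-internal port for the Service
        targetPort: 80    # port the NGINX pods listen on
        nodePort: 30080   # illustrative static node port within 30000-32767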

To expose a static port on your workload, perform the following steps:

  1. Download the spec for a basic NGINX app from the cloudfoundry-incubator/kubo-ci GitHub repository.

  2. Run kubectl create -f nginx.yml to deploy the basic NGINX app. This command creates three pods (replicas) that span three worker nodes.

  3. Expose the workload using a Service with type: NodePort. See the Kubernetes documentation for more information about the NodePort Service type. Sample commands and output appear after this list.

  4. Retrieve the IP address for a worker node with a running NGINX pod.

    Note: If you deployed more than three worker nodes, some worker nodes may not contain a running NGINX pod. Select a worker node that contains a running NGINX pod.

    You can retrieve the IP address for a worker node with a running NGINX pod in one of the following ways:

    • On the command line, run kubectl get nodes to list the node names. Locate the node in vCenter or the GCP console to find its IP address.
    • On the Ops Manager command line, run bosh vms to find the IP address.

  5. On the command line, run kubectl get svc nginx. Find the assigned node port number, which falls in the 30000-32767 range by default. Sample output appears after this list.

  6. On the command line of a server with network connectivity and visibility to the IP address of the worker node, run curl http://NODE-IP:NODE-PORT to access the app. Replace NODE-IP with the IP address of the worker node, and NODE-PORT with the node port number.
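
The following sketch shows what steps 4 through 6 might look like; all pod names, node names, IP addresses, and port numbers are examples only. Running kubectl get pods -o wide is one way to see which worker node runs each NGINX pod, and kubectl get svc nginx shows the node port that Kubernetes assigned.

    $ kubectl get pods -o wide
    NAME                     READY   STATUS    RESTARTS   AGE   IP            NODE
    nginx-4217019353-nsz2s   1/1     Running   0          5m    10.200.20.5   vm-worker-1

    $ kubectl get svc nginx
    NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
    nginx   NodePort   10.100.200.30   <none>        80:30678/TCP   5m

    $ curl http://192.0.2.15:30678

In this example, 192.0.2.15 is the IP address of the worker node that runs the NGINX pod, found through vCenter or bosh vms as described above, and 30678 is the assigned node port.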


