Creating Workload Routes

This topic describes how to deploy a workload and create a TCP route so that the workload is publicly reachable. After you create a route, your workload can receive traffic through a dedicated port at the IP address or DNS name associated with the Ingress Gateway component of Pivotal Ingress Router.

The procedures in this topic assume that the routing to workloads feature has been enabled.

Create a Workload Route

To create a workload route:

  1. Create a cluster using the PKS CLI as described in Creating and Accessing Clusters.

    $ pks create-cluster <CLUSTER NAME> -e <HOSTNAME> -p <PLAN> --wait
    
  2. Authenticate with your newly created cluster.

    $ pks get-credentials <CLUSTER NAME>
    
  3. Deploy a workload that can listen for network traffic. The following example deploys nginx, a simple web server:

    $ cat << EOF > nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 1
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    EOF
    
    $ kubectl apply -f nginx-deployment.yaml
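
    Optionally, confirm that the pod is running before you continue. The -l flag filters pods by the app=nginx label set in the Deployment template:

    $ kubectl get pods -l app=nginx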
    
  4. Expose the pod with a Service of type NodePort. The following example exposes port 80 in the nginx container as a high-numbered port on each worker VM. For example, traffic that reaches port 32671 on a worker VM is forwarded to the nginx server listening on port 80:

    $ cat << EOF > nginx-nodeport.yaml
    kind: Service
    apiVersion: v1
    metadata:
      name: nginx
    spec:
      type: NodePort
      ports:
      - port: 80
        targetPort: 80
      selector:
        app: nginx
    EOF
    $ kubectl apply -f nginx-nodeport.yaml
    
  5. Retrieve the high-numbered port that Kubernetes allocated for the NodePort Service. In the example below, 32671 is the port allocated for the Service.

    $ kubectl get svc
    NAME    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    nginx   NodePort   10.100.200.113   <none>        80:32671/TCP   20m
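
    To script this lookup, a minimal sketch using kubectl's JSONPath output (assuming the Service is named nginx, as above):

    $ kubectl get svc nginx -o=jsonpath='{.spec.ports[0].nodePort}'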
    
  6. Retrieve the IP addresses of the worker VMs. You use one of these addresses in the next step.

    $ kubectl get node -o=wide
    NAME   STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    vm-a   Ready    <none>   22h   v1.13.5   10.0.11.23    1.2.3.4       Ubuntu 16.04.6 LTS   4.15.0-64-generic   docker://18.6.3
    vm-b   Ready    <none>   21h   v1.13.5   10.0.11.24    2.3.4.1       Ubuntu 16.04.6 LTS   4.15.0-64-generic   docker://18.6.3
    vm-c   Ready    <none>   21h   v1.13.5   10.0.11.25    3.4.1.2       Ubuntu 16.04.6 LTS   4.15.0-64-generic   docker://18.6.3
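
    If you prefer to script this step, a minimal sketch that prints only the internal IP addresses of the nodes:

    $ kubectl get nodes \
      -o=jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'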
    
  7. Confirm that the web server can be reached internally. The command below runs a short-lived container that makes an HTTP request to the nginx web server. If your deployed workload is not an HTTP server, consider using nc instead, as in the sketch after this command.

    $ kubectl run test-nodeport-to-nginx --rm --restart=Never -it --image=busybox \
      -- wget -O - 'http://<ANY WORKER VM IP>:<NODE PORT>'
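
    For a workload that speaks raw TCP rather than HTTP, a minimal sketch using nc from the same busybox image (option support varies between busybox builds):

    $ kubectl run test-nodeport-tcp --rm --restart=Never -it --image=busybox \
      -- nc <ANY WORKER VM IP> <NODE PORT>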
    
  8. Create a VirtualService for the workload. The workload-routing-controller running on the system worker VM watches for new VirtualService resources on this cluster. This step signals your intent to the system cluster to configure routing for the workload.

    $ cat <<EOF > nginx-virtualservice.yaml
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: workload-1 # reusing this name will overwrite the virtual service of the same name
    spec:
      tcp:
      - route:
        - destination:
            host: nodeport
            port:
              number: <NODE PORT> # this is the port that Kubernetes randomly picked
    EOF
    $ kubectl apply -f nginx-virtualservice.yaml
    
  9. Determine the externally reachable port that is dedicated to this workload. After a few seconds, the VirtualService object is populated with an annotation that contains the port allocated for the workload.

    $ kubectl get virtualservice workload-1 \
      -o=jsonpath='{.metadata.annotations.nodeport/<NODE PORT>}'
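
    For example, to capture the allocated port in a shell variable for the next step (a minimal sketch; replace <NODE PORT> with the port from step 5):

    $ EXTERNAL_PORT=$(kubectl get virtualservice workload-1 \
      -o=jsonpath='{.metadata.annotations.nodeport/<NODE PORT>}')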
    
  10. Connect to your workload:

    $ curl http://<LOAD BALANCER IP>:<EXTERNALLY REACHABLE PORT>
    

    Alternatively, instead of the load balancer IP, you can use the DNS name associated with the load balancer IP.
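
    For example, assuming a DNS name has been mapped to the load balancer IP:

    $ curl http://<LOAD BALANCER DNS NAME>:<EXTERNALLY REACHABLE PORT>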

Stop Routing to a Workload

  1. To stop routing to this workload, delete the VirtualService that configured the routing:

    $ kubectl delete virtualservice workload-1
    virtualservice.networking.istio.io "workload-1" deleted
    

    The workload is no longer publicly reachable, but it is still exposed internally through the NodePort Service.
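
    To remove the internal exposure as well, you can also delete the NodePort Service (named nginx in the earlier example):

    $ kubectl delete service nginx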