Installing Pivotal Ingress Router


Pivotal Ingress Router is currently in beta. For questions or to report any issues, please reach out to your primary Pivotal contact.

This topic describes how to install the Pivotal Ingress Router to route to PKS API Masters and, optionally, to workloads running on worker VMs. You will deploy a Kubernetes cluster for platform services with PKS, install the Pivotal Ingress Router onto the platform services cluster, and configure DNS and infrastructure load balancers to route requests for a custom domain to the Pivotal Ingress Router Ingress Gateways.

Prerequisites

  • Deploy Pivotal Container Service (PKS) version 1.5 or later.
  • Deploy Harbor or another container registry to store container images.
  • Install the PKS CLI. The CLI will be used to create and manage Kubernetes clusters.
  • Install Docker. Docker will be used to retag your images and push them to the private registry.
  • Install the Helm CLI. Helm will be used to template deployments onto Kubernetes. We recommend version 2.14.3 or 2.15.1+. Do not use Helm 2.15.0.
  • Install the Istio CLI. istioctl will be used to install Istio onto Kubernetes. It is included with this product on the Pivotal Network. The istioctl version should match the version of Istio in use, in this case Istio 1.4.5.
  • Install the Kubernetes CLI (kubectl) to deploy and manage software on Kubernetes clusters.
  • Download Pivotal Ingress Router from Pivotal Network.

Additionally, you will need the ability to configure infrastructure load balancers and DNS. On premises, this may be done in coordination with your networking team. In public clouds, you may do this via your cloud provider’s GUI console or with a CLI.

Examples are provided for GCP using the gcloud CLI and for Azure using the az CLI; guidance for vSphere environments is also included.
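
If you have not already configured the gcloud CLI, a minimal setup might look like the following; the project ID and region values are placeholders for your own environment:

    gcloud auth login
    gcloud config set project <GCP_PROJECT_ID>
    gcloud config set compute/region <REGION>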

Prepare Environment for Installation

Deploy a platform services cluster with PKS

These steps should be executed from wherever the PKS API is accessible. If necessary, this can be done from the Operations Manager VM; see the instructions for SSHing into the Operations Manager VM.

  1. Use the PKS CLI to log in as an admin user. The password can be found in the Credentials tab of the Pivotal Container Service tile under Uaa Admin Password, or retrieved with the om CLI:

    export OM_TARGET=<OPSMAN_URL>
    export OM_USERNAME=<OPSMAN_USERNAME>
    export OM_PASSWORD=<OPSMAN_PASSWORD>
    # export OM_SKIP_SSL_VALIDATION=true # uncomment if using a self-signed cert
    
    om credentials -p pivotal-container-service -c .properties.uaa_admin_password
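
    With the UAA admin password in hand, log in to the PKS API with the PKS CLI. For example, assuming your PKS API hostname is api.pks.example.com (replace the hostname and certificate path with your environment’s values):

    pks login -a api.pks.example.com -u admin -p <UAA_ADMIN_PASSWORD> --ca-cert <PATH_TO_PKS_CA_CERT>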
    
  2. Choose a fully qualified domain name to use for the platform services cluster. This domain name will be used to target your PKS cluster with the kubectl CLI. For example, platform-services.example.com.

  3. Create a cluster called platform-services on which you will install the Pivotal Ingress Router. This cluster needs to be created with your chosen domain name and a PKS plan configured with at least a medium worker VM type and Allow Privileged checked.

    ps_subdomain=<CHOSEN_SUBDOMAIN_NAME> # Set if using Azure
    ps_domain=<CHOSEN_DOMAIN_NAME>
    plan_name=<PRIVILEGED_PLAN_NAME>
    
    pks create-cluster platform-services -e ${ps_domain} -p ${plan_name} --wait
    

    For example, if your ps_domain is platform-services.example.com, and you want to use the plan privileged:

    ps_subdomain=platform-services
    ps_domain=platform-services.example.com
    plan_name=privileged
    
    pks create-cluster platform-services -e ${ps_domain} -p ${plan_name} --wait
    
    Do not rely on the plan name when verifying these requirements. Check the plan details to make sure it includes at least a medium worker VM type and has Allow Privileged checked. Changing the plan assigned to a cluster is not currently supported, so increasing the worker size in the future requires changing the plan configuration itself (reference).
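
    While the --wait flag blocks until the cluster is ready, you can check provisioning status at any time from another shell. A minimal check:

    pks cluster platform-services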

Configure a load balancer for the platform services cluster

While the platform services cluster is being created you can configure a TCP load balancer to listen on port 8443 and forward to your Kubernetes Masters.

To configure a load balancer, select your infrastructure below and follow the corresponding instructions:

For GCP environments:

  1. Create your TCP load balancer.

    env_name=CHOSEN_ENV_NAME
    region=REGION
    
    gcloud compute target-pools create ${env_name}-platform-services \
        --region ${region} \
        --session-affinity NONE
    gcloud compute addresses create ${env_name}-platform-services-ip \
        --region ${region}
    
    external_ip=$(gcloud compute addresses describe \
        ${env_name}-platform-services-ip \
        --region ${region} --format=json | jq -r .address)
    gcloud compute forwarding-rules create ${env_name}-platform-services \
        --target-pool ${env_name}-platform-services \
        --region ${region} \
        --address ${external_ip} \
        --ports 8443
    
  2. Create a DNS A record for your chosen DNS name to point at your load balancer.

    dns_zone=DNS_ZONE
    external_ip=EXTERNAL_IP
    
    gcloud dns record-sets transaction start --zone ${dns_zone}
    gcloud dns record-sets transaction add ${external_ip} --name ${ps_domain} \
      --ttl 300 --type A --zone ${dns_zone}
    gcloud dns record-sets transaction execute --zone ${dns_zone}
    
  3. Once the cluster is ready, add the platform services cluster’s master VMs to the load balancer’s pool.
    1. Run the pks cluster platform-services command and record the Kubernetes Master IP and UUID values.
    2. In GCP, go to Compute Engine > VM instances
    3. In the search bar, enter the Kubernetes Master IP and UUID.
    4. Record VM instance name or names.
    5. Run the following command:

      instances=INSTANCE_VM1,INSTANCE_VM2,INSTANCE_VM3
      instances_zone=INSTANCES_ZONE
      
      gcloud compute target-pools add-instances ${env_name}-platform-services \
        --instances ${instances} \
        --region ${region} \
        --instances-zone ${instances_zone}
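
      To confirm the masters were added, you can describe the target pool as a quick sanity check:

      gcloud compute target-pools describe ${env_name}-platform-services \
        --region ${region} \
        --format='value(instances)'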
      
For Azure environments:

  1. Create your load balancer.

    resource_group_name=CHOSEN_RESOURCE_GROUP_NAME
    
    az network lb create \
      --resource-group ${resource_group_name} \
      --name ${resource_group_name}-platform-services \
      --sku standard \
      --public-ip-address ${resource_group_name}-platform-services-ip \
      --frontend-ip-name LoadBalancerFrontEnd \
      --backend-pool-name ${resource_group_name}-platform-services
    
  2. Create the Health Probe.

    az network lb probe create \
      --resource-group ${resource_group_name} \
      --lb-name ${resource_group_name}-platform-services \
      --name ${resource_group_name}-platform-services-kube-api \
      --protocol tcp \
      --port 8443
    
  3. Create the Load Balancer Rule for port 8443.

    az network lb rule create \
      --resource-group ${resource_group_name} \
      --lb-name ${resource_group_name}-platform-services \
      --name ${resource_group_name}-platform-services-kube-api \
      --protocol tcp \
      --frontend-port 8443 \
      --backend-port 8443 \
      --frontend-ip-name LoadBalancerFrontEnd \
      --backend-pool-name ${resource_group_name}-platform-services \
      --probe-name ${resource_group_name}-platform-services-kube-api
    
  4. Create the Security Group Rule for port 8443 on the relevant security groups for your master VMs.

    network_security_group_name=NETWORK_SECURITY_GROUP
    priority=PRIORITY
    
    az network nsg rule create \
      --resource-group ${resource_group_name} \
      --name ${resource_group_name}-kubernetes-api \
      --nsg-name ${network_security_group_name} \
      --priority ${priority} \
      --destination-port-ranges 8443 \
      --protocol tcp
    
  5. Create a DNS A record for your chosen DNS name to point at your load balancer.

    dns_zone=DNS_ZONE
    external_ip="$(az network public-ip show \
      --resource-group ${resource_group_name} \
      --name ${resource_group_name}-platform-services-ip | jq -r .ipAddress)"
    
    az network dns record-set a add-record \
      --resource-group ${resource_group_name} \
      --ipv4-address ${external_ip} \
      --record-set-name ${ps_subdomain} \
      --zone-name ${dns_zone}
    
  6. Once the cluster is ready, add the platform services cluster’s master VMs to the load balancer’s pool.
    1. Run the pks cluster platform-services command and record the UUID value.
    2. In Azure, go to Availability Sets
    3. In the search bar, enter the UUID.
    4. Select the availability set that contains the UUID and appended by master.
    5. Click on each VM and select Settings > Networking to get the Network Interface Name.
    6. Run the following command for each Network Interface to attach the instance to the backend pool:

      nic_name=NETWORK_INTERFACE_NAME
      
      az network nic ip-config address-pool add \
        --address-pool ${resource_group_name}-platform-services \
        --ip-config-name ipconfig0 \
        --lb-name ${resource_group_name}-platform-services \
        --nic-name ${nic_name} \
        --resource-group ${resource_group_name}
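
      To confirm the network interfaces were attached, you can list the backend pool configuration as a quick sanity check:

      az network lb address-pool show \
        --resource-group ${resource_group_name} \
        --lb-name ${resource_group_name}-platform-services \
        --name ${resource_group_name}-platform-services \
        --query backendIpConfigurations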
      
For vSphere environments:

  1. Ensure your on-premise infrastructure load balancer is configured to listen on port 8443.
  2. Create a DNS A record for your chosen DNS name to point at your load balancer.
  3. Once the cluster is ready, add the platform services cluster’s master VMs to the load balancer’s pool.
     Note: You can find the INSTANCE_VM and INSTANCES_ZONE values by looking at the kubernetes_master_ips field in the output of the pks cluster platform-services command and mapping each IP to the instance VM(s) created by your IaaS.

Confirm Cluster Access

  1. Run the following command to get the kubeconfig for the platform services cluster. For more info, see Retrieving Cluster Credentials and Configuration.

    pks get-credentials platform-services
    
  2. Verify kubectl access using cluster-info.

    kubectl cluster-info
    

    You should see output similar to the following:

    Kubernetes master is running at https://platform-services.example.com:8443
    CoreDNS is running at https://platform-services.example.com:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
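
    As an additional check, you can list the cluster’s worker nodes and confirm they are Ready:

    kubectl get nodes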
    

Install Pivotal Ingress Router

The following steps install Pivotal Ingress Router and its dependencies to enable automated routing to masters. The Pivotal Ingress Router components are the cluster registrar and the master routing controller. The cluster registrar syncs cluster information from the PKS API, and the master routing controller configures Istio with the correct route information. Once both components are installed, new clusters will be accessible immediately after creation.

Download Pivotal Ingress Router from Pivotal Network

  1. Download Pivotal Ingress Router from Pivotal Network.

  2. Extract the file in a location from which you can access the platform-services cluster created above with kubectl.

    tar -xvf ingress-router-0.5.0-build.01.tar
    
  3. Move into the directory. From here on, the instructions assume that you are at the root level of the ingress-router directory.

    cd ingress-router
    

Relocate container images to private registry

The Pivotal Ingress Router images need to be uploaded to a private registry that Kubernetes can use.

  1. Load all of the image tarballs from the images directory into your local Docker daemon.

    export ingress_router_version="$(cat ingress-router-version.txt)"
    export istio_version="$(cat istio-version.txt)"
    
    docker load < images/cluster_registrar_${ingress_router_version}.tar
    docker load < images/ingress_router_operator_${ingress_router_version}.tar
    docker load < images/master_routing_controller_${ingress_router_version}.tar
    docker load < images/workload_routing_controller_manager_${ingress_router_version}.tar
    docker load < images/workload_routing_controller_${ingress_router_version}.tar
    docker load < images/kubectl_${istio_version}.tar
    docker load < images/pilot_${istio_version}.tar
    docker load < images/proxy_init_${istio_version}.tar
    docker load < images/proxyv2_${istio_version}.tar
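
    To confirm the images loaded, you can list them; the repository names below are the ones loaded above:

    docker images | grep -E 'pivotalcf|cf-routing'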
    
  2. Retag the images for your registry. The Pivotal Ingress Router Helm installation assumes all images are placed under a single registry path, in this case pivotalcf.

    registry_url=<REGISTRY_URL>
    
    docker tag pivotalcf/cluster-registrar:${ingress_router_version} \
      ${registry_url}/pivotalcf/cluster-registrar:${ingress_router_version}
    
    docker tag pivotalcf/ingress-router-operator:${ingress_router_version} \
      ${registry_url}/pivotalcf/ingress-router-operator:${ingress_router_version}
    
    docker tag pivotalcf/master-routing-controller:${ingress_router_version} \
      ${registry_url}/pivotalcf/master-routing-controller:${ingress_router_version}
    
    docker tag pivotalcf/workload-routing-controller-manager:${ingress_router_version} \
      ${registry_url}/pivotalcf/workload-routing-controller-manager:${ingress_router_version}
    
    docker tag pivotalcf/workload-routing-controller:${ingress_router_version} \
      ${registry_url}/pivotalcf/workload-routing-controller:${ingress_router_version}
    
    docker tag gcr.io/cf-routing/kubectl:${istio_version} \
      ${registry_url}/pivotalcf/kubectl:${istio_version}
    
    docker tag gcr.io/cf-routing/pilot:${istio_version} \
      ${registry_url}/pivotalcf/pilot:${istio_version}
    
    docker tag gcr.io/cf-routing/proxy_init:${istio_version} \
      ${registry_url}/pivotalcf/proxy_init:${istio_version}
    
    docker tag gcr.io/cf-routing/proxyv2:${istio_version} \
      ${registry_url}/pivotalcf/proxyv2:${istio_version}
    
  3. Push the images to your registry.

    docker push ${registry_url}/pivotalcf/cluster-registrar:${ingress_router_version}
    docker push ${registry_url}/pivotalcf/ingress-router-operator:${ingress_router_version}
    docker push ${registry_url}/pivotalcf/master-routing-controller:${ingress_router_version}
    docker push ${registry_url}/pivotalcf/workload-routing-controller-manager:${ingress_router_version}
    docker push ${registry_url}/pivotalcf/workload-routing-controller:${ingress_router_version}
    docker push ${registry_url}/pivotalcf/kubectl:${istio_version}
    docker push ${registry_url}/pivotalcf/pilot:${istio_version}
    docker push ${registry_url}/pivotalcf/proxy_init:${istio_version}
    docker push ${registry_url}/pivotalcf/proxyv2:${istio_version}
    
     If running a private registry, you’ll need to ensure your local docker daemon trusts the certificate authority that signed the registry’s server certificate. For example, when using Docker Desktop on Mac, follow these instructions. Otherwise docker push will fail with an error like x509: certificate signed by unknown authority.
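
     If your registry requires authentication, log in before pushing; this is a standard docker login using your registry’s credentials:

     docker login ${registry_url}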

Use Istioctl to install Istio

  1. Set the hub value in istio/configuration-profile.yaml to the image registry hub where your Ingress Router images reside.

    The credentials for the image registry hub come from an image-registry-credentials secret, which is created when you install the Ingress Router via Helm. The Istio installation will be in a failing state until that secret exists.
  2. Using the istioctl CLI, run the commands below to install Istio in the ingress-router-system namespace on the platform-services cluster.

    namespace=ingress-router-system

    istioctl manifest apply \
      -f istio/configuration-profile.yaml \
      --set defaultNamespace="${namespace}"
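
    You can watch the Istio components come up. As noted above, some pods may fail to pull images until the image-registry-credentials secret is created by the Helm installation in the next section:

    kubectl get pods -n ${namespace}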
    

Configure User Provided Values

  1. Fill out all of the values in helm/ingress-router/user-provided-values.yaml as documented within the file. Further details are provided below for the following fields:

    • Create a UAA Client for pks.clientID and pks.clientSecret.
    • Retrieve the PKS CA Certificate for pks.caCertificate.
    • Retrieve the PKS API URL for pks.apiURL.
    It is recommended to store user-provided-values.yaml somewhere safe since it will be used in future Pivotal Ingress Router upgrades.
  2. Create a UAA Client to access PKS. Follow Grant PKS Access to a Client to create a client with pks.clusters.admin authority. Add the values used to create the client as pks.clientID and pks.clientSecret.

    It’s important that this is a UAA client, not a UAA user.
  3. Get the PKS CA Certificate with the following command. Add this value as pks.caCertificate.

    export OM_TARGET=<OPSMAN_URL>
    export OM_USERNAME=<OPSMAN_USERNAME>
    export OM_PASSWORD=<OPSMAN_PASSWORD>
    
    om credentials -p pivotal-container-service \
      -c .pivotal-container-service.pks_tls -f cert_pem
    
  4. Retrieve the PKS API URL. In Ops Manager, check the API Hostname (FQDN) in the form on the PKS API pane, prepend https://, and add this as pks.apiUrl. For example, if the FQDN is api.pks.example.com, set pks.apiUrl to https://api.pks.example.com.

    Your cluster must be able to reach this address, either via NAT or by enabling the “Allow outbound internet access from Kubernetes cluster vms” setting.

  5. (Optional) Enable the routing to workloads feature by setting ingressRouter.experimental.workloadRouting.enabled to true. If you enable this feature, you must also complete the steps in the Configure Port Range for Routing to Workloads section. For more information about this feature, see the Routing to Workloads topic.

Render helm chart and apply Kubernetes config

  1. Create the ingress-router-system namespace if it does not already exist (it will already exist if you are upgrading an existing installation).

    namespace=ingress-router-system
    
    kubectl create namespace ${namespace}
    
  2. Render the Pivotal Ingress Router chart and apply it in the same namespace.

    helm template helm/ingress-router \
      --namespace ${namespace} \
      -f helm/ingress-router/user-provided-values.yaml \
      > /tmp/ingress-router-installation.yaml
    
    kubectl apply -f /tmp/ingress-router-installation.yaml
    
  3. Verify that Pivotal Ingress Router is running successfully by running the following command:

    kubectl get pods -n ${namespace}
    

    All of the pods should have a status of either Running or Completed:

    NAME                                         READY   STATUS      RESTARTS   AGE
    cluster-registrar-6c645fd4dd-8gwvj           1/1     Running     0          8m41s
    ingress-router-operator-7d857f7ff7-mqw7v     1/1     Running     0          11s
    istio-ingressgateway-75cfbc9c6c-7vmpv        1/1     Running     0          8m25s
    istio-ingressgateway-75cfbc9c6c-dcmtj        1/1     Running     0          8m41s
    istio-ingressgateway-75cfbc9c6c-lh828        1/1     Running     0          8m25s
    istio-init-crd-10-c7nzc                      0/1     Completed   0          8m41s
    istio-init-crd-11-l7t6t                      0/1     Completed   0          8m41s
    istio-init-crd-12-8clfd                      0/1     Completed   0          8m40s
    istio-pilot-7b6c85897d-crqkr                 1/1     Running     0          8m41s
    master-routing-controller-865fcdc885-ftvmn   1/1     Running     0          8m41s
    

    If you have enabled routing to workloads, you should see a few other pods running as well:

    NAME                                                                  READY   STATUS      RESTARTS   AGE
    pod/1547b57f-a9af-4ef6-906f-43857202d519-57cb67787-4nhk7              1/1     Running     7          7h
    pod/b108f493-b041-4801-ad8f-e7f825f33d3e-5997577d68-5t749             1/1     Running     0          7h
    pod/cluster-registrar-6b95f7cccc-kpb9h                                1/1     Running     0          7h
    pod/install-virtualservice-crd-b108f493-b041-4801-ad8f-e7f825f49nk6   0/1     Completed   0          7h
    pod/istio-ingressgateway-778bf78579-wn97r                             1/1     Running     0          7h
    pod/istio-init-crd-10-zd9ds                                           0/1     Completed   0          7h
    pod/istio-init-crd-11-rjz5t                                           0/1     Completed   0          7h
    pod/istio-init-crd-12-9jkbm                                           0/1     Completed   0          7h
    pod/istio-pilot-77b89947fc-9fk8r                                      1/1     Running     0          7h
    pod/master-routing-controller-74dbd56799-vpl4k                        1/1     Running     0          7h
    pod/workload-routing-controller-manager-c879dd986-zf79j               1/1     Running     0          7h
    
    If your pods are not all Running or Completed, you might need to increase the size of the worker VM. We have tested this successfully on a 1 CPU/2 GB worker VM, which is the medium type on GCP.
  4. If you’d like to inspect more of what is installed, the following commands are helpful:

    kubectl get all -n ${namespace}
    kubectl get istio-io -n ${namespace}
    

Configure gateway DNS and load balancer

To configure gateway DNS and your load balancer, select your infrastructure below and follow the corresponding steps:

For GCP environments:

  1. Inspect the istio-ingressgateway service. A load balancer service is created automatically; under the TYPE column, it shows a type of LoadBalancer. This service forwards connections from an external IP to the Istio Ingress Gateway.

    kubectl get services -n ${namespace}
    
    NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                                                                                                                                                          AGE
    ingress-router-operator-metrics   ClusterIP      10.100.200.190   <none>            8383/TCP,8686/TCP                                                                                                                                                                                                30h
    istio-ingressgateway              LoadBalancer   10.100.200.122   104.154.157.119   15020:32169/TCP,8443:31400/TCP,31000:30685/TCP,31001:32178/TCP,31002:30294/TCP,31003:31672/TCP,31004:31288/TCP,31005:32546/TCP,31006:30916/TCP,31007:30049/TCP,31008:30713/TCP,31009:32704/TCP,31010:30634/TCP   34h
    istio-pilot                       ClusterIP      10.100.200.50    <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                                                                                           34h
    
  2. Record the external IP of the istio-ingressgateway load balancer as shown above.
  3. Set up any wildcard DNS records that you’d like to use for accessing clusters so that they resolve to the external IP of the load balancer, e.g. *.clusters.example.com. For example:

    external_ip=EXTERNAL_IP
    dns_zone=DNS_ZONE
    dns_name=DNS_NAME
    
    gcloud dns record-sets transaction start --zone ${dns_zone}
    gcloud dns record-sets transaction add ${external_ip} --name ${dns_name} \
      --ttl 300 --type A --zone ${dns_zone}
    gcloud dns record-sets transaction execute --zone ${dns_zone}
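
    After the record propagates, you can confirm that a name under the wildcard resolves to the gateway’s external IP; the hostname here is an example under the wildcard domain above:

    dig +short test.clusters.example.com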
    
For Azure environments:

  1. Inspect the istio-ingressgateway service. A load balancer service is created automatically; under the TYPE column, it shows a type of LoadBalancer. This service forwards connections from an external IP to the Istio Ingress Gateway.

    kubectl get services -n ${namespace}
    
    NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                                                                                                                                                          AGE
    ingress-router-operator-metrics   ClusterIP      10.100.200.190   <none>            8383/TCP,8686/TCP                                                                                                                                                                                                30h
    istio-ingressgateway              LoadBalancer   10.100.200.122   104.154.157.119   15020:32169/TCP,8443:31400/TCP,31000:30685/TCP,31001:32178/TCP,31002:30294/TCP,31003:31672/TCP,31004:31288/TCP,31005:32546/TCP,31006:30916/TCP,31007:30049/TCP,31008:30713/TCP,31009:32704/TCP,31010:30634/TCP   34h
    istio-pilot                       ClusterIP      10.100.200.50    <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                                                                                           34h
    
  2. Record the external IP of the istio-ingressgateway load balancer as shown above.
  3. Set up any wildcard DNS records that you’d like to use for accessing clusters so that they resolve to the external IP of the load balancer, e.g. *.clusters.example.com. For example:

    external_ip=EXTERNAL_IP
    dns_zone=DNS_ZONE
    dns_name=DNS_NAME # Only the subdomain e.g *.clusters
    
    az network dns record-set a add-record \
      --resource-group ${resource_group_name} \
      --ipv4-address ${external_ip} \
      --record-set-name ${dns_name} \
      --zone-name ${dns_zone}
    
For vSphere environments:

  1. Inspect the istio-ingressgateway service, which is of type NodePort.

    kubectl get services -n ${namespace}
    
    NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                                                                                                                                                                                                          AGE
    ingress-router-operator-metrics   ClusterIP      10.100.200.190   <none>            8383/TCP,8686/TCP                                                                                                                                                                                                30h
    istio-ingressgateway              NodePort       10.100.200.122   <none>            15020:32169/TCP,8443:31400/TCP,31000:30685/TCP,31001:31001/TCP,31002:31002/TCP,31003:31003/TCP,31004:31004/TCP,31005:31005/TCP,31006:31006/TCP,31007:31007/TCP,31008:31008/TCP,31009:31009/TCP,31010:31010/TCP   34h
    istio-pilot                       ClusterIP      10.100.200.50    <none>            15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                                                                                           34h
    
  2. Ensure there is a load balancer to forward connections from an external IP to the Istio Ingress Gateway. You will need to configure your TCP load balancer to accept connections on port 8443 and forward them to the IP address of the platform-services worker at port 31400.
  3. Configure an external IP on your TCP load balancer.
  4. Set up any wildcard DNS records that you’d like to use for accessing clusters so that they resolve to the external IP of the load balancer, e.g. *.clusters.example.com.

Configure Port Range for Routing to Workloads

The procedure in this section is only required if you enabled the routing to workloads feature when you completed the Configure User Provided Values section.

To use workload routing, a port range, or set of port ranges, must be configured. The WorkloadRoutingControllers choose ports from the configured port range(s) when a VirtualService is created. These ports will be opened on the IngressGateway pod, allowing connections originating from outside the network to reach the workload.

For example, given the port range 31000-31010, when a VirtualService is created for a workload, clients from outside the network can make a request to the Load Balancer IP or System Cluster Kubernetes Worker IP(s) on a port in the range to connect to the workload. If this is the first VirtualService, that workload will be reachable at <Load Balancer IP or System Cluster Kubernetes Worker IP>:31000 (because 31000 is the first port in the range).

If you have a vSphere environment without NSX-T, Pivotal Ingress Router is deployed with NodePort and the maximum allowable range is limited to the NodePort range (default is 30000-32767). The configured port range must not conflict with any existing NodePorts. Additionally, you must configure your TCP load balancer to accept connections on these ports and forward them to the IP addresses of the platform-services workers on the same ports.

To create a port range, create the following custom resource and apply it with kubectl -n ingress-router-system apply -f <FILE_NAME>:

---
apiVersion: ingressrouter.pivotal.io/v1
kind: PortAllocation
metadata:
  name: workload-allocation
  namespace: ingress-router-system
spec:
  portRanges:
  - start: 31000
    end: 31010

This creates a port range of 31000 to 31010 for workloads.
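
You can confirm the resource was created with kubectl. The plural resource name used here is an assumption based on the PortAllocation kind above:

    kubectl get portallocations -n ingress-router-system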

Validate that Kubernetes APIs are Routable

To validate that Kubernetes APIs for new clusters are routable, follow the steps in Creating and Accessing Clusters. After creating a cluster, you should be able to run kubectl commands against it.
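
As a minimal sketch, assuming a wildcard domain of *.clusters.example.com configured as described above, you could create a new cluster under that domain and confirm that kubectl reaches its API through the Ingress Router; the cluster name and plan name here are examples:

    pks create-cluster my-cluster -e my-cluster.clusters.example.com -p small --wait
    pks get-credentials my-cluster
    kubectl get nodes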