Installing Tanzu Application Service for Kubernetes

This topic describes how to install Tanzu Application Service for Kubernetes.

Warning: VMware recommends Tanzu Application Service for Kubernetes v0.1.0 only for evaluation environments due to its current feature, scale, and security limitations.

Note: Tanzu Application Service for Kubernetes v0.1.0 is certified only for VMware Enterprise PKS v1.6 on vSphere with Flannel networking. It may work with other Kubernetes deployments.

Before proceeding, review the Configuring Installation Values topic to ensure that you have configured all of the required or recommended installation resources.

Installing Tanzu Application Service for Kubernetes

Installing Tanzu Application Service for Kubernetes takes approximately 15 minutes, depending on cluster resources and bandwidth.

  1. Run kubectl cluster-info and inspect the output to ensure that your Kubernetes client configuration targets the intended cluster for installation.
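
    For example, you can confirm which cluster your kubectl configuration currently targets; the context name shown here is illustrative only:

    $ kubectl config current-context
    tas-evaluation-cluster
    $ kubectl cluster-info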

  2. Change into the tanzu-application-service directory in your terminal.

  3. Run the installation script, configured to use the deployment values you generated previously:

    $ ./bin/install-tas.sh ../configuration-values
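
    While the script runs, you can watch the system components start from a second terminal; the cf-system namespace named below is an assumption about where the installation places its system components:

    $ kubectl get pods --all-namespaces
    $ kubectl -n cf-system get pods --watch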

Post-Installation Networking Configuration

This section describes how to configure DNS entries and load balancers for the Tanzu Application Service for Kubernetes ingress gateway. Follow the configuration case below that matches your installation.

DNS Configuration with No Load Balancer for Ingress Gateway

By default, Tanzu Application Service for Kubernetes is configured not to create a Kubernetes LoadBalancer service for the ingress gateway. If you have deployed Tanzu Application Service for Kubernetes in this configuration, and do not have an external load balancer to use for ingress to the installation, set up your DNS records to establish ingress connectivity directly to the worker nodes:

  1. Use the kubectl CLI to retrieve the list of worker nodes with their external IP addresses:

    $ kubectl get nodes --output='wide'

    For example:

    $ kubectl get nodes --output='wide'
    NAME                                   STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP
    5e329c31-f1d7-4548-936b-3a58d4b166d3   Ready    <none>   5h49m   v1.15.5   10.85.87.133   10.85.87.133
    a6ad3f07-787c-4d90-b8e1-032be34e9d7f   Ready    <none>   5h43m   v1.15.5   10.85.87.134   10.85.87.134
    a8eb78a2-e3b4-4d8a-8c32-67bf0e13c0bf   Ready    <none>   5h43m   v1.15.5   10.85.87.135   10.85.87.135
    af7dc8da-a7b0-4cf2-a940-c9248168e609   Ready    <none>   5h43m   v1.15.5   10.85.87.136   10.85.87.136
    cc6ef11f-e253-4553-9cb0-bebc7d958f64   Ready    <none>   5h42m   v1.15.5   10.85.87.137   10.85.87.137
    

  2. In your DNS zone, create a wildcard A record for the system domain, *.PLACEHOLDER-SYSTEM-DOMAIN, resolving to the set of external IP addresses for the worker nodes. Make sure you include the *. wildcard prefix so that all subdomains of the system domain also resolve to these IP addresses.
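
    Once the record has propagated, you can spot-check it from your workstation with dig; the addresses returned should match the worker node external IP addresses above:

    $ dig +short api.PLACEHOLDER-SYSTEM-DOMAIN
    10.85.87.133
    10.85.87.134
    10.85.87.135
    10.85.87.136
    10.85.87.137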

DNS Configuration with an External Load Balancer for Ingress Gateway

If you have deployed Tanzu Application Service for Kubernetes without a Kubernetes LoadBalancer service for the ingress gateway and you have an external load balancer available for ingress to the installation, configure that load balancer to forward HTTP and HTTPS traffic to the Kubernetes worker nodes:

  1. Use the kubectl CLI to retrieve the list of worker nodes with their internal IP addresses:

    $ kubectl get nodes --output='wide'

    For example:

    $ kubectl get nodes --output='wide'
    NAME                                   STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP
    5e329c31-f1d7-4548-936b-3a58d4b166d3   Ready    <none>   5h49m   v1.15.5   10.85.87.133   10.85.87.133
    a6ad3f07-787c-4d90-b8e1-032be34e9d7f   Ready    <none>   5h43m   v1.15.5   10.85.87.134   10.85.87.134
    a8eb78a2-e3b4-4d8a-8c32-67bf0e13c0bf   Ready    <none>   5h43m   v1.15.5   10.85.87.135   10.85.87.135
    af7dc8da-a7b0-4cf2-a940-c9248168e609   Ready    <none>   5h43m   v1.15.5   10.85.87.136   10.85.87.136
    cc6ef11f-e253-4553-9cb0-bebc7d958f64   Ready    <none>   5h42m   v1.15.5   10.85.87.137   10.85.87.137
    

  2. Configure your external load balancer to forward traffic on TCP ports 80 and 443 to the set of internal IP addresses for the Kubernetes worker nodes.

  3. In your DNS zone, create a wildcard A record for the system domain, *.PLACEHOLDER-SYSTEM-DOMAIN, resolving to the external IP address of the load balancer. Make sure you include the *. wildcard prefix so that all subdomains of the system domain also resolve to this IP address.
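
    After the DNS record propagates, you can verify end-to-end connectivity through the load balancer; this check assumes the Cloud Foundry API answers on the api subdomain, and uses -k because the installation may present a self-signed certificate:

    $ curl -k https://api.PLACEHOLDER-SYSTEM-DOMAIN/v2/info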

DNS Configuration with a Kubernetes Load Balancer for Ingress Gateway

If you have instead configured Tanzu Application Service for Kubernetes to use a Kubernetes LoadBalancer service for the ingress gateway, set up DNS for your system domain to resolve to the external IP address of that load balancer:

  1. Use kubectl to retrieve the value of the external IP assigned to the Istio ingress gateway service:

    $ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

    For example:

    $ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    10.193.105.162
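
    If the command returns an empty value, the load balancer IP address may still be pending; you can inspect the service directly and re-check once an external IP appears:

    $ kubectl -n istio-system get service istio-ingressgateway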
    

  2. In your DNS zone, create a wildcard A record for the system domain, *.PLACEHOLDER-SYSTEM-DOMAIN, resolving to this external IP address. Make sure you include the *. wildcard prefix so that all subdomains of the system domain also resolve to this IP address.

Post-Installation System Configuration

To enable buildpack-based apps to run on Tanzu Application Service for Kubernetes, complete the following configuration steps after the initial installation.

  1. Use the CF CLI to target the installation at the api subdomain of the system domain:

    $ cf api api.PLACEHOLDER-SYSTEM-DOMAIN --skip-ssl-validation

  2. In your terminal, change into the directory containing the tanzu-application-service and configuration-values directories.

  3. Set the CF_ADMIN_PASSWORD environment variable to the CF administrative password, stored in the cf_admin_password key in the configuration-values/deployment-values.yml file:

    $ CF_ADMIN_PASSWORD="$(bosh interpolate configuration-values/deployment-values.yml --path /cf_admin_password)"
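
    If the bosh CLI is not available on your workstation, a plain-shell alternative such as the following also works, assuming cf_admin_password is a top-level, unquoted scalar key in the file:

    $ CF_ADMIN_PASSWORD="$(awk '/^cf_admin_password:/ {print $2}' configuration-values/deployment-values.yml)"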

  4. Log into the installation as the admin user:

    $ cf auth admin "$CF_ADMIN_PASSWORD"

  5. Enable the diego_docker feature flag so that buildpack-based apps can run on the Kubernetes cluster:

    $ cf enable-feature-flag diego_docker
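
    To confirm the change, display the flag; the exact output format depends on your cf CLI version:

    $ cf feature-flag diego_docker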

Post-Installation Validation

To validate the installation, push a test application and verify that it is reachable through the system domain.

Note: The route for the test application defaults to a subdomain of the system domain.

  1. Ensure that you are still logged into the Tanzu Application Service for Kubernetes installation as the admin user.

  2. Create and target an organization and space for the verification application:

    $ cf create-org test-org
    $ cf create-space -o test-org test-space
    $ cf target -o test-org -s test-space
    

  3. Clone the Cloud Foundry test application from GitHub to your workstation and change into the resulting test-app directory.
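
    For example, assuming the cloudfoundry-samples/test-app repository on GitHub is the test application your documentation references (substitute the repository you actually use if it differs):

    $ git clone https://github.com/cloudfoundry-samples/test-app.git
    $ cd test-app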

  4. Push the test app to the installation:

    $ cf push test-app --hostname test-app

  5. While the cf push command is running, open another terminal pane and monitor the build logs:

    $ cf logs test-app

  6. After the cf push command succeeds, make a request to the app:

    $ curl test-app.PLACEHOLDER-SYSTEM-DOMAIN