Configuring an Azure Load Balancer for the TKGI API

Note: As of v1.8, Enterprise PKS has been renamed to VMware Tanzu Kubernetes Grid Integrated Edition. Some screenshots in this documentation do not yet reflect the change.

This topic describes how to create a load balancer for the VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) API using Azure.

To use a different load balancer, use the procedures in this topic as a guide.

Overview

To configure your TKGI API load balancer on Azure, complete the following procedures:

  1. Identify Your TKGI API VM
  2. Configure a Load Balancer Backend Pool
  3. Create Health Probe
  4. Create Load Balancing Rule
  5. Create Inbound Security Rule
  6. Verify Hostname Resolution

Note: Creating a TKGI API load balancer is an optional step when installing Tanzu Kubernetes Grid Integrated Edition on Azure. VMware recommends completing the steps below during Tanzu Kubernetes Grid Integrated Edition installation to simplify upgrading Tanzu Kubernetes Grid Integrated Edition to future versions.

Identify Your TKGI API VM

Before configuring your Azure Backend Pool, you must know which of your VMs is the TKGI API VM.

To find the name of your TKGI API VM, complete either of the following procedures:

  • Use the Azure Dashboard:

    1. Open the Azure Dashboard.
    2. In the Azure Dashboard, locate the VM tagged with instance_group:pivotal-container-service. This is your TKGI API VM.
    3. Note the machine name and IP address for the listed TKGI API VM.
  • Use BOSH:

    1. On the command line, run bosh vms. For an example, see the sketch after this list.
    2. Locate the VM tagged with instance_group:pivotal-container-service. This is your TKGI API VM.
    3. Note the machine name and IP address for the listed TKGI API VM.
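
For example, the following command-line sketch uses the BOSH CLI to locate the TKGI API VM. It assumes the BOSH CLI is already targeted at and authenticated with your BOSH Director; the instance ID and IP address in the sample output are illustrative only.

    # List BOSH-managed VMs and filter for the TKGI API instance group.
    bosh vms | grep pivotal-container-service

    # Example output (illustrative values only):
    # pivotal-container-service/0a1b2c3d-...   running   az-1   10.0.8.4   ...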

Configure a Load Balancer Backend Pool

An Azure backend pool is a logical grouping of instances that receive similar traffic. On Azure, you must configure a load balancer backend pool to route your TKGI API traffic to your TKGI API VM.

Note: You must reconfigure your TKGI API load balancer backend pool whenever you modify your TKGI API VM.

  1. To open the backend pool configuration page for your TKGI API load balancer, do the following:
    1. From the Azure Dashboard, select All services from the left-hand menu.
    2. Select All Resources and open the Load Balancers service.
    3. In the Settings menu, select Backend Pools.
    4. On the Backend Pools page, select the backend pool for your TKGI API load balancer.
  2. In the Virtual machines section, complete the following for the VM you identified while performing the steps in Identify Your TKGI API VM, above:
    1. Virtual machine: Select the VM ID for your TKGI API VM.
    2. IP address: Select the IP address corresponding to the VM specified in the Virtual machine column.
  3. Click OK.

For information about Azure backend pools, see Backend pools in the Azure documentation. For more information about configuring your backend pool, see Remove or add VMs from the backend pool in the Azure documentation.
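
If you prefer to script this step, the following Azure CLI sketch attaches the NIC of your TKGI API VM to the load balancer backend pool. The resource group, NIC, IP configuration, load balancer, and backend pool names are placeholders; substitute the values from your environment.

    # Add the TKGI API VM's NIC IP configuration to the load balancer backend pool.
    # All resource names below are placeholders for your Azure environment.
    az network nic ip-config address-pool add \
      --resource-group MY-RESOURCE-GROUP \
      --nic-name MY-TKGI-API-VM-NIC \
      --ip-config-name ipconfig1 \
      --lb-name MY-TKGI-API-LB \
      --address-pool MY-TKGI-API-BACKEND-POOL

Because the TKGI API VM can be recreated when you modify or upgrade Tanzu Kubernetes Grid Integrated Edition, rerun this command, or repeat the portal steps above, whenever the VM is replaced.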

Create Health Probe

  1. From the Azure Dashboard, open the Load Balancers service.
  2. In the Settings menu, select Health probes.
  3. On the Health probes page, click Add.
  4. On the Add health probe page, complete the form as follows:
    1. Name: Name the health probe.
    2. Protocol: Select TCP.
    3. Port: Enter 9021.
    4. Interval: Enter the interval of time to wait between probe attempts.
    5. Unhealthy Threshold: Enter a number of consecutive probe failures that must occur before a VM is considered unhealthy.
  5. Click OK.
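
As an alternative to the portal steps above, you can create an equivalent health probe with the Azure CLI. In the sketch below, the resource group, load balancer, and probe names are placeholders, and the interval and threshold values are examples rather than required settings.

    # Create a TCP health probe on port 9021 for the TKGI API load balancer.
    # Resource names are placeholders; interval and threshold are example values.
    az network lb probe create \
      --resource-group MY-RESOURCE-GROUP \
      --lb-name MY-TKGI-API-LB \
      --name tkgi-api-probe \
      --protocol Tcp \
      --port 9021 \
      --interval 5 \
      --threshold 2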

Create Load Balancing Rule

  1. From the Azure Dashboard, open the Load Balancers service.
  2. In the Settings menu, select Load Balancing Rules.
  3. On the Load balancing rules page, click Add.
  4. On the Add load balancing rules page, complete the form as follows:
    1. Name: Name the load balancing rule.
    2. IP Version: Select IPv4.
    3. Frontend IP address: Select the appropriate IP address. Clients communicate with your load balancer on the selected IP address, and service traffic is routed to the target VM by this load balancing rule.
    4. Protocol: Select TCP.
    5. Port: Enter 9021.
    6. Backend port: Enter 9021.
    7. Health Probe: Select the health probe that you created in Create Health Probe.
    8. Session persistence: Select None.
  5. Click OK.
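
The same load balancing rule can be scripted with the Azure CLI, as in the sketch below. The frontend IP configuration, backend pool, and probe names are placeholders and must match the resources you created earlier; flag names can vary slightly between Azure CLI versions.

    # Forward TCP 9021 on the load balancer frontend to port 9021 on the backend pool,
    # using the health probe created earlier. --load-distribution Default corresponds
    # to Session persistence: None.
    az network lb rule create \
      --resource-group MY-RESOURCE-GROUP \
      --lb-name MY-TKGI-API-LB \
      --name tkgi-api-rule \
      --protocol Tcp \
      --frontend-port 9021 \
      --backend-port 9021 \
      --frontend-ip-name MY-TKGI-API-FRONTEND-IP \
      --backend-pool-name MY-TKGI-API-BACKEND-POOL \
      --probe-name tkgi-api-probe \
      --load-distribution Default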

Create Inbound Security Rule

  1. From the Azure Dashboard, open the Security Groups service.
  2. Click the name of the Security Group attached to the subnet where the TKGI API is deployed. If you deployed Tanzu Kubernetes Grid Integrated Edition using Terraform, the name of the Security Group ends with the suffix bosh-deployed-vms-security-group.
  3. In the Settings menu for your security group, select Inbound security rules.
  4. Click Add.
  5. On the Add inbound security rule page, click Advanced and complete the form as follows:
    1. Name: Name the inbound security rule.
    2. Source: Select Any.
    3. Source port range: Enter *.
    4. Destination: Select Any.
    5. Destination port range: Enter 9021,8443.
  6. Click OK.
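
If you script this step, the following Azure CLI sketch creates an equivalent inbound rule on your network security group. The security group name and rule priority are placeholders, and the protocol value '*' mirrors the Any selection in the portal form.

    # Allow inbound traffic to the TKGI API (9021) and UAA (8443) ports.
    # The NSG name and priority are placeholders for your environment.
    az network nsg rule create \
      --resource-group MY-RESOURCE-GROUP \
      --nsg-name MY-BOSH-DEPLOYED-VMS-SECURITY-GROUP \
      --name tkgi-api-inbound \
      --priority 1000 \
      --direction Inbound \
      --access Allow \
      --protocol '*' \
      --source-address-prefixes '*' \
      --source-port-ranges '*' \
      --destination-address-prefixes '*' \
      --destination-port-ranges 9021 8443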

Verify Hostname Resolution

  1. In a browser, log in to Ops Manager.
  2. Click the Tanzu Kubernetes Grid Integrated Edition tile.
  3. Select TKGI API.
  4. Record the API Hostname (FQDN).
  5. Verify that the API hostname resolves to the IP address of the load balancer.
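
For example, you can check resolution from the command line. The hostname below is a placeholder; use the API Hostname (FQDN) that you recorded in the previous step.

    # Confirm that the TKGI API hostname resolves to the load balancer's IP address.
    nslookup api.tkgi.example.com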

Next Step

After you have configured an Azure load balancer for the TKGI API, complete the Tanzu Kubernetes Grid Integrated Edition installation by returning to the Install the TKGI and Kubernetes CLIs step of Installing Tanzu Kubernetes Grid Integrated Edition on Azure.


Please send any feedback you have to pks-feedback@pivotal.io.