Configuring Load Balancing for PAS

This topic describes how to configure load balancing for Pivotal Application Service (PAS) by entering the names of your load balancers in the Resource Config pane of the PAS tile. This procedure varies by IaaS and installation method. See the section below that corresponds to your use case.

AWS

To configure the Gorouter or HAProxy to use AWS Elastic Load Balancers (ELBs), do the following:

  1. Record the names of your ELBs. If you followed the procedures in the Installing PCF on AWS Manually topic, you created the following:

    • pcf-ssh-elb: An SSH load balancer. This is a Classic Load Balancer.
    • pcf-tcp-elb: A TCP load balancer. This is a Classic Load Balancer.
    • pcf-web-elb: A web load balancer. This is an Application Load Balancer.
    • pcf-web-elb-target-group: A target group for the web load balancer.
  2. In the PAS tile, click Resource Config.

  3. Enter the name of your SSH load balancer depending on which release you are using.

    • PAS: In the Load Balancers field of the Diego Brain row, enter the name of your SSH load balancer: pcf-ssh-elb.
    • Small Footprint Runtime: In the Load Balancers field of the Control row, enter the name of your SSH load balancer: pcf-ssh-elb.
  4. In the Load Balancers field of the Router row, enter the value determined by the type of load balancer you are using:

    • Application Load Balancer: Enter the name of the target group of your web load balancer, prefixed with alb:. For example: alb:pcf-web-elb-target-group. The prefix indicates to Ops Manager that you entered the name of a target group, and is required for AWS Application Load Balancers and Network Load Balancers.
    • Classic Load Balancer: Enter the name of the load balancer: pcf-web-elb.

      Note: If you are using HAProxy in your deployment, then put the name of the load balancers in the Load Balancers field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  5. If you enabled TCP routing, enter the name of your TCP load balancer, pcf-tcp-elb, in the Load Balancers field of the TCP Router row.
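
A quick way to double-check the values before saving is to assemble them in a shell, applying the alb: prefix only to the target group of the Application Load Balancer. This is a sketch using the pcf-* names created in the Installing PCF on AWS Manually topic:

```shell
# Names created in the Installing PCF on AWS Manually topic
ssh_elb="pcf-ssh-elb"                 # Classic Load Balancer: entered bare
tcp_elb="pcf-tcp-elb"                 # Classic Load Balancer: entered bare
web_tg="pcf-web-elb-target-group"     # ALB target group: needs the alb: prefix

diego_brain_value="$ssh_elb"          # Diego Brain row (or Control row for SFR)
tcp_router_value="$tcp_elb"           # TCP Router row
router_value="alb:${web_tg}"          # Router row; alb: marks this as a target group

echo "$router_value"   # alb:pcf-web-elb-target-group
```

Only the target group name carries a prefix; Classic Load Balancer names are entered exactly as they appear in AWS.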

AWS Terraform

To configure the Gorouter or HAProxy to use AWS Network Load Balancers, do the following:

  1. In the PAS tile, click Resource Config.

  2. Enter the name of your SSH load balancer depending on which release you are using.

    • Pivotal Application Service (PAS): In the Load Balancers field of the Diego Brain row, enter the values of ssh_target_groups from the Terraform output, each prefixed with alb:. For example: alb:pcf-ssh-tg.
    • Small Footprint Runtime: In the Load Balancers field of the Control row, enter the values of ssh_target_groups from the Terraform output, each prefixed with alb:. For example: alb:pcf-ssh-tg.
  3. In the Load Balancers field of the Router row, enter all values of web_target_groups from the Terraform output, each prefixed with alb:. For example: alb:pcf-web-tg-80,alb:pcf-web-tg-443.

    Note: If you are using HAProxy in your deployment, then put the name of the load balancers in the Load Balancers field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  4. In the Load Balancers field of the TCP Router row, enter all values of tcp_target_groups from the Terraform output, each prefixed with alb:. For example: alb:default-tg-1024,alb:default-tg-1024.

  5. Click Save.
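
Because the Terraform output can contain several target groups per field, it can help to script the alb: prefixing. The sketch below uses hard-coded example values in place of a real terraform output call:

```shell
# Hypothetical values of web_target_groups from the Terraform output
# (in a real environment: terraform output web_target_groups)
web_target_groups="pcf-web-tg-80 pcf-web-tg-443"

# Prefix each name with alb: and join with commas, no spaces
router_value=""
for tg in $web_target_groups; do
  router_value="${router_value}alb:${tg},"
done
router_value="${router_value%,}"   # strip the trailing comma

echo "$router_value"   # alb:pcf-web-tg-80,alb:pcf-web-tg-443
```

The same loop works for ssh_target_groups and tcp_target_groups; every entry gets the alb: prefix because the Terraform templates create target groups rather than Classic Load Balancers.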

Azure

To configure the Gorouter to use Azure Load Balancers, do the following:

  1. Select Resource Config.

  2. Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.

  3. Retrieve the name(s) of your external ALB by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource.

    Note: The Azure portal sometimes displays the names of resources with incorrect capitalization. Always use the Azure CLI command az network lb list to retrieve the correctly capitalized name of a resource.

  4. Locate the Router job in the Resource Config pane and enter the name of your external ALB in the field under Load Balancers.

  5. Retrieve the name of your Diego SSH Load Balancer by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource.

  6. Locate the Diego Brain job in the Resource Config pane and enter the name of the Diego SSH Load Balancer in the field under Load Balancers.

  7. Ensure that the Internet Connected checkboxes are deselected for all jobs.

  8. Scale the number of instances as appropriate for your deployment.

    Note: For a high availability deployment of PCF on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three (3) instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Reference Architecture for Pivotal Cloud Foundry on Azure.

Azure Terraform

To configure the Gorouter to use Azure Load Balancers, do the following:

  1. Select Resource Config.

    1. Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.
  2. Enter the value of web_lb_name from your Terraform output in the Resource Config pane under Load Balancers for the Router job.

  3. Enter the value of diego_ssh_lb_name from your Terraform output in the Resource Config pane under Load Balancers for the Diego Brain job.

  4. Ensure that the Internet Connected checkboxes are deselected for all jobs.

  5. Scale the number of instances as appropriate for your deployment.

    Note: For a high availability deployment of PCF on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three (3) instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Reference Architecture for Pivotal Cloud Foundry on Azure.
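
In shell terms, the mapping from Terraform outputs to Resource Config fields looks like this. The names below are hypothetical stand-ins for real terraform output values; note that, unlike AWS, Azure load balancer names are entered without a prefix:

```shell
# Hypothetical Terraform outputs
# (in a real environment: terraform output web_lb_name, etc.)
web_lb_name="pcf-web-lb"
diego_ssh_lb_name="pcf-ssh-lb"

# Azure load balancer names are entered bare, with no alb:/tcp: prefix
router_value="$web_lb_name"            # Router row
diego_brain_value="$diego_ssh_lb_name" # Diego Brain row

echo "Router row:      $router_value"
echo "Diego Brain row: $diego_brain_value"
```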

GCP

To configure the Gorouter to use GCP load balancers, do the following:

  1. Navigate to the GCP Console and click Load balancing.


    You should see the SSH load balancer, the HTTP(S) load balancer, the TCP WebSockets load balancer, and the TCP router that you created in the Preparing to Deploy PCF on GCP topic.

  2. Record the names of your SSH load balancer and your TCP WebSockets load balancer: MY-PCF-ssh-proxy and MY-PCF-wss-logs.

  3. Click your HTTP(S) load balancer, MY-PCF-global-pcf.

  4. Under Backend services, record the name of the backend service of the HTTP(S) load balancer, MY-PCF-http-lb-backend.

  5. In the PAS tile, click Resource Config.


  6. Under the LOAD BALANCERS column of the Router row, enter a comma-delimited list consisting of the name of your TCP WebSockets load balancer and the name of your HTTP(S) load balancer backend with the protocol prepended. For example, tcp:MY-PCF-wss-logs,http:MY-PCF-http-lb-backend.

    Note: Do not add a space after the commas between entries in the LOAD BALANCERS field, or the deployment fails.

    Note: If you are using HAProxy in your deployment, then enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  7. If you have enabled TCP routing in the Networking pane and set up the TCP Load Balancer in GCP, add the name of your TCP load balancer, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-tcp-router.

  8. Enter the name of your SSH load balancer depending on which release you are using.

    • PAS: Under the LOAD BALANCERS column of the Diego Brain row, enter the name of your SSH load balancer prepended with tcp:. For example, tcp:MY-PCF-ssh-proxy.
    • Small Footprint Runtime: Under the LOAD BALANCERS column of the Control row, enter the name of your SSH load balancer prepended with tcp:.
  9. Verify that the Internet Connected checkbox for every job is unchecked. When preparing your GCP environment, you provisioned a Network Address Translation (NAT) box to provide Internet connectivity to your VMs, so the jobs can reach the Internet without public IP addresses.


  10. Click Save.
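
The GCP entries combine a protocol prefix with the recorded names, and the Router row takes a comma-delimited list with no spaces. A sketch using the example names from the steps above:

```shell
# Names recorded from the GCP console (examples from the steps above)
wss_lb="MY-PCF-wss-logs"               # TCP WebSockets load balancer
http_backend="MY-PCF-http-lb-backend"  # backend service of the HTTP(S) load balancer
ssh_lb="MY-PCF-ssh-proxy"              # SSH load balancer

# Router row: comma-delimited, protocol-prefixed, no spaces after the comma
router_value="tcp:${wss_lb},http:${http_backend}"
# Diego Brain row (or Control row for Small Footprint Runtime)
diego_brain_value="tcp:${ssh_lb}"

echo "$router_value"   # tcp:MY-PCF-wss-logs,http:MY-PCF-http-lb-backend
```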

GCP Terraform

To configure the Gorouter to use GCP load balancers, do the following:

  1. Click Resource Config.

  2. Under the LOAD BALANCERS column of the Router row, enter a comma-delimited list consisting of the values of ws_router_pool and http_lb_backend_name from your Terraform output. For example, tcp:pcf-cf-ws,http:pcf-httpslb. These are the names of the TCP WebSockets and HTTP(S) load balancers for your deployment.

    Note: Do not add a space after the commas between entries in the LOAD BALANCERS field, or the deployment fails.

    Note: If you are using HAProxy in your deployment, then enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  3. If you have enabled TCP routing in the Networking pane, add the value of tcp_router_pool from your Terraform output, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-cf-tcp.

  4. Enter the name of your SSH load balancer depending on which release you are using.

    • PAS: Under the LOAD BALANCERS column of the Diego Brain row, enter the value of ssh_router_pool from your Terraform output, prepended with tcp:. For example, tcp:MY-PCF-ssh-proxy.
    • Small Footprint Runtime: Under the LOAD BALANCERS column of the Control row, enter the value of ssh_router_pool from your Terraform output, prepended with tcp:.
  5. Verify that the Internet Connected checkbox for every job is checked. The Terraform templates do not provision a Network Address Translation (NAT) box, so the VMs are instead assigned ephemeral public IP addresses that allow the jobs to reach the Internet.

    Note: If you want to provision a Network Address Translation (NAT) box to provide Internet connectivity to your VMs instead of providing them with public IP addresses, deselect the Internet Connected checkboxes. For more information about using NAT in GCP, see the GCP documentation.

  6. Click Save.
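
The same protocol prefixing applies to the Terraform outputs. The sketch below uses hypothetical hard-coded values standing in for real terraform output calls:

```shell
# Hypothetical Terraform output values
# (in a real environment: terraform output ws_router_pool, etc.)
ws_router_pool="pcf-cf-ws"
http_lb_backend_name="pcf-httpslb"
ssh_router_pool="pcf-cf-ssh"

# Router row: TCP WebSockets pool and HTTP(S) backend, comma-delimited, no spaces
router_value="tcp:${ws_router_pool},http:${http_lb_backend_name}"
# Diego Brain row (or Control row for Small Footprint Runtime)
diego_brain_value="tcp:${ssh_router_pool}"

echo "$router_value"   # tcp:pcf-cf-ws,http:pcf-httpslb
```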

OpenStack

Unless you are using your own load balancer, you must enable traffic to flow into the OpenStack private subnet by giving each HAProxy instance a public IP address as a floating IP, as follows:

  1. Click Resource Config.


  2. Enter one or more IP addresses in the Floating IPs column for each HAProxy.

  3. (Optional) If you have enabled the TCP routing feature, enter one or more IP addresses in the Floating IPs column for each TCP Router.

  4. Click Save.
