Configuring Load Balancing for PAS

Warning: Pivotal Cloud Foundry (PCF) v2.5 is no longer supported because it has reached the End of General Support (EOGS) phase as defined by the Support Lifecycle Policy. To stay up to date with the latest software and security updates, upgrade to a supported version.

This topic describes how to configure load balancing for Pivotal Application Service (PAS) by entering the names of your load balancers in the Resource Config pane of the PAS tile. This procedure varies by IaaS and installation method. See the section below that corresponds to your use case.

AWS

AWS Paved Manually

To configure the Gorouter or HAProxy to use AWS Elastic Load Balancers (ELBs):

  1. Record the names of your ELBs. You can confirm these names with the AWS CLI, as shown in the sketch after these steps. If you followed the procedures in the Installing PCF on AWS Manually topic, you created the following:

    • pcf-ssh-elb: An SSH load balancer. This is a Classic Load Balancer.
    • pcf-tcp-elb: A TCP load balancer. This is a Classic Load Balancer.
    • pcf-web-elb: A web load balancer. This is an Application Load Balancer.
    • pcf-web-elb-target-group: A target group for the web load balancer.
  2. In the tile, select Resource Config.

  3. Enter the name of your SSH load balancer, depending on which release you are using.

    • PAS: In the Load Balancers field of the Diego Brain row, enter the name of your SSH load balancer: pcf-ssh-elb.
    • Small Footprint PAS: In the Load Balancers field of the Control row, enter the name of your SSH load balancer: pcf-ssh-elb.
  4. In the Load Balancers field of the Router row, enter the value determined by the type of load balancer you are using:

    • Application Load Balancer: Enter the name of the target group of your web load balancer, prefixed with alb:. For example: alb:pcf-web-elb-target-group. The prefix indicates to Ops Manager that you entered the name of a target group, and is required for AWS Application Load Balancers and Network Load Balancers.
    • Classic Load Balancer: Enter the name of the load balancer: pcf-web-elb.

      Note: If you are using HAProxy in your deployment, put the name of the load balancers in the Load Balancers field of the HAProxy row instead of the Router row. For a high-availability configuration, scale up the HAProxy job to more than one instance.

  5. If you enabled TCP routing, enter the name of your TCP load balancer, pcf-tcp-elb, in the Load Balancers field of the TCP Router row.
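
To confirm the exact names before you enter them, you can list them with the AWS CLI. This is a minimal sketch, assuming the AWS CLI v2 is installed and configured for the account and region that host your deployment:

    # List Classic Load Balancers (pcf-ssh-elb and pcf-tcp-elb are Classic LBs)
    aws elb describe-load-balancers \
      --query 'LoadBalancerDescriptions[].LoadBalancerName' --output table

    # List target groups attached to Application Load Balancers
    # (pcf-web-elb-target-group is the target group of pcf-web-elb)
    aws elbv2 describe-target-groups \
      --query 'TargetGroups[].TargetGroupName' --output table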

AWS with Terraform

To set up load balancing for PAS on AWS using Terraform:

  1. Before you install PAS on AWS, you must configure the security groups for the web, SSH, and TCP load balancers. To do this using the Ops Manager CLI (om), run:

    om -k -t "OPS-MANAGER-FQDN" -u "USERNAME" -p "PASSWORD" curl --path /api/v0/staged/vm_extensions/web-lb-security-group -x PUT -d '{"name": "web-lb-security-group", "cloud_properties": { "security_groups": ["web_lb_security_groups"] }}'
    
    om -k -t "OPS-MANAGER-FQDN" -u "USERNAME" -p "PASSWORD" curl --path /api/v0/staged/vm_extensions/ssh-lb-security-group -x PUT -d '{"name": "ssh-lb-security-group", "cloud_properties": { "security_groups": ["ssh_lb_security_groups"] }}'
    
    om -k -t "OPS-MANAGER-FQDN" -u "USERNAME" -p "PASSWORD" curl --path /api/v0/staged/vm_extensions/tcp-lb-security-group -x PUT -d '{"name": "tcp-lb-security-group", "cloud_properties": { "security_groups": ["tcp_lb_security_groups"] }}'
    

    Where:

    • OPS-MANAGER-FQDN is the URL at which you access your Ops Manager instance. This corresponds to ops_manager_dns in the Terraform output.
    • USERNAME is the user name you entered when configuring internal authentication.
    • PASSWORD is the password you entered when configuring internal authentication.

      Note: If you did not configure internal authentication, you must modify this command to use a client ID and secret instead of user name and password. For more information, see Authentication in the Om repository on GitHub.


      For more information about the Ops Manager CLI, see the Om repository on GitHub.
  2. Create a file named vm_extensions_config.yml with the following content, depending on which release you are using:

    • Pivotal Application Service (PAS):

      ---
      product-name: cf
      resource-config:
        diego_brain:
          elb_names:
          - alb:SSH_TARGET_GROUP_1
          - alb:SSH_TARGET_GROUP_2
          additional_vm_extensions:
          - ssh-lb-security-group
        router:
          elb_names:
          - alb:WEB_TARGET_GROUPS_1
          - alb:WEB_TARGET_GROUPS_2
          additional_vm_extensions:
          - web-lb-security-group
        tcp_router:
          elb_names:
          - alb:TCP_TARGET_GROUP_1
          - alb:TCP_TARGET_GROUP_2
          additional_vm_extensions:
          - tcp-lb-security-group
      
    • Small Footprint PAS:

      ---
      product-name: cf
      resource-config:
        control:
          elb_names:
          - alb:SSH_TARGET_GROUP_1
          - alb:SSH_TARGET_GROUP_2
          additional_vm_extensions:
          - ssh-lb-security-group
        router:
          elb_names:
          - alb:WEB_TARGET_GROUPS_1
          - alb:WEB_TARGET_GROUPS_2
          additional_vm_extensions:
          - web-lb-security-group
        tcp_router:
          elb_names:
          - alb:TCP_TARGET_GROUP_1
          - alb:TCP_TARGET_GROUP_2
          additional_vm_extensions:
          - tcp-lb-security-group
      
  3. Replace values in the file as follows:

    • SSH_TARGET_GROUP_X: Enter your SSH target groups. You can find these values by running:

      terraform output ssh_target_groups
      
    • WEB_TARGET_GROUPS_X: Enter your web target groups. You can find these values by running:

      terraform output web_target_groups
      
    • TCP_TARGET_GROUP_X: Enter your TCP target groups. You can find these values by running:

      terraform output tcp_target_groups
      
  4. Apply the VM extension configuration using the om CLI. For more information about om, see the Om repository on GitHub. A sketch that verifies the applied configuration appears after these steps.

    om -k \
      -t "OPS-MANAGER-FQDN" \
      -u "USERNAME" \
      -p "PASSWORD" \
      configure-product \
      -c vm_extensions_config.yml
    

    Where:

    • OPS-MANAGER-FQDN is the URL at which you access your Ops Manager instance. This corresponds to ops_manager_dns in the Terraform output.
    • USERNAME is the user name you entered when configuring internal authentication.
    • PASSWORD is the password you entered when configuring internal authentication.

      Note: If you did not configure internal authentication, you must modify this command to use a client ID and secret instead of user name and password. For more information, see Authentication in the Om repository on GitHub.
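
After the configuration is applied, you can confirm that the target groups and VM extensions took effect. This is a minimal sketch, assuming the Terraform outputs named above and the same credentials as in step 4; staged-config prints the staged product configuration, including resource-config, for review:

    # Print the target group names recorded in the Terraform state
    terraform output -json ssh_target_groups
    terraform output -json web_target_groups
    terraform output -json tcp_target_groups

    # Print the staged PAS configuration and check the elb_names entries
    om -k -t "OPS-MANAGER-FQDN" -u "USERNAME" -p "PASSWORD" \
      staged-config --product-name cf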

Azure

Azure Paved Manually

To configure the Gorouter to use Azure load balancers:

  1. Select Resource Config.

  2. Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.

  3. Retrieve the name of your external Azure Load Balancer by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource. You can also list the names with the Azure CLI, as shown in the example after these steps.

    Note: The Azure portal sometimes displays the names of resources with incorrect capitalization. Always use the Azure CLI to retrieve the correctly capitalized name of a resource. To see the list of resources, run az network lb list.

  4. Locate the Router job in the Resource Config pane and enter the name of your external ALB in the field under Load Balancers.

  5. Retrieve the name of your Diego SSH load balancer by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource.

  6. Locate the Diego Brain job in the Resource Config pane and enter the name of the Diego SSH Load Balancer in the field under Load Balancers.

  7. Ensure that the Internet Connected checkboxes are disabled for all jobs.

  8. Scale the number of instances as appropriate for your deployment.

    Note: For a high-availability deployment of PCF on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Azure Reference Architecture.
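
As the note in step 3 recommends, use the Azure CLI to retrieve correctly capitalized resource names. This is a minimal example, assuming the az CLI is logged in to the subscription that contains your deployment; MY-RESOURCE-GROUP is a placeholder for your resource group name:

    # List all load balancers with their exact, correctly capitalized names
    az network lb list --query '[].name' --output table

    # Narrow the list to a single resource group
    az network lb list --resource-group MY-RESOURCE-GROUP --query '[].name' --output table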

Azure with Terraform

To configure the Gorouter to use Azure load balancers:

  1. Select Resource Config.

  2. Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.

  3. Enter the value of web_lb_name from your Terraform output in the Resource Config pane under Load Balancers for the Router job. To print the Terraform output values, see the sketch after these steps.

  4. Enter the value of diego_ssh_lb_name from your Terraform output in the Resource Config pane under Load Balancers for the Diego Brain job.

  5. Ensure that the Internet Connected checkboxes are disabled for all jobs.

  6. Scale the number of instances as appropriate for your deployment.

    Note: For a high-availability deployment of PCF on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Azure Reference Architecture.
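
If you no longer have the Terraform output on hand, you can reprint the two load balancer names. This is a minimal sketch, assuming you run it from the directory that holds your Terraform state:

    # Print the load balancer names captured in the Terraform state
    terraform output web_lb_name        # Router load balancer
    terraform output diego_ssh_lb_name  # Diego Brain SSH load balancer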

GCP

GCP Paved Manually

To configure the Gorouter to use GCP load balancers:

  1. Navigate to the GCP Console and select Load balancing.

    You should see the SSH load balancer, the HTTP(S) load balancer, the TCP WebSockets load balancer, and the TCP router that you created in Preparing to Deploy Ops Manager on GCP Manually.

  2. Record the names of your SSH load balancer and your TCP WebSockets load balancer, PCF-ssh-proxy and PCF-wss-logs. You can confirm these names with the gcloud CLI, as shown in the sketch after these steps.

  3. Click your HTTP(S) load balancer, PCF-global-pcf.

  4. Under Backend services, record the name of the back end service of the HTTP(S) load balancer, PCF-http-lb-backend.

  5. In the PAS tile, select Resource Config.

  6. Under the LOAD BALANCERS column of the Router row, enter a comma-separated list consisting of the name of your TCP WebSockets load balancer and the name of your HTTP(S) load balancer back end with the protocol prepended. For example, tcp:PCF-wss-logs,http:PCF-http-lb-backend.

    Note: Do not add a space between the entries in the LOAD BALANCERS field, or the configuration fails.

    Note: If you are using HAProxy in your deployment, enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high-availability configuration, scale up the HAProxy job to more than one instance.

  7. If you enabled TCP routing in the Networking pane in the PAS tile and set up the TCP load balancer in GCP, add the name of your TCP load balancer, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-tcp-router.

  8. Enter the name of your SSH load balancer, depending on which release you are using:

    • PAS: Under the LOAD BALANCERS column of the Diego Brain row, enter the name of your SSH load balancer prepended with tcp:. For example, tcp:PCF-ssh-proxy.
    • Small Footprint PAS: Under the LOAD BALANCERS column of the Control row, enter the name of your SSH load balancer prepended with tcp:.
  9. Verify that the Internet Connected checkbox for every job is disabled. When preparing your GCP environment, you provisioned a Network Address Translation (NAT) box to provide Internet connectivity to your VMs instead of assigning them public IP addresses, so no job needs this checkbox.

  10. Click Save.
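
To confirm the load balancer names outside the console, you can use the gcloud CLI. This is a minimal sketch, assuming gcloud is authenticated against the project that hosts your deployment:

    # List forwarding rules (load balancer front ends) and their targets
    gcloud compute forwarding-rules list --format='table(name,IPAddress,target)'

    # List backend services to confirm the HTTP(S) back end name
    gcloud compute backend-services list --format='table(name,protocol)'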

GCP with Terraform

To configure the Gorouter to use GCP load balancers:

  1. Select Resource Config.

  2. Under the LOAD BALANCERS column of the Router row, enter a comma-separated list consisting of the values of ws_router_pool and http_lb_backend_name from your Terraform output, prepended with tcp: and http: respectively. For example, tcp:pcf-cf-ws,http:pcf-httpslb. These are the names of the TCP WebSockets and HTTP(S) load balancers for your deployment. The sketch after these steps shows how to print these values.

    Note: Do not add a space between the entries in the LOAD BALANCERS field, or the configuration fails.

    Note: If you are using HAProxy in your deployment, enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high-availability configuration, scale up the HAProxy job to more than one instance.

  3. If you enabled TCP routing in the Networking pane of the PAS tile, add the value of tcp_router_pool from your Terraform output, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-cf-tcp.

  4. Enter the name of your SSH load balancer, depending on which release you are using:

    • PAS: Under the LOAD BALANCERS column of the Diego Brain row, enter the value of ssh_router_pool from your Terraform output, prepended with tcp:. For example, tcp:PCF-ssh-proxy.
    • Small Footprint PAS: Under the LOAD BALANCERS column of the Control row, enter the value of ssh_router_pool from your Terraform output, prepended with tcp:.
  5. Verify that the Internet Connected checkbox for every job is enabled. The Terraform templates do not provision a Network Address Translation (NAT) box for Internet connectivity to your VMs, so they are provided with ephemeral public IP addresses to allow the jobs to reach the Internet.

    Note: If you want to provision a Network Address Translation (NAT) box to provide Internet connectivity to your VMs instead of providing them with public IP addresses, disable the Internet Connected checkboxes. For more information about using NAT in GCP, see VPC network overview in the GCP documentation.

  6. Click Save.
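
If you no longer have the Terraform output on hand, you can reprint the backend names used above. This is a minimal sketch, assuming you run it from the directory that holds your Terraform state:

    # Print the load balancer backend names captured in the Terraform state
    terraform output ws_router_pool        # TCP WebSockets load balancer
    terraform output http_lb_backend_name  # HTTP(S) load balancer back end
    terraform output ssh_router_pool       # Diego SSH load balancer
    terraform output tcp_router_pool       # TCP router, if TCP routing is enabled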

OpenStack

Unless you are using your own load balancer, you must provide HAProxy with public IP addresses to use as floating IP addresses. This allows HAProxy to route traffic into the OpenStack private subnet.

To provide HAProxy with public IP addresses:

  1. Select Resource Config.

  2. Enter one or more IP addresses in the Floating IPs field of the HAProxy row. To allocate floating IP addresses with the OpenStack CLI, see the sketch after these steps.

  3. (Optional) If you enabled the TCP routing feature, enter one or more IP addresses in the Floating IPs column of the TCP Router row.

  4. Click Save.
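
If you need to allocate floating IP addresses first, a minimal sketch using the OpenStack CLI follows; EXTERNAL-NETWORK is a placeholder for the name of your external (public) network:

    # Allocate a floating IP address from the external network
    openstack floating ip create EXTERNAL-NETWORK

    # List allocated floating IP addresses to copy into the Floating IPs fields
    openstack floating ip list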