Configuring Load Balancing for PAS

This topic describes how to configure load balancing for Pivotal Application Service (PAS) by entering the names of your load balancers in the Resource Config pane of the PAS tile. This procedure varies by your IaaS and how you prepared (“paved”) it for installing PAS. See the section below that corresponds to your use case.

AWS

AWS with Terraform

If you are installing PAS on AWS using Terraform, do the following to set up load balancing:

  1. Create a file named vm_extensions_config.yml with the following content, depending on which release you are using:

    • Pivotal Application Service (PAS):
    ---
    product-name: cf
    resource-config:
      diego_brain:
        elb_names:
        - alb:SSH_TARGET_GROUP_1
        - alb:SSH_TARGET_GROUP_2
        additional_vm_extensions:
        - ssh-lb-security-groups
      router:
        elb_names:
        - alb:WEB_TARGET_GROUPS_1
        - alb:WEB_TARGET_GROUPS_2
        additional_vm_extensions:
        - web-lb-security-groups
      tcp_router:
        elb_names:
        - alb:TCP_TARGET_GROUP_1
        - alb:TCP_TARGET_GROUP_2
        additional_vm_extensions:
        - tcp-lb-security-groups
    
    • Small Footprint Runtime:
    ---
    product-name: cf
    resource-config:
      control:
        elb_names:
        - alb:SSH_TARGET_GROUP_1
        - alb:SSH_TARGET_GROUP_2
        additional_vm_extensions:
        - ssh-lb-security-groups
      router:
        elb_names:
        - alb:WEB_TARGET_GROUPS_1
        - alb:WEB_TARGET_GROUPS_2
        additional_vm_extensions:
        - web-lb-security-groups
      tcp_router:
        elb_names:
        - alb:TCP_TARGET_GROUP_1
        - alb:TCP_TARGET_GROUP_2
        additional_vm_extensions:
        - tcp-lb-security-groups
    
  2. Replace values in the file as follows (an example of retrieving all three values appears after this list):

    • SSH_TARGET_GROUP_X: Enter your SSH target groups. You can find these values by running terraform output ssh_target_groups.
    • WEB_TARGET_GROUPS_X: Enter your web target groups. You can find these values by running terraform output web_target_groups.
    • TCP_TARGET_GROUP_X: Enter your TCP target groups. You can find these values by running terraform output tcp_target_groups.
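
    For example, to retrieve all three sets of values (the printed values vary by deployment):

    terraform output ssh_target_groups
    terraform output web_target_groups
    terraform output tcp_target_groups

    Prefix each returned target group name with alb: when entering it in vm_extensions_config.yml.
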
  3. Apply the VM extension configuration using the om CLI. For more information about om, see pivotal-cf/om in GitHub.

    om -k \
      -t "YOUR-OPS-MANAGER-FQDN" \
      -u "USERNAME" \
      -p "PASSWORD" \
      configure-product \
      -c vm_extensions_config.yml
    

    Where:

    • YOUR-OPS-MANAGER-FQDN is the URL at which you access your Ops Manager instance. This corresponds to ops_manager_dns in the Terraform output.
    • USERNAME is the user name you entered when configuring internal authentication.
    • PASSWORD is the password you entered when configuring internal authentication.

      Note: If you did not configure internal authentication, you must modify this command to use a client ID and secret instead of user name and password. See Authentication in the om repository.
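
      For example, a sketch of the same command using om's client credential flags (CLIENT-ID and CLIENT-SECRET are placeholders for your UAA client credentials):

      om -k \
        -t "YOUR-OPS-MANAGER-FQDN" \
        --client-id "CLIENT-ID" \
        --client-secret "CLIENT-SECRET" \
        configure-product \
        -c vm_extensions_config.yml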

AWS Paved Manually

To configure the Gorouter or HAProxy to use AWS Elastic Load Balancers, do the following:

  1. Record the names of your ELBs. If you followed the procedures in the Installing Pivotal Platform on AWS Manually topic, you created the following (a command-line sketch for verifying these names follows this list):

    • pcf-ssh-elb: An SSH load balancer. This is a Classic Load Balancer.
    • pcf-tcp-elb: A TCP load balancer. This is a Classic Load Balancer.
    • pcf-web-elb: A web load balancer. This is an Application Load Balancer.
    • pcf-web-elb-target-group: A target group for the web load balancer.
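
    To verify these names from the command line, a sketch using the AWS CLI (assuming it is configured for the account and region of your deployment):

    # Classic Load Balancers, such as pcf-ssh-elb and pcf-tcp-elb:
    aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName'
    # Target groups for Application Load Balancers, such as pcf-web-elb-target-group:
    aws elbv2 describe-target-groups --query 'TargetGroups[].TargetGroupName'
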
  2. In the PAS tile, click Resource Config.

  3. Enter the name of your SSH load balancer.

    1. Reveal the LOAD BALANCERS field underneath the job that handles SSH requests. The job depends on the PAS release you are using:
      • PAS: Click the icon next to the Diego Brain job name to expand the row.
      • Small Footprint Runtime: Click the icon next to the Control job name to expand the row.
    2. In the LOAD BALANCERS field, enter the name of your SSH load balancer: pcf-ssh-elb.
  4. Click to expand the Router row, and fill in the LOAD BALANCERS field with a value determined by the type of load balancer you are using:

    • Application Load Balancer: Enter the name of the target group of your web load balancer, prefixed with alb:. For example, alb:pcf-web-elb-target-group. The prefix indicates to Ops Manager that you entered the name of a target group, and is required for AWS Application Load Balancers and Network Load Balancers.
    • Classic Load Balancer: Enter the name of the load balancer: pcf-web-elb.

      Note: If you are using HAProxy in your deployment, enter the names of the load balancers in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  5. If you enabled TCP routing, expand the TCP Router row and enter the name of your TCP load balancer: pcf-tcp-elb.

Azure

Azure with Terraform

To configure the Gorouter to use your Azure load balancers, do the following:

  1. Select Resource Config.

    Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.
  2. Click the icon next to the Router job name to expand the row, showing a LOAD BALANCERS field and INTERNET CONNECTED checkbox underneath.

  3. Enter the value of web_lb_name from your Terraform output in the Resource Config pane under LOAD BALANCERS for the Router job.

  4. Click to expand the row for the job that handles SSH requests. This depends on the PAS release you are using:

    • PAS: Click the icon next to Diego Brain.
    • Small Footprint Runtime: Click the icon next to Control.
  5. For the SSH load balancer, enter the value of diego_ssh_lb_name from your Terraform output.
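
    Both names come from your Terraform output; for example:

    terraform output web_lb_name
    terraform output diego_ssh_lb_name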

  6. Ensure that the INTERNET CONNECTED checkboxes are deselected for all jobs.

  7. Scale the number of instances as appropriate for your deployment.

    Note: For a high availability deployment of Pivotal Platform on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three (3) instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Reference Architecture for Pivotal Platform on Azure.

Azure Paved Manually

To configure the Gorouter to use your Azure load balancers, do the following:

  1. Select Resource Config.

  2. Ensure a Standard VM type is selected for the Router VM. The PAS deployment fails if you select a Basic VM type.

  3. Retrieve the name of your external Azure load balancer by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource.

    Note: The Azure portal sometimes displays the names of resources with incorrect capitalization. Always use the Azure CLI to retrieve the correctly capitalized name of a resource, for example with az network lb list as shown below.
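
    A sketch that lists load balancer names exactly as the API returns them (assuming the az CLI is logged in to the correct subscription):

    az network lb list --query "[].name" --output tsv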

  4. In the PAS tile > Resource Config pane, click the icon next to the Router job name to expand the row, showing a LOAD BALANCERS field and INTERNET CONNECTED checkbox underneath.

  5. Enter the name of your external Azure load balancer in the field under LOAD BALANCERS.

  6. Retrieve the name of your Diego SSH Load Balancer by navigating to the Azure portal, clicking All resources, and locating your Load balancer resource.

  7. In the PAS tile > Resource Config pane, expand the row for the job that handles SSH requests. This depends on the PAS release you are using:

    • PAS: Click the icon next to Diego Brain.
    • Small Footprint Runtime: Click the icon next to Control.
  8. Enter the name of the Diego SSH Load Balancer in the field under LOAD BALANCERS.

  9. Ensure that the INTERNET CONNECTED checkboxes are deselected for all jobs.

  10. Scale the number of instances as appropriate for your deployment.

    Note: For a high availability deployment of Pivotal Platform on Azure, Pivotal recommends scaling the number of each PAS job to a minimum of three (3) instances. Using three or more instances for each job creates a sufficient number of availability sets and fault domains for your deployment. For more information, see Reference Architecture for Pivotal Platform on Azure.

GCP

GCP with Terraform

To configure the Gorouter to use GCP load balancers, do the following:

  1. Click Resource Config.

  2. Click the icon next to the Router job name to expand the row, showing a LOAD BALANCERS field and INTERNET CONNECTED checkbox underneath.

  3. In the LOAD BALANCERS field, enter a comma-delimited list consisting of the values of ws_router_pool and http_lb_backend_name from your Terraform output, prepended with tcp: and http: respectively. For example, tcp:pcf-cf-ws,http:pcf-httpslb. These are the names of the TCP WebSockets and HTTP(S) load balancers for your deployment.


    Note: Do not add a space after the comma between entries in the LOAD BALANCERS field, or the deployment fails.

    Note: If you are using HAProxy in your deployment, then enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  4. If you have enabled TCP routing in the Networking pane, add the value of tcp_router_pool from your Terraform output, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-cf-tcp.

  5. Expand the row for the job that handles SSH requests. This depends on the PAS release you are using:

    • PAS: Click the icon next to Diego Brain.
    • Small Footprint Runtime: Click the icon next to Control.
  6. Under LOAD BALANCERS for the SSH job, enter the value of ssh_router_pool from your Terraform output, prepended with tcp:. For example, tcp:MY-PCF-ssh-proxy.
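
    The Terraform output values referenced in steps 3 through 6 can be retrieved as follows (the printed values vary by deployment):

    terraform output ws_router_pool
    terraform output http_lb_backend_name
    terraform output tcp_router_pool
    terraform output ssh_router_pool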

  7. Verify that the Internet Connected checkbox for every job is checked. The Terraform templates do not provision a Network Address Translation (NAT) box for Internet connectivity to your VMs, so the jobs are instead assigned ephemeral public IP addresses that allow them to reach the Internet.

    Note: If you want to provision a Network Address Translation (NAT) box to provide Internet connectivity to your VMs instead of providing them with public IP addresses, deselect the Internet Connected checkboxes. For more information about using NAT in GCP, see the GCP documentation.

  8. Click Save.

GCP Paved Manually

To configure the Gorouter to use GCP load balancers, do the following:

  1. Navigate to the GCP Console and click Load balancing.


    You should see the SSH load balancer, the HTTP(S) load balancer, the TCP WebSockets load balancer, and the TCP router that you created in the Preparing to Deploy Pivotal Platform on GCP topic.

  2. Record the names of your TCP WebSockets load balancer and your SSH load balancer, MY-PCF-wss-logs and MY-PCF-ssh-proxy.

  3. Click your HTTP(S) load balancer, MY-PCF-global-pcf.

  4. Under Backend services, record the name of the backend service of the HTTP(S) load balancer, MY-PCF-http-lb-backend.
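
    Alternatively, a sketch for finding these names with the gcloud CLI (assuming it is authenticated against your project). The SSH and TCP WebSockets load balancers are backed by target pools, and the HTTP(S) load balancer by backend services:

    gcloud compute target-pools list
    gcloud compute backend-services list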

  5. In the PAS tile, click Resource Config.

  6. Click the icon next to the Router job name to expand the row, showing a LOAD BALANCERS field and INTERNET CONNECTED checkbox underneath.

  7. In the LOAD BALANCERS field, enter a comma-delimited list consisting of the name of your TCP WebSockets load balancer and the name of your HTTP(S) load balancer backend, each prepended with its protocol. For example, tcp:MY-PCF-wss-logs,http:MY-PCF-http-lb-backend.


    Note: Do not add a space after the comma between entries in the LOAD BALANCERS field, or the deployment fails.

    Note: If you are using HAProxy in your deployment, then enter the above load balancer values in the LOAD BALANCERS field of the HAProxy row instead of the Router row. For a high availability configuration, scale up the HAProxy job to more than one instance.

  8. If you have enabled TCP routing in the Networking pane and set up the TCP Load Balancer in GCP, add the name of your TCP load balancer, prepended with tcp:, to the LOAD BALANCERS column of the TCP Router row. For example, tcp:pcf-tcp-router.

  9. Expand the row for the job that handles SSH requests. This depends on the PAS release you are using:

    • PAS: Click the icon next to Diego Brain.
    • Small Footprint Runtime: Click the icon next to Control.
  10. Under LOAD BALANCERS for the SSH job, enter the name of your SSH load balancer, prepended with tcp:. For example, tcp:MY-PCF-ssh-proxy.

  11. Verify that the Internet Connected checkbox for every job is unchecked. When preparing your GCP environment, you provisioned a Network Address Translation (NAT) box to provide Internet connectivity to your VMs, so they do not need public IP addresses to reach the Internet.

    Note: If you did not provision a NAT box and instead want to provide your VMs with public IP addresses, select the Internet Connected checkboxes. For more information about using NAT in GCP, see the GCP documentation.

  12. Click Save.

OpenStack

Unless you are using your own load balancer, you must provide HAProxy with public IP addresses to use as floating IP addresses. This lets HAProxy route traffic into the OpenStack private subnet.
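
A sketch for allocating a floating IP with the OpenStack CLI (EXTERNAL-NETWORK is a placeholder for the name of your external network):

    openstack floating ip create EXTERNAL-NETWORK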

  1. Click Resource Config.


  2. Click the icon next to the HAProxy job name to expand the row, showing the FLOATING IPS field underneath.

  3. Enter one or more IP addresses under FLOATING IPS.

  4. (Optional) If you have enabled TCP Routing, expand the TCP Router row and enter the IP addresses of your TCP routers under FLOATING IPS.

  5. Click Save.

vSphere with NSX-T or NSX-V

The PAS tile > Resource Config pane lets you configure vSphere NSX-T or NSX-V load balancing and security group membership for PAS jobs:

  1. Click Resource Config.

    [Screenshot of the Resource Config pane: a grid with columns JOB, INSTANCES, VM TYPE, and PERSISTENT DISK TYPE. The Diego Brain and Router rows are shown, with the Router row expanded to reveal NSX-T CONFIGURATION fields (NS Groups, VIF Type, Logical Load Balancer) and NSX-V CONFIGURATION fields (Security Groups, Edge Load Balancers).]

  2. For each PAS job that you want to load-balance, click the down-arrow to reveal its configuration fields. The external-facing job instance groups in PAS are:

    • Diego Brain in PAS, or Control in Small Footprint PAS, for SSH traffic
    • Router for web traffic
    • TCP Router for TCP traffic
  3. Enter a load balancer configuration for each job:

    • NSX-T:
      1. In the vSphere NSX Manager > Advanced Networks & Security > Groups pane, define an NSGroup to include VMs running each load-balanced job, such as pas-ssh-group, pas-tcp-group, and pas-web-group.
      2. In the NS Groups field in Resource Config, enter a comma-separated list of the NSGroups you defined.
      3. In the Logical Load Balancer field, enter a JSON-formatted structure defining a list of server_pools as pairs of name and port definitions (see the sketch after this list).
    • NSX-V:
      1. In the Security Groups field, enter a comma-separated list of Security Groups defined in NSX-V to include each load-balanced job, such as pas-ssh-group, pas-tcp-group, and pas-web-group.
      2. In the Edge Load Balancers field, enter a JSON-formatted structure listing edge load balancers, each defined by edge_name, pool_name, security_group, port, and monitor_port definitions (see the sketch after this list).
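
    For illustration, hedged sketches of these JSON structures, using the field names described above with hypothetical pool and edge names:

    NSX-T Logical Load Balancer:

    {
      "server_pools": [
        { "name": "pas-web-pool", "port": 80 },
        { "name": "pas-ssh-pool", "port": 2222 }
      ]
    }

    NSX-V Edge Load Balancers:

    [
      {
        "edge_name": "pas-edge",
        "pool_name": "pas-web-pool",
        "security_group": "pas-web-group",
        "port": 80,
        "monitor_port": 8080
      }
    ]
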
  4. Click Save.